http://mathoverflow.net/questions/25484/moduli-of-extensions

# Moduli of Extensions
Given two modules $M$ and $N$ there is a nice scheme parametrizing extensions
$0 \rightarrow M \rightarrow E \rightarrow N \rightarrow 0$
namely $Ext^1(N,M)$ or, leaving out the trivial extension, the projective space $P(Ext^1(N,M))$.
There are (at least) two natural generalizations:
1. n-step extensions
$M \rightarrow E_1 \rightarrow E_2 \rightarrow \dots \rightarrow E_n \rightarrow N$
between $N$ and $M$.
2. Filtered modules: Parametrize modules $E$ which admit a filtration
$0 \subset F_1 \subset F_2 \subset \dots \subset F_n=E$
with fixed graded objects $E_i=F_i/F_{i-1}$.
I suppose in the first case one can use the group $\mathrm{Ext}^n(N,M)$, although I have never seen a construction of a universal family. Is there a good reference?
In the second case, I do not have a clue. So my main question is:
Is there a nice moduli space of filtered objects?
As you probably know $\mathrm{Ext}^n$ does not parameterise $n$-fold extensions directly but rather certain equivalence classes of them. The equivalence relation is the smallest with the property that $M\to E_1\to E_2\to\cdots \to N$ and $M\to F_1\to F_2\to\cdots \to N$ are equivalent when there's a chain map between them which is the identity on $M$ and $N$. When $n\ge2$ one can add on an arbitrary module to $E_1$ and $E_2$ and get an equivalent $n$-step extension, so the equivalence classes are proper classes :-) – Robin Chapman May 21 '10 at 13:21
I. In the first case, you can use the fact that if $$\cdots\to P_n\to P_{n-1}\to\cdots\to P_1\to P_0$$ is a projective resolution of $N$, then every Yoneda $n$-extension of $M$ by $N$ can be represented by an extension of the form $$0\to M\to E\to P_{n-2}\to P_{n-3}\to\cdots \to P_1\to P_0\to N\to 0$$ where $E$ is a module which is constructed as a pushout of a diagram of the form $$M\leftarrow P_n\rightarrow P_{n-1}$$ This gets you a sensible set of representatives of $n$-extensions (the isomorphism classes of $n$-extensions, as opposed to equivalence classes, do not form a set, so one needs to do something like this) which you can probably make into a scheme. You next want to quotient by equivalence---I do not see immediately how that'll work.
II. For the second case, and if you are considering finite-dimensional modules over a finitely generated algebra $A$, you can construct an analogue of the representation variety $\mathrm{Rep}_d(A)$ for filtered modules with specific subquotients. For example, suppose you want a variety of modules $M$ of total dimension $d$ with a filtration $0=F_0\subseteq F_1\subseteq F_2\subseteq F_3=M$ such that $F_1/F_0\cong N_1$, $F_2/F_1\cong N_2$ and $F_3/F_2\cong N_3$. Up to isomorphism, you can suppose that $M=k^d$, and that the $F_i$ form a standard partial flag (so that $F_i$ is the subspace of $k^d$ of vectors whose last $d-\dim F_i$ coordinates vanish).
The action of $A$ on $M$ is then completely given by $n$ $d$-by-$d$ matrices, where $n$ is the size of a generating set of $A$, and the fact that $M$ is an actual module, that the chosen filtration is a module filtration, and that the subquotients are what they should be can be expressed in terms of polynomial equations involving the coefficients of those $n$ matrices.
This determines a scheme, whose points are $A$-module structures on $k^d$ which satisfy the desired conditions, and which contains representatives of all isoclasses of modules satisfying those conditions. Of course, the points of this scheme are not in correspondence with isoclasses: to do that, you need to pass to the quotient by the appropriate change-of-basis group (but that will kill the scheme structure, I guess...)
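The flag condition above can be made concrete with a small sketch (Python with NumPy; this assumes a single generating matrix over the reals and omits the extra equations that pin down the subquotient module structures): a matrix preserves the standard partial flag exactly when it is block upper triangular for the corresponding block sizes.

```python
import numpy as np

def preserves_flag(T, dims):
    """True iff T maps each step of the standard partial flag into itself.

    The flag's i-th step is spanned by the first sum(dims[:i]) coordinates,
    so preservation is equivalent to T being block upper triangular."""
    cuts = np.cumsum(dims)
    for c in cuts[:-1]:
        # the lower-left block T[c:, :c] must vanish
        if not np.allclose(T[c:, :c], 0):
            return False
    return True

# Flag 0 ⊂ F1 ⊂ F2 ⊂ F3 = k^4 with subquotient dimensions (1, 2, 1).
dims = (1, 2, 1)
print(preserves_flag(np.triu(np.ones((4, 4))), dims))  # True
print(preserves_flag(np.ones((4, 4)), dims))           # False
```

The vanishing of each lower-left block is a set of polynomial (here, linear) equations in the matrix entries; the remaining conditions (that each subquotient carries the given module structure $N_i$) add further polynomial equations, as described above.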
Thanks for your answer. Your hint for the first point is really helpful. Ad II) I have seen people (e.g. Reineke arXiv:0802.2147) doing the moduli of quiver representations using a similar construction and GIT quotients. So I am confident that one can produce a scheme in the way you described. However, the application I have in mind goes as follows. Suppose the $E_i$ are sheaves on an algebraic variety and the $E_i$ are stable with respect to some stability condition. How many sheaves $E$ have a Harder-Narasimhan filtration with semi-stable factors $E_i$? In this case Artin algebras are only of limited help. – Heinrich Hartmann May 22 '10 at 9:42
I can almost guarantee that a scheme structure does NOT exist on such a quotient. In fact, unless I misunderstood something, this is more or less equivalent to the problem of describing a moduli space of $n\times n$-matrices up to conjugacy. And this can never exist as a moduli space, even in a coarse sense. See the paper by Mumford and Suominen "An introduction to the theory of moduli", 1970. – Daniel Larsson May 22 '10 at 14:18
@Daniel, that really depends on the starting data. For example, if the algebra $A$ is the group algebra of a finite group (over a field of characteristic zero) you do get a (zero-dimensional) scheme. – Mariano Suárez-Alvarez May 24 '10 at 15:50
Hi Heinrich,
in the situation you have in mind (sheaves on an algebraic variety), such spaces are not too difficult to construct as Artin stacks. If you omit the condition that the i-th filtration quotient is isomorphic to a given one, then such a universal Artin stack is e.g. constructed in Bridgeland's introduction to Hall-algebras (arXiv:1002.4372, he calls them $\mathcal M^{(n)}$), but of course also in earlier articles by Joyce. Basically it follows from the existence of relative quot schemes.
These universal extension stacks have evaluation morphisms to $\mathcal M$, the stack of all sheaves, sending a filtration to its $i$-th quotient, so you can take a base change via the map $\operatorname{Spec} k \to \mathcal M \times \dots \times \mathcal M$ given by your set of objects $E_i$, and the fiber product will be the Artin stack you are looking for.
If you want a scheme instead of an Artin stack - then I would ask back "why?" :) Nevertheless, it would be useful to understand this fiber product better when $n > 2$.
Hi Heinrich!
I don't know. But you could take a look at Carlos Simpson's general definition of "filtered object" on pages 24/25 of his paper The Hodge filtration on non-abelian cohomology: Roughly, a filtered X is a $\mathbb{G}_m$-equivariant map from an X to $\mathbb{A}^1$, so you could get your moduli space as a mapping space, or as an object of the "comma site" of maps of objects of your site into $\mathbb{A}^1$...
Dear Peter, thanks for your answer. Unfortunately I do not see how to apply this construction to the question. It seems to me that this gives an answer to the "inverse" problem of parametrizing filtrations of a given object. – Heinrich Hartmann May 21 '10 at 14:50
Well, I had thought that maybe you could look at the fibered category of all modules over the comma site/A^1. Since the fiber over 0 gives the graded object, you could maybe form a pullback where you fix the desired quotients of the filtration... But all of this is wild speculation from a layman and may be nonsense! If you like, I'll delete this answer to attract others! – Peter Arndt May 21 '10 at 15:41
At least it's abstract nonsense :) – Lars May 21 '10 at 18:19
http://www.stata.com/statalist/archive/2009-11/msg01077.html
# st: re: management of missing
From: Christopher Baum
To: statalist@hsphsun2.harvard.edu
Subject: st: re: management of missing
Date: Thu, 19 Nov 2009 12:38:40 -0500
<>
Rodrigo said
My question to the list is whether Stata allows you to avoid dropping observations under the listwise-deletion criterion. I want to know because this might be a possible way to increase the sample on which the estimations are based.
No, for obvious reasons. Consider linear regression, where you calculate X'X. Consider an X matrix, N x k, with some random elements of each column missing. What does it mean to calculate sums of squares and cross products of such a matrix? Calculating those sums over only the feasible (pairwise) terms will not give you anything sensible except in the limit, and even asymptotically, if the proportion of missing data is fixed, driving N -> \infty probably won't help.
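Kit's point can be illustrated with a quick sketch (in Python/NumPy rather than Stata; the sample size and the 30% missingness rate are made up for the illustration). With listwise deletion, X'X is the Gram matrix of an actual, fully observed subsample; with "pairwise" sums, each cell of the matrix is computed over a different subsample, so the result is not the cross-product matrix of any real sample at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
X[rng.random(size=(n, k)) < 0.3] = np.nan   # ~30% of cells missing at random

# Listwise deletion: keep only fully observed rows; X'X is then a genuine
# Gram matrix of an actual (smaller) sample.
complete = ~np.isnan(X).any(axis=1)
XtX_listwise = X[complete].T @ X[complete]

# "Pairwise" cross products: entry (i, j) is summed over whichever rows
# happen to observe both columns i and j (nan -> 0 zeroes out those terms).
obs = ~np.isnan(X)
counts = obs.astype(int).T @ obs.astype(int)  # rows contributing to each cell
Xz = np.nan_to_num(X)
XtX_pairwise = Xz.T @ Xz

# Each cell of the pairwise matrix is a sum over a different subsample, so
# the entries are on inconsistent scales and the matrix need not correspond
# to any sample's sums of squares and cross products.
print(complete.sum(), "complete rows out of", n)
print(counts)
```

The listwise set of rows is contained in every pairwise set, which is why the cell-by-cell sample sizes in `counts` are all at least as large as the listwise count, yet generally unequal to one another.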
It sounds like you might be a good candidate for Stata's new manual on Multiple Imputation.
Kit
Kit Baum | Boston College Economics and DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html
http://mathhelpforum.com/calculus/43699-given-f-x-3x-2-4-a.html

# Math Help - Given f(x)=3x^2-4
1. ## Given f(x)=3x^2-4
Given f(x)=3x^2-4, find F(x) "upside down A "FOR ALL" " x epsilon [x,3] if F(0) = 0
a. 17
b. 0
c. 19
d. 4.32
2. Originally Posted by shanniepooh2
Given $f(x)=3x^2-4$, find $F(x)$ $\forall{x}\in[x,3]$ if $F(0)=0$
a. 17
b. 0
c. 19
d. 4.32
This makes no sense? Is this the full question?
3. Originally Posted by Mathstud28
This makes no sense? Is this the full question?
f(x)=3x^2-4, "upside down A "FOR ALL" " x epsilon [2,3] if F(0) = 0
I accidentally placed an x instead of a 2, and the "for all" and the x are separate
4. Please see the previous reply... I accidentally placed an x instead of a 2 as an integer.
5. Originally Posted by shanniepooh2
f(x)=3x^2-4, "upside down A "FOR ALL" " x epsilon [2,3] if F(0) = 0
I accidentally placed an x instead of a 2, and the "for all" and the x are separate
Just compute $\int_2^{3}\bigg[3x^2-4\bigg]~dx$
6. I was reading Mathstud28's method and decided to try it, but I got 15 as my answer, which is not a choice.
Did I do something wrong?
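A quick symbolic check (Python with SymPy) confirms the arithmetic: with F(0) = 0 the antiderivative is F(x) = x^3 - 4x, and the definite integral over [2, 3] is indeed 15, so the computation above is correct and none of the listed answer choices matches.

```python
import sympy as sp

x = sp.symbols('x')
f = 3 * x**2 - 4

# Antiderivative with F(0) = 0: F(x) = x**3 - 4*x (the constant is 0).
F = sp.integrate(f, x)
assert F.subs(x, 0) == 0

# The definite integral over [2, 3]:
value = sp.integrate(f, (x, 2, 3))
print(value)  # 15
```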
https://2012books.lardbucket.org/books/beginning-economic-analysis/s15-01-edgeworth-box.html | This is “Edgeworth Box”, section 14.1 from the book Beginning Economic Analysis (v. 1.0). For details on it (including licensing), click here.
## 14.1 Edgeworth Box
### Learning Objectives
1. How are several prices simultaneously determined?
2. What are the efficient allocations?
3. Does a price system equilibrium yield efficient prices?
The Edgeworth box considers a two-person, two-good “exchange economy.” That is, two people have utility functions of two goods and endowments (initial allocations) of the two goods. The Edgeworth box is a graphical representation of the exchange problem facing these people and also permits a straightforward solution to their exchange problem. (Francis Edgeworth (1845–1926) introduced a variety of mathematical tools, including calculus, for considering economic and political issues, and was certainly among the first to use advanced mathematics for studying ethical problems.)
Figure 14.1 The Edgeworth box
The Edgeworth box is represented in Figure 14.1 "The Edgeworth box". Person 1 is “located” in the lower left (southwest) corner, and Person 2 in the upper right (northeast) corner. The X good is given on the horizontal axis, the Y good on the vertical. The distance between them is the total amount of the good that they have between them. A point in the box gives the allocation of the good—the distance to the lower left to Person 1, the remainder to Person 2. Thus, for the point illustrated, Person 1 obtains (x1, y1), and Person 2 obtains (x2, y2). The total amount of each good available to the two people will be fixed.
What points are efficient? The economic notion of efficiency is that an allocation is efficient if it is impossible to make one person better off without harming the other person; that is, the only way to improve 1’s utility is to harm 2, and vice versa. Otherwise, if the consumption is inefficient, there is a rearrangement that makes both parties better off, and the parties should prefer such a point. Now, there is no sense of fairness embedded in the notion, and there is an efficient point in which one person gets everything and the other gets nothing. That might be very unfair, but it could still be the case that improving 2 must necessarily harm 1. The allocation is efficient if there is no waste or slack in the system, even if it is wildly unfair. To distinguish this economic notion, it is sometimes called Pareto efficiency. (Vilfredo Pareto (1848–1923) was a pioneer in replacing concepts of utility with abstract preferences. His work was later adopted by the economics profession and remains the modern approach.)
We can find the Pareto-efficient points by fixing Person 1’s utility and then asking what point, on the indifference isoquant of Person 1, maximizes Person 2’s utility. At that point, any increase in Person 2’s utility must come at the expense of Person 1, and vice versa; that is, the point is Pareto efficient. An example is illustrated in Figure 14.2 "An efficient point".
Figure 14.2 An efficient point
In Figure 14.2 "An efficient point", the isoquant of Person 1 is drawn with a dark, thick line. This utility level is fixed. It acts like the “budget constraint” for Person 2. Note that Person 2’s isoquants face the opposite way because a movement southwest is good for 2, since it gives him more of both goods. Four isoquants are graphed for Person 2, and the highest feasible isoquant, which leaves Person 1 getting the fixed utility, has the Pareto-efficient point illustrated with a large dot. Such points occur at tangencies of the isoquants.
This process of identifying the points that are Pareto efficient can be carried out for every possible utility level for Person 1. What results is the set of Pareto-efficient points, and this set is also known as the contract curve. This is illustrated with the thick line in Figure 14.3 "The contract curve". Every point on this curve maximizes one person’s utility given the other’s utility, and they are characterized by the tangencies in the isoquants.
The contract curve need not have a simple shape, as Figure 14.3 "The contract curve" illustrates. The main properties are that it is increasing and ranges from Person 1 consuming zero of both goods to Person 2 consuming zero of both goods.
Figure 14.3 The contract curve
Example: Suppose that both people have Cobb-Douglas utility. Let the total endowment of each good be one, so that $x_2 = 1 - x_1$. Then Person 1’s utility can be written as $u_1 = x^\alpha y^{1-\alpha}$, and Person 2’s utility is $u_2 = (1-x)^\beta (1-y)^{1-\beta}$. Then a point is Pareto efficient if
$$\frac{\alpha y}{(1-\alpha)x} = \frac{\partial u_1/\partial x}{\partial u_1/\partial y} = \frac{\partial u_2/\partial x}{\partial u_2/\partial y} = \frac{\beta(1-y)}{(1-\beta)(1-x)}.$$
Thus, solving for y, a point is on the contract curve if $$y = \frac{(1-\alpha)\beta x}{(1-\beta)\alpha + (\beta-\alpha)x} = \frac{x}{\dfrac{(1-\beta)\alpha}{(1-\alpha)\beta} + \dfrac{\beta-\alpha}{(1-\alpha)\beta}\,x} = \frac{x}{x + \dfrac{(1-\beta)\alpha}{(1-\alpha)\beta}(1-x)}.$$
Thus, the contract curve for the Cobb-Douglas case depends on a single parameter $\frac{(1-\beta)\alpha}{(1-\alpha)\beta}$. It is graphed for a variety of examples (α and β) in Figure 14.4 "Contract curves with Cobb-Douglas utility".
Figure 14.4 Contract curves with Cobb-Douglas utility
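The tangency condition can also be verified numerically. The following sketch (Python with NumPy; the particular values α = 0.3 and β = 0.7 are arbitrary) evaluates the contract-curve formula above at several allocations and checks that the two marginal rates of substitution agree there.

```python
import numpy as np

def contract_curve(x, alpha, beta):
    """y-coordinate of the contract curve for Cobb-Douglas utilities
    u1 = x**alpha * y**(1-alpha), u2 = (1-x)**beta * (1-y)**(1-beta),
    with one unit of each good in total."""
    return (1 - alpha) * beta * x / ((1 - beta) * alpha + (beta - alpha) * x)

def mrs1(x, y, alpha):
    """Person 1's marginal rate of substitution, alpha*y / ((1-alpha)*x)."""
    return alpha * y / ((1 - alpha) * x)

def mrs2(x, y, beta):
    """Person 2's MRS, beta*(1-y) / ((1-beta)*(1-x))."""
    return beta * (1 - y) / ((1 - beta) * (1 - x))

alpha, beta = 0.3, 0.7
for x in np.linspace(0.1, 0.9, 5):
    y = contract_curve(x, alpha, beta)
    # On the contract curve the two marginal rates of substitution agree.
    assert np.isclose(mrs1(x, y, alpha), mrs2(x, y, beta))
print("tangency condition holds along the curve")
```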
### Key Takeaways
• The Edgeworth box considers a two-person, two-good “exchange economy.” The Edgeworth box is a graphical representation of the exchange problem facing these people and also permits a straightforward solution to their exchange problem. A point in the Edgeworth box is the consumption of one individual, with the balance of the endowment going to the other.
• Pareto efficiency is an allocation in which making one person better off requires making someone else worse off—there are no gains from trade or reallocation.
• In the Edgeworth box, the Pareto-efficient points arise as tangencies between isoquants of the individuals. The set of such points is called the contract curve. The contract curve is always increasing.
### Exercises
1. If two individuals have the same utility function concerning goods, is the contract curve the diagonal line? Why or why not?
2. For two individuals with Cobb-Douglas preferences, when is the contract curve the diagonal line?
https://www.physicsforums.com/threads/god-exists.65100/page-2

# God exists?
cronxeh
Gold Member
Your argument on the contrary misses my point entirely:
Everything we taught is correlated, and belongs to a set, lets call it "B"
B = { reality }
Scientific method is when your postulates are based on data and have mathematical meaning.
god, devil, hell, heaven, etcetera - let's say they all belong to set A - aka "life after death" - according to religion. I think you got the definition of the word 'god' completely wrong - you're assigning everything that is unknown to that word; I'm saying god is that specific thing religion refers to when it claims that god created set B.
The fact is that there is no such (f: B -> A) so card(B) = card(A)
I ask this: If religion requires faith, that is believing in parts of it without physical proof, then why are people trying to justify it logically?
If it was completely provable, that would be using knowledge, not faith. So to those of us who are religious/spiritual/anything else that falls on faith, do you try to rationalize it to explain your view to others or to tell yourself you're right?
I am very aware, that we all take many things on faith, like we don't know if we really exist in this reality; we could all be a some kind of "Matrix" world. Even so, many things in this world are "provable" to the highest extent we can prove them. Physics, math, and other rational subjects do not call on faith as a proof.
I have nothing against faith, I just wonder why people try to mix the categories of faith and logical/rational justification. To put these things together seems quite like a paradox.
-----------------
I also agree with cronxeh. We are technically all born as atheists. I don't see how you could argue we are born with a belief in God.
DaveC426913
Gold Member
Your argument on the contrary misses my point entirely:
Everything we taught is correlated, and belongs to a set, lets call it "B"
B = { reality }
And what I'm saying is that most of what one "knows" is taken on faith, at someone else's word, because we don't have an infinite amount of time to do first-hand experiments ourselves. We have no problem taking a vast amount of our world on faith.
The fact is that there is no such (f: B -> A) so card(B) = card(A)
Sorry, I am not familiar with formal logical notation, so you've lost me here.
god, devil, hell, heaven, etcetera - Lets say they all belong to set A - aka "life after death" - according to religion.)
Why take such an antiquated view of God? Why not merely take a more general view of the creator that created the universe? Heaven, hell, and the Devil are antiquated notions; you won't find leaders of the church talking about them literally.
If, for the purpose of this discussion, you are calling forth a view of God that includes Heaven, Hell and the Devil, then I - as well as most educated religious followers - will agree with you that it has slipped into legend.
Point of order:
The temptation to use "you" as opposed to "one" is getting stronger as the syntax becomes more awkward! (i.e. "You take a lot on faith")
But I wish us to remain in advocate positions - meaning that I target the arguments and never target you as a person. IOW, this is not personal, and I don't intend to let it become so.
I just don't know how long I can keep saying 'one'!
AKG
Homework Helper
In all fairness, a lot of the discussion that goes in the general discussion forum, or elsewhere in "casual" parts of this website are pointless crap as far as I'm concerned, but if the discussion interests you, then go ahead and discuss it. However, if it doesn't interest you, or it seems pointless to you, what in the world is the point of going into a thread and saying it? I wouldn't bother going into those threads in GD and complaining, "Oh god! This topic is so pointless, why are you guys posting here?!" Evo, I couldn't care less how cute your dog is, but that's the reason why I haven't looked at your thread and not gone into it and posted how pointless the topic was.
On topic, indeed the argument is valid for any P, but the premise G -> []G (or some variant) is not true for all G. God, being defined as the greatest conceivable/possible being, is said to thus have the greatest possible existence, namely necessary existence. Because God is said to have necessary existence, then if he exists, he exists necessarily, hence G -> []G. Invisible hats aren't greatest possible beings, nor do they have necessary existence for other reasons, so the premise is not true for invisible hats. Although the argument is valid for any G, it is not sound for all G, one reason being that G -> []G is not true for all G. For that reason, the MOA has more credibility as a proof for god than it does for an invisible hat.
But if G is so defined such that G -> []G is true, then the remaining premise, <>G, is the only possible point of contention. I see no justification for <>G, so although the MOA does have some credibility as a proof for God, it doesn't have enough to be convincing.
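The structure of the argument can be illustrated with a toy model check (a Python sketch over finite S5 Kripke models, where every world sees every world, so []G means "G at all worlds" and <>G means "G at some world"; the three-world frame is an arbitrary choice). Enumerating all valuations shows that the only ones satisfying both premises make G true everywhere, which is exactly why all the weight rests on the premises themselves.

```python
from itertools import product

# Toy S5 semantics on a finite set of worlds: accessibility is total,
# so []G holds iff G holds at every world, <>G iff G holds at some world.

def box(val):
    return all(val.values())

def diamond(val):
    return any(val.values())

worlds = range(3)
models = []
for bits in product([False, True], repeat=len(worlds)):
    val = dict(zip(worlds, bits))
    premise1 = all((not val[w]) or box(val) for w in worlds)  # G -> []G everywhere
    premise2 = diamond(val)                                   # <>G
    if premise1 and premise2:
        models.append(val)

# Only the valuation with G true at every world survives both premises,
# so G (indeed []G) follows at the actual world in every such model.
print(models)
```

Note what the check does and does not show: the inference is valid in S5, but the model enumeration also makes vivid that `premise1` rules out every model where G is contingent, so accepting both premises is already tantamount to accepting necessary existence.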
Evo
Mentor
AKG said:
In all fairness, a lot of the discussion that goes in the general discussion forum, or elsewhere in "casual" parts of this website are pointless crap as far as I'm concerned, but if the discussion interests you, then go ahead and discuss it.
GD is just for fun, the philosophy forum isn't.
AKG said:
However, if it doesn't interest you, or it seems pointless to you, what in the world is the point of going into a thread and saying it?
Because Owen Holden asked what people think about the example he posted about trying to justify the existence of "god". I told him I think it's pointless, and it is. No one is going to prove or disprove it.
AKG
Homework Helper
Evo said:
GD is just for fun, the philosophy forum isn't.
Because Owen Holden asked what people think about the example he posted about trying to justify the existence of "god". I told him I think it's pointless, and it is. No one is going to prove or disprove it.
If you're not posting in this thread "just for fun" or for no good reason, i.e. if you think your posts are really relevant, then why don't you bother to provide some sort of argument for your position? You might notice that "No one is going to prove or disprove it," is a rather strong epistemic claim, care to substantiate it? You also claimed that what Owen posted did not have enough substance to be discussed. Assuming you understand the argument he presented, why does it lack substance? You made some flippant comment suggesting that fairies, unicorns, and God are all the same. Care to justify that? As I suggested, God (in the context of this argument) refers to the greatest possible being, therefore, it is a necessary being. Unicorns aren't necessary beings, so clearly, they aren't the same.
As you observantly pointed out, this is the philosophy forum, not GD. But philosophy doesn't consist of throwing out random comments about fairies, exclamations of how pointless the topic is, and strong claims with no justification, it consists of making claims (strong or otherwise) and justifying them. If this is a pointless activity for you, don't do it, and don't waste space in the thread. Otherwise, please justify the claims you do make, and refrain from making other irrelevant comments.
Integral
Staff Emeritus
Gold Member
My proof that god does not exist:
God is perfect,
Nothing that exists is perfect. (This could be seen as a result of HUP)
Therefore God Does not exist.
It certainly is up to the believers to prove the existence of their concept of god. First of all, there are as many different concepts of god as there are religions. Which one are you talking about? The first step is to define what you mean by god. Without definitions, all that follows is nonsense.
I have my concept of god, I am happy with that concept, and it may well be meaningless to anyone else, so I keep my concepts to myself unless specifically asked to share them. I only wish that others would have this same respect for personal beliefs.
The OP assumed his result the instant he writes $\exists p$
Evo
Mentor
AKG said:
As I suggested, God (in the context of this argument) refers to the greatest possible being, therefore, it is a necessary being. Unicorns aren't necessary beings, so clearly, they aren't the same.
You say "Unicorns aren't necessary beings, so clearly, they aren't the same" Ok, prove it. Prove Unicorns aren't the most necessary beings. You can't and it's silly of me to ask you to do so.
Since you disagree so vehemently with me that discussing that formula as a proof of a god will end up being pointless, then why haven't you said what new proof or conclusions - what "point" - there would be to discussing it? All you have done is attack me for admitting I see no merit in it. I already said why I think it's pointless in a previous post; if you disagree, then you need to say why. I do not see the formula as a basis for a meaningful philosophical discussion.
I will go further and say that I think any discussion of if there is one god or one hundred or none or whose god is better is pointless. Hey, if you think there is a point, you're free to post your opinion.
That's not to say a discussion of how religion affects an individual, or society, or whether a belief in a deity is good or bad or even necessary falls into that category; those discussions have merit and can bring about understanding.
cronxeh
Gold Member
There are 2 definitions for 'god' - one is based in real world and another in imaginary world.
god that created the universe is not the same god that lives in heaven. If you adopt _this_ idea - then I'm agnostic, but if your definition of god is the one of 'god' that lives in heaven and in life after death, that created heaven and the earth and hell and evil and all that stuff - then I'm definitely an atheist.
I think people in general need the two definitions to be in one 'god' - but this is impossible. There is no way to have created both the Universe and life after death 'world' - I can prove this to you with the most basic math
arildno
Homework Helper
Gold Member
Dearly Missed
AKG:
You are indulging yourself in the fantasy:
Suppose there exists a being which necessarily exists. Hence it exists.
As Evo said, this is just pointless.
honestrosewater
Gold Member
arildno said:
AKG:
You are indulging yourself in the fantasy:
Suppose there exists a being which necessarily exists. Hence it exists.
As Evo said, this is just pointless.
I don't think that's what they're saying, and I don't think (p -> []p, .: p) is valid. Is it?
What does ".->" stand for- logical implication?
DaveC426913
Gold Member
Integral said:
My proof that god does not exist:
God is perfect,
Nothing that exists is perfect. (This could be seen as a result of HUP)
Therefore God Does not exist.
You jest, of course.
Neither of your premises can be taken as given.
DaveC426913
Gold Member
Jameson said:
I ask this: If religion requires faith, that is believing in parts of it without physical proof, then why are people trying to justify it logically?
If it was completely provable, that would be using knowledge, not faith. So to those of us who are religious/spiritual/anything else that falls on faith, do you try to rationalize it to explain your view to others or to tell yourself you're right?
...
I have nothing against faith, I just wonder why people try to mix the categories of faith and logical/rational justification. To put these things together seems quite like a paradox.
This is the most sound argument I've heard so far, and very close to my own beliefs. By definition, faith occurs without proof.
Jameson said:
I also agree with cronxeh. We are technically all born as atheists. I don't see how you could argue we are born with a belief in God.
We are technically born not being able to conceive of oxygen, but it sure turns out to be an important thing to have in existence.
Eventually, we all come to realize we need air, whether told or not. Even if we are raised by wolves, and don't understand what oxygen is, we still need it.
Integral
Staff Emeritus
Gold Member
We are all born ignorant. Does this imply a belief in god?
hypnagogue
Staff Emeritus
Gold Member
Jameson said:
I also agree with cronxeh. We are technically all born as atheists. I don't see how you could argue we are born with a belief in God.
We are not born with a belief in God, but evidence suggests that we are born with neural hardware that is wired to create spiritual experiences, which are arguably the foundation of all religious frameworks. See for example the book Why God Won't Go Away.
There is the question of whether religious ideology is a high level human construct or whether its basis is, at some basic level, 'hard wired' into our brains. I think the evidence points to the latter. For instance, some epileptic seizures induce intense spiritual experiences. To me, this is rather strongly suggestive that the spiritual experience is not something we cogitate, but rather a fundamental kind of experience built into our brains, somewhat like vision. Of course, it is not as ubiquitously or as obviously active as vision. And, of course, religious ideologies and frameworks are largely the result of higher-order mental faculties. But the seeds of such frameworks seem to be found in something the brain is naturally built to do.
I found an interesting link relating to this subject, a transcript of a BBC program interviewing the authors of the book mentioned above. I haven't read over the whole thing, but it should come to bear directly on this topic. Here's the link.
Evo
Mentor
hypnagogue said:
We are not born with a belief in God, but evidence suggests that we are born with neural hardware that is wired to create spiritual experiences, which are arguably the foundation of all religious frameworks. See for example the book Why God Won't Go Away.
The need for a belief system is so prevalent throughout human history that I would tend to agree.
There is the question of whether religious ideology is a high level human construct or whether its basis is, at some basic level, 'hard wired' into our brains. I think the evidence points to the latter. For instance, some epileptic seizures induce intense spiritual experiences. To me, this is rather strongly suggestive that the spiritual experience is not something we cogitate, but rather a fundamental kind of experience built into our brains, somewhat like vision. Of course, it is not as ubiquitously or as obviously active as vision. And, of course, religious ideologies and frameworks are largely the result of higher-order mental faculties. But the seeds of such frameworks seem to be found in something the brain is naturally built to do.
I found an interesting link relating to this subject, a transcript of a BBC program interviewing the authors of the book mentioned above. I haven't read over the whole thing, but it should come to bear directly on this topic. Here's the link.
I saw a different show on tv about temporal lobe epilepsy, and all of the people thought they had spoken to God, or had some unique deeply religious episodes brought on by the epilepsy. It was very interesting.
learningphysics
Homework Helper
AKG said:
As I suggested, God (in the context of this argument) refers to the greatest possible being, therefore, it is a necessary being. Unicorns aren't necessary beings, so clearly, they aren't the same.
How about a "necessarily existing unicorn"?
saltydog
Homework Helper
hypnagogue said:
There is the question of whether religious ideology is a high level human construct or whether its basis is, at some basic level, 'hard wired' into our brains. I think the evidence points to the latter.
Can it not be hard-wired because of selective favor through Darwinian evolution? I've read where some think, as do I, that religion is an advantage to survival and reproduction (See, "The Biology of Religion" by V. Reynolds). Thus those who entertained such would be favored and in so doing, would contribute to "general neural architecture" that would exhibit the symptoms you speak of.
hypnagogue
Staff Emeritus
Gold Member
saltydog said:
Can it not be hard-wired because of selective favor through Darwinian evolution? I've read where some think, as do I, that religion is an advantage to survival and reproduction (See, "The Biology of Religion" by V. Reynolds). Thus those who entertained such would be favored and in so doing, would contribute to "general neural architecture" that would exhibit the symptoms you speak of.
I don't see how that's very different from what I suggested.
As far as evolutionary concerns go, there's something interesting to consider here. I would generally agree that the kinds of social institutions enforced by religious frameworks are evolutionarily advantageous; however, I also think it's highly likely that the vast majority of religious believers throughout history and across the globe have never had a true 'spiritual experience' as described variously by e.g. epileptics, users of psychedelics, and dedicated practitioners of meditative techniques. If that's the case, it would seem to undermine a straightforward evolutionary explanation, or at least complicate things.
One way to compensate for this might be to note that spiritual experiences seem to be triggered by rather extreme physiological conditions-- starvation, very high or low levels of CNS/brain stimulation, and in the case of out of body experiences, trauma and near death. Perhaps the experience arose directly as an evolutionary coping mechanism to comfort people in times of extreme biological stress, where it might otherwise be easy to just give up and die, and the establishment of religious institutions and the like was just an indirect (and also beneficial) side effect that this experience had when it happened to certain charismatic individuals (Buddha, Jesus, Mohammed, etc).
It's also possible that something like spiritual experience is at work in most religious believers, but just at a much less intense level than in the extreme cases. However, I tend to think that most religious believers turn out that way primarily because of high level social factors. It's the extreme and surprising experiences of the people who are variously viewed as blessed, prophetic, or insane that I think find a strong basis in 'hard wired' neural architecture.
AKG
Homework Helper
Evo said:
You say "Unicorns aren't necessary beings, so clearly, they aren't the same" Ok, prove it. Prove Unicorns aren't the most necessary beings. You can't and it's silly of me to ask you to do so.
What does the term "God" mean? Many people will say that, "by definition," it would refer to some being that would be the greatest possible or conceivable being, and it can be argued that this entails that if it exists, it exists necessarily, i.e. it is not contingent on any other being or thing. It's not a matter of proving that unicorns aren't necessary, it is, at this stage, just a matter of definition. That God is a necessary (non-contingent) being is something that follows from definition in the context of this argument. What does the term "unicorn" mean? Does any part of its definition suggest that it is non-contingent? I don't think so.
All you have done is attack me for admitting I see no merit in it. I already said why I think it's pointless in a previous post, if you disagree, then you need to say why. I do not see the formula as a basis for a meaningful philosophical discussion.
No, I have attacked you for wasting space in a philosophical thread with pointless little comments, and some comments that had points but no justification.
Surely, you see natural language as the basis of a meaningful philosophical discussion. If someone explained the argument to you in plain English, would it suddenly become more meaningful? When the argument is simple enough that it can be clearly expressed symbolically in modal logic, the fact that one does so doesn't make the argument less meaningful. Indeed, logic is just a way to clearly express the reasoning that would otherwise go in natural language. So although you have something against these symbols, your claim that a symbolic argument for God is meaningless is wrong.
I will go further and say that I think any discussion of if there is one god or one hundred or none or whose god is better is pointless. Hey, if you think there is a point, you're free to post your opinion.
Yes, you've claimed that no one can either prove or disprove it. Nobody cares to read just your claims. Back up that assertion.
arildno said:
AKG:
You are indulging yourself in the fantasy:
Suppose there exists a being which necessarily exists. Hence it exists.
As Evo said, this is just pointless.
I think you missed the entire point. There is no premise in the argument which states the being exists. It only says that if it exists, then it exists necessarily, that is, this being called "God" is defined as one that is not contingent on anything else. Any being that you find which exists contingently is not God. But, is there some being which is not contingent on any other thing? Well, the proof asserts as a premise that God possibly exists. It follows from these two premises that God does exist. It is not a circular, tautologous argument as you seem to think it is. It defines God such that if G = "God exists" then G -> []G (which is just a definition, so it can't really be disagreed with), and it assumes <>G, or that God possible exists, and concludes G, that God exists. The deduction is valid, and no more meaningless than an argument in natural language. The main point of contention is whether God, as it is defined, is in fact possible.
There is also the point that this argument shows (assuming <>G) only that a being with the property that it would have to exist non-contingently if it were to exist at all, does exist, but this "being" is not necessarily the Christian God, or any other God, but simply a being with the property that it has necessary existence, and that's all. It is in fact a largely vacuous proof, and I do indeed believe it has serious problems, but that it is tautologous (simply stating that an existing being exists), or meaningless just because it is symbolic, or that it applies to unicorns (nothing about our definition of unicorns says anything about them being necessary), or that it is pointless just because it talks about God, are all not problems with the argument.
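For readers puzzling over the symbols, the deduction AKG describes is the standard S5 modal argument. A sketch, in the thread's own notation ([] for "necessarily", <> for "possibly"); as noted above, the contested premise is line 2, and line 4 requires the S5 axiom:

```
1. G -> []G    definition: if God exists, he exists necessarily
2. <>G         premise: it is possible that God exists
3. <>[]G       from 1 and 2 (with 1 taken as a necessary truth)
4. []G         S5 axiom: what is possibly necessary is necessary (<>[]p -> []p)
5. G           axiom T: what is necessary is actual ([]p -> p)
```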
AKG
Homework Helper
learningphysics said:
How about a "necessarily existing unicorn"?
This unicorn would have to have a totally non-contingent existence. It must not be contingent, on, for example, space, so this being must exist even if there were no space. Since that doesn't make sense, any unicorn would be contingent, and thus a necessarily existing unicorn is not possible, and the argument fails, since the premise <>"necessarily existing unicorn exists" is false.
In some senses, it is not that simple. What exactly does it mean for a being to be contingent or necessary? If determinism is true, is everything necessary, or can we still speak of contingency, but just in a more relative sense? If contingency is just a relative thing, is it a meaningful term to use in relation to this argument?
AKG
Homework Helper
Some theists have responded to the discovery of the "God module" in our brain as a "sign" of God's design. Whereas atheists use this to write off religious belief as something we just evolved to do, not something we reasonably choose to do, theists suggest that this is evidence that God designed us to believe in him. I wouldn't bet a penny on either hypothesis, at least not until there is scientific evidence presented.
Evo
Mentor
AKG said:
What does the term "God" mean?
That's not the topic of the thread and it wasn't even brought up by the thread owner.
Many people will say that, "by definition," it would refer to some being that would be the greatest possible or conceivable being, and it can be argued that this entails that if it exists, it exists necessarily, i.e. it is not contingent on any other being or thing.
YOU are defining god and placing YOUR definition into the formula. There truly is no single definition of "god".
It's not a matter of proving that unicorns aren't necessary, it is, at this stage, just a matter of definition. That God is a necessary (non-contingent) being is something that follows from definition in the context of this argument.
You mean that this formula requires a "christian god" type in order to work? Yes, that's a major flaw. Gods throughout history do not necessarily fall into this definition. There are gods that are weak, that have very limited powers, have human vices, are killed by other gods, killed and wounded by humans.
I have attacked you for wasting space in a philosophical thread with pointless little comments, and some comments that had points but no justification.
Then you are wrong, but perhaps you are truly the Grand Poobah of philosophy and therefore you can decide what is or is not pointless, correct? Just because you can discuss something doesn't mean it has merit or is even worthy of being discussed.
There is also the point that this argument shows (assuming <>G) only that a being with the property that it would have to exist non-contingently if it were to exist at all, does exist, but this "being" is not necessarily the Christian God, or any other God, but simply a being with the property that it has necessary existence, and that's all. It is in fact a largely vacuous proof, and I do indeed believe it has serious problems, but that it is tautologous (simply stating that an existing being exists), or meaningless just because it is symbolic, or that it applies to unicorns (nothing about our definition of unicorns says anything about them being necessary), or that it is pointless just because it talks about God, are all not problems with the argument.
Ah, so you do admit the formula is seriously flawed and therefore it would be pointless to use it in a discussion of if there is a "god" or whatever.
I never said that it was "pointless just because it talks about a god", your mistake.
I asked you to show what merit using this formula would have in a discussion, not the formula itself, and you failed to do so. You have simply regurgitated the formula, inserted your personal opinions of what "god" is, and pointed out the formula is flawed anyway.
Last edited:
Evo
Mentor
Hypnagogue, here is the transcript of the program I mentioned. The part that you'd want to read is about John Sharon. If you bring up the edit box, type in "John Sharon has temporal lobe epilepsy." and it will take you right to it.
http://www.pbs.org/wgbh/nova/transcripts/2812mind.html
John Sharon has temporal lobe epilepsy.
John's epileptic seizures are essentially an electrical storm in his temporal lobes when a group of neurons starts firing at random, out of sync with the rest of his brain.
NARRATOR: John had never been religious, yet the onset of his seizures brought on overwhelming spiritual feelings.
V.S. RAMACHANDRAN: It has been known for a long time that some patients with seizures originating in the temporal lobes have intense religious auras, intense experience of God visiting them. Sometimes it's a personal god, sometimes it's a more diffuse feeling of being one with the cosmos. Everything seems suffused with meaning. The patient will say, "Finally I see what it's really about, Doctor. I really understand God. I understand my place in the universe, in the cosmic scheme." Why does this happen and why does it happen so often in patients with temporal lobe seizures?
V.S. RAMACHANDRAN: Now, why do these patients have intense religious experiences when they have these seizures? And why do they become preoccupied with theological and religious matters even in between seizures?
One possibility is that the seizure activity in the temporal lobes somehow creates all kinds of odd, strange emotions in the person's mind...in the person's brain. And this welling up of bizarre emotions may be interpreted by the patient as visits from another world, or as, "God is visiting me." Maybe that's the only way he can make sense of this welter of strange emotions going on in his brain. Another possibility is that this is something to do with the way in which the temporal lobes are wired up to deal with the world emotionally. As we walk around and interact with the world, you need some way of determining what's important, what's emotionally salient and what's relevant to you versus something trivial and unimportant.
How does this come about? We think what's critical is the connection between the sensory areas in the temporal lobes and the amygdala, which is the gateway to the emotional centers in the brain. The strength of these connections is what determines how emotionally salient something is. And therefore, you could speak of a sort of emotional salience landscape, with hills and valleys corresponding to what's important and what's not important. And each of us has a slightly different emotional salience landscape. Now, consider what happens in temporal lobe epilepsy when you have repeated seizures. What might be going on is an indiscriminate strengthening of all these pathways. It's a bit like water flowing down rivulets along the cliff surface. When it rains repeatedly there's an increasing tendency for the water to make furrows along one pathway and this progressive deepening of the furrows artificially raises the emotional significance of some categories of inputs. So instead of just finding lions and tigers and mothers emotionally salient, he finds everything deeply salient. For example, a grain of sand, a piece of driftwood, seaweed, all of this becomes imbued with deep significance. Now, this tendency to ascribe cosmic significance to everything around you might be akin to what we call a mystical experience or a religious experience.
Chronos
https://socratic.org/questions/what-are-the-bracket-for-around-the-lewis-structure | # What are the brackets for around the Lewis structure?
##### 1 Answer
Jan 29, 2017
They allow one to unambiguously describe the charge of the overall ion. Here is an example:
For $\text{NH}_4^+$, more accurately $[\text{NH}_4]^+$ (as the positive formal charge does not belong to hydrogen), we have a formal charge on $\text{N}$ of $4 - 3 = +1$, while each $\text{H}$ has a formal charge of $1 - 1 = 0$. So, the overall charge adds up to be $+1$.
(If you recall, formal charge assumes evenly-shared valence electrons and is $\text{FC} = \text{Valence} - \text{Owned}$.)
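As an aside for anyone typesetting this notation in LaTeX: assuming the mhchem package, the bracketed ion with its overall charge can be produced directly (a sketch, not the only way to do it):

```latex
\documentclass{article}
\usepackage[version=4]{mhchem} % \ce{} for chemical formulae
\begin{document}
% The brackets make clear the + charge belongs to the whole ion,
% not to any single atom:
\ce{[NH4]+}
\end{document}
```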
https://tex.stackexchange.com/questions/11584/comma-separated-list | # Comma-separated list
I would like to produce comma-separated lists, like "a, b, c, d" or "a, b, c, and d".
Requirements:
• In the LaTeX source code, each line should contain one list element, with as little additional markup as possible.
• It should be easy to re-order the entries by changing the order of source code lines.
For example, I could simply write the list elements like this:
a,
b,
c, and
d.
This would (almost) satisfy the first requirement, but it would fail on the second requirement. I would have to remember to fix the punctuation whenever I re-order the entries (or simply add a new entry). It sounds trivial, but it is surprisingly easy to forget to replace the full stop with a comma, and not that easy to spot the mistake. And I have a longish LaTeX document that mixes other content and such lists, so it would not be convenient to use an external script to generate the appropriate LaTeX code – I am really looking for something that is as easy as possible to maintain in the long term.
I do not really know what would be an appropriate interface; hence this can also be seen as an interface-design challenge. Perhaps something like this:
xxx\begin{commasep}[and]
\item a
\item b
\item c
\item d
\end{commasep}yyy
would produce:
xxxa, b, c, and dyyy
(Note: no whitespace after "d", so that I can add appropriate punctuation right after the list.)
I wonder if, e.g., paralist can be tweaked to produce what I want?
• I can't provide an answer, but if you think about tweaking paralist, have a look at the alpha version of enumitem 3.0 (announcement), which (finally) includes run-in lists. – lockstep Feb 20 '11 at 18:27
• @lockstep: Thanks, that is great news indeed! I am using enumitem almost all the time, and I have tried to avoid the headache of mixing enumitem + paralist. – Jukka Suomela Feb 20 '11 at 18:32
• I could image something based on changing the catcode of the end-of-line character. This then doesn't work when the list is part of a macro argument. Would this be ok? – Martin Scharrer Feb 20 '11 at 18:43
• @Martin: Yes, it would be ok in my application. – Jukka Suomela Feb 20 '11 at 18:48
• Very pleased to see the Oxford comma in use: "a, b, c, and d". – Loop Space Feb 21 '11 at 14:35
Here is a macro which reads the lines as macro arguments. It was inspired by this question. It would be possible to change the "interface" to LaTeX environments instead, but the plain TeX syntax (without \begin and \end) is easier.
\documentclass{article}
\makeatletter
\newcommand*\commasep[2][, ]{%
\begingroup
\def\commasepsep{#1\space}% store separator e.g. ','
\def\commasependsep{#2\space}% store last separator e.g. 'and'
\@commasep
}
\newcommand*\@commasep[1][.]{%
\def\commasepend{#1}% store end marker e.g. '.'
\@ifnextchar\endcommasep{}{% check for empty list
\catcode\endlinechar\active% make end-of-line active
}%
}%
\begingroup
\catcode\endlinechar\active%
% The ~ character represents the end-of-line character in the
% code below:
\lccode`\~=\endlinechar%
\lowercase{%
\gdef\commasepfirstelement#1~}{%
#1%
\@ifnextchar\endcommasep{%
\commasepend% remove this if you don't want one with only one element
}{%
\commasepelement% no comma here!
}%
}%
\lowercase{%
\endgroup%
\gdef\commasepelement#1~}{%
\@ifnextchar\endcommasep{% Stop at \endcommasep
\commasependsep#1\commasepend%
}{%
\commasepsep#1\commasepelement%
}%
}%
\def\endcommasep{%
\@gobble{endcommasep}% be unique (for \@ifnextchar)
\endgroup
}%
\makeatother
\begin{document}
% Usage: \commasep[<separator>]{<last separator>}[<end marker>]
xxx\commasep[,]{and}[.]
a
b
c
d
\endcommasep yyy
xxx\commasep{and}
a
\endcommasep yyy
xxx\commasep{and}
\endcommasep yyy
xxx\commasep{or}
a
b
\endcommasep yyy
\end{document}
Please also note that there is a parselines package which might be used. However, it doesn't support handling the last entry differently. Nevertheless, its code might be a good read for people interested in this kind of parsing.
• Very nice. Is there a reason for not adding the spaces in the definition of \commasependsep? That would make user input a lot more intuitive, rather than having to type { and }. (And in fact, I would suggest an optional argument) [,] which would put in a comma before the 'and' if desired. (It might also be useful to let the user specify the delimeter (',' or ';' for example) for the whole list too. – Alan Munn Feb 20 '11 at 20:11
• @Alan Munn: Good argument. I didn't add the spaces, to allow the user some freedom. He/she might want to use {,} instead of { and } or use the funny American style {, and }. It's not a problem to add an optional argument, but this wasn't requested, so I didn't add it. – Martin Scharrer Feb 20 '11 at 20:41
• @Alan Munn: I added now optional arguments for the separator as well as the end marker: \commasep[, ]{ and }[.]. I kept the spaces as part of these arguments because of the above mentioned reason of flexibility. – Martin Scharrer Feb 20 '11 at 20:48
• @Martin While I see the need for flexibility, the extra spaces are very unintuitive as input. For example, would anyone want no trailing space after the delimiter? (I'm making these comments because Jukka's question was also about what a good interface would be like; I'm not complaining about your solution!) – Alan Munn Feb 20 '11 at 21:03
• @Alan Munn: Ok, I added the spaces (\space) now, because it is simpler to remove them if they are not wanted than to add them if they are. – Martin Scharrer Feb 20 '11 at 21:08
Items are delimited by ,, (a double comma), and the list is ended with .. (a double period). In fact, each ,, should be followed by a space or the end of a line (without a trailing %). For very long lists (a few paragraphs?), this might get slow, since we grab the whole list as an argument at every step.
(Note that I "declare" each command with \newcommand\command{} to make sure they don't exist before I define them using \def.)
The whole construction relies on a generic macro to insert the desired separators between list elements, with some configuration possible. See e.g., \commasepitem below, which writes the list in an itemize environment.
\documentclass{article}
\makeatletter
% #1 at the beginning
% #2 between each element
% #3 between the last two
% #4 after the last
% #5 is the first item
\newcommand*\commasep@generic{}
\def\commasep@generic#1#2#3#4 #5,, {#1#5\commasep@next{#2}{#3}{#4}}
\newcommand*\commasep@next{}
\def\commasep@next#1#2#3#4,, #5..{%
% #1 is the list separator,
% #2 is the last list separator,
% #3 is the end text
% #4 the item,
% #5 the rest.
\expandafter\ifx\expandafter a\detokenize{#5}a% test if #5 is empty.
\expandafter\@firstoftwo
\else
\expandafter\@secondoftwo
\fi
{#2#4#3}%
{#1#4\commasep@next{#1}{#2}{#3}#5..}%
}%
\newcommand*\commasep[1]{\commasep@generic{}{, }{, #1 }{\ignorespaces}}
\newcommand*\commasepitem{%
\commasep@generic{\begin{itemize}\item}{\item}{\item[(last)]}{\end{itemize}} }
\makeatother
\begin{document}
xx\commasep{and}
a,,
b,,
c,,
d,,
..
yyyy
xxx\commasepitem
``London bridge is falling down'' is much more frightening!,,
c,,
``Mary had a little lamb, little lamb, little lamb'' is a nice
little song that my mom used to sing,,
d,,
..
yyyy
\end{document}
With the help of enumitem it is possible.
\usepackage[inline]{enumitem}
\begin{itemize*}[label={}, itemjoin={,}, itemjoin*={, and}]
\item one
\item two
\item three
\end{itemize*}
Output:
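For reference, a complete minimal document built around this snippet might look like the following (a sketch; the exact itemjoin values, including the spacing, are my own choice, and inline lists require a reasonably recent enumitem):

```latex
\documentclass{article}
\usepackage[inline]{enumitem} % 'inline' enables the starred in-line environments
\begin{document}
% itemjoin goes between items, itemjoin* between the last two items:
xxx\begin{itemize*}[label={}, itemjoin={,\ }, itemjoin*={,\ and\ }]
  \item one
  \item two
  \item three
\end{itemize*}yyy
\end{document}
```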
With a flexible interface for setting the separators:
\documentclass{article}
\usepackage{xparse}
\ExplSyntaxOn
\NewDocumentCommand{\commasep}{O{}m}
{
\group_begin:
\jukka_commasep:nn { #1 } { #2 }
\group_end:
\ignorespaces
}
\NewDocumentCommand{\commasepsetup}{m}
{
\keys_set:nn { jukka/commasep } { #1 }
}
\cs_new_protected:Nn \jukka_commasep:nn
{
\keys_set:nn { jukka/commasep } { #1 }
\seq_set_split:Nnn \l__jukka_commasep_seq { \\ } { #2 }
\str_if_eq_x:nnT { } { \seq_item:Nn \l__jukka_commasep_seq { -1 } }
{
\seq_pop_right:NN \l__jukka_commasep_seq \l_tmpa_tl
}
\seq_use:NVVV \l__jukka_commasep_seq
\l__jukka_commasep_two_tl
\l__jukka_commasep_more_tl
\l__jukka_commasep_last_tl
}
\cs_generate_variant:Nn \seq_use:Nnnn { NVVV }
\keys_define:nn { jukka/commasep }
{
two .tl_set:N = \l__jukka_commasep_two_tl,
more .tl_set:N = \l__jukka_commasep_more_tl,
last .tl_set:N = \l__jukka_commasep_last_tl,
two .initial:n = { ~and~ },
more .initial:n = { ,~ },
last .initial:n = { ,~and~ },
}
\ExplSyntaxOff
\begin{document}
xxx\commasep{
a \\
}yyy
xxx\commasep{
a \\
b \\
}yyy
xxx\commasep{
a \\
b \\
c \\
d \\
}yyy
\commasepsetup{
two=+,
more=-,
last=-,
}
xxx\commasep{
a \\
}yyy
xxx\commasep{
a \\
b \\
}yyy
xxx\commasep{
a \\
b \\
c \\
d \\
}yyy
\end{document}
https://tex.stackexchange.com/questions/417259/how-to-keep-indentations-in-python-code-copied-from-latex-pdf | # How to keep indentations in Python code copied from LaTeX PDF?
I tried to include Python 3.6 code in a LaTeX PDF document in such a way that it can easily be copied, either to save it to a file or to try out the code.
Although
\begin{verbatim}
for row in range(1,9):
    for col in range(1,9):
        print(int(str(row)+str(col)))
\end{verbatim}
shows nicely in the PDF, if I copy it and paste it into a text editor it looks like:
for row in range(1,9):
for col in range(1,9):
print(int(str(row)+str(col)))
The indentations which are essential for Python are gone.
I also tried the suggestion here using the listings package:
How to highlight Python syntax in LaTeX Listings \lstinputlistings command
where I can simply give the file name of an Python file e.g.
\pythonexternal{Test.py}
And the source code will be included and colored. But the same problem occurs: the leading spaces are missing when copying and pasting.
If I use the option "showspaces=true" I get the following:
for␣row␣in␣range(1,9):
␣␣␣␣for␣col␣in␣range(1,9):
␣␣␣␣␣␣␣␣print(int(str(row)+str(col)))
which is also not suitable for copy & paste. Well, I could replace all ␣ with spaces... not a really practical solution.
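For completeness, the post-paste cleanup itself is trivial to script (a throwaway sketch; it only works around the problem rather than fixing the PDF):

```python
# Throwaway cleanup for code pasted from a PDF typeset with
# listings' showspaces=true: turn U+2423 (the open-box glyph)
# back into real spaces. A workaround sketch, not a fix.
def restore_spaces(pasted: str) -> str:
    return pasted.replace("\u2423", " ")

if __name__ == "__main__":
    pasted = "for\u2423row\u2423in\u2423range(1,9):"
    print(restore_spaces(pasted))  # for row in range(1,9):
```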
There have been some weird hacks described in 2011 here:
How to make listings code indentation remain unchanged when copied from PDF?
Is there anything new since then...? Any ideas how to achieve copy&paste Python code in a LaTeX PDF? Thank you for suggestions.
• I used once attachfile package to get code snippets attached to the PDF (but not every PDF viewer will display them). I also used filecontentsdef to create those snippet files from LaTeX source, but you might prefer VerbatimOut or like environments. – user4686 Feb 25 '18 at 21:22
• I prefer not to use something which is displayed (or not) depending on the viewer. I couldn't try filecontentsdef. How to include it? \usepackage{filecontentsdef}? It should be included in MiKTeX but TeXnicCenter didn't load it. VerbatimInput would be an option, but when copying the Python code from the PDF to a text editor the indentations are also gone. As I understand the above-mentioned thread and hacks from 2011, there was no working solution. – theozh Feb 26 '18 at 7:45
• You did not understand: I referred to VerbatimOut from the package fancyvrb, which (like filecontentsdef) allows you to create a file from within the LaTeX source. Then you attach these files with the package attachfile. As for filecontentsdef, you need to know how to update your TeX installation; that is not the problem here. – user4686 Feb 26 '18 at 12:08
• maybe, I did not make myself clear. I don't want to create an external file from LaTeX code and I don't want to attach a file. I want to include source code from an external file which can be selected and copied later from the resulting PDF and pasted into a text editor including indentations. – theozh Feb 26 '18 at 19:32
• I know no way to copy paste from PDF (especially if you don't want to restrict to specific viewers) and preserve code indentation. I proposed file attachments as the only foolproof method I know of. HTML or for that matter plain text are simply better than PDF. This is major issue with PDF. – user4686 Feb 26 '18 at 22:35 | 2019-08-22 22:19:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.734826385974884, "perplexity": 1753.698705633513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317516.88/warc/CC-MAIN-20190822215308-20190823001308-00205.warc.gz"} |
https://math.stackexchange.com/questions/2690872/is-it-true-about-center | # Is it true about center? [closed]
Let $G$ be a finite group and let $Z$ be the center of $G$. Now assume that $G = G_1 \times G_2 \times \cdots \times G_k$ is a decomposition of $G$, where each $G_i$ is indecomposable.
Is it true that
$G/Z = G_1/Z \times G_2/Z \times \cdots \times G_k/Z$?
or just tell the correct relation here.
## closed as off-topic by Saad, JMP, Dietrich Burde, Derek Holt, Carl Mummert Mar 14 '18 at 18:43
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Saad, JMP, Dietrich Burde, Derek Holt, Carl Mummert
If this question can be reworded to fit the rules in the help center, please edit the question.
It is clear that $Z(G)=Z(G_1)\times Z(G_2)\times ...\times Z(G_k)$. Consider the map $\phi : G\rightarrow G_1/Z(G_1)\times ...\times G_k/Z(G_k)$ with $\phi(g)=(\overline{g_1},...,\overline{g_k})$ where $g=(g_1,...,g_k)$. What is the kernel? | 2019-09-17 14:23:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7546399235725403, "perplexity": 707.2365154612615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573080.8/warc/CC-MAIN-20190917141045-20190917163045-00389.warc.gz"} |
http://www.ruor.uottawa.ca/en/handle/10393/9646 | # Central nucleus of the amygdala and the development of hypertension in spontaneously hypertensive rats.
Title: Central nucleus of the amygdala and the development of hypertension in spontaneously hypertensive rats.
Author: Sharma, Nishan B.
Abstract: Electrolytic lesions of the central nucleus of the amygdala (ACe) have been shown to attenuate the development of hypertension in spontaneously hypertensive rats (SHR). Whether this was due to destruction of local neurons and/or fibres of passage is unknown. In the present study, neuronal perikarya in the ACe of 4 week-old SHR were selectively destroyed with ibotenic acid. Three separate experiments were conducted, in which mean arterial pressure (MAP), heart rate and blood pressure responses to acute mental stress were measured in groups of lesioned and sham-lesioned SHR. In Experiment 1, in which rats were fed ad lib., lesioned SHR had a significantly lower average MAP (173 mmHg $\pm$ 7 S.E.) vs. sham-lesioned SHR (201 $\pm$ 4), 15 weeks post-operation (p < 0.05, t-test). These results show that the attenuation of the development of hypertension in young SHR is due to the selective destruction of neurons in the ACe. The lesioned animals in Experiment 1 also had significantly lower body weights (BW) from 5 weeks post-operation onwards (p < 0.5, two-way repeated-measures ANOVA). Therefore, in Experiment 2, food intake (and hence BW) among the lesioned and sham-lesioned rats was equalized. Average MAP in the lesioned SHR at 7 and 15 weeks post-operation was not different vs. sham-lesioned SHR, but was significantly higher (190 $\pm$ 9) vs. sham-lesioned SHR (164 $\pm$ 5) 22 weeks post-operation (p < 0.05, t-test). These results indicate that destruction of neuronal perikarya in the ACe in young SHR merely delays the development of hypertension, due to a reduced BW gain. In Experiment 3, the effect of a high salt diet in ACe-lesioned SHR was examined. No significant differences in MAP were measured between lesioned and sham-lesioned rats 4 or 11 weeks post-operation.
Date: 1995
URI: http://hdl.handle.net/10393/9646
## Files in this item
Files Size Format
MM07863.PDF 2.872 MB application/pdf
65 University Private | 2013-05-26 03:03:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4302268922328949, "perplexity": 13080.731882560496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00097-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://ryanwingate.com/assets/projects/A-B_Test_Result_Analysis.html | ## Analyze A/B Test Results¶
This project will assure you have mastered the subjects covered in the statistics lessons. The hope is to have this project be as comprehensive of these topics as possible. Good luck!
### Introduction
A/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these.
For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision.
As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question. The labels for each classroom concept are provided for each question. This will assure you are on the right track as you work through the project, and you can feel more confident in your final submission meeting the criteria. As a final check, assure you meet all the criteria on the RUBRIC.
#### Part I - Probability
To get started, let's import our libraries.
In [1]:
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
# We are setting the seed to assure you get the same answers
# on quizzes as we set up.
random.seed(42)
1. Now, read in the ab_data.csv data. Store it in df. Use your dataframe to answer the questions in Quiz 1 of the classroom.
a. Read in the dataset and take a look at the top few rows here:
In [2]:
df = pd.read_csv('ab_data.csv')
df.head()
Out[2]:
user_id timestamp group landing_page converted
0 851104 2017-01-21 22:11:48.556739 control old_page 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0
3 853541 2017-01-08 18:28:03.143765 treatment new_page 0
4 864975 2017-01-21 01:52:26.210827 control old_page 1
b. Use the below cell to find the number of rows in the dataset.
In [3]:
row_count = df.shape[0]
print('row count = ' + str(row_count))
row count = 294478
c. The number of unique users in the dataset.
In [4]:
unique_users = df.user_id.nunique()
print('unique user count = ' + str(unique_users))
unique user count = 290584
d. The proportion of users converted.
In [5]:
unique_converted_users = df[df.converted == 1].user_id.nunique()
print('unique converted user count = ' \
+ str(unique_converted_users))
print('proportion of unique users that are converted = ' \
+ str(unique_converted_users/unique_users))
unique converted user count = 35173
proportion of unique users that are converted = 0.12104245244060237
e. The number of times the new_page and treatment don't line up.
In [6]:
df.groupby(['group', 'landing_page']).count()
Out[6]:
user_id timestamp converted
group landing_page
control new_page 1928 1928 1928
old_page 145274 145274 145274
treatment new_page 145311 145311 145311
old_page 1965 1965 1965
In [7]:
treatment_old_page = \
df[(df.group == 'treatment') & \
(df.landing_page == 'old_page')].shape[0]
control_new_page = \
df[(df.group == 'control') & \
(df.landing_page == 'new_page')].shape[0]
print('# times treatment group receives old_page: ' \
+ str(treatment_old_page))
print(' # times control group receives new_page: ' \
+ str(control_new_page))
print(' Sum of both: ' \
+ str(treatment_old_page + control_new_page))
# times treatment group receives old_page: 1965
# times control group receives new_page: 1928
Sum of both: 3893
f. Do any of the rows have missing values?
In [8]:
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 294478 entries, 0 to 294477
Data columns (total 5 columns):
user_id 294478 non-null int64
timestamp 294478 non-null object
group 294478 non-null object
landing_page 294478 non-null object
converted 294478 non-null int64
dtypes: int64(2), object(3)
memory usage: 11.2+ MB
In [9]:
print(df.isnull().sum())
print('\nTherefore, no.')
user_id 0
timestamp 0
group 0
landing_page 0
converted 0
dtype: int64
Therefore, no.
2. For the rows where treatment is not aligned with new_page or control is not aligned with old_page, we cannot be sure if this row truly received the new or old page. Use Quiz 2 in the classroom to provide how we should handle these rows.
a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in df2.
In [10]:
drop_index = df[( (df.group == 'treatment') & \
(df.landing_page == 'old_page') ) | \
( (df.group == 'control') & \
(df.landing_page == 'new_page') ) ].index
df2 = df.drop(drop_index)
In [11]:
# Double Check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment') == \
(df2['landing_page'] == 'new_page')) == False].shape[0]
Out[11]:
0
3. Use df2 and the cells below to answer questions for Quiz 3 in the classroom.
a. How many unique user_ids are in df2?
In [12]:
df2_unique_users = df2.user_id.nunique()
print('df2 unique user count = ' + str(df2_unique_users))
df2 unique user count = 290584
b. There is one user_id repeated in df2. What is it?
In [13]:
print(df2[df2.duplicated('user_id')].user_id)
repeat_user_id = str(df2[df2.duplicated('user_id')].user_id)[8:14]
2893 773192
Name: user_id, dtype: int64
In [14]:
print('repeat user id = ' + repeat_user_id)
repeat user id = 773192
c. What is the row information for the repeat user_id?
In [15]:
df2[df2.user_id == int(repeat_user_id)]
Out[15]:
user_id timestamp group landing_page converted
1899 773192 2017-01-09 05:37:58.781806 treatment new_page 0
2893 773192 2017-01-14 02:55:59.590927 treatment new_page 0
d. Remove one of the rows with a duplicate user_id, but keep your dataframe as df2.
In [16]:
print('shape before dropping = ' + str(df2.shape))
df2.drop([1899], inplace=True)
print(' shape after dropping = ' + str(df2.shape))
shape before dropping = (290585, 5)
shape after dropping = (290584, 5)
4. Use df2 in the below cells to answer the quiz questions related to Quiz 4 in the classroom.
a. What is the probability of an individual converting regardless of the page they receive?
In [17]:
df2.converted.mean()
Out[17]:
0.11959708724499628
b. Given that an individual was in the control group, what is the probability they converted?
In [18]:
control_conv = df2[df2['group'] == 'control'].converted.mean()
control_conv
Out[18]:
0.1203863045004612
c. Given that an individual was in the treatment group, what is the probability they converted?
In [19]:
treatment_conv = df2[df2['group'] == 'treatment'].converted.mean()
treatment_conv
Out[19]:
0.11880806551510564
d. What is the probability that an individual received the new page?
In [20]:
new_page_count = df2[df2.landing_page == 'new_page'].user_id.count()
new_page_count / df2.shape[0]
Out[20]:
0.50006194422266881
e. Consider your results from a. through d. above, and explain below whether you think there is sufficient evidence to say that the new treatment page leads to more conversions.
There does not appear to be sufficient evidence to conclude that the new treatment page produces more conversions than the current control page. The probability that a user who receives the treatment page will convert is actually slightly less than the probability that a user who receives the control page will convert.
### Part II - A/B Test
Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed.
However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another?
These questions are the difficult parts associated with A/B tests in general.
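As an aside (not part of the project), the cost of "stopping as soon as one page looks better" can be simulated. In the sketch below both arms share the same true conversion rate, so every rejection is a false positive; the batch size, number of peeks, and rate are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch: both arms share the same true conversion rate,
# so H0 is true and every rejection is a false positive. "Peeking"
# (testing after every batch) inflates that false-positive rate.
rng = np.random.RandomState(42)
p_true = 0.12                # identical conversion rate in both arms
z_crit = 1.96                # two-sided 5% critical value
n_batches, batch = 100, 100  # 100 peeks, 100 users per arm per batch

def run_one(rng):
    # Cumulative conversions in each arm after every batch.
    a = rng.binomial(batch, p_true, n_batches).cumsum()
    b = rng.binomial(batch, p_true, n_batches).cumsum()
    n = batch * np.arange(1, n_batches + 1)
    p_pool = (a + b) / (2 * n)
    se = np.sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = np.abs(a / n - b / n) / se
    # (significant at ANY peek, significant only at the final look)
    return z.max() > z_crit, z[-1] > z_crit

results = [run_one(rng) for _ in range(500)]
peek_rate = np.mean([ever for ever, _ in results])
final_rate = np.mean([final for _, final in results])
print("false-positive rate, peeking every batch:", peek_rate)
print("false-positive rate, single final test: ", final_rate)
```

The single final test holds the error rate near the nominal 5%, while continuous peeking rejects far more often, which is why a stopping rule has to be chosen before the experiment starts.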
1. For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of $p_{old}$ and $p_{new}$, which are the converted rates for the old and new pages.
$$H_0: p_{new} - p_{old} \leq 0$$ $$H_1: p_{new} - p_{old} > 0$$
Or, verbally:
$H_{0}$: The likelihood of conversion for a user receiving the new page is less than or equal to the likelihood of conversion for a user receiving the old page.
$H_{1}$: The likelihood of conversion for a user receiving the new page is greater than the likelihood of conversion for a user receiving the old page.
2. Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have "true" success rates equal to the converted success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the converted rate in ab_data.csv regardless of the page.
Use a sample size for each page equal to the ones in ab_data.csv.
Perform the sampling distribution for the difference in converted between the two pages over 10,000 iterations of calculating an estimate from the null.
Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use Quiz 5 in the classroom to make sure you are on the right track.
a. What is the convert rate for $p_{new}$ under the null?
In [21]:
p_new = df2.converted.mean()
print(p_new)
0.11959708724499628
b. What is the convert rate for $p_{old}$ under the null?
In [22]:
p_old = df2.converted.mean()
print(p_old)
0.11959708724499628
c. What is $n_{new}$?
In [23]:
n_new = df2[df2.landing_page == 'new_page'].user_id.count()
print(n_new)
145310
d. What is $n_{old}$?
In [24]:
n_old = df2[df2.landing_page == 'old_page'].user_id.count()
print(n_old)
145274
e. Simulate $n_{new}$ transactions with a convert rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in new_page_converted.
In [25]:
new_page_converted = \
np.random.choice([0, 1], size=n_new, p=[(1-p_new), p_new])
f. Simulate $n_{old}$ transactions with a convert rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in old_page_converted.
In [26]:
old_page_converted = \
np.random.choice([0, 1], size=n_old, p=[(1-p_old), p_old])
g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).
In [27]:
new_page_converted.mean() - old_page_converted.mean()
Out[27]:
-0.00094496826737285045
h. Simulate 10,000 $p_{new}$ - $p_{old}$ values using this same process similarly to the one you calculated in parts a. through g. above. Store all 10,000 values in a numpy array called p_diffs.
In [28]:
new_converted_simulation = \
np.random.binomial(n_new, p_new, 10000)/n_new
old_converted_simulation = \
np.random.binomial(n_old, p_old, 10000)/n_old
p_diffs = new_converted_simulation - old_converted_simulation
i. Plot a histogram of the p_diffs. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
This boils down to a computation of the "spread" of the data, assuming that the probability of converting a given user is the same whether they see the treatment page or the control page.
In [29]:
obs_diff = treatment_conv - control_conv
plt.hist(p_diffs);
plt.axvline(x=obs_diff, color='red');
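As a quick sanity check (an aside, not part of the project), the spread of this simulated null distribution should match the analytic standard error of a difference of two proportions; the rounded rate and the group sizes below are taken from the cells above:

```python
import numpy as np

# Sanity check: the SD of the simulated null differences should match
# SE = sqrt( p*(1-p) * (1/n_old + 1/n_new) ) under the null.
p = 0.1196                       # rounded pooled conversion rate from above
n_old_, n_new_ = 145274, 145310  # group sizes from above
analytic_se = np.sqrt(p * (1 - p) * (1 / n_old_ + 1 / n_new_))

rng = np.random.RandomState(42)
sim = (rng.binomial(n_new_, p, 5000) / n_new_
       - rng.binomial(n_old_, p, 5000) / n_old_)
print("analytic SE: ", analytic_se)
print("simulated SD:", sim.std())
```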
j. What proportion of the p_diffs are greater than the actual difference observed in ab_data.csv?
In [30]:
p_diffs = np.array(p_diffs)
(p_diffs > obs_diff).mean()
Out[30]:
0.90329999999999999
k. In words, explain what you just computed in part j. What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?
The histogram plotted in part i contains the sampling distribution under the null hypothesis, namely, that the conversion rate of the control group is equal to the conversion rate of the treatment group. Part j involves calculating what proportion of the conversion rate differences were greater than the actual observed difference, which was calculated from the conversion rate data. The special name given to the proportion of values in the null distribution that were greater than our observed difference is the "p-value."
A low p-value (specifically, less than our alpha of 0.05) indicates that the null hypothesis is not likely to be true. Since the p-value is very large at 90%, our statistic is quite consistent with the null, and therefore we fail to reject the null hypothesis. Ultimately, this indicates that it would be best for the company to keep the current page.
l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let n_old and n_new refer to the number of rows associated with the old page and new page, respectively.
In [31]:
import statsmodels.api as sm
convert_old = df2[df2['group'] == 'control'].converted.sum()
convert_new = df2[df2['group'] == 'treatment'].converted.sum()
n_old = df2[df2['group'] == 'control'].converted.size
n_new = df2[df2['group'] == 'treatment'].converted.size
/Users/ryanwingate/anaconda3/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
from pandas.core import datetools
m. Now use stats.proportions_ztest to compute your test statistic and p-value. Here is a helpful link on using the built in.
In [32]:
from scipy.stats import norm
z_score, p_value = \
sm.stats.proportions_ztest( [ convert_new, convert_old ], \
[ n_new, n_old ], \
alternative='larger' )
print('Z-score critical value (95% confidence) to \n' \
+ ' reject the null: ' \
+ str(norm.ppf(1-(0.05/2))))
print('z_score = ' + str(z_score))
print('p_value = ' + str(p_value))
Z-score critical value (95% confidence) to
reject the null: 1.95996398454
z_score = -1.31092419842
p_value = 0.905058312759
n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts j. and k.?
Since the z-score of -1.31 falls well short of the critical value (1.96 for a two-sided test, as printed above; 1.645 for this one-sided alternative), we fail to reject the null hypothesis. The null hypothesis here is that the conversion rate of the treatment page is no better than that of the control page.
Additionally, since the p_value of 0.90 (note, approximately the same value as was calculated manually) is larger than the alpha value of 0.05, we fail to reject the null hypothesis.
Thus, for both the foregoing reasons, the built-in method leads to the same conclusion as the manual method, the results of which are summarized in parts j and k, above.
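For reference, the statistic that `proportions_ztest` reports can be reproduced by hand with the pooled two-proportion formula. The conversion counts below are the integers implied by the rates and group sizes printed earlier (the notebook never prints `convert_old` and `convert_new` directly, so treat them as reconstructed):

```python
import math

# From-scratch pooled two-proportion z-test. proportions_ztest uses the
# same pooled formula, so z and the one-sided p-value match it.
convert_old_, n_old_ = 17489, 145274   # control: conversions, group size
convert_new_, n_new_ = 17264, 145310   # treatment: conversions, group size

p_old_ = convert_old_ / n_old_
p_new_ = convert_new_ / n_new_
p_pool = (convert_old_ + convert_new_) / (n_old_ + n_new_)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_old_ + 1 / n_new_))
z = (p_new_ - p_old_) / se

# One-sided p-value for H1: p_new > p_old (normal CDF via erf).
phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
p_value = 1 - phi(z)
print("z =", round(z, 3), " one-sided p =", round(p_value, 3))
```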
### Part III - A regression approach
1. In this final part, you will see that the result you achieved in the previous A/B test can also be achieved by performing regression.
a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?
Logistic regression
b. The goal is to use statsmodels to fit the regression model you specified in part a. to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create a column for the intercept, and create a dummy variable column for which page each user received. Add an intercept column, as well as an ab_page column, which is 1 when an individual receives the treatment and 0 if control.
In [33]:
df2.head()
Out[33]:
user_id timestamp group landing_page converted
0 851104 2017-01-21 22:11:48.556739 control old_page 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0
3 853541 2017-01-08 18:28:03.143765 treatment new_page 0
4 864975 2017-01-21 01:52:26.210827 control old_page 1
In [34]:
df2['intercept'] = 1
df2[['drop', 'ab_page']] = pd.get_dummies(df2['group'])
df2.drop(['drop'], axis=1, inplace=True)
df2.head()
Out[34]:
user_id timestamp group landing_page converted intercept ab_page
0 851104 2017-01-21 22:11:48.556739 control old_page 0 1 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0 1 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0 1 1
3 853541 2017-01-08 18:28:03.143765 treatment new_page 0 1 1
4 864975 2017-01-21 01:52:26.210827 control old_page 1 1 0
c. Use statsmodels to import your regression model. Instantiate the model, and fit the model using the two columns you created in part b. to predict whether or not an individual converts.
In [35]:
import statsmodels.api as sm
logit_mod = sm.Logit(df2['converted'], df2[['intercept', 'ab_page']])
d. Provide the summary of your model below, and use it as necessary to answer the following questions.
In [36]:
results = logit_mod.fit()
results.summary()
Optimization terminated successfully.
Current function value: 0.366118
Iterations 6
Out[36]:
Dep. Variable:    converted          No. Observations:   290584
Model:            Logit              Df Residuals:       290582
Method:           MLE                Df Model:           1
Date:             Mon, 12 Feb 2018   Pseudo R-squ.:      8.077e-06
Time:             13:18:43           Log-Likelihood:     -1.0639e+05
converged:        True               LL-Null:            -1.0639e+05
                                     LLR p-value:        0.1899

                coef   std err         z     P>|z|    [0.025    0.975]
intercept    -1.9888     0.008  -246.669     0.000    -2.005    -1.973
ab_page      -0.0150     0.011    -1.311     0.190    -0.037     0.007
e. What is the p-value associated with ab_page? Why does it differ from the value you found in Part II?
Hint: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in the Part II?
The p-value associated with ab_page in this regression model is 0.19. The p-value that was returned from the built-in ztest method was ~0.90. The p-value that I calculated manually was also ~0.90.
The null hypothesis associated with a logistic regression is that there is no relationship between the dependent and independent variables. In this case, this means there is no relationship between which page a user is shown and the conversion rate. The alternative hypothesis would therefore be that there is a relationship of some sort.
The null hypothesis from part 2 is that the likelihood of conversion for a user receiving the new page is less than or equal to the likelihood of conversion for a user receiving the old page. The alternative hypothesis from part 2 is that the likelihood of conversion for a user receiving the new page is greater than the likelihood of conversion for a user receiving the old page.
The factor that accounts for the large difference in the p-values is that Part II used a one-sided test: it hypothesized that one of the pages (specifically, the new_page the treatment group received) would lead to more conversions than the other. The regression's test in Part III is two-sided, merely testing for a difference of some sort, so its p-value answers a different question.
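That relationship can be made explicit (an aside): for the same z statistic, the two-sided p-value is twice the smaller one-sided tail, so the one-sided 0.905 and the two-sided 0.19 describe the same evidence.

```python
import math

# The ~0.90 one-sided p-value (Part II) and the 0.19 regression p-value
# describe the same z statistic tested against different alternatives.
z = -1.311  # z-score reported by proportions_ztest above
phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))

p_one_sided = 1 - phi(z)             # H1: p_new > p_old
p_two_sided = 2 * (1 - phi(abs(z)))  # H1: p_new != p_old
print("one-sided:", round(p_one_sided, 3), " two-sided:", round(p_two_sided, 3))
```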
f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?
Additional factors may make the model more predictive, yielding greater understanding. It may also result in business insights that would not have been evident in this simpler analysis. For example, it would be possible to have different versions of the website for different locations. It is likely that people from different countries might have different tastes in website layout.
Possible disadvantages include increased risk of human error, especially misinterpretation, as well as possibly obscuring the message the data is really trying to tell (decreasing the so-called signal-to-noise ratio).
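One standard diagnostic for that downside (an aside, not required by the rubric) is the variance inflation factor, which flags predictors that are nearly linear combinations of the others. A from-scratch sketch on synthetic data:

```python
import numpy as np

# Variance inflation factor (VIF) from scratch: VIF_j = 1 / (1 - R^2_j),
# where R^2_j comes from regressing column j on the remaining columns.
def vif(X: np.ndarray) -> np.ndarray:
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1 / (1 - r2))
    return np.array(out)

rng = np.random.RandomState(0)
x1 = rng.normal(size=1000)
x2 = rng.normal(size=1000)              # independent of x1 -> VIF near 1
x3 = x1 + 0.1 * rng.normal(size=1000)   # nearly collinear -> large VIF
vifs = vif(np.column_stack([x1, x2, x3]))
print(vifs)
```

Large VIFs (often read as > 5 or > 10) signal that a coefficient's standard error is inflated by redundancy among the predictors.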
g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives. You will need to read in the countries.csv dataset and merge together your datasets on the appropriate rows. Here are the docs for joining tables.
Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - Hint: You will need two columns for the three dummy variables. Provide the statistical output as well as a written response to answer this question.
In [37]:
countries_df = pd.read_csv('./countries.csv')
df_new = countries_df.set_index('user_id')\
.join(df2.set_index('user_id'), how='inner')
In [38]:
df_new.head()
Out[38]:
country timestamp group landing_page converted intercept ab_page
user_id
834778 UK 2017-01-14 23:08:43.304998 control old_page 0 1 0
928468 US 2017-01-23 14:44:16.387854 treatment new_page 0 1 1
822059 UK 2017-01-16 14:04:14.719771 treatment new_page 1 1 1
711597 UK 2017-01-22 03:14:24.763511 control old_page 0 1 0
710616 UK 2017-01-16 13:14:44.000513 treatment new_page 0 1 1
In [39]:
### Create the necessary dummy variables
df_new[['CA', 'UK', 'US']] = pd.get_dummies(df_new['country'])
In [40]:
logit_mod_new = sm.Logit(df_new['converted'],\
df_new[['intercept', 'ab_page', 'US', 'UK']])
results_new = logit_mod_new.fit()
results_new.summary()
Optimization terminated successfully.
Current function value: 0.366113
Iterations 6
Out[40]:
Dep. Variable:    converted          No. Observations:   290584
Model:            Logit              Df Residuals:       290580
Method:           MLE                Df Model:           3
Date:             Mon, 12 Feb 2018   Pseudo R-squ.:      2.323e-05
Time:             13:18:44           Log-Likelihood:     -1.0639e+05
converged:        True               LL-Null:            -1.0639e+05
                                     LLR p-value:        0.176

                coef   std err         z     P>|z|    [0.025    0.975]
intercept    -2.0300     0.027   -76.249     0.000    -2.082    -1.978
ab_page      -0.0149     0.011    -1.307     0.191    -0.037     0.007
US            0.0408     0.027     1.516     0.130    -0.012     0.093
UK            0.0506     0.028     1.784     0.074    -0.005     0.106
In [41]:
np.exp(0.0408), np.exp(0.0506)
Out[41]:
(1.0416437559600236, 1.0519020483004984)
The interpretation of the foregoing variables is counterintuitive. In this case, Canada is the baseline, since it is the one of the three country dummies not included in the regression. We would say that US users are 1.04 times as likely (or 4% more likely) to convert as Canadian users. Similarly, we would say that UK users are 1.05 times as likely (or 5% more likely) to convert as Canadian users.
The effect is not statistically significant, given the fairly large P-values. Even if it were, it is not clear that such a small difference between the different countries would be practically significant.
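For reference, the odds-ratio interpretation used above can be reproduced directly from the fitted log-odds coefficients. This is a minimal sketch, not part of the original notebook; the coefficient values are copied from the summary, with CA as the omitted baseline:

```python
import math

# Log-odds coefficients copied from the logistic-regression summary
# above (baseline country: CA, the dummy omitted from the model).
coefs = {"ab_page": -0.0149, "US": 0.0408, "UK": 0.0506}

# exp(coef) is the odds ratio: the multiplicative change in the odds
# of converting relative to the baseline, holding other inputs fixed.
odds_ratios = {name: round(math.exp(b), 4) for name, b in coefs.items()}
print(odds_ratios)  # US ≈ 1.0416, UK ≈ 1.0519
```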
h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model.
Provide the summary results, and your conclusions based on the results.
In [42]:
# These columns indicate that a given user received both the new page
# and lived in the country shown.
df_new['new_CA'] = df_new['ab_page']*df_new['CA']
df_new['new_UK'] = df_new['ab_page']*df_new['UK']
df_new['new_US'] = df_new['ab_page']*df_new['US']
In [43]:
df_new.head()
Out[43]:
country timestamp group landing_page converted intercept ab_page CA UK US new_CA new_UK new_US
user_id
834778 UK 2017-01-14 23:08:43.304998 control old_page 0 1 0 0 1 0 0 0 0
928468 US 2017-01-23 14:44:16.387854 treatment new_page 0 1 1 0 0 1 0 0 1
822059 UK 2017-01-16 14:04:14.719771 treatment new_page 1 1 1 0 1 0 0 1 0
711597 UK 2017-01-22 03:14:24.763511 control old_page 0 1 0 0 1 0 0 0 0
710616 UK 2017-01-16 13:14:44.000513 treatment new_page 0 1 1 0 1 0 0 1 0
In [44]:
### Fit Your Linear Model And Obtain the Results
lin_mod = sm.OLS(df_new['converted'], \
df_new[['intercept', 'ab_page', 'US', 'new_US', 'UK', 'new_UK']])
results = lin_mod.fit()
results.summary()
Out[44]:
Dep. Variable:    converted          R-squared:           0.000
Model:            OLS                Adj. R-squared:      0.000
Method:           Least Squares      F-statistic:         1.466
Date:             Mon, 12 Feb 2018   Prob (F-statistic):  0.197
Time:             13:18:44           Log-Likelihood:      -85265.
No. Observations: 290584             AIC:                 1.705e+05
Df Residuals:     290578             BIC:                 1.706e+05
Df Model:         5
Covariance Type:  nonrobust

             coef   std err        t   P>|t|   [0.025   0.975]
intercept   0.1188    0.004   31.057   0.000    0.111    0.126
ab_page    -0.0069    0.005   -1.277   0.202   -0.017    0.004
US          0.0018    0.004    0.467   0.641   -0.006    0.010
new_US      0.0047    0.006    0.845   0.398   -0.006    0.016
UK          0.0012    0.004    0.296   0.767   -0.007    0.009
new_UK      0.0080    0.006    1.360   0.174   -0.004    0.020

Omnibus:         125549   Durbin-Watson:     1.996
Prob(Omnibus):   0.000    Jarque-Bera (JB):  414286
Skew:            2.345    Prob(JB):          0.00
Kurtosis:        6.497    Cond. No.          26.1
In [45]:
log_mod2 = sm.Logit(df_new['converted'], \
df_new[['intercept', 'ab_page', 'US', 'new_US', 'UK', 'new_UK']])
results_log2 = log_mod2.fit()
results_log2.summary()
Optimization terminated successfully.
Current function value: 0.366109
Iterations 6
Out[45]:
Dep. Variable:    converted          No. Observations:   290584
Model:            Logit              Df Residuals:       290578
Method:           MLE                Df Model:           5
Date:             Mon, 12 Feb 2018   Pseudo R-squ.:      3.482e-05
Time:             13:18:45           Log-Likelihood:     -106390
converged:        True               LL-Null:            -106390
                                     LLR p-value:        0.192

             coef   std err        z   P>|z|   [0.025   0.975]
intercept  -2.0040    0.036  -55.008   0.000   -2.075   -1.933
ab_page    -0.0674    0.052   -1.297   0.195   -0.169    0.034
US          0.0175    0.038    0.465   0.642   -0.056    0.091
new_US      0.0469    0.054    0.872   0.383   -0.059    0.152
UK          0.0118    0.040    0.296   0.767   -0.066    0.090
new_UK      0.0783    0.057    1.378   0.168   -0.033    0.190
The foregoing presents both a linear regression (included to comply with the comment "# Fit Your Linear Model") and a logistic regression for the case with the interaction terms. Neither effect is statistically significant, given the high P-values shown in the results. Additionally, in the case of the linear model, the R-squared value is zero, implying a terrible fit.
## Conclusions¶
Congratulations on completing the project!
### Gather Submission Materials¶
Once you are satisfied with the status of your Notebook, you should save it in a format that will make it easy for others to read. You can use the File -> Download as -> HTML (.html) menu to save your notebook as an .html file. If you are working locally and get an error about "No module name", then open a terminal and try installing the missing module using pip install <module_name> (don't include the "<" or ">" or any words following a period in the module name).
You will submit both your original Notebook and an HTML or PDF copy of the Notebook for review. There is no need for you to include any data files with your submission. If you made reference to other websites, books, and other resources to help you in solving tasks in the project, make sure that you document them. It is recommended that you either add a "Resources" section in a Markdown cell at the end of the Notebook report, or you can include a readme.txt file documenting your sources.
### Submit the Project¶
When you're ready, click on the "Submit Project" button to go to the project submission page. You can submit your files as a .zip archive or you can link to a GitHub repository containing your project files. If you go with GitHub, note that your submission will be a snapshot of the linked repository at time of submission. It is recommended that you keep each project in a separate repository to avoid any potential confusion: if a reviewer gets multiple folders representing multiple projects, there might be confusion regarding what project is to be evaluated.
It can take us up to a week to grade the project, but in most cases it is much faster. You will get an email once your submission has been reviewed. If you are having any problems submitting your project or wish to check on the status of your submission, please email us at [email protected]. In the meantime, you should feel free to continue on with your learning journey by beginning the next module in the program.
https://phys.libretexts.org/Bookshelves/University_Physics/Exercises_(University_Physics)/Exercises%3A_College_Physics_(OpenStax)/34%3A_Frontiers_of_Physics_(Exercises)
# 34: Frontiers of Physics (Exercises)
## Conceptual Questions
#### 34.1: Cosmology and Particle Physics
1. Explain why it only appears that we are at the center of expansion of the universe and why an observer in another galaxy would see the same relative motion of all but the closest galaxies away from her.
2. If there is no observable edge to the universe, can we determine where its center of expansion is? Explain.
3. If the universe is infinite, does it have a center? Discuss.
4. Another known cause of red shift in light is the source being in a high gravitational field. Discuss how this can be eliminated as the source of galactic red shifts, given that the shifts are proportional to distance and not to the size of the galaxy.
5. If some unknown cause of red shift—such as light becoming “tired” from traveling long distances through empty space—is discovered, what effect would there be on cosmology?
6. Olbers’s paradox poses an interesting question: If the universe is infinite, then any line of sight should eventually fall on a star’s surface. Why then is the sky dark at night? Discuss the commonly accepted evolution of the universe as a solution to this paradox.
7. If the cosmic microwave background radiation (CMBR) is the remnant of the Big Bang’s fireball, we expect to see hot and cold regions in it. What are two causes of these wrinkles in the CMBR? Are the observed temperature variations greater or less than originally expected?
8. The decay of one type of $$\displaystyle K$$-meson is cited as evidence that nature favors matter over antimatter. Since mesons are composed of a quark and an antiquark, is it surprising that they would preferentially decay to one type over another? Is this an asymmetry in nature? Is the predominance of matter over antimatter an asymmetry?
9. Distances to local galaxies are determined by measuring the brightness of stars, called Cepheid variables, that can be observed individually and that have absolute brightnesses at a standard distance that are well known. Explain how the measured brightness would vary with distance as compared with the absolute brightness.
10. Distances to very remote galaxies are estimated based on their apparent type, which indicate the number of stars in the galaxy, and their measured brightness. Explain how the measured brightness would vary with distance. Would there be any correction necessary to compensate for the red shift of the galaxy (all distant galaxies have significant red shifts)? Discuss possible causes of uncertainties in these measurements.
11. If the smallest meaningful time interval is greater than zero, will the lines in Figure ever meet?
#### 34.2: General Relativity and Quantum Gravity
12. Quantum gravity, if developed, would be an improvement on both general relativity and quantum mechanics, but more mathematically difficult. Under what circumstances would it be necessary to use quantum gravity? Similarly, under what circumstances could general relativity be used? When could special relativity, quantum mechanics, or classical physics be used?
13. Does observed gravitational lensing correspond to a converging or diverging lens? Explain briefly.
14. Suppose you measure the red shifts of all the images produced by gravitational lensing, such as in Figure. You find that the central image has a red shift less than the outer images, and those all have the same red shift. Discuss how this not only shows that the images are of the same object, but also implies that the red shift is not affected by taking different paths through space. Does it imply that cosmological red shifts are not caused by traveling through space (light getting tired, perhaps)?
15. What are gravitational waves, and have they yet been observed either directly or indirectly?
16. Is the event horizon of a black hole the actual physical surface of the object?
17. Suppose black holes radiate their mass away and the lifetime of a black hole created by a supernova is about $$\displaystyle 10^{67}$$ years. How does this lifetime compare with the accepted age of the universe? Is it surprising that we do not observe the predicted characteristic radiation?
#### 34.4: Dark Matter and Closure
18. Discuss the possibility that star velocities at the edges of galaxies being greater than expected is due to unknown properties of gravity rather than to the existence of dark matter. Would this mean, for example, that gravity is greater or smaller than expected at large distances? Are there other tests that could be made of gravity at large distances, such as observing the motions of neighboring galaxies?
19. How does relativistic time dilation prohibit neutrino oscillations if they are massless?
20. If neutrino oscillations do occur, will they violate conservation of the various lepton family numbers $$\displaystyle (L_e, L_μ,$$ and $$\displaystyle L_τ)$$? Will neutrino oscillations violate conservation of the total number of leptons?
21. Lacking direct evidence of WIMPs as dark matter, why must we eliminate all other possible explanations based on the known forms of matter before we invoke their existence?
#### 34.5: Complexity and Chaos
22. Must a complex system be adaptive to be of interest in the field of complexity? Give an example to support your answer.
23. State a necessary condition for a system to be chaotic.
#### 34.6: High-temperature Superconductors
24. What is critical temperature $$\displaystyle T_c$$? Do all materials have a critical temperature? Explain why or why not.
25. Explain how good thermal contact with liquid nitrogen can keep objects at a temperature of 77 K (liquid nitrogen’s boiling point at atmospheric pressure).
26. Not only is liquid nitrogen a cheaper coolant than liquid helium, its boiling point is higher (77 K vs. 4.2 K). How does higher temperature help lower the cost of cooling a material? Explain in terms of the rate of heat transfer being related to the temperature difference between the sample and its surroundings.
#### 34.7: Some Questions We Know to Ask
27. For experimental evidence, particularly of previously unobserved phenomena, to be taken seriously it must be reproducible or of sufficiently high quality that a single observation is meaningful. Supernova 1987A is not reproducible. How do we know observations of it were valid? The fifth force is not broadly accepted. Is this due to lack of reproducibility or poor-quality experiments (or both)? Discuss why forefront experiments are more subject to observational problems than those involving established phenomena.
28. Discuss whether you think there are limits to what humans can understand about the laws of physics. Support your arguments.
## Problems & Exercises
#### 34.1: Cosmology and Particle Physics
29. Find the approximate mass of the luminous matter in the Milky Way galaxy, given it has approximately $$\displaystyle 10^{11}$$ stars of average mass 1.5 times that of our Sun.
Solution
$$\displaystyle 3×10^{41}kg$$
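A quick numerical check of this estimate (a sketch, not part of the original text, assuming a solar mass of about $$\displaystyle 1.989×10^{30}kg$$):

```python
# Problem 29 check: ~1e11 stars of average mass 1.5 solar masses.
M_sun = 1.989e30                 # solar mass in kg (assumed value)
M_luminous = 1e11 * 1.5 * M_sun
print(f"{M_luminous:.0e} kg")    # ≈ 3e+41 kg, matching the solution
```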
30. Find the approximate mass of the dark and luminous matter in the Milky Way galaxy. Assume the luminous matter is due to approximately $$\displaystyle 10^{11}$$ stars of average mass 1.5 times that of our Sun, and take the dark matter to be 10 times as massive as the luminous matter.
31. (a) Estimate the mass of the luminous matter in the known universe, given there are $$\displaystyle 10^{11}$$ galaxies, each containing $$\displaystyle 10^{11}$$ stars of average mass 1.5 times that of our Sun.
(b) How many protons (the most abundant nuclide) are there in this mass?
(c) Estimate the total number of particles in the observable universe by multiplying the answer to (b) by two, since there is an electron for each proton, and then by $$\displaystyle 10^9$$, since there are far more particles (such as photons and neutrinos) in space than in luminous matter.
Solution
(a) $$\displaystyle 3×10^{52}kg$$
(b) $$\displaystyle 2×10^{79}$$
(c) $$\displaystyle 4×10^{88}$$
32. If a galaxy is 500 Mly away from us, how fast do we expect it to be moving and in what direction?
33. On average, how far away are galaxies that are moving away from us at 2.0% of the speed of light?
Solution
0.30 Gly
34. Our solar system orbits the center of the Milky Way galaxy. Assuming a circular orbit 30,000 ly in radius and an orbital speed of 250 km/s, how many years does it take for one revolution? Note that this is approximate, assuming constant speed and circular orbit, but it is representative of the time for our system and local stars to make one revolution around the galaxy.
35. (a) What is the approximate speed relative to us of a galaxy near the edge of the known universe, some 10 Gly away?
(b) What fraction of the speed of light is this? Note that we have observed galaxies moving away from us at greater than $$\displaystyle 0.9c$$.
Solution
(a) $$\displaystyle 2.0×10^5km/s$$
(b) 0.67c
36. (a) Calculate the approximate age of the universe from the average value of the Hubble constant, $$\displaystyle H_0=20km/s⋅Mly$$. To do this, calculate the time it would take to travel 1 Mly at a constant expansion rate of 20 km/s.
(b) If deceleration is taken into account, would the actual age of the universe be greater or less than that found here? Explain.
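Part (a) can be sketched numerically (not part of the original text; unit conversions assumed: 1 ly ≈ 9.461×10¹² km, 1 y ≈ 3.156×10⁷ s):

```python
# Problem 36(a): time to travel 1 Mly at a constant 20 km/s,
# t = d / v, which approximates the age of the universe for
# H0 = 20 km/s per Mly.
KM_PER_LY = 9.461e12
SECONDS_PER_YEAR = 3.156e7

d_km = 1e6 * KM_PER_LY           # 1 Mly in kilometres
v_km_per_s = 20.0
t_years = d_km / v_km_per_s / SECONDS_PER_YEAR
print(f"{t_years:.1e} y")        # ≈ 1.5e+10 y, about 15 billion years
```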
37. Assuming a circular orbit for the Sun about the center of the Milky Way galaxy, calculate its orbital speed using the following information: The mass of the galaxy is equivalent to a single mass $$\displaystyle 1.5×10^{11}$$ times that of the Sun (or $$\displaystyle 3×10^{41}kg$$), located 30,000 ly away.
Solution
$$\displaystyle 2.7×10^5m/s$$
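The printed solution follows from the circular-orbit relation $$\displaystyle v=\sqrt{GM/r}$$; a sketch (standard constants assumed, not part of the original text):

```python
import math

# Problem 37: circular-orbit speed v = sqrt(G M / r) with
# M = 3e41 kg and r = 30,000 ly.
G = 6.674e-11                    # m^3 kg^-1 s^-2
M = 3e41                         # kg
r = 30000 * 9.461e15             # 30,000 ly in metres
v = math.sqrt(G * M / r)
print(f"{v:.1e} m/s")            # ≈ 2.7e+05 m/s
```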
38. (a) What is the approximate force of gravity on a 70-kg person due to the Andromeda galaxy, assuming its total mass is $$\displaystyle 10^{13}$$ that of our Sun and acts like a single mass 2 Mly away?
(b) What is the ratio of this force to the person’s weight? Note that Andromeda is the closest large galaxy.
39. Andromeda galaxy is the closest large galaxy and is visible to the naked eye. Estimate its brightness relative to the Sun, assuming it has luminosity $$\displaystyle 10^{12}$$ times that of the Sun and lies 2 Mly away.
Solution
$$\displaystyle 6×10^{−11}$$ (an overestimate, since some of the light from Andromeda is blocked by gas and dust within that galaxy)
40. (a) A particle and its antiparticle are at rest relative to an observer and annihilate (completely destroying both masses), creating two γ rays of equal energy. What is the characteristic γ-ray energy you would look for if searching for evidence of proton-antiproton annihilation? (The fact that such radiation is rarely observed is evidence that there is very little antimatter in the universe.)
(b) How does this compare with the 0.511-MeV energy associated with electron-positron annihilation?
41. The average particle energy needed to observe unification of forces is estimated to be $$\displaystyle 10^{19}GeV.$$
(a) What is the rest mass in kilograms of a particle that has a rest mass of $$\displaystyle 10^{19}GeV/c^2$$?
(b) How many times the mass of a hydrogen atom is this?
Solution
(a) $$\displaystyle 2×10^{−8}kg$$
(b) $$\displaystyle 1×10^{19}$$
42. The peak intensity of the CMBR occurs at a wavelength of 1.1 mm.
(a) What is the energy in eV of a 1.1-mm photon?
(b) There are approximately $$\displaystyle 10^9$$ photons for each massive particle in deep space. Calculate the energy of $$\displaystyle 10^9$$ such photons.
(c) If the average massive particle in space has a mass half that of a proton, what energy would be created by converting its mass to energy?
(d) Does this imply that space is “matter dominated”? Explain briefly.
43. (a) What Hubble constant corresponds to an approximate age of the universe of $$\displaystyle 10^{10}y$$? To get an approximate value, assume the expansion rate is constant and calculate the speed at which two galaxies must move apart to be separated by 1 Mly (present average galactic separation) in a time of $$\displaystyle 10^{10}y$$.
(b) Similarly, what Hubble constant corresponds to a universe approximately $$\displaystyle 2×10^{10}-y$$ old?
Solution
(a) 30km/s⋅Mly
(b) 15km/s⋅Mly
44. Show that the velocity of a star orbiting its galaxy in a circular orbit is inversely proportional to the square root of its orbital radius, assuming the mass of the stars inside its orbit acts like a single mass at the center of the galaxy. You may use an equation from a previous chapter to support your conclusion, but you must justify its use and define all terms used.
45. The core of a star collapses during a supernova, forming a neutron star. Angular momentum of the core is conserved, and so the neutron star spins rapidly. If the initial core radius is $$\displaystyle 5.0×10^5km$$ and it collapses to 10.0 km, find the neutron star’s angular velocity in revolutions per second, given the core’s angular velocity was originally 1 revolution per 30.0 days.
Solution
960 rev/s
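The solution follows from conservation of angular momentum, $$\displaystyle I_1ω_1=I_2ω_2$$ with $$\displaystyle I∝r^2$$; a sketch (not part of the original text):

```python
# Problem 45: neutron-star spin-up, omega2 = omega1 * (r1/r2)^2.
r1 = 5.0e5                       # initial core radius, km
r2 = 10.0                        # final radius, km
omega1 = 1.0 / (30.0 * 86400)    # 1 revolution per 30 days, in rev/s
omega2 = omega1 * (r1 / r2) ** 2
print(f"{omega2:.0f} rev/s")     # ≈ 965 rev/s, i.e. about 960 rev/s
```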
46. Using data from the previous problem, find the increase in rotational kinetic energy, given the core’s mass is 1.3 times that of our Sun. Where does this increase in kinetic energy come from?
47. Distances to the nearest stars (up to 500 ly away) can be measured by a technique called parallax, as shown in Figure. What are the angles $$\displaystyle θ_1$$ and $$\displaystyle θ_2$$ relative to the plane of the Earth’s orbit for a star 4.0 ly directly above the Sun?
Solution
$$\displaystyle 89.999773º$$ (many digits are used to show the difference between $$\displaystyle 90º$$)
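The parallax angle can be checked numerically (a sketch, not part of the original text; an Earth-orbit radius of 1 AU ≈ 1.496×10¹¹ m is assumed):

```python
import math

# Problem 47: angle to a star 4.0 ly directly above the Sun, seen
# from a baseline of 1 AU; theta = 90 deg - atan(AU / d).
AU = 1.496e11                    # m
LY = 9.461e15                    # m
d = 4.0 * LY
theta = 90.0 - math.degrees(math.atan(AU / d))
print(f"{theta:.6f} deg")        # close to the printed 89.999773 deg
```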
48. (a) Use the Heisenberg uncertainty principle to calculate the uncertainty in energy for a corresponding time interval of $$\displaystyle 10^{−43}s$$.
(b) Compare this energy with the $$\displaystyle 10^{19}GeV$$ unification-of-forces energy and discuss why they are similar.
49. Consider a star moving in a circular orbit at the edge of a galaxy. Construct a problem in which you calculate the mass of that galaxy in kg and in multiples of the solar mass based on the velocity of the star and its distance from the center of the galaxy.
Distances to nearby stars are measured using triangulation, also called the parallax method. The angle of line of sight to the star is measured at intervals six months apart, and the distance is calculated by using the known diameter of the Earth’s orbit. This can be done for stars up to about 500 ly away.
#### 34.2: General Relativity and Quantum Gravity
50. What is the Schwarzschild radius of a black hole that has a mass eight times that of our Sun? Note that stars must be more massive than the Sun to form black holes as a result of a supernova.
Solution
23.6 km
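The answer comes from the Schwarzschild radius formula $$\displaystyle R_s = 2GM/c^2$$; a quick sketch with standard constants (not part of the original text):

```python
# Problem 50: Schwarzschild radius of an 8-solar-mass black hole.
G = 6.674e-11                    # m^3 kg^-1 s^-2
c = 2.998e8                      # m/s
M_sun = 1.989e30                 # kg
R_s = 2 * G * (8 * M_sun) / c ** 2
print(f"{R_s / 1000:.1f} km")    # ≈ 23.6 km
```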
51. Black holes with masses smaller than those formed in supernovas may have been created in the Big Bang. Calculate the radius of one that has a mass equal to the Earth’s.
52. Supermassive black holes are thought to exist at the center of many galaxies.
(a) What is the radius of such an object if it has a mass of $$\displaystyle 10^9$$ Suns?
(b) What is this radius in light years?
Solution
(a) $$\displaystyle 2.95×10^{12}m$$
(b) $$\displaystyle 3.12×10^{−4}ly$$
53. Consider a supermassive black hole near the center of a galaxy. Calculate the radius of such an object based on its mass. You must consider how much mass is reasonable for these large objects, and which is now nearly directly observed. (Information on black holes posted on the Web by NASA and other agencies is reliable, for example.)
#### 34.3: Superstrings
54. The characteristic length of entities in Superstring theory is approximately $$\displaystyle 10^{−35}m$$.
(a) Find the energy in GeV of a photon of this wavelength.
(b) Compare this with the average particle energy of $$\displaystyle 10^{19}GeV$$ needed for unification of forces.
Solution
(a) $$\displaystyle 1×10^{20}GeV$$
(b) 10 times greater
#### 34.4: Dark Matter and Closure
55. If the dark matter in the Milky Way were composed entirely of MACHOs (evidence shows it is not), approximately how many would there have to be? Assume the average mass of a MACHO is 1/1000 that of the Sun, and that dark matter has a mass 10 times that of the luminous Milky Way galaxy with its $$\displaystyle 10^{11}$$ stars of average mass 1.5 times the Sun’s mass.
Solution
$$\displaystyle 1.5×10^{15}$$
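The count follows directly from the stated mass ratios (a sketch, not part of the original text; everything is kept in solar masses, so the solar mass itself cancels):

```python
# Problem 55: number of MACHOs needed if each has mass M_sun/1000
# and the dark matter is 10x the luminous mass of 1e11 stars of
# average mass 1.5 M_sun.  Work in units of the solar mass.
luminous_mass = 1e11 * 1.5       # solar masses
dark_mass = 10 * luminous_mass
macho_mass = 1.0 / 1000          # solar masses
n_machos = dark_mass / macho_mass
print(f"{n_machos:.1e}")         # 1.5e+15
```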
56. The critical mass density needed to just halt the expansion of the universe is approximately $$\displaystyle 10^{−26}kg/m^3$$.
(a) Convert this to $$\displaystyle eV/c^2⋅m^3$$.
(b) Find the number of neutrinos per cubic meter needed to close the universe if their average mass is $$\displaystyle 7eV/c^2$$ and they have negligible kinetic energies.
57. Assume the average density of the universe is 0.1 of the critical density needed for closure. What is the average number of protons per cubic meter, assuming the universe is composed mostly of hydrogen?
Solution
$$\displaystyle 0.6m^{−3}$$
68. To get an idea of how empty deep space is on the average, perform the following calculations:
(a) Find the volume our Sun would occupy if it had an average density equal to the critical density of $$\displaystyle 10^{−26}kg/m^3$$ thought necessary to halt the expansion of the universe.
(b) Find the radius of a sphere of this volume in light years.
(c) What would this radius be if the density were that of luminous matter, which is approximately 5% that of the critical density?
(d) Compare the radius found in part (c) with the 4-ly average separation of stars in the arms of the Milky Way.
#### 34.6: High-temperature Superconductors
69. A section of superconducting wire carries a current of 100 A and requires 1.00 L of liquid nitrogen per hour to keep it below its critical temperature. For it to be economically advantageous to use a superconducting wire, the cost of cooling the wire must be less than the cost of energy lost to heat in the wire. Assume that the cost of liquid nitrogen is $0.30 per liter, and that electric energy costs $0.10 per kW·h. What is the resistance of a normal wire that costs as much in wasted electric energy as the cost of liquid nitrogen for the superconductor?
Solution
0.30 Ω
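The solution can be reproduced by finding the break-even power first (a sketch, not part of the original text):

```python
# Problem 69: resistance at which resistive losses cost as much as
# the liquid nitrogen.  $0.30/h of nitrogen buys 3 kW·h of
# electricity per hour at $0.10 per kW·h, so the break-even power
# is 3 kW; then P = I^2 R gives R.
current = 100.0                  # A
nitrogen_cost_per_hour = 0.30    # dollars
energy_cost_per_kwh = 0.10       # dollars
power_kw = nitrogen_cost_per_hour / energy_cost_per_kwh   # 3 kW
R = power_kw * 1000 / current ** 2
print(f"{R:.2f} ohm")            # 0.30 ohm
```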
### Contributors
• Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
http://experiment-ufa.ru/Prime-factorization-of-420

# Prime factorization of 420
If it's not what you are looking for, type your own integer in the field below and you will get the solution.
Prime factorization of 420:
By prime factorization of 420 we follow 5 simple steps:
1. We write number 420 above a 2-column table
2. We divide 420 by the smallest possible prime factor
3. We write down the prime factor on the left side of the table and the next number to factorize on the right side
4. We continue to factor in this fashion (we deal with odd numbers by trying small prime factors)
5. We continue until we reach 1 on the right side of the table
420
prime factors | number to factorize
2             | 210
2             | 105
3             | 35
5             | 7
7             | 1
Prime factorization of 420 = 2×2×3×5×7 = $2^2 × 3 × 5 × 7$
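The table method above is just trial division; a minimal sketch:

```python
def prime_factors(n):
    """Trial division, mirroring the 2-column table above:
    repeatedly divide out the smallest prime factor until 1 remains."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:            # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(420))  # [2, 2, 3, 5, 7]
```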
https://mathoverflow.net/questions/242324/set-of-w-continuous-operators-closed-for-the-weak-topology-or-not

# Set of w*-continuous operators closed for the weak* topology or not?
Let $X$ be a dual Banach space, i.e. $X=(X_*)^*$ for some Banach space $X_*$. Consider the weak* topology of $B(X)$, i.e. the topology of pointwise convergence on $X$ endowed with the $\sigma(X,X_*)$-topology.
Consider the set $B_{w^*}(X)$ of $w^*$-continuous bounded operators on $X$. Is it a closed subset of $B(X)$ for the weak* topology of $B(X)$ ?
• Note that $B_{w^*}(X) = \{ T^* \colon T \in B(X_*)\}$ which makes me suspect the answer to your question is no – Yemon Choi Jun 16 '16 at 2:04
• Also, I don't think your description of the weak-star topology on B(X) is quite right, unless X is Hilbert space. The predual of B(X) is $X\hat{\otimes} X_*$; what you describe seems to be more like the weak-star version of WOT – Yemon Choi Jun 16 '16 at 2:06
• Interestingly enough for the more general case of $B(X^*,Y^*)$ (i.e. for two different dual Banach spaces) the answer to this question in general is no, cf. this counterexample by Jochen Glück. Of course this does not settle the special case $X=Y$ OP asked about---but it might give some further intuition for this problem. – Frederik vom Ende Jan 5 '20 at 17:06
The answer is no. I know that for some people here, saying "It's false for $$X = \ell^1$$" would be a good enough hint, but I also know that this question originated on Math StackExchange, so I've included many of the details, making a rather long answer.
The spaces I use for a counterexample are $$\newcommand{\R}{\mathbb{R}}X = \ell^1$$, and $$X_{*} = c_0$$, using the usual pairing $$\langle\mbox{-},\mbox{-}\rangle_1 : \ell^1 \times c_0 \rightarrow \R$$ defined by: $$\langle \phi, a \rangle_1 = \sum_{n=0}^\infty \phi(n)\cdot a(n).$$ I prefer a different notation from that used in the question, which I'll explain here. I'll use $$\ell^1$$ for that space considered as a Banach space, and $$\ell^1_\sigma$$ when equipped with the weak-* topology $$\sigma(\ell^1,c_0)$$. Then $$L(\ell^1) = L(\ell^1,\ell^1)$$ is the space of bounded/continuous linear maps from $$\ell^1$$ to itself in the norm topology, and $$L(\ell^1_\sigma)$$ the space of continuous linear maps $$\ell^1_\sigma \rightarrow \ell^1_\sigma$$.
The reason the answer is no is that $$L(\ell^1_\sigma)$$ is dense in $$L(\ell^1)$$, both in the $$\sigma(L(\ell^1),\ell^1 \hat{\otimes} c_0)$$-topology and the weak-* operator topology described in the question, but $$L(\ell^1_\sigma)$$ is a proper subset of $$L(\ell^1)$$. So $$L(\ell^1_\sigma)$$ is not closed in either of those topologies.
We'll begin with some definitions. For each $$\newcommand{\N}{\mathbb{N}}n \in \N$$ let's define $$\delta_n : \N \rightarrow \R$$ such that $$\delta_n(m)$$ is 1 if $$n=m$$, and $$0$$ otherwise. These functions belong to both $$\ell^1$$ and $$c_0$$ and are the usual Schauder bases of these spaces. Let's also define $$e_n : \ell^1 \rightarrow \R$$ by $$e_n(\phi) = \phi(n)$$. These functionals are weak-* continuous (i.e. $$\sigma(\ell^1,c_0)$$-continuous), essentially by definition.
We also define $$s : \ell^1 \rightarrow \R$$ by $$s(\phi) = \sum_{i=0}^\infty \phi(i)$$. This is norm continuous on $$\ell^1$$ but not weak-* continuous because $$\delta_n \to 0$$ in the weak-* topology, but $$s(\delta_n) = 1 \not\to 0$$. Therefore the rank 1 operator $$T(\phi) = s(\phi) \cdot \delta_0$$ is in $$L(\ell^1)$$ but not $$L(\ell^1_\sigma)$$. So all that remains is to show that $$L(\ell^1_\sigma)$$ is dense in $$L(\ell^1)$$. I'll give two proofs. I came up with the second one first, and it is simpler but requires more background knowledge. The first proof is more "bare hands" and is likely more what the OP was looking for.
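To see the failure of weak-* continuity numerically, here is a minimal Python sketch. Everything is an assumption of the sketch, not of the argument: sequences are truncated to finitely many terms, and the sample $$c_0$$ element is an arbitrary choice. The point is that $$\langle \delta_n, a\rangle_1 = a(n) \to 0$$ for every $$a \in c_0$$, while $$s(\delta_n) = 1$$ for every $$n$$.

```python
# Finite-truncation sketch of l^1 / c_0 (the cutoff `terms` is an artifact
# of the sketch, not of the mathematics).

def delta(n):
    """The basis sequence delta_n."""
    return lambda m: 1.0 if m == n else 0.0

def pair(phi, a, terms=2000):
    """Truncated pairing <phi, a>_1 = sum_i phi(i) * a(i)."""
    return sum(phi(i) * a(i) for i in range(terms))

def s(phi, terms=2000):
    """Truncated s(phi) = sum_i phi(i)."""
    return sum(phi(i) for i in range(terms))

a = lambda n: 1.0 / (n + 1)   # a sample element of c_0 (tends to 0)

# <delta_n, a> = a(n) -> 0: delta_n -> 0 in the weak-* topology ...
pairings = [pair(delta(n), a) for n in (10, 100, 1000)]
# ... yet s(delta_n) = 1 for every n, so s cannot be weak-* continuous
values_of_s = [s(delta(n)) for n in (10, 100, 1000)]
```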
In both cases, we actually show that the linear span of the rank one operators $$T_{n,m}(\phi) = e_n(\phi) \cdot \delta_m \in L(\ell^1_\sigma)$$ is dense in $$L(\ell^1)$$.
First proof:
The key statement in this proof is:
Given $$f \in L(\ell^1)$$, define: $$f_n = \sum_{i=0}^n \sum_{j = 0}^n f(\delta_i)(j) \cdot T_{ij}.$$ This is a sequence of finite rank operators in $$L(\ell^1_\sigma)$$ converging to $$f$$ in the strong operator topology.
We prove this as follows. As part of the proof that $$(\delta_n)$$ is a Schauder basis for $$\ell^1$$, we know that for all $$\phi \in \ell^1$$, $$\left(\sum_{i=0}^n \phi(i) \cdot \delta_i\right) \to \phi$$. So for each $$\epsilon > 0$$, there exists an $$N \in \N$$ such that for all $$n \geq N$$, $$\| \phi - \sum_{i=0}^n \phi(i) \cdot \delta_i \| < \frac{\epsilon}{\| f \|}$$. So for all $$n \geq N$$:
$$\| f(\phi) - f_n(\phi) \| = \sum_{p = 0}^\infty |f(\phi)(p) - f_n(\phi)(p)|$$ $$= \sum_{p=0}^\infty \left| f(\phi)(p) - \sum_{i=0}^n \sum_{j=0}^n f(\delta_i)(j) \cdot e_i(\phi) \cdot \delta_j(p) \right|$$ $$= \sum_{p=0}^\infty \left| f(\phi)(p) - \sum_{i=0}^n f(\delta_i)(p) \cdot \phi(i) \right|$$ $$= \left\| f(\phi) - f\left(\sum_{i=0}^n \phi(i) \cdot \delta_i \right) \right\| \leq \|f\| \cdot \frac{\epsilon}{\|f\|} = \epsilon$$ So $$f_n \to f$$ pointwise in $$\ell^1$$, i.e. in the strong operator topology. Since the weak-* topology is coarser than the norm topology, it also converges pointwise for that, so in the weak-* operator topology.
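As a quick numerical illustration of this strong-operator convergence (a sketch only: the concrete operator and the test vector are my choices, not part of the answer), take the rank-one operator $$f(\phi) = s(\phi)\cdot\delta_0$$ from above, which lies in $$L(\ell^1)$$ but not $$L(\ell^1_\sigma)$$, and a test vector $$\phi(i) = 2^{-i}$$:

```python
# Sketch: for f(phi) = s(phi) * delta_0, one has f(delta_i) = delta_0, so the
# truncations defined in the text reduce to f_n(phi) = (sum_{i<=n} phi(i)) * delta_0,
# and || f(phi) - f_n(phi) ||_1 = | sum_{i>n} phi(i) |, which tends to 0.
# The test element phi(i) = 2^{-i} (so s(phi) = 2) is an arbitrary choice.

def phi(i):
    return 2.0 ** -i

def truncation_error(n):
    head = sum(phi(i) for i in range(n + 1))
    return abs(2.0 - head)          # the l^1 norm of f(phi) - f_n(phi)

errors = [truncation_error(n) for n in (5, 10, 20)]
# the errors are the geometric tails 2^{-n}, shrinking to 0
```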
To prove that $$f_n \to f$$ in $$\sigma(L(\ell^1),\ell^1 \hat{\otimes}c_0)$$, we observe that by comparing the neighbourhood bases to each other, the weak-* operator topology and the topology $$\sigma(L(\ell^1),\ell^1 \otimes c_0)$$ agree (i.e. for the weak-* topology without completing the tensor product). Now, $$\| f_n \| \leq \| f\|$$, so we can use the following standard fact:
If $$E$$ is a Banach space, $$D \subseteq E$$ a dense subspace, then $$\sigma(E^{*},E)$$ and $$\sigma(E^{*},D)$$ agree on all norm bounded subsets of $$E^{*}$$.
I don't know a good reference for it in the above formulation, the closest being Schaefer's Topological Vector Spaces III.4.5 (taking $$F = \R$$ in that theorem). To finish, we apply it with $$E^* = L(\ell^1)$$, $$E = \ell^1 \hat{\otimes} c_0$$ and $$D = \ell^1 \otimes c_0$$, and the norm bounded set $$\{ g \in L(\ell^1) \mid \| g \| \leq \| f \| \}$$.
Second proof:
In this case, the key statement is
The set of maps $$(T_{n,m})_{n,m \in \N}$$ separates the points of $$\ell^1 \hat{\otimes} c_0$$ under the pairing with $$L(\ell^1)$$.
By a standard consequence of the Hahn-Banach theorem (see e.g. Schaefer's Topological Vector Spaces IV.1.3) this shows that the linear span of $$(T_{n,m})_{n,m \in \N}$$ is $$\sigma(L(\ell^1),\ell^1 \hat{\otimes} c_0)$$-dense. As the weak-* operator topology on $$L(\ell^1)$$ is coarser than this topology, this also proves density in that topology.
We use a theorem of Grothendieck to identify $$\ell^1 \hat{\otimes} c_0$$ with $$\ell^1[c_0]$$, i.e. the space of absolutely summable sequences in $$c_0$$. Under this isomorphism, the pairing of $$L(\ell^1)$$ with $$\ell^1 \hat{\otimes} c_0$$ is mapped to the following, where $$f \in L(\ell^1)$$ and $$\Phi \in \ell^1[c_0]$$: $$\langle f, \Phi \rangle_2 = \sum_{n = 0}^\infty \langle f(\delta_n), \Phi(n)\rangle_1$$ If $$\langle T_{n,m}, \Phi \rangle_2 = 0$$ for all $$n,m \in \N$$, this means $$0 = \sum_{i=0}^\infty \langle T_{n,m}(\delta_i), \Phi(i) \rangle_1 = \sum_{i=0}^\infty \sum_{j=0}^\infty T_{n,m}(\delta_i)(j) \cdot \Phi(i)(j) = \sum_{i=0}^\infty \sum_{j=0}^\infty \delta_i(n)\cdot \delta_m(j) \cdot \Phi(i)(j)$$ $$= \Phi(n)(m)$$ for all $$n,m \in \N$$, and therefore $$\Phi = 0$$. As discussed, this shows that the linear span of $$(T_{n,m})_{n,m \in \N}$$ is $$\sigma(L(\ell^1),\ell^1[c_0])$$-dense in $$L(\ell^1)$$ and so $$L(\ell^1_\sigma)$$ is also dense. | 2021-07-29 08:32:59 |
https://solvedlib.com/find-the-velocity-function-and-position-function,399795 | # Find the velocity function and position function of an object moving along a straight line with...
###### Question:
Find the velocity function and position function of an object moving along a straight line with the acceleration a(t) = e^t, initial velocity v(0) = 60, and initial position s(0) = 40.
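Worked answer as a symbolic sketch (sympy; the constant names C1, C2 are just the integration constants): integrating a(t) = e^t once and fitting v(0) = 60 gives v(t) = e^t + 59, and integrating again with s(0) = 40 gives s(t) = e^t + 59t + 39.

```python
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
a = sp.exp(t)                                     # a(t) = e^t

v = sp.integrate(a, t) + C1                       # v(t) = e^t + C1
c1 = sp.solve(sp.Eq(v.subs(t, 0), 60), C1)[0]     # v(0) = 60  ->  C1 = 59
v = v.subs(C1, c1)

s = sp.integrate(v, t) + C2                       # s(t) = e^t + 59 t + C2
c2 = sp.solve(sp.Eq(s.subs(t, 0), 40), C2)[0]     # s(0) = 40  ->  C2 = 39
s = s.subs(C2, c2)
```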
| 2022-07-04 03:36:39 |
http://www.oalib.com/relative/3568227 |
Physics , 2005, DOI: 10.1103/PhysRevB.72.134524 Abstract: We explore the metastability effects across the order-disorder transition pertaining to the peak effect phenomenon in critical current density ($J_c$) via the first and the third harmonic ac susceptibility measurements in the weakly pinned single crystals of $2H$-$NbSe_2$. An analysis of our data suggests that an imprint of the limiting (spinodal) temperature above which $J_c$ is path independent can be conveniently located in the third harmonic data ($\chi_{3\omega}^{\prime}$).
Physics , 2004, Abstract: Magnetoresistance (MR) of the Bi$_{2-x}$Pb$_x$Sr$_2$Co$_2$O$_y$ ($x$=0, 0.3, 0.4) single crystals is investigated systematically. A nonmonotonic variation of the isothermal in-plane and out-of-plane MR with the field is observed. The out-of-plane MR is positive at high temperatures, increases with decreasing $T$, exhibits a pronounced hump, and changes sign from positive to negative at a certain temperature. These results strongly suggest that the observed MR consists of two contributions: one \emph{negative} and one \emph{positive} component. The isothermal MR in high magnetic fields follows an $H^2$ law, while the negative contribution comes from spin scattering of carriers by localized magnetic moments, as described by the Khosla-Fischer model.
Physics , 2006, DOI: 10.1007/BF02704939 Abstract: The existence of a peak effect in transport properties (a maximum of the critical current as function of magnetic field) is a well-known but still intriguing feature of type II superconductors such as NbSe2 and Bi-2212. Using a model of pinning by surface irregularities in anisotropic superconductors, we have developed a calculation of the critical current which allows estimating quantitatively the critical current in both the high critical current phase and in the low critical current phase. The only adjustable parameter of this model is the angle of the vortices at the surface. The agreement between the measurements and the model is really very impressive. In this framework, the anomalous dynamical properties close to the peak effect is due to co-existence of two different vortex states with different critical currents. Recent neutron diffraction data in NbSe2 crystals in presence of transport current support this point of view.
Physics , 2001, DOI: 10.1103/PhysRevLett.88.167005 Abstract: Magnetoresistance (MR) in the a-axis resistivity of untwinned YBa_{2}Cu_{3}O_{y} single crystals is measured for a wide range of doping (y = 6.45 - 7.0). The y-dependence of the in-plane coherence length \xi_{ab} estimated from the fluctuation magnetoconductance indicates that the superconductivity is anomalously weakened in the 60-K phase; this gives evidence, together with the Hall coefficient and the a-axis thermopower data that suggest the hole doping to be 12% for y = 6.65, that the origin of the 60-K plateau is the 1/8 anomaly. At high temperatures, the normal-state MR data show signatures of the Zeeman effect on the pseudogap in underdoped samples.
Physics , 2009, Abstract: We investigate charge transport in a two-dimensional ferromagnet/ferromagnet junction on a topological insulator. The conductance across the interface depends sensitively on the directions of the magnetizations of the two ferromagnets, showing anomalous behaviors compared with the conventional spin-valve. It is found that the conductance depends strongly on the in-plane direction of the magnetization. Moreover, in sharp contrast to the conventional magnetoresistance effect, in the p-n junction the conductance at the parallel configuration is much smaller than that at the antiparallel configuration. This stems from how the wavefunctions connect between the two sides.
Physics , 2012, DOI: 10.1103/PhysRevB.85.224416 Abstract: The present paper theoretically investigates magnetoresistance curves in quasiperiodic magnetic multilayers for two different growth directions, namely [110] and [100]. We considered identical ferromagnetic layers separated by non-magnetic layers with two different thicknesses chosen based on the Fibonacci sequence. Using parameters for Fe/Cr multilayers, four terms were included in our description of the magnetic energy: Zeeman, cubic anisotropy, bilinear and biquadratic couplings. The minimum energy was determined by the gradient method and the equilibrium magnetization directions found were used to calculate magnetoresistance curves. By choosing spacers with a thickness such that biquadratic coupling is stronger than bilinear coupling, unusual behaviors for the magnetoresistance were observed: (i) for the [110] case there is a different behavior for structures based on even and odd Fibonacci generations; and more interesting, (ii) for the [100] case we found magnetic field ranges for which the magnetoresistance increases with magnetic field.
Physics , 2012, Abstract: The periodic response of magnetoresistance to an externally tunable parameter, such as magnetic field or chemical composition, in bulk or artificially designed materials has been exploited for technological applications as well as to advance our understanding of many novel effects of solid state physics. Some notable examples are the giant magnetoresistance effect in layered materials, the quantum Hall effect in semiconductor heterostructures, and the phase coherence of the electronic wave function in disordered metals. In recent years, the ability to engineer materials at the nanoscale has played a key role in exploring new phenomena. Using a system involving a periodic Co dot array in direct contact with a surrounding polycrystalline Cu film, we report the observation of giant thermal hysteresis and an anomalous oscillatory magnetoresistance behavior. The unusual aspects of the oscillatory magnetoresistance include its observation along only one field scan direction in an intermediate temperature range of 100 K < T < 200 K. Reducing the thickness of the Cu film weakens the magnetoresistance oscillation. These properties suggest a new phenomenon, which could be harnessed for future technological applications.
Physics , 2005, DOI: 10.1103/PhysRevLett.95.177005 Abstract: We present a mode locking (ML) phenomenon of vortex matter observed around the peak effect regime of 2H-NbSe$_2$ pure single crystals. The ML features allow us not only to trace how the shear rigidity of driven vortices persists on approaching the second critical field, but also to demonstrate a dynamic melting transition of driven vortices at a given velocity. We observe the velocity dependent melting signatures in the peak effect regime, which reveal a crossover between the disorder-induced transition at small velocity and the thermally induced transition at large velocity. This uncovers the relationship between the peak effect and the thermal melting.
Physics , 2006, DOI: 10.1016/j.physc.2007.04.032 Abstract: Electric and magnetic characterization of NbSe2 single crystals is first presented in detail. Then, some preliminary measurements of the fluctuation-diamagnetism (FD) above the transition temperature Tc are presented. The moderate uniaxial anisotropy of this compound allowed us to observe the fluctuation effects for magnetic fields H applied in the two main crystallographic orientations. The superconducting parameters resulting from the characterization suggest that it is possible to do a reliable analysis of the FD in terms of the Ginzburg-Landau (GL) theory.
Physics , 2010, DOI: 10.1103/PhysRevB.82.014524 Abstract: We report on measurements and a detailed analysis of the reversible magnetization of superconducting NbSe2 single crystals. By comparing the experimental data with Ginzburg Landau theory we show that superconductivity in NbSe2 cannot be explained by an anisotropic single-band, but by a multi-band scenario. Applying a simple two-band model reveals the basic mixed-state parameters, which are quite different in the two bands. We identify a strongly anisotropic band that determines the properties at high magnetic fields, and a second almost isotropic band that dominates at low fields. Our method is well suited for distinguishing anisotropic single-band from multi-band superconductivity in various materials.
| 2019-11-12 11:44:01 |
http://mathhelpforum.com/calculus/58786-integration-help.html | # Math Help - integration help
1. ## integration help
i cant seem to figure this out
find the integral of
log(base 8) (2x+1) dx
i'm not sure how to approach this problem
i assume theres some integration by parts
so would it be like this
intg( LOG(b8) (2x+1)dx=(x^2+x)(LOGb8) - intg(x^2+x/8)
2. Originally Posted by Lone21
i cant seem to figure this out
find the integral of
log(base 8) (2x+1) dx
i'm not sure how to approach this problem
i assume theres some integration by parts
so would it be like this
intg( LOG(b8) (2x+1)dx=(x^2+x)(LOGb8) - intg(x^2+x/8)
\begin{aligned}\int\log_8(2x+1)\,dx&=\frac{1}{\ln 8}\int\ln(2x+1)\,dx\\ &=\frac{1}{\ln 8}\left[\frac{2x+1}{2}\ln(2x+1)-x\right]+C\end{aligned}

(For the second line, substitute $u=2x+1$ and use $\int\ln u\,du=u\ln u-u$; the leftover constant is absorbed into $C$.) | 2015-07-03 00:19:10 |
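Continuing with the substitution u = 2x + 1 gives the antiderivative ((2x+1)/2 · ln(2x+1) − x)/ln 8 + C, which can be verified by differentiation. A sympy sketch of that check:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
integrand = sp.log(2*x + 1, 8)    # log base 8 of (2x + 1)
antideriv = (sp.Rational(1, 2)*(2*x + 1)*sp.ln(2*x + 1) - x) / sp.ln(8)

# differentiating the proposed antiderivative should recover the integrand
check = sp.simplify(sp.diff(antideriv, x) - integrand)
```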
http://mathhelpforum.com/trigonometry/1725-numerical-value-trig.html | 1. ## numerical value (trig)
find a numerical value of one trigonometric function of each x.
2 sin^2x = 3 cos^2x
1-sin^2x = 1/9
1+ tan^2x = sin^2x + 1/sec^2x
2. First one,
$2\sin^2x=3\cos^2x$
Use identity $\cos^2x=1-\sin^2x$.
Thus,
$2\sin^2x=3(1-\sin^2x)$
Thus, open parentheses,
$2\sin^2x=3-3\sin^2x$
Thus,
$5\sin^2x=3$
Thus,
$\sin^2x=\frac{3}{5}=\frac{15}{25}$
Thus,
$\sin x=\pm\frac{\sqrt{15}}{5} \approx \pm.775$
Thus, all non-coterminal angles are, (use the arcsin)
$x\approx 51,129,231,309$
3. Second one,
$1-\sin^2x=\frac{1}{9}$
Use identity $1-\sin^2x=\cos^2x$
Thus, $\cos^2x=\frac{1}{9}$
Thus,
$\cos x=\pm \frac{1}{3}$
Thus, all the non-coterminal angles, (use arccos)
$x\approx 71,109,251,289$
4. Problem 3,
$1+\tan^2x=\sin^2x+\frac{1}{\sec^2x}$
Use identity $\frac{1}{\sec x}=\cos x$
Thus,
$1+\tan^2x=\sin^2x+\cos^2x$
Use identity $\sin^2x+\cos^2x=1$
Thus,
$1+\tan^2x=1$
Thus,
$\tan^2x=0$
Thus,
$\tan x=0$
Thus, all the non-coterminal angles, (use arctan)
$x=0,180$.
Q.E.D.
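A quick numerical check of the worked angles (not part of the original thread; degrees rounded to the nearest integer, matching the values quoted above):

```python
# Verify the first two solutions numerically.
import math

# Problem 1: 2 sin^2 x = 3 cos^2 x  =>  sin x = +/- sqrt(3/5)
x1 = math.asin(math.sqrt(3 / 5))                # principal solution, radians
assert abs(2 * math.sin(x1) ** 2 - 3 * math.cos(x1) ** 2) < 1e-12
deg1 = round(math.degrees(x1))                  # -> 51

# Problem 2: 1 - sin^2 x = 1/9  =>  cos x = +/- 1/3
x2 = math.acos(1 / 3)
assert abs(1 - math.sin(x2) ** 2 - 1 / 9) < 1e-12
deg2 = round(math.degrees(x2))                  # -> 71

# Problem 3 reduces to tan^2 x = 0, i.e. x = 0 or 180 degrees.
print(deg1, deg2)
```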
5. so more than one problem relating to the same topic counts as multiple posts?
6. I saw that you made multiple posts on the topic of trigonometry. I do not think you want to "spam" the forum. I would allow it, but be careful and do not start asking the same question everywhere, because then we, the moderators, have to waste time deleting them. CaptainBlack already banned a person for 1 day for doing "spamming".
7. Originally Posted by ThePerfectHacker
I saw that you made multiple posts on the topic of trigonometry. I do not think you want to "spam" the forum. I would allow it, but be careful and do not start asking the same question everywhere, because then we, the moderators, have to waste time deleting them. CaptainBlack already banned a person for 1 day for doing "spamming".
I would not have banned them for posting the same question to multiple
fora, at most I may have deleted all but one.
The person in question posted exactly the same message 5 times (and may
have been still posting copies when I banned them), and the message was
not a question but publicity for their web site (admittedly mathematical).
As it is I have left one copy of their message online.
I must confess to being touchy about spam on message boards, and if I
have the ability to stop it I will. I have seen enough good sites ruined by
spam.
RonL | 2017-08-23 03:32:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5020453333854675, "perplexity": 3650.025134831283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.82/warc/CC-MAIN-20170823020201-20170823040201-00099.warc.gz"} |
http://math.stackexchange.com/questions/73014/extend-positive-function-by-positive-function-in-sobolev-spaces | # Extend positive function by positive function in Sobolev spaces
This is in connection to this question. I understand the solution, but I want to ask something else regarding the extension of the function. The question is like this:
Suppose that $v$ is a positive real function with $v \in H^1(\Omega)$ and there is a ball $B$ such that $v$ has an extension $w \in H^1(B)$. Is it true that the extension $w$ can be chosen to be also positive?
The answer is yes. Take an arbitrary extension of $v$, say $w$, first; then $w_+$ will be the positive extension you want. – Syang Chen Oct 29 '11 at 1:15
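As a numerical aside (not from the thread): smooth truncations of the form $F_n(x)=\sqrt{x^2+n^{-2}}-1/n$ for $x\ge 0$ (and $0$ otherwise) — one standard choice of regularization; details may differ from the answer below — converge uniformly to $x_+=\max(x,0)$ at rate $1/n$, which is the mechanism behind the approximation argument:

```python
# Check the uniform bound |F_n(x) - max(x, 0)| <= 1/n on a grid.
import math

def F(n, x):
    return math.sqrt(x * x + n ** -2) - 1.0 / n if x >= 0 else 0.0

for n in (10, 100, 1000):
    grid = [i / 100.0 for i in range(-300, 301)]
    err = max(abs(F(n, x) - max(x, 0.0)) for x in grid)
    assert err <= 1.0 / n   # since x <= sqrt(x^2 + n^-2) <= x + 1/n
```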
In order to expand @Xianghong Chen 's comment, let $w\in H^1(B)$ be an extension of $v$. We have to check that $w_+:=\max(w,0)$ is indeed in $H^1(B)$. Put $F_n(x):=\begin{cases}\sqrt{x^2+n^{-2}}-n^{-1}&\mbox{if }x\geq 0\\ 0&\mbox{otherwise.}\end{cases}$ Then $w_n:=F_n(w)$ is in $H^1(B)$. Since $F_n(0)=0$ and, for $x\geq 0$, $|F_n'(x)|=\frac{2x}{2\sqrt{x^2+n^{-2}}}\leq 1$, we have $\nabla w_n =F_n'(w)\nabla w$, and thanks to the dominated convergence theorem we can see that $w_n$ converges to $w_+$ in $L^2(B)$, and $\nabla w_n\to \nabla w\,\mathbf 1_{\{w(x)>0\}}$ in $L^2(B)$. So $w_+\in H^1(B)$ and is non-negative; since $w_+=v>0$ on $\Omega$, it is the desired extension. | 2015-04-26 03:43:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9815647006034851, "perplexity": 59.81483602559604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246652296.40/warc/CC-MAIN-20150417045732-00250-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://eprint.iacr.org/2011/003/20110105:023104 | ## Cryptology ePrint Archive: Report 2011/003
On the correct use of the negation map in the Pollard rho method
Daniel J. Bernstein and Tanja Lange and Peter Schwabe
Abstract: Bos, Kaihara, Kleinjung, Lenstra, and Montgomery recently showed that ECDLPs on the 112-bit secp112r1 curve can be solved in an expected time of 65 years on a PlayStation 3. This paper shows how to solve the same ECDLPs at almost twice the speed on the same hardware. The improvement comes primarily from a new variant of Pollard's rho method that fully exploits the negation map without branching, and secondarily from improved techniques for modular arithmetic.
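For readers unfamiliar with the method, here is a toy sketch of plain Pollard rho for discrete logarithms in a small multiplicative group. The paper's setting is elliptic-curve groups, with a negation-map speedup not shown here; the group, partition, and numbers below are purely illustrative:

```python
# Toy Pollard rho for discrete logs (illustrative only).
def pollard_rho_dlog(g, h, p, n):
    """Return x with g^x = h (mod p), where g has prime order n."""
    def step(y, a, b):
        s = y % 3                                  # crude 3-way partition
        if s == 0:
            return (y * g) % p, (a + 1) % n, b     # multiply by g
        if s == 1:
            return (y * y) % p, (2 * a) % n, (2 * b) % n   # square
        return (y * h) % p, a, (b + 1) % n         # multiply by h

    for a0 in range(1, n):                         # retry on degenerate collisions
        y1, a1, b1 = (pow(g, a0, p) * h) % p, a0, 1
        y2, a2, b2 = y1, a1, b1
        while True:
            y1, a1, b1 = step(y1, a1, b1)          # tortoise: one step
            y2, a2, b2 = step(*step(y2, a2, b2))   # hare: two steps
            if y1 == y2:
                break
        db = (b1 - b2) % n
        if db != 0:
            # g^a1 h^b1 = g^a2 h^b2  =>  x*(b1 - b2) = a2 - a1 (mod n)
            return ((a2 - a1) * pow(db, -1, n)) % n
    return None

# 4 has prime order 509 modulo the prime 1019; recover the exponent of 4^77.
x = pollard_rho_dlog(4, pow(4, 77, 1019), 1019, 509)
assert x is not None and pow(4, x, 1019) == pow(4, 77, 1019)
```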
Category / Keywords: public-key cryptography / Elliptic curves, discrete-logarithm problem, negation map, branchless algorithms, SIMD
Publication Info: Expanded version of PKC 2011 paper. | 2016-08-28 21:20:33 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8009074926376343, "perplexity": 3620.1271530352788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982947845.70/warc/CC-MAIN-20160823200907-00042-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/387167/nuclear-physics-timing-experiment-without-amplifier | # Nuclear physics timing experiment without Amplifier
I am quite new in nuclear physics instrumentation. So my question may seem silly.
We are performing timing experiment in our lab to obtain better time resolution of NaI(Tl) detectors in conjunction with its relevant electronics. Modules are-
Na22 source; NaI(Tl) followed by preamp (gain set at X6, i.e., six times gain); timing SCA (timing single channel analyser); TAC (time to amplitude converter); MCA (multi-channel analyser).
Now the reason we avoid using an amplifier is that it will blur the timing signal. So basically we are feeding the preamp signal of NaI(Tl) directly to the timing SCA. The problem then is that it is very hard to set the windows (LLD and ULD) of the timing SCA so that both can trigger only when $511~\rm keV$ pass by.
Any ideas?
EDIT::
We have a 22Na source which will emit two prompt (no time gap) 511 keV gamma rays due to electron-positron annihilation. Although an amplifier is shown in the figure, now we are trying the same experiment without the amplifier. With the amplifier we got a timing resolution of approximately 13 ns. Here the timing SCAs will act in window mode and their windows will be fixed at the appropriate positions so that both discriminate only 511 keV photons (there are other gammas also; see the spectrum below). One SCA will produce a start logic pulse and the other SCA will produce a stop pulse for the TAC. The duration between the start and stop pulses will be used as the discharge time of a capacitor placed inside the TAC. Finally that charge will be used as the signal to the MCA. The MCA signal graph will look like this (see below). From that we can get the time resolution using the calibration slope.
Now without the amplifier the signal will be of low voltage. Hence there are two ways: one, use the preamp of NaI(Tl) as an amplifier (changing a jumper setting); two, increase the PMT voltage. I have tried both of them, but with no success. Every time I am getting signals in the MCA all over the place, which means the SCAs are not catching the 511s.
Any suggestions?
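Not part of the original post: a back-of-the-envelope Monte Carlo of the coincidence timing. The per-detector jitter below is an assumed value, chosen only so the resulting Gaussian FWHM lands near the ~13 ns quoted; the start/stop difference of two independent jitters has spread $\sigma\sqrt{2}$, and FWHM $\approx 2.355\,\sigma$:

```python
# Toy Monte Carlo of the TAC spectrum width. sigma_det is an ASSUMED
# per-detector timing jitter, not a measured value.
import random
import statistics

random.seed(0)
sigma_det = 4.0                                   # ns, assumption
deltas = [random.gauss(0.0, sigma_det) - random.gauss(0.0, sigma_det)
          for _ in range(20_000)]                 # start minus stop times
sigma_tac = statistics.stdev(deltas)              # ~ sigma_det * sqrt(2)
fwhm = 2.3548 * sigma_tac                         # Gaussian FWHM, ~13 ns here
```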
• -1. Please clarify what your acronyms stand for. – Chris Feb 18 '18 at 11:56
• Would Electrical Engineering be a better home for this question? (If so, please don't cross-post - flag for a moderator to migrate instead.) – Emilio Pisanty Feb 21 '18 at 20:39
• This seems clearly to be an experimental problem in nuclear physics! Some better wording is warranted. – freecharly Feb 21 '18 at 20:49
• This depends a lot on the specific instruments you are using, and if they are matched. Could you indicate the specific units you have hooked up? – Jon Custer Feb 21 '18 at 21:08
• @JonCuster 1. NaI(Tl) ORTEC Model No.-2M2/2, ORTEC Scinti Pack Model 256 PMT base, Apmplifier ORTEC 545A, Timing SCA 551, TAC 516, MCA Canberra. – aranyak Feb 21 '18 at 23:21 | 2020-10-26 04:09:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4785693287849426, "perplexity": 4834.146293422331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890273.42/warc/CC-MAIN-20201026031408-20201026061408-00413.warc.gz"} |
https://www.interviewquery.com/questions/converted-sessions | # Converted Sessions
Let’s say there are two user sessions that both convert with probability 0.5.
1. What is the probability that they both converted?
2. Given that there are $N$ sessions and they convert with probability $q$, what is the expected number of converted sessions?
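A sketch of the standard answers, assuming the two sessions convert independently (an assumption the question implies but does not state):

```python
# Part 1: probability both sessions convert, under independence.
from fractions import Fraction

p = Fraction(1, 2)
p_both = p * p                     # independence: P(A and B) = P(A) * P(B)
assert p_both == Fraction(1, 4)

# Part 2: by linearity of expectation, E[# converted out of N] = N * q,
# regardless of any dependence between sessions.
N, q = 1000, Fraction(3, 10)       # illustrative values
expected = N * q
assert expected == 300
```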
| 2022-05-20 13:07:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4669031500816345, "perplexity": 1281.3006612446409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662532032.9/warc/CC-MAIN-20220520124557-20220520154557-00488.warc.gz"} |
https://learn.careers360.com/ncert/question-in-an-entrance-test-that-is-graded-on-the-basis-of-two-examinations-the-probability-of-a-randomly-chosen-student-passing-the-first-examination-is-0-point-8-and-the-probability-of-passing-the-second-examination-is-0-point-7/ |
# In an entrance test that is graded on the basis of two examinations, the probability of a randomly chosen student passing the first examination is 0.8 and the probability of passing the second examination is 0.7.
19. In an entrance test that is graded on the basis of two examinations, the probability of a randomly chosen student passing the first examination is $\small 0.8$ and the probability of passing the second examination is $\small 0.7$ . The probability of passing atleast one of them is $\small 0.95$ . What is the probability of passing both?
Let A be the event that the student passes the first examination and B be the event that the students passes the second examination.
P(A $\cup$ B) is probability of passing at least one of the examination.
Therefore,
P(A $\cup$ B) = 0.95 , P(A)=0.8, P(B)=0.7
We know,
P(A $\cup$ B) = P(A)+ P(B) - P(A $\cap$ B)
$\implies$ P(A $\cap$ B) = 0.8 + 0.7 - 0.95 = 1.5 -0.95 = 0.55
Hence,the probability that the student will pass both the examinations is 0.55
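The inclusion-exclusion computation above can be checked mechanically (illustrative snippet, not part of the original page):

```python
# P(A and B) = P(A) + P(B) - P(A or B)
p_a, p_b, p_union = 0.8, 0.7, 0.95
p_both = p_a + p_b - p_union
assert abs(p_both - 0.55) < 1e-12
```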
| 2020-02-24 21:29:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8194255232810974, "perplexity": 566.3277326149503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145981.35/warc/CC-MAIN-20200224193815-20200224223815-00043.warc.gz"} |
https://oar.princeton.edu/handle/88435/pr1gx44t6x |
# Tropical Cyclone Simulation and Response to CO2 Doubling in the GFDL CM2.5 High-Resolution Coupled Climate Model
## Author(s): Kim, Hyeong-Seog; Vecchi, Gabriel A; Knutson, Thomas R; Anderson, Whit G; Delworth, Thomas L; et al
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1gx44t6x
Abstract: Global tropical cyclone (TC) activity is simulated by the Geophysical Fluid Dynamics Laboratory (GFDL) Climate Model, version 2.5 (CM2.5), which is a fully coupled global climate model with a horizontal resolution of about 50 km for the atmosphere and 25 km for the ocean. The present climate simulation shows a fairly realistic global TC frequency, seasonal cycle, and geographical distribution. The model has some notable biases in regional TC activity, including simulating too few TCs in the North Atlantic. The regional biases in TC activity are associated with simulation biases in the large-scale environment such as sea surface temperature, vertical wind shear, and vertical velocity. Despite these biases, the model simulates the large-scale variations of TC activity induced by El Niño–Southern Oscillation fairly realistically. The response of TC activity in the model to global warming is investigated by comparing the present climate with a CO2 doubling experiment. Globally, TC frequency decreases (−19%) while the intensity increases (+2.7%) in response to CO2 doubling, consistent with previous studies. The average TC lifetime decreases by −4.6%, while the TC size and rainfall increase by about 3% and 12%, respectively. These changes are generally reproduced across the different basins in terms of the sign of the change, although the percent changes vary from basin to basin and within individual basins. For the Atlantic basin, although there is an overall reduction in frequency from CO2 doubling, the warmed climate exhibits increased interannual hurricane frequency variability so that the simulated Atlantic TC activity is enhanced more during unusually warm years in the CO2-warmed climate relative to that in unusually warm years in the control climate.
Publication Date: 1-Nov-2014
Citation: Kim, Hyeong-Seog, Gabriel A. Vecchi, Thomas R. Knutson, Whit G. Anderson, Thomas L. Delworth, Anthony Rosati, Fanrong Zeng, and Ming Zhao. "Tropical cyclone simulation and response to CO2 doubling in the GFDL CM2.5 high-resolution coupled climate model." Journal of Climate 27, no. 21 (2014): 8034-8054. doi:10.1175/JCLI-D-13-00475.1.
DOI: doi:10.1175/JCLI-D-13-00475.1
ISSN: 0894-8755
EISSN: 1520-0442
Pages: 8034 - 8054
Type of Material: Journal Article
Journal/Proceeding Title: Journal of Climate
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated. | 2022-05-28 22:06:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45134949684143066, "perplexity": 10517.073798982672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663021405.92/warc/CC-MAIN-20220528220030-20220529010030-00347.warc.gz"} |
https://osc.github.io/ood-documentation/release-1.6/analytics/google-analytics.html | If you wish you can setup your Open-OnDemand instance to send usage data to Google Analytics (GA) that you can then query and report on, this page walks through how to do just that.
Note
You’ll need to have a Google Analytics account setup as a prerequisite to this.
To query GA You’ll need to have to have a service account setup with the appropriate permissions. See info on Google service accounts and Google IAM roles for more details on that.
## Configure Open OnDemand
# /etc/ood/config/ood_portal.yml
---
analytics:
  # the id will be specific to your account, but url is likely the same
  id: UA-99999999-1
This configuration will generate a block similar to this in your apache’s ood-portal.conf file (after running the ood-portal-generator).
<Location "/pun">
...
SetEnv OOD_ANALYTICS_TRACKING_ID "UA-99999999-1"
LuaHookLog analytics.lua analytics_handler
</Location>
You’ll need to create all of these custom dimensions and custom metrics in the appropriate GA account(s).
Warning
Order matters here! Index numbers are given to ensure you create and define these items in the correct order. Otherwise Google Analytics will be incorrectly indexing these metrics.
As an example say Username gets index 3 instead of index 1. Now when you query for dimension3 thinking it’s timestamps, you’ll get back usernames instead!
Table 1.1 GA custom dimensions
Name Index Scope
Username 1 User
Session ID 2 Session
Timestamp 3 Hit
Request Method 5 Hit
Request Status 6 Hit
Document Referrer 7 Hit
Table 1.2 GA custom metrics
Name Index Scope Formatting Type
Proxy Time 1 Hit Integer
User Map Time 2 Hit Integer
Now that you have Open-OnDemand sending information to GA and it’s all configured correctly, you can now query GA for this information, parse it and present it in any fashion you like.
Here’s a small portion of how we query GA in ruby, but there are many GA client libraries available.
This example is not complete and is only meant to illustrate how to query GA given the defined metric set above. Let’s go through each of these things.
# Dimensions - here we want dimensions 1, 3 and something called pagePath which is the web
# page requested. pagePath is a google predefined dimension that we populated. Dimensions 1
# and 3 were created above and are the username and timestamp (this is why the order in
# which they're defined is important).
DIMENSIONS = %w(
ga:dimension3
ga:dimension1
ga:pagePath
)
# we only want to report the hit metrics
METRICS = %w(
ga:hits
)
# First we specify the host so that we only get metrics from a specific host. Secondly,
# we filter only 200 responses (dimension6 is status code) and we don't want to
# report on file editor edits.
FILTERS = %W(
ga:hostname==#{HOST};ga:dimension6==200;ga:pagePath!=/pun/sys/file-editor/edit
)
# now we can create our analytics object and make the query
# Here we query for the data that we want. A lot of things are omitted in this example
# for brevity like START_DATE (dynamic query times like the first day of the month)
# or GA_PROFILE (part of our credentials). And the fact that this is in a loop paginating
# the results, updating 'start_index' and only requesting STEP_SIZE (10,000 in our case)
# results at a time.
results = analytics.get_ga_data(
"ga:#{GA_PROFILE}",
START_DATE,
END_DATE,
METRICS.join(','),
dimensions: DIMENSIONS.empty? ? nil : DIMENSIONS.join(','),
filters: FILTERS.empty? ? nil : FILTERS.join(','),
sort: SORT.empty? ? nil : SORT.join(','),
start_index: start_index,
max_results: STEP_SIZE
)
target = open('my-report', "w")
# now we can write out the results in a format that I want for my reporting.
results.rows.each do |row|
  begin
    app = row[2]
    row[2] = parse_uri(app, user: row[1])
    row << app
    target.write "#{row.join('|')}\n"
  end
end | 2023-02-05 20:29:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17010287940502167, "perplexity": 5591.3238638187495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500288.69/warc/CC-MAIN-20230205193202-20230205223202-00167.warc.gz"} |
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-6-percent-6-7-simple-interest-6-7-exercises-page-446/26 | # Chapter 6 - Percent - 6.7 Simple Interest - 6.7 Exercises - Page 446: 26
\$404

#### Work Step by Step

We can calculate simple interest on a loan by using the formula $I=prt$ (where $I$ is the interest, $p$ is the principal, $r$ is the rate of interest, and $t$ is the amount of time, expressed in years). $t=\frac{1}{2}$, because 6 months equals 0.5 years, so $I=prt=400\times 0.02\times\frac{1}{2}=4$ dollars. Finally, we can find the amount due by adding the interest to the original principal: $400+4=404$ dollars.
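The arithmetic can be verified mechanically (illustrative snippet, not part of the original page):

```python
# Simple interest I = p * r * t, with t in years (6 months = 0.5 years).
principal, rate, t = 400, 0.02, 0.5
interest = principal * rate * t
amount_due = principal + interest
assert abs(interest - 4) < 1e-9
assert abs(amount_due - 404) < 1e-9
```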
| 2021-03-01 01:46:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41691821813583374, "perplexity": 2946.8139843484596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00460.warc.gz"} |
https://www.semanticscholar.org/paper/Towards-Accelerated-Rates-for-Distributed-over-Rogozin-Lukoshkin/0eea7b7cf9a0c3276a5e4de20350756868de95e7 | # Towards Accelerated Rates for Distributed Optimization over Time-Varying Networks
@inproceedings{Rogozin2021TowardsAR,
title={Towards Accelerated Rates for Distributed Optimization over Time-Varying Networks},
author={Alexander Rogozin and Vladislav Lukoshkin and Alexander V. Gasnikov and D. Kovalev and Egor Shulgin},
booktitle={OPTIMA},
year={2021}
}
• Published in OPTIMA 23 September 2020
• Computer Science
We study the problem of decentralized optimization over time-varying networks with strongly convex smooth cost functions. In our approach, nodes run a multi-step gossip procedure after making each gradient update, thus ensuring approximate consensus at each iteration, while the outer loop is based on the accelerated Nesterov scheme. The algorithm achieves precision $\varepsilon > 0$ in $O(\sqrt{\kappa_g}\chi\log^2(1/\varepsilon))$ communication steps and $O(\sqrt{\kappa_g}\log(1/\varepsilon))$ …
An Accelerated Method For Decentralized Distributed Stochastic Optimization Over Time-Varying Graphs
• Computer Science, Mathematics
2021 60th IEEE Conference on Decision and Control (CDC)
• 2021
This work proposes the first accelerated (in the sense of Nesterov's acceleration) method that simultaneously attains optimal up to a logarithmic factor communication and oracle complexity bounds for smooth strongly convex distributed stochastic optimization.
Accelerated Gradient Tracking over Time-varying Graphs for Decentralized Optimization
• Computer Science
ArXiv
• 2021
The widely used accelerated gradient tracking is revisited and extended to time-varying graphs, and the dependence on the network connectivity constants can be further improved to $O(1)$ and $O(\frac{\gamma}{1-\sigma_\gamma})$ for the computation and communication complexities, respectively.
Newton Method over Networks is Fast up to the Statistical Precision
• Computer Science
ICML
• 2021
This work proposes a distributed cubic regularization of the Newton method for solving (constrained) empirical risk minimization problems over a network of agents, modeled as an undirected graph, and derives global complexity bounds for convex and strongly convex losses.
ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks
• Computer Science
ICML
• 2021
ADOM uses a dual oracle, i.e., it assumes access to the gradient of the Fenchel conjugate of the individual loss functions, and its communication complexity is the same as that of the accelerated Nesterov gradient method (Nesterov, 2003).
Recent theoretical advances in decentralized distributed convex optimization.
• Computer Science
• 2020
This paper focuses on how the results of decentralized distributed convex optimization can be explained based on optimal algorithms for the non-distributed setup, and provides recent results that have not been published yet.
Parallel and Distributed algorithms for ML problems
• Computer Science
• 2020
A survey of modern parallel and distributed approaches to solve sum-type convex minimization problems coming from ML applications is made.
Inexact Tensor Methods and Their Application to Stochastic Convex Optimization
• Computer Science
• 2020
A general non-accelerated Tensor method under inexact information on higher-order derivatives is proposed, its convergence rate is analyzed, and sufficient conditions are provided for this method to have similar complexity as the exact tensor method.
STRONGLY CONVEX DECENTRALIZED OPTIMIZATION OVER TIME-VARYING NETWORKS
• Computer Science
• 2021
This work designs two optimal algorithms, one of which is a variant of the recently proposed algorithm ADOM enhanced via a multi-consensus subroutine and a novel algorithm, called ADOM+, which is optimal in the case when access to the primal gradients is assumed.
Decentralized Saddle-Point Problems with Different Constants of Strong Convexity and Strong Concavity
• Computer Science, Mathematics
• 2022
This paper studies distributed saddle-point problems (SPP) with strongly-convex-strongly-concave smooth objectives that have different strong convexity and strong concavity parameters of composite terms, which correspond to min and max variables, and a bilinear saddle-point part.
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks
• Computer Science
NeurIPS
• 2021
This work designs two optimal algorithms, one of which is a variant of the recently proposed algorithm ADOM enhanced via a multi-consensus subroutine and a novel algorithm, called ADOM+, which is optimal in the case when access to the primal gradients is assumed.
## References
Optimal Accelerated Variance Reduced EXTRA and DIGing for Strongly Convex and Smooth Decentralized Optimization
• Computer Science
ArXiv
• 2020
The famous EXTRA and DIGing methods with accelerated variance reduction (VR) are extended, and two methods, which require the time of stochastic gradient evaluations and communication rounds to reach precision $\epsilon$, are proposed.
A Sharp Convergence Rate Analysis for Distributed Accelerated Gradient Methods
• Computer Science
• 2018
Two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters are presented, which achieves the near optimal complexities for both computation and communication.
Variance Reduced EXTRA and DIGing and Their Optimal Acceleration for Strongly Convex Decentralized Optimization
• Computer Science
• 2020
The widely used EXTRA and DIGing methods with variance reduction (VR) are extended, and the accelerated VR-EXTRA and VR-DIGing with both the optimal stochastic gradient computation complexity and communication complexity are proposed.
An Optimal Algorithm for Decentralized Finite Sum Optimization
• Computer Science
SIAM Journal on Optimization
• 2021
A lower bound of complexity is given to show that ADFS is optimal among decentralized algorithms, which uses local stochastic proximal updates and decentralized communications between nodes to derive ADFS.
Revisiting EXTRA for Smooth Distributed Optimization
• Computer Science, Mathematics
SIAM J. Optim.
• 2020
A sharp complexity analysis for EXTRA with the improved Catalyst framework is given, for both the strongly convex case and the case where strong convexity is absent; the communication complexities of the accelerated EXTRA are only worse by logarithmic factors.
Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks
• Computer Science
ICML
• 2017
The efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression is verified.
Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
• Mathematics, Computer Science
NeurIPS
• 2018
The error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions, and the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate are provided.
Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
• Mathematics, Computer Science
SIAM J. Optim.
• 2017
This paper introduces a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique that converges to a global and consensual minimizer over time-varying graphs. | 2022-06-24 23:06:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7727960348129272, "perplexity": 2576.4011763061844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00081.warc.gz"} |
http://tex.stackexchange.com/questions/106336/align-environment-has-too-much-white-space | # align environment has too much white space
How can I make the following code look good?
\begin{align*}
&\text{Als} &&a,b \in H\\
&\Rightarrow &&a,b\in H_1 \land a,b \in H_2 &&\text{Definitie van doorsnede}\\
&\Rightarrow &&ab^{-1}\in H_1 \land ab^{-1}\in H_2 &&\text{Group axioma's}\\
&\Rightarrow &&ab^{-1}\in H &&\text{Definitie van intersectie}\\
&\Rightarrow &&H\leq G &&\text{Ondergroep test}
\end{align*}
Now there is just way too much whitespace. Is there a way to control the amount of whitespace?
Hm.. in codecogs it does look good. So this is how I would want it to look:
## 1 Answer
the desired result can be produced by alignat* -- but then you have to manage the spacing yourself:
\begin{alignat*}{4}
&\text{Als}\quad &&a,b \in H\\
&\Rightarrow {} &&a,b \in H_1 \land a,b \in H_2 &&\text{Definitie van doorsnede}\\
&\Rightarrow {} &&ab^{-1} \in H_1 \land ab^{-1}\in H_2 \quad &&\text{Group axioma's}\\
&\Rightarrow {} &&ab^{-1} \in H &&\text{Definitie van intersectie}\\
&\Rightarrow {} &&H \leq G &&\text{Ondergroep test}
\end{alignat*}
edit: a look at the amsmath documentation would be salutary -- texdoc amsldoc on a tex live system.
the (required) argument after \begin{alignat} was introduced to make it possible to determine (for ease of macro programming, it appears) the width of a particular single column so that it can be treated in a special manner if necessary. the value is equal to the number of &s in the line with the most, minus one. an alternate method of calculating the value is given in the user's guide cited above.
aaah thanks, looks great! why {4} after alignat ? – Kasper Apr 1 '13 at 12:30 | 2014-11-24 18:04:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999809205532074, "perplexity": 5799.370612405593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380948.74/warc/CC-MAIN-20141119123300-00009-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-3-section-3-6-derivatives-of-logarithmic-functions-3-6-exercises-page-223/23 | ## Calculus: Early Transcendentals 8th Edition
$y''=-\dfrac{1}{2}\cdot\dfrac{\ln x}{2x\sqrt x}$
$y'=\dfrac{2+\ln x}{2\sqrt x}$ and $y''=-\dfrac{1}{2}\cdot\dfrac{\ln x}{2x\sqrt x}$. The first derivative can be found by using the product rule of differentiation. Here, $y'=\dfrac{d}{dx}(\sqrt x\ln x)=\sqrt x\dfrac{d}{dx}(\ln x)+\ln x\dfrac{d}{dx}(\sqrt x)=\sqrt x\cdot\dfrac{1}{x}+\ln x\cdot\dfrac{1}{2}x^{-1/2}$. Thus, $y'=\dfrac{2+\ln x}{2\sqrt x}$. Having found the first derivative $y'=\dfrac{2+\ln x}{2\sqrt x}$, we find the second derivative with the help of the quotient rule of differentiation: $y''=\dfrac{1}{2}\dfrac{d}{dx}\left[\dfrac{2+\ln x}{\sqrt x}\right]=\dfrac{1}{2}\cdot\dfrac{\sqrt x\frac{d}{dx}(2+\ln x)-(2+\ln x)\frac{d}{dx}\sqrt x}{(\sqrt x)^{2}}=-\dfrac{1}{2}\cdot\dfrac{\ln x}{2x\sqrt x}$. Hence, $y''=-\dfrac{1}{2}\cdot\dfrac{\ln x}{2x\sqrt x}$.
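The closed forms above can be sanity-checked numerically; a small Python sketch (an illustration, not part of the original solution) compares them against central finite differences:

```python
from math import log, sqrt

# y(x) = sqrt(x) * ln(x), with the closed forms derived above
def y(x):  return sqrt(x) * log(x)
def y1(x): return (2 + log(x)) / (2 * sqrt(x))          # first derivative
def y2(x): return -0.5 * log(x) / (2 * x * sqrt(x))     # second derivative

# Central finite difference as an independent numerical check
def d1(f, x, h=1e-6): return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0, 7.3):
    assert abs(d1(y, x) - y1(x)) < 1e-6    # y1 matches dy/dx
    assert abs(d1(y1, x) - y2(x)) < 1e-6   # y2 matches dy1/dx
```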
http://perimeterinstitute.ca/node/113425 | # Self-dual N=4 theories in four dimensions
Known N=4 theories in four dimensions are characterized by a choice of gauge group, and in some cases some "discrete theta angles", as classified by Aharony, Seiberg and Tachikawa. I will review how this data, for the theories with algebra $su(N)$, is encoded in various familiar realizations of the theory, in particular in the holographic $AdS_5 \times S^5$ dual and in the compactification of the (2,0) $A_N$ theory on $T^2$. I will then show how the resulting structure, given by a choice of polarization of an appropriate cohomology group, admits additional choices that, unlike known theories, generically preserve $SL(2,\mathbb{Z})$ invariance in four dimensions.
Event Type:
Seminar
Event Date:
Tuesday, October 24, 2017 - 14:30 to 16:00
Location:
Space Room
Room #:
400 | 2018-04-23 17:00:00 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8465455174446106, "perplexity": 2497.3061264413386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946120.31/warc/CC-MAIN-20180423164921-20180423184921-00069.warc.gz"} |
https://www.vedantu.com/question-answer/find-the-value-of-x3-8y3-36xy-216-when-x-2y-+-6-class-9-maths-cbse-5f5da6498f2fe2491853a4a9 | Question
Find the value of $x^{3} - 8y^{3} - 36xy - 216$ when $x = 2y + 6$.
Hint: As per the given polynomial and the given condition, we just need to substitute the condition into $x^{3} - 8y^{3} - 36xy - 216$; this means directly replacing the value of x in the given polynomial and solving to get the required answer.
Complete step by step answer:
The given polynomial is $x^{3} - 8y^{3} - 36xy - 216$ and the condition is $x = 2y + 6$.
So, replacing the value of x in the polynomial, we get:
$\Rightarrow x^{3} - 8y^{3} - 36xy - 216$; as $x = 2y + 6$, this becomes $(2y + 6)^{3} - 8y^{3} - 36(2y + 6)y - 216$.
Now, using the formula $(a + b)^{3} = a^{3} + b^{3} + 3ab(a + b)$ and simplifying the equation further,
$= (2y)^{3} + (6)^{3} + 3(2y)(6)(2y + 6) - 8y^{3} - 36(2y + 6)y - 216 = 8y^{3} + 216 + 36(2y + 6)y - 8y^{3} - 36(2y + 6)y - 216 = 0$
Hence, zero is our correct answer.
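The cancellation can also be confirmed numerically; a short Python check (an illustration, not part of the original solution) substitutes x = 2y + 6 for several values of y:

```python
# Check that x^3 - 8y^3 - 36xy - 216 vanishes whenever x = 2y + 6
def p(x, y):
    return x**3 - 8*y**3 - 36*x*y - 216

for y in (-3, 0, 1, 2, 10):
    x = 2*y + 6
    assert p(x, y) == 0
```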
Note: In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponentiation of variables. Polynomials are of different types, namely monomial, binomial, and trinomial. A monomial is a polynomial with one term. A binomial is a polynomial with two unlike terms.
Additional information: A polynomial function is a function that involves only non-negative integer powers (that is, only positive integer exponents) of a variable, as in the quadratic equation, cubic equation, etc. A constant polynomial is a polynomial that, when evaluated at each point in its domain of definition, results in the same value.
http://mathoverflow.net/questions/97682/a-homotopyish-landweber-exact-functor-theorem/97695 | # A homotopyish Landweber exact functor theorem
Let $M$ be a $\pi_*(MU)$-module. The Landweber exact functor theorem gives conditions for the functor that sends a space $X$ to $MU(X) \otimes_{\pi_*(MU)} M$ to define a homology theory on spaces, which thus comes from a spectrum.
It'd be nice, though, if one could construct the spectrum directly, instead of going through the homology theory. For instance, it would be nice if one could construct an actual $MU$-module (possibly under further hypotheses) or an $MU$-algebra when $M$ is an algebra. Is there another version of the exact functor theorem that lets one do this?
I'm sceptical about a possible positive answer because, if there were a more direct construction, I would expect it to be functorial on $M$, but the spectrum representing a cohomology theory is not functorial. – Fernando Muro May 22 '12 at 22:12
I'll second Fernando's comment. In particular, there are a lot of Landweber exact elliptic cohomology theories. Constructing them functorially is very difficult. Constructing MU-algebras can be terrifyingly difficult depending on how much structure you want. The problem is that you're fundamentally starting with "up to homotopy" data (a module), and rectifying that into an actual spectrum is very, very unlikely to be a canonical procedure. (This isn't specific to homotopy theory, either. The same problem should show up in the differential-graded world.) – Tyler Lawson May 23 '12 at 3:50
Here are three methods that I know:
• In the case $M_*=(MU_*/I)[S^{-1}]$ (where $I$ is generated by a regular sequence) there is a more direct construction by reducing to the cases $M_*=MU_*/a$ and $M_*=MU_*[a^{-1}]$. My paper 'Products on MU-modules' is probably the sharpest version, but there are many earlier versions in a similar spirit.
• In the case $M_*=MU_*[x_1,\dotsc,x_r]$ with $|x_i|=0$ you can use $MU\wedge\Sigma^\infty_+\mathbb{N}^r$ (and this has an $E_\infty$ structure).
• In the case $M_*=MU_*[n^{-1}]$ (for some $n\in MU_0=\mathbb{Z}$) you can note that there are natural $E_\infty$ maps $$MU\xleftarrow{f}\Sigma^\infty_+DS^0\xrightarrow{}\Sigma^\infty_+QS^0,$$ where $f$ has degree $n$ on the bottom cell. The smash product $$MU\wedge_{\Sigma^\infty_+DS^0}\Sigma^\infty_+QS^0$$ then has the required property.
There are some fairly obvious ways to combine these methods and generalise them slightly.
Under the general conditions of the Landweber theorem, I know of several people including myself who have looked quite hard for a more direct construction, but without success.
Thanks! This is pretty helpful. – Akhil Mathew May 25 '12 at 12:24
I would be very interested in a universal property for one of these spectra (for instance, my understanding is that Lurie has a universal property for complex K-theory), but that might not be a realistic expectation in general. – Akhil Mathew May 25 '12 at 12:26
I'm not sure that this is exactly what you are looking for, but I looked a bit at the Landweber exact functor theorem in the context of $MU$-modules at the end of a very short paper: Idempotents and Landweber exactness in brave new algebra. Homology, homotopy, and applications 3(2001), 355--359. Theorem 8 there reads: If $M_*$ is a Landweber exact $MU_*$-module, then there is an $MU$-module $M$ such that $\pi_*(M) = M_*$ and, for any finite cell $MU$-module $X$, $\pi_*(X)\otimes_{MU_*} M_* \cong \pi_*(X\wedge_{MU} M)$.
This result is quite interesting; does a full proof appear elsewhere? – Akhil Mathew May 24 '12 at 4:53
Akhil, short though that paper is, I claim that the proof there is as complete as it needs to be.
Perhaps this should be a comment on your original answer, rather than a separate answer? – Steve D May 26 '12 at 23:00
I'm a little confused here. Shouldn't the condition for all finite $MU$ modules be related to flatness over $\pi_* MU$, because one is asking about $\pi_* X \otimes MU_* M_*$ rather than $MU_*(X) \otimes MU_* M_*$ (i.e., in the usual LEFT a comodule structure is being used which doesn't seem to exist here)? Also, I'm not seeing how this is obvious; could you perhaps clarify? – Akhil Mathew May 28 '12 at 3:58
Steve, sorry about the etiquette of answers vs comments; can't expect an old guy to notice such distinctions. Akhil, I didn't say this is obvious. The paper is on my web page, [102], and the proof takes under two pages (because the serious math is in the references), but it probably shouldn't be repeated here. I can't answer your question precisely because I don't know what you mean by $MU_*M_*$, but here is the key lemma: If $X$ is an $R$-module, where $R$ is a commutative $S$-algebra such that $R_*R$ is $R_*$-flat, then the Hurewicz map gives $X_*$ a structure of $R_*R$-comodule. – Peter May Jun 3 '12 at 20:16
Whoops, I omitted a subscript and meant $\pi_* X \otimes_{MU_*} M_*$ (where $M_*$ is a graded module over $MU_*$). I will think some more about the lemma you mentioned, thogh. Thanks. – Akhil Mathew Jun 5 '12 at 2:00 | 2015-03-04 23:14:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9385780692100525, "perplexity": 336.9215845795804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463660.11/warc/CC-MAIN-20150226074103-00024-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://codegolf.stackexchange.com/tags/linear-algebra/hot?filter=year | # Tag Info
9
JavaScript, 58 bytes a=>a.some((r,y)=>r.some((c,x)=>(r/(r=c)==a[x][x])/-y--<c)) Try it online! When c==0 r/(r=c) is NaN or Infinity; (r/(r=c)==a[x][x]) is false (r/(r=c)==a[x][x])/-y-- is 0 or NaN (r/(r=c)==a[x][x])/-y--<c is false When y==0 (cells on main diagonal) and c>=1 (r/(r=c)==a[x][x])/-y-- is NaN or Infinity (r/(r=c)==a[x][x])...
7
Jelly, 14 13 bytes ŒDµḢ=Ɲo@ƑḢ>FẠ Try it online! -1 because I actually thought to check if the input can contain negative integers ŒDµ Consider the diagonals of the input matrix. Ḣ Pop the main diagonal; Ɲ for each pair of its adjacent elements = are they equal? (Especially note ...
5
Rattle, 111 bytes |sI^>s[[PgI#1I#s2=#-#1[^0[^1g2[^0=q]][1g2[^0[^1=q]P=#4<s<=#3-sg0>I~<I~s_3P=#4+<s<=#3sg0>I~<I~<[^~=q]]]]g1]]=1 Try it Online! Needless to say, Rattle is not built to handle matrices and this approach is pretty brute-force. However, this code really shows off most of Rattle's features! Explanation | ...
5
APL (Dyalog Classic), 59 bytes
{a b c d←⍵⋄l,2 1∘.○¯3○b÷⍨a-⍨l←2÷⍨a+d(+,-).5*⍨(4×b×c)+×⍨a-d}
Derivation of formula for eigenvalues:
$$\det|A-\lambda I|=0$$
$$\det\left|\begin{pmatrix} a & b \\ c & d \end{pmatrix}-\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}\right|=0$$
$$\det\left|\begin{pmatrix} a-\lambda & b \\ c & d-\lambda \end{pmatrix}\right|=0$$ ...
4
K (ngn/k), 14 bytes {y(+/x*)'/=#x} Try it online! -2 bytes thanks to @coltim on the k tree. The inner train multiplies x on the right side of the current matrix (instead of left side). Why (+/x*)' is also a matmul: (+/(e f;g h)*)' (a b;c d) ( (+/(e f;g h)*) a b; (+/(e f;g h)*) c d ) ( +/ (ae af;bg bh) ; +/ (ce cf;dg dh) ) ((ae+bg) (af+bh); (ce+dg) (cf+dh)) ...
4
Python 3, 69 bytes
lambda m,n:re.sub(f"(.)({10**~-n}\\1)*(0%r|$)"%{n},"",m)>""
import re
Try it online! Input is a flattened string of the matrix and its size; output is False for Jordan and True for not Jordan. Edit: as it wasn't specified when I posted this answer, my solution only works for matrices with single-digit elements. How ...
2
Jelly, 11 bytes
L=þZḋþ³ƊƓ¡
Try it online! A more modern update to Dennis' answer, be sure to give that an upvote. Additionally, this is a 9 byte answer that takes the dimensions of the matrix as the first 2 command line args and the matrix as the third. Both take the power via STDIN. How it works: L=þZḋþ³ƊƓ¡ - Main link. Takes A on the left; L - ...
2
Julia, 27 24
a$n=round.(exp(log(a)n))
I am not sure what is allowed, but...
julia> a
5×5 Matrix{Int64}:
 35 18 40 37 77
 31  5 45 23 73
 62 67 29 85 97
 20  9 83 70 65
  2 13 53 59 52
julia> round.(exp(log(a)5)) ≈ a^5
true
24 thanks to @MarcMush.
2
R, 96 93 83 75 67 bytes function(m,k,j=1:k^2%%(k+1),x=m[!j])any(m[j>1],diff(diag(m))&x,x>1) Try it online! Takes input as a matrix and its size. Outputs inverted TRUE/FALSE values.
1
Wolfram Mathematica, 150 144 137 69 bytes With[{m=#,l=Length@#,d=Diagonal},Union@@Join[m~d~#&/@2~Range~l~Join~-Range@l,{If[#2==1,#1,#2]}&~MapThread~{Differences@d@m,m~d~1}]=={0}]& I know... I don't like it either! But this was the shortest code that I could think of, and I'm definitely not an expert. Please take it with a huge grain of salt and ...
1
R, 98 86 bytes function(x,y,+=array)aperm(apply(x,1:2,*,y)+c(w<-dim(y),v<-dim(x)),c(1,3,2,4))+v*w Try it online! Reimplementation of .kronecker and outer for matrices. I do think there's a golfier approach out there, maybe using apply? 6 bytes golfed using apply and array thanks to Dominic van Essen! The builtins are %x% for kronecker(A,B,"*&...
1
Python 3, 128 bytes def f(A,n):r=range(len(A));return n and[[sum(A[i][j]*x[i]for i in r)for j in r]for x in f(A,n-1)]or[[i==j for i in r]for j in r] Try it online!
1
CSASM v2.5.0.2, 306 bytes func a: dup dup ldelem 0 pop $3 ldelem 1 pop$4 ldelem 2 pop $5 dup dup ldelem 0 pop$a ldelem 1 pop $1 ldelem 2 pop$2 push $1 push$5 mul push $2 push$4 mul sub push $2 push$3 mul push $a push$5 mul sub push $a push$4 mul push $1 push$3 mul sub push 3 newarr f32 pop $a push$a swap stelem 2 push $a swap stelem 1 push$a swap ...
1
TI-BASIC, 75 73 71 bytes -4 bytes thanks to @MarcMush Not 71 characters -- TI-BASIC is tokenized. :Prompt A,B :Disp {ʟA(2)ʟB(3)-ʟA(3)ʟB(2),ʟA(3)ʟB(1)-ʟA(1)ʟB(3),ʟA(1)ʟB(2)-ʟA(2)ʟB(1 There's a decent chance I could make this smaller; this is just a minified version of something I keep on my graphing calculator. Usage example: pgrmX A=?{1,2,3} B=?{0,1,0} ...
1
APL (Dyalog Extended), 11 bytes {⍵ ⍵⍴1,⍵/0} Try it online! Just for fun dfn submission ⍵/0 replicate ⍵ zeros 1, prepend a 1 ⍵ ⍵⍴ mold to ⍵×⍵ square So like for left argument 4 it will construct 1 0 0 0 0 And molding cycles from the beginning so, it will generate 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 the 4×4 identity matrix.
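The cycling idea in the APL answer above — a 1 followed by n zeros, reshaped into an n×n array — carries over to any language whose reshape repeats its source cyclically; a NumPy translation (illustrative, not a golfed submission):

```python
import numpy as np

def identity(n):
    # np.resize cycles [1, 0, ..., 0] (length n+1) to fill the n*n slots;
    # the off-by-one stride puts the 1s exactly on the diagonal
    return np.resize([1] + [0] * n, (n, n))

assert np.array_equal(identity(4), np.eye(4))
```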
https://physics.stackexchange.com/questions/270032/whats-the-intuition-behind-the-choi-jamiolkowski-isomorphism | # What's the intuition behind the Choi-Jamiolkowski isomorphism?
What is the intuition behind the Choi-Jamiolkowski isomorphism? It says that with every superoperator $\mathbb{E}$ we can associate a state given by a density matrix
$$J(\mathbb{E}) = (\mathbb{E} \otimes 1) (\sigma)$$
where $\sigma = \sum_{ij} | ii \rangle \langle jj |$ is the density matrix of some maximally entangled state $\sum_{i} | ii \rangle$.
And then the action of the superoperator is equal to
$$\mathbb{E}(\rho) = \operatorname{tr}_2(J(\mathbb{E}) \cdot 1 \otimes \rho^T).$$
What is the point of this? How does one use this in practice? Is it to simulate the action of the channel $\mathbb{E}$ by first preparing a specific state? I really don't understand the intuition behind this concept.
• This seems relevant: 1. mattleifer.info/2011/08/01/… 2. cstheory.stackexchange.com/search?q=choi+jamiolkowski Jul 26, 2016 at 21:33
• Are you looking for an intuition behind the isomorphism, or rather for applications? These seem to be two quite distinct questions. For applications, search www-m5.ma.tum.de/foswiki/pub/M5/Allgemeines/MichaelWolf/… for Choi. Jul 27, 2016 at 10:05
• More for the intuition and implications, although I wouldn't mind a brief comment on the uses. As far as I know it's mainly a mathematical tool in the study of quantum channels Jul 27, 2016 at 14:23
• This blog post by Matt Leifer starts with a description of the gate-teleportation intuition Jul 27, 2016 at 16:21
### The intuition
Let us consider a channel $\mathcal E$, which we want to apply to a state $\rho$. (This could equally well be part of a larger system.) Now consider the following protocol for applying $\mathcal E$ to $\rho$:
1. Denote the system of $\rho$ by $A$. Add a maximally entangled state $|\omega\rangle=\tfrac{1}{\sqrt{D}}\sum_{i=1}^D|i,i\rangle$ of the same dimension between systems $B$ and $C$:
2. Now project systems $A$ and $B$ on $|\omega\rangle$:
[This can be understood as a teleportation where we have only consider the "good" outcome, i.e., where we don't have to make a (generalized) Pauli correction on $C$, see also the discussion.]
Our intuition on teleportation (or a simple calculation) tells us that we now have the state $\rho$ in system $C$:
3. Now we can apply the channel $\mathcal E$ to $C$, yielding the desired state $\mathcal E(\rho)$ in system $C'$:
However, steps 2 and 3 commute (2 acts on $A$ and $B$, and 3 acts on $C$), so we can interchange the ordering and replace 2+3 by 4+5:
1. Apply $\mathcal E$ to $C$, which is the right part of $|\omega\rangle$:
This results in a state $\eta=(\mathbb I\otimes \mathcal E) (|\omega\rangle\langle\omega|)$, which is nothing but the Choi state of $\mathcal E$:
(This is the original step 3.)
2. We can now carry out the original step 3: Project $A$ and $B$ onto $|\omega\rangle$:
Doing so, we obtain $\mathcal E(\rho)$ in $C'$:
Steps 4 and 5 are exactly the Choi-Jamiolkowski isomorphism:
• Step 4 tells us how to obtain the Choi state $\eta$ for a channel $\mathcal E$
• Step 5 tells us how we can construct the channel from the state
Going through the math readily yields the expression for obtaining $\mathcal E$ from $\mathcal \eta$ given in the question: \begin{align*} \mathcal E(\rho) &= \langle \omega|_{AB}\rho_A\otimes \eta_{BC}|\omega\rangle_{AB}\\ & \propto \sum_{i,j} \langle i|\rho_A|j\rangle_{A} \langle i|_B\eta_{BC} |j\rangle_B \\ & = \mathrm{tr}_B[(\rho_B^T\otimes \mathbb I_C) \eta_{BC}]\ . \end{align*}
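Steps 4 and 5 can also be checked numerically. Below is a minimal NumPy sketch (not from the original answer) using the unnormalized convention of the question, with a phase-flip channel chosen arbitrarily as the example:

```python
import numpy as np

d = 2
I = np.eye(d)
Z = np.diag([1.0, -1.0])
p = 0.3
kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * Z]      # example channel E

def E(rho):                                       # direct Kraus action
    return sum(K @ rho @ K.conj().T for K in kraus)

# Step 4: Choi matrix J = (E x 1)(sigma), sigma = sum_{ij} |ii><jj|
omega = np.eye(d).reshape(d * d)                  # unnormalized sum_i |ii>
sigma = np.outer(omega, omega)
J = sum(np.kron(K, I) @ sigma @ np.kron(K, I).conj().T for K in kraus)

# Step 5: recover E(rho) = tr_2[ J (1 x rho^T) ]
def ptrace2(M):                                   # trace out second factor
    return M.reshape(d, d, d, d).trace(axis1=1, axis2=3)

rho = np.array([[0.6, 0.2], [0.2, 0.4]])
recovered = ptrace2(J @ np.kron(I, rho.T))
assert np.allclose(recovered, E(rho))
```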
### Discussion
The intuition above is closely linked to teleportation-based quantum computing and measurement based quantum computing. In teleportation-based computing, we first prepare the Choi state $\eta$ of a gate $\mathcal E$ beforehand, and subsequently "teleport through $\eta$", as in step 5. The difference is that we cannot postselect on the measurement outcome, so that we have to allow for all outcomes. This is, depending on the outcome $k$, we have implemented (for qubits) the channel $\mathcal E(\sigma_k \cdot \sigma_k)$, where $\sigma_k$ is a Pauli matrix, and generally $\mathcal E$ is a unitary. If we choose our gates carefully, they have "nice" commutation relations with Pauli matrices, and we can account for that in the course of the computation, just as in measurement based computing. In fact, measurement based computing can be understood as a way of doing teleportation based computation in a way where in each step, only two outcomes in the teleportation are allowed, and thus only one Pauli correction can occur.
### Applications
In short, the Choi-Jamiolkowski isomorphism allows to map many statements about states to statements about channels and vice versa. E.g., a channel is completely positive exactly if the Choi state is positive, a channel is entanglement breaking exactly if the Choi state is separable, and so further. Clearly, the isomorphism is very straightforward, and thus, one could equally well transfer any proof from channels to states and vice versa; however, often it is much more intuitive to work with one or the other, and to transfer the results later on.
This is how I have understood it and perhaps you will find it helpful:
Suppose you have a map (channel) $$\Phi$$ which acts on a system $$A$$. If $$A$$ exists in the state $$\rho$$ we can write,
$$\Phi(\rho) = \Phi(\rho_{ij}|i\rangle\langle j|) = \rho_{ij}\Phi(|i\rangle\langle j|)$$
Where the last step above follows because quantum mechanics is a linear theory. This means that knowing the matrices $$\Phi(|i\rangle\langle j|)$$ for each $$i$$ and $$j$$ helps us define the action of the map on any general density matrix and thus helps us define the map itself.
Note: $$\Phi(|i\rangle\langle j|)$$ above is a physically meaningless quantity because $$|i\rangle\langle j|$$ is not, in general, a valid density matrix. For now, let it just stand for one of the matrices that represent the map $$\Phi$$ and what it can mean physically we will see later.
Now suppose you have two systems of the same dimension as $$A$$. You have $$A \otimes B$$ which has been prepared in the Choi state given by $$|\Psi\rangle = \Sigma_i |i_Ai_B\rangle$$. Let us consider the action of the map $$\Phi \otimes I$$ (which is a valid transformation map) on this bipartite system.
$$\Phi \otimes I(\Sigma_{ij}|i_Ai_B\rangle\langle j_Aj_B|) = \Sigma_{ij}\Phi(|i_A\rangle\langle j_A|)\otimes|i_B\rangle\langle j_B|$$
And suppose you are able to physically perform the measurement $$\langle i_B|\sigma |j_B\rangle$$ on the above state what you get is $$\Phi(|i\rangle\langle j|)$$ itself.
Thus everything about $$\Phi$$ is encoded in the state $$\Phi \otimes I(|\text{Choi state}\rangle)$$ and vice versa.
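Concretely, since the Choi matrix is $J=\Sigma_{ij}\Phi(|i\rangle\langle j|)\otimes|i\rangle\langle j|$, each block $\Phi(|i\rangle\langle j|)$ can be sliced back out of $J$ by fixing the ancilla indices. A small NumPy illustration (the bit-flip channel here is an arbitrary example, not from the post):

```python
import numpy as np

d = 2
I = np.eye(d)
K = [np.sqrt(0.8) * I, np.sqrt(0.2) * np.array([[0., 1.], [1., 0.]])]  # bit flip

def Phi(m):                                   # channel applied to any matrix
    return sum(k @ m @ k.conj().T for k in K)

# Choi matrix J = sum_ij Phi(|i><j|) (x) |i><j|
basis = [np.eye(d)[:, i:i+1] for i in range(d)]
J = sum(np.kron(Phi(basis[i] @ basis[j].T), basis[i] @ basis[j].T)
        for i in range(d) for j in range(d))

# Read Phi(|i><j|) back out of J by slicing the ancilla indices
R = J.reshape(d, d, d, d)                     # axes: (out1, anc1, out2, anc2)
for i in range(d):
    for j in range(d):
        assert np.allclose(R[:, i, :, j], Phi(basis[i] @ basis[j].T))
```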
• It's not a valid density matrix so no quantum state will ever have that form. It's meaningless to ask how do we apply the map on such a matrix then. $\Phi(|0\rangle\langle 1 |)$ is just a physically meaningless matrix. Jul 28, 2018 at 10:31
• I don't get what you're trying to say. Sure, the operators $\Phi(|i\rangle\!\langle j|)$ characterise the channel, but how does this connect with the Choi representation? – glS
Sep 3, 2020 at 13:27 | 2022-06-26 12:35:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.957462728023529, "perplexity": 272.52616762044835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00017.warc.gz"} |
https://www.hackmath.net/en/math-problem/562 | # TV fail
A TV averages 25 failures per 10,000 hours of operation. Determine the probability of a TV failure within 200 hours of operation.
Result
p = 39.38 %
#### Solution:
$p = 100\cdot \left(1- (1- \dfrac{ 25}{ 10000})^{ 200}\right) = 39.38 \%$
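The formula can be reproduced in a couple of lines; the model treats each hour as an independent trial with per-hour failure probability 25/10000:

```python
# Probability of at least one failure in 200 hours, assuming each hour is an
# independent trial with failure probability 25/10000 (the model used above).
rate = 25 / 10_000
p = 100 * (1 - (1 - rate) ** 200)
print(round(p, 2))   # -> 39.38
```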
Our examples were largely sent in or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephrasings of the example. Thank you!
Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):
Be the first to comment!
Tips to related online calculators
Looking for a statistical calculator?
Need help calculate sum, simplify or multiply fractions? Try our fraction calculator.
Check out our ratio calculator.
Do you want to convert time units like minutes to seconds?
Would you like to compute count of combinations?
## Next similar math problems:
1. Component fail
There is a 90 percent chance that a particular type of component will perform adequately under high temperature conditions. If the device involved has four such components, determine the probability that the device is inoperable because exactly one of the
2. Bulbs
The probability that the bulb can operate 4000 hours is 0.3. What is the probability that exactly one of eight bulbs can operate 4000 hours?
3. Trams
Trams have an average speed of 23 km/h and run at 14-minute intervals. The pedestrian's walking speed is 3.3 km/h. At what intervals do trams pass the pedestrian?
4. Family
94 boys are born per 100 girls. Determine the probability that there are two boys in a randomly selected family with three children.
5. Shooters
In an army regiment there are six shooters. The first shooter hits the target with a probability of 49%, the next with 75%, 41%, 20%, 34%, 63%. Calculate the probability that the target is hit when all shoot at once.
6. Cards
Suppose there are three cards in a hat. One is red on both sides, one is black on both sides, and a third is red on one side and black on the other. We randomly pull one card out of the hat and see that one side of it is red. What is the probability
7. Distribution function
X: 2, 3, 4 and P: 0.35, 0.35, 0.3. Using the data in this table, calculate the distribution function F(x) and then the probability p(2.5
8. Sales
From sales statistics, item A is bought by 51% of people and item B is bought by 59% of people. What is the probability that out of 10 people, 2 buy item A and 8 buy item B?
9. Medicine
We test a medicine on 6 patients; the drug works for none of them. If the drug has a success rate of 20%, what is the probability that the medicine does not work for any of them?
10. The test
The test contains four questions, and there are five different answers to each of them, of which only one is correct, the others are incorrect. What is the probability that a student who does not know the answer to any question will guess the right answer
11. Internet anywhere
In school, 60% of pupils have access to the internet at home. A group of 8 students is chosen at random. Find the probability that a) exactly 5 have access to the internet. b) At least 6 students have access to the internet
12. Win in raffle
The raffle tickets were sold 200, 5 of which were winning. What is the probability that Peter, who bought one ticket will win?
13. Test
The teacher prepared a test with ten questions. For each question, the student chooses one answer from four options (A, B, C, D), of which only one is correct. The student did not prepare for the written exam at all. What is the probability that: a) He answers half correctly. b) He answers
14. Lottery
The lottery has 60,000 tickets, of which 6,200 win. What is the probability that a purchase of 12 tickets wins nothing?
15. Simple interest 3
Find the simple interest if 11928 USD at 2% for 10 weeks.
16. Theorem prove
We want to prove the statement: if the natural number n is divisible by six, then n is divisible by three. From what assumption do we start?
17. Researchers
Researchers ask 200 families whether or not they were the homeowner and how many cars they had. Their response was homeowner: 14 no car or one car, two or more cars 86, not homeowner: 38 no car or one car, two or more cars 62. What percent of the families | 2020-06-01 15:19:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4890027642250061, "perplexity": 990.7229402015269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00042.warc.gz"} |
https://guitarknights.com/guitar-origin-guitar-chords-vs-ukulele.html | Electric guitars and bass guitars have to be used with a guitar amplifier and loudspeaker or a bass amplifier and speaker, respectively, in order to make enough sound to be heard by the performer and audience. Electric guitars and bass guitars almost always use magnetic pickups, which generate an electric signal when the musician plucks, strums or otherwise plays the instrument. The amplifier and speaker strengthen this signal using a power amplifier and a loudspeaker. Acoustic guitars that are equipped with a piezoelectric pickup or microphone can also be plugged into an instrument amplifier, acoustic guitar amp or PA system to make them louder. With electric guitar and bass, the amplifier and speaker are not just used to make the instrument louder; by adjusting the equalizer controls, the preamplifier, and any onboard effects units (reverb, distortion/overdrive, etc.) the player can also modify the tone (aka timbre or "colour") and sound of the instrument. Acoustic guitar players can also use the amp to change the sound of their instrument, but in general, acoustic guitar amps are used to make the natural acoustic sound of the instrument louder without changing its sound that much.
Solid body seven-string guitars were popularized in the 1980s and 1990s. Other artists go a step further, by using an eight-string guitar with two extra low strings. Although the most common seven-string has a low B string, Roger McGuinn (of The Byrds and Rickenbacker) uses an octave G string paired with the regular G string as on a 12-string guitar, allowing him to incorporate chiming 12-string elements in standard six-string playing. In 1982 Uli Jon Roth developed the "Sky Guitar", with a vastly extended number of frets, which was the first guitar to venture into the upper registers of the violin. Roth's seven-string and "Mighty Wing" guitar features a wider octave range.[citation needed]
YellowBrickCinema’s deep sleep music videos have been specifically composed to relax mind and body, and are suitable for babies, children, teens, and adults who need slow, beautiful, soft, soothing music to assist them to fall asleep. See them as a form of sleep meditation or sleep hypnosis gently easing you into that wonderful relaxing world of healing sleep.
YellowBrickCinema’s Sleep Music is the perfect relaxing music to help you go to sleep, and enjoy deep sleep. Our music for sleeping is the best music for stress relief, to reduce insomnia, and encourage dreaming. Our calm music for sleeping uses Delta Waves and soft instrumental music to help you achieve deep relaxation, and fall asleep. Our relaxing sleep music can be used as background music, meditation music, relaxation music, peaceful music and sleep music. Let our soothing music and calming music help you enjoy relaxing deep sleep.
After you've made your selections from the best selection of guitar and bass tabs, you'll want to download the Musicnotes.com apps for your Android, iPad, iPhone, or other device to gain access to your digital library anywhere. The option to print the file is still available, and you will also have all of your sheet music stored in your personal account to access your digital file from any computer or mobile device. If any issues arise, make sure to contact our customer support of musicians, ready to help fellow musicians.
The ratio of the spacing of two consecutive frets is $\sqrt[12]{2}$ (the twelfth root of two). In practice, luthiers determine fret positions using the constant 17.817, an approximation to $1/(1-1/\sqrt[12]{2})$. If the nth fret is a distance x from the bridge, then the distance from the (n+1)th fret to the bridge is x-(x/17.817).[15] Frets are available in several different gauges and can be fitted according to player preference. Among these are "jumbo" frets, which have a much thicker gauge, allowing for use of a slight vibrato technique from pushing the string down harder and softer. "Scalloped" fretboards, where the wood of the fretboard itself is "scooped out" between the frets, allow a dramatic vibrato effect. Fine frets, much flatter, allow a very low string-action, but require that other conditions, such as curvature of the neck, be well-maintained to prevent buzz.
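As a quick sketch of the 17.817 rule just described (the 648 mm scale length is an arbitrary assumption for illustration, not from the text):

```python
# Fret positions from the luthier's rule above: each successive fret sits
# 1/17.817 of the remaining string length closer to the bridge.
scale = 648.0          # assumed scale length in mm (nut-to-bridge distance)
x = scale
positions = []         # distance of fret n from the bridge
for n in range(12):
    x = x - x / 17.817
    positions.append(x)

# After 12 frets the remaining length should be (nearly) half the scale,
# i.e. one octave higher:
print(round(positions[-1] / scale, 4))   # -> 0.5
```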
Kyser®'s nickel-plated electric guitar strings give you a warm, rich, full sound. They are precision wound around a carefully drawn hex shaped carbon steel core. The outer nickel-plated wrap maintains constant contact with the hex core resulting in a string that vibrates evenly for maximum sustain, smooth sound, and allows easy bending. Click the individual string images to view more gauge information.
Pickups are transducers attached to a guitar that detect (or "pick up") string vibrations and convert the mechanical energy of the string into electrical energy. The resultant electrical signal can then be electronically amplified. The most common type of pickup is electromagnetic in design. These contain magnets that are within a coil, or coils, of copper wire. Such pickups are usually placed directly underneath the guitar strings. Electromagnetic pickups work on the same principles and in a similar manner to an electric generator. The vibration of the strings creates a small electric current in the coils surrounding the magnets. This signal current is carried to a guitar amplifier that drives a loudspeaker.
A guitar strap is a strip of material with an attachment mechanism on each end, made to hold a guitar via the shoulders at an adjustable length. Guitars have varying accommodations for attaching a strap. The most common are strap buttons, also called strap pins, which are flanged steel posts anchored to the guitar with screws. Two strap buttons come pre-attached to virtually all electric guitars, and many steel-string acoustic guitars. Strap buttons are sometimes replaced with "strap locks", which connect the guitar to the strap more securely.
This is a thick little book with nice big chord diagrams, and it shows chords in various positions. I like that it is spiral bound. The only thing missing is tabs for the letter sections, so that you can easily flip to the note/chord you're searching for rather than having to turn a lot of pages to get to what you want. I plan to put in my own divider tabs.
The playing of conventional chords is simplified by open tunings, which are especially popular in folk, blues guitar and non-Spanish classical guitar (such as English and Russian guitar). For example, the typical twelve-bar blues uses only three chords, each of which can be played (in every open tuning) by fretting six-strings with one finger. Open tunings are used especially for steel guitar and slide guitar. Open tunings allow one-finger chords to be played with greater consonance than do other tunings, which use equal temperament, at the cost of increasing the dissonance in other chords.
The nut is a small strip of bone, plastic, brass, corian, graphite, stainless steel, or other medium-hard material, at the joint where the headstock meets the fretboard. Its grooves guide the strings onto the fretboard, giving consistent lateral string placement. It is one of the endpoints of the strings' vibrating length. It must be accurately cut, or it can contribute to tuning problems due to string slippage or string buzz. To reduce string friction in the nut, which can adversely affect tuning stability, some guitarists fit a roller nut. Some instruments use a zero fret just in front of the nut. In this case the nut is used only for lateral alignment of the strings, the string height and length being dictated by the zero fret.
In an acoustic instrument, the body of the guitar is a major determinant of the overall sound quality. The guitar top, or soundboard, is a finely crafted and engineered element made of tonewoods such as spruce and red cedar. This thin piece of wood, often only 2 or 3 mm thick, is strengthened by differing types of internal bracing. Many luthiers consider the top the dominant factor in determining the sound quality. The majority of the instrument's sound is heard through the vibration of the guitar top as the energy of the vibrating strings is transferred to it. The body of an acoustic guitar has a sound hole through which sound projects. The sound hole is usually a round hole in the top of the guitar under the strings. Air inside the body vibrates as the guitar top and body is vibrated by the strings, and the response of the air cavity at different frequencies is characterized, like the rest of the guitar body, by a number of resonance modes at which it responds more strongly.
Welcome to video eight in the Beginner Guitar Quick-Start Series. In this lesson, we’re going to go through your first two chords. You’ll learn A minor 7 and C major. These two guitar chords will be useful for you because you’ll be using them often through your guitar career. A minor 7 is good to start with because it is fairly easy, and C major is great chord to learn how to play clean sounding chords.
the fifth, which is a perfect fifth above the root; consequently, the fifth is a third above the third—either a minor third above a major third or a major third above a minor third.[13][14] The major triad has a root, a major third, and a fifth. (The major chord's major-third interval is replaced by a minor-third interval in the minor chord, which shall be discussed in the next subsection.)
Octo Music Studio teaches piano and guitar lessons to adults and children 5 years and older. Our lessons are personalized and are exciting this results in the fastest possible progress for all students. We use the best computer guided music teaching programs to teach and are ranked in the top best 18 best music studios in Brooklyn, NY. Our piano or guitar teachers have over 30 years experience. Learn Gospel,... | 2019-05-20 12:24:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21516309678554535, "perplexity": 3094.5347969877803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255944.3/warc/CC-MAIN-20190520121941-20190520143941-00369.warc.gz"} |
https://msp.org/agt/2011/11-2/p03.xhtml | #### Volume 11, issue 2 (2011)
ISSN (electronic): 1472-2739; ISSN (print): 1472-2747
Volume distortion in groups
### Hanna Bennett
Algebraic & Geometric Topology 11 (2011) 655–690
##### Abstract
Given a space $Y$ in $X$, a cycle in $Y$ may be filled with a chain in two ways: either by restricting the chain to $Y$ or by allowing it to be anywhere in $X$. When the pair $(G,H)$ acts on $(X,Y)$, we define the $k$-volume distortion function of $H$ in $G$ to measure the large-scale difference between the volumes of such fillings. We show that these functions are quasi-isometry invariants, and thus independent of the choice of spaces, and provide several bounds in terms of other group properties, such as Dehn functions. We also compute the volume distortion in a number of examples, including characterizing the $k$-volume distortion of $\mathbb{Z}^k$ in $\mathbb{Z}^k\rtimes_M\mathbb{Z}$, where $M$ is a diagonalizable matrix. We use this to prove a conjecture of Gersten.
##### Keywords
geometric group theory, volume distortion, subgroup distortion, Dehn function
##### Mathematical Subject Classification 2000
Primary: 20F65
Secondary: 20F67, 57M07 | 2020-06-03 04:50:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6846514344215393, "perplexity": 1002.2504184744519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347428990.62/warc/CC-MAIN-20200603015534-20200603045534-00551.warc.gz"} |
https://stats.stackexchange.com/questions/490493/maximum-likelihood-estimation-of-the-variance | # maximum likelihood estimation of the variance [closed]
In what situations the maximum likelihood estimation of the variance of distribution can severely ruin the estimation? | 2021-05-17 22:48:34 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9510749578475952, "perplexity": 929.3270461028046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00151.warc.gz"} |
https://cs.stackexchange.com/questions/65501/maximum-matching-in-a-bipartite-graph-for-solving-a-chess-rook-maximization-prob | # maximum matching in a bipartite graph for solving a chess rook maximization problem
There's an n x n chessboard where some cells are instead holes. I want to place as many rooks as possible so that the rooks won't be able to capture each other. Rooks cannot be placed on the holes but can jump over them. How can I solve this problem using the maximum-matching principle in a bipartite graph?
• What did you try? Where did you get stuck? We're happy to help with conceptual problems but just solving homework-style exercises for you is unlikely to really help. – David Richerby Nov 3 '16 at 17:19
• @tobinulilo Really?! If in a bipartite graph you cannot match two vertices $u, v$, you don't know what that corresponds to? If you don't then you probably do not know what bipartite matching is. Fine, here is the answer, if the cell $(x, y)$ has a hole then there is no edge $(x, y)$ in the bipartite graph, otherwise there is an edge. – aelguindy Nov 3 '16 at 17:49
• This now makes everything much clearer. Thank you very much. I was really confused and could not connect concepts to each other. Basically in the chess/rooks problem, the placement of one rook per row-column pair corresponds to the bipartite matching problem, and then on top of that when we add the holes, we are basically making the problem correspond to the maximum matching in a bipartite graph. Am I correct in that sense? Thus, simply maximizing the n x n chess problem I have described corresponds to finding the maximum matching in an n x n bipartite graph? – tobinulilo Nov 3 '16 at 18:45 | 2021-04-12 13:39:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7384377717971802, "perplexity": 358.1911827441234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067400.24/warc/CC-MAIN-20210412113508-20210412143508-00598.warc.gz"} |
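Following the reduction described in the comments (rows and columns as the two vertex classes, a free cell as an edge; since rooks jump over holes, a hole never blocks an attack, so plain row/column matching suffices), here is a stdlib-only augmenting-path sketch. The function name `max_rooks` is my own, not from the thread:

```python
# Maximum non-attacking rook placement via maximum bipartite matching.
# Rows are left vertices, columns are right vertices; a free cell (r, c)
# is an edge. Hungarian-style augmenting paths, standard library only.

def max_rooks(board):
    """board[r][c] is True for a free cell, False for a hole."""
    n = len(board)
    match_col = [-1] * n          # match_col[c] = row currently matched to column c

    def try_row(r, seen):
        # Try to find an augmenting path starting from row r.
        for c in range(n):
            if board[r][c] and not seen[c]:
                seen[c] = True
                if match_col[c] == -1 or try_row(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    return sum(try_row(r, [False] * n) for r in range(n))

# 3x3 board with a hole at (0, 0): still admits 3 non-attacking rooks.
board = [[False, True, True],
         [True,  True, True],
         [True,  True, True]]
print(max_rooks(board))   # -> 3
```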
https://www.advanceduninstaller.com/DriverMax-7-d75721d82624ebe6a3a23193da14c5e9-application.htm | DriverMax 7
A way to uninstall DriverMax 7 from your PC
You can find below details on how to uninstall DriverMax 7 for Windows. It was developed for Windows by Innovative Solutions; you can find out more about Innovative Solutions at http://www.innovative-sol.com/. DriverMax 7 is frequently set up in the C:\Program Files (x86)\Innovative Solutions\DriverMax folder; however, this location may vary depending on the user's choice while installing the application. DriverMax 7's complete uninstall command line is C:\Program Files (x86)\Innovative Solutions\DriverMax\unins000.exe. drivermax.exe is the program's main file and it takes approximately 8.39 MB (8795000 bytes) on disk.
DriverMax 7 is composed of the following executables which occupy 14.71 MB (15423349 bytes) on disk:
• drivermax.exe (8.39 MB)
• innostp.exe (1.02 MB)
• innoupd.exe (1.58 MB)
• rbk32.exe (13.38 KB)
• rbk64.exe (13.38 KB)
• stop_dmx.exe (450.88 KB)
• unins000.exe (1.04 MB)
• dpinst.exe (663.97 KB)
• dpinst.exe (1.06 MB)
• dpinst.exe (531.97 KB)
This web page is about DriverMax 7 version 7.44.0.738 alone.
After the uninstall process, the application leaves leftovers on the computer. A few of these are shown below.
Directories that were left behind:
• C:\Program Files (x86)\Innovative Solutions\DriverMax
Check for and delete the following files from your disk when you uninstall DriverMax 7:
• C:\Program Files (x86)\Innovative Solutions\DriverMax\dmx.url
• C:\Program Files (x86)\Innovative Solutions\DriverMax\DPInst\amd64\dpinst.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\DPInst\ia64\dpinst.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\DPInst\x86\dpinst.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\drivermax.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\drivermax.ntv.lng
• C:\Program Files (x86)\Innovative Solutions\DriverMax\drivermax.ROM.lng
• C:\Program Files (x86)\Innovative Solutions\DriverMax\InnoSolUOs.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\innostp.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\innoupd.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\rbk32.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\rbk64.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\stop_dmx.exe
• C:\Program Files (x86)\Innovative Solutions\DriverMax\sync.dll
• C:\Program Files (x86)\Innovative Solutions\DriverMax\unins000.dat
• C:\Program Files (x86)\Innovative Solutions\DriverMax\unins000.exe
Many times the following registry keys will not be uninstalled:
• HKEY_CURRENT_USER\Software\Innovative Solutions\DriverMax
• HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\DMX5_is1
Open regedit.exe to remove the values below from the Windows Registry:
• HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\bam\UserSettings\S-1-5-21-369909539-2070682481-3992334606-1001\\Device\HarddiskVolume3\Program Files (x86)\Innovative Solutions\DriverMax\drivermax.exe
• HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\bam\UserSettings\S-1-5-21-369909539-2070682481-3992334606-1001\\Device\HarddiskVolume3\Program Files (x86)\Innovative Solutions\DriverMax\stop_dmx.exe
• HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\bam\UserSettings\S-1-5-21-369909539-2070682481-3992334606-1001\\Device\HarddiskVolume3\Program Files (x86)\Innovative Solutions\DriverMax\unins000.exe
• HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\bam\UserSettings\S-1-5-21-369909539-2070682481-3992334606-1001\\Device\HarddiskVolume3\Users\JENNIF~1\AppData\Local\Temp\is-DRMGS.tmp\drivermax.tmp
• HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\bam\UserSettings\S-1-5-21-369909539-2070682481-3992334606-1001\\Device\HarddiskVolume3\Users\JENNIF~1\AppData\Local\Temp\is-Q3JJ2.tmp\drivermax.tmp
How to delete DriverMax 7 from your PC with the help of Advanced Uninstaller PRO
DriverMax 7 is an application released by the software company Innovative Solutions. Some people choose to uninstall this program. This can be hard, because doing it by hand takes some know-how about Windows internals. One of the easiest ways to uninstall DriverMax 7 is to use Advanced Uninstaller PRO. Take the following steps:
1. If you don't have Advanced Uninstaller PRO on your PC, add it. This is good because Advanced Uninstaller PRO is the best uninstaller and general utility to take care of your PC.
• set up Advanced Uninstaller PRO
2. Start Advanced Uninstaller PRO. It's recommended to take some time to get familiar with Advanced Uninstaller PRO's interface and wealth of features available. Advanced Uninstaller PRO is a very good program.
3. Click on the General Tools category
4. Activate the Uninstall Programs feature
5. A list of the applications existing on your computer will be made available to you
6. Scroll the list of applications until you locate DriverMax 7 or simply click the Search feature and type in "DriverMax 7". If it is installed on your PC the DriverMax 7 program will be found automatically. When you select DriverMax 7 in the list of apps, the following information about the program is shown to you:
• Safety rating (in the lower left corner). This tells you the opinion other people have about DriverMax 7, ranging from "Highly recommended" to "Very dangerous".
• Reviews by other people - Click on the Read reviews button.
• Details about the app you are about to uninstall, by pressing the Properties button.
For instance you can see that for DriverMax 7:
• The publisher is: http://www.innovative-sol.com/
• The uninstall string is: C:\Program Files (x86)\Innovative Solutions\DriverMax\unins000.exe
7. Click the Uninstall button. A confirmation window will show up. Accept the uninstall by clicking Uninstall. Advanced Uninstaller PRO will then remove DriverMax 7.
8. After removing DriverMax 7, Advanced Uninstaller PRO will ask you to run a cleanup. Press Next to proceed with the cleanup. All the items that belong DriverMax 7 that have been left behind will be detected and you will be able to delete them. By uninstalling DriverMax 7 using Advanced Uninstaller PRO, you are assured that no registry entries, files or folders are left behind on your system.
Your PC will remain clean, speedy and able to take on new tasks.
Disclaimer
The text above is not a recommendation to remove DriverMax 7 by Innovative Solutions from your PC, nor are we saying that DriverMax 7 by Innovative Solutions is not a good application for your computer. This text simply contains detailed instructions on how to remove DriverMax 7 in case you decide this is what you want to do. The information above contains registry and disk entries that Advanced Uninstaller PRO stumbled upon and classified as "leftovers" on other users' PCs.
2016-06-19 / Written by Daniel Statescu for Advanced Uninstaller PRO | 2020-02-24 01:24:47 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8197638392448425, "perplexity": 14471.837341356477}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00016.warc.gz"} |
https://www.convexoptimization.com/wikimization/index.php?title=Nonnegative_matrix_factorization&diff=prev&oldid=1941 | # Nonnegative matrix factorization
## Revision as of 04:24, 17 February 2010
Exercise from Convex Optimization & Euclidean Distance Geometry, ch.4:
Given the rank-2 nonnegative matrix $X=\left[\begin{array}{ccc}17&28&42\\ 16&47&51\\ 17&82&72\end{array}\right],$
find a nonnegative factorization $X=WH$ by solving
$\begin{array}{cl}\mbox{find}_{A\in\mathbb{S}^3,\,B\in\mathbb{S}^3,\,W\in\mathbb{R}^{3\times2},\,H\in\mathbb{R}^{2\times3}}&W\,,\,H\\ \mbox{subject to}&Z=\left[\begin{array}{ccc}I&W^{\rm T}&H\\W&A&X\\H^{\rm T}&X^{\rm T}&B\end{array}\right]\succeq0\\ &W\geq0\\ &H\geq0\\ &\mbox{rank}\,Z\leq2\end{array}$
which follows from the fact that, at optimality,
$Z^\star=\left[\begin{array}{c}I\\W\\H^{\rm T}\end{array}\right]\left[\,I~~W^{\rm T}~~H\,\right]$
Use the known closed-form solution for a direction vector $Y$ to regulate rank (the rank constraint is thereby replaced) by Convex Iteration;
set $Z^\star=Q\Lambda Q^{\rm T}\in\mathbb{S}^{8}$ to a nonincreasingly ordered diagonalization and $U^\star=Q(:,3\!:\!8)\in\mathbb{R}^{8\times6}$, then $Y=U^\star U^{\star\rm T}$.
In summary, initialize $Y=I$, then alternate solution of
$\begin{array}{cl}\mbox{minimize}_{A\in\mathbb{S}^3,\,B\in\mathbb{S}^3,\,W\in\mathbb{R}^{3\times2},\,H\in\mathbb{R}^{2\times3}}&\langle Z\,,Y\rangle\\ \mbox{subject to}&Z=\left[\begin{array}{ccc}I&W^{\rm T}&H\\W&A&X\\H^{\rm T}&X^{\rm T}&B\end{array}\right]\succeq0\\ &W\geq0\\ &H\geq0\end{array}$
with
$Y=U^\star U^{\star\rm T}.$ Global convergence occurs, in this example, in only a few iterations. | 2021-12-04 02:08:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8576277494430542, "perplexity": 8215.080366456621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362923.11/warc/CC-MAIN-20211204003045-20211204033045-00062.warc.gz"}
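As a quick sanity check on the exercise's premise (the SDP itself needs a convex solver, which is beyond a few lines), the claim that $X$ has rank 2 can be verified in plain Python: the determinant vanishes while a 2×2 minor does not. The helper function names here are my own, not from the book:

```python
# Verify that X has rank 2:
# det(X) == 0 rules out rank 3, and a nonzero 2x2 minor rules out rank <= 1.

X = [[17, 28, 42],
     [16, 47, 51],
     [17, 82, 72]]

def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def minor2(m, rows, cols):
    (r0, r1), (c0, c1) = rows, cols
    return m[r0][c0] * m[r1][c1] - m[r0][c1] * m[r1][c0]

print(det3(X))                    # 0   -> rank < 3
print(minor2(X, (0, 1), (0, 1)))  # 351 -> rank >= 2, so rank(X) == 2
```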
https://gmatclub.com/forum/is-x-y-y-2-1-x-y-2-y-127370.html |
# Is x|y| > y^2? (1) x > y (2) y > 0
Manager
Joined: 25 Aug 2011
Posts: 137
Location: India
GMAT 1: 730 Q49 V40
WE: Operations (Insurance)
Is x|y| > y^2? (1) x > y (2) y > 0 [#permalink]
### Show Tags
11 Feb 2012, 00:54
Is x|y| > y^2?
(1) x > y
(2) y > 0
I rephrased the question as x|y| > |y| (since y^2 = |y|). On solving this, I rephrased it as: is x > 1?
Based on this rephrased version, the answer is D. However, the OA is C.
Have I solved the equation wrongly?
Math Expert
Joined: 02 Sep 2009
Posts: 53020
Re: Is x|y| > y^2? (1) x > y (2) y > 0 [#permalink]
11 Feb 2012, 01:10
devinawilliam83 wrote:
is x|y|>y^2?
1.x>y
2.y>0
I rephrased the question as x|y| > |y| (since y^2 = |y|). On solving this, I rephrased it as: is x > 1?
Based on this rephrased version, the answer is D. However, the OA is C.
Have I solved the equation wrongly?
Yes, your rephrasing is wrong. You cannot substitute $$y^2$$ with $$|y|$$, because generally they are not equal: $$\sqrt{y^2}=|y|$$. Next, even if it were "is $$x|y|>|y|$$?" you still cannot reduce it by $$|y|$$ and write "is $$x>1$$", as $$y$$ can be zero and you cannot reduce/divide by zero. $$x|y|>|y|$$ can be rephrased as $$|y|(x-1)>0$$.
Is x|y|>y^2?
(1) x>y --> if $$x>y>0$$ then obviously $$x|y|>y^2$$ but if $$0>x>y$$ then $$x|y|<0<y^2$$. Not sufficient.
(2) y>0 --> $$|y|=y$$. No info about x. Not sufficient.
(1)+(2) Since $$x>y>0$$ then $$xy>y^2$$. Sufficient.
Hope it's clear.
_________________
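Bunuel's sufficiency analysis can be spot-checked by brute force. This sketch is mine (not part of the thread; the names are arbitrary) — it scans a small integer grid and records which answers to "is x·|y| > y^2?" survive under each condition:

```python
# Brute-force check of the data-sufficiency logic over a small integer grid.

def claim(x, y):
    return x * abs(y) > y ** 2

pairs = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]

# Statement (1) alone: x > y. Both answers occur, so it is insufficient.
s1 = {claim(x, y) for (x, y) in pairs if x > y}
# Statement (2) alone: y > 0. Both answers occur, so it is insufficient.
s2 = {claim(x, y) for (x, y) in pairs if y > 0}
# Together: x > y > 0. Only "yes" occurs, so combined they are sufficient.
both = {claim(x, y) for (x, y) in pairs if x > y > 0}

print(s1, s2, both)   # {False, True} {False, True} {True}
```

Answer C falls out directly: each statement alone leaves both answers possible, while together they force a single answer.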
##### General Discussion
Senior Manager
Joined: 28 Jun 2009
Posts: 373
Location: United States (MA)
26 Sep 2012, 19:56
Asked : is x|y| > y^2
Simplifying, is x|y| > |y|*|y| i.e is x > |y|
i) x > y. But this doesn't mean x > |y| unless y is positive
ii)y > 0 ie. y is positive
Hence, both statements together are sufficient. Hence C.
Director
Joined: 22 Mar 2011
Posts: 599
WE: Science (Education)
Updated on: 27 Sep 2012, 03:21
piyatiwari wrote:
Asked : is x|y| > y^2
Simplifying, is x|y| > |y|*|y| i.e is x > |y|
i) x > y. But this doesn't mean x > |y| unless y is positive
ii)y > 0 ie. y is positive
Hence, both statements together are sufficient. Hence C.
Simplifying, is x|y| > |y|*|y| i.e is x > |y|
You can simplify, or more precisely, divide through by |y| if you know that y is non-zero.
The question "Is x|y| > y^2?" is equivalent to "Is y non-zero and x > |y|?"
(1) y can be 0. Also, x can be greater than |y|.
Not sufficient.
(2) Not sufficient, as nothing is known about x.
(1) and (2) together: y > 0 and x > y = |y|.
Sufficient.
_________________
PhD in Applied Mathematics
Love GMAT Quant questions and running.
Originally posted by EvaJager on 26 Sep 2012, 23:23.
Last edited by EvaJager on 27 Sep 2012, 03:21, edited 1 time in total.
Board of Directors
Joined: 01 Sep 2010
Posts: 3353
27 Sep 2012, 02:17
I 'd like to know if my approach is correct
x | y | > y^2 ------> x < - y or x > y
1) x > y, but we do not know if x is less than -y. Insufficient.
2) y> 0 we know only that y is positive but nothing about x
1) + 2) y is positive and x must be greater than y. suff.
is correct ??
_________________
Director
Joined: 22 Mar 2011
Posts: 599
WE: Science (Education)
27 Sep 2012, 03:25
carcass wrote:
I 'd like to know if my approach is correct
x | y | > y^2 ------> x < - y or x > y
1) x > y, but we do not know if x is less than -y. Insufficient.
2) y> 0 we know only that y is positive but nothing about x
1) + 2) y is positive and x must be greater than y. suff.
is correct ??
x | y | > y^2 ------> x < - y or x > y NO
First of all, y must be non-zero. Then dividing through by |y|, we obtain x > |y| > 0.
From the given inequality, $$x|y| > y^2>0$$ we deduce that x > 0, x cannot be negative.
And you cannot automatically assume that y is positive.
_________________
Intern
Joined: 28 Jan 2013
Posts: 9
Location: India
Schools: HBS '16, HEC Jan'16
GPA: 3
WE: Marketing (Manufacturing)
Re: Is x|y| > y^2? (1) x > y (2) y > 0 [#permalink]
07 Jun 2013, 07:45
Can we take
$$Y^2$$ = $$[y][y]$$ and cancel [y] from both sides if y is not equal to zero.
Math Expert
Joined: 02 Sep 2009
Posts: 53020
Re: Is x|y| > y^2? (1) x > y (2) y > 0 [#permalink]
07 Jun 2013, 07:59
karjan07 wrote:
Can we take
$$Y^2$$ = $$[y][y]$$ and cancel [y] from both sides if y is not equal to zero.
If $$y\neq{0}$$ and $$x|y|>y^2$$ we CAN reduce by $$|y|$$ and get: $$x>|y|$$.
Hope it's clear.
_________________
Senior Manager
Joined: 13 May 2013
Posts: 421
Re: Is x|y| > y^2? (1) x > y (2) y > 0 [#permalink]
30 Jun 2013, 13:33
Is x|y|>y^2?
(1) x>y
Though we know that x>y, we don't know anything about the signs of x and y. For example:
x=2, y=1
x|y|>y^2
2|1|>1^2
2>1 Valid
x=2, y=-3
x|y|>y^2
2|-3|>-3^2
6>9 Invalid
INSUFFICIENT
(2) y>0
This tells us nothing about x. For example:
x=10, y=1
x|y|>y^2
10|1|>1^2
10>1 Valid
x=1, y=10
x|y|>y^2
1|10|>10^2
10>100 Invalid
INSUFFICIENT
1+2) x>y, y>0 ===> x>y>0
If y>0 then x is greater than zero (and y)
x=10, y=9
x|y|>y^2
10|9|>9^2
90>81
Think of it like this: The RHS is the smaller number times the smaller number. The LHS is the smaller number times a number larger than the smaller number.
SUFFICIENT
(C)
Senior Manager
Joined: 13 May 2013
Posts: 421
Re: Is x|y| > y^2? (1) x > y (2) y > 0 [#permalink]
22 Jul 2013, 09:16
Is x|y|>y^2?
(1) x>y
x may be greater than y but we still cannot be sure if x|y|>y^2.
For example:
x>y
2>1
2|1| > 1^2
2>1 Valid
x>y
-1>-2
-1|-2| > -2^2
-2 > 4 Invalid
INSUFFICIENT
(2) y>0
This tells us nothing about x.
INSUFFICIENT
1+2) x>y and y>0. Therefore, x>y>0. We saw in #1 that when y is greater than zero, the inequality holds true. When it is less than zero, the inequality does not hold true. Just to be sure:
x>y>0
2>1>0
x|y|>y^2
2|1|>1^2
2>1
Also, take note that x|y| is y*a number larger than y. This will always be greater than y*y (y^2) so long as x is positive.
(C)
Intern
Joined: 16 May 2014
Posts: 39
Re: Is x*|y| > y^2? [#permalink]
17 May 2014, 19:31
russ9 wrote:
Is x·|y| > y^2?
(1) x > y
(2) y > 0
I'll paste my question in the second post. Good Luck!
We have x|y| > y^2.
Now, to solve this, we need to understand what we actually want to know.
y^2 = |y|^2
x|y| > |y|^2
x|y| - |y|^2 > 0
or (x - |y|)|y| > 0
Now, we know that |y| is always greater than 0. Thus, our inequality yields x - |y| > 0.
So "Is x|y| > y^2?" translates to "Is x > |y|?"
Now, lets come to the statements,
Statement 1 says, x>y; which is not sufficient. In case y is negative, we can't say anything about x being greater than absolute value of y
Statement 2 say, y>0, which in itself is insufficient as we don't know anything about x here.
But when we combine these two statements, we would get the answer. As we know when y > 0; |y| = y and statement 1 says x > y, and if y > 0 we can write x > |y|
Hope it helps!!!
Kudos if it helped!!!
Math Expert
Joined: 02 Sep 2009
Posts: 53020
Re: Is x*|y| > y^2? [#permalink]
18 May 2014, 00:27
mittalg wrote:
russ9 wrote:
Is x·|y| > y^2?
(1) x > y
(2) y > 0
I'll paste my question in the second post. Good Luck!
We have x|y| > y^2.
Now, to solve this, we need to understand what we actually want to know.
y^2 = |y|^2
x|y| > |y|^2
x|y| - |y|^2 > 0
or (x - |y|)|y| > 0
Now, we know that |y| is always greater than 0. Thus, our inequality yields x - |y| > 0.
So "Is x|y| > y^2?" translates to "Is x > |y|?"
Now, lets come to the statements,
Statement 1 says, x>y; which is not sufficient. In case y is negative, we can't say anything about x being greater than absolute value of y
Statement 2 say, y>0, which in itself is insufficient as we don't know anything about x here.
But when we combine these two statements, we would get the answer. As we know when y > 0; |y| = y and statement 1 says x > y, and if y > 0 we can write x > |y|
Hope it helps!!!
Kudos if it helped!!!
Everything is correct except that |y| is more than or equal to 0. So, x*|y| > y^2 after reducing by |y| can be translated: is $$x > |y|>{0}$$?
_________________
Manager
Joined: 10 Mar 2013
Posts: 192
GMAT 1: 620 Q44 V31
GMAT 2: 690 Q47 V37
GMAT 3: 610 Q47 V28
GMAT 4: 700 Q50 V34
GMAT 5: 700 Q49 V36
GMAT 6: 690 Q48 V35
GMAT 7: 750 Q49 V42
GMAT 8: 730 Q50 V39
Re: Is x*|y| > y^2? [#permalink]
05 Oct 2014, 15:19
Hi Experts,
Is the following solution correct?
Question: Is x·|y| > y^2?
True when:
Case 1:
y>0 and x > y
Case 2:
y < 0 and x < -y
(1) NS; we don't know whether y > 0
(2) NS; we don't know anything about x
(1) + (2)
Case 1, so sufficient
Math Expert
Joined: 02 Sep 2009
Posts: 53020
Re: Is x*|y| > y^2? [#permalink]
05 Oct 2014, 23:40
TooLong150 wrote:
Hi Experts,
Is the following solution correct?
Question: Is x·|y| > y^2?
True when:
Case 1:
y>0 and x > y
Case 2:
y < 0 and x < -y
(1) NS; we don't know whether y > 0
(2) NS; we don't know anything about x
(1) + (2)
Case 1, so sufficient
For $$x*|y| > y^2$$ to be true either $$0 < y < x$$ or $$-x < y < 0$$ (notice that in both cases x must be positive).
_________________
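Bunuel's characterization — the inequality holds exactly when 0 < y < x or -x < y < 0 — is easy to verify numerically. The sketch below (my own, with invented names) confirms that the inequality and the two regions agree on a grid of half-integer points:

```python
# x*|y| > y^2 holds exactly when 0 < y < x or -x < y < 0.

def claim(x, y):
    return x * abs(y) > y ** 2

def regions(x, y):
    return (0 < y < x) or (-x < y < 0)

grid = [k / 2 for k in range(-10, 11)]   # -5.0, -4.5, ..., 5.0
mismatches = [(x, y) for x in grid for y in grid
              if claim(x, y) != regions(x, y)]
print(mismatches)   # [] -- the two conditions agree at every grid point
```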
VP
Joined: 05 Mar 2015
Posts: 1001
Re: Is x|y| > y^2? (1) x > y (2) y > 0 [#permalink]
06 Feb 2016, 10:21
devinawilliam83 wrote:
Is x|y|>y^2?
(1) x>y
(2) y>0
I rephrased the question as x|y| > |y| (since y^2 = |y|). On solving this, I rephrased it as: is x > 1?
Based on this rephrased version, the answer is D. However, the OA is C.
Have I solved the equation wrongly?
Can we rephrase the question as IxI>IyI??
reason.. if we square both side xIyI>y^2
we get x^2Y^2>Y^4
X^2>Y^2 or IXI>IYI...????
Math Expert
Joined: 02 Sep 2009
Posts: 53020
Re: Is x|y| > y^2? (1) x > y (2) y > 0 [#permalink]
06 Feb 2016, 10:27
rohit8865 wrote:
devinawilliam83 wrote:
Is x|y|>y^2?
(1) x>y
(2) y>0
I rephrased the question as x|y| > |y| (since y^2 = |y|). On solving this, I rephrased it as: is x > 1?
Based on this rephrased version, the answer is D. However, the OA is C.
Have I solved the equation wrongly?
Can we rephrase the question as IxI>IyI??
reason.. if we square both side xIyI>y^2
we get x^2Y^2>Y^4
X^2>Y^2 or IXI>IYI...????
No, we cannot do this. We can raise both parts of an inequality to an even power if we know that both parts of an inequality are non-negative (the same for taking an even root of both sides of an inequality).
Check the following post for more: How to manipulate inequalities (adding, subtracting, squaring etc.).
Hope it helps.
_________________
Math Expert
Joined: 02 Aug 2009
Posts: 7334
Re: Is x·|y| > y^2? [#permalink]
21 May 2016, 05:47
nishatfarhat87 wrote:
Is x·|y| > y^2?
(1) x > y
(2) y > 0
Hi Bunuel,
Hi,
Bunuel too will be able to give a approach, but i will give you one--
$$x·|y| > y^2.....$$
what does this mean...
$$y^2-x|y|<0.............$$
since y^2 and |y| will be positive, we have to answer if x>|y|......OR x>0 and x>y...
lets see the statements-
$$(1) x > y$$
say y is -ive and x is also -ive...... ans will be NO
if x is +ive, ans can be YES
Insuff
$$(2) y > 0$$
nothing about relation between x and y..
Insuff...
Combined-
x>y and y>0, so x>0..
ans is YES
Suff
C
_________________
1) Absolute modulus : http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
2)Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html
3) effects of arithmetic operations : https://gmatclub.com/forum/effects-of-arithmetic-operations-on-fractions-269413.html
4) Base while finding % increase and % decrease : https://gmatclub.com/forum/percentage-increase-decrease-what-should-be-the-denominator-287528.html
GMAT Expert
Intern
Joined: 05 Nov 2012
Posts: 46
Re: Is x·|y| > y^2? [#permalink]
21 May 2016, 05:59
chetan2u wrote:
nishatfarhat87 wrote:
Is x·|y| > y^2?
(1) x > y
(2) y > 0
Hi Bunuel,
Hi,
Bunuel too will be able to give a approach, but i will give you one--
$$x·|y| > y^2.....$$
what does this mean...
$$y^2-x|y|<0.............$$
since y^2 and |y| will be positive, we have to answer if x>|y|......OR x>0 and x>y...
lets see the statements-
$$(1) x > y$$
say y is -ive and x is also -ive...... ans will be NO
if x is +ive, ans is YES
Insuff
$$(2) y > 0$$
nothing about relation between x and y..
Insuff...
Combined-
x>y and y>0, so x>0..
ans is YES
Suff
C
Hi Chetan,
Thanks for your response. How did you get this: since y^2 and |y| will be positive, we have to answer if x>|y|......OR x>0 and x>y..
Math Expert
Joined: 02 Aug 2009
Posts: 7334
Re: Is x·|y| > y^2? [#permalink]
22 May 2016, 00:28
nishatfarhat87 wrote:
Hi Chetan,
Thanks for your response. How did you get this: since y^2 and |y| will be positive, we have to answer if x>|y|......OR x>0 and x>y..
Hi,
Quote:
$$x·|y| > y^2.....$$
what does this mean...
$$y^2-x|y|<0.............$$
since y^2 and |y| will be positive, we have to answer if x>|y|......OR x>0 and x>y...
Now when will x·|y| > y^2....
RHS= Y^2, which is always 0 or +ive....
LHS has two terms x and |y|, |y| is again 0 or +..
so if x is -ive , x|y| will be -ive and it will not be > + value... so x has to be +ive ..
and the numeric value of x has to be greater than |y| for x*|y| to be greater than y*y
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 53020
22 May 2016, 02:39
nishatfarhat87 wrote:
Is x·|y| > y^2?
(1) x > y
(2) y > 0
Hi Bunuel,
Is x·|y| > y^2?
(1) x > y. If y=0, then no matter what x is x*|y| = y^2 = 0, and we'll have a NO answer to the question but if y=1 and x=2, then x*|y| > y^2, and we'll have an YES answer to the question. Not sufficient.
(2) y > 0. This implies that |y| = y, thus the question becomes is x·y > y^2. Reduce by y (we can safely do this since we know that y is positive): is x > y. We don't know that. Not sufficient.
(1)+(2) From (2) the question became whether x > y and (1) confirms that. Sufficient.
Hope it's clear.
_________________
Display posts from previous: Sort by | 2019-02-20 15:36:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8352987170219421, "perplexity": 6805.458429474729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247495147.61/warc/CC-MAIN-20190220150139-20190220172139-00255.warc.gz"} |
https://growthecon.wordpress.com/tag/us/ | # Has the Long-run Growth Rate Changed?
NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.
My actual job bothered to intrude on my life over the last week, so I’ve got a bit of material stored up for the blog. Today, I’m going to hit on a definitional issue that creates lots of problems in talking about growth. I see it all the time in my undergraduate course, and it is my fault for not being clearer.
If I ask you “Has the long-run growth rate of the U.S. declined?”, the answer depends crucially on what I mean by “long-run growth rate”. I think of there as being two distinct definitions.
• The measured growth rate of GDP over a long period of time: The measured long-run growth rate of GDP from 1985 to 2015 is ${(\ln{Y}_{2015} - \ln{Y}_{1985})/30}$. Note that here the measurement does not have to take place using only past data. We could calculate the expected measured growth rate of GDP from 2015 to 2035 as ${(\ln{Y}_{2035} - \ln{Y}_{2015})/20}$. Measured growth rate depends on the actual path (or expected actual path) of GDP.
• The underlying trend growth of potential GDP: This is the sum of the trend growth rate of potential output per worker (we typically call this ${g}$) and the trend growth rate of the number of workers (which we’ll call ${n}$).
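The first definition is just arithmetic on log levels. A minimal sketch (the GDP figures here are invented for illustration):

```python
# Measured long-run growth rate: log difference divided by the window length.
from math import log

def measured_growth(y_start, y_end, years):
    return (log(y_end) - log(y_start)) / years

# GDP doubling over 30 years implies roughly 2.3% measured annual growth.
g = measured_growth(100.0, 200.0, 30)
print(round(g, 4))   # 0.0231
```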
The two ways of thinking about long-run growth inform each other. If I want to calculate the measured growth rate of GDP from 2015 to 2035, then I need some way to guess what GDP in 2035 will be, and this probably depends on my estimate of the underlying trend growth rate.
On the other hand, while there are theoretical avenues to deciding on the underlying trend growth rate (through ${g}$, ${n}$, or both), we often look back at the measured growth rate over long periods of time to help us figure trend growth (particularly for ${g}$).
Despite that, telling me that one of the definitions of the long-run growth rate has fallen does not necessarily inform me about the other. Let’s take the work of Robert Gordon as an example. It is about the underlying trend growth rate. Gordon argues that ${n}$ is going to fall in the next few decades as the US economy ages and hence the growth in number of workers will slow. He also argues that ${g}$ will fall due to us running out of useful things to innovate on. (I find the argument regarding ${n}$ strong and the argument regarding ${g}$ completely unpersuasive. But read the paper, your mileage may vary.)
Now, is Gordon right? Data on the measured long-run growth rate of GDP does not tell me. It is entirely possible that relatively slow measured growth from around 2000 to 2015 reflects some kind of extended cyclical downturn but that ${g}$ and ${n}$ remain just where they were in the 1990s. I’ve talked about this before, but statistically speaking it will be decades before we can even hope to fail to reject Gordon’s hypothesis using measured long-run growth rates.
This brings me back to some current research that I posted about recently. Juan Antolin-Diaz, Thomas Drechsel, and Ivan Petrella have a recent paper that finds “a significant decline in long-run output growth in the United States”. [My interpretation of their results was not quite right in that post. The authors e-mailed with me and cleared things up. Let’s see if I can get things straight here.] Their paper is about the measured growth rate of long-run GDP. They don’t do anything as crude as I suggested above, but after controlling for the common factors in other economic data series with GDP (etc.. etc..) they find that the long-run measured growth rate of GDP has declined over time from 2000 to 2014. Around 2011 they find that the long-run measured growth rate is so low that they can reject that this is just a statistical anomaly driven by business cycle effects.
What does this mean? It means that growth has been particularly low so far in the 21st century. So, yes, the “long-run measured growth rate of GDP has declined” in the U.S., according to the available evidence.
The fact that Antolin-Diaz, Drechsel, and Petrella find a lower measured growth rate similar to the CBO’s projected growth rate of GDP over the next decade does not tell us that ${g}$ or ${n}$ (or both) are lower. It tells us that it is possible to reverse engineer the CBO’s assumptions about ${g}$ and ${n}$ using existing data.
But this does not necessarily mean that the underlying trend growth rate of GDP has actually changed. If you want to establish that ${g}$ or ${n}$ changed, then there is no retrospective GDP data that can prove your point. Fundamentally, predictions about ${g}$ and ${n}$ are guesses. Perhaps educated guesses, but guesses.
# Significant Changes in GDP Growth
A relatively quick post to highlight two other posts that recently came out regarding GDP growth. First, David Papell and Ruxandra Prodan have a guest post up at Econbrowser regarding the long-run effects of the Great Recession. They use the CBO projections of GDP into the future (similar to what I did here) and look at whether there was a statistically significant break in the level of GDP at the Great Recession. Short answer, yes. Their testing finds that the break was 2008:Q2, not a surprising date to end up with.
It is important to remember that David and Ruxandra are testing for a break in the level of GDP, and not GDP per capita. It is entirely possible to have a structural break in GDP while not having a structural break in GDP per capita. The next thing to remember is that they cannot reject that the growth rate of GDP is the same after 2008:Q2 as it was before. What I mean is easier to see in their figure than it is to explain:
Before and after the break, the growth rate is identical. It is just the level that has changed.
The second post is from Juan Antolin-Diaz, Thomas Drechsel, and Ivan Petrella. They use only existing data (not CBO projections) and find that there is statistical evidence of a change in the growth rate of U.S. GDP. They see a slowdown in growth starting in the mid-2000’s, consistent with John Fernald’s suggestions regarding productivity growth. It takes until 2015 to see this break statistically because you need several years of data to confirm that the growth slowdown was not a temporary phenomenon.
Note the subtle but very, very, very important difference between the two posts. Papell/Prodan find a significant shift in the level of GDP, while Antolin-Diaz, Drechsel, and Petrella (ADP) find a significant shift in the growth rate of GDP. The former sucks, but the latter is far more troubling. If the growth rate is truly lower, then we will get farther and farther away from the pre-GR trend, and the ratio of actual GDP to pre-GR trend GDP will go to zero. If it is just a level shift, then the ratio of actual GDP to pre-GR trend GDP will go to one as both become arbitrarily large.
I find the Papell/Prodan result more convincing. Keep in mind that David is my department chair and if I knocked on my office wall right now I could interrupt the phone call he is on. Ruxandra’s office is all of 20 feet from mine. I see these people every day. But regardless of the fact that I know them personally, I think they are right.
ADP are getting a false result showing slow growth because of the level shift that David and Ruxandra identify. If ADP do not allow for the level shift, then over any window of time that includes 2008:Q2 the growth rate will be calculated to be low. But that is just a statistical artifact of this one-time drop in GDP. It doesn’t mean that the long-run growth rate is in fact different. Put it this way: if they re-run their tests 25 years from now, they’ll find no statistical evidence of a growth change.
Of course, if the CBO is wrong about the path of GDP from 2015-2025, then Papell/Prodan could be wrong and ADP could be right. But given the current CBO projections, there is strong evidence of a negative level shift to GDP, but no change in the long-run growth rate.
# Is the U.S. Really Below Potential GDP?
The CBO just released a new projection of both GDP and the budget out to 2024. In short, the CBO sees the U.S. staying below potential GDP for several years. Menzie Chinn just did a short review of how people use inflation and/or unemployment to try and figure out the difference between actual and potential GDP.
From a growth perspective, I wanted to take a look at the projections a little differently. First, I don’t much care about the level of aggregate GDP, I care about the level of GDP per capita. So I took the CBO numbers and combined them with population figures and projections to get actual and projected GDP per capita for the U.S. Note, I’m using the CBO projections for actual GDP, not their potential GDP numbers. I want to look at the expected GDP numbers.
Second, I wanted to consider how this projected GDP per capita compared to long-run trends, rather than using inflation or unemployment to assess whether GDP per capita is "at potential". I am looking instead at whether GDP per capita has deviated from its long-run path. To do this I merged the GDP per capita projections from the CBO with the Maddison dataset on GDP per capita from 1870 to 2008. (The CBO projections go back far enough that the two series overlap, and I can adjust the actual levels of GDP per capita to match.)
I took the trend in GDP per capita from 1990 to 2007, and extrapolated that out from 2008 to 2024. Then I plotted the actual and CBO-projected GDP per capita data against that trend. Here is what you get:
It’s clear here that in 2007 GDP per capita drops below the 1990-2007 trend line. Moreover, the CBO expects that GDP per capita will stay below that trend line out until 2024. It looks like a distinct “level shift” in the parlance of growth economics. GDP per capita is something like 13% below the 1990-2007 trend.
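The mechanics of this exercise — fit a log-linear trend on one window, extrapolate, measure the gap — fit in a few lines. The series below is synthetic (3% trend growth with a one-time 13% drop in 2008), not the actual Maddison/CBO data, so only the mechanics carry over:

```python
# Fit log(GDP per capita) on year over 1990-2007, extrapolate, measure the gap.
from math import log, exp

years = list(range(1990, 2025))
gdp = [exp(0.03 * (t - 1990)) * (0.87 if t >= 2008 else 1.0) for t in years]

# OLS fit of log(gdp) on year, using only the 1990-2007 window
xs = [t for t in years if t <= 2007]
ys = [log(g) for t, g in zip(years, gdp) if t <= 2007]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# Percent gap between 2024 GDP and the extrapolated pre-break trend.
gap_2024 = gdp[years.index(2024)] / exp(intercept + slope * 2024) - 1
print(round(gap_2024, 2))   # -0.13, i.e. 13% below the extrapolated trend
```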
If you look at the post-war trend in GDP per capita from 1947 to 2007, you get something similar. The gap in 2024, 18% below trend, is actually worse than the gap using the post-1990 era.
But if you extend your view back even further, and incorporate the whole period of 1870-2007 to form the trend line, things look different. Now, if you plot the projected GDP per capita against the trend, it looks as if the U.S. is spot on.
GDP per capita is almost exactly where you’d expect it given the historical trend. The CBO expects GDP per capita to be a little low in 2024, about 2% behind the full trend line. Using the 1870-2007 trend, there doesn’t appear to be anything particularly unusual about the projected path of GDP per capita. The U.S. seems to be moving along the same balanced growth path it always has.
What really looks like the anomaly in U.S. data is the extended period from about 1990 to 2010 that we spent above trend. You could think of this as capturing John Fernald’s argument (or see here) that the IT boom of the 1990’s was a one-time level shift up in GDP. We got a big boost from that, but now the economy is settling back to the long-run growth path.
[You should not – NOT – use this as an argument that the financial crash and subsequent recession were necessary, useful, or welfare-improving. It is quite possible for the economy to have managed a graceful slide back to the long-run trend line after 2007 rather than experiencing it all in one dramatic plunge. The long-run trend is like gravity. Yes, it will win in the end, but that does not mean that I have to leap to the ground after cleaning out my gutters. I have a ladder.]
I really thought when I started playing with this data that I’d be writing a post about how the Great Recession had fundamentally shifted GDP per capita below the long-run trend, and that this represented a really fundamental shock given how stable the long-run trend had been until now. But the current path of GDP per capita doesn’t appear to be that surprising in historical perspective.
The big caveat here is that the CBO could be entirely wrong about future GDP per capita growth. If they have been overly optimistic, then we could certainly find ourselves falling below even the very long-run trend. Then again, they could have been pessimistic, and we might find ourselves above trend for all I know. But even with all the uncertainty, the expectation is that the U.S. economy will find itself right where you would have predicted it would be.
# Techno-neutrality
NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.
I’ve had a few posts in the past few months (here and here) about the consequences of mechanization for the future of work. In short, what will we do when the robots take our jobs?
I wouldn’t call myself a techno-optimist. I don’t think the arrival of robots necessarily makes everything better. But I do not buy the strong techno-pessimism that comes up in many places. Richard Serlin has been a frequent commenter on this blog, and he generally has a gloomy take on where we are going to end up once the robots arrive. I’m not bringing up Richard to pick on him. He writes thoughtful comments on this subject (and lots of others), and it is those comments that pushed me to try and be more clear on why I’m “techno-neutral”.
The economy is more creative than we can imagine. The coming of robots to mechanize away our jobs is the latest in a long, long, long, history of technology replacing workers. And yet here we still are, working away. Timothy Taylor posted this great selection a few weeks ago. This is a quote from Time Magazine:
The rise in unemployment has raised some new alarms around an old scare word: automation. How much has the rapid spread of technological change contributed to the current high of 5,400,000 out of work? … While no one has yet sorted out the jobs lost because of the overall drop in business from those lost through automation and other technological changes, many a labor expert tends to put much of the blame on automation. … Dr. Russell Ackoff, a Case Institute expert on business problems, feels that automation is reaching into so many fields so fast that it has become “the nation’s second most important problem.” (First: peace.)
The number of jobs lost to more efficient machines is only part of the problem. What worries many job experts more is that automation may prevent the economy from creating enough new jobs. … Throughout industry, the trend has been to bigger production with a smaller work force. … Many of the losses in factory jobs have been countered by an increase in the service industries or in office jobs. But automation is beginning to move in and eliminate office jobs too. … In the past, new industries hired far more people than those they put out of business. But this is not true of many of today’s new industries. … Today’s new industries have comparatively few jobs for the unskilled or semiskilled, just the class of workers whose jobs are being eliminated by automation.
That quote is from 1961. This is almost word for word the argument you will get about robots and automation leading to mass unemployment in the future. 50 years ago we were just as worried about this kind of thing, and in those 50 years we do not have massive armies of unemployed workers wandering the streets. The employment/population ratio in 1961 was about 55%, and then it steadily rose until the late 90’s when it topped out at about 64%. Even after the Great Recession, the ratio is still 59% today, higher than it was in 1961.
This didn’t happen without disruption and dislocation. And the robots will cause similar dislocation and disruption. Luddites weren’t wrong about losing their jobs, they were just wrong about the economy losing jobs in aggregate. But I don’t see why next-generation robots are any different than industrial robots, mainframes, PC’s, tractors, mechanical looms, or any other of the ten million innovations made in history that replaced labor. We can handle this with some sympathy and try to smooth things out for those dislocated, or we can do what usually happens and let them hang out to dry. The robots aren’t the problem here, we are.
What exactly are those new jobs that will be created? If I knew, then I wouldn’t be writing this blog post, I’d be out starting a company. The fact that I cannot conceive of an innovation myself is not evidence that innovation has ceased. But I do believe in the law of large numbers, and somewhere among the 300-odd million Americans is someone who *is* thinking of a new kind of company with new kinds of jobs.
Robots change prices as well as wages. An argument for pessimism goes like this. People have subsistence requirements, meaning they have a wage floor below which they cannot survive. Robots will be able to replace humans in production and this will drive the wage below that subsistence requirement. Either no firm will hire workers at the subsistence wage or people who do work will not meet subsistence.
The problem with this argument is that it ignores the impact of robots on the price of that subsistence requirement. Subsistence requirements are in real terms (I need clothes and housing and food), not nominal terms (I need $2000). The “subsistence wage” is a real wage, meaning it is the nominal wage divided by the price level of a subsistence basket of goods. Robots lowering marginal costs of production lowers the nominal human wage, but it also lowers the price of goods. It is not necessary or even obvious that real wages have to fall because of robots. History says that despite all of the labor-saving technological change that has gone on over the last few hundred years, real wages have risen as the lower costs outweigh the downward pressure on wages.

Who is going to buy what the robots produce? Call this the “Henry Ford” argument. If you are going to invest in opening up a factory staffed entirely by robots, then who precisely is supposed to buy that output? Ford raised wages at his highly mechanized (for the time) plants so that he had a ready-made market for his cars. The Henry Fords of robot factories are going to need a market for the stuff they build. Rich people are great, but diminishing marginal utility sets in pretty quick. That means robot owners either need to lower prices or raise wages for the people they do hire in order to generate a big enough market. Depending on the fixed costs involved in getting these proverbial robot factories up and running, robot owners may be a strong force for keeping wages high in the economy, just like Henry Ford was back in the day.

The wealthy are wealthy because they own productive assets. A tiny fraction of the value of those assets is due to the utility to the owner of the widgets they kick out. The majority of the value of those assets is due to the fact that you can *sell* that output for money and use that money to buy other widgets. Rockefeller wasn’t wealthy because he had a lot of oil. He was wealthy because he could sell it to other people. No other people, no wealth. Just barrel after barrel of useless black gunk. The same holds for robot owners. Those robots and robot factories have value because you can sell them or the goods they make in the wider economy. And that means continuing to exchange with the non-wealthy. You cannot be wealthy in a vacuum. Bill Gates on an island with robots and a stack of 16 billion dollar bills is Gilligan with a lot of kindling. Wealthy robot owners will do what wealthy (fill in capital stock here) owners have done for eons. They’ll trade access to the capital, or the goods it produces, to the non-wealthy in exchange for services, effort, flattery, and new ideas on what to do with that wealth.

Wealth concentration would be a problem with or without robots. The worry here is that because the wealthy will be the only ones able to build the robots and robot factories, they will control completely the production of goods and the demand for labor. That’s not a problem that arises with robots, that is a problem that arises with, well, settled agriculture 10,000 years ago. Wealth concentration makes owners both monopolists (market power selling goods) and monopsonists (market power buying labor), which is a bad combination. It gives them the ability to drive (real) wages down to minimum subsistence levels. This is bad, absolutely. But this was bad when (fill in example of a landed elite) did it in (fill in historical era here). This is bad in “company towns”. This is bad now, today. So if you want to argue against wealth concentration and the pernicious influence it has on wages, get started. Don’t wait for the robots, they’ve got nothing to do with it.

Again, be clear that in arguing against techno-pessimism I am not arguing that robots will generate a techno-utopia with ponies and rainbows.
I just do not buy the dystopian view that somehow it’s all going to come crashing down around our ears because of the very particular innovations coming in the near future.

# Why Did Consumption TFP Stagnate?

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

I’ve been trying to think more about why consumption-sector TFP flatlined from about 1980 forward. What I mentioned in the last post about this was that the fact that TFP was constant does not imply that technology was constant. I then speculated that technology in the service sector may not have changed much over the last 30 years, partly explaining the lack of consumption productivity growth. By a lack of change, I mean that the service sector has not found a way to produce more services for a given supply of inputs, and/or produced the same amount of service with a decreasing supply of inputs.

Take something that is close to a pure service – a back massage. A one-hour back massage in 1980 is almost identical to a one-hour back massage in 2014. You don’t get twice (or any other multiple) of the massage in 2014 that you got in 1980. And even if the therapist was capable of reducing back tension in 30 minutes rather than 60, you bought a 60-minute massage.

We often buy time when we buy services, not things. And it isn’t so much time as it is attention. And it is very hard to innovate such that you can provide the same amount of attention with fewer inputs (i.e. workers). Because for many services you very specifically want the attention of a specific person for a specific amount of time (the massage). You’d complain to the manager if the therapist tried to massage someone else at the same appointment. So we don’t have to be surprised that even technology in services may not rise much over 30 years. But there were obviously technological changes in the service sector.
As several people brought up to me, inventory management and logistics were dramatically changed by IT. This allows a service firm to operate “leaner”, with a smaller stock of inventory. But this kind of technological progress need not show up as “technological change” in doing productivity accounting. That is, what we call “technology” when we do productivity accounting is not the only kind of technology there is. The “technology” in productivity accounting is only the ability to produce more goods using the same inputs, and/or produce the same goods using fewer inputs. It doesn’t capture things like a change in the shape of the production function itself, say a shift to using fewer intermediate goods as part of production.

Let’s say a firm has a production function of ${Y = AK^{\alpha}L^{\beta}M^{\gamma}}$ where ${A}$ is technology in the productivity sense, ${K}$ is capital, ${L}$ is labor, and ${M}$ is intermediate goods. Productivity accounting could reveal to us a change in ${A}$. But what if an innovation in inventory management/logistics means that ${\gamma}$ changes? If innovation changes the shape of the production function, rather than the level, then our TFP calculations could go anywhere.

Here’s an example. Let’s say that in 1980 production is ${Y_{80} = A_{1980}K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}}$. Innovation in logistics and inventory management makes the production function in 2014 ${Y_{14} = A_{2014}K_{14}^{.4}L_{14}^{.4}M_{14}^{.2}}$. Total factor productivity in 1980 is calculated as

$\displaystyle TFP_{80} = \frac{Y_{80}}{K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}} \ \ \ \ \ (1)$

and total factor productivity in 2014 is calculated as

$\displaystyle TFP_{14} = \frac{Y_{14}}{K_{14}^{.4}L_{14}^{.4}M_{14}^{.2}}. \ \ \ \ \ (2)$

TFP in 2014 relative to 1980 (the growth in TFP) is

$\displaystyle \frac{TFP_{14}}{TFP_{80}} = \frac{Y_{14}}{K_{14}^{.3}L_{14}^{.3}M_{14}^{.4}} \times \frac{K_{80}^{.3}L_{80}^{.3}M_{80}^{.4}}{Y_{80}} \times \frac{M_{14}^{.2}}{K_{14}^{.1}L_{14}^{.1}} \ \ \ \ \ (3)$

which is an unholy mess. The first fraction is TFP in 2014 calculated using the 1980 function. The second fraction is the reciprocal of TFP in 1980, calculated normally. So the first two fractions capture the relative TFP in 2014 to 1980, holding constant the 1980 production function. The last fraction represents the adjustment we have to make because the production function changed.

That last term could literally be anything. Less than one, more than one, more than 100, less than 0.0001. If ${K}$ and ${L}$ rose by a lot while ${M}$ didn’t go up much, this will lower TFP in 2014 relative to 1980. It all depends on the actual units used. If I decide to measure ${M}$ in thousands of units rather than hundreds of units, I just made TFP in 2014 go down by a factor of ${10^{0.2} \approx 1.6}$ relative to 1980. Once the production function changes shape, then comparing TFP levels across time becomes nearly impossible. So in that sense TFP could definitely be “getting it wrong” when measuring service-sector productivity. You’ve got an apples to oranges problem.

So if we think that IT innovation really changed the nature of the service-sector production function – meaning that ${\alpha}$, ${\beta}$, and/or ${\gamma}$ changed, then TFP isn’t necessarily going to be able to pick that up. It could well be that this looks like flat or even shrinking TFP in the data. If you’d like, this supports David Beckworth‘s notion that consumption TFP “doesn’t pass the smell test”. We’ve got this intuition that the service sector has changed appreciably over the last 30 years, but it doesn’t show up in the TFP measurements.
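The unit-dependence is easy to see in a toy calculation. All the input quantities below are made up purely for illustration; only the exponents (.3/.3/.4 in 1980, .4/.4/.2 in 2014) carry the point.

```python
# Toy illustration of the apples-to-oranges problem in TFP accounting.
def tfp(y, k, l, m, a_k, a_l, a_m):
    return y / (k ** a_k * l ** a_l * m ** a_m)

# Made-up input and output quantities for 1980 and 2014.
y80, k80, l80, m80 = 500.0, 100.0, 100.0, 100.0
y14, k14, l14, m14 = 900.0, 200.0, 150.0, 120.0

ratio = tfp(y14, k14, l14, m14, .4, .4, .2) / tfp(y80, k80, l80, m80, .3, .3, .4)

# Now measure intermediates M in different units (divide every M by 10):
ratio_rescaled = (tfp(y14, k14, l14, m14 / 10, .4, .4, .2)
                  / tfp(y80, k80, l80, m80 / 10, .3, .3, .4))
# Measured TFP "growth" changed by a factor of 10**(0.2 - 0.4), purely
# because M's exponent differs across the two years.
```

With a stable production function the units of ${M}$ would wash out of the ratio; once the exponents differ, they do not, which is the fudge factor at work.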
That could be due to this apples to oranges issue, and in fact consumption TFP doesn’t reflect accurately the innovations that occurred.

To an ambitious graduate student: document changes in the revenue shares of intermediates in consumption and/or services over time. Correct TFP calculations for these changes, or at least provide some notion of the size of that fudge factor in the above equation.

# The Limited Effect of Reforms on Growth

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.

I said in my last post that transitional growth is slow, and therefore changing potential GDP – as many of the recent Cato Growth proposals would do – could not add much to the growth rate of GDP in the near term. There were several questions that came up in the comments, so let me try to be more clear about distinguishing between influences of trend growth and short-run shocks.

Output in period ${t+1}$ is

$\displaystyle y_{t+1} = (1+g)y_t + (1+g)\lambda (y^{\ast}_t - y_t) \ \ \ \ \ (1)$

where the first term on the right is the normal trend growth rate, and the second term is the additional transitional growth that occurs because the economy is not at potential GDP, ${y^{\ast}_t}$. We need to distinguish between changes in potential GDP and changes in current GDP.

Let’s take the above equation, plug in ${\lambda=0.02}$, and then use it to iterate forward from period 0 (today) until some arbitrary period ${t}$. You get

$\displaystyle y_t = (1+g)^t \left[(1-0.98^t)y^{\ast}_0 + 0.98^t y_0 \right]. \ \ \ \ \ (2)$

In period ${t}$, GDP will have grown by a factor of ${(1+g)^t}$ due to trend growth in GDP. The term in the brackets shows the cumulative effect of having ${y_0 \neq y^{\ast}_0}$ in the initial period. The 0.98 terms are just ${1-.02}$, and capture the changing role of this transitional growth over time. Note that as ${t}$ goes up, ${0.98^t}$ goes to zero and the effect of initial GDP ${y_0}$ falls to nothing.
As ${t}$ gets big, the economy reaches potential GDP.

Now let’s assume that period 0 is 2014. Potential GDP is 17 trillion and actual GDP is 16 trillion, and the trend growth rate is 2%. Let’s consider two alternative policies to enact today that take effect in 2015.

• Policy A: a short run spending surge sufficient to make GDP in 2015 equal to potential GDP. Policy A immediately eliminates the gap between actual and potential GDP, but has no other long term effect.
• Policy B: raises potential GDP by 1 trillion dollars, but adds no immediate spending to GDP. The effect on potential GDP is permanent.

For Policy A, GDP in 2015 (period 1) is

$\displaystyle y_1 = (1.02)^1\left[(1-0.98)\times 17 + 0.98 \times 17 \right] = 17.34. \ \ \ \ \ (3)$

The growth rate of GDP from 2014 to 2015 is ${(17.34 - 16)/16 = 0.084}$ or about 8.4%. That’s a massive GDP growth rate for a developed economy like the US. But it is a one-time shock to the growth rate. From 2015 to 2016, and from 2016-2017, and every year thereafter, the growth rate will be exactly 2% because the economy is precisely back on trend. Policy A gives a one-year gigantic boost to the growth rate.

What about Policy B? GDP in 2015 here is

$\displaystyle y_1 = (1.02)^1\left[(1-0.98)\times 18 + 0.98 \times 16 \right] = 16.36. \ \ \ \ \ (4)$

This is nearly 1 trillion less than Policy A. The growth rate of GDP from 2014 to 2015 is ${(16.36 - 16)/16 = 0.023}$. As the prior post noted, reforms that raise potential GDP don’t have big effects on growth rates. But while the effect on growth is small, it is persistent. From 2015-2016, the growth rate of GDP will be roughly…0.023. It’s actually minutely smaller than from 2014-2015, but rounding makes them look the same. It will take a few years before the growth rate declines appreciably. Fifty years from now the growth rate will still be almost 0.021.

Changing potential GDP, like with Policy B, is like turning an oil tanker with a tug boat.
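The contrast between the two policies can be simulated directly from the recursion in equation (1). A sketch using the post's own numbers (16 and 17 trillion for actual and potential GDP, ${g = \lambda = 0.02}$):

```python
# Iterate y_{t+1} = (1+g) * (y_t + lam * (ystar_t - y_t)), with potential
# GDP ystar itself growing at the trend rate g.
g, lam = 0.02, 0.02

def simulate(y0, ystar0, periods):
    path, y, ystar = [y0], y0, ystar0
    for _ in range(periods):
        y = (1 + g) * (y + lam * (ystar - y))
        ystar *= 1 + g
        path.append(y)
    return path

# Policy A: the spending surge puts GDP at potential (17) immediately, so its
# path just rides the trend. Policy B: potential jumps to 18, actual stays 16.
path_a = simulate(17.0, 17.0, 60)
path_b = simulate(16.0, 18.0, 60)

# Year-one growth rates, both measured against 2014 GDP of 16:
growth_a = path_a[1] / 16 - 1   # ~0.084, the one-time surge
growth_b = path_b[1] / 16 - 1   # ~0.023, Policy B's modest boost

# First period in which Policy B's GDP level overtakes Policy A's:
crossover = next(t for t in range(61) if path_b[t] > path_a[t])
```

The simulated crossover lands at period 35, consistent with solving ${0.98^t = 0.5}$ by hand to get ${t \approx 34.3}$: it takes over three decades for Policy B's level gain to overtake Policy A's.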
It doesn’t go fast, but it goes on for a long time. So is Policy B worse than Policy A? It depends entirely on your time preferences. In 2015 GDP under Policy A is nearly 1 trillion dollars higher than with policy B. But 100 years from now, GDP will be nearly 1 trillion dollars higher with Policy B.

We can actually figure out how soon it will be before Policy B passes Policy A. Set

$\displaystyle (1.02)^t \left[(1-0.98^t)17 + 0.98^t 17 \right] = (1.02)^t \left[(1-0.98^t)18 + 0.98^t 16 \right] \ \ \ \ \ (5)$

and solve for ${t}$. This turns out to be roughly 34 years from now, in 2048. It takes a long, long, time for changes in potential GDP to really pay off. If you want to increase the level of GDP in the near term, and hence raise near-term growth rates by implication, then you have to, you know, boost GDP. GDP is a measure of current spending, so raising GDP means raising current spending. There isn’t a trick to get around this.

Now, could I be underselling Policy B as a near-term boost to growth rates and GDP? Let’s consider a few possibilities:

• I’m underestimating the size of ${\lambda}$. As I mentioned last time, there is lots of empirical evidence that this is pretty small. But okay, let’s make ${\lambda = 0.05}$, more than double my 0.02 value. Now in 2015 policy B yields GDP of 16.4 trillion and a growth rate of 2.6%. Yes, it helps policy B, but doesn’t get it anywhere close to Policy A. It is still 14 years before GDP under Policy B is larger than under Policy A.
• I’m underestimating the boost to potential GDP that Policy B can deliver. So let’s ask, given ${\lambda = 0.05}$, how much would ${y^{\ast}_0}$ have to go up to match the 8.4% growth rate of Policy A? Potential GDP would have to jump to roughly 36 trillion, meaning it has to roughly **double** in size thanks to the policy. I think it is totally fair to say that this is implausible in a country like the US.
• But China was able to do it.
Right, when China opened up, made reforms, etc.., it was able to raise its potential GDP by a large amount. You could probably plausibly argue that it raised potential GDP by a factor of something like 8-10. But the rapid growth in China over the last 30 years is not some victory lap for good state-led policy reforms, it’s a testament to just how screwed up Maoism was as an economic system. [Egad! An institutions explanation!]

• What if Policy B raised the trend growth rate, ${g}$? If it changed ${g}$ appreciably, then Policy B would be something really special. Let’s review for a moment a few of the changes that did not change the long-run growth rate in the US: the introduction of electricity, the income tax, the Great Depression, the New Deal, Medicare, higher tax rates, the Cold War, the oil crisis, lower tax rates, de-regulation, the IT revolution, and New Coke. There have been shifts in the level of potential GDP, such as the IT revolution shifting up potential GDP and inducing a period of relatively rapid transitional growth in the 1990’s. But it’s hard – if not impossible if you take Chad Jones‘ semi-endogenous growth idea seriously – to fundamentally alter ${g}$. It is dictated by changes in the scale of the global economy, not by policy effects within the US.

I’m all for policy reforms that raise potential GDP, and several of those proposed in the Cato forum would probably do that. We might want to undertake several of them at once to counteract the drags on potential GDP that Robert Gordon has outlined. But we can’t be fooled into thinking that any of them would make a really appreciable difference to economic growth today. You can revolutionize education, or corporate taxation, or urban planning, or immigration all you want, but the gains those changes induce will take decades to manifest themselves.

# [insert policy here] Won’t Boost Growth Rates

NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.
Over at the Cato Institute, they hosted an online forum about reviving economic growth. There are lots of smart people involved. The web page has lots of big pictures of their heads, I guess to indicate that their brains are like, totally huge. Anyway, each one wrote up some proposed policy reform that would help boost long-run growth prospects. Brad DeLong responded to many of the proposals here before his head exploded reading Doug Holtz-Eakin’s essay.

I’m not going to quibble with any of the minutiae of the proposals. My point is going to be a general one on the possible growth effects of [insert policy here]. Short answer, there won’t be any. There are two ways to boost GDP growth. Either

• Actively raise current GDP through increased spending by some sector of the economy.
• Raise potential GDP and let transitional growth speed up.

The second one perhaps deserves a little explanation. Transitional growth is an extra boost to growth that occurs when current GDP is below potential GDP. Why does this occur? Bob Solow is why. In an economy with accumulable factors of production (physical capital, human capital, knowledge capital) being below potential GDP means that the return to these factors is relatively high, and hence more investment in those factors is done, boosting GDP growth. The wider is the gap between current and potential GDP, the stronger this transitional growth.

The issue is that [insert policy here] is a policy to raise potential GDP, not current GDP. But the transitional effects this encourages are inherently small. So even if [insert policy here] opens up a big gap between potential and actual GDP, this doesn’t translate into much extra growth. In fact, the effects are likely so small that they would be unnoticeable against the general noise in growth rates year by year. To give you an idea of how little an effect [insert policy here] will have on growth, let’s play with math.
Output in period ${t+1}$ can be written in terms of output in period ${t}$ this way

$\displaystyle y_{t+1} = (1+g)[y_t + \lambda (y^{\ast}_t - y_t)]. \ \ \ \ \ (1)$

This says that output in ${t+1}$ is equal to ${1+g}$ times current output. That is “regular” growth. The term with the ${\lambda}$ is the additional boost in growth we get from being below potential. ${y^{\ast}_t}$ is potential GDP in period ${t}$, and ${y^{\ast}_t - y_t}$ is the gap in GDP. ${\lambda}$ tells us how much of that gap we make up from period ${t}$ to ${t+1}$. If ${\lambda = 0}$, then we are stuck below potential (secular stagnation). If ${\lambda = 1}$, then immediately next period our GDP will be at potential again.

Let’s think about this in terms of growth rates, so

$\displaystyle Growth = \frac{y_{t+1}-y_t}{y_t} = (1+g)\left[\lambda \frac{y^{\ast}_t}{y_t} + (1-\lambda)\right] - 1. \ \ \ \ \ (2)$

The growth rate from ${t}$ to ${t+1}$ depends on the ratio of potential to actual GDP today, period ${t}$. If that ratio were equal to one – meaning that we were at potential – then the growth rate just becomes ${g}$, the trend growth rate. The larger is ${y^{\ast}_t/y_t}$ – meaning the farther we are from potential – the higher is the actual growth rate.

Now we can go back to thinking about the possible growth impact of [insert policy here]. GDP today (${y_t}$) is about 16 trillion. Potential GDP today (${y^{\ast}_t}$) is probably about 17 trillion. You can get a lower estimate from the CBO, Robert Gordon, or John Fernald, or a higher estimate from older CBO forecasts. I’m going to err on the high side for potential because this will inflate the growth effect of [insert policy here]. We also need to know the value of ${\lambda}$, the percent of the GDP gap that is closed in a year. We’ve got lots of evidence that this value is about ${\lambda = 0.02}$, or 2% of the gap closes every year.
This estimate goes back to the original cross-country convergence literature starting with Barro (1991), but consistently across samples (countries, US states, Japanese prefectures, Canadian provinces, etc..) economies converge to potential GDP at about 2% of the gap per year. You get higher values of ${\lambda}$ if you assume that economies pursue optimal savings plans, like in the Ramsey model, meaning that they save at a higher rate when they are farther below steady state. But if there is an economy that saves according to the predictions of the Ramsey model, it is populated by unicorns.

Back to the calculation. The last thing we need is a value for ${g}$, trend growth. Let’s call that ${g = 0.02}$, or trend growth in GDP is about 2% per year. Again, we can argue about whether that is higher or lower, but that’s not going to be the important factor here.

Okay, so based on the fact that we are currently 1 trillion below trend, the growth rate today should be

$\displaystyle Growth = (1+.02)\left[.02 \frac{17}{16} + (1-.02)\right] - 1 = .0213 \ \ \ \ \ (3)$

or growth should be 2.13%. Growth will be about 0.13 percentage points higher than normal – that’s a little over one-tenth of one percent – because we are below potential. The value of ${g}$ is really irrelevant. All the action is inside the brackets. Because ${\lambda}$ is small, there isn’t much bite from transitional growth, even though we are $1 trillion below trend.
But what about [insert policy here]? That will *raise* potential GDP, and therefore will induce faster transitional growth to the new, higher potential GDP. Okay. Let’s say that [insert policy here] has an astonishingly positive impact on potential GDP. I mean massive. [insert policy here] adds a full $1 trillion to potential GDP, which is now $18 trillion. Now, growth under the [insert policy here] regime is
$\displaystyle Growth = (1+.02)\left[.02 \frac{18}{16} + (1-.02)\right] - 1 = .0225 \ \ \ \ \ (4)$
Uh, wow? Growth will be an additional 0.12 percentage points higher thanks to [insert policy here]. This is not a massive change in growth. And the growth boost will *decline* over time as we get closer to potential.
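The arithmetic behind equations (3) and (4) fits in a couple of lines, using the post's assumed values of 16 trillion for actual GDP and ${g = \lambda = 0.02}$:

```python
# Growth = (1+g) * (lam * ystar/y + (1 - lam)) - 1, with y = 16 (trillion).
g, lam, y = 0.02, 0.02, 16.0

def growth(ystar):
    return (1 + g) * (lam * ystar / y + (1 - lam)) - 1

base = growth(17.0)     # 0.021275 -> the 2.13% in equation (3)
reform = growth(18.0)   # 0.022550 -> the 2.25% under [insert policy here]
```

Other values of potential GDP plug straight in, which makes it easy to see how little the bracketed term moves even for large changes in ${y^{\ast}_t}$.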
Fine, but what if [insert policy here] is truly revolutionary, and raises potential GDP by $2 trillion? Then growth will be 0.0238. This could be generously rounded to 0.025, meaning you added a half-point to the growth rate of GDP. But let’s not kid ourselves that [insert policy here] is going to have that big of an effect on growth. $2 trillion implies that [insert policy here] is raising potential GDP by about 12%. That would be an anomaly of historic proportions.
[insert policy here] will not generate any appreciable extra economic growth, even though in the very long-run [insert policy here] may be a net positive for the level of economic activity. The problem is that it takes a very, very, very long time for those positive effects to manifest themselves, and thus [insert policy here] won’t do anything to fundamentally change GDP growth.
What about the exceptions I mentioned? Among the proposals, there are a few that could boost current GDP (and thus growth) directly and immediately by encouraging spending.
• Scott Sumner’s NGDP targeting. The proposal speaks directly to raising current GDP, as opposed to raising potential GDP. I think of this as solving the balance sheet problems of households. Boost nominal spending and nominal incomes rise, while nominal debts like mortgages remain fixed, leading to extra spending.
• Brad DeLong’s raising K-12 teacher salaries. If you could do it *now*, then it would raise incomes for these folks, and boost spending. The second part of the proposal, to tie this to teacher tenure changes, is more of a potential GDP changer. Question, how big of an impact would this really have on spending?
• A number of people mention infrastructure spending. Yes, if we would spend that money *now*, then it would materially boost GDP growth *now*, and as a bonus have long-run benefits for potential GDP.
Ultimately, the issue in the U.S. right now is not with potential GDP. We do not need policies to raise this potential GDP so much as we need policies to get us back to potential. That requires actively boosting immediate spending.
# Why I Care about Inequality
NOTE: The Growth Economics Blog has moved sites. Click here to find this post at the new site.
“Inequality” is a term that has been tossed about quite a bit. From the Occupy movement, to Piketty’s book, to debates over the minimum wage, to Greg Mankiw’s defense of the 1%. Just today Mark Thoma published an op-ed on inequality. A few days ago John Cochrane had a post about why we care about inequality.
One of Cochrane’s main points is that the term “inequality” has been used in so many contexts, and to refer to so many different things, that it is losing all meaning. I’ll agree with him on this completely. If you want to talk about “inequality”, you have to be very clear about what precisely you mean.
There are three things that people generally mean by “inequality”:
1. The 1% versus the 99%. That is, the difference in average annual income of the top 1% of all households versus the average annual income of the bottom 99%.
2. The stagnation of median real wages and those below the median.
3. The college premium, or the gap in earnings between those who finished college and those who did not (or did not attend).
When I say I care about inequality, I mean mainly the second – the stagnation of median wages – but this is going to take me into territory covered by the first – the growth in top 1% income. There are things to say about the college premium, but I’m not going to say them here.
Why do I care about the stagnation of median wages?
• Because I’m going to be better off if everyone shares in prosperity. I want services like education, health care, and home repairs to be readily available and cheap. The way to achieve that is to invest in developing a large pool of skilled workers – teachers, nurses, electricians, carpenters. Those at the bottom of the distribution don’t have sufficient income to make those investments privately, so that requires public provision of those investments (i.e. schools) or transfers to support private investments. You want to have an argument about whether public provision or transfers are more efficient? Okay. But the fact that there is an argument on implementation doesn’t change the fact that stagnant wages are a barrier to these investments right now.
• Because people at the bottom of the income distribution aren’t going to disappear. We can invest in these people, or we can blow our money trying to shield ourselves from them with prisons, police officers, and just enough income support to keep them from literally starving. I vote for investment.
One response to this is that I don’t care about inequality per se, I care about certain structural issues in labor markets, education, and law enforcement. So why don’t we address those fundamental structural issues, rather than waving our hands around about inequality, which is meaningless? Because these structural issues are a problem of under-investment. The current allocation of income/wealth across the population is not organically producing enough of this investment, so that allocation is a problem. In short, if you care about these structural issues, you cannot escape talking about the distribution of income/wealth. In particular, you have to talk about another kind of inequality, the 1%/99% kind.
Let me be very clear about this too, because I don’t want anyone to think I’m trying to be clever and hide something. I would take some of the income and/or wealth from people with lots of it, and then (a) give some of that to currently poor people so they can afford to make private investments and (b) use the rest to invest in public good provision like education, infrastructure, and health care.
Would I use a pitchfork and torches to do this? No. Would I institute “confiscatory taxation” on rich people? No, that’s a meaningless term that Cochrane and others use to suggest that somehow rich people are going to be persecuted for being rich. I am talking about raising marginal income tax rates and estate tax rates back to the archaic levels seen in the 1990s.
• Because rich people spend their money on useless stuff. Not far from where I live, there is a new house going up. It will be over 10,000 square feet when it is complete. 2,500 of those square feet will be a closet that has two separate floors, one for regular clothes and one for formal wear. If that is what you are spending your money on, then yes, I believe raising your taxes to fund education, infrastructure, and health spending is a net gain for society.
Don’t poor people spend money on stupid stuff? Of course they do. Isn’t the government an inefficient provider of some of these goods, like education? Maybe. But even if both those things are true, public investment and/or transfers to poor people will result in some net investment that I’m not currently getting from the mega-closet family. I’m happy to talk about alternative institutional settings that would ensure a greater proportion of the funds get spent on actual investments.
• Because I’m not afraid that some embattled, industrious core of “makers” will decide to “go Galt” and drop out of society, leaving the rest of us poor schleps to fend for ourselves. Oh, however will we figure out how to feed ourselves without hedge fund managers around to guide us?
This is actually a potential feature of higher marginal tax rates, by the way, not a bug. You’re telling me that a top tax rate at 45% will convince a number of wealthy self-righteous blowhards (*cough* Tom Perkins *cough*) to flee the country? Great. Tell me where they live, I’ll help them pack. And even if these self-proclaimed “makers” do stop working, the economy is going to be just fine. How do I know? Imagine that the entire top 1% of the earnings distribution left the country, took all of their money with them, and isolated themselves on some Pacific island. Who’s going to starve first, them or the remaining 300-odd million of us left here? The income and wealth of the top 1% have value only because there is an economy of another 300-odd million people out there willing to provide services and make goods in exchange for some of that income and wealth.
So, yes, I care about 1%/99% inequality itself, because I cannot count on the 1% to privately make good investment decisions regarding the human capital of the bottom 99%. And the lack of investment in the human capital of the bottom part of the income distribution is a colossal waste of resources.
# The Slowdown in Reallocation in the U.S.
One of the components of productivity growth is reallocation. From one perspective, we can think about the reallocation of homogenous factors (labor, capital) from low-productivity firms to high-productivity firms, which includes low-productivity firms going out of business, and new firms getting started. A different perspective is to look more closely at the shuffling of heterogenous workers between (relatively) homogenous firms, with the idea being that workers may be more productive in one particular environment than in another (i.e. we want people good at doctoring to be doctors, not lawyers). Regardless of how exactly we think about reallocation, the more rapidly that we can shuffle factors into more productive uses, the better for aggregate productivity, and the higher will be GDP. However, evidence suggests that both types of reallocation have slowed down recently.
Foster, Grim, and Haltiwanger have a recent NBER working paper on the “cleansing effect of recessions”. This is the idea that in recessions, businesses fail. But it’s the really crappy, low-productivity businesses that fail, so we come out of the recession with higher productivity. The authors document that in recessions prior to the Great Recession, downturns tend to be “cleansing”. Job destruction rates rise appreciably, but job creation rates remain about the same. Unemployment occurs because it takes some time for those people whose jobs were destroyed to find newly created jobs. But the reallocation implied by this churn enhances productivity – workers are leaving low productivity jobs (generally) and then getting high productivity jobs (generally).
But the Great Recession was different. In the GR, job destruction rose by a little, but much less than in prior recessions. Job creation in the GR fell demonstrably, much more than in prior recessions. So again, we have unemployment as the people who have jobs destroyed are not able to pick up newly created jobs. But because of the pattern to job creation and destruction, there is little of the positive reallocation going on. People are not losing low productivity jobs, becoming unemployed, and then getting high productivity jobs. People are staying in low productivity jobs, and new high productivity jobs are not being created. So the GR is not “cleansing”. It is, in some ways, “sullying”. The GR is pinning people into *low* productivity jobs.
This holds for firm-level reallocation as well. In recessions prior to the GR, low productivity firms tended to exit, and high productivity firms tended to grow in size. So again, we had productivity-enhancing recessions. But again, the GR is different. In the GR, the rate of firm exit for low productivity firms did not go up, and the growth rate of high-productivity firms did not rise. The GR is not “cleansing” on this metric either.
Why is the GR so different? The authors don’t offer an explanation, as their paper is just about documenting these changes. Perhaps the key is that a financial crash has distinctly different effects than a normal recession. A lack of financing means that new firms cannot start, and job creation falls, leading to lower reallocation effects. A “normal” recession doesn’t involve as sharp a contraction in financing, so new firms can take advantage of others going out of business to get themselves going. Just an idea, I have no evidence to back that up.
[An aside: For the record, there is no reason that we need to have a recession for this kind of reallocation to occur. Why don’t these crappy, low-productivity firms go out of business when unemployment is low? Why doesn’t the market identify these crappy firms and compete them out of business? So don’t take Foster, Grim, and Haltiwanger’s work as some kind of evidence that we “need” recessions. What we “need” is an efficient way to reallocate factors to high productivity firms without having to make those factors idle (i.e. unemployed) for extended periods of time in between.]
In a related piece of work, Davis and Haltiwanger have a new NBER working paper that discusses changes in worker reallocation over the last few decades. They look at the rate at which workers turn over between jobs, and find that in general this rate has declined from 1980 to today. Some of this may be structural, in the sense that as the age structure and education breakdown of the workforce change, there will be changes in reallocation rates. In general, reallocation rates go down as people age. 19-24 year olds cycle between jobs way faster than 55-65 year olds. Reallocation rates are also higher among high-school graduates than among college graduates. So as the workforce has aged and gotten more educated from 1980 to today, we’d expect some decline in job reallocation rates.
But what Davis and Haltiwanger find is that even after you account for these forces, reallocation rates for workers are declining. No matter which sub-group you look at (e.g. 25-40 year old women with college degrees) you find that reallocation rates are falling over time. So workers are flipping between jobs *less* today than they did in the early 1980s. Which is probably somewhat surprising, as my guess is that most people feel like jobs are more fleeting in duration these days, due to declines in unionization, etc.
The worry that Davis and Haltiwanger raise is that lower rates of reallocation lower productivity growth, as mentioned at the beginning of this post. So what has caused this decline in reallocation rates across jobs (or across firms, as the first paper described)? From a pure accounting perspective, Davis and Haltiwanger give us several explanations. First, reallocation rates within the Retail sector have declined, and since Retail started out with one of the highest rates of reallocation, this drags down the average for the economy. Second, more workers tend to be with older firms, which have less turnover than young firms. Last, the above-mentioned shift towards an older workforce that tends to shift jobs less than younger workers.
Fine, but what is the underlying explanation? Davis and Haltiwanger offer several possibilities. One is increased occupational licensing. In the 1950s, only about 5% of workers needed a government (state or federal) license to do their job. By 2008, that figure had risen to 29%. So it can be incredibly hard to reallocate to a new job or sector of work if you have to fulfill some kind of licensing requirement (which could involve up to 2000 hours of training along with fees). Second is a decreased ability of firms to fire at will. Starting in the 1980s there were a series of court decisions that made it harder for firms to just fire someone, which makes it both less likely for people to leave jobs, and less likely for firms to hire new people. Both act to lower reallocation between jobs. Third is employer-provided health insurance, which generates a kind of “job lock” where people are unwilling to move jobs because they don’t want to lose, or create a gap in, coverage.
Last is the information revolution which may have had perverse effects on reallocation. We might expect that IT allows more efficient reallocation as people can look for jobs more easily (e.g. Monster.com, LinkedIn) and firms can cast a wider net for applicants. But IT also allows firms to screen much more effectively, as they have access to credit reports, criminal records, and the like, that would have been prohibitive to acquire in the past.
So we appear to have, on two fronts, declining dynamic reallocation in the U.S. This certainly contributes to a slowdown in productivity growth, and may perhaps be a better explanation than “running out of ideas from the IT revolution” that Gordon and Fernald talk about. The big worry is that, if it is regulation-creep, as Davis and Haltiwanger suspect, we don’t know if or when the slowdown in reallocation would end.
In summary, reading John Haltiwanger papers can make you have a bad day.
# Age Structure, Experience, Productivity…… and France!
Miles Kimball posted a link to a relatively old Scott Sumner post that was discussing a Paul Krugman post from 2011. Which means I am only about 3 years behind, which is good, because I would have estimated I was about 5 years behind.
Anyway, Scott’s post deals with some facts about France. Namely, while GDP per capita in France is only roughly 70% of the U.S. level, GDP per hour worked is essentially equal to that in the U.S. French workers are just as productive per hour as U.S. workers, but just work fewer hours in aggregate.
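The decomposition behind those numbers can be sketched as follows, normalizing U.S. values to 1.0. This is my own illustration using the ratios quoted above, not a calculation from Scott's post:

```python
# GDP per capita = (GDP per hour) x (hours worked per capita)
# Normalizing U.S. values to 1.0, using the ratios in the text:
fr_gdp_per_capita = 0.70   # France at roughly 70% of the U.S. level
fr_gdp_per_hour = 1.00     # roughly equal to the U.S.

fr_hours_per_capita = fr_gdp_per_capita / fr_gdp_per_hour
print(fr_hours_per_capita)  # 0.7 -> the French work ~30% fewer hours per person
```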
There are generally two responses to this. The optimistic one: “The French have made a decision to spend their high productivity by taking more vacations and retiring earlier, leading to lower GDP per capita, but probably higher utility.” The pessimistic one: “The French labor system is so mucked up by taxes and regulations that despite being as productive per hour as the U.S., firms do not find it profitable, and workers do not find it desirable, to have more hours provided.”
It’s non-obvious which view is correct. Scott’s post makes two great points, though, about how to think about this. The first is one that I’m not going to deal with here. Comparing France to the U.S. is not an apples to apples comparison. The U.S. is better compared to the EU, or at least Western Europe, as a whole. French productivity looks much worse when compared to New England or the Mid-Atlantic as a region, and only looks good in comparison to the U.S. because the U.S. includes Mississippi and Alabama (which I will arbitrarily call the Sicily and Greece of Europe). It’s a great point.
The second idea that Scott talks about is whether we should be impressed by French output per hour being as high as the U.S. In France, the high youth unemployment rate and early retirement rate mean that the employed population is concentrated in the 30-55 age range. If this age range tends to be particularly productive compared to other age groups, then shouldn’t French output per hour be much higher than in the U.S., where we employ lots of sub-30 and over-55 workers?
Jim Feyrer has a paper from a few years back that looks precisely at the relationship between age structure and measures of productivity. What he finds is that the most productive group of workers are those aged 40-49. A 1% increase in the number of those workers (holding other age groups constant) is associated with about a 0.2% increase in productivity. Ages 50-plus imply lower productivity, but the statistical significance is low. Ages under 39, though, are significantly negative for productivity. Jim uses these relationships to partly explain the productivity slowdown in the US during the 1970s, when the Baby Boomers were filling up the labor force and were still under 40, meaning they were relatively low productivity.
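To make the magnitude concrete, here is a purely illustrative application of that roughly 0.2 elasticity; the 5% shift in the 40-49 workforce is a hypothetical number of my own, not from Jim's paper:

```python
# Illustrative only: apply Feyrer's ~0.2 elasticity for 40-49 year-old workers
elasticity = 0.2        # % productivity change per 1% change in 40-49 workers
change_in_4049 = 5.0    # hypothetical: 5% more workers aged 40-49

productivity_change = elasticity * change_in_4049
print(f"{productivity_change:.1f}% higher productivity")  # 1.0%
```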
But the results speak to this French question that Scott poses as well. By employing so few under 39-year-olds, France is essentially only using the very high productivity workers in the economy. Thus their GDP per hour is likely inflated by that fact, and their workers are not necessarily just as productive as those in the U.S. What you’d want is some kind of equivalent measure for the U.S. to make this concrete. What is the age-structure-adjusted GDP per hour worked in the U.S. and France? Based on Jim’s results, the U.S. would be ahead in that comparison.
This is related to the well-known result in labor economics that wages rise with labor market experience, but at a decreasing rate. That is, people’s wages always tend to rise with experience, but once you hit about 25-30 years of experience (meaning you are most likely somewhere between 40-55), the increase gets close to zero. You can see a bunch of these wage/experience relationships in a paper by Lagakos, Moll, Porzio, and Qian, who compare the relationship across countries. One of the features of the data is that in rich countries (like France and the U.S.) the wage/experience relationship is really, really steep when experience is below 10 years. In other words, wages are particularly low for people who have little labor market experience, like young workers aged 18-25.
The U.S. tends to employ a lot more 18-25 year olds as a fraction of our labor force than France. Even prior to 2007, unemployment among those under 25 was roughly 20% in France, and only 10% in the U.S., see here. So the U.S. is employing far more workers that have not yet hit the sweet spot in labor market experience and their wages are very low. On the assumption that wages are some indication of how productive workers are, this means that the U.S. employs proportionately more low-productivity workers. So, again, France’s measured GDP per hour should really be higher than the U.S. level if in fact France and the U.S. have similar productivity levels.
Scott’s point is that we can’t take the equivalence between France’s and the U.S.’s GDP per hour at face value. This doesn’t necessarily mean that the pessimistic view noted above is correct. France could well be making some kind of optimal decision to take lots of leisure time and retirement. But that decision is not one made with the same “budget constraint” as the U.S. – France is very likely not as productive as the U.S.
If you do want to subscribe to the pessimistic viewpoint, then you could argue that not only have French regulations mucked up the labor market, but they have also given the statistical illusion of high productivity. Hence, France is in fact much worse off than the U.S. Even if they fixed their labor market, their GDP per capita would not reach U.S. levels. | 2023-04-01 07:06:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 98, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5097187161445618, "perplexity": 1618.6491512056787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00415.warc.gz"} |
http://clay6.com/qa/1086/evaluate-the-definite-integral | # Evaluate the definite integral$\int\limits_0^1x\;e^{x^2}dx$
Toolbox:
• (i)$\int \limits_a^bf(x)dx=F(b)-F(a)$
• (ii)If there are two functions u and v, and the integral function is of the form $\int udv,$then it can be solved by the method of integration by parts.$\int udv=uv-\int vdu$
• (iii)$\int e^x\;dx=e^x+c.$
Given $\int\limits_0^1x\;e^{x^2}dx$
Let $x^2=t$
Differentiating with respect to x, we get
$2xdx=dt$
Therefore $xdx=dt/2$
Substituting $t$ and $dt$ we get (the limits are unchanged, since $t=x^2$ maps $x=0$ to $t=0$ and $x=1$ to $t=1$)
$I=\frac{1}{2}\int \limits_0^1 e^t dt$
On integrating we get,
$\frac{1}{2}[e^t]_0^1$
$=\frac{1}{2}[e^1-e^0]$
But $e^0=1$
$I=\frac{1}{2}[e-1]$
$\int \limits_0^1 xe^{x^2}dx=\frac{1}{2}[e-1]$
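As a sanity check (not part of the original solution), the closed form $\frac{1}{2}(e-1)\approx 0.8591$ can be verified numerically. A small Python sketch using the midpoint rule:

```python
import math

# f(x) = x * e^(x^2) on [0, 1]; closed form from the substitution t = x^2 is (e - 1)/2
closed_form = (math.e - 1) / 2

# Midpoint-rule approximation of the integral
n = 100_000
h = 1.0 / n
mids = ((i + 0.5) * h for i in range(n))
numeric = h * sum(x * math.exp(x * x) for x in mids)

print(closed_form, numeric)  # both approximately 0.8591
```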
https://electronics.stackexchange.com/questions/560956/pic-18f4550-capture-mode | # PIC 18F4550 capture mode
I would like to measure a pulse using the PIC 18F4550 in capture mode. This pulse is generated by the PIC microcontroller itself.
For this, I use a function which plays the role of the XOR logic gate (you find the function that I've used below), with RC0 and RC2 being the inputs and RC6 being the signal output. The pulse leaving RC6 enters CCP2 to be measured.
The problem I found is that the CCP2 cannot detect the pulse generated by the microcontroller. I don't know if there are any conditions to connect the pins of the microcontroller or something.
If anyone has an answer or a hint to fix this, I will be grateful and if you have any questions feel free to ask.
#include <stdio.h>
#include <stdlib.h>
#include "osc_config.h"
#include "LCD_8bit_file.h"
#include <string.h>
unsigned long comtage,capt0,x;
char pulse[20];
char cosinus[20];
float period,dephTempo,deph,phi;
void init()
{
    IRCF0 = 1;               /* set internal clock to 8MHz */
    IRCF1 = 1;
    IRCF2 = 1;
    PIE2bits.CCP2IE = 1;
    PIR2bits.CCP2IF = 0;
    CCPR2 = 0;               /* CCPR2 is the capture count register, cleared initially */
    T3CONbits.RD16 = 1;      /* 16-bit read/write mode for Timer3 */
    T3CKPS0 = 0;             /* prescaler 1:1 */
    T3CKPS1 = 0;
    TMR3CS = 0;              /* Timer3 clocked from Fosc/4 (2MHz with the 8MHz internal osc) */
    TMR3IF = 0;
    T3CCP2 = 0;              /* Timer3 is the capture clock source for CCP2 */
}

void xor()
{
    if (PORTCbits.RC0 == PORTCbits.RC2)
    {
        PORTCbits.RC6 = 0;
    }
    else
    {
        PORTCbits.RC6 = 1;
    }
}

void main()
{
    TRISCbits.TRISC0 = 1;    /* RC0 and RC2 as inputs */
    TRISCbits.TRISC2 = 1;
    TRISCbits.TRISC6 = 0;    /* RC6 as output */
    xor();                   /* called only once, before the loop */
    LCD_Init();
    while (1)
    {
        CCP2CON = 0b00000101;        /* capture on every rising edge */
        PIR2bits.CCP2IF = 0;
        TMR3ON = 0;
        TMR3 = 0;
        while (!PIR2bits.CCP2IF);    /* wait for rising edge */
        TMR3ON = 1;
        CCP2CON = 0b00000100;        /* capture on every falling edge */
        PIR2bits.CCP2IF = 0;
        while (!PIR2bits.CCP2IF);    /* wait for falling edge */
        comtage = CCPR2;
        /* ticks -> seconds: Timer3 ticks at 2MHz, and 30.518 * 65536 ~= 2,000,000 */
        dephTempo = (((float)comtage / 30.518) / 65536);
        sprintf(pulse, "%.3f ", dephTempo);
        LCD_String_xy(0, 0, "the pulse width is : ");
        LCD_String_xy(2, 9, pulse);
    }
}
The xor() must be called inside the infinite loop, in particular inside the while (!PIR2bits.CCP2IF) busy-wait loops. As written, xor() runs only once before the loop, so RC6 never changes afterwards and CCP2 never sees an edge to capture.
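A minimal sketch of that fix (a hypothetical rearrangement of the poster's code, untested on hardware): keep updating RC6 while busy-waiting on the capture flag, so CCP2 actually has an edge to capture.

```c
/* Sketch only: poll xor() while waiting, so RC6 keeps tracking RC0 XOR RC2 */
CCP2CON = 0b00000101;          /* capture on rising edge */
PIR2bits.CCP2IF = 0;
while (!PIR2bits.CCP2IF)
{
    xor();                     /* without this, RC6 never changes and no edge arrives */
}
TMR3ON = 1;

CCP2CON = 0b00000100;          /* capture on falling edge */
PIR2bits.CCP2IF = 0;
while (!PIR2bits.CCP2IF)
{
    xor();
}
comtage = CCPR2;
```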
https://gmatclub.com/forum/if-x-and-y-are-integers-is-x-y-an-even-integer-253675.html | GMAT Changed on April 16th - Read about the latest changes here
It is currently 25 Apr 2018, 13:26
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in June
Open Detailed Calendar
If x and y are integers, is x + y an even integer?
Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 5280
GMAT 1: 800 Q59 V59
GPA: 3.82
If x and y are integers, is x + y an even integer? [#permalink]
16 Nov 2017, 01:43
[GMAT math practice question]
If x and y are integers, is $$x + y$$ an even integer?
(1) $$x$$ is an odd integer.
(2) $$x^2 + y^2$$ has a remainder of 2 when it is divided by 4.
_________________
MathRevolution: Finish GMAT Quant Section with 10 minutes to spare
The one-and-only World’s First Variable Approach for DS and IVY Approach for PS with ease, speed and accuracy.
"Only $79 for 3 month Online Course" "Free Resources-30 day online access & Diagnostic Test" "Unlimited Access to over 120 free video lessons - try it yourself" DS Forum Moderator Joined: 22 Aug 2013 Posts: 1036 Location: India Re: If x and y are integers, is x + y an even integer? [#permalink] Show Tags 16 Nov 2017, 08:00 IF x and y are both integers, then x+y is even when either both x/y are odd, or both x/y are even. (1) x is odd. But we dont know anything about y. Insufficient. (2) x^2 + y^2 gives remainder 2 when divided by 4. Which means x^2 + y^2 is even (since its of the form 4K + 2, where k is a non negative integer). x^2 + y^2 will be even when either x/y are both even or x/y are both odd. So now we know that x+y is also going to be even (an even number stays even and an odd number stays odd when raised to any integer power). Sufficient. Hence B answer PS Forum Moderator Joined: 25 Feb 2013 Posts: 1061 Location: India GPA: 3.82 Re: If x and y are integers, is x + y an even integer? [#permalink] Show Tags 16 Nov 2017, 09:34 MathRevolution wrote: [GMAT math practice question] If x and y are integers, is $$x + y$$ an even integer? (1) $$x$$ is an odd integer. (2) $$x^2 + y^2$$ has a remainder of 2 when it is divided by 4. Statement 1: No information about $$y$$. Insufficient Statement 2: $$x^2 + y^2=4q+2=Even$$ this is only possible if both $$x$$ & $$y$$ are even or both $$x$$ & $$y$$ are odd. In either case $$x+y=Even$$. Sufficient Another approach- $$(x+y)^2=x^2 + y^2+2xy=Even+Even$$ $$=>x+y=\sqrt{Even}$$ and as $$x$$ & $$y$$ are integer so $$x+y=Even$$. Sufficient Option B Senior Manager Joined: 06 Jul 2016 Posts: 425 Location: Singapore Concentration: Strategy, Finance Re: If x and y are integers, is x + y an even integer? [#permalink] Show Tags 16 Nov 2017, 10:05 MathRevolution wrote: [GMAT math practice question] If x and y are integers, is $$x + y$$ an even integer? (1) $$x$$ is an odd integer. 
x = odd y = odd x + y = even x = odd y = even x + y = odd Insufficient. AD are OUT. Quote: (2) $$x^2 + y^2$$ has a remainder of 2 when it is divided by 4. Multiples of 4 -> 4,8,12,..... => Multiples of 4 are always EVEN. We are given => $$\frac{(x^2+y^2)}{4}$$ = 4Q + 2 => The resulting integer will also be an even number. Sufficient. B is the answer. _________________ Put in the work, and that dream score is yours! Math Revolution GMAT Instructor Joined: 16 Aug 2015 Posts: 5280 GMAT 1: 800 Q59 V59 GPA: 3.82 Re: If x and y are integers, is x + y an even integer? [#permalink] Show Tags 19 Nov 2017, 18:15 => Forget conventional ways of solving math questions. For DS problems, the VA (Variable Approach) method is the quickest and easiest way to find the answer without actually solving the problem. Remember that equal numbers of variables and independent equations ensure a solution. Since the question includes 2 variables (x and y) and no equation, C is most likely to be the answer. Since this is an integer question (one of the key question areas), we should also consider choices A and B by CMT 4(A). Conditions 1) & 2) Since $$x^2 + y^2$$ has a remainder of 2 when it is divided by 4, $$x^2 + y^2$$ must be even. Since x is odd, $$x^2$$ is odd and so $$y^2$$ must also be odd. Therefore, y is odd, and $$x + y$$ is even. The answer is ‘yes’. Condition 1) Since we don’t know whether y is even or odd, this is not sufficient. Condition 2) The condition tells us that $$x^2+y^2=4k+2=2(2k+1)$$ is even. Since$$x^2+y^2=(x+y)^2-2xy$$, and $$2xy$$ is even, this implies that $$(x+y)^2$$ is also even. But this can only happen if x+y is even. So, the answer is ‘yes’. This condition is sufficient. Therefore, the answer is B. Normally, in problems which require 2 or more additional equations, such as those in which the original conditions include 2 variables, or 3 variables and 1 equation, or 4 variables and 2 equations, each of conditions 1) and 2) provide an additional equation. 
In these problems, the two key possibilities are that C is the answer (with probability 70%) and E is the answer (with probability 25%). Thus, there is only a 5% chance that A, B or D is the answer; this occurs in common mistake types 3 and 4. Since C (both conditions together are sufficient) is the most likely answer, we save time by first checking whether conditions 1) and 2) are sufficient when taken together. Obviously, there may be cases in which the answer is A, B, D or E, but if conditions 1) and 2) are NOT sufficient when taken together, the answer must be E.

Answer: B
"Free Resources-30 day online access & Diagnostic Test"
"Unlimited Access to over 120 free video lessons - try it yourself"
Intern
Joined: 12 Oct 2017
Posts: 39
Re: If x and y are integers, is x + y an even integer? [#permalink]
Show Tags
19 Nov 2017, 22:04
(1) x is an odd integer => absolutely insufficient since we don't know anything about y.
(2) x^2 + y^2 has a remainder of 2 when it is divided by 4 => x^2 + y^2 is even => we have 2 cases: x^2 and y^2 are both odd or both even.
- x^2, y^2 are odd => x, y are both odd => x + y = even
- x^2, y^2 are even => x, y are both even => x + y = even
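The parity case analysis used throughout this thread can be compressed into a single congruence argument (a sketch in the thread's own notation):

```latex
% Squares mod 4: (\text{even})^2 \equiv 0, \ (\text{odd})^2 \equiv 1 \pmod{4}.
% So x^2 + y^2 \bmod 4 can only be 0, 1, or 2.
x^2 + y^2 \equiv 2 \pmod{4}
\;\Longrightarrow\; x^2 \equiv y^2 \equiv 1 \pmod{4}
\;\Longrightarrow\; x,\, y \text{ both odd}
\;\Longrightarrow\; x + y \equiv 0 \pmod{2}.
```

In particular, the "both even" case never actually occurs under statement (2), but either way x + y would be even, so the statement is sufficient.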
---
Kindly press +1 kudos if the explanation is clear!
Thank you!
https://labs.tib.eu/arxiv/?author=Miguel%20S%C3%A1nchez-Portal | • ### GLACE survey: OSIRIS/GTC Tuneable Filter H$\alpha$ imaging of the rich galaxy cluster ZwCl 0024.0+1652 at z = 0.395. Part I -- Survey presentation, TF data reduction techniques and catalogue(1502.03020)
Feb. 10, 2015 astro-ph.CO, astro-ph.GA
The cores of clusters at 0 $\lesssim$ z $\lesssim$ 1 are dominated by quiescent early-type galaxies, whereas the field is dominated by star-forming late-type ones. Galaxy properties, notably the star formation (SF) ability, are altered as they fall into overdense regions. The critical issues to understand this evolution are how the truncation of SF is connected to the morphological transformation and the responsible physical mechanism. The GaLAxy Cluster Evolution Survey (GLACE) is conducting a study on the variation of galaxy properties (SF, AGN, morphology) as a function of environment in a representative sample of clusters. A deep survey of emission line galaxies (ELG) is being performed, mapping a set of optical lines ([OII], [OIII], H$\beta$ and H$\alpha$/[NII]) in several clusters at z $\sim$ 0.40, 0.63 and 0.86. Using the Tunable Filters (TF) of OSIRIS/GTC, GLACE applies the technique of TF tomography: for each line, a set of images at different wavelengths are taken through the TF, to cover a rest frame velocity range of several thousands km/s. The first GLACE results target the H$\alpha$/[NII] lines in the cluster ZwCl 0024.0+1652 at z = 0.395 covering $\sim$ 2 $\times$ r$_{vir}$. We discuss the techniques devised to process the TF tomography observations to generate the catalogue of H$\alpha$ emitters of 174 unique cluster sources down to a SFR below 1 M$_{\odot}$/yr. The AGN population is discriminated using different diagnostics and found to be $\sim$ 37% of the ELG population. The median SFR is 1.4 M$_{\odot}$/yr. We have studied the spatial distribution of ELG, confirming the existence of two components in the redshift space. Finally, we have exploited the outstanding spectral resolution of the TF to estimate the cluster mass from ELG dynamics, finding M$_{200}$ = 4.1 $\times$ 10$^{14}$ M$_{\odot} h^{-1}$, in agreement with previous weak-lensing estimates.
• ### The Pointing System of the Herschel Space Observatory. Description, Calibration, Performance and Improvements(1405.3186)
May 13, 2014 astro-ph.IM
We present the activities carried out to calibrate and characterise the performance of the elements of attitude control and measurement on board the Herschel spacecraft. The main calibration parameters and the evolution of the indicators of the pointing performance are described, from the initial values derived from the observations carried out in the performance verification phase to those attained in the last year and a half of the mission: an absolute pointing error around or even below 1 arcsec, a spatial relative pointing error of some 1 arcsec and a pointing stability below 0.2 arcsec. The actions carried out at the ground segment to improve the spacecraft pointing measurements are outlined. On-going and future developments towards a final refinement of the Herschel astrometry are also summarised. A brief description of the different components of the attitude control and measurement system (both in the space and in the ground segments) is also given for reference. We stress the importance of the cooperation between the different actors (scientists, flight dynamics and systems engineers, attitude control and measurement hardware designers, star-tracker manufacturers, etc.) to attain the final level of performance.
• ### The Herschel PACS photometer calibration - A time dependent flux calibration for the PACS chopped point-source photometry AOT mode(1308.4068)
Aug. 28, 2013 astro-ph.IM
We present a flux calibration scheme for the PACS chopped point-source photometry observing mode based on the photometry of five stellar standard sources. This mode was used for science observations only early in the mission. Later, it was only used for pointing and flux calibration measurements. Its calibration turns this type of observation into fully validated data products in the Herschel Science Archive. Systematic differences in calibration with regard to the principal photometer observation mode, the scan map, are derived and amount to 5-6%. An empirical method to calibrate out an apparent response drift during the first 300 Operational Days is presented. The relative photometric calibration accuracy (repeatability) is as good as 1% in the blue and green band and up to 5% in the red band. Like for the scan map mode, inconsistencies among the stellar calibration models become visible and amount to 2% for the five standard stars used. The absolute calibration accuracy is therefore mainly limited by the model uncertainty, which is 5% for all three bands.
• ### The lack of star formation gradients in galaxy groups up to z~1.6(1307.0833)
July 16, 2013 astro-ph.CO
In the local Universe, galaxy properties show a strong dependence on environment. In cluster cores, early type galaxies dominate, whereas star-forming galaxies are more and more common in the outskirts. At higher redshifts and in somewhat less dense environments (e.g. galaxy groups), the situation is less clear. One open issue is whether and how the star formation rate (SFR) of galaxies in groups depends on the distance from the centre of mass. To shed light on this topic, we have built a sample of X-ray selected galaxy groups at 0<z<1.6 in various blank fields (ECDFS, COSMOS, GOODS). We use a sample of spectroscopically confirmed group members with stellar mass M >10^10.3 M_sun in order to have a high spectroscopic completeness. As we use only spectroscopic redshifts, our results are not affected by uncertainties due to projection effects. We use several SFR indicators to link the star formation (SF) activity to the galaxy environment. Taking advantage of the extremely deep mid-infrared Spitzer MIPS and far-infrared Herschel PACS observations, we have an accurate, broad-band measure of the SFR for the bulk of the star-forming galaxies. We use multi-wavelength SED fitting techniques to estimate the stellar masses of all objects and the SFR of the MIPS and PACS undetected galaxies. We analyse the dependence of the SF activity, stellar mass and specific SFR on the group-centric distance, up to z~1.6, for the first time. We do not find any correlation between the mean SFR and group-centric distance at any redshift. We do not observe any strong mass segregation either, in agreement with predictions from simulations. Our results suggest that groups have a much smaller spread in accretion times with respect to clusters and that the relaxation time is longer than the group crossing time.
• ### On Star Formation Rates and Star Formation Histories of Galaxies out to z ~ 3(1106.5502)
June 27, 2011 astro-ph.CO
We compare multi-wavelength SFR indicators out to z~3 in GOODS-South. Our analysis uniquely combines U-to-8um photometry from FIREWORKS, MIPS 24um and PACS 70, 100, and 160um photometry from the PEP survey, and Ha spectroscopy from the SINS survey. We describe a set of conversions that lead to a continuity across SFR indicators. A luminosity-independent conversion from 24um to total infrared luminosity yields estimates of LIR that are in the median consistent with the LIR derived from PACS photometry, albeit with significant scatter. Dust correction methods perform well at low to intermediate levels of star formation. They fail to recover the total amount of star formation in systems with large SFR_IR/SFR_UV ratios, typically occurring at the highest SFRs (SFR_UV+IR \gtrsim 100 Msun/yr) and redshifts (z \gtrsim 2.5) probed. Finally, we confirm that Ha-based SFRs at 1.5<z<2.6 are consistent with SFR_SED and SFR_UV+IR provided extra attenuation towards HII regions is taken into account (Av,neb = Av,continuum / 0.44). With the cross-calibrated SFR indicators in hand, we perform a consistency check on the star formation histories inferred from SED modeling. We compare the observed SFR-M relations and mass functions at a range of redshifts to equivalents that are computed by evolving lower redshift galaxies backwards in time. We find evidence for underestimated stellar ages when no stringent constraints on formation epoch are applied. We demonstrate how resolved SED modeling, or alternatively deep UV data, may help to overcome this bias. The age bias is most severe for galaxies with young stellar populations, and reduces towards older systems. Finally, our analysis suggests that SFHs typically vary on timescales that are long (at least several 100 Myr) compared to the galaxies' dynamical time.
• ### Search for H alpha emitters in Galaxy Clusters with Tunable Filters(1009.1557)
Sept. 8, 2010 astro-ph.CO
Studies of the evolution of galaxies in galaxy clusters face a traditional complication: the difficulty of establishing cluster membership of the sources detected in the field of view. The determination of spectroscopic redshifts involves long exposure times when one needs to reach the peripheral regions of a cluster, or clusters at moderately large redshifts, while photometric redshifts often present uncertainties too large to offer significant conclusions. Mapping a cluster of galaxies with narrow-band tunable filters makes it possible to cover large redshift intervals with an accuracy high enough to establish the membership of sources presenting easily identifiable emission/absorption lines, such as H alpha. Moreover, the wavelength scan can include other lines such as [NII], [OIII] or $H_{\beta}$, allowing one to distinguish sources with strong star formation activity from those hosting an active galactic nucleus. All this makes it possible to estimate the star formation rate of the galaxies observed. This, together with ancillary data at other wavelengths, may lead to a good estimation of the star formation histories. It will shed new light on galaxy evolution in clusters and will improve our understanding of galaxy evolution, especially in the outer cluster regions, usually less studied and with significant unexploited data that cannot be correctly interpreted without redshift determination.
• A large sub-mm survey with Herschel will enable many exciting science opportunities, especially in an era of wide-field optical and radio surveys and high resolution cosmic microwave background experiments. The Herschel-SPIRE Legacy Survey (HSLS) will lead to imaging data over 4000 sq. degrees at 250, 350, and 500 micron. Major Goals of HSLS are: (a) produce a catalog of 2.5 to 3 million galaxies down to 26, 27 and 33 mJy (50% completeness; 5 sigma confusion noise) at 250, 350 and 500 micron, respectively, in the southern hemisphere (3000 sq. degrees) and in an equatorial strip (1000 sq. degrees), areas which have extensive multi-wavelength coverage and are easily accessible from ALMA. Two thirds of the sources are expected to be at z > 1, one third at z > 2 and about a 1000 at z > 5. (b) Remove point source confusion in secondary anisotropy studies with Planck and ground-based CMB data. (c) Find at least 1200 strongly lensed bright sub-mm sources leading to a 2% test of general relativity. (d) Identify 200 proto-cluster regions at z of 2 and perform an unbiased study of the environmental dependence of star formation. (e) Perform an unbiased survey for star formation and dust at high Galactic latitude and make a census of debris disks and dust around AGB stars and white dwarfs.
• ### Structural parameters of nearby emission-line galaxies(astro-ph/0403403)
March 17, 2004 astro-ph
We present the results of an investigation on the main structural properties derived from VRI and Halpha surface photometry of galaxies hosting nuclear emission-line regions (including Seyfert 1, Seyfert 2, LINER and starburst galaxies) as compared with normal galaxies. Our original sample comprises 22 active galaxies, 4 starbursts and 1 normal galaxy and has been extended with several samples obtained from the literature. Bulge and disc parameters, along with the B/D relation, have been derived applying an iterative procedure. The resulting parameters have been combined with additional data in order to reach a statistically significant sample. We find some differences in the bulge distribution across the different nuclear types that could imply families of bulges with different physical properties. Bulge and disc characteristic colours have been defined and derived for our sample and compared with a control sample of early type objects. The results suggest that bulge and disc stellar populations are comparable in normal and active galaxies.
https://www.gamedev.net/forums/topic/627963-opensharedresource-renders-black/ | # OpenSharedResource renders black
## Recommended Posts
I am a newbie to DirectX and I have a problem with OpenSharedResource.
I am acquiring a handle to a shared resource in one process and want to display this same resource in a secondary process.
In the secondary process I get a texture handle to the shared resource:
_d3dDevice->OpenSharedResource( _sharedHandle, __uuidof(ID3D10Texture2D), (LPVOID*) &_sharedTexture );
I then copy this resource to my back buffer, pBuffer:
HRESULT hr = _sharedTexture->QueryInterface( __uuidof( ID3D10Resource ), ( void** )&pSrcResource );
if ( FAILED( hr ) )
    return hr;

D3D10_TEXTURE2D_DESC desc;
ZeroMemory( &desc, sizeof(desc) );
_sharedTexture->GetDesc( &desc );

_d3dDevice->CopySubresourceRegion( pBuffer, 0, 0, 0, 0, pSrcResource, 0, 0 );
What I get is a black square drawn to my window. If I check the D3D10_TEXTURE2D_DESC I can see I have the correct dimensions etc but what appears on the screen seems to be a null texture.
Any ideas where I am going wrong? Is there something I need to do to the shared resource before I call CopySubresourceRegion? Should I be using a different technique to get at the shared resource?
https://codereview.stackexchange.com/questions/73977/code-that-finds-the-largest-palindrome-from-two-three-digit-factors | # Code that finds the largest palindrome from two three-digit factors
My code is really ugly, but I don't know how to make it better.
#include "stdafx.h"
#include <iostream>
#include <math.h>
using namespace std;
bool palindromeTest(double numberVectorAddition){//tests if the factors of the palindrome are both below 3 digits each.
double i = 999;
double c;
for (;; i--){
if (fmod(p, i) != 0)
continue;
if (fmod(p, i) == 0)
c = p / i;
if (1000 <= c){
cout << "One of the factors is too high. Looking for next largest palindrome..." << endl;
cout << endl;
return false;
}
else if (c <= 1000){
cout << "Factors of the number both below 1000 are " << i << " and " << c;
return true;
}
}
}
void nLargestPalindrome(double number){
int i = 5; //subtract "1" because array starts at zero
int intermediate = i; //intermediate value - equal to the array index;
double *decimalDigit = new double[i + 1];
while (0 <= i){ //acquire digits of number
decimalDigit[i] = (number - fmod(number, pow(10, i))) / pow(10, i);
cout << "Value of digit place " << i << " is " << decimalDigit[i] << endl;
number = fmod(number, pow(10, i));
i--;
}
for (int q = 0; q < 100; q++){ //decreases size of palindrome, and also tests after the decrease.
double numberVectorAddition = pow(10, 5)*decimalDigit[0] + pow(10, 4)*decimalDigit[1] + pow(10, 3)*decimalDigit[2] + pow(10, 2)*decimalDigit[3] + pow(10, 1)*decimalDigit[4] + decimalDigit[5];
cout << decimalDigit[0] << decimalDigit[1] << decimalDigit[2] << decimalDigit[3] << decimalDigit[4] << decimalDigit[5] << '\t';
break;
decimalDigit[2]--;
decimalDigit[3]--;
if (decimalDigit[2] == 0){
numberVectorAddition = pow(10, 5)*decimalDigit[0] + pow(10, 4)*decimalDigit[1] + pow(10, 3)*decimalDigit[2] + pow(10, 2)*decimalDigit[3] + pow(10, 1)*decimalDigit[4] + decimalDigit[5];
cout << decimalDigit[0] << decimalDigit[1] << decimalDigit[2] << decimalDigit[3] << decimalDigit[4] << decimalDigit[5] << '\t';
break;
if ((decimalDigit[1] == 0) && (decimalDigit[2] == 0)){
decimalDigit[1] = 9;
decimalDigit[4] = 9;
decimalDigit[2] = 9;
decimalDigit[3] = 9;
decimalDigit[0]--;
decimalDigit[5]--;
continue;
}
decimalDigit[2] = 9;
decimalDigit[3] = 9;
decimalDigit[1]--;
decimalDigit[4]--;
}
}
delete decimalDigit;
}
int main(){
double n = 997799; //997799 is equal to 999*999 - 2; palindrome is smaller.
nLargestPalindrome(n); //finds the largest palindrome below "n" that has both factors of three digits.
cin.get(); //stops the command window from closing prematurely
}
• Welcome to CodeReview.SE! Your question seems interesting. Would you be able to provide more information about the problem you are trying to solve and the solution you have chosen ? – SylvainD Dec 17 '14 at 21:15
• Can the code be made cleaner, i.e. more simplified? – Don Larynx Dec 17 '14 at 23:55
• @Josay: Is that large block of nested if loops needed? – Don Larynx Dec 18 '14 at 0:23
• My main concern is it is hard to understand what your code is supposed to do from your question. I guess that this comment //finds the largest palindrome below "n" that has both factors of three digits. is better than the current title. Also, you might want to quickly explain the algorithm you have applied. – SylvainD Dec 18 '14 at 8:26
Code formatting
Your code seems to be poorly indented. I suggest you have a look at the documentation of your favorite text editor to see how to fix this.
Basic logic
It doesn't make much sense to check :
if (fmod(p, i) != 0)
continue;
if (fmod(p, i) == 0)
Similarly,
if (1000 <= c){
return false;
}
else if (c <= 1000){
is quite redundant.
using namespace std;
Using using namespace std; is usually frowned upon, some will say that it is ok in a cpp file, some will say the opposite. I'll let you decide.
The right type
You are using double but long int are more than enough for what you are trying to do.
Right place to define variables
It is usually a good idea to define variables in the smallest possible scope. Among other things, it helps the reader not to have to keep too many things in mind. Also, it helps you see that intermediate is an unused variable. As a side-note, it is a good idea to activate all the warnings in your compiler to detect such a thing (and many other potential errors).
Define constants
Instead of having 5 in the middle of your code and need to comment it, you could store 6 in a constant named for instance NB_DIGITS and remove the need for any comment.
Use for loops when you can
Now, the loop to acquire digits can be rewritten :
for (int i = NB_DIGITS -1; 0 <= i; i--) { //acquire digits of number
decimalDigit[i] = (number - fmod(number, pow(10, i))) / pow(10, i);
cout << "Value of digit place " << i << " is " << decimalDigit[i] << endl;
number = fmod(number, pow(10, i));
}
Compute things only once
You call pow(10, i) way too many times compared to what you actually need. You can simply write :
double pow_ten = pow(10, i);
decimalDigit[i] = (number - fmod(number, pow_ten)) / pow_ten;
cout << "Value of digit place " << i << " is " << decimalDigit[i] << endl;
number = fmod(number, pow_ten);
Smarter decomposition of a number
Instead of going through the hassle of getting digits starting with the one with the strongest weight, it is much easier to start from the end. Also, this makes you perform the iteration in the logical order.
Better names
numberVectorAddition is a pretty bad name for a parameter. n conveys enough information as far as I can tell.
palindromeTest is also quite bad. hasDivisorsInRange is probably better.
Finally, decimalDigit is a bit redundant, as digits already include the notion of base 10; also, it should probably be plural to indicate that it's a collection.
No need for new/delete here
You don't need to use new/delete here. If you just define it long int digits[NB_DIGITS];, it will be deleted when it goes out of scope.
Clearer responsibility for functions
It could be interesting for your function to return values instead of printing them. This would lead to separation of concerns. One of the multiple advantages is to make your code easier to test.
At this point your code, looks like :
#include <iostream>
#include <math.h>
using namespace std;
const int NB_DIGITS = 6;
bool hasDivisorsInRange(long int n){//tests if the factors of the palindrome are both below 3 digits each.
for (long int i = 999;; i--){
if (n % i == 0)
{
long int c = n / i;
if (1000 <= c){
return false;
}
else if (c <= 1000){
cout << "Factors of the number both below 1000 are " << i << " and " << c << endl;
return true;
}
}
}
}
long int LargestPalindrome(long int number){
long int digits[NB_DIGITS];
for (int i = 0; i < NB_DIGITS; i++) { //acquire digits of number
digits[i] = number % 10;
number /= 10;
cout << "Value of digit place " << i << " is " << digits[i] << endl;
}
for (int q = 0; q < 100; q++){ //decreases size of palindrome, and also tests after the decrease.
long int n = pow(10, 5)*digits[0] + pow(10, 4)*digits[1] + pow(10, 3)*digits[2] + pow(10, 2)*digits[3] + pow(10, 1)*digits[4] + digits[5];
cout << digits[0] << digits[1] << digits[2] << digits[3] << digits[4] << digits[5] << '\t';
cout << n << endl;
if (hasDivisorsInRange(n))
return n;
digits[2]--;
digits[3]--;
if (digits[2] == 0){
long int n2 = pow(10, 5)*digits[0] + pow(10, 4)*digits[1] + pow(10, 3)*digits[2] + pow(10, 2)*digits[3] + pow(10, 1)*digits[4] + digits[5];
cout << digits[0] << digits[1] << digits[2] << digits[3] << digits[4] << digits[5] << '\t';
cout << n2 << endl;
if(hasDivisorsInRange(n2))
return n;
if ((digits[1] == 0) && (digits[2] == 0)){
digits[1] = 9;
digits[4] = 9;
digits[2] = 9;
digits[3] = 9;
digits[0]--;
digits[5]--;
continue;
}
digits[2] = 9;
digits[3] = 9;
digits[1]--;
digits[4]--;
}
}
return 0;
}
int main(){
long int n = 906608; //997799 is equal to 999*999 - 2; palindrome is smaller.
int pal = LargestPalindrome(n); //finds the largest palindrome below "n" that has both factors of three digits.
cout << "Solution is " << pal << endl;
}
Better algorithm
You are trying to iterate over palindromes in a quite complicated way (that I do not understand at all). To iterate over palindromes composed of 6 digits is nothing but iterating over values between 100 and 999 and adding the mirrored version of the number at the end.
This can be badly written :
long int LargestPalindrome(long int number){
for (int beg = (number / 1000); beg > 100; beg--)
{
// Could be done in a clean way with array but I'm lazy
int i = beg;
int a = i % 10; i/=10;
int b = i % 10; i/=10;
int c = i % 10; i/=10;
long int pal = beg * 1000 + 100 * a + 10 * b + c;
if (pal < number && hasDivisorsInRange(pal))
return pal;
}
return 0;
}
For n = 906608, this finds 888888, which happens to be a better solution than the one found by your code.
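The enumeration idea above (walk the three-digit half downward and mirror it) is easy to sanity-check. Here is a quick Python sketch, separate from the C++ under review, just to verify the numbers quoted:

```python
def largest_palindrome_at_most(limit):
    """Largest 6-digit palindrome <= limit that is a product of two
    3-digit numbers; returns (palindrome, factor1, factor2) or None.
    `limit` is assumed to be a 6-digit bound."""
    for beg in range(limit // 1000, 99, -1):
        # Mirror the 3-digit half to build the palindrome, e.g. 924 -> 924429.
        pal = int(str(beg) + str(beg)[::-1])
        if pal > limit:
            continue
        # Look for a divisor pair with both factors in [100, 999].
        for i in range(999, 99, -1):
            if pal % i == 0 and 100 <= pal // i <= 999:
                return pal, i, pal // i
    return None

print(largest_palindrome_at_most(906608))  # -> (888888, 962, 924)
print(largest_palindrome_at_most(999999))  # -> (906609, 993, 913)
```

The second call confirms the classic result that 906609 = 913 × 993 is the largest such palindrome overall, which is why the review's n = 906608 case has to settle for 888888.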
Also, you could perform a smarter research for divisors.
I have to go, I'll try to continue this answer at some point.
http://ecyf.pamish.de/absorbance-vs-concentration-graph.html | # Absorbance Vs Concentration Graph
Determination of DNA concentration by spectrophotometric estimation. The absorbance is proportional to the amount of maltose present. From the graph, determine the molarity of a copper sulfate pentahydrate solution from its measured absorbance.

Concentration of unknown (ppm iron); estimated uncertainty in concentration (from s_m and s_b); the two reactions that are important for this experiment. Report: submit plots and answer the questions on a separate sheet of paper, and plot absorbance vs. concentration. Equation 1 determined the size of the gold nanoparticles to be 24 nm, whereas equation 3 (the concentration method) determined it to be 16 nm.

Concentration of known solutions: for each solution, you measure the absorbance at the wavelength of strongest absorption, using the same container for each one. Since the absorbance of a CV (crystal violet) solution is directly proportional to the concentration of CV, according to Beer's law, the actual [CV] can be replaced by A_max, the solution's maximum absorbance (somewhere around 600 nm).

What is an absorbance spectrum? How do we know that the BCA reagent has a λmax of 562 nm? This can be determined by measuring the absorbance of a substance over a range of wavelengths to create an absorbance spectrum, a plot of absorbance vs. wavelength. And then, essentially, this absorbance is going to sit on the line: we measure the absorbance of our unknown and, by comparing to our standard curve, determine the concentration. Any electronic noise will affect the absorbance values. Obtain the trendline, line equation, and R² value and have them appear on the graph.

The visible light absorbance of each of the prepared calibration solutions will be measured. This first requires absorbance data on a series of solutions of known concentration, called standard solutions. Beer's law says that the relationship between the absorbance of the chromophore and its concentration is linear, allowing construction of a standard curve by plotting absorbance versus concentration, such as shown in Figure 1.
Draw a standard curve on graph paper by plotting absorbance (y-axis) versus concentration (x-axis). I have to graph a standard curve of absorbance vs. concentration, and I have the values, but I am not sure what units to use for absorbance. (Absorbance itself is dimensionless; only the concentration axis carries units.) Using your standard curve, determine the concentration of phenol red in the sample with unknown concentration. Determine the best wavelength (highest absorbance) and record it on your report. Under the column showing a list of wavelengths, click on Clear Selections and then select the wavelength closest to 600 nm as the desired wavelength. As we don't know the concentration of the cordial in molarity, and the manufacturers say you should be using a 20% (v/v) solution of the concentrate, the units of the x-axis in this case are %.

The standard (calibration) curve is obtained by drawing the best straight line that most closely approximates the data points: you determine the concentration of an analyte in solution by plotting the absorption of standards versus their known concentrations to calculate ε.

A = -log(T). Percent transmittance is simply 100 × T, so absorbance is the negative base-10 logarithm of the transmitted fraction. So anyway, I worked out a molecular weight of 158, which is extremely close to that of 1,2,3,4,5-pentahydroxyphenol.

Use of optical densities: measuring the optical density of growing cultures is a common method to quantify various important culture parameters like cell concentration, biomass production or changes in cell morphology.

The slope of the graph (absorbance over concentration) equals the molar absorptivity coefficient times the path length, ε × l. The absorption of a sample is then measured, and that value, combined with the calculated value of ε, is entered back into the Beer's law equation to determine concentration. In an ideal Beer-Lambert case, if a concentration of atoms c produces an absorbance A, then a concentration 2c should produce an absorbance 2A.
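The A = -log(T) relationship mentioned above is a one-liner to implement. A small sketch (the function name is mine, not from any particular lab manual):

```python
import math

def absorbance_from_percent_T(percent_T):
    """A = -log10(T) = log10(100 / %T), for percent transmittance in (0, 100]."""
    return math.log10(100.0 / percent_T)

# 100 %T means no light absorbed (A = 0); each factor-of-10 drop in
# transmitted light adds 1 absorbance unit.
print(absorbance_from_percent_T(100))  # -> 0.0
print(absorbance_from_percent_T(10))   # -> 1.0
print(absorbance_from_percent_T(50))   # -> ~0.301 (log10 of 2)
```

This also shows why the "theoretical best accuracy near 1 AU" claim below is plausible: at A = 1 only 10% of the light reaches the detector, and beyond A ≈ 2 the signal becomes too small to measure reliably.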
To get the rate law, we need to convert from absorbance to dye concentration. added to a large excess of sodium hydroxide, and the absorbance (Abs) of the red solution will be measured at specific time intervals. Ideally, the product concentration would have increased steadily, but something apparently inhibited the reaction, slowly at 4C, faster at the higher temperatures. concentration of iron (Beer's Law) for these standard solutions. Calibration Curve Using Excel or Graphical Analysis, plot a graph of absorbance vs. Prepare a calibration curve and convert absorbance to mg/L as follows: Make an absorbance versus concentration graph on graph paper: (a) Make the vertical (y) axis and label it "absorbance. Colorimetric methods represent the simplest form of absorption analysis. Using the absorbance values obtained for a series of volumetric. This is a calibration curve. 85 were prepared. In this method, the following relationship is utilized: Concentration of Unknown Concentration of Standard Absorbance of Unknown. Office Tutorials - Determining the Concentration of an Unknown Sample (Microsoft Excel 2010) (assuming that we are able to obtain its absorbance). Its values are constant at a particular wavelength and concentration for a given species. See Figure 3. absorbance = 0. Still, its worth making a graph. The measured absorbance at each urea concentration is Y. absorbance of each sample tube in Table 4. The visible light absorbance of each of the prepared calibration solutions will be measured. This first requires absorbance data on a series of solutions of known concentration called standard solutions. Beer's law says that the relationship between the absorbance of the chromophore and its concentration is linear, allowing construction of a standard curve by plotting absorbance versus concentration, such as shown in Figure 1. time follows three phases marked on the graph below. 
I am interested to know the method which can be used to calculate the absorbance coefficient from an absorbance vs wavelength graph which is obtained from UV-vis measurements. Transmittance can be recognized as the amount of light passed through that sample. The absorbance and concentration data is then plotted in a calibration curve to establish their mathematical relationship. Action: The kinetic traces we obtain in steps 1 through 3 give absorbance of the solution versus time. Determination of Unknown Concentration: Using the best-fit equation from the graph of. Beer Lambert Law Calculator. 009855204664079503 to 0. concentration is plotted for a series of standard solutions, a direct relationship should result. Include the spreadsheet and graph (known. Materials: VIS spectrophotometer 0. and the absorbance at max. On the simulation, the solution chosen for you is of a generic “Drink Mix” and measure the Absorbance for 5 different concentrations, chosen for you in the table below, with the fixed wavelength setting and graph. Graphing Example 2. Appendix: Plotting Data on Microsoft Excel2003. The theoretical best accuracy for most instruments is in the range near 1 AU. uneven loss of growth medium depending on the microwell position on the plate (edge and corner versus central positions) led to significantly different readouts in a cell metabolism assay (alamarBlue®) as indicator for cellular health and proliferation. Note: I am using Excel for Windows 2016. Still, its worth making a graph. The concentration of an unknown solution can then be read from the graph if its absorbance is measured. wavelength for a hypothetical compound. Graph The graph displays a full-spectrum analysis of the sample in the cuvette holder. How can I calculate enzyme velocity from absorbance? Absorbance = epsilon X concentration X length ( Beer Lambert's law) From the linear part of the graph first plot Time vs absorbance for. 
, cuvettes) are used for each measurement, so that the pathlength l is constant. Part (b) assessed the students’ understanding of the experimental methodology by requiring them to select an appropriate modification to a poor experimental setup. This graph is called a standard curve. pH and explain the shape of the resulting graph. Darling, Nicholas R. give a graph of corrected instrument response versus analyte concentration, and this graph in turn can be used to find the concentration of an unknown. While many modern instruments perform Beer's law calculations by simply comparing a blank cuvette with a sample, it's easy to prepare a graph using standard solutions to determine the concentration of a specimen. Design a procedure for creating a solution of a given concentration. Click on Insert then Chart on the drop down menu, then Scatter, then Next and finally Series to get the following. The x-axis shows values from 4. Have several copies and a transparency of the graph available. absorbance Prepare and include a graph of absorbance vs. The main difference between absorbance and transmittance is that absorbance measures how much of an incident light is absorbed when it travels in a material while transmittance measures how much of the light is transmitted. 6 x 10-3 mg/mL. wavelength), c is the molar concentration of the solute, and l is the path length or the distance that the light travels through the cuvette (usually 1 cm). The graph slowly increased in the beginning and in the end, while the gradient in the middle is extremely steep, which represents a huge change in absorbance in 60% ethanol range. If the standard curve is leveling off, then you should not use the points with the higher absorbance. Plot the following data as absorbance vs concentration of X in excel Absorbance, A Molar Concentrationof X 0. of each patient. Absorbance and transmittance are two very important concepts discussed in spectrometry and analytical chemistry. 
What is the concentration of a sample if its absorbance is 0. 714*10^4 and y-intercept 0. The absorbance of the unknown solution is related to the absorbance of a single calibration standard solution. Time or vs. +2] and the absorbance of the complex ion can be measured. 0148 absorbance units per second. concentration of ligand has slope b/y. concentration (on the x-axis). Transmittance. Use the plot you used to construct the graph. Under these conditions, Beer's law describes a straight-line relationship for a graph of absorbance versus solute concentration whose slope is simply the product of the molar absorptivity constant and path length. Graphing the absorbance data (ln A vs time and 1/A vs time) will reveal whether the “fading” reaction is first or second order in phenolphthalein. The absorbance and concentration data is then plotted in a calibration curve to establish their mathematical relationship. r The graph is a full page. The Bradford assay for protein is widely used because of its sensitivity, speed, convenience, lack of need for a UV-capable spectrophotometer, and adaptability to 96-well plates. Use a hypothetical absorbance and the transparency to show how to read the concentration from the graph. This plot is known as a standard curve, and should look something like the figure shown below. Then you can draw a vertical line from that intersection to the x-axis to determine the concentration. A pretty Beer's law graph (concentration vs. To determine if your data is consistent with Beer's law, plot a graph of absorbance vs. The absorbance measurements of the standard solutions are used to determine the concentration of iron in the pill solution, which is used to determine the amount of iron present in the original pill. max) for measuring the absorbance of a given solution. 62/428,029 filed Nov. wavelength (x) on a piece of graph paper and determine where the. 
Using the following data points, graph absorbance versus concentration (absorbance on the y-axis and concentration on the x-axis) using the piece of graph paper included with the lab. absorbance and the known concentration (Figure 2). Creating the Initial Scatter Plot. The direct relationship between absorbance and concentration for a solution is known as Beer’s law. In class today to introduce solutions we did a quick experiment with a blue substance and light. Their graphs resulting from their experiment are very similar to the graph resulting from this experiment. Graph The graph displays a full-spectrum analysis of the sample in the cuvette holder. Quantifying a DNA, RNA or protein sample concentration is now as easy as a click of the pipette, a push of a button and a dab of tissue to clean up. Spectrophotometric Analysis of Mixtures: Simultaneous Determination of Two Dyes in Solution Jo Melville and Giulio Zhou 9/27/2012 1 Abstract In this experiment, we created a set of 8 concentrations of 2 dyes, then used a spectrophotometer to calculate the absorbance of the dyes with respect to both concentration and wavelength. Where does denaturation fit into the graph? If another enzyme from a north sea crustacean was studied and its enzyme activity was plotted on the graph, where would it appear? If a hot springs bacterial enzyme was studied and its activity data was plotted, where would it lie?. 375 during this time. Then for each standard I will measure the absorbance using the machine and create a straight line plot of Absorbance (y-axis) vs. Dilute the resulting solution to volume with deionized water and measure the absorbance at 332 nm, 303 K by using 1. Best Answer: beers law: A=e l c so when plotting A vs. 
To do so, several solutions of known concentration (the standards) are prepared, and the absorbance of each is measured A student prepared several five standard solutions of copper (I) acetate solutions to determine the molar extinction coefficient of copper (II) acetate at 625 nm and the concentration of an unknown. In an ideal Beer-Lambert case if a concentration of atoms, c, produces an absorbance, A, then a concentration 2c should produce an absorbance 2A. Each of the absorbance values was then plotted on one of three graphs, which graph pH versus absorbance for a given week. Organize, analyze and graph and present your scientific data. Beer’s Law, which relates absorbance to concentration, will be derived as part of experimental module 2. 1 and enter your absorbance data from Table 1. DNA easily dissolves in aqueous solutions. -these standards are than plotted as an absorbance vs concentration graph-after the curve is plotted, the unknown solution can be determined by measuring the absorbance of the unknown solution and comparing it to what concentration it would fall under basing it off the standard curve. absorbance of each solution and then plot absorbance versus concentration, we should get a straight line with a y-intercept of zero and a slope of k (where k = l). In the graph above, it was the second strongest absorbance reading, second only to phenol. According to Beer's law, a larger amount of light being transmitted through the sample corresponds to a smaller amount of light being absorbed by the sample. Thus readings should always be taken in the region where all reagents are in excess where the curve of abs vs. However, at high concentrations (10 mg/ml and above), dissolved DNA is viscous. Strategy: A Use the data in the table to separately plot concentration, the natural logarithm of the concentration, and the reciprocal of the concentration (the vertical axis) versus time (the horizontal axis). point is the point on the graph of absorbance vs. 
wavelength: Select Store Latest Run from the Experiment menu. (1) Tabulate and plot the absorbance vs. According to the Beer-Lambert Law, absorbance is proportional to concentration, and so you would expect a straight line. enzyme concentration. concentration will show a linear relationship. I'm not given $\epsilon$ or concentration at any other point, but I'm supposed to be able to calculate the final concentration and maximum reaction rate. What wavelength should you choose to measure the copper concentration spectrophotometrically? Why? 2. To a 50 cm 3 volumetric flask add 5 cm 3 of 5% ethanoic acid solution, and. Curve for Ferroin Solution This calibration 0. Main Difference – Absorbance vs. Creating the Initial Scatter Plot. Introduction. The objective of this lab is to calculate the molar extinction coefficients of three different dyes from their Beer's Law plot. The two plots above are an Absorbance spectrum on the left, and a calibration plot on the left. Plot the absorbance vs concentration for each standard solution on a graph. Students will be using the data collected in Lesson #7 to graph a standard curve of absorbance vs. If you do not have data from your own columns, you can use the sample data sheet provided. Sample Preparation: When determining the protein concentration of an unknown sample, several dilutions. You can plot the concentration versus absorbance, then perform linear regression to find the formula that relates concentration and absorbance. To do so, several solutions of known concentration (the standards) are prepared, and the absorbance of each is measured A student prepared several five standard solutions of copper (I) acetate solutions to determine the molar extinction coefficient of copper (II) acetate at 625 nm and the concentration of an unknown. Because when you graph a molar concentration vs. Also in theory, since a and b are constants, a simple measurement of absorbance should. 
I simply used the difference between the baseline and the sample as a measure of absorbance (A=I-Io) at particular wavelengths as you deduced, and then simply converted this to a percentage absorbance. (The absorbance is a property of the solution, not the protein itself. If you graph light absorbance versus concentration for a series of solutions of known concentration, the line, or standard curve, which fits to your points can be used to figure out the concentrations of unknown solutions. Absorbance vs concentration. Predict what a graph of absorbance versus concentration would look like. When a graph of absorbance vs. You will then use the Vernier spectrometer to measure the absorbance and transmittance of each dilution at the λ max. point is the point on the graph of absorbance vs. 0 cm quartz cell against a similarly prepared blank of the same pH. Phenol has a large peak at 220 nm, over 2. Investigating Absorption and Concentration. Biomass concentration is one of the most critically needed measurements in fermentation studies. In the current study, the Bradford dye will be mixed with known concentrations of a protein, in this case Bovine Serum Albumin (BSA). Use the initial slope (0-5 minutes) as an estimate of V and plot V vs temperature. the BCG is in the acidic form. Since slope (m) = Absorbance / concentration, [K2CrO4] = absorbance/slope = 0. of each patient. (Response, absorbance, intensity, peak height, etc. ) Tabulate the absorbance readings in a table labeled Table I. Oxidation of Ethanol by Chromium(VI) Adapted by J. of specified. The y-axis shows values from -0. Absorbance vs. Action: The kinetic traces we obtain in steps 1 through 3 give absorbance of the solution versus time. absorbance range: 0. Determine the concentrations of the unknown samples from the graph. We can use it to determine the concentration of a chemical in solution. Use the initial slope (0-5 minutes) as an estimate of V and plot V vs temperature. 
concentration, the slope is equal to the molar absorptivity, ε, if the path length is 1 cm. Glycogen concentration is measured by absorbance. Then you plot a graph of that absorbance against concentration. concentration for the standard solutions, a direct relationship should result. This is done. You measure their absorbance, find that point on the standard curve, and then see which concentration matches up to it. This is a calibration curve. concentration of methylene blue in Excel using the exported data file. ,2013) From figure 2 above, the graph shows a straight line, which illustrates the direct proportionality of concentration vs. The graph slowly increased in the beginning and in the end, while the gradient in the middle is extremely steep, which represents a huge change in absorbance in 60% ethanol range. concentration for the standard cobalt solutions. Appendix: Plotting Data on Microsoft Excel2003. Obtain the trendline, line equation, and R2 value and have them appear on the graph. We learned from the Beer-Lambert law, that is a linear relationship between absorbance and concentration. Use a ruler to determine the linear portion of the plots and note the absorbance limit (remember the plot curves down at the higher concentrations). From that we will do the same with an unknown protein concentration and determine it’s concentration from where it lays on the standard graph. Absorbance of Nickel in the unknown sample was obtained to be 0. Graph wavelength on the x-axis and either absorbance or percent transmittance on the y-axis. Your images and procedures may differ slightly from those here. 1 you learned a method of measuring the reaction as peroxidase converts hydrogen peroxide to oxygen and water. If we know that 170 mg/L of protein has an absorbance of 0. Thus, molar absorptivity can most easily be calculated using a graph, when you have varied known concentrations of the same chemical species. 
Why is the reaction mixture quickly transferred to the cuvette for data collecting?. You measure their absorbance, find that point on the standard curve, and then see which concentration matches up to it. Construct and evaluate a calibration curve by plotting absorbance vs concentration 3. However, as the concentration of a solution increases the transmission percentage decreases. Generating and Using a Calibration Graph. This equation is then used to calculate the concentration of an unknown phosphorus solution. Make a plot on the graph paper provided with the absorbance on the y-axis and the wavelength on the x-axis. Solutions of analyte may absorb light of different wavelengths. 1 x Suppose an unknown SCN− solution is treated in exactly the same manner as described in Part B of the procedure. Add A Best Fit Line. Absorbance vs concentration. The y-axis shows values from -0. concentration is linear. Using the absorbance values obtained for a series of volumetric. Beer law can be used for absorbance values between 0 and 1. If the solution is known, then we can create a linear graph like this: Figure 2: This shows the Absorbance vs. For example: “Absorbance at 630 nm vs. If there are two different solutions with the same concentration, the absorbance would not be the same because the molecules would be arranged differently. Make a Beer's Law plot of absorption versus concentration of FeSCN 2+ (A vs [FeSCN 2+]) for these 7 points. Draw a Standard Curve on graph paper by plotting Absorbance (Y axis) versus Concentration (X axis). Since a and b are both constants, equation (2) has the form of a straight line, y = mx + b, with an intercept, b, of zero. Analyzing each of these standards using the chosen technique will produce a series of measurements. The slope of the graph (absorbance over concentration) equals the molar absorptivity coefficient, ε x l. asked by Madasan on July 3, 2010; chemistry. 
Concentration of unknown (ppm iron) Estimated uncertainty in concentration (from s m and s b) The two reactions that are important for this experiment Report: (Submit plots and answer the questions on a separate sheet of paper) Plot absorbance vs. Beer's Law is applied most accurately when a calibration graph is used. 6) Plot absorbance vs. 9 mls of water. plot of absorbance at max versus concentration is a straight line plot known as the eer’s Law plot. The slope has units of Absorbance/concentration. Both the absorbance and plate count plots can be made on the same graph. absorbance graph, the graph is linear, making the graph easier to read. 02 M 1,10-phenanthroline Dilute to volume and measure absorbance at max Construct Beer’s Law Plot (A vs c) and calculate %Fe in original solid sample. DETERMINATION OF MAXIMUM ABSORBANCE FOR FOOD COLORING. UV-VIS Nomenclature and Units measure very small absorbances that are very close to zero absorbance. Question: What's the difference in using a calibration curve in absorption spectrometry vs other analytical methods such a fluorescence or emission spectroscopy?. measurement (Absorbance in this example) vs. wavelength (molar absorptivity constant) and the calibration plot is vs. the concentration of a given solution, you can use a few different methods. concentration on semi-logarithmic graph paper; both graph papers can be found at the end of this exercise. Determining An Equilibrium Constant Using Spectrophotometry and Beer's Law Objectives: 1. Introduction: In this experiment, you will be given an unknown mixture of acetone, benzene and chloroform you will determine the %v/v1 composition of the solution by analyzing the UV absorbance spectrum of a 1:100 dilution of your sample in. Absorbance (A) Concentration (C) slope = ab. The absorbance at 420 nm for increasing concentrations of b-galactosidase were determined at 5, 10, 30, and 60 minutes. 
Once Vial #7’s absorbance has been measured, the second calibration point can be plotted. Read the absorbance and transmittance of the assigned unknown solutions and record them on the data sheet. Absorbance refers to a measure of the capacity associated with a substance as regards absorption of light of a specified wavelength. Absorbance vs concentration. Using the following data points, graph absorbance versus concentration (absorbance on the y-axis and concentration on the x-axis) using the piece of graph paper included with the lab. The readings had a wide range from 0. Absorbance Absorbance vs. If we know that 170 mg/L of protein has an absorbance of 0. Online photometry allows continuous real time. As mentioned before, the standard curve is a plot of absorbance (y-axis) vs. You could also adjust the pH of a BCG solution to a value equal the pKIn. If you graph absorbance versus concentration for a series of known solutions, the line, or standard curve, which fits to your points can be used to figure out the concentrations of an unknown solution. 714*10^4 and y-intercept 0. When a graph of absorbance vs. The direct relationship between absorbance and concentration for a solution is known as Beer’s law. Why is the reaction mixture quickly transferred to the cuvette for data collecting?. Step 4: To determine the concentration of an unknown, analyze the unknown sample along with a blank, subtract the. 260/280 and 260/230 Ratios NanoDrop® ND-1000 and ND-8000 8-Sample Spectrophotometers C As absorbance measurements will measure any molecules absorbing at a specific wavelength, nucleic acid samples will require purifi-cation prior to measurement to ensure accurate results. (25 cr) Since the absorbance will be different with different wavelength settings, you must first determine which. How does Concentration affect how much light is absorbed and transmitted through the solution? 
As the concentration increases, the absorbance increases and less light is transmitted through. free chlorine concentration (Beer’s law), you will be able to determine the free chlorine content of your swimming pool sample. This type of graph is known as a Standard Curve. c) To display the straight line on the graph of absorbance vs. Absorption may be presented as transmittance (T = I/I 0) or absorbance (A= log I 0 /I). Record the absorbance in the above table. absorbance vs. Table I Prepare a plot of absorbance (y-axis) vs. This equates to a change of + 0. Make a graph of absorbance vs. the concentration in moles/L of the chlorophyll-a in spinach was determined by plugging the absorbance as y in the calibration (standard) curve trendline equation and solving for x the concentration of the chlorophyll-a in spinach was in calculated in units of. concentration is expected to produce a straight line, for light absorbance is usually directly proportional to concentration. Get the best straight line and the slope of this line. 0 cm quartz cell against a similarly prepared blank of the same pH. Checklist for Graphs Make sure to read and follow the directions in the graphing appendix of the lab manual. ) To determine the concentration of an unknown by evaluating the relationship. In the study, bleach is present in large excess so that the concentration of OCI- is essentially constant throughout the reaction. Transmittance. The value of e is the slope/gradient/rate of a line graph of an OD vs concentration graph, i. Both are plotting absorbance, but the spectrum plots it vs. If uncontaminated, the concentration of protein in aqueous solution can be determined from the absorbance at 280 nanometers, provided the extinction coefficient is known for the protein in question, since the amounts of aromatic amino acids per milligram of protein varies among different proteins. concentration give a straight line with a slope of e·l. 
Calculating the molar absorbance coefficient (ε) from absorbance and concentration data. The concentration thus obtained has to be multiplied by the dilution factor. Learn more at http://www. AP® CHEMISTRY 2006 SCORING GUIDELINES (Form B) solution of unknown concentration has an absorbance of 0. You measure their absorbance, find that point on the standard curve, and then see which concentration matches up to it. Determining k values: • In a spreadsheet, plot absorbance vs. Predict what a graph of absorbance versus concentration would look like. The change in absorbance will be measured from 0-20 seconds, and the rate of reaction can be calculated by finding the slope of the absorbance vs. absorbance or a percent transmittance value. The Beer-Lambert law is used in chemistry to relate the concentration of a solution to the amount of light it absorbs. concentration (x-axis) using the above data. In the example below, the standard absorbance values for abx155737, Rat IL6 ELISA Kit, are shown as a reference. The absorption of a sample is then measured and that value combined with the calculated value of ε is entered back into the Beer’s law equation to determine concentration. Standard Curve of Net Absorbance versus protein sample concentration 0 0. The standard curve of absorbance vs nmole/ml of nitrophenol The amount of nitrophenol produced in the assay vs time a) What is the purpose of each graph? The standard curve of absorbance was used to estimate the concentration of nitrophenol based on the OD410. Reaction rate vs. concentration gives a straight line at a particular wavelength and temperature (see. For a single solute, absorbance and concentration are directly proportional if the path length is constant. Using the values obtained from the spectrophotometer, plot each point on a line graph. If this is not the case, you and your partner can consider re-doing the graph. Graph The graph displays a full-spectrum analysis of the sample in the cuvette holder. 
Print two copies of the graph—one for the white pages and one for the yellow pages in the results section of your lab report. the BCG is in the acidic form. Plot the absorbance vs concentration for each standard solution on a graph. But when making a calibration graph you are looking at the absorbance based on concentration. On the simulation, the solution chosen for you is of a generic “Drink Mix” and measure the Absorbance for 5 different concentrations, chosen for you in the table below, with the fixed wavelength setting and graph. If you graph light absorbance versus concentration for a series of solutions of known concentration, the line, or standard curve, which fits to your points can be used to figure out the concentrations of unknown solutions. , a standard curve. Because when you graph a molar concentration vs. The y-axis shows values from -0. Use the TI Graph link cable and program to transfer the graph of absorbance vs. We then measure the absorbance of our unknown, and by comparing to our standard curve, determine the concentration. To do so, several solutions of known concentration (the standards) are prepared, and the absorbance of each is measured. The concentration of the unknown sample can be determined by measuring its absorbance. Question: What's the difference in using a calibration curve in absorption spectrometry vs other analytical methods such a fluorescence or emission spectroscopy?. Absorbance. Investigating Absorption and Concentration a. the unknown concentration of sample. A trend line based on the collected data is given at y=0. Standard Curve of Net Absorbance versus protein sample concentration 0 0. 
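The calibration workflow described repeatedly above — measure the absorbance of standards of known concentration, fit a least-squares line to absorbance vs. concentration, then invert the fitted line to read off an unknown concentration — can be sketched in a few lines of plain Python. The standard data below are hypothetical, chosen only to illustrate the procedure:

```python
# Beer's law calibration: fit A = m*c + b to standards by least squares,
# then invert the fitted line to find an unknown concentration.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - m * sx) / n                          # intercept
    return m, b

# Hypothetical standards: concentration (x-axis) vs. absorbance (y-axis).
conc = [0.0, 0.1, 0.2, 0.3, 0.4]
absb = [0.00, 0.12, 0.25, 0.36, 0.49]
m, b = fit_line(conc, absb)

# Unknown sample with measured absorbance 0.30: read its concentration
# off the fitted line by solving A = m*c + b for c.
unknown_c = (0.30 - b) / m
```

The slope of this line is the product of molar absorptivity and path length, so the same fit also estimates ε when the path length is known.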
http://www.hep.ph.ic.ac.uk/seminars/abstracts/2019/20190710.html | Srubabati Goswami (PRL)
Synergy between atmospheric and long-baseline neutrino experiments in determination of oscillation parameters
Abstract: Oscillation of atmospheric neutrinos was first established conclusively by the Super-Kamiokande experiment. This was later confirmed by the long-baseline beam-based experiments K2K, MINOS and, more recently, T2K and NOVA. The currently unknown neutrino oscillation parameters are the mass hierarchy, the octant of $\theta_{23}$ and the CP phase $\delta_{CP}$. One of the main problems in the unambiguous determination of these parameters is the occurrence of degeneracies. In this talk I will discuss the various degeneracies that hinder the definitive determination of these parameters. I will also discuss how the combination of long-baseline and atmospheric neutrino data can help in resolving some of these issues.
https://ug.mathematicstip.com/3054-34-the-chain-rule-mathematics.html | # 3.4: The Chain Rule - Mathematics
We have covered almost all of the derivative rules that deal with combinations of two (or more) functions. The rule still missing is the one for composition of functions (i.e., one function "inside'' another).
One example of a composition of functions is (f(x) = cos(x^2)). We currently do not know how to compute this derivative. If forced to guess, one would likely guess (f^prime(x) = -sin(2x)), where we recognize (-sin x) as the derivative of (cos x) and (2x) as the derivative of (x^2). However, this is not the case; (f^prime(x) \neq -sin(2x)). In Example 62 we'll see the correct answer, which employs the new rule this section introduces, the Chain Rule.
Before we define this new rule, recall the notation for composition of functions. We write ((f circ g)(x)) or (f(g(x))),read as "(f) of (g) of (x),'' to denote composing (f) with (g). In shorthand, we simply write (f circ g) or (f(g)) and read it as "(f) of (g).'' Before giving the corresponding differentiation rule, we note that the rule extends to multiple compositions like (f(g(h(x)))) or (f(g(h(j(x))))),etc.
To motivate the rule, let's look at three derivatives we can already compute.
Example 59: Exploring similar derivatives
Find the derivatives of
1. (F_1(x) = (1-x)^2),
2. (F_2(x) = (1-x)^3,) and
3. (F_3(x) = (1-x)^4.)
We'll see later why we are using subscripts for different functions and an uppercase (F).
Solution
In order to use the rules we already have, we must first expand each function as
1. (F_1(x) = 1 - 2x + x^2),
2. (F_2(x) = 1 - 3x + 3x^2 - x^3) and
3. (F_3(x) = 1 - 4x + 6x^2 - 4x^3 + x^4).
It is not hard to see that:
[\begin{align*} F_1^\prime(x) &= -2 + 2x \\ F_2^\prime(x) &= -3 + 6x - 3x^2 \\ F_3^\prime (x) &= -4 + 12x - 12x^2 + 4x^3. \end{align*}]
An interesting fact is that these can be rewritten as
[F_1^\prime (x) = -2(1-x),\quad F_2^\prime(x) = -3(1-x)^2 \text{ and } F_3^\prime (x) = -4(1-x)^3.]
A pattern might jump out at you. Recognize that each of these functions is a composition, letting (g(x) = 1-x):
[\begin{align*} F_1(x) &= f_1(g(x)), \text{ where } f_1(x) = x^2, \\ F_2(x) &= f_2(g(x)), \text{ where } f_2(x) = x^3, \\ F_3(x) &= f_3(g(x)), \text{ where } f_3(x) = x^4. \end{align*}]
We'll come back to this example after giving the formal statements of the Chain Rule; for now, we are just illustrating a pattern.
Theorem 18: The Chain Rule
Let (y = f(u)) be a differentiable function of (u) and let (u = g(x)) be a differentiable function of (x). Then (y=f(g(x))) is a differentiable function of (x),and [y^prime = f^prime(g(x))cdot g^prime(x).]
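As a sanity check, the Chain Rule formula can be compared against a central-difference estimate of the derivative of the composition. A minimal Python sketch (the function names here are ours, not part of the text):

```python
import math

def composite(x):
    # y = f(g(x)) with f(u) = cos(u) and g(x) = x^2
    return math.cos(x ** 2)

def chain_rule_derivative(x):
    # f'(g(x)) * g'(x) = -sin(x^2) * 2x
    return -math.sin(x ** 2) * 2 * x

def central_difference(func, x, h=1e-6):
    # numerical estimate of func'(x), for comparison
    return (func(x + h) - func(x - h)) / (2 * h)

# the Chain Rule formula matches the numerical derivative
for x0 in (0.5, 1.0, 2.0):
    assert abs(chain_rule_derivative(x0) - central_difference(composite, x0)) < 1e-5
```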
Example 60: Using the Chain Rule
Use the Chain Rule to find the derivatives of the following functions, as given in Example 59.
Solution
Example 59 ended with the recognition that each of the given functions was actually a composition of functions. To avoid confusion, we ignore most of the subscripts here.
(F_1(x) = (1-x)^2):
We found that [y=(1-x)^2 = f(g(x)), \text{ where } f(x) = x^2 \text{ and } g(x) = 1-x.]
To find (y^prime), we apply the Chain Rule. We need (f^prime(x)=2x) and (g^prime(x)=-1.)
Part of the Chain Rule uses (f^prime(g(x))). This means substitute (g(x)) for (x) in the equation for (f^prime(x)). That is, (f^prime(g(x)) = 2(1-x)). Finishing out the Chain Rule we have [y^prime = f^prime(g(x))cdot g^prime(x) = 2(1-x)cdot (-1) = -2(1-x)= 2x-2.]
(F_2(x) = (1-x)^3):
Let (y = (1-x)^3 = f(g(x))),where (f(x) = x^3) and (g(x) = (1-x)). We have (f^prime(x) = 3x^2),so (f^prime(g(x)) = 3(1-x)^2). The Chain Rule then states [y^prime = f^prime(g(x))cdot g^prime (x) = 3(1-x)^2cdot(-1) = -3(1-x)^2.]
(F_3(x) = (1-x)^4):
Finally, when (y = (1-x)^4),we have (f(x)= x^4) and (g(x) = (1-x)). Thus (f^prime(x) = 4x^3) and (f^prime(g(x)) = 4(1-x)^3). Thus [y^prime = f^prime(g(x))cdot g^prime(x) = 4(1-x)^3cdot (-1) = -4(1-x)^3.]
Example 60 demonstrated a particular pattern: when (f(x)=x^n),then (y^prime =ncdot (g(x))^{n-1}cdot g^prime (x)). This is called the Generalized Power Rule.
Theorem 19: Generalized Power Rule
Let (g(x)) be a differentiable function and let (n \neq 0) be an integer. Then [\dfrac{d}{dx}\Big(g(x)^n\Big) = n\cdot \big(g(x)\big)^{n-1}\cdot g^\prime (x).]
This allows us to quickly find the derivative of functions like (y = (3x^2-5x+7+sin x)^{20}). While it may look intimidating, the Generalized Power Rule states that [y^prime = 20(3x^2-5x+7+sin x)^{19}cdot (6x-5+cos x).]
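This claim is easy to verify numerically against a finite difference; a sketch (sample point and tolerance chosen arbitrarily):

```python
import math

def y(x):
    return (3 * x**2 - 5 * x + 7 + math.sin(x)) ** 20

def y_prime(x):
    # Generalized Power Rule: n * g(x)^(n-1) * g'(x)
    inner = 3 * x**2 - 5 * x + 7 + math.sin(x)
    return 20 * inner ** 19 * (6 * x - 5 + math.cos(x))

x0 = 0.1
h = 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
assert abs(numeric - y_prime(x0)) / abs(y_prime(x0)) < 1e-8
```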
Treat the derivative-taking process step-by-step. In the example just given, first multiply by 20, then rewrite the inside of the parentheses, raising it all to the 19th power. Then think about the derivative of the expression inside the parentheses, and multiply by that.
We now consider more examples that employ the Chain Rule.
Example 61: Using the Chain Rule
Find the derivatives of the following functions:
1. (y = sin{2x})
2. (y= ln (4x^3-2x^2))
3. (y = e^{-x^2})
Solution
1. Consider (y = sin 2x). Recognize that this is a composition of functions, where (f(x) = sin x) and (g(x) = 2x). Thus [y^prime = f^prime(g(x))cdot g^prime(x) = cos (2x)cdot 2 = 2cos 2x.]
2. Recognize that (y = ln (4x^3-2x^2)) is the composition of (f(x) = ln x) and (g(x) = 4x^3-2x^2). Also, recall that [dfrac{d}{dx}Big(ln xBig) = dfrac{1}{x}.]This leads us to:[y^prime = dfrac{1}{4x^3-2x^2} cdot (12x^2-4x) = dfrac{12x^2-4x}{4x^3-2x^2}= dfrac{4x(3x-1)}{2x(2x^2-x)} = dfrac{2(3x-1)}{2x^2-x}.]
3. Recognize that (y = e^{-x^2}) is the composition of (f(x) = e^x) and (g(x) = -x^2). Remembering that (f^prime(x) = e^x),we have [y^prime = e^{-x^2}cdot (-2x) = (-2x)e^{-x^2}.]
Example 62: Using the Chain Rule to find a tangent line
Let (f(x) = cos x^2). Find the equation of the line tangent to the graph of (f) at (x=1).
Solution
The tangent line goes through the point ((1,f(1)) approx (1,0.54)) with slope (f^prime(1)). To find (f^prime),we need the Chain Rule.
(f^prime(x) = -sin(x^2) cdot(2x) = -2xsin x^2). Evaluated at (x=1),we have (f^prime(1) = -2sin 1approx -1.68). Thus the equation of the tangent line is [y = -1.68(x-1)+0.54 .]
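The point and slope in Example 62 can be reproduced in a few lines of Python (a sketch; the variable names are ours):

```python
import math

f = lambda x: math.cos(x ** 2)
f_prime = lambda x: -2 * x * math.sin(x ** 2)   # by the Chain Rule

x0 = 1
y0 = f(x0)                 # cos(1), about 0.54
slope = f_prime(x0)        # -2 sin(1), about -1.68
tangent = lambda x: slope * (x - x0) + y0

# the tangent line touches the curve at x0
assert abs(tangent(x0) - f(x0)) < 1e-12
```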
The tangent line is sketched along with (f) in Figure 2.17.
The Chain Rule is used often in taking derivatives. Because of this, one can become familiar with the basic process and learn patterns that facilitate finding derivatives quickly. For instance, [\dfrac{d}{dx}\Big(\ln(\text{anything})\Big) = \dfrac{1}{\text{anything}}\cdot (\text{anything})^\prime = \dfrac{(\text{anything})^\prime}{\text{anything}}.]
A concrete example of this is [dfrac{d}{dx}Big(ln(3x^{15}-cos x+e^x)Big) = dfrac{45x^{14}+sin x+e^x}{3x^{15}-cos x+e^x}.] While the derivative may look intimidating at first, look for the pattern. The denominator is the same as what was inside the natural log function; the numerator is simply its derivative.
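This concrete example can be spot-checked against a central difference; a sketch (sample point chosen so the log's argument is positive):

```python
import math

def f(x):
    return math.log(3 * x**15 - math.cos(x) + math.exp(x))

def f_prime(x):
    # the pattern: derivative of the inside over the inside
    inside = 3 * x**15 - math.cos(x) + math.exp(x)
    return (45 * x**14 + math.sin(x) + math.exp(x)) / inside

x0 = 1.0
h = 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(numeric - f_prime(x0)) < 1e-5
```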
This pattern recognition process can be applied to lots of functions. In general, instead of writing "anything'', we use (u) as a generic function of (x). We then say [dfrac{d}{dx}Big(ln uBig) = dfrac{u^prime}{u}.]
The following is a short list of how the Chain Rule can be quickly applied to familiar functions.
Of course, the Chain Rule can be applied in conjunction with any of the other rules we have already learned. We practice this next.
Example 63: Using the Product, Quotient and Chain Rules
Find the derivatives of the following functions.
1. (f(x) = x^5 sin{2x^3})
2. (f(x) = dfrac{5x^3}{e^{-x^2}}).
Solution
1. We must use the Product and Chain Rules. Do not think that you must be able to "see'' the whole answer immediately; rather, just proceed step-by-step. [f^\prime(x) = x^5\big(6x^2\cos 2x^3\big) + 5x^4\big(\sin 2x^3\big) = 6x^7\cos 2x^3+5x^4\sin 2x^3.]
2. We must employ the Quotient Rule along with the Chain Rule. Again, proceed step-by-step. [\begin{align*} f^\prime(x) = \dfrac{e^{-x^2}\big(15x^2\big) - 5x^3\big((-2x)e^{-x^2}\big)}{\big(e^{-x^2}\big)^2} &= \dfrac{e^{-x^2}\big(10x^4+15x^2\big)}{e^{-2x^2}} \\ &= e^{x^2}\big(10x^4+15x^2\big). \end{align*}]
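The second result of Example 63 can be spot-checked numerically; a sketch (point and tolerance chosen arbitrarily):

```python
import math

def f(x):
    return 5 * x ** 3 / math.exp(-x ** 2)

def f_prime(x):
    # Quotient Rule + Chain Rule, after simplification
    return math.exp(x ** 2) * (10 * x ** 4 + 15 * x ** 2)

x0 = 0.8
h = 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(numeric - f_prime(x0)) / abs(f_prime(x0)) < 1e-7
```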
A key to correctly working these problems is to break the problem down into smaller, more manageable pieces. For instance, when using the Product and Chain Rules together, just consider the first part of the Product Rule at first: (f(x)g^prime(x)). Just rewrite (f(x)),then find (g^prime(x)). Then move on to the (f^prime(x)g(x)) part. Don't attempt to figure out both parts at once.
Likewise, using the Quotient Rule, approach the numerator in two steps and handle the denominator after completing that. Only simplify afterward.
We can also employ the Chain Rule itself several times, as shown in the next example.
Example 64: Using the Chain Rule multiple times
Find the derivative of (y = \tan^5(6x^3-7x)).
Solution
Recognize that we have the (g(x)= \tan(6x^3-7x)) function "inside'' the (f(x)=x^5) function; that is, we have (y = \big(\tan(6x^3-7x)\big)^5). We begin using the Generalized Power Rule; in this first step, we do not fully compute the derivative. Rather, we are approaching this step-by-step.
[y^prime = 5ig( an(6x^3-7x)ig)^4cdot g^prime(x).]
We now find (g^prime(x)). We again need the Chain Rule; [g^prime(x) = sec^2(6x^3-7x)cdot(18x^2-7).]Combine this with what we found above to give
[\begin{align*} y^\prime &= 5\big(\tan(6x^3-7x)\big)^4\cdot\sec^2(6x^3-7x)\cdot(18x^2-7) \\ &= (90x^2-35)\sec^2(6x^3-7x)\tan^4(6x^3-7x). \end{align*}]
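Since the symbolic steps are easy to fumble, a numerical spot-check is reassuring; a sketch (the sample point avoids the tangent's asymptotes):

```python
import math

def y(x):
    return math.tan(6 * x**3 - 7 * x) ** 5

def y_prime(x):
    u = 6 * x**3 - 7 * x
    sec_sq = 1 / math.cos(u) ** 2            # sec^2(u)
    return (90 * x**2 - 35) * sec_sq * math.tan(u) ** 4

x0 = 0.3
h = 1e-7
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
assert abs(numeric - y_prime(x0)) / abs(y_prime(x0)) < 1e-5
```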
Frankly, this is a ridiculous function with no real practical value. It is very difficult to graph, as the tangent function has many vertical asymptotes and (6x^3-7x) grows very fast. The important thing to learn from this is that the derivative can be found. In fact, it is not "hard;'' one must take several simple steps and be careful to keep track of how to apply each of them.
It is a traditional mathematical exercise to find the derivatives of arbitrarily complicated functions just to demonstrate that it can be done. Just break everything down into smaller pieces.
Example 65: Using the Product, Quotient and Chain Rules
Find the derivative of ( f(x) = dfrac{xcos(x^{-2})-sin^2(e^{4x})}{ln(x^2+5x^4)}.)
Solution
This function likely has no practical use outside of demonstrating derivative skills. The answer is given below without simplification. It employs the Quotient Rule, the Product Rule, and the Chain Rule three times.
[f^\prime(x) = \dfrac{\Big(\ln(x^2+5x^4)\Big)\cdot\Big[\big(x\cdot(-\sin(x^{-2}))\cdot(-2x^{-3})+1\cdot \cos(x^{-2})\big)-2\sin(e^{4x})\cdot\cos(e^{4x})\cdot(4e^{4x})\Big]-\Big(x\cos(x^{-2})-\sin^2(e^{4x})\Big)\cdot\dfrac{2x+20x^3}{x^2+5x^4}}{\big(\ln(x^2+5x^4)\big)^2}.]
The reader is highly encouraged to look at each term and recognize why it is there. (I.e., the Quotient Rule is used; in the numerator, identify the "LOdHI'' term, etc.) This example demonstrates that derivatives can be computed systematically, no matter how arbitrarily complicated the function is.
The Chain Rule also has theoretic value. That is, it can be used to find the derivatives of functions that we have not yet learned as we do in the following example.
Example 66: The Chain Rule and exponential functions
Use the Chain Rule to find the derivative of (y= a^x) where (a>0), (a \neq 1) is constant.
Solution
We only know how to find the derivative of one exponential function: (y = e^x); this problem is asking us to find the derivative of functions such as (y = 2^x).
This can be accomplished by rewriting (a^x) in terms of (e). Recalling that (e^x) and (ln x) are inverse functions, we can write
[a = e^{\ln a} \quad \text{and so} \quad y = a^x = e^{\ln (a^x)}.]
By the exponent property of logarithms, we can "bring down'' the power to get
[y = a^x = e^{x \ln a}.]
The function is now the composition (y=f(g(x))),with (f(x) = e^x) and (g(x) = x(ln a)). Since (f^prime(x) = e^x) and (g^prime(x) = ln a), the Chain Rule gives
[y^\prime = e^{x \ln a} \cdot \ln a.]
Recall that the (e^{x(ln a)}) term on the right hand side is just (a^x),our original function. Thus, the derivative contains the original function itself. We have
[y^\prime = y \cdot \ln a = a^x\cdot \ln a.]
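This result is easy to test for, say, (a=2); a sketch (names are ours):

```python
import math

def power_and_derivative(a):
    # f(x) = a^x and its derivative f'(x) = ln(a) * a^x
    f = lambda x: a ** x
    fp = lambda x: math.log(a) * a ** x
    return f, fp

f, fp = power_and_derivative(2)
x0, h = 1.5, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(numeric - fp(x0)) < 1e-6
```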
The Chain Rule, coupled with the derivative rule of (e^x),allows us to find the derivatives of all exponential functions.
The previous example produced a result worthy of its own "box.''
Theorem 20: Derivatives of Exponential Functions
Let (f(x)=a^x), for (a>0, a \neq 1). Then (f) is differentiable for all real numbers and
[f^\prime(x) = \ln a\cdot a^x.]
## Alternate Chain Rule Notation
It is instructive to understand what the Chain Rule "looks like'' using "(dfrac{dy}{dx})'' notation instead of (y^prime) notation. Suppose that (y=f(u)) is a function of (u),where (u=g(x)) is a function of (x),as stated in Theorem 18. Then, through the composition (f circ g),we can think of (y) as a function of (x),as (y=f(g(x))). Thus the derivative of (y) with respect to (x) makes sense; we can talk about (dfrac{dy}{dx}.) This leads to an interesting progression of notation:
[\begin{align*}y^\prime &= f^\prime(g(x))\cdot g^\prime(x) \\ \dfrac{dy}{dx} &= y^\prime(u) \cdot u^\prime(x)\quad \text{(since } y=f(u) \text{ and } u=g(x)\text{)} \\ \dfrac{dy}{dx} &= \dfrac{dy}{du} \cdot \dfrac{du}{dx}\quad \text{(using ``fractional'' notation for the derivative)}\end{align*}]
Here the "fractional'' aspect of the derivative notation stands out. On the right hand side, it seems as though the "(du)'' terms cancel out, leaving [ dfrac{dy}{dx} = dfrac{dy}{dx}.]
It is important to realize that we are not canceling these terms; the derivative notation of (dfrac{dy}{dx}) is one symbol. It is equally important to realize that this notation was chosen precisely because of this behavior. It makes applying the Chain Rule easy with multiple variables. For instance,
[\dfrac{dy}{dt} = \dfrac{dy}{d\bigcirc} \cdot \dfrac{d\bigcirc}{d\triangle} \cdot \dfrac{d\triangle}{dt},]
where (\bigcirc) and (\triangle) are any variables you'd like to use.
One of the most common ways of "visualizing" the Chain Rule is to consider a set of gears, as shown in Figure 2.18. The gears have 36, 18, and 6 teeth, respectively. That means for every revolution of the (x) gear, the (u) gear revolves twice. That is, the rate at which the (u) gear makes a revolution is twice as fast as the rate at which the (x) gear makes a revolution. Using the terminology of calculus, the rate of (u)-change, with respect to (x),is (dfrac{du}{dx} = 2).
Likewise, every revolution of (u) causes 3 revolutions of (y): (dfrac{dy}{du} = 3). How does (y) change with respect to (x)? For each revolution of (x),(y) revolves 6 times; that is, [dfrac{dy}{dx} = dfrac{dy}{du}cdot dfrac{du}{dx} = 2cdot 3 = 6.]
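The gear picture is literally just multiplication of rates; a trivial sketch in code:

```python
du_dx = 2               # u gear: two revolutions per revolution of x
dy_du = 3               # y gear: three revolutions per revolution of u
dy_dx = dy_du * du_dx   # Chain Rule: the rates multiply
assert dy_dx == 6
```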
We can then extend the Chain Rule with more variables by adding more gears to the picture.
It is difficult to overstate the importance of the Chain Rule. So often the functions that we deal with are compositions of two or more functions, requiring us to use this rule to compute derivatives. It is often used in practice when actual functions are unknown. Rather, through measurement, we can calculate (dfrac{dy}{du}) and (dfrac{du}{dx}). With our knowledge of the Chain Rule, finding (dfrac{dy}{dx}) is straightforward.
In the next section, we use the Chain Rule to justify another differentiation technique. There are many curves that we can draw in the plane that fail the "vertical line test.'' For instance, consider (x^2+y^2=1),which describes the unit circle. We may still be interested in finding slopes of tangent lines to the circle at various points. The next section shows how we can find (dfrac{dy}{dx}) without first "solving for (y).'' While we can in this instance, in many other instances solving for (y) is impossible. In these situations, implicit differentiation is indispensable.
## The Chain Rule
Suppose we want to find the derivative of y=(x^2+3x+1)^2. We could hopefully multiply it out and then take the derivative with little difficulty. But what if, instead, it was y=(x^2+3x+1)^50? Would you want to apply the same method to this problem? Certainly not. Instead we need a method for dealing with composite functions, functions which are one function applied to another. For example, if we let u=f(x)=x^2+3x+1 and g(u)=u^2, then y=(x^2+3x+1)^2=g(f(x)). This is sometimes written as g ∘ f, read "g composite f."
Our goal is to find the derivative dy/dx based on our knowledge of the functions f and g. Now, we know that dy/du = g'(u) and du/dx = f'(x). Leibniz's differential notation suggests that perhaps derivatives can be treated as fractions, leading to the speculation that dy/dx = (dy/du)(du/dx). This leads to the (possible) chain rule: dy/dx = g'(f(x)) f'(x).
Let's apply this to our example and see if it works. First, we'll multiply the product out and then take the derivative. Then we'll apply the chain rule and see if the results match. Expanding, y = x^4 + 6x^3 + 11x^2 + 6x + 1, so y' = 4x^3 + 18x^2 + 22x + 6. By the chain rule, y' = g'(f(x)) f'(x) = 2(x^2+3x+1)(2x+3) = 4x^3 + 18x^2 + 22x + 6.
So, our rule checks out, at least for this example. It turns out that this rule holds for all composite functions, and is invaluable for taking derivatives.
This rule is called the chain rule because we use it to take derivatives of composites of functions by chaining together their derivatives. The chain rule can be thought of as taking the derivative of the outer function (applied to the inner function) and multiplying it by the derivative of the inner function.
The discussion given here is not by any means a proof and should not satisfy any reader. A proof of the chain rule can be found here. Please look at it.
Failure to apply the chain rule is probably the most common mistake in differential calculus. Remember that the chain rule applies to all composite functions.
## MATH 131: Applied Calculus I
Applied Calculus for Loyola University Chicago Custom (packaged with WileyPlus) by Deborah Hughes-Hallett, et al.
Chapter 1: Foundations For Calculus: Functions and Limits
1.1 Functions and Change
1.2 Exponential Functions
1.3 New Functions from Old
1.4 Logarithmic Functions
1.5 Trigonometric Functions
1.6 Powers, Polynomials, and Rational Functions
1.7 Introduction to Continuity
1.8 Limits
Chapter 2: Key Concept: The Derivative
2.1 How Do We Measure Speed?
2.2 The Derivative at a Point
2.3 The Derivative Function
2.4 Interpretations of the Derivative
2.5 The Second Derivative
Chapter 3: Short-Cuts to Differentiation
3.1 Powers and Polynomials
3.2 The Exponential Function
3.3 The Product and Quotient Rules
3.4 The Chain Rule
3.5 The Trigonometric Functions
3.6 The Chain Rule and Inverse Functions
Chapter 4: Using the Derivative
4.1 Using First and Second Derivatives
4.2 Optimization
4.3 Optimization and Modeling
4.4 Families of Functions and Modeling
4.5 Applications to Marginality
4.7 L’Hopital’s Rule, Growth, and Dominance
Chapter 5: Key Concept: The Definite Integral
5.1 How Do We Measure Distance Traveled?
5.2 The Definite Integral
5.3 The Fundamental Theorem and Interpretations
Chapter 6: Constructing Antiderivatives
6.1 Antiderivatives Graphically and Numerically
6.2 Constructing Antiderivatives Analytically
6.3 [Optional] Differential Equations and Motion
Chapter 1: Foundation For Calculus: Functions and Limits
1. Functions and Change: 1, 8, 13, 16, 23, 27, 31, 33, 53, 55, 70(No WP*)
2. Exponential Functions: 2, 5, 6, 8, 10, 16, 17, 31, 38, 39
3. New Functions From Old: 9, 13, 14, 28, 32, 43, 45, 50, 51, 52, 56, 61, 68, 19, 71
4. Logarithmic Functions: 2, 4, 6, 7, 12, 16, 19, 24, 26, 30, 36, 45, 47, 48
5. Trigonometric Functions: 6, 10, 12, 18, 20, 30, 31, 33, 62, 63, 68
6. Powers, Polynomials and Rational Functions: 3, 6, 8, 9, 12, 15, 17, 19, 47 (No WP), 50, 52, 54
7. Introduction to Limits and Continuity: 2, 3, 12 (No WP), 14 (No WP), 24 (No WP), 26 (No WP), 28 (No WP), 31 (No WP), 39 (No WP), 43 (No WP), 49 (No WP)
8. Extending the Idea of a Limit: 4, 6, 31, 33, 38, 45, 52 (No WP)
Chapter 2: Key Concept: The Derivative
2.1 How Do We Measure Speed?: 1, 3, 5, 7, 10, 15, 16, 17, 21, 22, 29, 31, 33
2.2 The Derivative at a Point:1, 3, 4, 8, 10, 19, 22, 23(No WP), 32, 37, 47, 56, 60, 63
2.3 The Derivative Function: 1, 2, 5, 13, 20, 22, 33, 49, 50, 52
2.4 Interpretations of the Derivative: 2, 3, 9, 12(No WP), 17, 19, 24 (No WP), 27, 33, 38, 51, 54 (No WP)
2.5 The Second Derivative: 2, 3, 8, 9, 12, 14, 15, 25, 37, 38, 41
Chapter 3: Short-Cuts to Differentiation
3.1 Powers and Polynomials: 6, 10, 11, 14, 18, 23, 25, 28, 30, 32, 35, 38, 48, 58, 70, 75, 99
3.2 The Exponential Function: 2, 4, 6, 8, 10, 12, 13, 17, 24, 42, 44, 46, 49, 58
3.3 The Product and Quotient Rules: 4, 6, 7, 10, 12, 16, 19, 20, 24, 28, 31, 43, 47, 52, 90
3.4 The Chain Rule: 2, 4, 7, 11, 17, 18, 28, 33, 43, 45, 48, 58, 60, 61, 67, 70, 73, 77, 86
3.5 The Trigonometric Functions: 4, 8, 10, 12, 16, 19, 22, 24, 26, 30, 36, 38, 45, 61
3.6 The Chain Rule and Inverse Functions: 1, 9, 12, 13, 17, 22, 25, 26, 28, 30, 32, 35, 38, 39, 41, 50
Chapter 4: Using the Derivative
4.1 Using First and Second Derivatives: 1, 5, 18, 25, 32, 34, 35, 36, 37, 40, 52, 53, 59
4.2 Optimization: 4, 6, 7, 8, 10, 13, 18, 19, 30, 37, 39, 40, 43
4.3 Optimization and Modeling: 5, 6, 7, 9, 11, 14, 18, 22, 25, 27, 38, 39, 45
4.4 Families of Functions and Modeling: 3, 4, 16, 25, 26, 47, 49, 50, 51, 57, 63
4.5 Applications to Marginality: 1, 4, 7, 12, 13, 15, 16, 18
4.7 L’Hopital’s Rule, Growth, and Dominance: 6, 11, 35, 41, 42, 48, 53, 58, 59, 65, 67, 76, 87
Chapter 5: Key Concept: The Definite Integral
5.1 How Do We Measure Distance Traveled?: 1, 2, 4, 8, 14, 15, 23, 25, 28
5.2 The Definite Integral: 4, 8, 12, 24, 29, 30, 32, 36
5.3 The Fundamental Theorem and Interpretations: 1, 2, 3, 4, 5, 7, 9, 10, 12, 14, 16, 18, 22, 30
5.4 Theorems About Definite Integrals: 1, 2, 3, 4, 6, 7, 8, 11, 14, 16, 19, 25, 26, 28, 30, 33
Chapter 6: Constructing Antiderivatives
6.1 Antiderivatives Graphically and Numerically: 3, 6, 13, 15, 20, 25, 33
6.2 Constructing Antiderivatives Analytically: 7, 9, 10, 11, 12, 15, 18, 20, 21, 26, 28, 30, 31, 35, 41, 44, 50, 55, 56, 57, 58, 60, 65
* No WP means the problem is not in WileyPlus and should be completed from the textbook.
## How to Use the Chain Rule for Derivatives
Suppose $h(x) = \sin(x^2)$. Find $h'(x)$.
##### Example 2
Suppose $f(x) = \sqrt{x^3+2}$. Find $f'(x)$.
Write the square-root in exponent form.
Use the power rule and the chain rule.
Step 3
(Optional) Write the derivative in radical form.
$\begin{aligned} f'(x) &= \frac{3}{2} x^2 (x^3 + 2)^{-1/2} \\ &= \frac{3}{2} x^2 \cdot \frac{1}{(x^3+2)^{1/2}} \\ &= \frac{3x^2}{2\sqrt{x^3+2}} \end{aligned}$
##### Example 3
Use the chain rule to find $\frac{d}{dx}\left(\sec x\right)$.
Rewrite the function in terms of the cosine.
$\sec x = \frac{1}{\cos x} = \left(\cos x\right)^{-1}$
Differentiate using the chain rule.
$\begin{aligned} \frac{d}{dx}\left(\sec x\right) &= \frac{d}{dx}\left[(\cos x)^{-1}\right] \\ &= -1(\cos x)^{-2}\cdot (-\sin x) \\ &= \frac{\sin x}{\cos^2 x} \end{aligned}$
Simplify by separating into two fractions and using trigonometric identities.
$\begin{aligned} \frac{d}{dx}\left(\sec x\right) &= \frac{\sin x}{\cos^2 x} \\ &= \frac{1}{\cos x} \cdot \frac{\sin x}{\cos x} \\ &= \sec x \tan x \end{aligned}$
$\frac{d}{dx}\left(\sec x\right) = \sec x \tan x$
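A quick numerical check of this identity; a sketch (point and tolerance chosen arbitrarily):

```python
import math

sec = lambda x: 1 / math.cos(x)
sec_prime = lambda x: sec(x) * math.tan(x)   # the chain-rule result above

x0, h = 0.7, 1e-6
numeric = (sec(x0 + h) - sec(x0 - h)) / (2 * h)
assert abs(numeric - sec_prime(x0)) < 1e-6
```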
##### Example 4
Suppose $f(x) = e^{-x^2}\sin(x^3)$. Notice that this function will require both the product rule and the chain rule.
Identify the factors in the function.
Differentiate using the product rule.
Step 3
(Optional) Factor the derivative.
$\begin{aligned} f'(x) &= -2xe^{-x^2}\sin(x^3) + e^{-x^2}\cdot 3x^2\cos(x^3) \\ &= e^{-x^2}\left(-2x\sin(x^3) + 3x^2\cos(x^3)\right) \\ &= xe^{-x^2}\left(3x\cos(x^3)-2\sin(x^3)\right) \end{aligned}$
$f'(x) = xe^{-x^2}\left(3x\cos(x^3)-2\sin(x^3)\right)$.
##### Example 5
Suppose $f(x) = \sqrt{\cos(5x+1)}$. Notice that $f$ is a composition of three functions. This means we will need to use the chain rule twice.
Write the square-root as an exponent.
Use the power rule and the chain rule for the square-root.
Find the derivative of the cosine.
$f'(x) = \frac{1}{2}[\cos(5x + 1)]^{-1/2}\cdot \left(-\sin(5x+1)\right)\cdot \frac{d}{dx}(5x+1)$
Find the derivative of the linear function.
$f'(x) = \frac{1}{2}[\cos(5x + 1)]^{-1/2}\cdot \left(-\sin(5x+1)\right)\cdot 5$
$\begin{aligned} f'(x) &= \frac{1}{2}[\cos(5x + 1)]^{-1/2}\cdot \left(-\sin(5x+1)\right)\cdot 5 \\ &= -\frac{5}{2}[\cos(5x + 1)]^{-1/2}\sin(5x+1) \\ &= -\frac{5\sin(5x+1)}{2\sqrt{\cos(5x+1)}} \end{aligned}$
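The final formula can be checked at a point where the cosine inside the square root is positive; a sketch:

```python
import math

def f(x):
    return math.sqrt(math.cos(5 * x + 1))

def f_prime(x):
    # result of applying the chain rule twice
    return -5 * math.sin(5 * x + 1) / (2 * math.sqrt(math.cos(5 * x + 1)))

x0, h = 0.0, 1e-6    # cos(1) > 0, so f is defined near x0
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(numeric - f_prime(x0)) < 1e-5
```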
## Calculus I: Activities
An additional property of derivatives is the chain rule, which addresses composition of functions. In addition to being useful for calculating derivatives, it enables a variety of applications.
### Subsection 2.4.1 Illustrating the Chain Rule
###### Checkpoint 2.4.1 . Illustration of Chain Rule.
Suppose for every shovel full of coal a steam engine produces 1000 psi, and for every 2000 psi the train travels one mile.
1. What is the rate of psi to coal?
2. What is the rate of miles to psi?
3. What is the rate of coal to miles?
4. Write a function that outputs psi given number of shovels of coal.
5. Calculate the derivative of this function.
6. Write a function that outputs miles given psi.
7. Calculate the derivative of this function.
8. Write a function that outputs miles given number of shovels of coal using the previous two functions.
9. Calculate the derivative of this function.
1. What is the rate of psi to coal? (1000 \text{ psi}/1 \text{ shovel}.)
2. What is the rate of miles to psi? (1 \text{ mile}/2000 \text{ psi}.)
3. What is the rate of coal to miles? (\frac{1000 \text{ psi}}{1 \text{ shovel}} \cdot \frac{1 \text{ mile}}{2000 \text{ psi}} = \frac{1 \text{ mile}}{2 \text{ shovels}}.)
4. Write a function that outputs psi given number of shovels of coal. (\Psi(c \text{ shovels}) = \frac{1000 \text{ psi}}{1 \text{ shovel}} \, c \text{ shovels}.)
5. Calculate the derivative of this function. (\frac{d\Psi}{dc} = \frac{1000 \text{ psi}}{\text{shovel}}.)
6. Write a function that outputs miles given psi. (s(p \text{ psi}) = \frac{1 \text{ mile}}{2000 \text{ psi}} \, p \text{ psi}.)
7. Calculate the derivative of this function. (\frac{ds}{dp} = \frac{1 \text{ mile}}{2000 \text{ psi}}.)
8. Write a function that outputs miles given number of shovels of coal using the previous two functions. (s(\Psi(c \text{ shovels})) = \frac{1 \text{ mile}}{2000 \text{ psi}} \left( \frac{1000 \text{ psi}}{1 \text{ shovel}} \, c \text{ shovels} \right).)
9. Calculate the derivative of this function. (\frac{ds}{dc} = \frac{1}{2000} \cdot \frac{1000}{1} = \frac{1}{2}.)
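The checkpoint's chain-rule arithmetic can be mirrored in code; a sketch (names are ours):

```python
psi_per_shovel = 1000        # dPsi/dc: psi produced per shovel of coal
miles_per_psi = 1 / 2000     # ds/dp: miles traveled per psi

# Chain Rule: ds/dc = ds/dp * dp/dc
miles_per_shovel = miles_per_psi * psi_per_shovel
assert abs(miles_per_shovel - 0.5) < 1e-12   # one mile per two shovels
```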
### Subsection 2.4.2 Property and Example
###### Theorem 2.4.2 . Chain Rule.
If (g) is differentiable at (x) and (f) is differentiable at (g(x)), then (h=f \circ g) is differentiable at (x) and [h'(x) = f'(g(x))\, g'(x).]
###### Example 2.4.3 . Understanding chain rule execution.
Calculate the derivative of (h(x)=sqrt.)
###### Example 2.4.4 . Using the chain rule.
Calculate the derivative of (h(x)=sqrt.)
### Subsection 2.4.3 Implicit Differentiation
Sometimes we know an equation involving a function before we know what the function is. We can still calculate the derivative of this unknown function.
###### Example 2.4.5 . Using Implicit Differentiation.
Calculate (f'(x)) given (f(x)^2+x^2 = 2x+3.)
###### Checkpoint 2.4.6 .
Note in Example 2.4.5 we can calculate the function from the first step. Do this then calculate the derivative. Compare this to the result above.
###### Example 2.4.7 . Using Implicit Differentiation.
Calculate (y') given (y) is a function of (x) and (y^2-5y+6=x.)
### Subsection 2.4.4 Related Rates
Implicit differentiation is needed in some applications known as related rates. The name will make sense after considering examples.
###### Example 2.4.8 . Related Rate: Volume and Radius.
Air is being pumped into a spherical balloon so that its volume increases at a rate of 100 (\text{cm}^3/\text{sec}). How fast is the radius of the balloon increasing when the diameter is 50 cm?
First, we need to look up a formula for the volume of a sphere. It is (V=\frac{4}{3}\pi r^3.) Next note that we are told the change in volume is 100. Change in volume means we know the derivative of (V) with respect to time. We also note that time is not an explicit variable (i.e., there is no variable (t)). Thus to use this formula we need to differentiate it.
Differentiating with respect to time gives [V' = 4\pi r^2 \cdot r'.] Note we need the (r') at the end because the radius is also a function of time, thus we use implicit differentiation. Finally we can plug in the values we were given. Note (r=25) cm because the diameter is 50 cm. Then (100 = 4\pi (25)^2 r'), so (r' = \frac{1}{25\pi}) cm/sec.
Notice that in Example 2.4.8 the rate of change of the radius is related to the rate of change of the volume. This is the origin of the name related rates.
## The Chain Rule
One of the most handy tools when computing derivatives is the chain rule, but it can be easy to get lost in the process of applying the rule. Let see if we can make the chain rule a bit easier and look at a bunch of examples.
Given differentiable functions (f) and (g), the derivative of the composition is given by [\frac{d}{dx} f(g(x)) = f'(g(x)) \cdot g'(x).] Let's rewrite this by calling (f) the outside function and (g) the inside function, so the chain rule is: the derivative of the outside, evaluated at the inside, times the derivative of the inside.
Steps to using the chain rule:
1. Identify the outside function and the inside function.
2. Find the derivatives of the outside function and the inside function.
3. Use the above formula to get the derivative we want.
1. The chain rule is used when there is a function inside of another function.
2. The chain rule says that the derivative works its way from the outside in.
3. The chain rule says to take the derivative of the outermost function leaving the inside alone. Then multiply by the derivative of the inside.
4. Sometimes we must use the chain rule along with other rules like the product or quotient rules.
Example 1. Differentiate
Example 2. Differentiate
Example 3. Which of the functions below do you think need the chain rule.
1. Chain rule!
2. Chain rule!
3. You don’t need the chain rule here. You can either use the product rule, or distribute and then use the power rule.
4. You could distribute and then use the power rule, but that would take forever. Instead use the chain rule.
5. Chain rule!
6. Chain rule!
7. You don’t need the chain rule here. Use the product rule.
8. Chain rule!
Find the derivatives of the following function using the steps above.
The Chain Rule: Part 2
The chain rule can be used in addition to other rule. It will help you to remember the following phrase for using the chain rule:
Take the derivative of the outside function, leaving the inside alone, then multiply by the derivative of the inside.
## Exercise 3
Calculate the derivative of (h(x) = \cos(2x+1)\sin(x^2+3x)).
Using the Product Rule, we have
For the two remaining derivatives, we need to use the Chain Rule.
So, using the Chain Rule, we have
[h'(x) = \cos(2x+1)\cos(x^2+3x)\cdot (x^2+3x)' - \sin(2x+1)\cdot (2x+1)'\sin(x^2+3x) = \cos(2x+1)\cos(x^2+3x)(2x+3) - 2\sin(2x+1)\sin(x^2+3x).]
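This answer can be confirmed against a central-difference estimate; a sketch (point and tolerance chosen arbitrarily):

```python
import math

def h(x):
    return math.cos(2 * x + 1) * math.sin(x ** 2 + 3 * x)

def h_prime(x):
    # Product Rule plus two applications of the Chain Rule
    return (math.cos(2 * x + 1) * math.cos(x ** 2 + 3 * x) * (2 * x + 3)
            - 2 * math.sin(2 * x + 1) * math.sin(x ** 2 + 3 * x))

x0, step = 0.4, 1e-6
numeric = (h(x0 + step) - h(x0 - step)) / (2 * step)
assert abs(numeric - h_prime(x0)) < 1e-5
```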
## Symbolab Blog
In the previous posts we covered the basic derivative rules, trigonometric functions, logarithms and exponents. But we are still missing the most important rule dealing with compound functions: the chain rule.
Why is it so important? Because most of the functions you will have to differentiate, and later integrate, are most likely compound. For example sin(2x) is the composition of f(x)=sin(x) and g(x)=2x, and √(x²-3x) is the composition of f(x)=√x and g(x)=x²-3x.
The chain rule formula is as follows: (f(g(x)))’=f’(g(x)) *g’(x)
That is, the derivative of the composition of two functions equals the derivative of the outer function times the derivative of the inner function
More complex examples involve multiple applications of the chain rule. With the chain rule in hand, you should be able to differentiate almost any function. There are some advanced topics still to cover, including inverse trig functions, implicit differentiation, higher-order derivatives, and partial derivatives, but that's for later.
## Problems
### Basic
Questions involving the chain rule will appear on homework, at least one Term Test and on the Final Exam. Such questions may also involve additional material that we have not yet studied, such as higher-order derivatives. You will also see chain rule in MAT 244 (Ordinary Differential Equations) and APM 346 (Partial Differential Equations). If the questions here do not give you enough practice, you can easily make up additional questions of a similar character. You can also find questions of this sort in any book on multivariable calculus.
Suppose that $f:\mathbb{R}^3\to\mathbb{R}$ is of class $C^1$, and consider the function $\phi:\mathbb{R}^2\to\mathbb{R}$ defined by $$\phi(x,y) = f(x^2-y,\ xy,\ x\cos y).$$ Express $\partial_x\phi$ and $\partial_y\phi$ in terms of $x,y$ and partial derivatives of $f$.
Suppose that $f:\mathbb{R}^2\to\mathbb{R}$ is of class $C^1$, and consider the function $\phi:\mathbb{R}^3\to\mathbb{R}$ defined by $$\phi(x,y,z) = f(x^2-yz,\ xy+\cos z).$$ Express the partial derivatives of $\phi$ with respect to $x,y,z$ in terms of $x,y,z$ and partial derivatives of $f$.
Suppose that $f:\mathbb{R}^2\to\mathbb{R}$ is of class $C^1$. Let $S = \{(r,s)\in \mathbb{R}^2 : s\neq 0\}$, and for $(r,s)\in S$, define $\phi(r,s) = f(rs, r/s)$. Find formulas for $\partial_r\phi$ and $\partial_s\phi$ in terms of $r,s$ and partial derivatives of $f$.
Suppose that $f:\mathbb{R}^2\to\mathbb{R}$ is of class $C^1$. Let $S = \{(x,y,z)\in \mathbb{R}^3 : z\neq 0\}$, and for $(x,y,z)\in S$, define $\phi(x,y,z) = f(xy, y/z)$. Find formulas for the partial derivatives of $\phi$ in terms of $x,y,z$ and partial derivatives of $f$.
1. Use the chain rule to find relations between different partial derivatives of a function. For example:
Suppose that $f:\mathbb{R}\to\mathbb{R}$ is of class $C^1$, and that $u = f(x^2+y^2+z^2)$. Prove that $$x\,\partial_y u - y\,\partial_x u = 0.$$ Suppose that $f:\mathbb{R}^2\to\mathbb{R}$ is of class $C^1$, and that $u = f(x^2+y^2+z^2,\ y+z)$. Prove that $$(y-z)\,\partial_x u - x\,\partial_y u + x\,\partial_z u = 0.$$
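Identities like these can be checked numerically before attempting a proof; a minimal sketch in plain Python (the concrete choice $f=\sin$ and the test point are assumptions for illustration only):

```python
import math

def partial(fn, point, i, h=1e-6):
    # Central-difference partial derivative of fn at point, in coordinate i.
    p_plus, p_minus = list(point), list(point)
    p_plus[i] += h
    p_minus[i] -= h
    return (fn(*p_plus) - fn(*p_minus)) / (2 * h)

# u(x, y, z) = f(x^2 + y^2 + z^2) for a concrete C^1 function f = sin.
u = lambda x, y, z: math.sin(x**2 + y**2 + z**2)

p = (0.3, -1.2, 0.8)
lhs = p[0] * partial(u, p, 1) - p[1] * partial(u, p, 0)  # x u_y - y u_x
assert abs(lhs) < 1e-5
```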
Find the tangent plane to the set $\ldots$ at the point $\mathbf a = \ldots$. For example:
1. Let $q:\mathbb{R}^n\to\mathbb{R}$ be the (quadratic) function defined by $q(\mathbf x) = |\mathbf x|^2$. Determine $\nabla q$ (either by differentiating, or by remembering material from one of the tutorials).
2. Suppose that $\mathbf x:\mathbb{R}\to\mathbb{R}^3$ is a function that describes the trajectory of a particle. Thus $\mathbf x(t)$ is the particle’s position at time $t$.
• If $\mathbf x$ is differentiable, then we will write $\mathbf v(t) = \mathbf x'(t)$, and we say that $\mathbf v(t)$ is the velocity vector at time $t$.
• Similarly, if $\mathbf v$ is differentiable, then we will write $\mathbf a(t) = \mathbf v'(t)$, and we say that $\mathbf a(t)$ is the acceleration vector at time $t$.
• We also say that $|\mathbf v(t)|$ is the speed at time $t$.
Prove that the speed is constant if and only if $\mathbf a(t)\cdot\mathbf v(t) = 0$ for all $t$.
For the next three exercises, let $M^{n\times n}$ denote the space of $n\times n$ matrices. We write a typical element of $M^{n\times n}$ as a matrix $X$ with entries: $$X = \begin{pmatrix} x_{11} & \cdots & x_{1n}\\ \vdots & \ddots & \vdots\\ x_{n1} & \cdots & x_{nn} \end{pmatrix}$$ Define the function $\det:M^{n\times n}\to\mathbb{R}$ by saying that $\det(X)$ is the determinant of the matrix. We can view this as a function of the variables $x_{11},\ldots, x_{nn}$.
Now consider $n\times n$ matrices for an arbitrary positive integer $n$. Let $I$ denote the $n\times n$ identity matrix. Thus, in terms of the variables $(x_{ij})$, $I$ corresponds to $$x_{ij}=\begin{cases}1& \text{ if } i=j\\ 0& \text{ if } i\neq j\end{cases}$$ For every $i$ and $j$, compute $$\frac{\partial}{\partial x_{ij}} \det(I).$$ This means: the derivative of the determinant function, evaluated at the identity matrix.
Hint: there are two cases, $i=j$ and $i\neq j$.
1. Now suppose that $X(t)$ is a “differentiable curve in the space of matrices”, in other words, that $$X(t) = \begin{pmatrix}x_{11}(t) & \cdots & x_{1n}(t)\\ \vdots & \ddots & \vdots\\ x_{n1}(t) & \cdots & x_{nn}(t)\end{pmatrix}$$ where $x_{ij}(t)$ is a differentiable function of $t\in\mathbb{R}$, for all $i,j$. Also suppose that $X(0) = I$.
Use the chain rule and the above exercise to find a formula for $\left.\frac{d}{dt}\det(X(t))\right|_{t=0}$ in terms of $x_{ij}'(0)$, for $i,j=1,\ldots, n$.
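The expected answer here is that the derivative picks out the diagonal entries: $\left.\frac{d}{dt}\det X(t)\right|_{t=0} = \sum_i x_{ii}'(0)$, the trace of $X'(0)$. A quick numerical check for the $2\times 2$ case in plain Python (the matrix A is an arbitrary choice):

```python
def det2(m):
    # Determinant of a 2x2 matrix.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Curve X(t) = I + t*A, so X(0) = I and X'(0) = A.
A = [[0.5, -1.3], [2.0, 0.25]]
def X(t):
    return [[(1.0 if i == j else 0.0) + t * A[i][j] for j in range(2)]
            for i in range(2)]

h = 1e-6
deriv = (det2(X(h)) - det2(X(-h))) / (2 * h)
assert abs(deriv - (A[0][0] + A[1][1])) < 1e-6  # equals the trace of A
```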
1. Here we sketch a proof of the Chain Rule that may be a little simpler than the proof presented above. To simplify the set-up, let’s assume that $\mathbf g:\mathbb{R}\to\mathbb{R}^n$ and $f:\mathbb{R}^n\to\mathbb{R}$ are both functions of class $C^1$. Define $\phi = f\circ \mathbf g$. Thus $\phi$ is a function $\mathbb{R}\to\mathbb{R}$. Your goal is to compute its derivative at a point $t\in\mathbb{R}$. To simplify still further, let’s assume that $n=2$. Let’s write $\mathbf g(t) = (x(t), y(t))$. Then $$\begin{aligned} \phi(t+h)-\phi(t) &= f(\mathbf g(t+h)) - f(\mathbf g(t))\\ &= f( x(t+h), y(t+h)) - f(x(t),y(t))\\ &= [f( x(t+h), y(t+h)) - f(x(t+h),y(t))]\\ &\qquad\qquad\qquad + [f(x(t+h),y(t)) -f(x(t),y(t))]. \end{aligned}$$ Starting from the above, mimic the proof of Theorem 3 in Section 2.1 to show that $$\phi'(t) = \lim_{h\to 0}\frac 1 h\big(\phi(t+h)-\phi(t)\big) \text{ exists and equals } \frac{\partial f}{\partial x}\frac{dx}{dt}+ \frac{\partial f}{\partial y}\frac{dy}{dt},$$ where the partial derivatives of $f$ on the right-hand side are evaluated at $(x(t),y(t)) = \mathbf g(t)$.
http://www.darwinproject.ac.uk/letter/DCP-LETT-2123.xml | # To John Lubbock 14 [July 1857]1
Down.—
14th
My dear Lubbock
You have done me the greatest possible service in helping me to clarify my Brains. If I am as muzzy on all subjects as I am on proportions & chance,—what a Book I shall produce!—
I have divided N. Zealand Flora as you suggested. There are 339 species in genera of 4 & upwards & 323 in genera of 3 & less. The 339 species have 51 species presenting one or more varieties—2 The 323 species have only 37: proportionally (as $339:323 :: 51:48.5$) they ought to have had $48\frac{1}{2}$ species presenting vars.— So that the case goes as I want it, but not strong enough, without it be general, for me to have much confidence in.
I am quite convinced yours is the right way; I had thought of it, but shd never have done it, had it not been for my most fortunate conversation with you.
I am quite shocked to find how easily I am muddled, for I had before thought over the subject much, & concluded my way was fair. It is dreadfully erroneous. What a disgraceful blunder you have saved me from. I heartily thank you—3
Ever yours | C. Darwin
It is enough to make me tear up all my M.S. & give up in despair.—
It will take me several weeks to go over all my materials. But oh if you knew how thankful I am to you.—
## Footnotes
Dated by the relationship to the letter to J. D. Hooker, 1 August [1857].
CD’s calculations on J. D. Hooker 1853–5 are in DAR 16.2: 239a–46.
Since 1855, CD had been engaged in recording the incidence of varieties in various botanical and zoological catalogues. He wished to show that varieties were most frequent in genera that also contained a large number of species; further calculations aimed to demonstrate that these large genera were also geographically widespread and their component species individually abundant. His calculations consistently gave him the correlations he hoped for, but he failed to notice that his results were the entirely artificial consequence of a faulty method that would always have indicated an apparent positive correlation between genus size and any chosen characteristic. CD’s manuscripts (DAR 15.2, 16.1, and 16.2) show that in July 1857 he changed his method of calculation. Under Lubbock’s guidance, he began to compare the proportion of variable species in large genera with the proportion in small genera using a valid method like the one described in the letter. See J. Browne 1980.
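The comparison described in the letter is simple to reproduce (a trivial sketch; the counts are the ones Darwin quotes):

```python
# Genera with 4+ species: 339 species, 51 presenting varieties.
# Genera with 3 or fewer species: 323 species, 37 presenting varieties.
large_rate = 51 / 339
small_rate = 37 / 323

# If small genera varied at the large-genus rate, Darwin's expected count,
# his "48 1/2", against the 37 actually observed:
expected = large_rate * 323
assert abs(expected - 48.5) < 0.1
assert small_rate < large_rate  # the direction Darwin hoped to find
```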
## Summary
Thanks JL for saving him from "a disgraceful blunder". Following their conversation he has divided the New Zealand flora as JL suggested and finds genera with four or more species are more variable than those with three or less. It will take several weeks to go back over all his material.
## Letter details
Letter no.
DCP-LETT-2123
From
Darwin, C. R.
To
Lubbock, John
Sent from
Down
Source of text
DAR 263: 18 (EH88206467)
Physical description
4pp
https://bioinformatics.stackexchange.com/questions/14561/cif-pdb-conversion-with-python/14566 | # .cif .pdb conversion with python
Can you please give me advice on how to convert .cif files into .pdb, preferably using python?
Thanks in advance! Best, Balint Biro
• Its certainly doable and PyMol will also do this. A number of site members hold this expertise. What I think is that the format you are downloading in is cif rather than what you want which is pdb. If that is the case please modify your question – M__ Oct 15 '20 at 16:13
• Dear Michael,yes, that is the case but I cant see how should I modify my question... – Balint Biro Oct 15 '20 at 22:07
• Using the edit button :-) – Kamil S Jaron Oct 16 '20 at 10:20
As @Michael said, PyMOL can do this. PyMOL is not only an app, but also a Python package (installed via conda, not via the regular download; you can have both).
import pymol2

# File names here are placeholders; cmd.load infers the format from the
# .cif extension, and cmd.save writes .pdb the same way.
with pymol2.PyMOL() as pymol:
    pymol.cmd.load('input.cif', 'structure')
    pymol.cmd.save('output.pdb', selection='structure')
• Hi @BalintBiro please upvote or mark 'accepted'. This is the answer. You need to define the location of the infile before the with statement starts, thats why you have the error. You need a certain degree of coding proficiency. – M__ Oct 15 '20 at 22:32
https://walkccc.me/LeetCode/problems/1519/ | 1519. Number of Nodes in the Sub-Tree With the Same Label
• Time: $O(n)$
• Space: $O(n)$
```cpp
class Solution {
 public:
  vector<int> countSubTrees(int n, vector<vector<int>>& edges, string labels) {
    vector<int> ans(n);
    vector<vector<int>> graph(n);
    for (const vector<int>& e : edges) {
      const int u = e[0];
      const int v = e[1];
      graph[u].push_back(v);
      graph[v].push_back(u);
    }
    dfs(graph, 0, -1, labels, ans);
    return ans;
  }

 private:
  vector<int> dfs(const vector<vector<int>>& graph, int u, int parent,
                  const string& labels, vector<int>& ans) {
    vector<int> count(26);  // count of letters down from this u
    for (const int v : graph[u]) {
      if (v == parent)
        continue;
      vector<int> childCount = dfs(graph, v, u, labels, ans);
      for (int i = 0; i < 26; ++i)
        count[i] += childCount[i];
    }
    ans[u] = ++count[labels[u] - 'a'];  // the u itself
    return count;
  }
};
```

```java
class Solution {
  public int[] countSubTrees(int n, int[][] edges, String labels) {
    int[] ans = new int[n];
    List<Integer>[] graph = new List[n];
    for (int i = 0; i < n; ++i)
      graph[i] = new ArrayList<>();
    for (int[] e : edges) {
      final int u = e[0];
      final int v = e[1];
      graph[u].add(v);
      graph[v].add(u);
    }
    dfs(graph, 0, -1, labels, ans);
    return ans;
  }

  private int[] dfs(List<Integer>[] graph, int u, int parent, final String labels, int[] ans) {
    int[] count = new int[26]; // count of letters down from this u
    for (final int v : graph[u]) {
      if (v == parent)
        continue;
      int[] childCount = dfs(graph, v, u, labels, ans);
      for (int i = 0; i < 26; ++i)
        count[i] += childCount[i];
    }
    ans[u] = ++count[labels.charAt(u) - 'a']; // the u itself
    return count;
  }
}
```
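For quick experimentation outside the judge, the same post-order DFS can be sketched in Python (the function name and structure are my own port, not LeetCode's reference code):

```python
from collections import defaultdict

def count_sub_trees(n, edges, labels):
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)
    ans = [0] * n

    def dfs(u, parent):
        count = [0] * 26  # letter counts in the subtree rooted at u
        for v in graph[u]:
            if v == parent:
                continue
            child = dfs(v, u)
            for i in range(26):
                count[i] += child[i]
        count[ord(labels[u]) - ord('a')] += 1  # the node u itself
        ans[u] = count[ord(labels[u]) - ord('a')]
        return count

    dfs(0, -1)
    return ans
```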
http://pyke.keplerscience.org/api/kepextract.html | # kepextract: create a light curve from a target pixel file by summing user-selected pixels¶
pyke.kepextract.kepextract(infile, outfile=None, bitmask=1114543, maskfile='ALL', bkg=False, psfcentroid=False, overwrite=False, verbose=False, logfile='kepextract.log')
kepextract – create a light curve from a target pixel file by summing user-selected pixels
kepextract calculates simple aperture photometry, from a target pixel file, for a user-supplied set of pixels. The Kepler pipeline sums a specific set of pixels to produce the standard light curves delivered to users. Termed the optimal aperture, the default pixel set is designed to maximize the signal-to-noise ratio of the resulting light curve, optimizing for transit detection. This tool provides users with a straightforward capability to alter the summed pixel set. Applications include:
• Use of all pixels in the aperture
• The Kepler pipeline does not produce a light curve for sources observed with custom or dedicated pixel masks. The user can create a light curve for these sources using kepextract.
• Construction of pixel light curves, in which the time series for a single pixel can be examined.
• Light curves for extended sources which may be poorly sampled by the optimal aperture.
$ kepextract kplr008256049-2010174085026_lpd-targ.fits --maskfile ALL

One can further plot the resulting light curve by doing

import matplotlib.pyplot as plt
from astropy.io import fits
f = fits.open('outlc.fits')
plt.plot(f[1].data['TIME'], f[1].data['SAP_FLUX'])

or

$ kepdraw outlc.fits
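The core operation kepextract performs (summing a fixed set of pixels at every cadence) is tiny in itself; a schematic version in plain Python with synthetic data, not the real pipeline:

```python
def simple_aperture_photometry(cadences, mask):
    # cadences: list of 2-D pixel frames; mask: 2-D booleans selecting pixels.
    flux = []
    for frame in cadences:
        total = 0.0
        for row, mask_row in zip(frame, mask):
            for value, keep in zip(row, mask_row):
                if keep:
                    total += value
        flux.append(total)
    return flux

frames = [[[1.0, 2.0], [3.0, 4.0]],
          [[2.0, 2.0], [2.0, 2.0]]]
mask = [[True, False], [True, True]]
assert simple_aperture_photometry(frames, mask) == [8.0, 6.0]
```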
https://www.physicsforums.com/threads/srednicki-equation-7-7.383058/ | # Srednicki equation 7.7
vaibhavtewari
Hello everyone,
I will be glad if someone can explain how equation 7.7
$$\tilde{x}(E) = \tilde{q}(E) + \frac{\tilde{f}(E)}{E^2-\omega^2+i\epsilon}$$
is a shift by a constant. Here's the link for the book:
http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf
thanks
LAHLH
Hi, I'm working through this book at the moment too, and have received lots of excellent help here, so would be glad to try and help (although being a newbie at this stuff too I may not be completely correct).
I believe in chapter 7 the path integral has a measure Dq, i.e. it is with respect to variations of the configuration space (integrating over all possible configurations the particle could traverse, including the classically realised one, but also every other possible one. Later these will be field configurations).
If we make the change in 7.7, $$\tilde{x}(E)= \tilde{q}(E)+ \frac{\tilde{f}(E)}{E^2-\omega^2+i\epsilon},$$ then the last term is a constant as far as $$\int Dq$$ is concerned, as it doesn't depend on the dynamical variable q. It's like shifting every one of the infinite possible trajectories you're integrating over by a constant.
This is more or less the same as changing variables in a normal integral $$\int dx$$, by having $$y \rightarrow x-\chi$$. Clearly dy=dx, the same thing is going on in the path integral case, it's just complicated by integrating over all the paths.
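A finite-dimensional analogue makes the point concrete: a Gaussian integral is unchanged when the integration variable is shifted by a constant. A rough numerical sketch (the window, step count, and shift are arbitrary assumptions; the tails are negligible):

```python
import math

def integrate(f, a, b, n=20000):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

g = lambda x: math.exp(-x * x)
c = 1.7
unshifted = integrate(g, -10.0, 10.0)
shifted = integrate(lambda x: g(x + c), -10.0, 10.0)  # variable change x -> x + c
assert abs(unshifted - shifted) < 1e-5
```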
Last edited:
vaibhavtewari
Thank you for replying, I am glad I am not the only one struggling trough qft,
vaibhavtewari
I will be glad if you can also explain why eq 7.17 is not just
$$\frac{1}{i^2}[G(t_2-t_1)G(t_4-t_3)]$$
LAHLH
Well it ultimately comes down to the Leibniz product rule (and making sure you don't set f=0 until after the differentiation, so you have to differentiate the exponential as part of the product when you go onto the next derivative etc). Acting with the three $$\delta$$ derivatives on $$\langle 0 \mid 0 \rangle_f$$ (as given explicitly in 7.11), you will get a range of terms, but then you set f to 0 only the ones in 7.17 survive.
I hope that helps, it's hard to explain without writing it out explicitly which would probably take forever in latex. But it really is just the product rule, and making sure you only set f=0 at the end, so you don't lose any terms.
LAHLH
I will do part of the process for the 3 point function (which will be zero but will show the details of how this works anyway)
$$\langle 0 \mid TQ(t_1)Q(t_2)Q(t_3) \mid 0 \rangle= \frac{1}{i}\frac{\delta}{\delta f(t_1)}\frac{1}{i}\frac{\delta}{\delta f(t_2)}\frac{1}{i}\frac{\delta}{\delta f(t_3)} \langle0\mid0\rangle_f$$
$$\langle0\mid0\rangle_f=exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'})]$$
So acting with first derivative:
$$\frac{1}{i}\frac{\delta}{\delta f(t_3)}exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'})]= \{\frac{1}{i} \frac{i}{2} \int dt dt^{'} \delta (t-t_3) G(t-t^{'})f(t^{'})+\frac{1}{i}\frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})\delta (t^{'}-t_3) \}exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'}) ]$$
This is equal to
$$\{\frac{1}{2} \int dt^{'} G(t_3-t^{'})f(t^{'})+\frac{1}{2} \int dt f(t)G(t-t_3) \}exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'}) ]$$
Now since $$t, t^{'}$$ are dummy variables and since G is symmetric in its arguments (because of the modulo) we can write this simply as
$$\int dt^{'} G(t_3-t^{'})f(t^{'})exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'}) ]$$
This is how Srednicki gets line 2 of 7.16.
Now you want to act on this in the same way with the $$\frac{1}{i}\frac{\delta}{\delta f(t_2)}$$
So now you just use the product rule again not forgetting exp term yet which hasn't gone to one as we haven't yet set f=0.
This leads you to the next line:
$$\frac{1}{i}\frac{\delta}{\delta f(t_2)} \int dt^{'} G(t_3-t^{'})f(t^{'})exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'}) ] = \{ G(t_3-t_2)+\int dt^{'} G(t_3-t^{'})f(t^{'})\int dt^{''} G(t_2-t^{''})f(t^{''})\}exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'}) ]$$
Note this is equation 7.16 (only with t2, t3 instead of t1, t2) explicitly, my second term being what he calls "term with fs", which dies when we set f=0 at the end.
Finally we can hit this with the last derivative as follows:
$$\frac{1}{i}\frac{\delta}{\delta f(t_1)}\{ G(t_3-t_2)+\int dt^{'} G(t_3-t^{'})f(t^{'})\int dt^{''} G(t_2-t^{''})f(t^{''})\}exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'}) ]$$
Last edited:
LAHLH
Which gives:
$$\{ G(t_3-t_2)\int dt^{'} G(t_1-t^{'})f(t^{'})+G(t_3-t_1)\int dt^{'} G(t_2-t^{'})f(t^{'})$$
$$+G(t_2-t_1)\int dt^{'} G(t_3-t^{'})f(t^{'})+\int dt^{'} G(t_3-t^{'})f(t^{'})\int dt^{''} G(t_2-t^{''})f(t^{''})\int dt^{'''} G(t_1-t^{'''})f(t^{'''}) \} exp[ \frac{i}{2} \int dt dt^{'} f(t)G(t-t^{'})f(t^{'}) ]$$
Notice that setting f to zero now will kill everything; this is why all odd-number correlation functions vanish. Hopefully I haven't made too many mistakes and you can carry out this process to get the 4-point correlation function. You should be left with Srednicki's three terms as the only ones without f's, then you can set f=0 to kill all others.
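The same bookkeeping can be checked in a finite-dimensional toy model, where the functional derivatives become ordinary partials. Taking $Z(f) = \exp(\tfrac12 f^T G f)$ for a symmetric $G$ (a real-Gaussian stand-in for $\langle 0|0\rangle_f$; this is my own test rig, not Srednicki's), the mixed third derivative at $f=0$ vanishes and the mixed fourth derivative reproduces the three pairings $G_{12}G_{34}+G_{13}G_{24}+G_{14}G_{23}$:

```python
import math
import itertools

# Symmetric 4x4 "propagator" (arbitrary test values).
G = [[ 1.0,  0.3, -0.2,  0.1],
     [ 0.3,  0.8,  0.4, -0.5],
     [-0.2,  0.4,  1.2,  0.6],
     [ 0.1, -0.5,  0.6,  0.9]]

def Z(f):
    s = sum(f[i] * G[i][j] * f[j] for i in range(4) for j in range(4))
    return math.exp(0.5 * s)

def mixed_derivative(indices, h=0.02):
    # Central-difference mixed partial of Z in the given coordinates at f = 0.
    k = len(indices)
    total = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=k):
        f = [0.0] * 4
        for s, i in zip(signs, indices):
            f[i] += s * h
        total += math.prod(signs) * Z(f)
    return total / (2 * h) ** k

# Odd derivatives vanish; the fourth gives the three Wick pairings.
assert abs(mixed_derivative([0, 1, 2])) < 1e-6
wick = G[0][1] * G[2][3] + G[0][2] * G[1][3] + G[0][3] * G[1][2]
assert abs(mixed_derivative([0, 1, 2, 3]) - wick) < 1e-2
```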
Last edited:
vaibhavtewari
wonderful, thankyou very much...
I am grateful you helped me in both the problems.
https://www.gamedev.net/forums/topic/561552-how-do-i-work-with-id3d10includeopen/?page=1 | DX11 How do I work with ID3D10Include::Open()?
Recommended Posts
Hi guys, since yesterday I have been trying to include additional HLSL shader files with the D3DX11 Effect Framework, but with no success. As I read the documentation, I found out that I have to create a class which inherits from ID3D10Include and implement the functions Open() and Close(). I think Close() isn't a problem, but Open() is. The documentation describes it as "A user-implemented method for opening and reading the contents of a shader #include file.". So I understand that I have to read the content from a file and save it into ppData. Am I right?
// The defined open Method for ID3D10Include
STDMETHOD(Open)(D3D10_INCLUDE_TYPE IncludeType, LPCSTR pFileName, LPCVOID pParentData, LPCVOID *ppData, UINT *pBytes)
{
FILE *f = fopen(pFileName, "rb");
if(f == nullptr)
{
char buffere[2048];
sprintf_s(buffere, 2048, "Could not load the effect include file \"%s\".", pFileName);
BSX_ERROR(buffere);
return E_FAIL;
}
// Get the file size
fseek(f, 0, SEEK_END);
long sz = ftell(f);
fseek(f, 0, SEEK_SET);
// Read the file content
char *buffer = new char[sz];
fread(buffer, 1, sz, f);
fclose(f);
// Save the file data into ppData and the size into pBytes.
*ppData = buffer;
pBytes = new UINT; *pBytes = UINT(sz);
// return E_FAIL; // Because it isn't successfull supported...
return S_OK;
}
But it doesn't work, and my shader is telling me that the definitions from the include file are not known. Another possibility I checked out was to compile the shader file, but then the shader fails completely. The documentation says ID3D10Include::Open() should be identical to the one from D3D9 and D3D10. Has somebody an idea where the problem is, or can spot my logical error? [oh] Thank you
DieterVW 724
First, your code has an error. pBytes is being passed in so that you can set the size, not create a new UINT. Your code should read *pBytes = sz; It's not working right now because the size value after the call to Open would still be zero and your memory would be leaked.
The other thing is that the compiler will search for include files automatically in the same directory as the file you are trying to compile (provided that the include object is NULL). If I recall correctly, I believe that it even recursively searches down the sub directories. This interface is more useful for doing things like opening resources that are in the exe, like a resource file, or unpacking shaders that are in zip files.
Oh my god, and now it works fine.
Sometimes I stare at the monitor for hours and can't find the bug. What a horror to waste time on such a simple bug.
Really really thank you! [smile] And the day is saved!
• Similar Content
• By gsc
Hi! I am trying to implement simple SSAO postprocess. The main source of my knowledge on this topic is that awesome tutorial.
But unfortunately something doesn't work... And after a few long hours I need some help. Here is my hlsl shader:
float3 randVec = _noise * 2.0f - 1.0f; // noise: vec: {[0;1], [0;1], 0}
float3 tangent = normalize(randVec - normalVS * dot(randVec, normalVS));
float3 bitangent = cross(tangent, normalVS);
float3x3 TBN = float3x3(tangent, bitangent, normalVS);
float occlusion = 0.0;
for (int i = 0; i < kernelSize; ++i)
{
    float3 samplePos = samples[i].xyz; // samples: {[-1;1], [-1;1], [0;1]}
    samplePos = mul(samplePos, TBN);
    samplePos = positionVS.xyz + samplePos * ssaoRadius;

    float4 offset = float4(samplePos, 1.0f);
    offset = mul(offset, projectionMatrix);
    offset.xy /= offset.w;
    offset.y = -offset.y;
    offset.xy = offset.xy * 0.5f + 0.5f;

    float sampleDepth = tex_4.Sample(textureSampler, offset.xy).a;
    sampleDepth = vsPosFromDepth(sampleDepth, offset.xy).z;

    const float threshold = 0.025f;
    float rangeCheck = abs(positionVS.z - sampleDepth) < ssaoRadius ? 1.0 : 0.0;
    occlusion += (sampleDepth <= samplePos.z + threshold ? 1.0 : 0.0) * rangeCheck;
}
occlusion = saturate(1 - (occlusion / kernelSize));

And current result: http://imgur.com/UX2X1fc
I will really appreciate for any advice!
• By isu diss
I'm trying to code Rayleigh part of Nishita's model (Display Method of the Sky Color Taking into Account Multiple Scattering). I get black screen no colors. Can anyone find the issue for me?
• I made my obj parser
and It also calculate tagent space for normalmap.
it seems calculation is wrong..
any good suggestion for this?
https://gamedev.stackexchange.com/questions/147199/how-to-debug-calculating-tangent-space
and I uploaded my code here
• Hi guys,
I dont know if this is the right section, but I did not know where to post this.
I am implementing a day night cycle on my game engine and I was wondering if there was a nice way to interpolate properly between warm colors, such as orange (sunset) and dark blue (night) color. I am using HSL format.
Thank you.
• I am aiming to learn Windows Forms with the purpose of creating some game-related tools, but since I know absolutely nothing about Windows Forms yet, I wonder:
Is it possible to render a Direct3D 11 viewport inside a Windows Form Application? I see a lot of game editors that have a region of the window reserved for displaying and manipulating a 3D or 2D scene. That's what I am aiming for.
Otherwise, would you suggest another library to create a GUI for game-related tools?
EDIT:
I've found a tutorial here in gamedev that shows a solution:
Though it's for D3D9, I'm not sure if it would work for D3D11?
http://quant.stackexchange.com/questions?page=5&sort=active | All Questions
66 views
In the portfolio allocation literature a lot of effort goes into obtaining 'better' portfolio weights, for example via improving parameter estimates, introducing Bayesian approaches, incorporating ...
41 views
LIBOR with different tenor
Let $F(t;S,T)$ be the forward rate from $S$ to $T$ seen at time $t$, and $I$ be one of tenors, i.e. $I$ is one of {1M, 3M, 6M, 12M}. Then the forward curve $t\mapsto F(0;t,t+I)$ is $I$-forward curve. ...
40 views
Where to get historical daily settlement price of each VSTOXX futures contract
I'm doing some analysis on VIX and VSTOXX futures and require historical prices of each contract as a result. VIX info is free to download on CBOE website: ...
53 views
Multi-asset class allocation
How to allocate asset classes in a multi-asset portfolio? An institutional client needs to meet his pension liabilities, and suggested a multi-asset-class strategy. I'm trying to find ideas to pitch. ...
82 views
Reuters RIC chain for Eurodollar midcurve options
Can someone please tell me what this is? Thanks. Edit: The RIC for the straight eurodollar options is 0#GE+, I need RICs for the 1,2,3,4 mid curve options which the IMM/IOM calls GE0, GE2, GE3, ...
1k views
Have Goldman Sachs Quantitative Strategies Research Notes been published as a book or a comprehensive collection?
Back in the 90's, Goldman Sachs (publicly?) released a series called "Quantitative Strategies Research Notes" — mostly technical papers on topic. Emanuel Derman co-authored almost all of them. Some ...
84 views
Why can CDS indices be used as a bond market index?
I don't understand why the iTraxx indices family, which are credit default swap indices, are in practice often used to gauge the bond market. How are CDS prices related to bonds prices ? And what ...
208 views
How to break down an FX option P&L?
I am comparing the mark-to-market (MtM) valuations of two risk systems, with respect to FX Options. My question is can I quantify the difference in MtM given the following: System1 AUD/JPY, MTM = ...
116 views
What are the canonical references on wholesale credit risk management?
I am trying to read up on "Wholesale credit risk ", but I can't find any useful references. Why is the emphasis on wholesale?
80 views
Confidence Intervals of Stock Following a Geometric Brownian Motion
In preparation for my Options, Future's and Risk Management examination next week, I have been presented with a series of questions and their answers. Unfortunately, my lecturer, one of the less ...
99 views
How to use calibrated Standard Stochastic Volatility?
I'm considering the standard stochastic volatility model: $$x_t = \rho x_{t-1} + \sigma \epsilon_x$$ $$y_t = \beta \exp\left[ \frac{x_t}{2} \right] \epsilon_y$$ where $y_t$ is the log-returns and ...
72 views
Technical Analysis - OBV indicator calculation in R
Here are a few references about OBV calculations: http://ta.mql4.com/indicators/volumes/on_balance_volume ...
548 views
How to price a Swing Option?
I'm working in the commodity market and I've to price Swing Options with MATLAB, preferably with finite element. Has anyone already priced these kind of derivatives? I'm thinking about using the ...
278 views
Seasonal patterns in financial markets (weekday effects)
What seasonal patterns are there in financial markets? Is my feeling "true" that Mondays are more volatile than e.g. Tuesdays (as information gathered during the weekend can only be turned into an ...
78 views
Does heteroskedasticity of returns depend on the time frame?
Similarly to my last question, for which I obtained very interesting and useful answers, I would like to know if there has been any study regarding heteroskedasticity and time-frames of the returns. ...
47 views
Calculating Greeks using BinomialTree in Matlab [closed]
section 1. Calculating sensitivity of the price of derivatives American or European option using binomial tree model section 2. Calculating first order greeks the code compiles till this point ...
59 views
Measuring Volatility from Execution Prices
I was told of a way of measuring the volatility of a stock by looking at the reported execution prices (from Level III or Level II data.) I'm well aware of how to measure volatility by looking at the ...
96 views
Extracting Signal from Noisy Data
Consider a scenario in which Y_t represents the % change in price and we want to use X_t to predict Y_t. We assume that X_t is information we get before Y_t is revealed. Suppose that in reality Y_t ...
50 views
Price of an American call option [closed]
I'm working through revision questions at the moment and we are asked to compute the price of an American call option. Suppose that $dS_t = \sigma S_t dW^*_t, S_0 >0$ Let $0<U<T$ be fixed ...
261 views
Sharpe ratio and leverage
Does leverage affect the Sharpe ratio? If my Sharpe is 2 with no leverage, does it change, fall by half say, once leverage is applied?
15 views
how does a bond maturing affect the pricing of the corresponding CDS?
if a bond matures, and there is no other existing bond from the legal entity that has not matured. Then how does that affect the CDS that corresponds to that bond?
167 views
Suppose you bought a July ITM call and sold an August ATM put, am I net long or short?
Here is the full question, even though I've broken it down to the mini question above. Suppose you have bought a July ITM call and sold an August ATM put. What would be your delta in this position? ...
171 views
Can Gaussianity of returns depend on the time frame?
I would be interested in knowing if the fact that returns are Gaussian is disproved on all time frames, or if, for example, the 5 minute intra-day time frame could exhibits Gaussian returns assuming ...
160 views
Deposit vs. LIBOR rates? (Bloomberg/SuperDerivatives)
I noticed that Bloomberg and SuperDerivatives both use "Deposit Rates" for the calculation of forward points for currencies. I couldn't find anything online that describes precisely where these rates ...
246 views
what is a typical way forex brokerages can provide cheap leverage for their customers?
I'm not very well read in the area of high finance but I'm curious how forex brokerages are able to provide the backing for leverage that they can provide to customers. Is it possible to do this ...
63 views
Asset Liability Management Test Topic Interpretation
I will write a test based on Excel and one of the topics is "The Asset Liability related analysis: including the input assumptions generation, constraints, portfolio optimization analysis and results ...
50 views
Online database of ETF & Mutual Fund Fees?
Is there any online data source of ETF and/or mutual fund fees? Free or paid is fine, although hopefully there's something out there cheaper than Bloomberg
49 views
What interest rate should I use for testing the covered interest parity?
I am doing an empirical test of the CIP from the recent financial crisis between Canada and the United States. I am using 1,2,3,6,12 month forwards (monthly data). What interest rates should I use? I ...
190 views
How to work out weights for a portfolio based on an inverse ratio with positive and negative values?
I am trying to work out how to determine weights for the assets in order to form a portfolio. The ratio I am using is EV/EBIT, hence the smaller the better. The problem is I don't know how to handle ...
https://brilliant.org/problems/find-continous/ | # Determine continuity
Is the following function $f(x)$ continuous at $x=0$? (Use $\sin 2x = 2 \sin x \cos x$.)
$\mathit{f}(x) = \lim_{n \rightarrow \infty}\left(\cos x \cdot \cos \frac{x}{2} \cdot \cos \frac{x}{2^2} \cdot \cdot\cdot\cdot \cdot \cos \frac{x}{2^n} \right) ?$
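One standard route, sketched here using the given identity repeatedly, is to telescope the partial product:

```latex
% Each factor satisfies \cos\theta = \sin 2\theta / (2\sin\theta),
% so the product telescopes:
\prod_{k=0}^{n} \cos\frac{x}{2^{k}}
  = \frac{\sin 2x}{2^{n+1}\,\sin\!\left(x/2^{n}\right)}
  \;\longrightarrow\; \frac{\sin 2x}{2x}
  \qquad (n\to\infty,\; x \neq 0).
% At x = 0 every factor is \cos 0 = 1, so f(0) = 1; since
% \lim_{x\to 0} \sin 2x/(2x) = 1, f is continuous at x = 0.
```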
https://prairielearn.readthedocs.io/en/latest/dev-guide/ | # Developer Guide
In general we prefer simplicity. We standardize on JavaScript (Node.js) and SQL (PostgreSQL) as the languages of implementation and try to minimize the number of complex libraries or frameworks being used. The website is server-side generated pages with minimal client-side JavaScript.
## High level view
• The questions and assessments for a course are stored in a git repository. This is synced into the database by the course instructor and DB data is updated or added to represent the course. Students then interact with the course website by doing questions, with the results being stored in the DB. The instructor can view the student results on the website and download CSV files with the data.
• All course configuration is done via plain text files in the git repository, which is the master source for this data. There is no extra course configuration stored in the DB. The instructor does not directly edit course data via the website.
• All student data is stored only in the DB; it is never pushed back into the git repository or to disk at any point.
## Directory layout
PrairieLearn
+-- autograder          # files needed to autograde code on a separate server
|   `-- ...             # various scripts and docker images
+-- config.json         # server configuration file (optional)
+-- cron                # jobs to be periodically executed, one file per job
|   +-- index.js        # entry point for all cron jobs
|   `-- ...             # one JS file per cron job, executed by index.js
+-- docs                # documentation
+-- exampleCourse       # example content for a course
+-- lib                 # miscellaneous helper code
+-- middlewares         # Express.js middleware, one per file
+-- migrations          # DB migrations
|   +-- index.js        # entry point for migrations
|   `-- ...             # one PGSQL file per migration, executed in order by index.js
+-- package.json        # npm configuration file
+-- pages               # one sub-dir per web page
|   +-- partials        # EJS helper sub-templates
|   `-- ...             # other "instructor" and "user" pages
+-- public              # all accessible without access control
|   +-- javascripts     # external packages only, no modifications
|   +-- localscripts    # all local site-wide JS
|   `-- stylesheets     # all CSS, both external and local
+-- question-servers    # one file per question type
+-- server.js           # top-level program
+-- sprocs              # DB stored procedures, one per file
|   +-- index.js        # entry point for all sproc initialization
|   `-- ...             # one JS file per sproc, executed by index.js
+-- sync                # code to load on-disk course config into DB
`-- tests               # unit and integration tests
## Unit tests and integration tests
• Tests are stored in the tests/ directory. They are run with:
# make sure you are in the top-level PrairieLearn/ directory
npm test
make lint
• The above tests are run by the CI server on every push to GitHub.
• The tests are mainly integration tests that start with a blank database, run the server to initialize the database, load the testCourse, and then emulate a client web browser that answers questions on assessments. If a test fails, it is often easiest to debug by recreating the error: answer questions yourself against a locally running server.
• If the PL_KEEP_TEST_DB environment variable is set, the test database (normally pltest) won't be DROP'd when testing ends. This allows you to inspect the state of the database after testing ends. The database will get overwritten when you start a new test.
• Individual tests can be run with:
npx mocha tests/[testName].js
## Debugging server-side JavaScript
• Use the debug package to help trace execution flow in JavaScript. To run the server with debugging output enabled:
DEBUG=* node server
• To just see debugging logs from PrairieLearn you can use:
DEBUG=prairielearn:* node server
• To insert more debugging output, import debug and use it like this:
var path = require('path');
var debug = require('debug')('prairielearn:' + path.basename(__filename, '.js'));
// in some function later
debug('func()', 'param:', param);
• As of 2017-08-08 we don't have very good coverage with debug output in code, but we are trying to add more as needed, especially in code in lib/.
• UnhandledPromiseRejectionWarning errors are frequently due to improper async/await handling. Make sure you are calling async functions with await, and that async functions are not being called from callback-style code without a callbackify(). To get more information, NodeJS v14 can be run with the --trace-warnings flag. For example, npx mocha --trace-warnings tests/index.js.
## Debugging client-side JavaScript
• Make sure you have the JavaScript Console open in your browser and reload the page.
## Debugging SQL and PL/pgSQL
• Use the psql commandline interface to test SQL separately. A default development PrairieLearn install uses the postgres database, so you should run:
psql postgres
• To debug syntax errors in a stored procedure, import it manually with \i filename.sql in psql.
• To follow execution flow in PL/pgSQL use RAISE NOTICE. This will log to the console when run from psql and to the server log file when run from within PrairieLearn. The syntax is:
RAISE NOTICE 'This is logging: % and %', var1, var2;
• To manually run a function:
SELECT the_sql_function(arg1, arg2);
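For example, a throwaway function (the name example_log is made up for illustration) that demonstrates both RAISE NOTICE and manual invocation from psql:

```sql
CREATE OR REPLACE FUNCTION example_log(var1 integer, var2 text) RETURNS void AS $$
BEGIN
    -- the NOTICE appears on the psql console (or the server log in PrairieLearn)
    RAISE NOTICE 'This is logging: % and %', var1, var2;
END;
$$ LANGUAGE plpgsql;

SELECT example_log(42, 'hello');
```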
## HTML page generation
• Use Express as the web framework. As of 2016-09-27 we are using v4.
• All pages are server-side rendered and we try to minimize the amount of client-side JavaScript. Client-side JS should use jQuery and related libraries. We prefer to use off-the-shelf jQuery plugins where possible.
• Each web page typically has all its files in a single directory, with the directory, the files, and the URL all named the same. Not all pages need all files. For example:
pages/instructorGradebook
+-- instructorGradebook.js         # main entry point, calls the SQL and renders the template
+-- instructorGradebook.ejs        # the EJS template for the page
`-- instructorGradebookClient.js   # any client-side JS needed
• The above instructorGradebook page is loaded from the top-level server.js with:
app.use('/instructor/:courseInstanceId/gradebook', require('./pages/instructorGradebook/instructorGradebook'));
• The instructorGradebook.js main JS file is an Express router and has the basic structure:
var ERR = require('async-stacktrace');
var _ = require('lodash');
var express = require('express');
var router = express.Router();

var sqldb = require('@prairielearn/prairielib/sql-db');
var sqlLoader = require('@prairielearn/prairielib/sql-loader');
var sql = sqlLoader.loadSqlEquiv(__filename); // loads instructorGradebook.sql

router.get('/', function(req, res, next) {
    var params = {course_instance_id: req.params.courseInstanceId};
    sqldb.query(sql.user_scores, params, function(err, result) { // SQL queries for page data
        if (ERR(err, next)) return;
        res.locals.user_scores = result.rows; // store the data in res.locals
        // inside the EJS template, "res.locals.var" can be accessed with just "var"
        res.render(__filename.replace(/\.js$/, '.ejs'), res.locals);
    });
});

module.exports = router;
• Use the res.locals variable to build up data for the page rendering. Many basic objects are already included from the selectAndAuthz*.js middleware that runs before most page loads.
• Use EJS (Embedded JavaScript) templates for all pages. Using JS as the templating language removes the need for another ad hoc language, but it does require some discipline to avoid a mess. Try to minimize the amount of JS code in template files. Inside a template, the JS code can directly access the contents of the res.locals object.
• Sub-templates are stored in pages/partials and can be loaded as below. The sub-template can also access res.locals as its base scope, and can also accept extra arguments with an arguments object:
<%- include('../partials/assessment', {assessment: assessment}); %>
## HTML style
• Use Bootstrap as the style. As of 2019-12-13 we are using v4.
• Local CSS rules go in public/stylesheets/local.css. Try to minimize use of this and use plain Bootstrap styling wherever possible.
• Buttons should use the <button> element when they take actions and the <a> element when they are simply links to other pages. We should not use <a role="button"> to fake a button element. Buttons that do not submit a form should always start with <button type="button" class="btn ...">, where type="button" specifies that they don't submit.
## SQL usage
• Use PostgreSQL and feel free to use the latest features. As of 2017-08-05 we run version 9.6.
• The PostgreSQL manual is an excellent reference.
• Write raw SQL rather than using a ORM library. This reduces the number of frameworks/languages needed.
• Try to write as much code as possible in SQL and PL/pgSQL, rather than in JavaScript. Use PostgreSQL-specific SQL and don't worry about SQL dialect portability. Functions should be written as stored procedures in the sprocs/ directory.
• The sprocs/ directory has files that each contain exactly one stored procedure. The filename is the same as the name of the stored procedure, so the variants_insert() stored procedure is in the sprocs/variants_insert.sql file.
• Stored procedure names should generally start with the name of the table they are associated with and try to use standard SQL command names to describe what they do. For example, variants_insert() will do some kind of INSERT INTO variants, while submission_update_parsing() will do an UPDATE submissions with some parsing data.
• Use the SQL convention of snake_case for names. Also use the same convention in JavaScript for names that are the same as in SQL, so the question_id variable in SQL is also called question_id in JavaScript code.
• Use uppercase for SQL reserved words like SELECT, FROM, AS, etc.
• SQL code should not be inline in JavaScript files. Instead it should be in a separate .sql file, following the Yesql concept. Each filename.js file will normally have a corresponding filename.sql file in the same directory. The .sql file should look like:
-- BLOCK select_question
SELECT * FROM questions WHERE id = $question_id;

-- BLOCK insert_submission
INSERT INTO submissions (submitted_answer)
VALUES ($submitted_answer)
RETURNING *;
From JavaScript you can then do:
var sqlLoader = require('@prairielearn/prairielib/sql-loader');
var sql = sqlLoader.loadSqlEquiv(__filename); // loads filename.sql from the same directory
// run the entire contents of the SQL file
sqldb.query(sql.all, params, ...);
// run just one query block from the SQL file
sqldb.query(sql.select_question, params, ...);
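The BLOCK convention is simple enough to sketch. The function below is a hypothetical re-implementation for illustration only (the real @prairielearn/prairielib/sql-loader is more robust): it splits a .sql file's contents into named blocks keyed by `-- BLOCK name` lines.

```javascript
// Hypothetical minimal Yesql-style loader: map "-- BLOCK name" sections
// of a .sql file to an object of { name: sqlText }.
function loadSqlBlocks(contents) {
  const blocks = {};
  let current = null;
  for (const line of contents.split('\n')) {
    const m = line.match(/^--\s*BLOCK\s+(\S+)/);
    if (m) {
      current = m[1];         // start a new named block
      blocks[current] = '';
    } else if (current !== null) {
      blocks[current] += line + '\n'; // accumulate lines into the current block
    }
  }
  return blocks;
}

const sql = loadSqlBlocks([
  '-- BLOCK select_question',
  'SELECT * FROM questions WHERE id = $question_id;',
  '-- BLOCK insert_submission',
  'INSERT INTO submissions (submitted_answer) VALUES ($submitted_answer);',
].join('\n'));

console.log(Object.keys(sql)); // [ 'select_question', 'insert_submission' ]
```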
• The layout of the SQL code should generally have each list in separate indented blocks, like:
SELECT
ft.col1,
ft.col2 AS renamed_col,
st.col1
FROM
first_table AS ft
JOIN second_table AS st ON (st.first_table_id = ft.id)
WHERE
ft.col3 = select3
AND st.col2 = select2
ORDER BY
ft.col1;
WITH first_preliminary_table AS (
SELECT
-- first preliminary query
),
second_preliminary_table AS (
SELECT
-- second preliminary query
)
SELECT
-- main query here
FROM
first_preliminary_table AS fpt,
second_preliminary_table AS spt;
## DB stored procedures (sprocs)
• Stored procedures are created by the files in sprocs/. To call a stored procedure from JavaScript, use code like:
const workspace_id = 1342;
const message = 'Startup successful';
sqldb.call('workspaces_message_update', [workspace_id, message], (err, result) => {
if (ERR(err, callback)) return;
// we could use the result here if we want the return value of the stored procedure
callback(null);
});
• The stored procedures are all contained in a separate database schema with a name like server_2021-07-07T20:25:04.779Z_T75V6Y. To see a list of the schemas use the \dn command in psql.
• To be able to use the stored procedures from the psql command line it is necessary to get the most recent schema name using \dn and set the search_path to use this quoted schema name and the public schema:
set search_path to "server_2021-07-07T20:25:04.779Z_T75V6Y",public;
• During startup we initially have no non-public schema in use. We first run the migrations to update all tables in the public schema, then we call sqldb.setRandomSearchSchema() to activate a random per-execution schema, and we run the sproc creation code to generate all the stored procedures in this schema. This means that every invocation of PrairieLearn will have its own locally-scoped copy of the stored procedures which are the correct versions for its code. This lets us upgrade PrairieLearn servers one at a time, while old servers are still running with their own copies of their sprocs. When PrairieLearn first starts up it has search_path = public, but later it will have search_path = "server_2021-07-07T20:25:04.779Z_T75V6Y",public so that it will first search the random schema and then fall back to public. The naming convention for the random schema uses the local instance name, the date, and a random string. Note that schema names need to be quoted using double-quotations in psql because they contain characters such as hyphens.
• For more details see sprocs/array_and_number.sql and comments in server.js near the call to sqldb.setRandomSearchSchema().
## DB schema (simplified overview)
• The most important tables in the database are shown in the diagram below (also as a PDF image).
• Detailed descriptions of the format of each table are in the list of DB tables.
• Each table has an id number that is used for cross-referencing. For example, each row in the questions table has an id and other tables will refer to this as a question_id. The only exceptions are pl_courses, which other tables refer to with course_id, and users, which has a user_id. These are both for reasons of interoperability with PrairieSchedule.
• Each student is stored as a single row in the users table.
• The pl_courses table has one row for each course, like TAM 212.
• The course_instances table has one row for each semester (“instance”) of each course, with the course_id indicating which course it belongs to.
• Every question is a row in the questions table, and the course_id shows which course it belongs to. All the questions for a course can be thought of as the “question pool” for that course. This same pool is used for all semesters (all course instances).
• Assessments are stored in the assessments table and each assessment row has a course_instance_id to indicate which course instance (and hence which course) it belongs to. An assessment is something like “Homework 1” or “Exam 3”; its name is determined by the assessment_set_id and number of the assessment row.
• Each assessment has a list of questions associated with it. This list is stored in the assessment_questions table, where each row has an assessment_id and question_id to indicate which questions belong to which assessment. For example, there might be 20 different questions that are on “Exam 1”, and it might be the case that each student gets 5 of these questions randomly selected.
• Each student will have their own copy of an assessment, stored in the assessment_instances table with each row having a user_id and assessment_id. This is where the student's score for that assessment is stored.
• The selection of questions that each student is given on each assessment is in the instance_questions table. Here each row has an assessment_question_id and an assessment_instance_id to indicate that the corresponding question is on that assessment instance. This row will also store the student's score on this particular question.
• Questions can randomize their parameters, so there are many possible variants of each question. These are stored in the variants table with an instance_question_id indicating which instance question the variant belongs to.
• For each variant of a question that a student sees they will have submitted zero or more submissions with a variant_id to show what it belongs to. The submissions row also records the submitted answer and whether it was correct.
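Putting the chain together, fetching one student's submissions on one assessment walks these foreign keys (a sketch; the ids 37 and 45 are made up):

```sql
SELECT
    s.*
FROM
    assessment_instances AS ai
    JOIN instance_questions AS iq ON (iq.assessment_instance_id = ai.id)
    JOIN variants AS v ON (v.instance_question_id = iq.id)
    JOIN submissions AS s ON (s.variant_id = v.id)
WHERE
    ai.user_id = 37
    AND ai.assessment_id = 45;
```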
## DB schema conventions
• Tables have plural names (e.g. assessments) and always have a primary key called id. The foreign keys pointing to this table are non-plural, like assessment_id. When referring to this use an abbreviation of the first letters of each word, like ai in this case. The only exceptions are aset for assessment_sets (to avoid conflicting with the SQL AS keyword), top for topics, and tag for tags (to avoid conflicts). This gives code like:
-- select all active assessment_instances for a given assessment
SELECT
ai.*
FROM
assessments AS a
JOIN assessment_instances AS ai ON (ai.assessment_id = a.id)
WHERE
a.id = 45
AND ai.deleted_at IS NULL;
• We (almost) never delete student data from the DB. To avoid having rows with broken or missing foreign keys, course configuration tables (e.g. assessments) can't be actually deleted. Instead they are "soft-deleted" by setting the deleted_at column to non-NULL. This means that when using any soft-deletable table we need to have a WHERE deleted_at IS NULL to get only the active rows.
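A soft delete is therefore an UPDATE, not a DELETE (a sketch; the id 45 is made up):

```sql
-- "delete" an assessment by marking it, leaving all foreign keys intact
UPDATE assessments SET deleted_at = current_timestamp WHERE id = 45;

-- every later query must then filter out the soft-deleted rows
SELECT * FROM assessments WHERE deleted_at IS NULL;
```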
## DB schema modification
• The database is built with a series of consecutive "migrations". A migration is a modification to table schema and is represented as a file in migrations/.
• The filenames in migrations/ are of the form <nnn>_<description>.sql where <nnn> is a number to ensure ordering. The leading number must be unique among all migrations. A suggested form for the <description> is <table name>__<column name>__<operation> if only a single column is being changed.
• It's fine to put multiple logically-related migration statements in the same file.
• The database has a special migrations table that tracks which migrations have already been applied. This ensures that migrations are always applied exactly once.
• The current state of the DB schema is stored in a human-readable form in the database/ directory. This is checked automatically by the unit tests and needs to be manually updated after migrations and the updates should be committed to git along with the migrations.
• Historical note: Migrations started with PrairieLearn version 2.0.0. Starting with 2.0.0, the schema was maintained with separate models and migrations directories, which had to be kept in sync. In version 2.0.10, that was switched to solely migrations, and the state of models as of 2.0.0 was captured in migrations/000_initial_state.sql. All future migrations are applied on top of that.
• Some useful migration statements follow.
-- add a column to a table
ALTER TABLE assessments ADD COLUMN auto_close boolean DEFAULT true;
-- add a foreign key to a table
ALTER TABLE variants ADD COLUMN authn_user_id bigint;
-- remove a constraint
ALTER TABLE alternative_groups DROP CONSTRAINT alternative_groups_number_assessment_id_key;
ALTER TABLE alternative_groups ADD UNIQUE (assessment_id, number);
## JSON syncing
• Edit the DB schema; e.g., to add a require_honor_code boolean for assessments, modify database/tables/assessments.pg:
@@ -16,2 +16,3 @@ columns
order_by: integer
+ require_honor_code: boolean default true
shuffle_questions: boolean default false
• Add a DB migration; e.g., create migrations/167_assessments__require_honor_code__add.sql:
@@ -0,0 +1 @@
+ALTER TABLE assessments ADD COLUMN require_honor_code boolean DEFAULT true;
• Edit the JSON schema; e.g., modify schemas/schemas/infoAssessment.json:
@@ -89,2 +89,7 @@
"default": true
+ },
+ "requireHonorCode": {
+ "description": "Requires the student to accept an honor code before starting exam assessments.",
+ "type": "boolean",
+ "default": true
}
• Edit the sync parser; e.g., modify sync/fromDisk/assessments.js:
@@ -44,2 +44,3 @@ function buildSyncData(courseInfo, courseInstance, questionDB) {
+ const requireHonorCode = !!_.get(assessment, 'requireHonorCode', true);
@@ -63,2 +64,3 @@ function buildSyncData(courseInfo, courseInstance, questionDB) {
+ require_honor_code: requireHonorCode,
auto_close: !!_.get(assessment, 'autoClose', true),
• Edit the sync query; e.g., modify sprocs/sync_assessments.sql:
@@ -44,3 +44,4 @@ BEGIN
allow_issue_reporting,
+ require_honor_code)
(
@@ -64,3 +65,4 @@ BEGIN
(assessment->>'allow_issue_reporting')::boolean,
+ (assessment->>'require_honor_code')::boolean
)
@@ -83,3 +85,4 @@ BEGIN
allow_issue_reporting = EXCLUDED.allow_issue_reporting,
+ require_honor_code = EXCLUDED.require_honor_code
WHERE
• Edit the sync tests; e.g., modify tests/sync/util.js:
@@ -128,2 +128,3 @@ const syncFromDisk = require('../../sync/syncFromDisk');
+ * @property {boolean} requireHonorCode
* @property {boolean} multipleInstance
## Database access
• DB access is via the sqldb.js module. This wraps the node-postgres library.
• For single queries we normally use the following pattern, which automatically uses connection pooling from node-postgres and safe variable interpolation with named parameters and prepared statements:
var params = {
course_id: 45,
};
sqldb.query(sql.select_questions_by_course, params, function(err, result) {
if (ERR(err, callback)) return;
var questions = result.rows;
});
Where the corresponding filename.sql file contains:
-- BLOCK select_questions_by_course
SELECT * FROM questions WHERE course_id = $course_id;

• For queries where it would be an error to not return exactly one result row:

sqldb.queryOneRow(sql.block_name, params, function(err, result) {
    if (ERR(err, callback)) return;
    var obj = result.rows[0]; // guaranteed to exist and no more
});

• For transactions with correct error handling use the following pattern. It is important to use async.series() to run all the operations rather than using a callback stack, because with async.series() we can guarantee that endTransaction() is called no matter whether any of the intermediate operations produce an error.

// do this
sqldb.beginTransaction(function(err, client, done) {
    if (ERR(err, callback)) return;
    async.series([
        function(callback) {
            // only use queryWithClient() and queryWithClientOneRow() inside the transaction
            sqldb.queryWithClient(client, sql.block_name, params, function(err, result) {
                if (ERR(err, callback)) return;
                // do things
                callback(null);
            });
        },
        // more series functions inside the transaction
    ], function(err) {
        sqldb.endTransaction(client, done, err, function(err) { // will rollback if err is defined
            if (ERR(err, callback)) return;
            // transaction successfully committed at this point
            callback(null);
        });
    });
});

// don't do this
sqldb.beginTransaction(function(err, client, done) {
    if (ERR(err, callback)) return;
    sqldb.queryWithClient(client, sql.block_name, params, function(err, result) {
        if (ERR(err, callback)) return;
        // THIS IS WRONG, it may exit without endTransaction()
        sqldb.endTransaction(client, done, err, function(err) {
            if (ERR(err, callback)) return;
            callback(null);
        });
    });
});

• Use explicit row locking whenever modifying student data related to an assessment. This must be done within a transaction. The rule is that we lock either the variant (if there is no corresponding assessment instance) or the assessment instance (if we have one). It is fine to repeatedly lock the same row within a single transaction, so all functions involved in modifying elements of an assessment (e.g., adding a submission, grading, etc.) should call a locking function when they start. All locking functions are equivalent in their action, so the most convenient one should be used in any given situation:

Locking function             Argument
assessment_instances_lock    assessment_instance_id
instance_questions_lock      instance_question_id
variants_lock                variant_id
submission_lock              submission_id

• To pass an array of parameters to SQL code, use the following pattern, which allows zero or more elements in the array. This replaces $points_list with ARRAY[10, 5, 1] in the SQL. It's required to specify the type of the array in case it is empty:
```javascript
var params = {
    points_list: [10, 5, 1],
};
sqldb.query(sql.insert_assessment_question, params, ...);
```

```sql
-- BLOCK insert_assessment_question
INSERT INTO assessment_questions (points_list) VALUES ($points_list::INTEGER[]);
```

```javascript
var params = {
    id_list: [7, 12, 45],
};
sqldb.query(sql.select_questions, params, ...);
```

```sql
-- BLOCK select_questions
SELECT * FROM questions WHERE id IN (SELECT unnest($id_list::INTEGER[]));
```
• To pass a lot of data to SQL, a useful pattern is to send a JSON object array and unpack it in SQL to the equivalent of a table. This is the pattern used by the "sync" code, such as sprocs/sync_news_items.sql. For example:
```javascript
let data = [
    {a: 5, b: "foo"},
    {a: 9, b: "bar"},
];
let params = {data: JSON.stringify(data)};
sqldb.query(sql.insert_data, params, ...);
```

```sql
-- BLOCK insert_data
INSERT INTO my_table (a, b)
SELECT *
FROM jsonb_to_recordset($data) AS (a INTEGER, b TEXT);
```

• To use a JSON object array in the above fashion, but where the order of rows is important, use ROWS FROM () WITH ORDINALITY to generate a row index like this:

```sql
-- BLOCK insert_data
INSERT INTO my_table (a, b, order_by)
SELECT *
FROM ROWS FROM (
    jsonb_to_recordset($data)
    AS (a INTEGER, b TEXT)
) WITH ORDINALITY AS data(a, b, order_by);
```
## Asynchronous control flow in JavaScript
• New code in PrairieLearn should use async/await whenever possible.
• Older code in PrairieLearn uses the traditional Node.js error handling conventions with the callback(err, result) pattern.
• Use the async library for complex control flow. Versions 3 and higher of async support both async/await and callback styles.
## Using async route handlers with ExpressJS
```javascript
const asyncHandler = require('express-async-handler');
router.get('/', asyncHandler(async (req, res, next) => {
    // can use "await" here
}));
```
## Interfacing between callback-style and async/await-style functions
• To write a callback-style function that internally uses async/await code, use this pattern:
```javascript
const util = require('util');
function oldFunction(x1, x2, callback) {
    util.callbackify(async () => {
        // here we can use async/await code
        const y1 = await f(x1);
        const y2 = await f(x2);
        return y1 + y2;
    })(callback);
}
```
• To write a multi-return-value callback-style function that internally uses async/await code, we don't currently have an established pattern.
• To call our own library functions from async/await code, we should provide a version of them with "Async" appended to the name:
```javascript
const util = require('util');
module.exports.existingLibFun = (x1, x2, callback) => {
    callback(null, x1*x2);
};
module.exports.existingLibFunAsync = util.promisify(module.exports.existingLibFun);

// in async code we can now call existingLibFunAsync() directly with await:
async function newFun(x1, x2) {
    let y = await existingLibFunAsync(x1, x2);
    return 3*y;
}
```
• If our own library functions use multiple return values, then the async version of them should return an object:
```javascript
const util = require('util');
module.exports.existingMultiFun = (x, callback) => {
    const y1 = x*x;
    const y2 = x*x*x;
    callback(null, y1, y2); // note the two return values here
};
module.exports.existingMultiFunAsync = util.promisify((x, callback) =>
    module.exports.existingMultiFun(x, (err, y1, y2) => callback(err, {y1, y2}))
);

async function newFun(x) {
    let {y1, y2} = await existingMultiFunAsync(x); // must use y1,y2 names here
    return y1*y2;
}
```
• To call a callback-style function in an external library from within an async/await function, use this pattern:
```javascript
const util = require('util');
async function g(x) {
    const x1 = await f(x + 2);
    const x2 = await f(x + 4);
    const z = await util.promisify(oldFunction)(x1, x2);
    return z;
}
```
• As of 2019-08-15 we are not calling any multi-return-value callback-style functions in external libraries from within async/await functions, but if we need to do this then we could include the bluebird package and use the pattern:
```javascript
const bluebird = require('bluebird');
function oldMultiFunction(x, callback) {
    return callback(null, x*x, x*x*x);
}
async function g(x) {
    // note array destructuring with y1,y2
    let [y1, y2] = await bluebird.promisify(oldMultiFunction, {multiArgs: true})(x);
    return y1*y2;
}
```
• To call an async/await function from within a callback-style function, use this pattern:
```javascript
const util = require('util');
function oldFunction(x, callback) {
    util.callbackify(g)(x, (err, y) => {
        if (ERR(err, callback)) return;
        callback(null, y);
    });
}
```
• To call a multi-return-value async/await function from within a callback-style function, use this pattern:

```javascript
const util = require('util');
async function gMulti(x) {
    const y1 = x*x;
    const y2 = x*x*x;
    return {y1, y2};
}
function oldFunction(x, callback) {
    util.callbackify(gMulti)(x, (err, {y1, y2}) => {
        if (ERR(err, callback)) return;
        callback(null, y1*y2);
    });
}
```
## Stack traces with callback-style functions
• Use the async-stacktrace library for every error handler. That is, the top of every file should have ERR = require('async-stacktrace'); and wherever you would normally write if (err) return callback(err); you instead write if (ERR(err, callback)) return;. This does exactly the same thing, except that it modifies the err object's stack trace to include the current filename/linenumber, which greatly aids debugging. For example:
```javascript
// Don't do this:
function foo(p, callback) {
    bar(q, function(err, result) {
        if (err) return callback(err);
        callback(null, result);
    });
}

// Do this:
ERR = require('async-stacktrace'); // at top of file
function foo(p, callback) {
    bar(q, function(err, result) {
        if (ERR(err, callback)) return; // this is the change
        callback(null, result);
    });
}
```
• Don't pass callback functions directly through to children, but instead capture the error with the async-stacktrace library and pass it up the stack explicitly. This allows a complete stack trace to be printed on error. That is:
```javascript
// Don't do this:
function foo(p, callback) {
    bar(q, callback);
}

// Do this:
function foo(p, callback) {
    bar(q, function(err, result) {
        if (ERR(err, callback)) return;
        callback(null, result);
    });
}
```
• Note that the async-stacktrace library ERR function will throw an exception if not provided with a callback, so in cases where there is no callback (e.g., in cron/index.js) we should call it with ERR(err, function() {}).
• If we are in a function that does not have an active callback (perhaps we already called it) then we should log errors with the following pattern. Note that the first string argument to logger.error() is mandatory. Failure to provide a string argument will result in error: undefined being logged to the console.
```javascript
function foo(p) {
    bar(p, function(err, result) {
        if (ERR(err, e => logger.error('Error in bar()', e))) return;
        // ...
    });
}
```
• Don't call a callback function inside a try block, especially if there is also a callback call in the catch handler. Otherwise exceptions thrown much later will show up incorrectly as a double-callback or just in the wrong place. For example:
```javascript
// Don't do this:
function foo(p, callback) {
    try {
        let result = 3;
        callback(null, result); // this could throw an error from upstream code in the callback
    } catch (err) {
        callback(err);
    }
}

// Do this:
function foo(p, callback) {
    let result;
    try {
        result = 3;
    } catch (err) {
        callback(err);
        return;
    }
    callback(null, result);
}
```
## Security model
• We distinguish between authentication and authorization. Authentication occurs as the first stage in server response and the authenticated user data is stored as res.locals.authn_user.
• The authentication flow is:
1. We first redirect to a remote authentication service (either Shibboleth or Google OAuth2 servers). For Shibboleth this happens by the “Login to PL” button linking to /pl/shibcallback for which Apache handles the Shibboleth redirections. For Google the “Login to PL” button links to /pl/auth2login which sets up the authentication data and redirects to Google.
2. The remote authentication service redirects back to /pl/shibcallback (for Shibboleth) or /pl/auth2callback (for Google). These endpoints confirm authentication, create the user in the users table if necessary, set a signed pl_authn cookie in the browser with the authenticated user_id, and then redirect to the main PL homepage.
3. Every other page authenticates using the signed browser pl_authn cookie. This is read by middlewares/authn.js which checks the signature and then loads the user data from the DB using the user_id, storing it as res.locals.authn_user.
• Similar to unix, we distinguish between the real and effective user. The real user is stored as res.locals.authn_user and is the user that authenticated. The effective user is stored as res.locals.user. Only users with role = TA or higher can set an effective user that is different from their real user. Moreover, users with role = TA or higher can also set an effective role and mode that is different to the real values.
• Authorization occurs at multiple levels:
• The course_instance checks authorization based on the authn_user.
• The course_instance authorization is checked against the effective user.
• The assessment checks authorization based on the effective user, role, mode, and date.
• All state-modifying requests must (normally) be POST and all associated data must be in the body. GET requests may use query parameters for viewing options only.
## State-modifying POST requests
• Use the Post/Redirect/Get pattern for all state modification. This means that the initial GET should render the page with a <form> that has no action set, so it will submit back to the current page. This should be handled by a POST handler that performs the state modification and then issues a redirect back to the same page as a GET:
```javascript
router.post('/', function(req, res, next) {
    if (req.body.__action == 'enroll') {
        var params = {
            course_instance_id: req.body.course_instance_id,
            user_id: res.locals.authn_user.id,
        };
        sqldb.queryOneRow(sql.enroll, params, function(err, result) {
            if (ERR(err, next)) return;
            res.redirect(req.originalUrl);
        });
    } else {
        return next(error.make(400, 'unknown __action', {body: req.body, locals: res.locals}));
    }
});
```
```html
<form name="enroll-form" method="POST">
    <input type="hidden" name="__action" value="enroll">
    <input type="hidden" name="__csrf_token" value="<%= __csrf_token %>">
    <input type="hidden" name="course_instance_id" value="56">
    <button type="submit" class="btn btn-info">
        Enroll in course instance 56
    </button>
</form>
```
• The res.locals.__csrf_token variable is set and checked by early-stage middleware, so no explicit action is needed on each page.
## Logging errors
• We use Winston for logging to the console and to files. To use this, require lib/logger and call logger.info(), logger.error(), etc.
• To show a message on the console, use logger.info().
• To log just to the log files, but not to the console, use logger.verbose().
• All logger functions have a mandatory first argument that is a string, and an optional second argument that is an object containing useful information. It is important to always provide a string as the first argument.
## Coding style
• ESLint is used to enforce a consistent coding style throughout the codebase. We use the default rule set, with the following additional rules enforced:
• Use 4 spaces for indents
• Always terminate lines with a semicolon
• For callbacks with standard function signatures (e.g. Express route handlers), unused arguments should be included but prefaced with an underscore. For instance:
```javascript
app.use(/^\/?\$/, function(req, res, _next) {res.redirect('/pl');});
```
• To lint the code, use make lint. This is also run by the CI tests.
## Question-rendering control flow
• The core files involved in question rendering are lib/question.js, lib/question.sql, and pages/partials/question.ejs.
• The above files are all called/included by each of the top-level pages that needs to render a question (e.g., pages/instructorQuestionPreview, pages/studentInstanceQuestionExam, etc). Unfortunately the control-flow is complicated because we need to call lib/question.js during page data load, store the data it generates, and then later include the pages/partials/question.ejs template to actually render this data.
• For example, the exact control-flow for pages/instructorQuestion is:
1. The top-level page pages/instructorQuestion/instructorQuestion.js code calls lib/question.getAndRenderVariant().
2. lib/question.getAndRenderVariant() inserts data into res.locals for later use by pages/partials/question.ejs.
3. The top-level page code renders the top-level template pages/instructorQuestion/instructorQuestion.ejs, which then includes pages/partials/question.ejs.
4. pages/partials/question.ejs renders the data that was earlier generated by lib/question.js.
## Question open status
• There are three levels at which “open” status is tracked, as follows. If open = false for any object then it will block the creation of new objects below it. For example, to create a new submission the corresponding variant, instance_question, and assessment_instance must all be open.
| Variable | Allow new instance_questions | Allow new variants | Allow new submissions |
| --- | --- | --- | --- |
| assessment_instance.open | | | |
| instance_question.open | | | |
| variant.open | | | |
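The gating rule above can be sketched as a simple predicate (illustrative only; the function name is made up, but the field names follow the table):

```javascript
// A new submission may only be created if every object above it is still open:
// the variant, its instance_question, and the assessment_instance.
function canCreateSubmission(assessmentInstance, instanceQuestion, variant) {
    return assessmentInstance.open && instanceQuestion.open && variant.open;
}
```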
## v3 question code calling
• v3 questions run course code in subprocesses. A separate child process is started for every page request (actually, for every call to a top-level freeform.js function), which adds robustness but causes some slowdown. As of 2017-08-16 the calling sequence is as follows.
• We get a page request that’s handled in pages/studentInstanceQuestionHomework or similar.
• This calls a function in lib/question.js (possibly via lib/assessment.js) which starts a DB transaction and creates a DB client object.
• This calls a function in question-servers/freeform.js. Functions in freeform do not interact with the DB.
• The freeform function creates a PythonCaller object from lib/code-caller.js.
• The PythonCaller object starts a new python process and runs lib/python-caller-trampoline.py.
• python-caller-trampoline listens for function calls specified by a JSON write on STDIN, loads the specified python module and calls the specified function inside it, returning the output as JSON on file descriptor 3.
• The PythonCaller object unpacks the return value, captures STDIN+STDOUT, and returns all this back up the chain.
• Once the freeform functions have finished making all the python calls they want, they call PythonCaller.done() that terminates the python process.
• The question.js function that we are inside finishes, and ends the DB transaction, committing the changes.
• Page render completes and the response is sent, finishing this request cycle.
## Errors in question handling
• We distinguish between two different types of student errors:
1. The answer might not be gradable (submission.gradable = false). This could be due to a missing answer, an invalid format (e.g., entering a string in a numeric input), or an answer that doesn't pass some basic check (e.g., a code submission that didn't compile). This can be discovered during either the parsing or grading phases. In such a case the submission.format_errors object should store information on what was wrong to allow the student to correct their answer. A submission with gradable = false will not cause any updating of points for the question. That is, it acts like a saved-but-not-graded submission, in that it is recorded but has no impact on the question. If gradable = false then the score and feedback will not be displayed to the student.
2. The answer might be gradable but incorrect. In this case submission.gradable = true but submission.score = 0 (or less than 1 for a partial score). If desired, the submission.feedback object can be set to give information to the student on what was wrong with their answer. This is not necessary, however. If submission.feedback is set then it will be shown to the student along with their submission.score as soon as the question is graded.
• There are three levels of errors that can occur during the creation, answering, and grading of a question:
| Error level | Caused | Stored | Reported | Effect |
| --- | --- | --- | --- | --- |
| System errors | Internal PrairieLearn errors | On-disk logs | Error page | Operation is blocked. Data is not saved to the database. |
| Question errors | Errors in question code | issues table | Issue panels on the question page | variant.broken or submission.broken set to true. Operation completes, but future operations are blocked. |
| Student errors | Invalid data submitted by the student (unparsable or ungradable) | submission.gradable set to false and details are stored in submission.format_errors | Inside the rendered submission panel | The submission is not assigned a score and no further action is taken (e.g., no points are changed for the instance question). The student can resubmit to correct the error. |
• The important variables involved in tracking question errors are:
| Variable | Error level | Description |
| --- | --- | --- |
| variant.broken | Question error | Set to true if there were question code errors in generating the variant. Such a variant will not have render() functions called, but will instead be displayed as "This question is broken". |
| submission.broken | Question error | Set to true if there were question code errors in parsing or grading the variant. After submission.broken is true, no further actions will be taken with the submission. |
| issues table | Question error | Rows are inserted to record the details of the errors that caused variant.broken or submission.broken to be set to true. |
| submission.gradable | Student error | Whether this submission can be given a score. Set to false if format errors in the submitted_answer were encountered during either parsing or grading. |
| submission.format_errors | Student error | Details on any errors during parsing or grading. Should be set to something meaningful if gradable = false to explain what was wrong with the submitted answer. |
| submission.graded_at | None | NULL if grading has not yet occurred, otherwise a timestamp. |
| submission.score | None | Final score for the submission. Only used if gradable = true and graded_at is not NULL. |
| submission.feedback | None | Feedback generated during grading. Only used if gradable = true and graded_at is not NULL. |
• Note that submission.format_errors stores information about student errors, while the issues table stores information about question code errors.
• The question flow is shown in the diagram below (also as a PDF image).
https://zbmath.org/?q=an:1017.35029

## Uniqueness of continuous solutions for BV vector fields. (English) Zbl 1017.35029
The authors study the Cauchy problem for a transport equation $$Xu=cu$$ associated with a vector field $$X=\partial_t+\sum_{j=1}^d a_j(t,x)\partial_j$$, in which the coefficients $$a_j$$ are assumed to be functions of bounded variation. Under some additional assumptions on the growth of the coefficients, the authors prove the uniqueness of continuous solutions.
### MSC:
- 35F10 Initial value problems for linear first-order PDEs
- 26A45 Functions of bounded variation, generalizations
### Keywords:
transport equation; BV vector fields; Cauchy problem
http://mathoverflow.net/questions/12095/f-q-structures-on-schemes | # F_q-structures on schemes
Let $k|\mathbb{F}_q$ be a field extension. An $\mathbb{F}_q$-structure on a $k$-algebra $A$ is an $\mathbb{F}_q$-subalgebra $A _0$ of $A$ such that $A _0 \otimes _{\mathbb{F}_q} k \cong A$ via the canonical morphism $a \otimes \lambda \mapsto a \lambda$.
Now, my question is if this notion can be properly globalized to $k$-schemes? I saw a definition like: an $\mathbb{F}_q$-structure on a $k$-scheme $X$ is an $\mathbb{F}_q$-scheme $X _0$ such that $X \cong X _0 \times _{\mathrm{Spec}(\mathbb{F}_q)} \mathrm{Spec}(k)$ as $k$-schemes (see for example "Representations of finite groups of Lie type" by Digne and Michel, where $\cong$ is even replaced by $=$). But my problem is that here the particular choice of the canonical morphism as above does not appear so that on affines this definition is not the same as above. Is this a problem?
(The reason why I care about this is that I want to defined the (geometric) Frobenius on a $k$-Scheme with $\mathbb{F}_q$-structure as the "base change" of the canonical Frobenius (raising to the $q$-th power) on the $\mathbb{F}_q$-structure $X _0$.)
-
I think the notion you cite from Digne and Michel is not a good one, as you will not get a well-defined Frobenius. I suggest replacing $X_0$ by a pair $(X_0,p)$, where $p:X\to X_0$ is a morphism of $\mathbb{F}_q$-schemes such that $X$ is a product $X_0\times_{\mathbb{F}_q} \mathrm{Spec}(k)$ via the structure morphism $X\to \mathrm{Spec}(k)$ and via $p$. The Frobenius on $X$ will then be the unique map $F:X\to X$ of $k$-schemes such that $p\circ F= F_0\circ p$, where $F_0:X_0\to X_0$ denotes the canonical Frobenius.
To compare this to the definition for $k$-algebras, note that we could define two $\mathbb{F}_q$-structures $(X_0,p)$ and $(X_0',p')$ to be equivalent if there is an isomorphism $f:X_0\to X_0'$ such that $fp=p'$. Two equivalent $\mathbb{F}_q$-structures on $X$ will then give rise to the same(!) geometric Frobenius. In the case of a $k$-algebra $A$ every $\mathbb{F}_q$-structure will have a unique representative of the form $(\mathrm{Spec}(A_0),p)$, where $A_0\subset A$ and $p$ is induced by this inclusion.
I assume that's why Digne and Michel used '$=$' in place of '$\simeq$'. This is a little sloppy, but I would read the equality sign as 'we are given an isomorphism between $X$ and $X_0\times \mathrm{Spec}(k)$ that agrees with all structures', in this case, an isomorphism over $k$. 'We are given' means that the isomorphism is part of the data. – t3suji Jan 17 '10 at 14:36

I agree with the answer: given a variety $X$ over a field $K$ and a subfield $k$ of $K$, a model of $X$ over $k$ (or a $k$-structure on $X$) is a variety $X_0$ over $k$ together with an isomorphism $X_0\otimes_k K\to X$. Digne and Michel are being sloppy. – JS Milne Jan 17 '10 at 19:46

Okay, thanks! Is it correct that your definition is then more general in the case of affine schemes than the one I have given above (because I fixed a particular isomorphism)? I am not sure if this particular isomorphism is needed somewhere. As far as I can see, Digne and Michel just concentrate on affine and projective varieties... – user717 Jan 17 '10 at 20:59

In the second paragraph I tried to explain the relationship between the definition for schemes I have given and the one for algebras you mentioned in the beginning. In both cases we fix one particular isomorphism (that's the difference from the one in Digne and Michel), but the one for schemes gives a slightly more general notion when applied to the affine case, in that we do not require $A_0$ to be a subalgebra of $A$ but allow for an arbitrary injective morphism $A_0\to A$. That doesn't really change anything, and if we pass to isomorphism classes as above we get the "same" notion. – Philipp Hartwig Jan 17 '10 at 21:10
https://www.nature.com/articles/s41467-022-35189-2

Introduction
Decades of empirical and theoretical research have shown that a greater number of species enhances the productivity of an ecosystem, in the short-term, and can sustain higher levels of productivity in the long-term1,2,3,4,5,6,7. However, the effects of diversity on ecosystem functioning can change over years, whereby the positive effect of species richness on ecosystem functioning often becomes stronger with time in experimental communities following initial establishment8,9,10,11,12. Consequently, there has been a growing interest as to why biodiversity–ecosystem functioning relationships change through time and the underlying mechanisms by which species richness maintains a more stable ecosystem functioning13,14,15. There are few long-term studies able to experimentally address such long-term temporal patterns. The experiments that exist have demonstrated that species richness–ecosystem functioning relationships can strengthen over the years because of the various demographic and evolutionary processes that take place; such as species turnover and local selection to avoid competition that can thus lead to complementary resource use4,8,11,12,14,16,17,18. Regardless of the underlying processes, these studies all illustrate the temporal importance of biodiversity for sustaining ecosystem functioning. This can be attributed to an increasing complementarity effect (CE)19,20,21 among species through time, whereby species are, on average able to maintain, or even increase, their productivity over many years in mixtures better than in monoculture (e.g., by resource partitioning, facilitation, or biotic interactions22). By maintaining greater temporal productivity, more diverse communities also maintain more stable productivity. Thus, the temporally increasing biodiversity–productivity relationships should lead to increasing biodiversity–stability relationships and its underlying mechanisms over time, which has not yet been tested in biodiversity experiments.
The ability of a community to maintain temporally stable productivity across multiple years is captured by the inverse of the coefficient of variation (CV−1: the temporal standard deviation relative to the mean) of community productivity3. Past long-term grassland biodiversity experiments have shown that greater species richness can maintain more stable productivity due to greater insurance that some species will be able to maintain productivity during times when others cannot, such as during a drought or other disturbances, referred to as portfolio or insurance effect1,23. Thus, plant community productivity is stabilized by species that are temporally asynchronous in their performance as well as by the presence of particularly productive species that exhibit stable population dynamics through time24,25,26. Furthermore, high community productivity and overyielding in species mixtures (i.e., mixtures yielding more than the average of their species grown in monocultures) can stabilize community productivity26,27,28,29.
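To make the metric concrete, here is an illustrative sketch (not from the study; the yearly values are invented) computing temporal stability as CV−1, i.e., the temporal mean of productivity divided by its standard deviation:

```javascript
// Temporal stability CV^-1 = mean / standard deviation of yearly productivity.
// Larger values indicate productivity that varies less, relative to its mean, across years.
function temporalStability(yearlyProductivity) {
    const n = yearlyProductivity.length;
    const mean = yearlyProductivity.reduce((s, x) => s + x, 0) / n;
    const variance = yearlyProductivity.reduce((s, x) => s + (x - mean) ** 2, 0) / n;
    return mean / Math.sqrt(variance);
}

// Two hypothetical communities with the same mean productivity:
const stableCommunity = [400, 410, 390, 405, 395];   // small year-to-year variation
const variableCommunity = [400, 600, 200, 500, 300]; // large year-to-year variation
```

Under this definition the first community is the more stable one, matching the intuition that diverse communities with asynchronous species dynamics dampen year-to-year fluctuations.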
While it has been documented that species richness–productivity relationships strengthen over time in biodiversity experiments8,10,11, it has not been assessed whether species richness–stability (of productivity) relationships do so as well, nor, if they do, what the contribution of the three mechanisms mentioned above—asynchrony, population stability, and overyielding—would be. Furthermore, linkages between these mechanisms stabilizing community productivity and the temporal dynamics of biodiversity effects, in particular the complementarity effect (CE) mentioned above, have been little explored20,21. This is largely because there are few long-term studies that can address such questions.
Here, we assessed the change in species richness–productivity and richness–stability relationships over 17 years in a long-term grassland biodiversity experiment, the Jena Experiment13. We hypothesize that the species richness–productivity relationship strengthens over time due to increasing CEs, but also that these increasing biodiversity effects and CEs at the same time can strengthen the species richness–stability relationship over time. For instance, the species richness–productivity relationship may strengthen due to declining monoculture productivity and thus increasing CEs. The resulting maintenance of relatively greater productivity at higher diversity levels may also temporally increase the positive effect of diversity on stability. In addition, a strengthening of the CE through time could also indicate an increase in the temporal niche segregation among species to avoid competition, and thus lead to more stable population dynamics of the species, again contributing to increased stability. Finally, increasing species asynchrony over time could also reflect yearly varying selection effects (SEs, where more diverse communities have a greater probability to contain species that dominate and have a strong effect on ecosystem functions19). These annual SEs could scale up to an interannual CE when different species dominate the community among years30, thus, over time, increasingly stabilizing productivity in diverse communities through increasing asynchrony20,21. Here we test these hypotheses about temporally changing species richness–productivity, –complementarity, –stability, and –asynchrony relationships using data from the Jena Experiment13. Results of our study show that species richness increasingly supported higher productivity over 17 years, due to increasing CEs among species in more species-rich communities. Consequently, greater species richness-driven CEs had an increasingly positive effect on stabilizing the community productivity over time. 
Further, we found that only after the first decade of the experiment did the CE also stabilize the community productivity through a positive effect on species asynchrony. Together these results show that the underlying mechanisms of community stability, namely species asynchrony and population stability, and overyielding-related CEs, are also temporally dynamic.
Results
Temporal change in biodiversity–productivity relationships
Over the years, the aboveground net primary productivity (ANPP) of all communities generally declined (Fig. 1a, d and Fig. S1). Greater species richness consistently resulted in greater ANPP. The positive effect of richness on ANPP increased significantly over the 17-year period (log-richness by linear-year interaction: F1, 329.9 = 8.34, P = 0.004, Table S3), reflected in an increasingly steeper richness–productivity slope (see Table S4). Similarly, the slope of the species richness–relative yield (RY, ANPP divided by the mean ANPP of monocultures in that year) relationship became increasingly steeper and less saturating over the years (log-richness by linear-year interaction: F1, 331.1 = 44.29, P < 0.001, Fig. 1b, c and Table S3). Productivity declines relative to the first year were steepest for monocultures and low-diversity mixtures and flatter for high-diversity mixtures (year by richness-as-factor interaction: F4, 244.3 = 13.79, P < 0.001, Table S5). This revealed that the effect of species richness on productivity strengthened over the 17-year period because of a greater decline in monocultures relative to more diverse plant communities, with the 16-species mixtures still declining, but declining the least (Fig. 1d).
Temporal change in biodiversity effects
The relative yield total (RYT), which is the sum of species productivities in mixtures relative to their monocultures, increased with richness (F1, 57.8 = 50.49, P < 0.001), and this positive effect of richness on the RYT significantly increased over the 17 years (log-richness by linear-year interaction: F1, 495.2 = 7.33, P = 0.007, Fig. 2a and Table S6). Species richness also increased the net biodiversity effect (NE, being the difference in a mixture’s ANPP and the average monoculture ANPP: F1, 57.3 = 40.9, P < 0.001). The NE varied among years (F15, 872.5 = 10.17, P < 0.001), but did not show a significant species richness by linear-year interaction (F1, 290.1 = 10.17, P = 0.318, Table S6). However, richness–NE slopes showed a declining trend over the years when the slopes were regressed against the experimental year due to the declining overall productivity over the years (Fig. 2b). Greater species richness increased the CE (F1, 57.4 = 39.87, P < 0.001, Fig. 2c and Table S6) and decreased the SE (F1, 61.3 = 22.04, P < 0.001, Fig. 2d and Table S6). The CE and SE did not vary significantly among years (factor-year effect: F14, 746.6 = 1.42, P = 0.140 and F14, 677.4 = 0.92, P = 0.540, respectively, Table S6), and their relationships with richness did not significantly increase or decrease over the years (log-richness by linear-year interaction: F1, 417.7 = 0.01, P = 0.932 and F1, 397.7 < 0.01, P = 0.975, respectively, Fig. 2c, d and Table S6). Because biodiversity effects are measured on the scale of ANPP (g/m2), which declined across the years, accounting for the overall ANPP decline in the field over time by dividing the richness–biodiversity effects slopes by the average ANPP of all plots in each year revealed that on this relative scale the richness–NE and richness–CE relationships did significantly increase and the richness–SE relationships did significantly decrease over the 17-year period (Fig. 2e). 
To link CE and SE with species asynchrony, population stability, and community stability, we calculated these indices over sequential 5-year rolling windows (see Methods; results using 3-year rolling windows were very similar and are presented in the Supplementary Information). This also allowed us to see whether the annual SEs scaled up to a 5-year interannual CE29. The 5-year CE (see Methods) was significantly correlated with the average annual CE over the same 5 years (Spearman’s rho = 0.655, P < 0.001) and the 5-year SE was significantly correlated with the average annual SE (Spearman’s rho = 0.380, P < 0.001), indicating that the 5-year CE and SE are reflective of the annual CE and SE. Contrary to expectation, however, the 5-year CE and the average annual SE over the same 5 years were negatively correlated (Spearman’s rho = −0.261, P < 0.001).
Temporal change in diversity–stability relationships and their components
Pooled over 17 years, species richness increased community stability and species asynchrony but decreased population stability (Fig. S2). The stabilizing effect of richness increased across the 13 five-year rolling windows (log-richness by linear-rolling window interaction F1, 888.0 = 14.23, P < 0.001, Fig. 3a and Table S7), but this increasing effect seemed to taper off after the first decade (Fig. 3b). By partitioning the relative effects of species richness on reducing the temporal standard deviation ($b_{\mathrm{SD}}$) and increasing the temporal mean productivity ($b_{\mathrm{mean}}$), we found that the latter significantly increased over time (Fig. 3c). Conversely, the richness–$b_{\mathrm{SD}}$ relationship oscillated through time and did not show any significant directional trend (Fig. 3c). Thus, greater species richness had an increasing effect on stabilizing the community ANPP because of the increasingly positive effect of richness on maintaining a greater 5-year mean ANPP through time compared with the respective monocultures.
Population stability (CVpop−1) had a negative relationship with species richness (F1, 74.0 = 4.97, P = 0.029), but this effect became less negative across the 5-year rolling windows toward richness having little effect on population stability (log-richness by linear-rolling window interaction F1, 888.0 = 17.43, P < 0.001, Fig. 3d and Table S7). The slope of the richness–asynchrony relationship was positive and did not decline over the five-year rolling windows (F11, 888.0 = 2.11, P = 0.017, Fig. 3d and Table S7). Thus, the temporally increasing effect of species richness on community stability can be attributed to the waning negative richness–population stability relationship while the richness–asynchrony relationship continued to exert its positive effect (Fig. 3d).
Linking community stability to asynchrony, population stability, and biodiversity effects
Results of the multigroup structural equation model revealed that the underpinning mechanisms behind the impact of species richness on community stability varied depending on the 5-year window (Fig. 4 and Table S10). Specifically, the CE had a strong significant positive effect on asynchrony in the last 5-year window (Fig. 4c), which differed significantly from the first decade of the experiment, where CE had no significant relationship with asynchrony (Fig. 4a, b). The relationship between the SE and asynchrony also differed among the three independent 5-year windows, where the SE only had a significant positive effect on asynchrony during the first and last 5-year windows (Fig. 4a, c), which differed significantly from the 2009–2013 5-year window (Fig. 4b). Consequently, richness had the strongest positive effect on asynchrony through increasing the CE during the last 5-year window (Fig. 4c), and a lesser indirect negative effect on asynchrony through the SE (Fig. 4d). During this 2015–2019 5-year window the direct effect of richness and the indirect effect of richness through the CE on asynchrony were similarly positive and together drove the positive effect of richness on asynchrony (Fig. 4d). Thus, only after the first decade of the experiment did the effect of richness through the CE start to play a prominent role in driving species asynchrony (also see Fig. S5 for 3-year windows).
The CE had a significant negative effect on the population stability in all three non-overlapping 5-year windows, with the strongest effect occurring during 2009–2013 (Fig. 4a, b). The SE had a significant negative relationship with the population stability during the 2009–2013 and 2015–2019 windows (Fig. 4b, c), which differed from the first 5 years where the SE had no effect on population stability (Fig. 4a). The effect of species richness on the population stability during the first 5-year window (2003–2007) was largely driven by its direct effect (Fig. 4e). During the 2009–2013 window both the negative effects of richness directly, and indirectly through the CE, drove the negative effect of richness on the population stability (Fig. 4e). While richness had a direct negative effect on the population stability during the final 2015–2019 window, this was countered by the positive effect of richness on increasing the ANPP.
Overall, the effect of species richness on community stability increased through time because of the increasing CE in more diverse communities that maintained a greater ANPP (Fig. 5a). However, the effect of richness on community stability through the effect of CE on population stability declined through time (Fig. 5b) and no significant temporal trend in the effect of richness on asynchrony through the CE could be detected (Fig. 5c). The effects of richness on community stability through the SE were also significantly negative through its effect on the ANPP and positive through its effect on the population stability, but changes were not as strong in comparison with the changes in the effects of richness through the CE (Fig. 5a, b).
Discussion
A growing number of studies have observed that the positive effect of species richness on community productivity (ANPP) can strengthen over time, which can be attributed to a temporally increasing overyielding in more species-rich communities4,8,10,11,12,14,16. Here we further show that this strengthening of the richness–productivity relationship through time results in stronger richness–stability (of community productivity) relationships due to two main mechanisms that also exhibit temporal changes over nearly two decades. First, species richness results in greater community stability over time because of the temporally increasing effect of species richness on productivity through the strengthening effect of richness on the complementarity effect (CE) within 5-year windows. Second, the destabilizing effect of species richness on population stability weakened, such that after a decade species richness had no effect on the 5-year population stability. Thus, the increasingly positive effect of species richness on community stability became mainly driven by the effects of species richness on species asynchrony within the 5-year windows. Finally, these two mechanisms that lead to greater stability in more diverse mixtures over nearly two decades are not mutually exclusive, because toward the final 5-year window (2015–2019) we found that greater species richness not only influenced asynchrony directly but also indirectly through increasing the 5-year CE. These results show that the underlying mechanisms by which species diversity stabilizes ecosystem functioning can themselves change as the communities develop over time.
The temporally strengthening effect of richness on the stability of community productivity occurred via the strengthening effect of richness on mean productivity, which in turn was due to a strengthening richness–CE relationship. There are several potential mechanisms underlying an increase in the richness–CE relationship through time that maintains greater and more stable productivity in species-rich communities. For instance, changes in diversity–productivity relationships through time have often been thought to be a consequence of deteriorating monoculture performance compared with relatively stable or increasing performance of more species-rich plant communities10,31,32,33. Here we demonstrate that while monoculture productivity declined most rapidly, the rate of declining productivity lessened with each successively higher species-richness level (see Fig. 1d). Therefore, in the Jena Experiment, it is the increasing relative decline in productivity with decreasing species richness that strengthened the richness–CE relationship through time, and not solely the deterioration of monocultures. It has been hypothesized that a temporal decline in productivity over many years in less species-rich communities could be due to negative plant-enemy feedbacks (i.e., the accumulation of plant species-specific pathogens and herbivores that reduce net productivity)32,34,35,36. Conversely, at the other end of the diversity spectrum in more species-rich communities, greater CE may result from character displacement, where a shift in trait values among co-occurring species occurs over time to avoid resource competition, thus leading to greater complementarity18,37,38.
An increasing contribution of the CE to the richness–productivity relationship through time in grassland systems may also be related to the fact that in grassland biodiversity experiments, where local management involves the removal of harvested aboveground biomass without fertilizer addition, soil fertility and plant productivity decrease over time39. An increase in the CE as a mechanism behind sustained or increasing diversity effects, therefore, may be partly driven by this temporal reduction in soil fertility in less diverse communities40. For instance, as resources are removed from the system over time with the continuous harvesting of aboveground biomass, increasing CEs could be due to the assimilation of atmospheric N2 by legumes which may facilitate the N uptake and growth of neighboring non-legume species over time39,41,42. Moreover, more diverse plant communities seem to support more efficient soil microbial communities43 that maintain soil fertility via soil carbon storage44,45,46,47 and the reduced leaching of nutrients48 and thus closed nutrient cycles13. While there are several potential mechanisms by which more diverse communities can maintain relatively greater productivity through temporally increasing CE, where species are, on average able to maintain greater productivity through time in mixtures than if grown in monocultures independently of other species, it is likely that all of these above-mentioned mechanisms are simultaneously at play to drive the increasing importance of diversity for maintaining more stable ecosystem productivity over nearly two decades.
A recent meta-analysis across different terrestrial, aquatic, experimental and observational study systems found that diversity consistently increases stability in ecosystem functioning through increasing species asynchrony, whereas effects via population stability can be positive, neutral, or negative49. Coinciding with this observation, we found that although the effect of richness on asynchrony oscillated significantly over the 13 five-year rolling windows, richness consistently had a strong positive effect on species asynchrony with no overall increasing or decreasing trend through time. Conversely, however, while species richness reduced the population stability during the first decade, as has been observed in other experimental biodiversity–stability studies in terrestrial ecosystems27,34,50, this negative effect of richness on population stability weakened in the second decade toward richness having little to no effect on population stability. Thus, the community stability became increasingly driven by asynchrony and less by population stability in the second decade of the experiment.
Population stability is the inverse of the summed temporal standard deviations in species productivities relative to the net productivity of the community25. In our case, the initial negative effect of richness on population stability was due to richness causing a greater increase in the temporal standard deviation relative to its effect on productivity. However, the richness–population stability relationship weakened toward neutral over time as the richness–productivity relationship became increasingly positive. This means that, eventually, the positive effect of richness on ANPP balanced out the negative effect of richness on increasing the temporal variation in species productivity. Taken together, this indicates that species richness had a generalizable effect on increasing asynchrony within any given 5-year window, while the increasingly positive effect of richness on productivity, via increasing complementarity (CE) among species within a 5-year window, countered any destabilizing effect on population stability. While in observational diversity–stability studies the effect of richness on population stability is generally positive, it is generally negative in experimental studies49. This suggests that our experimental plant communities are trending toward the richness–population stability relationship of natural systems as the plant species establish and respond to one another and to their local environment for over a decade. However, whether this effect of richness on population stability will eventually become significantly positive will require additional years of observation, highlighting the value of the few existing long-term studies.
Importantly, a notable finding of our study is that only after the first decade did the 5-year CE begin relating to asynchrony. This implies that there is a type of temporal insurance effect of diversity that had developed after the first decade, where interannual complementarity drives the interannual asynchrony in species productivity30. Therefore, only after the first decade of the experiment did species in more diverse mixtures in our study become increasingly complementary among years in their productivity, resulting in a greater temporal asynchrony over a 5-year period. This points to the importance of the complementary dynamics among species across years that can result in a portfolio effect resulting in greater asynchrony4,51. These complementary temporal dynamics among species are a mechanism that may take many years to become apparent. There could be several drivers for this, one being year-to-year environmental climatic variations. For instance, the experimental site experienced some exceptionally dry (2003, 2011, 2015, and 2018) and wet years (2007, 2009, and 2010), as well as a major flooding event in 2013, where more diverse communities showed increased resilience post flooding37,52,53. However, it has been shown elsewhere that environmental variations seem to play a small role in driving species asynchrony and community stability49,54.
In addition to annual climatic variations in our system, it is likely that rapid evolutionary changes occurred through interspecific competition, and plant–soil interactions, leading to natural selection processes55. For example, we have previously shown that these plant communities result in species complementarity because of increased character displacement to avoid competition when compared with the same plant community composition that has had no co-occurrence history18,56,57. Furthermore, it has also been shown that after over a decade, these plant communities are more resilient to environmental perturbation, such as a major flooding event37. This implies that more diverse plant communities are increasingly more stable over time as they undergo co-selection and adapt to their local environment. Indeed, after 10–15 years, most of the plant species have likely undergone at least one or two-generational turnover events, since the average maximum age of these plants is around 4 years58. This also makes sense in light that previous studies have shown that greater phylogenetic and functional differences among species can lead to greater ecosystem stability26,42,59,60,61, thus inherently also indicating there is an evolutionary basis for the temporally developing diversity–ecosystem stability relationship.
In one of the longest-running biodiversity experiments (the Jena Experiment) after nearly two decades, we found that greater species richness increasingly maintained greater productivity and greater temporal stability of productivity through increasing species complementarity, providing evidence that plant diversity can maintain greater and more stable productivity and that these effects increase over time4,5,27,50,62,63,64. Furthermore, we could show that the underlying mechanisms of community stability, namely species asynchrony and population stability, and overyielding-related complementarity, were also temporally dynamic. Over the 17 years of the experiment, asynchrony and complementarity underpinned diversity effects on stability, whereas population stability played an increasingly less important role. As the communities developed over time, the influence of these mechanisms may have changed due to demographic changes in species populations, including natural selection processes, changes in abiotic and biotic environmental conditions, including resource depletion and build-up of enemy populations and larger-scale perturbations such as a flooding event. It could well be that these temporal changes lead to experimental communities that function more like natural communities that have undergone such temporal development over even longer timespans. Considering that biodiversity effects on stabilizing ecosystem functioning can take well over a decade to develop, it will be important to further assess how asynchrony, population stability, and overyielding-related complementarity continue to support ecosystem stability into the future as the climate and species–species and species–environment interactions continue to change.
Methods
Experimental design and data collection
The experiment was set up in 2002 in Jena, Germany, at a site located near the Saale River (50°55′ N, 11°35′ E; 130 m above sea level). The experimental design and field site details are described elsewhere13 (also see www.the-jena-experiment.de). In brief, the site had previously been used as arable land for more than four decades; in 2001, the year before the experimental setup, the field was tilled every 2 months and treated with glyphosate in July 2001. A total of 60 plant species typical of local grasslands were selected, including 12 legumes, 16 grasses, 20 tall herbs, and 12 small herbs (Table S1). The experiment consists of 74 large main plots (originally 20 × 20 m in size, reduced to 6 × 6 m in 2010) set up in four blocks at increasing distances from the Saale River. Plots were sown along a diversity gradient of 1, 2, 4, 8, or 16 plant species crossed with a gradient of functional-group richness ranging from 1 to 4, i.e., including plots of single functional groups ranging in species richness from 1 to 16 (1 to 8 for legumes and small herbs; see Table S2). All species-richness levels had 16 different species compositions as biological replicates, except for the 16-species mixture, which had only 14 different species compositions (no mono-functional-group mixtures of legumes or small herbs could be established at this level). While some species were lost from plots over the years, weeding ensured that a species-richness gradient based on the initially sown richness was maintained (Fig. S2). The plots with different species richness were spread equally across the four blocks. All plant species were also sown as monocultures in plots of 3.5 × 3.5 m (1 × 1 m from 2009 onwards). All plant communities were sown at a density of 1000 germinable seeds per m2, with species in mixtures sown in equal proportions.
Two large monoculture plots were abandoned after some years (Bellis perennis in 2005; Cynosurus cristatus in 2008) because the species were barely present on these plots.
The plant communities were maintained by manual weeding twice per year in early spring (April) and mid-summer (July). From 2010 onward, an additional weeding was done in autumn (late September). In late spring (end of May) and late summer (end of August), standing plant biomass was harvested 3 cm above the soil surface within four randomly positioned 0.5 × 0.2 m quadrats in the large plots and two quadrats of the same size in the small monoculture plots. With the reduction of the size of the plots, the number of quadrats from which biomass was sampled was also reduced to half the number in 2009. At all harvests, except for the summer harvest of 2004, harvested plant material was sorted by species, dried at 70 °C for a minimum of 48 h and weighed by species. In 2004 only the pooled biomass of the sown species was collected in August. After plant material had been collected, the plots were mown to approximately 5 cm above the soil surface at each harvest and the mown plant material was removed. Two biomass harvests per year are the typical management regime of extensively used grasslands in the region. For all following analyses, the biomass data were pooled by year (sum of spring and summer biomass) to assess the aboveground net primary productivity (ANPP) of the communities from 2003 to 2019.
Calculation of biodiversity effects
We additively partitioned annual net biodiversity effects (NEs) into annual complementarity effects (CEs) and selection effects (SEs) following the additive partitioning method19. The additive partitioning is based on the relative yields of the individual plant species in a mixture: $\mathrm{RY}_i = O_i/M_i$, where $O_i$ is the observed productivity of species i in the mixture and $M_i$ is the productivity of the same species in monoculture. We first calculated overyielding as the relative yield total $\mathrm{RYT} = \sum \mathrm{RY}_i$. This is essentially the complementarity effect on a relative scale, since the complementarity effect is the RYT weighted by the average productivity of those species in monoculture: $\mathrm{CE} = (\mathrm{RYT} - 1)\left(\sum M_i/N\right)$, and the selection effect is calculated as $\mathrm{SE} = (N-1)\,\mathrm{cov}(M_i,\,\mathrm{RY}_i - 1/N)$, where N is the number of species in the mixture. The sum of CE and SE equals NE, which is the difference between the observed productivity of the mixture and the average of the respective plant species in monoculture: $\mathrm{NE} = \sum O_i - \sum M_i/N$. However, since $\mathrm{RY}_i$ depends on the performance of the respective species in monoculture, it is not possible to determine the CE and SE when a species is unable to establish as a monoculture (i.e., $M_i$ cannot be 0 in the calculation of $\mathrm{RY}_i$). Therefore, the CE and SE were calculated by excluding species that did not establish in monoculture in either the spring or summer harvests of any specific year17.
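The additive partitioning above can be sketched in a few lines of Python (the published analyses were done in R; the function name and the example values below are hypothetical, chosen only to illustrate the formulas):

```python
import numpy as np

def additive_partition(O, M):
    """Additively partition the net biodiversity effect (NE) into
    complementarity (CE) and selection (SE) effects, following the
    partitioning described above.

    O : observed productivities of each species in the mixture
    M : monoculture productivities of the same species (all > 0)
    """
    O = np.asarray(O, dtype=float)
    M = np.asarray(M, dtype=float)
    N = len(O)
    RY = O / M                       # relative yield of each species
    RYT = RY.sum()                   # relative yield total (overyielding)
    dRY = RY - 1.0 / N               # deviation from expected relative yield
    CE = (RYT - 1.0) * M.mean()      # = (RYT - 1) * (sum(M_i) / N)
    SE = (N - 1) * np.cov(M, dRY)[0, 1]   # sample covariance, as in the text
    NE = O.sum() - M.mean()          # = sum(O_i) - sum(M_i) / N
    return RYT, CE, SE, NE
```

Note that with the sample covariance (divided by N − 1), the factor (N − 1) recovers the usual Loreau–Hector term N · cov, so CE + SE = NE holds exactly.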
Furthermore, extremely small values in the monoculture productivity of a single species can inflate the complementarity effect, and the inclusion of the three most extreme values strongly influenced the ANOVA model outcomes (see Table S7) and skewed the distribution of the residuals. Accordingly, extreme CE and SE outliers (i.e., those caused by an extremely large RYi) were removed if they were more than six times above or below the upper or lower quartile in magnitude65. Across all mixed-species plots and years for which CE and SE could be calculated, the exclusion of extreme outliers removed around 6% of the CE and SE values.
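One way to read the outlier rule is as a Tukey-style fence with a multiplier of six; a minimal Python sketch under that assumption (the exact rule and any thresholds here are our interpretation, not the authors' code):

```python
import numpy as np

def extreme_outliers(x, k=6.0):
    """Flag values lying more than k interquartile ranges beyond the
    upper or lower quartile (one interpretation of the six-fold rule)."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x > q3 + k * iqr) | (x < q1 - k * iqr)
```

Applied to the CE and SE values of all plot-by-year combinations, the flagged entries would then be dropped before fitting the ANOVA models.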
Calculation of community stability and species synchrony
We used a 5-year rolling window, resulting in 13 consecutive 5-year windows (three of them non-overlapping), to assess whether plant species productivity and their temporal asynchrony changed over the 17-year period. For robustness, we also used 3-year rolling windows; results from the 5-year and 3-year windows were very similar (see Table S9 and Figs. S3–S5 for results using 3-year windows). For each 5-year window, we calculated the temporal variation in annual net productivity using the coefficient of variation $\mathrm{CV}_{\mathrm{net}} = \sigma_{\mathrm{net}}/\mu_{\mathrm{net}}$, where $\sigma_{\mathrm{net}}$ is the standard deviation in productivity over 5 years and $\mu_{\mathrm{net}}$ is the 5-year mean. We used the inverse of CVnet (CVnet−1), which is frequently used as a measure of “stability”3. Species synchrony66 was calculated as $\theta = \sqrt{\sigma^2_{\mathrm{net}} / \left(\sum_{i}^{N} \sigma_i\right)^2}$, where $\sigma^2_{\mathrm{net}}$ is the temporal variance in ANPP of a community and $\sum_{i}^{N} \sigma_i$ is the sum of the temporal standard deviations in the ANPPs of the species populations within the community. Because this index of synchrony ranges between 0 and 1, we used $1-\theta$ as the measure of asynchrony.
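The stability and asynchrony indices can be computed directly from a years-by-species productivity matrix; a Python sketch (the analyses in the paper were done in R, and the function name and toy matrix are ours):

```python
import numpy as np

def stability_and_asynchrony(Y):
    """Compute community stability (CV_net^-1) and species asynchrony
    (1 - theta) for one window of a years x species productivity matrix."""
    Y = np.asarray(Y, dtype=float)      # rows: years, columns: species
    net = Y.sum(axis=1)                 # community ANPP per year
    mu_net = net.mean()                 # temporal mean of community ANPP
    sd_net = net.std(ddof=1)            # temporal SD of community ANPP
    stability = mu_net / sd_net         # CV_net^-1
    sd_pop = Y.std(axis=0, ddof=1)      # temporal SD of each species
    theta = np.sqrt(net.var(ddof=1) / sd_pop.sum() ** 2)  # synchrony
    return stability, 1.0 - theta       # asynchrony = 1 - theta
```

For perfectly synchronized species (all populations rising and falling in proportion), theta equals 1 and asynchrony is 0; asynchronous dynamics pull theta toward 0.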
The index of species synchrony is useful because it can be mathematically partitioned out as a component of the variation in community ANPP ($\mathrm{CV}_{\mathrm{net}}$): $\mathrm{CV}_{\mathrm{net}} = \theta \cdot \mathrm{CV}_{\mathrm{pop}}$, where $\mathrm{CV}_{\mathrm{pop}}$ is the mean temporal variation of the population ANPPs of species within the community, calculated as $\mathrm{CV}_{\mathrm{pop}} = \left(\sum_{i}^{N} \sigma_i\right)/\mu_{\mathrm{net}}$25. The inverse of the mean temporal variation in species ANPP is a measure of population stability ($\mathrm{CV}_{\mathrm{pop}}^{-1}$).
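The identity CV_net = θ · CV_pop follows algebraically from the definitions (the summed population SDs cancel), and can be checked numerically; the random matrix below is purely illustrative:

```python
import numpy as np

# Numerical check of the identity CV_net = theta * CV_pop for an
# arbitrary years x species productivity matrix (values are invented).
rng = np.random.default_rng(0)
Y = rng.uniform(1.0, 10.0, size=(5, 4))     # 5 years, 4 species

net = Y.sum(axis=1)                          # community ANPP per year
mu_net, sd_net = net.mean(), net.std(ddof=1)
sd_pop = Y.std(axis=0, ddof=1)               # per-species temporal SDs

cv_net = sd_net / mu_net                     # community-level CV
theta = np.sqrt(net.var(ddof=1) / sd_pop.sum() ** 2)  # synchrony
cv_pop = sd_pop.sum() / mu_net               # population-level CV

assert np.isclose(cv_net, theta * cv_pop)    # the partitioning holds
```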
We determined the effect of species richness on stabilizing ANPP through the relative effects of richness on maintaining a greater temporal mean and reducing the temporal standard deviation in ANPP. This was done by calculating the power coefficients of the functions $\log(\mu_{\mathrm{net}}) \sim b_{\mathrm{mean}}\cdot\log(\mathrm{richness})$ and $\log(\sigma_{\mathrm{net}}) \sim b_{\mathrm{SD}}\cdot\log(\mathrm{richness})$, where $b_{\mathrm{mean}}$ is the relative effect of richness on the temporal mean and $b_{\mathrm{SD}}$ is the relative effect of richness on the temporal standard deviation. These are the relative effects of species richness on the temporal mean and standard deviation that determine $b_{\mathrm{CVnet}^{-1}}=b_{\mathrm{mean}}-b_{\mathrm{SD}}$, where $b_{\mathrm{CVnet}^{-1}}$ is derived from the power function $\log(\mathrm{CV}_{\mathrm{net}}^{-1})=b_{\mathrm{CVnet}^{-1}}\cdot\log(\mathrm{richness})$67. Similarly, we partitioned the relative effects of species richness on asynchrony and population stability, where $b_{\mathrm{CVnet}^{-1}}=b_{\mathrm{async}}+b_{\mathrm{CVpop}^{-1}}$, using the functions $\log(\mathrm{asynchrony}) \sim b_{\mathrm{async}}\cdot\log(\mathrm{richness})$ and $\log(\mathrm{CV}_{\mathrm{pop}}^{-1}) \sim b_{\mathrm{CVpop}^{-1}}\cdot\log(\mathrm{richness})$.
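The slope partition $b_{\mathrm{CVnet}^{-1}} = b_{\mathrm{mean}} - b_{\mathrm{SD}}$ is an algebraic identity of the log–log fits, since $\log(\mu/\sigma)=\log\mu-\log\sigma$ and least-squares slopes are linear in the response. A minimal sketch with invented richness and ANPP summaries (not the study's data):

```python
from math import log

def power_exponent(ys, xs):
    """OLS slope b of log(y) = a + b * log(x)."""
    lx = [log(x) for x in xs]
    ly = [log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    den = sum((u - mx) ** 2 for u in lx)
    return num / den

richness = [1, 2, 4, 8, 16]                      # hypothetical gradient
mu_net   = [200.0, 260.0, 340.0, 430.0, 560.0]   # invented temporal means
sd_net   = [ 80.0,  90.0, 100.0, 110.0, 120.0]   # invented temporal SDs

b_mean = power_exponent(mu_net, richness)
b_sd   = power_exponent(sd_net, richness)
# Exponent for stability CV_net^-1 = mu/sigma; equals b_mean - b_sd exactly:
b_stab = power_exponent([m / s for m, s in zip(mu_net, sd_net)], richness)
```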
Data analysis
All data analyses were done with R version 3.2.4 (http://www.R-project.org). All mixed-effects ANOVA models were calculated using the ASReml package for R (VSN International Ltd., Herts, UK) and the R package pascal (available at: https://github.com/pascal-niklaus/pascal). For all mixed-effects models assessing responses across years, the temporal autocorrelation of residuals across sequential years was included, and block and plot were included as random-effect terms68. The ANPP, annual NE, CE, SE, and RYT were assessed for relationships with species richness (log-transformed), year as a linear term followed by year as a factor, and their interactions with richness as fixed-effects terms. The ANPP was square-root transformed prior to analysis to meet assumptions of homoscedasticity. Since ANPP varies from year to year, making it difficult to compare the absolute effects of richness on productivity across years, we also assessed the effect of species richness on relative yield (RY), which was calculated by dividing the annual productivity of plots by the mean productivity of all monocultures in that year8. The richness–RY relationships were assessed as mentioned above for ANPP and biodiversity effects but using the power function log(RY) ~ log(richness), with year (factor) and the year by log-richness interaction as fixed-effect terms following8. The slope coefficients from this log–log regression (power exponent b) were extracted from the model and regressed against year (as a linear term) to determine whether the effects of richness on the RY showed a trend over the 17-year period. Similarly, we also regressed the slopes of the effects of richness on NE, CE, SE, and RYT against year as a linear term.
Because the biodiversity effects CE, SE, and NE are also measured on the absolute scale, we divided the richness–biodiversity effect slope coefficients by the average ANPP of all plots within the field for each year to express the richness–biodiversity effect slopes relative to the yearly ANPP across the field.
To further understand the temporal changes in the species richness–productivity relationship, we also assessed the relative change in productivity from year 1 in each plot by dividing the annual productivity of each plot by its productivity in year 1 (2003). The productivity relative to year 1 was log-transformed and assessed as a function of species richness level (factor) and year (linear) and their interaction as fixed terms. This allowed us to compare temporal changes in productivity among different richness levels to specifically assess whether less diverse communities declined in productivity more rapidly than did more species-rich communities.
Community stability, population stability, and asynchrony calculated for each 5-year rolling window were assessed for relationships with richness (log-transformed), the sequential 5-year window as a linear term followed by the 5-year window as a factor, and the interactions with richness as fixed-effects terms, with block and plot included as random terms. The community and population stability were log-transformed prior to analyses. The power exponents from the $\log(\mathrm{response}) \sim b\cdot\log(\mathrm{richness})$ relationships ($b_{\mathrm{CVnet}}$, $b_{\mathrm{mean}}$, $b_{\mathrm{SD}}$, $b_{\mathrm{async}}$, $b_{\mathrm{CVpop}}$) were also regressed against the sequential 13 five-year windows (linear or log-linear time) to assess how the relationships changed over time.
Linking temporal changes in biodiversity effects with stability
To link the CE and SE with the temporal indices of asynchrony, population stability, and community stability, we calculated the CE and SE, as well as the net ANPP, for each of the 5-year windows, calculating the CE and SE as mentioned above but with the biomass of the species summed over each five-year period. It should be noted that the 5-year calculation of biodiversity effects holds a slightly different biological meaning than the annual calculation of biodiversity effects. On the annual scale, biodiversity effects result from species' spatial and seasonal growth abilities within a given year and growing season. But over a 5-year window, biodiversity effects can arise from a temporal portfolio effect where different species asynchronously drive the ANPP in different years, such that varying yearly selection effects, for example, may scale up to 5-year interannual complementarity effects30. We then built a multigroup structural equation model to assess how species richness increasingly stabilized the ANPP over the 17 years, using three non-overlapping 5-year windows (2003–2007, 2009–2013, and 2015–2019) and the R package lavaan69. We chose three non-overlapping windows as groups for comparison to illustrate that the direct and indirect effects of richness on stability can differ depending on the age of the plant community, with data that are unique to each group. To further show the temporal changes in the direct and indirect effects across time, we also used the 15 consecutive 3-year windows. For each window, we assessed the effects of species richness on the 5-year complementarity and selection biodiversity effects that contribute to the five-year productivity. In turn, the population stability is then driven by the 5-year productivity and species richness25. Species richness also drives species asynchrony, and together both species asynchrony and population stability determine the community stability25.
We included in the models the direct effects of species richness on productivity and the direct effects of CE and SE on population stability. We also included the links between asynchrony and CE and SE because, in more diverse communities, species that differ more in their performance among years can result in their temporal complementarity and thus increase asynchrony through such a portfolio effect20,21. Community stability, population stability, and 5-year productivity were all log-transformed, and the 5-year CE was min-max scaled and log-transformed. Because extreme outliers in the 5-year CE and SE continued to influence the model fit, we assessed the model fit across a gradient of sequentially omitting extreme values until the model first reached an RMSEA value of 0. This occurred after omitting the top nine extreme values (about 3% of observations). The 5-year CE and SE were allowed to covary, as were the asynchrony and population stability. We then ran the model over all 13 consecutive 5-year windows to calculate the indirect effects of richness on community stability through the SE and CE effects on the ANPP, population stability, and asynchrony. These indirect effects were then regressed against time (consecutive windows) to detect any increasing or decreasing trends in their effects. This was also repeated with the 3-year windows.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
https://www.physicsforums.com/threads/problem-involving-derivative-and-left-hand-limit.715743/ | Problem involving derivative and left-hand limit
1. Oct 10, 2013
Loopas
This is from an online homework that's due in an hour. This question has been bothering me all day and I'm convinced that there's a problem with the website.
It's asking for the expression that's used to find the left-hand limit of the derivative, f'(0).
It won't take 2x+6 as the answer... Am I missing something or is the website just screwed up?
I attached a picture of the problem. Help fast!
2. Oct 10, 2013
klondike
Do you remember the derivative of $$x^n$$?
3. Oct 10, 2013
Loopas
Wouldn't that be n?
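A later note for anyone finding this thread: the derivative of $x^n$ is $n x^{n-1}$, not $n$. The expression the site is asking for is the one-sided limit of a difference quotient. Since the actual piecewise function is only in the attached image, the sketch below uses a hypothetical left-hand branch $f(x) = x^2 + 6x$ (chosen so that $f'(x) = 2x + 6$) just to show the mechanics:

```python
def f(x):
    # Hypothetical left-hand branch of the piecewise function; the real
    # one is only in the attached image, so this is for illustration.
    return x**2 + 6*x

def left_quotient(h):
    """Difference quotient (f(0 + h) - f(0)) / h, evaluated with h < 0."""
    return (f(0 + h) - f(0)) / h

# Symbolically the quotient is ((h^2 + 6h) - 0)/h = h + 6, so the
# left-hand limit as h -> 0- is 6 (= 2*0 + 6):
for h in (-0.1, -0.01, -0.001):
    print(round(left_quotient(h), 4))   # 5.9, 5.99, 5.999
```

For this branch the left-hand limit of the quotient is $6$, i.e. the slope formula $2x + 6$ evaluated at $x = 0$.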
http://math.stackexchange.com/questions/129088/why-is-wv-simeq-dkx-1-dots-x-n/129171 | # Why is $W(V)\simeq D(k[X_1,\dots,X_n])$?
I was reading about the Weyl algebra, but don't get a certain isomorphism.
For a little background, let $V$ be a vector space of dimension $2n$, with a nondegenerate bilinear form $\omega$, and let $W(V)$ be the corresponding Weyl algebra. So recall that $W(V)=T(V)/I$ where $I$ is the ideal generated by $x\otimes y-y\otimes x-\omega(x,y)$ for $x,y\in V$ and $T(V)$ denotes the tensor algebra.
If $k$ is a field of characteristic $0$, then $W(V)\simeq D(k[X_1,\dots,X_n])$, but I'm having trouble realizing this. Does anyone have a clear proof, or reference to a proof to bring this isomorphism to light? Many thanks.
If $V^{2n}$ is a symplectic vector space (and we assume characteristic 0), then we can always find a basis $\{x_1, p_1, \cdots, x_n, p_n\}$ of $V$ so that $$\omega(p_i, x_j) = \delta_{ij}$$ and $\omega(x_i, x_j) = 0 = \omega(p_i, p_j)$. Then the Weyl algebra is the algebra with generators $x_i, p_j$ with the relations $x_i x_j = x_j x_i$, $p_i p_j = p_j p_i$, and $p_i x_j -x_j p_i = \delta_{ij}$.
Now consider the algebra of polynomial differential operators. This has generators $x_i, \partial_j$, satisfying $x_i x_j = x_j x_i$, $\partial_i \partial_j = \partial_j \partial_i$, and $\partial_i x_j - x_j \partial_i = \delta_{ij}$.
So in this basis for $V$, we see that both algebras have the same generators and relations, so the isomorphism is given by $p_j \mapsto \partial_j$.
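The relation $\partial_i x_j - x_j \partial_i = \delta_{ij}$ can also be sanity-checked numerically on a test polynomial in two variables (this is only an illustration at one sample point, of course, not a proof):

```python
def d1(g, x, y, h=1e-6):
    """Numerical partial derivative of g in its first variable."""
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def p(x, y):
    return x**3 * y + 5 * y**2          # arbitrary test polynomial

def comm_same(x, y):
    """([d_1, x_1] p)(x, y) = d_1(x * p) - x * d_1 p -- should equal p."""
    return d1(lambda u, v: u * p(u, v), x, y) - x * d1(p, x, y)

def comm_diff(x, y):
    """([d_1, x_2] p)(x, y) = d_1(y * p) - y * d_1 p -- should vanish."""
    return d1(lambda u, v: v * p(u, v), x, y) - y * d1(p, x, y)

x0, y0 = 1.3, -0.7
print(abs(comm_same(x0, y0) - p(x0, y0)) < 1e-6)   # True
print(abs(comm_diff(x0, y0)) < 1e-6)               # True
```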
The fact that such a basis of $V$ exists is so basic that I'm not sure it even has a name! (For symplectic manifolds this is the Darboux theorem, but that already presupposes the result about vector spaces.) Any book or lectures notes on symplectic geometry should prove it. It's easy enough that you should be able to prove it yourself (hint: use nondegeneracy of $\omega$ and induction on $n$).
Update:
Here is a proof that we can always choose a symplectic basis $\{x_1, p_1, \cdots, x_n, p_n\}$. Pick some vector $x_1 \in V$. Since $\omega$ is nondegenerate, we can find some $p_1 \in V$ so that $\omega(p_1, x_1) \neq 0$. Then by rescaling $p_1$ if necessary, we can assume $\omega(p_1, x_1) = 1$. If $\dim V = 2$ we're done; if not, assume by induction that the result is true for symplectic vector spaces of strictly smaller dimension. Now consider the subspaces $W$ and $W^\perp$ defined by \begin{align} W &= \mathrm{span}\{x_1, p_1\} \\ W^\perp &= \{v \in V \ | \ \omega(v, w) = 0 \ \forall\ w \in W \} \end{align} A quick calculation shows that $W \cap W^\perp = 0$, and that the symplectic form restricted to $W^\perp$ is nondegenerate. Hence by induction there is a basis $\{x_2, p_2, \cdots, x_n, p_n\}$ of $W^\perp$ satisfying $\omega(p_i, x_j) = \delta_{ij}$. Since $V = W \oplus W^\perp$, we're done.
A good reference is the lecture notes by Cannas da Silva.
Thanks Jonathan. I'm not familiar with symplectic vector spaces really, so I've been having a hard time proving this claim these past days. Do you mind including it if you have the time, or possibly giving a reference where I could look it up? (I'm not familiar with texts on symplectic geometry either.) – Buble Apr 12 '12 at 23:46
Updated to include a sketch of the proof. You can find the details in the lecture notes by Cannas da Silva. – Jonathan Apr 13 '12 at 22:43
This is a very slick proof, more so than the one I'm familiar with. Out of curiosity, how do you know that the algebra of polynomial differential operators has generators $x_i,\partial_i$ satisfying those claimed relations? – Kally Apr 14 '12 at 21:20
@Kally what is your working definition of polynomial differential operators, if not the algebra generated by $x_i, \partial_j$ subject to the above relations? There are several equivalent definitions, so the answer depends on which you take as fundamental. – Jonathan Apr 25 '12 at 19:24 | 2014-04-19 20:40:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9938036203384399, "perplexity": 130.04249207357287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://read.somethingorotherwhatever.com/entry/FertilityNumbers | # Fertility Numbers
• Published in 2018
• In the collections: Attention-grabbing titles, Combinatorics, Integerology
A nonnegative integer is called a fertility number if it is equal to the number of preimages of a permutation under West's stack-sorting map. We prove structural results concerning permutations, allowing us to deduce information about the set of fertility numbers. In particular, the set of fertility numbers is closed under multiplication and contains every nonnegative integer that is not congruent to $3$ modulo $4$. We show that the lower asymptotic density of the set of fertility numbers is at least $1954/2565\approx 0.7618$. We also exhibit some positive integers that are not fertility numbers and conjecture that there are infinitely many such numbers.
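For small $n$, fertility can be computed by brute force. The sketch below implements the standard one-pass description of West's stack-sorting map and counts preimages; for instance, the fertility of the identity of length $n$ is the Catalan number $C_n$, since its preimages are exactly the stack-sortable (231-avoiding) permutations:

```python
from itertools import permutations

def stack_sort(perm):
    """One pass of West's stack-sorting map s (output ends with the max)."""
    stack, out = [], []
    for x in perm:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    out.extend(reversed(stack))
    return tuple(out)

def fertility(perm):
    """Number of preimages of perm under s -- brute force, small n only."""
    n = len(perm)
    return sum(stack_sort(q) == tuple(perm)
               for q in permutations(range(1, n + 1)))

# Fertility of the identity is the Catalan number C_n:
print(fertility((1, 2, 3)), fertility((1, 2, 3, 4)))   # 5 14
```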
### BibTeX entry
@article{FertilityNumbers,
title = {Fertility Numbers},
abstract = {A nonnegative integer is called a fertility number if it is equal to the
number of preimages of a permutation under West's stack-sorting map. We prove
structural results concerning permutations, allowing us to deduce information
about the set of fertility numbers. In particular, the set of fertility numbers
is closed under multiplication and contains every nonnegative integer that is
not congruent to {\$}3{\$} modulo {\$}4{\$}. We show that the lower asymptotic density of
the set of fertility numbers is at least {\$}1954/2565\approx 0.7618{\$}. We also
exhibit some positive integers that are not fertility numbers and conjecture
that there are infinitely many such numbers.},
url = {http://arxiv.org/abs/1809.04421v3 http://arxiv.org/pdf/1809.04421v3},
year = 2018,
author = {Colin Defant},
comment = {},
urldate = {2020-10-16},
archivePrefix = {arXiv},
eprint = {1809.04421},
primaryClass = {math.CO},
collections = {attention-grabbing-titles,combinatorics,integerology}
}
https://ncatlab.org/nlab/show/JCMcKeown | # nLab JCMcKeown
jesse dot mckeown at mail dot utoronto dot ca
Around here causes a small amount of trouble on occasion, which leads to smarter people improving the general quality and coverage of the whole.
category: people
Last revised on June 1, 2016 at 04:48:35. See the history of this page for a list of all contributions to it.
http://www.themathcitadel.com/2017/06/13/probabilistic-ways-to-represent-the-lifetime-of-an-object/ | Probabilistic Ways to Represent the Lifetime of an Object
# Probabilistic Ways to Represent the Lifetime of an Object
Every item, system, person, or animal has a lifetime. For people and animals, we typically just measure the lifetime in years, but we have other options for items and systems. We can measure airplane reliability in flight hours (hours actually flown), or stress test a manufacturing tool in cycles. Regardless of the units we use, there are many things in common. We don’t know how long any item will “live” before it’s manufactured or deployed, so an item’s lifetime is a random variable. We wish to make decisions about manufacturing, warranties, or even purchasing by taking the reliability of an object into account.
We can represent each class of items (a brand of 100W lightbulbs, USR’s NS4 robots, etc) by a random variable for the lifetime. We’ll call it $Y$. Like any random variable, it has a probability distribution. There isn’t only one way to represent the distribution of $Y$. We can look at equivalent representations, each one useful for answering different types of questions. This article will run through a few of them and the uses by studying the theoretical lifetime distribution of USR’s famous NS4 robots.
Disclaimer: This example is derived from the fictional company USR from Isaac Asimov's I, Robot. This should not be construed to represent any real product. I'm sure I'm missing some other disclaimer notices, but assume the standard ones are here.
## Survivor Function
The survivor function is the most common way to study the lifetime of an item. Colloquially, this is the probability that the item survives past time $t$. We denote it by $S(t)$, and we can write mathematically that
$$S(t) = P(Y \geq t)$$
This equation can be given by a standard probability distribution (the exponential distribution is the most common) or other formula.
Example (Exponential NS4s)
Without having access to USR’s manufacturing data, let’s assume that the survivor function of the NS4 robot is given by $S(t) = e^{-t/3}$. Let’s also assume that $t$ is measured in years. What is the probability that a brand new NS4 lasts more than 5 years?
From the graph above, we can simply plug $t=5$ into the survivor function to get the answer to our question. The probability that the new NS4 survives longer than 5 years is
$$S(5) = e^{-5/3} \approx 0.189$$
We could use the survivor function to help USR decide where to place the cutoff for warranty claims. Depending on the cost of either repairing the NS4 or replacing the NS4 with the NS5, we can backsolve to find out what $t$ would satisfy management. Suppose the cost function requires that the probability of surviving past the cutoff $t$ is 85%. Then we can use the survivor function to backsolve for $t$:
\begin{aligned}0.85 &= e^{-t/3} \\ \ln(0.85) &= -\frac{t}{3} \\ t &\approx 0.49\end{aligned}
Thus, we would set the warranty claims to be valid only for about the first half year after the NS4 is purchased.
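As a numeric check of the backsolve above, here is a minimal sketch assuming the same hypothetical survivor function $S(t) = e^{-t/3}$:

```python
import math

def survivor(t):
    """Assumed NS4 survivor function S(t) = exp(-t/3), t in years."""
    return math.exp(-t / 3.0)

# Backsolve S(t) = 0.85 for the warranty cutoff: t = -3 * ln(0.85)
cutoff = -3.0 * math.log(0.85)
print(round(cutoff, 2))   # 0.49
```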
Remark. Another way to judge an item is by looking at the shape of the survivor function. A steep decline like the one shown in the above graph tells us that the NS4 isn’t exactly the most reliable robot. Only about half of them survive two years.
### Conditional Survivor Function
For those who wish to dive into a little bit more math, we can dive into the conditional survivor function. This “spinoff” of the survivor function will tell us the probability of surviving past time $t$ when it is currently functioning at time $a$. The survivor function above assumes $t$ starts at 0; that is, the object is brand new. If we have bought a used NS4, or perhaps have been sending it to the grocery store for a while, then we need to take into account the fact that the NS4 has been operational for some time $a$.
We write the conditional survivor function $S_{Y|Y\geq a}(t)$ for some fixed $a$. We can use the famous Bayes formula to express this mathematically:
$$S_{Y|Y\geq a}(t) = P(Y \geq t | Y \geq a) = \frac{P(Y \geq t \text{ and } Y \geq a)}{P(Y \geq a)} = \frac{P(Y \geq t)}{P(Y\geq a)} = \frac{S(t)}{S(a)},$$ where the third equality holds because, for $t \geq a$, the event $\{Y \geq t\}$ already implies $\{Y \geq a\}$.
What this formula is basically saying is that the probability that the NS4 survives past $t$ given that it has already lived for $a$ years is given by $S(t)/S(a)$, and derived via Bayes’ formula.
Example
Suppose we bought a used NS4 that was 2 years old (and it is working now). What is the probability that this NS4 is still working more than 3 years from now?
We are looking for the probability that the NS4 is still operational after more than 5 years given that it has already been working for 2. So
$$S_{Y|Y\geq 2}(5) = \frac{S(5)}{S(2)} = \frac{e^{-5/3}}{e^{-2/3}} = \frac{1}{e} \approx 0.367$$
Thus, we only have a 36.7% chance of getting more than 3 years out of our used NS4. Perhaps we may want to consider haggling for a lower price…
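In code, the conditional survivor function is just a ratio of survivor values, again assuming the hypothetical $S(t) = e^{-t/3}$:

```python
import math

def survivor(t):
    """Assumed NS4 survivor function S(t) = exp(-t/3), t in years."""
    return math.exp(-t / 3.0)

def conditional_survivor(t, a):
    """P(Y >= t | Y >= a) = S(t) / S(a), valid for t >= a."""
    return survivor(t) / survivor(a)

# Used NS4, already 2 years old; probability it survives past year 5:
print(round(conditional_survivor(5.0, 2.0), 3))   # 0.368 (= 1/e)
```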
## Cumulative Distribution Function (Cumulative Failure Probability)
This is the cumulative distribution function straight from basic probability, but we can add an additional interpretation in the context of reliability. Mathematically, the cumulative distribution function for a random variable $Y$ is the probability that the random variable is less than or equal to a fixed value $t$. We denote this by $F_{Y}(t) = P(Y \leq t)$.
You may recognize this as the “opposite” of the survival function ($S_{Y}(t) = P(Y \geq t)$). In probability, we call this the complement, and we can get from one event to its complement by noting that for an event $A$ and its complement $A^{c}$, $P(A) + P(A^{c}) = 1$. Thus, moving back to the survivor function and CDF, $S_{Y}(t) + F_{Y}(t) = 1$. Therefore, $F_{Y}(t) = 1-S_{Y}(t)$. With the NS4 example, the CDF is given by
$$F_{Y}(t) = 1-e^{-t/3}$$
The interpretation is exactly the opposite of the survivor function. The CDF gives us the probability that the NS4 will fail before time t.
Example
The probability that a new NS4 will fail before the 5th year is given by $F_{Y}(5) = 1-e^{-5/3} \approx 1-0.189 = 0.811$.
## Hazard Function
The most common way to look at a lifetime distribution in reliability is called the hazard function. This is also called the failure rate in some circles, and gives a measure of risk. We’ll take a little bit of a dive into math to derive this, since the derivation is illuminating as to its interpretation.
The hazard function is denoted by $h(t)$. The question we want to answer here is the conditional probability that the item will fail in a time interval $[t, t+\Delta t]$, given that failure has not occurred before time $t$. We want to know the probability of failure per unit time. So,
$$h(t)\Delta t = P(t \leq Y \leq t + \Delta t| Y \geq t).$$
We’re going to get into some calculus here, so this can be skipped if you’d rather not deal with this.
$$h(t) = \frac{P(t \leq Y \leq t + \Delta t\,|\, Y \geq t)}{\Delta t}= \frac{F(t+\Delta t)-F(t)}{S(t)\,\Delta t}$$
Now, if we let the interval length $\Delta t$ get smaller (to 0), we get the instantaneous failure rate.
$$h(t) = \frac{-S'(t)}{S(t)}$$
Remark. Those sharp in calculus will notice via the chain rule that $h(t) = -\frac{d}{dt}\ln(S(t))$, so we can express the hazard function in terms of the survivor function.
Example.
We can directly derive the hazard rate for the NS4 population.
$$h(t) = -\frac{d}{dt}\ln(e^{-t/3}) = \frac{d}{dt}\frac{t}{3} = \frac{1}{3}$$
which is one failure every three years of operation.
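Since $h(t) = -\frac{d}{dt}\ln S(t)$, we can sanity-check the constant hazard numerically without redoing the calculus, again assuming the hypothetical $S(t) = e^{-t/3}$:

```python
import math

def survivor(t):
    """Assumed NS4 survivor function S(t) = exp(-t/3), t in years."""
    return math.exp(-t / 3.0)

def hazard(t, dt=1e-6):
    """Numerical hazard h(t) = -(d/dt) ln S(t), via a finite difference."""
    return -(math.log(survivor(t + dt)) - math.log(survivor(t))) / dt

# The hazard is a constant 1/3 failures per year at any age:
print(round(hazard(0.5), 6), round(hazard(10.0), 6))   # 0.333333 0.333333
```

A constant hazard is the defining memoryless property of the exponential lifetime model; for other distributions the same code would show a hazard that rises or falls with age.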
The hazard function is commonly used in engineering maintenance to determine schedules for checks or component replacement. For example, the hazard function can be used to determine how many flight hours a Lockheed F-22 fighter jet can be operated before a certain component is at risk for failure and should be inspected for replacement.
There are other forms we can use to express the distribution of an object's lifetime, but these are the most common. Another thing to note is that we can easily move from one form to another. They all represent the same thing–lifetime of a system, but in slightly different ways. We were able to make several different decisions about USR's NS4 robots thanks to these different representations.
https://www.computer.org/csdl/trans/tc/1990/12/t1475-abs.html | ABSTRACT
A statistical model is considered for clock skew in which the propagation delays on every source-to-processor path are sums of independent contributions, and are identically distributed. Upper bounds are derived for expected skew, and its variance, in tree distribution systems with N synchronously clocked processing elements. The results are applied to two special cases of clock distribution. In the first, the metric-free model, the total delay in each buffer stage is Gaussian with a variance independent of stage number. In this case, the upper bound on skew grows as Θ(log N). The second, metric, model is meant to reflect VLSI constraints. Here, the clock delay in a stage is Gaussian with a variance proportional to wire length, and the distribution tree is an H-tree embedded in the plane. In this case, the upper bound on expected skew is Θ(N^{1/4} (log N)^{1/2}).
INDEX TERMS
upper bound; expected clock skew; synchronous systems; statistical model; propagation delays; tree distribution systems; synchronously clocked processing elements; buffer stage; VLSI constraints; H-tree; multiprocessor interconnection networks.
CITATION
K. Steiglitz, S.D. Kugelmass, "An Upper Bound on Expected Clock Skew in Synchronous Systems", IEEE Transactions on Computers, vol. 39, no. 12, pp. 1475-1477, December 1990, doi:10.1109/12.61068
https://www.greencarcongress.com/2009/04/tesla-takes-520-reservations-for-model-s-in-a-week.html | ## Tesla Takes 520 Reservations for Model S in a Week
##### 02 April 2009
Since the launch on 26 March, Tesla Motors has taken 520 reservations for the Model S, an all-electric family sedan that carries up to seven people and travels up to 300 miles per charge. The $5,000 reservation fee is refundable. Production of the Model S is planned to begin in late 2011. Tesla has applied for a $350 million loan from the Department of Energy’s Advanced Technology Vehicle Manufacturing Program, which would be used to build the Model S assembly plant in California.
The Model S can be recharged from any 120V, 208V or 240V outlet or quick-charged from an external direct current supply in 45 minutes. The Model S does 0-60 mph in 5.6 seconds, and will have an electronically limited top speed of 130 mph (209 km/h). The anticipated base price of the Model S is $49,900 after a federal tax credit of $7,500.
Three battery pack choices will offer a range of 160, 230 or 300 miles per charge. The company has not released options pricing.
Tesla also is taking reservations for the Model S Signature Edition with a $40,000 reservation fee. Tesla will produce only 2,000 Signature Edition cars, which will be the first built and have unique interior and exterior features. Signature Edition cars will be evenly split between US and European customers.

Separately, Tesla delivered 104 Roadsters to customers in March, marking the first triple-digit delivery month in the company’s history. Tesla delivered more than 170 cars in the first quarter—more than the total delivered in 2008. Tesla has delivered about 320 Roadsters so far. The base price of the Roadster is $101,500 after a $7,500 federal tax credit.

Tesla plans to introduce more affordable cars and partner with other automakers to help them produce mass-market EVs. Tesla announced in January it is partnering with Daimler AG to produce the battery packs and chargers for at least 1,000 Smart EVs. (Earlier post.)

### Comments

That's indicative of high market demand, isn't it? So why is the federal government giving $30 billion to decrepit car companies like GM and Buick, with zero demand, while ignoring the electric car manufacturers? Tesla still needs some manufacturing capacity - so why isn't the government trying to broker a GM-Tesla partnership to produce these cars and make the U.S. the world leader in electric vehicles?
As the zen master said "we will see". A few reservations and a few orders from people with money that have several cars does not make a market.
"so why isn't the government trying to broker a GM-Tesla partnerships to produce these cars and make the U.S. the world leader in electric vehicles?"
I'd say the US government and governments in general, are inept at running anything.
Telsa will be a stronger company if it figures out how to make money without government hand holding. Keep the government as far away from Tesla as possible, and you'll have a world class company.
Just like the butterfly emerging from its cocoon - if it doesn't struggle and do it on its own, it comes out all deformed and warped. Government aid should be the last resort a company turns to.
@TM:
"Keep the government as far away from Tesla as possible, and you'll have a world class company."
It is just this kind of anti-establishment heresy that is destroying the world! Get with the program man! Government can't DO anything original so it steals it from those who can. You make out like government running car companies with backsliders, and bureaucrats is a bad thing. WTF?
"Tesla has applied for a $350 million loan from the Department of Energy’s Advanced Technology Vehicle Manufacturing Program"

If they could get the money somewhere else they would. It is my guess that they cannot. Raising money for ventures is almost impossible. Now you want to get into something as competitive and capital intensive as cars...really forget it.

@SJC: "Raising money for ventures is almost impossible. Now you want to get into something as competitive and capital intensive as cars...really forget it."

Either you are remarkably ignorant or simply a troll in service of the marxist doomers and therefore a perfect example of fundamental alarmist tripe.

"The US accounted for 83 per cent of global cleantech [VC] investment in 2007. With $2.52bn invested in 159 cleantech deals, the US saw an impressive 79 per cent increase in investment over 2006. Cleantech accounted for more than eight per cent of the total US venture capital investment in 2007.
Global venture capital investment last year reached US$35.2 billion, the highest level since 2001, and is maintaining a robust pace in 2007. The acceleration has been bolstered by the increasing globalization of both venture capital funds and venture-backed companies and a substantial investor focus on emerging sectors." http://www.altassets.com/news/arc/2008/nz12803.php

And one more thing. Tesla, the startup company arguably responsible for catalyzing the electrification of transport around the world - has another big hit on their hands. And at half the cost of their high-end first sports car. Nice work guys. You deserve the guvmn't (read American citizen's) dough - all they do is throw it in a street called Fannie Mae.

Reel$$, You need not resort to insults. Sure there seems to be a lot of venture money out there, but in a multitrillion dollar world economy, it is a drop in the bucket. Try getting some venture capital yourself and you might see how difficult it is. Again, why would they want government money IF there is so much private money available everywhere?

SJC: $32 billion for VC starts is damned good. I wouldn't sneeze at it and best part is it's not dictated by a bunch of agenda-driven fundamentalists (i.e. guvmn't.)
The comments to this entry are closed. | 2023-02-07 10:44:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17495138943195343, "perplexity": 3037.267339771186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00025.warc.gz"} |
https://ask.sagemath.org/answers/45789/revisions/ | # Revision history [back]
This is a linux box answering. First of all, what does the matrix look like? Is it a piece of code? Is it a csv type?
In my case, the following worked. I stored in a file named Matrix.sage in the folder /home/dan/temp the following data:
1234 928374 1412341 123412431
443313341 13 131134 3414315634
10945203452 2341 122 74532423
9345234 7834192 10234123 99
To make things simple (well, for me complicated), I will not use / import csv and take advantage of its offered functionality.
Then the following code parses the lines of the data file line by line, splits the entries w.r.t. the blank (my choice of a delimiter), puts each row obtained in this way in a list, and appends the rows one by one to the variable rows, used to initialize the matrix A in the next line.
stream = open( r'/home/dan/temp/Matrix.sage', 'r' )
rows = []
for line in stream:
    rows.append( [ ZZ(entry) for entry in line.strip().split(' ')] )
A = matrix( ZZ, 4, 4, rows )
print( 'The matrix A is as follows:\n{!r}'.format(A) )
print( '\n\ndet(A) = {} = {}'.format(A.det(), A.det().factor()) )
This gives:
The matrix A is as follows:
[ 1234 928374 1412341 123412431]
[ 443313341 13 131134 3414315634]
[10945203452 2341 122 74532423]
[ 9345234 7834192 10234123 99]
det(A) = 56986363083817785658295994231136 = 2^5 * 7 * 61 * 1136790418207 * 3668704083691007
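As a cross-check without Sage, the same parsing works in plain Python, with the entries becoming ordinary ints. Here the data file is simulated with io.StringIO so the snippet is self-contained (names are my own choice):

```python
import io

# Stand-in for open('/home/dan/temp/Matrix.sage', 'r'): the same four data lines.
stream = io.StringIO(
    "1234 928374 1412341 123412431\n"
    "443313341 13 131134 3414315634\n"
    "10945203452 2341 122 74532423\n"
    "9345234 7834192 10234123 99\n"
)

# split() without an argument splits on any run of whitespace, so the
# parsing also survives aligned columns padded with extra blanks.
rows = [[int(entry) for entry in line.split()] for line in stream if line.strip()]

print(rows)
```

The resulting rows list can then be handed to matrix( ZZ, 4, 4, rows ) in Sage exactly as above.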
On a Win* box try instead the full path to the Matrix.sage file, for instance
stream = open( r'C:\temp\Matrix.sage', 'r' )
to open for read a presumably existing file that lives under C:\temp. | 2020-04-08 02:04:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3781958222389221, "perplexity": 4212.836184451747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371807538.83/warc/CC-MAIN-20200408010207-20200408040707-00121.warc.gz"} |
https://scitools.org.uk/iris/docs/latest/whatsnew/1.8.html | # What’s new in Iris 1.8¶
Release: 1.8.1 3rd June 2015
This document explains the new/changed features of Iris in version 1.8. (View all changes.)
## Iris 1.8 features¶
Showcase: Rotate winds
Iris can now rotate and unrotate wind vector data by transforming the wind vector data to another coordinate system.
For example:
>>> from iris.analysis.cartography import rotate_winds
>>> target_cs = iris.coord_systems.GeogCS(6371229.0)
>>> u_prime, v_prime = rotate_winds(u_cube, v_cube, target_cs)
Showcase: Nearest-neighbour scheme
A nearest-neighbour scheme for interpolation and regridding has been added to Iris. This joins the existing Linear and AreaWeighted interpolation and regridding schemes.
For example:
>>> result = cube.interpolate(sample_points, iris.analysis.Nearest())
>>> regridded_cube = cube.regrid(target_grid, iris.analysis.Nearest())
Showcase: Slices over a coordinate
You can slice over one or more dimensions of a cube using iris.cube.Cube.slices_over(). This provides similar functionality to slices() but with almost the opposite outcome.
Using slices() to slice a cube on a selected dimension returns all possible slices of the cube with the selected dimension retaining its dimensionality. Using slices_over() to slice a cube on a selected dimension returns all possible slices of the cube over the selected dimension.
To demonstrate this:
>>> cube = iris.load(iris.sample_data_path('colpex.pp'))[0]
>>> print(cube.summary(shorten=True))
air_potential_temperature / (K) (time: 6; model_level_number: 10; grid_latitude: 83; grid_longitude: 83)
>>> my_slice = next(cube.slices('time'))
>>> my_slice_over = next(cube.slices_over('time'))
>>> print(my_slice.summary(shorten=True))
air_potential_temperature / (K) (time: 6)
>>> print(my_slice_over.summary(shorten=True))
air_potential_temperature / (K) (model_level_number: 10; grid_latitude: 83; grid_longitude: 83)
• A cube’s lazy data payload will still be lazy after saving; the data will not be loaded into memory by the save operation.
• Cubes with data payloads larger than system memory can now be saved to NetCDF through biggus streaming the data to disk.
• NetCDF loading now supports the following dimensionless vertical coordinates:
• ocean sigma coordinate,
• ocean s coordinate,
• ocean s coordinate, generic form 1, and
• ocean s coordinate, generic form 2.
## Bugs fixed¶
### 1.8.0¶
• Fix in netCDF loader to correctly determine whether the longitude coordinate (including scalar coordinates) is circular.
• iris.cube.Cube.intersection() now supports bounds that extend slightly beyond 360 degrees.
• Lateral Boundary Condition (LBC) type FieldFiles are now handled correctly by the FF loader.
• Making a copy of a scalar cube with no data now correctly copies the data array.
• Height coordinates in NAME trajectory output files have been changed to match other NAME output file formats.
• Fixed datatype when loading an integer_constants array from a FieldsFile.
• FF/PP loader adds appropriate cell methods for lbtim.ib = 3 intervals.
• An exception is raised if the units of the latitude and longitude coordinates of the cube passed into iris.analysis.cartography.area_weights() are not convertible to radians.
• GRIB1 loader now creates a time coordinate for a time range indicator of 2.
• NetCDF loader now loads units that are empty strings as dimensionless.
### 1.8.1¶
• The PP loader now carefully handles floating point errors in date time conversions to hours.
• The handling fill values for lazy data loaded from NetCDF files is altered, such that the _FillValue set in the file is preserved through lazy operations.
• The risk that cube intersections could return incorrect results due to floating point tolerances is reduced.
• The new GRIB2 loading code is altered to enable the loading of various data representation templates; the data value unpacking is handled by the GRIB API.
• Saving cube collections to NetCDF, where multiple similar aux-factories exist within the cubes, is now carefully handled such that extra file variables are created where required in some cases.
### 1.8.2¶
• A fix to prevent the error: AttributeError: ‘module’ object has no attribute ‘date2num’. This was caused by the function netcdftime.date2num() being removed from the netCDF4 package in recent versions.
## Deprecations¶
• The original GRIB loader has been deprecated and replaced with a new template-based GRIB loader.
• Deprecated default NetCDF save behaviour of assigning the outermost dimension to be unlimited. Switch to the new behaviour with no auto assignment by setting iris.FUTURE.netcdf_no_unlimited to True.
• The former experimental method “iris.experimental.regrid.regrid_bilinear_rectilinear_src_and_grid” has been removed, as iris.analysis.Linear now includes this functionality. | 2019-01-18 03:54:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20744867622852325, "perplexity": 12214.09327031761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659677.17/warc/CC-MAIN-20190118025529-20190118051529-00607.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/136273-matrices-complex-numbers-eigenvalues-vectors-print.html | # Matrices / Complex Numbers /eigenvalues+vectors
• March 29th 2010, 05:22 AM
thatgirlrocks
Matrices / Complex Numbers /eigenvalues+vectors
I really need some help answering this question, it's doing my head in!!
Q:
a) let z in the set of C (complex numbers). show that if z+zbar = 0, then Re(z) = 0 (something to do with Argand diagrams?)
(where zbar is z with a horizontal line across the top :S)
b) let A in the set of R^nxn (a square matrix, n by n) be skew-symmetric, that is Atransposed = -A. show that all eigenvalues of A have zero real-part.
c) suppose that x and y are eigenvectors corresponding to distinct eigenvalues of a real-valued skew-symmetric matrix. show that (xbartransposed)y=0
can anyone help me solve this please?
• March 29th 2010, 02:25 PM
mr fantastic
Quote:
Originally Posted by thatgirlrocks
I really need some help answering this question, it's doing my head in!!
Q:
a) let z in the set of C (complex numbers). show that if z+zbar = 0, then Re(z) = 0 (something to do with Argand diagrams?)
(where zbar is z with a horizontal line across the top :S)
[snip]
Let z = x + iy and solve for x.
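Spelling that hint out (just a sketch): with $z = x + iy$,

$$z + \bar{z} = (x + iy) + (x - iy) = 2x$$

so $z + \bar{z} = 0$ forces $x = \text{Re}(z) = 0$.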
Quote:
Originally Posted by thatgirlrocks
[snip]
b) let A in the set of R^nxn (a square matrix, n by n) be skew-symmetric, that is Atransposed = -A. show that all eigenvalues of A have zero real-part.
[snip]
Apply a well-known theorem about the eigenvalues of $A$ and $A^T$.
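Alternatively, here is a sketch that leans on part a) instead of that theorem (the notation $\bar{v}^T$ matches part c)): suppose $Av = \lambda v$ with $v \neq 0$, and set $w = \bar{v}^T A v$. Since $A$ is real and $w$ is a scalar,

$$\bar{w} = v^T A \bar{v} = (v^T A \bar{v})^T = \bar{v}^T A^T v = -\bar{v}^T A v = -w$$

so $w + \bar{w} = 0$ and, by part a), $\text{Re}(w) = 0$. But $w = \lambda \bar{v}^T v = \lambda \|v\|^2$ with $\|v\|^2 > 0$, hence $\text{Re}(\lambda) = 0$.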
Quote:
Originally Posted by thatgirlrocks
[snip]
c) suppose that x and y are eigenvectors corresponding to distinct eigenvalues of a real-values skew-symmetric matrix. show that (xbartransposed)y=0
can anyone help me solve this please? | 2014-07-22 14:09:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788256645202637, "perplexity": 1057.560468253946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997858892.28/warc/CC-MAIN-20140722025738-00181-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://www.sangakoo.com/en/unit/validity-area-of-several-simultaneos-linear-inequations | # Validity area of several simultaneos linear inequations
In this section we will explain how to solve the following type of problems:
Given several restrictions (inequations), we have to determine the area on the plane that satisfies all of them by giving its vertexes.
We usually find more than one simultaneous restriction for the variables in inequations exercises. For example, if we have to find the number of chairs (of $10$ kg each) and tables (of $20$ kg each) that can be carried by a truck which cannot carry more than $1000$ kg, we have to consider the restriction that the number of chairs and tables has to be positive. Therefore, we do not only have to consider the weight restriction for the truck:
(i) $10\cdot x+20\cdot y\leqslant 1000$
but also restrictions of being positive both the number of chairs ($x$) and the number of tables ($y$):
(ii) $x\geqslant0$
(iii) $y\geqslant0$
Each of these restrictions has an associated straight line in the plane XY that separates the plane into two regions: the validity region (the region where the restriction is satisfied) and the area where it is not satisfied. Next we find these straight lines and validity regions for the three restrictions:
(i) The restriction is: $$10\cdot x+20\cdot y\leqslant 1000$$
and therefore the associated straight line is: $$f(x)=-\dfrac{1}{2}\cdot x+50$$
If we also test the point $(x=0,y=0)$ in the inequation:
$$10\cdot 0+20\cdot 0\leqslant 1000$$
we see that it is satisfied, so the validity region will be the one that contains the point $(0,0)$:
(ii) The restriction is: $$x\geqslant0$$
This type of restriction represents a vertical straight line (parallel to the axis $y$) that separates the values of $x$ greater than and less than $0$. The validity area will be the set of values with $x$ greater than or equal to zero:
(iii) The restriction is: $$y\geqslant0$$
The straight line associated with this restriction is: $$g(x)=0$$
and the validity area is, obviously, the region over $g(x)$, $\ y\geqslant0$:
Now we need to know the region of the plane XY where all the restrictions are satisfied simultaneously. This region will be the one that is common to the validity regions of the individual restrictions. For the case of the chairs and the tables it will be the triangle bounded by the straight line $f(x)$ and the axes $x$ and $y$:
We can see that in this case, taking all the restrictions into account simultaneously, the validity region (from now on we will refer to the region common to all the validity areas of the different restrictions simply as the validity region) is an enclosed area of the plane. In the previous examples the validity region extended to infinity on some side, which is why those areas were not bounded.
To describe the validity region completely, the coordinates of its vertices have to be known. In this case it is very simple. We already know the coordinates of one of the points: $(0,0)$. The other two vertices are the points where the straight line $f(x)$ crosses the axes.
The intersection points with the axes can be found easily:
• For the cross point with the axis $y$, we only have to know that the whole axis $y$ has a coordinate $x=0$, and the value of $y$ in the intersection point will be the one that takes the function $f(x)=-\dfrac{1}{2}\cdot x+50$ in $x=0$ (on the axis). So the intersection point will be: $(x=0,y=f(0)=50)$.
• And the intersection point with the axis $x$ is when $y = 0$, that is to say, in the value of $x$ where the function takes the value $0$: $$f(x)=0 \Rightarrow -\dfrac{1}{2}\cdot x+50=0 \Rightarrow x=100$$ and therefore the point where the straight line $f(x)$ crosses the axis $x$ is $(x=100,y=0)$.
So the vertices of the validity region have the following coordinates: $$(0,0) \quad (0,50) \quad (100,0)$$
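These coordinates are easy to double-check in code; a minimal plain-Python sketch (function and variable names are mine, not from the text):

```python
def f(x):
    """Boundary of the weight restriction 10x + 20y <= 1000, solved for y."""
    return -x / 2 + 50

# Crossing with the y axis: x = 0, y = f(0).
y_axis_vertex = (0, f(0))

# Crossing with the x axis: solve f(x) = 0, giving x = 100.
x_axis_vertex = (100, f(100))

# Every vertex respects the 1000 kg weight restriction (the origin trivially).
for x, y in [(0, 0), y_axis_vertex, x_axis_vertex]:
    assert 10 * x + 20 * y <= 1000

print(y_axis_vertex, x_axis_vertex)
```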
In this case it has been very easy to find the vertices.
The following example will illustrate the most general way to find the vertices of the validity region.
We have the following restrictions: $$\begin{array}{rcl} x+y &\geqslant& 4 \\ y &\leqslant& 4 \\ y &\geqslant& x \end{array}$$
whose associated straight lines are: $$\begin{array}{l} r(x): \ y=-x+4 \\ s(x):\ y=4 \\ t(x):\ y=x \end{array}$$
We can visualize these straight lines and determine the semiplanes where every inequation is satisfied separately.
For the straight line $r$:
Since the inequation is not satisfied at point $(0,0)$, $\ 0+0\ngeqslant 4$, we see that the area of validity of the inequation is the semiplane over the straight line.
For the straight line $s$:
Since the inequation is satisfied at point $(0,0)$, $\ 0\leqslant 4$, we see that the area of validity of the inequation is the semiplane below the straight line.
For the straight line $t$:
Since the inequation is satisfied at point $(0,1)$, $\ 1\geqslant 0$, we see that the area of validity of the inequation is the semiplane over the straight line.
As a whole we have:
The area where all the semiplanes coincide is the feasible region. We see that in this case it is also a question of a bounded area.
Now we have to calculate the vertices of this area. To do so, it is necessary to know the points where the straight lines cross. We will have to find three intersection points: that of the straight line $r$ with $s$, that of $r$ with $t$, and that of $s$ with $t$.
How to find the point of intersection of two straight lines:
To know a point means to know the coordinates $x$ and $y$ of the above mentioned point. If two straight lines $f(x)$ and $g(x)$ cross, we know that both functions will be taking the same value in the position where they cross. Graphically it is:
Therefore we know that the intersection point will be $(x_0,y_0)$ and that the value of $y_0$ equals the value of the two functions at $x_0$, since both functions have to take the same value at this point in order to cross: $$f(x_0)=g(x_0)$$
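This procedure is easy to mechanize. A small plain-Python helper using exact fractions (the function name and argument convention are my own, not from the text):

```python
from fractions import Fraction

def intersection(a, b, c, d):
    """Crossing point of the lines y = a*x + b and y = c*x + d (assumes a != c)."""
    x0 = Fraction(d - b, a - c)  # from solving a*x0 + b = c*x0 + d
    y0 = a * x0 + b              # common value of both functions at x0
    return (x0, y0)

# The example's lines: r(x) = -x + 4, s(x) = 4, t(x) = x.
r_s = intersection(-1, 4, 0, 4)
r_t = intersection(-1, 4, 1, 0)
s_t = intersection(0, 4, 1, 0)
print(r_s, r_t, s_t)
```

This reproduces the three crossings worked out step by step below: $(0,4)$, $(2,2)$ and $(4,4)$.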
Back to the example.
Intersection point between $r$ and $s$:
We have to find the coordinates $x$ and $y$ of the intersection point. We will call the coordinates of this point $x_0$ and $y_0$.
To find the coordinate $x$ where the straight lines cross, $x_0$, we equate the two functions $r(x)=-x+4$ and $s(x)=4$ at the point where the straight lines cross ($x_0$):
$$r(x_0)=s(x_0) \Rightarrow -x_0+4=4 \Rightarrow x_0=0$$
Therefore the two straight lines cross at $x_0=0$.
Determining the coordinate $y$ of the intersection point, $y_0$, is simple, for it is the value that both functions $r(x)$ and $s(x)$ take at $x=x_0$.
$$y_0=r(x_0)=s(x_0) \Rightarrow y_0=r(0)=s(0)=-0+4=4$$
Therefore the intersection point between the straight lines $r$ and $s$ is: $(x_0=0,y_0=4)$.
Intersection point between $r$ and $t$:
We proceed just as in the previous case. We equate the functions $r(x)=-x+4$ and $t(x)=x$ at the point of intersection, which this time has coordinates $(x_1,y_1)$.
$$r(x_1)=t(x_1) \Rightarrow -x_1+4=x_1 \Rightarrow x_1=2$$
As in the previous case, the coordinate $y$ of the intersection point, $y_1$, is equal to the value that the functions $r(x)$ and $t(x)$ take at the intersection point:
$$y_1=r(x_1)=t(x_1)=-2+4=2$$
And so, the coordinates of the point of intersection are: $(x_1=2,y_1=2)$.
Intersection point between $s$ and $t$:
This intersection point will have coordinates $(x_2,y_2)$. First we determine the value of $x_2$ as in the previous cases, that is to say by equating $s(x)=4$ and $t(x)=x$ at the intersection point $x_2$: $$s(x_2)=t(x_2) \Rightarrow 4=x_2$$
We determine the value of the coordinate $y$ of the intersection point, $y_2$, as on the previous occasions: $$y_2=s(x_2)=t(x_2)=4$$
Therefore the intersection point between the straight lines $s(x)$ and $t(x)$ is the one that takes as its coordinates: $(x_2=4,y_2=4)$.
In short, the coordinates of the vertices of the feasible region are: $$(x_0,y_0)=(0,4) \quad (x_1,y_1)=(2,2) \quad (x_2,y_2)=(4,4)$$
Other examples:
Considering the following inequations system: $$\begin{array}{rcl} x &\geqslant& 0 \\ y &\leqslant& 4 \\ y &\geqslant& \dfrac{x}{2} \end{array}$$
We look first for the straight lines associated with every inequation and the areas of validity of each one:
• The first of them gives us a straight line parallel to the axis $y$ in $x=0$ and its validity region is the one that has $x$ greater than $0$ (towards the right of the axis $y$).
• The second one is a straight line parallel to the axis $x$, that meets the point $y=4$ and takes as a validity region the semiplane that it has below (where $y$ is less than $4$).
• The third straight line is $y=\dfrac{x}{2}$ and its validity region is the one that is above the straight line (it is possible to verify this easily, seeing that the point $(0,1)$ satisfies the inequation: $1\geqslant 0$).
We will determine the vertices of the validity area as the intersection points between the different straight lines.
• The straight line $x=0$ crosses with $y=4$ at point $(x_0=0,y_0=4)$.
• The straight line $x=0$ crosses with $y=\dfrac{x}{2}$ at point $(x_1=0,y_1=0)$.
• The straight line $y=4$ crosses with $y=\dfrac{x}{2}$ at point $(x_2=8,y_2=4)$.
Therefore, the vertices of the validity region are: $$(x_0,y_0)=(0,4) \quad (x_1,y_1)=(0,0) \quad (x_2,y_2)=(8,4)$$
Given the set of restrictions:
$$\begin{array}{rcl} x+3 &\geqslant& y \\ 8 &\geqslant& x+y \\ y &\geqslant& x-3 \\ x &\geqslant& 0 \\ y &\geqslant& 0 \end{array}$$
We look first for the straight lines associated with every restriction and the areas of validity of every inequation (checking each with a point in the inequations). The associated straight lines are:
$$\begin{array}{l} f: \ y=3+x \\ g:\ y=-x+8 \\ h:\ y=x-3 \\ i:\ x=0 \\ j:\ y=0 \end{array}$$
Drawing the straight lines and the validity areas we can visualize the validity region.
If we cannot do the drawing we have an alternative. With so many straight lines, normally we will have more intersection points between the straight lines than vertices in the validity area. For this reason, not all the points of intersection between the different straight lines will be vertices of the validity region. To determine which ones are the vertices of the validity region, the following will be done:
• All the intersection points are calculated between the different straight lines.
• Those intersection points where all the restrictions are fulfilled simultaneously will be the vertices of the validity area (which also allows us to visualize it, if we have not done so before).
Thus, we are going to calculate all the intersection points between the different straight lines.
• $f$ with $g$ will cross at point $(x_0,y_0)$. We calculate the coordinates of the point as has already been done before: $$f(x_0)=g(x_0) \Rightarrow 3+x_0=-x_0+8 \Rightarrow x_0=\dfrac{5}{2}$$ And the coordinate $y$: $$y_0=f(x_0)=g(x_0)=\dfrac{11}{2}$$ Therefore the coordinates of the point of intersection are: $(x_0,y_0)=(\dfrac{5}{2}, \dfrac{11}{2})$.
• $f$ with $h$ will cross at point $(x_1,y_1)$. We calculate the coordinates of the point as has already been done before: $$f(x_1)=h(x_1) \Rightarrow 3+x_1=x_1-3 \Rightarrow 3=-3$$ We see that, when we try to find the coordinate $x$ of the intersection point, an equation appears that cannot be satisfied. This means that the two straight lines are in fact parallel (and therefore they never cross).
• $f$ with $i$ will cross at point $(x_2,y_2)$. The straight line $f(x)=3+x$ will cross the straight line $x=0$ (straight line that coincides with the axis $y$) with point $(x=0,y=f(0))$. That is to say: $(x_2,y_2)=(0,3)$.
• $f$ with $j$ will cross at point $(x_3,y_3)$. We calculate the coordinates of the point as has already been done before: $$f(x_3)=j(x_3) \Rightarrow 3+x_3=0 \Rightarrow x_3=-3$$ And the coordinate $y$, $$y_3=f(x_3)=j(x_3)=0$$ Therefore the coordinates of the point of intersection are: $(x_3,y_3)=(-3,0)$.
• $g$ with $h$ will cross at point $(x_4,y_4)$. We calculate the coordinates of the point as has already been done before: $$g(x_4)=h(x_4) \Rightarrow -x_4+8=x_4-3 \Rightarrow x_4=\dfrac{11}{2}$$ And the coordinate $y$: $$y_4=g(x_4)=h(x_4)=\dfrac{5}{2}$$ Therefore the coordinates of the point of intersection are: $(x_4,y_4)=(\dfrac{11}{2},\dfrac{5}{2})$.
• $g$ with $i$ will cross at point $(x_5,y_5)$. The straight line $i$ tells us $x=0$, therefore the intersection point will be: $(0,g(0))=(0,8)$. Therefore the coordinates of the point of intersection are: $(x_5,y_5)=(0,8)$.
• $g$ with $j$ will cross at point $(x_6,y_6)$. We calculate the coordinates of the point as has already been done before: $$g(x_6)=j(x_6) \Rightarrow -x_6+8=0 \Rightarrow x_6=8$$ And the coordinate $y$: $$y_6=g(x_6)=j(x_6)=0$$ Therefore the coordinates of the point of intersection are: $(x_6,y_6)=(8,0)$.
• $h$ with $i$ will cross at point $(x_7,y_7)$. The straight line $i$ says to us that $x=0$, therefore the intersection point will be: $(0,h(0))=(0,-3)$. Therefore the coordinates of the point of intersection are: $(x_7,y_7)=(0,-3)$.
• $h$ with $j$ will cross at point $(x_8,y_8)$. We calculate the coordinates of the point as has already been done before: $$h(x_8)=j(x_8) \Rightarrow x_8-3=0 \Rightarrow x_8=3$$ And the coordinate $y$: $$y_8=h(x_8)=j(x_8)=0$$ Therefore the coordinates of the point of intersection are: $(x_8,y_8)=(3,0)$.
• $i$ with $j$ will cross at point $(x_9,y_9)$. The straight line $i$ tells us that $x=0$, and $j$ that $y=0$, therefore the coordinates of the point of intersection are: $(x_9,y_9)=(0,0)$.
Determination of the vertices of the validity area:
We have nine intersection points between the straight lines. As we said before, we have to check at which points all the inequations are satisfied; these will be the vertices of the validity region.
$$\begin{array}{l} (x_0,y_0)=(\dfrac{5}{2}, \dfrac{11}{2})\ \text{ all the inequalities are satisfied.} \\ (x_2,y_2)=(0,3)\ \text{ all the inequalities are satisfied.}\\ (x_3,y_3)=(-3,0)\ \text{ the inequality } x\geqslant0 \text{ is not satisfied.} \\ (x_4,y_4)=(\dfrac{11}{2},\dfrac{5}{2})\ \text{ all the inequalities are satisfied.} \\ (x_5,y_5)=(0,8)\ \text{ the inequality } x+3\geqslant y \text{ is not satisfied.} \\ (x_6,y_6)=(8,0)\ \text{ the inequality } y\geqslant x-3 \text{ is not satisfied.} \\ (x_7,y_7)=(0,-3)\ \text{ the inequality } y\geqslant 0 \text{ is not satisfied.} \\ (x_8,y_8)=(3,0)\ \text{ all the inequalities are satisfied.}\\ (x_9,y_9)=(0,0)\ \text{ all the inequalities are satisfied.} \end{array}$$
Therefore the vertices of the region of validity are: $$(x_0,y_0)=(\dfrac{5}{2}, \dfrac{11}{2}) \quad (x_2,y_2)=(0,3)\quad (x_4,y_4)=(\dfrac{11}{2},\dfrac{5}{2})$$ $$(x_8,y_8)=(3,0)\quad (x_9,y_9)=(0,0)$$
To sum up, if we have several inequalities simultaneously, each one determines a half-plane where it is satisfied. The intersection of these half-planes (the region common to all of them) is called the feasible region: the region where all the inequalities are satisfied simultaneously. This region may or may not be bounded.
We determine the vertices of the region of validity by finding the points of intersection of the straight lines two by two. If we have two straight lines: $$f(x)=ax+b \qquad g(x)=cx+d$$
The intersection point will be $(x_0,f(x_0))$ or equivalently $(x_0,g(x_0))$. To determine the intersection point we do: $$f(x_0)=g(x_0) \Rightarrow ax_0+b=cx_0+d$$
The solution to this equation is: $$x_0=\dfrac{d-b}{a-c}$$
And the functions $f(x)$ and $g(x)$ take in this point the value: $$f(x_0)=g(x_0)=\dfrac{ad-bc}{a-c}$$
So the intersection point of the two straight lines is: $$\Big( x_0=\dfrac{d-b}{a-c}, y_0=\dfrac{ad-bc}{a-c} \Big)$$
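A small Python sketch (my own, not part of the original text) of these formulas: compute the intersection of two lines, then test candidate vertices against the inequalities quoted in the example above.

```python
def intersection(a, b, c, d):
    """Intersection of y = a*x + b and y = c*x + d, via x0 = (d-b)/(a-c)."""
    if a == c:
        raise ValueError("parallel lines have no single intersection point")
    x0 = (d - b) / (a - c)
    y0 = (a * d - b * c) / (a - c)
    return x0, y0

# g(x) = -x + 8 and h(x) = x - 3 from the example intersect at (11/2, 5/2):
print(intersection(-1, 8, 1, -3))  # (5.5, 2.5)

# A point is a vertex of the feasible region only if it satisfies every
# inequality of the system (here, the ones quoted in the example).
inequalities = [
    lambda x, y: x >= 0,
    lambda x, y: y >= 0,
    lambda x, y: x + 3 >= y,
    lambda x, y: y >= x - 3,
]
print(all(ok(3, 0) for ok in inequalities))  # True:  (3, 0) is feasible
print(all(ok(8, 0) for ok in inequalities))  # False: (8, 0) violates y >= x-3
```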
In this way we calculate the vertices of the region of validity or feasible region.
http://math.stackexchange.com/questions/375/are-x-cdot-0-0-x-cdot-1-x-and-x-x-axioms/37568 | # Are $x \cdot 0 = 0$, $x \cdot 1 = x$, and $-(-x) = x$ axioms?
Context: Rings.
Are $x \cdot 0 = 0$ and $x \cdot 1 = x$ and $-(-x) = x$ axioms?
Arguably three questions in one, but since they all are properties of the multiplication, I'll try my luck...
-
your second one isn't usually true... – Eric O. Korman Jul 21 '10 at 19:47
Axioms for what? – Scott Morrison Jul 21 '10 at 20:48
I will assume this is in the context of rings (e.g., real numbers, integers, etc). In this case, the axiom defining $0$ is that $x + 0 = x$ for all $x$. $x*0 = 0$ is a result of this since we have $x*0 = x*(0+0) = x*0 + x*0$ which implies $x*0 = 0$ (canceling one of the $x*0$'s).
I am guessing that for the second one you mean $x*1 = x$. This is a definition (axiom).
The third one is a consequence of the definition of $-x$ being the element such that $x + (-x) = 0$. For then we have $(-x) + x$ is also zero so that $x$ is the negative of $-x$.
-
Your assumption is correct! – guillermooo Jul 21 '10 at 19:58
in other words, only the second one is an axiom. – mau Jul 21 '10 at 20:04
@guillermooo - Better to edit your question to state the assumption explicitly. P.S. That's you from Sublime right? :) – Edan Maor Jul 22 '10 at 10:28
It's me, yeah! :) – guillermooo Jul 22 '10 at 22:36
The question is more profound than it initially seems, and is really about algebraic structures. The first question you have to ask yourself is where you're working:
In general, addition and multiplication are defined on a structure, which in this case is a set $R$ (basically a collection of "things") with two operators we call addition (marked $+$) and multiplication (marked $\cdot$ or $\times$ or $\ast$ or whatever). If this structure satisfies some properties, which are sometimes called axioms, then it is called a unital ring. The properties are:
1. The set is closed under the operator $+$. That is, if $a$ and $b$ are in $R$, then $a+b$ is also in $R$.
2. The set has a member which we mark as $0$. It has the property that for every $a$ in $R$, $a+0 = a$ and $0+a = a$.
3. The operation $+$ is commutative: $a+b = b+a$.
4. The operation $+$ is associative: $(a+b)+c = a+(b+c)$.
5. Every member has an additive inverse: for every $a$ in $R$ there is some $b$ in $R$ such that $a+b = 0$ (we mark $b$ as $-a$).
6. The set is closed under the operator $*$. That is, if $a$ and $b$ are in $R$, then $a*b$ is also in $R$.
7. The set has a member which we mark as $1$. It has the properties that for every $a$ in $R$, $a*1 = a$ and $1*a = a$.
8. The operation $*$ is associative: $(a*b) * c = a * (b*c)$.
9. Multiplication is distributive over addition: $a * (b+c) = a*b + a*c$ and $(a+b) * c = a*c + b*c$.
While this is a long list, and introduces the operator $+$ which is not even explicitly mentioned in the question, these properties are quite natural. For example, the integers $\{ \ldots, -2, -1, 0, 1, 2, \ldots\}$ we all know and love indeed form a ring. The real numbers also form a ring (in fact they form a field, which means they hold even more properties).
In regard to your question, the identity $x * 1 = x$ (I assume that's what you meant) is in fact an axiom - it is axiom 7. However, the other two identities are results of the other axioms.
First identity: We use axioms 2 and 9 to get $0 * x = (0+0) * x = 0*x + 0*x$ and then by adding $-(0*x)$ (the additive inverse of $0*x$, from axiom 5) to both sides, $0 = 0*x$.
Second identity: As stated in axiom 5, $-(-x)$ is just a notation used which means "the additive inverse of $-x$". To show that $-(-x) = x$ we need to show that $x$ is in fact the additive inverse of $-x$, or in other words that $x + -x = 0$ and $-x + x = 0$. But that's just what axiom 5 says, so we're done.
Last point: You might be wondering why did we have to go and introduce addition to answer a question about multiplication? Well, it so happens that without addition the other two identities are simply not true. For example, if we look at the positive integers $\{1, 2, 3, \ldots\}$ with only multiplication, then there is no $0$ there! Simply put, this is because the positive integers do not form a ring.
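To complement the derivations above, here is a small numerical sanity check (my own illustration, not part of the answers): in the finite ring $\mathbb{Z}/7\mathbb{Z}$ both derived identities hold for every element.

```python
# Sanity check in the finite ring Z/7Z: x*0 = 0 and -(-x) = x for all x.
n = 7
elements = range(n)

for x in elements:
    assert (x * 0) % n == 0          # first identity: x*0 = 0
    neg_x = (-x) % n                 # the additive inverse of x in Z/nZ
    assert (x + neg_x) % n == 0      # defining property of -x
    assert (n - neg_x) % n == x      # second identity: -(-x) = x
print("all identities verified in Z/7Z")
```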
-
Let $A$ be a ring (which I will assume commutative).
Only the second sentence is an axiom: multiplication has a (provably unique) neutral element, id est, $a \in A$ such that $ax = xa = x$ for every $x \in A$; it's called one and its symbol is $1$.
The others are consequences of axioms:
• The 3rd sentence comes from the existence of an additive inverse for every element in the ring: for every $x \in A$, there is a (unique) element $y \in A$ such that $x + y = y + x = 0$. It is denoted $–x$. From this equality it also follows that the additive inverse of $y$ can only be $x$.
• Analogously to the multiplication axiom above, addition has a neutral element, a (unique) element $b$ such that $x + b = b + x = x$ for each $x \in A$; it's called zero and its symbol is $0$. The first sentence is deduced using the additive-inverse and distributivity axioms: $x · 0 = x · (0 + 0) = x·0 + x·0$, and adding $-(x·0)$ to both sides gives $x · 0 = 0$.
-
x*1 = x is an axiom.
The rest is not.
Let me pencil push the rest :D. It's been a while.
x*0
=x*0+0 ' adding 0 does no harm
=x*0+(x*0+(-(x*0))) ' 0 is the sum of x*0 and its additive inverse
=(x*0+x*0)+(-(x*0)) ' associativity
=x*(0+0)+(-(x*0)) ' distributivity
=x*(0)+(-(x*0)) ' 0+0 is 0
=x*0+(-(x*0)) ' (0) is just 0
=0 ' the sum of x*0 and its own additive inverse is 0
From now on I will write +(-x) simply as -. So we define subtraction as adding the additive inverse. (Is this the standard definition of subtraction?)
Now, let's do -(-x)
-(-x)
=-(-x)+0 'adding 0 does no harm
=-(-x)+(-x+x) '0 is (-x+x) by definition of -x.
I wonder: if the axiom only specifies that (x + -x) is zero, things can get trickier. We would need the commutativity axiom as reinforcement.
=(-(-x)+(-x))+x ' by associative.
=0+x ' the additive inverse of (-x) added to (-x) is 0
=x 'by definition of 0
I sort of like this kind of derivation because it follows the pattern of
A=...=...=...=...=...=C, which is better than going halfway and then using logic to imply things.
-
Answering what you asked, the question is poorly formed because you didn't specify the theory you are talking about. To do that, well, you must define symbols, deduction rules... and axioms.
Answering what you probably meant to ask: no. Those are properties that $-$ and $\times$ have in the set $\mathbb{Z}$ of integers, not axioms. They can be deduced from the operations themselves.
-
https://mathsgenii.com/topic/subset-equality-of-set-proper-subset/ | # Subset, Equality of Set, Proper Subset
If every element of a set $A$ is also an element of a set $B$, then $A$ is called a subset of $B$, and it is denoted as $A \subseteq B$.
Example:
Note:
Hence $B$ is the superset of $A$, written $B \supseteq A$.
Equality of Sets:
Two sets are equal if every element of $A$ belongs to $B$ and every element of $B$ belongs to $A$; i.e., $A = B$ if and only if $A \subseteq B$ and $B \subseteq A$.
Example:
Proper Subset:
$A$ is a proper subset of $B$ (denoted $A \subset B$) if every element of the set $A$ is an element of the set $B$ and $B$ contains at least one element which does not belong to $A$; i.e., $A \subseteq B$ and $A \neq B$.
Example:
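These three notions can be illustrated with Python's built-in set type (my own example; the page's original examples did not survive extraction):

```python
# Subset, equality, and proper subset with Python's set operators.
A = {1, 2}
B = {1, 2, 3}
C = {3, 2, 1}

print(A <= B)   # subset: every element of A is in B -> True
print(B == C)   # equality: same elements regardless of order -> True
print(A < B)    # proper subset: A <= B and A != B -> True
print(B < C)    # B equals C, so it is not a *proper* subset -> False
```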
https://unapologetic.wordpress.com/2011/02/02/generalized-young-tableaux/?like=1&source=post_flair&_wpnonce=d0199d491e | # The Unapologetic Mathematician
## Generalized Young Tableaux
And now we have another generalization of Young tableaux. These are the same, except now we allow repetitions of the entries.
Explicitly, a generalized Young tableau $T$ — we write them with capital letters — of shape $\lambda$ is an array obtained by replacing the points of the Ferrers diagram of $\lambda$ with positive integers. Any skipped or repeated numbers are fine. We say that the “content” of $T$ is the composition $\mu=(\mu_1,\dots,\mu_m)$ where $\mu_i$ is the number of $i$ entries in $T$.
As an example, we have the generalized Young tableau
$\displaystyle\begin{array}{ccc}4&1&4\\1&3&\end{array}$
of shape $(3,2)$ and content $(2,0,1,2)$.
Notice that if $\lambda\vdash n$, then $\mu\vdash n$ as well, since both count up the total number of places in the tableau. Given a partition $\lambda$ and a composition $\mu$, both decomposing the same number $n$, we define $T_{\lambda\mu}$ to be the collection of generalized Young tableaux of shape $\lambda$ and content $\mu$. All the tableaux we’ve considered up until now have content $(1,\dots,1)=(1^n)$.
Now, pick some fixed (ungeneralized) tableau $t$. We can use the same one we usually do, numbering the cells from $1$ to $n$ across each row and from top to bottom, but it doesn't really matter which we use. For our examples we'll pick
$\displaystyle t=\begin{array}{ccc}1&2&3\\4&5&\end{array}$
Using this “reference” tableau, we can rewrite any generalized tableau as a function; define $T(i)$ to be the entry of $T$ in the same place as $i$ is in $t$. That is, any generalized tableau looks like
$\displaystyle\begin{array}{ccc}T(1)&T(2)&T(3)\\T(4)&T(5)&\end{array}$
and in our particular example above we have $T(1)=T(3)=4$, $T(2)=T(4)=1$, and $T(5)=3$. Conversely, any such function assigning a positive integer to each number from $1$ to $n$ can be interpreted as a generalized Young tableau. Of course the particular correspondence depends on exactly which reference tableau we use, but there will always be some such correspondence between functions and generalized tableaux.
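The correspondence between generalized tableaux and functions can be sketched in a few lines of Python (my own illustration, using the example tableau and reference tableau above):

```python
# The generalized tableau 4 1 4 / 1 3 as a function T on the cells 1..5
# of the reference tableau t = 1 2 3 / 4 5.
T = {1: 4, 2: 1, 3: 4, 4: 1, 5: 3}

def content(T, max_entry):
    """Composition mu, where mu[i-1] counts the entries of T equal to i."""
    return tuple(sum(1 for v in T.values() if v == i)
                 for i in range(1, max_entry + 1))

print(content(T, 4))   # (2, 0, 1, 2), matching the content in the post
```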
https://zbmath.org/?q=an%3A0916.35135 | ## Some remarks on the problem of source identification from boundary measurements. (English) Zbl 0916.35135
The authors consider the problem of determining a source term from boundary measurements, in an elliptic problem. The direct and inverse problems are formulated as follows.
Direct problem: Let $$\Omega$$ be a bounded domain in $$\mathbb{R}^d$$, with sufficiently regular boundary $$\Gamma$$. One considers the Poisson equation $-\Delta u= g\quad\text{in }\Omega,\quad \gamma_0 u:= u|_\Gamma= f,\tag{1}$ where $$f$$ and $$g$$ are given in $$H^{{1\over 2}}(\Gamma)$$ and $$L^2(\Omega)$$, respectively. Problem (1) admits a unique solution in the functional space $$H^1(\Delta, \Omega)= \{u\in H^1(\Omega); \Delta u\in L^2(\Omega)\}$$, on which the normal trace $\gamma_1 u:={\partial u\over\partial n}\quad\text{on }\Gamma$ is well defined in $$H^{-{1\over 2}}(\Gamma)$$ as a continuous function of $$u$$. One defines the observation operator $C(u):= \gamma_1u.$ Inverse problem: Given any input data $$f\in H^{{1\over 2}}(\Gamma)$$, and a corresponding observation $$\varphi\in H^{-{1\over 2}}(\Gamma)$$. Can we uniquely determine the source term $$g$$ such that $$C(u)= \varphi$$ on $$\Gamma$$, where $$u$$ is solution of (1)?
The last two sections of the article are dedicated to the problem of identifying the sources when some a priori information is available: (a) separation of variables is possible and one factor of the product is known (Section 3); or (b) in the case of a domain source of cylindrical geometry, the area of the base is known (Section 4).
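As a toy illustration of the direct problem and the observation operator (my own one-dimensional sketch, not from the review): solve $-u''=g$ on $(0,1)$ with homogeneous Dirichlet data by finite differences and read off the boundary fluxes.

```python
# 1-D toy version of the direct problem: solve -u'' = g on (0,1) with
# u(0) = u(1) = 0, then evaluate the "observation" — one-sided difference
# approximations of the boundary fluxes u'(0) and u'(1).

def solve_poisson(g, n=100):
    """Finite-difference solution of -u'' = g, u(0) = u(1) = 0."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    diag = [2.0 / h**2] * (n - 1)        # tridiagonal system for u_1..u_{n-1}
    rhs = [g(x[i]) for i in range(1, n)]
    off = -1.0 / h**2
    for i in range(1, n - 1):            # forward elimination (Thomas algorithm)
        m = off / diag[i - 1]
        diag[i] -= m * off
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * (n + 1)
    u[n - 1] = rhs[-1] / diag[-1]
    for i in range(n - 2, 0, -1):        # back substitution
        u[i] = (rhs[i - 1] - off * u[i + 1]) / diag[i - 1]
    return x, u

# For g = 2 the exact solution is u(x) = x(1-x), so u'(0) = 1 and u'(1) = -1.
x, u = solve_poisson(lambda t: 2.0)
h = x[1] - x[0]
print(u[1] / h, (u[-1] - u[-2]) / h)     # approximately 1 and -1
```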
### MSC:
- 35R30 Inverse problems for PDEs
- 35J05 Laplace operator, Helmholtz equation (reduced wave equation), Poisson equation
https://zbmath.org/?q=an:0987.51012 | zbMATH — the first resource for mathematics
Minkowski geometric algebra of complex sets. (English) Zbl 0987.51012
For subsets $$A$$ and $$B$$ of the field of complex numbers two operations are defined, the sum of $$A$$ and $$B$$ as $$\{a+b : a\in A$$, $$b\in B\}$$, and their product (by replacing + with $$\times$$ in the definition). The sum has been introduced by H. Minkowski in 1903 and studied extensively.
The authors devote this paper to the study of the product operation, in particular to the effect multiplication by lines and circles has on various curves. The motivation behind this study lies in the potential applications to “a generalization of real interval arithmetic to the complex domain, reflection and refraction of wavefronts in geometrical optics, stability characterization of multi-parameter control systems,” mathematical morphology, and in the expectation that many more such applications will be found.
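The two operations are easy to illustrate on finite sets of complex numbers (my own toy example, not from the review):

```python
# Minkowski sum and product of two finite sets of complex numbers.
def minkowski_sum(A, B):
    return {a + b for a in A for b in B}

def minkowski_product(A, B):
    return {a * b for a in A for b in B}

A = {1, 1j}
B = {2, -2}
print(minkowski_sum(A, B))      # {3, -1, (2+1j), (-2+1j)}
print(minkowski_product(A, B))  # {2, -2, 2j, -2j}
```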
MSC:
- 51N20 Euclidean analytic geometry
- 51M15 Geometric constructions in real or complex geometry
- 53A04 Curves in Euclidean and related spaces
- 65D18 Numerical aspects of computer graphics, image analysis, and computational geometry
- 65E05 General theory of numerical methods in complex analysis (potential theory, etc.)
- 65G40 General methods in interval analysis
http://dml.cz/dmlcz/145754 | # Article
Keywords:
summation equation; sign-changing kernel; discrete fractional calculus; positive solution; nonlocal boundary condition
Summary:
We consider the summation equation, for $t\in[\mu-2,\mu+b]_{\mathbb{N}_{\mu-2}}$, \begin{align*} y(t)=\gamma_1(t)H_1\left(\sum_{i=1}^{n}a_iy\left(\xi_i\right)\right) &+ \gamma_2(t)H_2\left(\sum_{i=1}^{m}b_iy\left(\zeta_i\right)\right)\\ &+ \lambda\sum_{s=0}^{b}G(t,s)f(s+\mu-1,y(s+\mu-1)) \end{align*} in the case where the map $(t,s)\mapsto G(t,s)$ may change sign; here $\mu\in(1,2]$ is a parameter, which may be understood as the order of an associated discrete fractional boundary value problem. In spite of the fact that $G$ is allowed to change sign, by introducing a new cone we are able to establish the existence of at least one positive solution to this problem by imposing some growth conditions on the functions $H_1$ and $H_2$. Finally, as an application of the abstract existence result, we demonstrate that by choosing the maps $t\mapsto\gamma_1(t)$, $\gamma_2(t)$ in particular ways, we can recover the existence of at least one positive solution to various discrete fractional- or integer-order boundary value problems possessing Green's functions that change sign.
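As a purely illustrative numerical sketch (all concrete choices of $G$, $f$, $H_1$, $H_2$, and the nonlocal points below are mine, not the paper's), one can look for a solution of an equation of this general shape by Picard iteration on a finite grid, provided the coefficients are small enough to make the right-hand side a contraction:

```python
import math

# Toy summation equation of the general shape
#   y(t) = g1(t)*H1(sum a_i y(xi_i)) + g2(t)*H2(sum b_i y(zeta_i))
#          + lam * sum_s G(t,s) * f(s, y(s))
# solved by Picard (fixed-point) iteration. Every concrete choice is made up.
T = range(10)
lam = 0.05
G  = lambda t, s: math.sin(t - s)   # a kernel that changes sign
f  = lambda s, y: y / (1 + y * y)
H1 = lambda v: 0.2 * v
H2 = lambda v: 0.1 * v
g1 = lambda t: 1.0 / (1 + t)
g2 = lambda t: 0.5

def rhs(y):
    s1 = H1(y[2] + y[5])            # plays the role of sum a_i y(xi_i)
    s2 = H2(y[3])                   # plays the role of sum b_i y(zeta_i)
    return [g1(t) * s1 + g2(t) * s2 +
            lam * sum(G(t, s) * f(s, y[s]) for s in T)
            for t in T]

y = [1.0] * len(T)
for _ in range(300):                # small coefficients => a contraction
    y = rhs(y)
residual = max(abs(a - b) for a, b in zip(y, rhs(y)))
print(residual < 1e-6)              # y is (numerically) a fixed point
```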
| 2016-10-27 01:12:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7340103387832642, "perplexity": 2703.642170372393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721027.15/warc/CC-MAIN-20161020183841-00341-ip-10-171-6-4.ec2.internal.warc.gz"}
https://openreview.net/forum?id=B1xmOgrFPS | ## Meta-RCNN: Meta Learning for Few-Shot Object Detection
Sep 25, 2019 Blind Submission readers: everyone Show Bibtex
• TL;DR: We develop Meta-RCNN which learns both the object classifier and the region proposal network via meta-learning in order to do few-shot detection
• Abstract: Despite significant advances in object detection in recent years, training effective detectors in a small data regime remains an open challenge. Labelling training data for object detection is extremely expensive, and there is a need to develop techniques that can generalize well from small amounts of labelled data. We investigate this problem of few-shot object detection, where a detector has access to only limited amounts of annotated data. Based on the recently evolving meta-learning principle, we propose a novel meta-learning framework for object detection named "Meta-RCNN", which learns the ability to perform few-shot detection via meta-learning. Specifically, Meta-RCNN learns an object detector in an episodic learning paradigm on the (meta) training data. This learning scheme helps acquire a prior which enables Meta-RCNN to do few-shot detection on novel tasks. Built on top of the Faster RCNN model, in Meta-RCNN, both the Region Proposal Network (RPN) and the object classification branch are meta-learned. The meta-trained RPN learns to provide class-specific proposals, while the object classifier learns to do few-shot classification. The novel loss objectives and learning strategy of Meta-RCNN can be trained in an end-to-end manner. We demonstrate the effectiveness of Meta-RCNN in addressing few-shot detection on the Pascal VOC dataset and achieve promising results.
• Keywords: Few-shot detection, Meta-Learning, Object Detection
• Original Pdf: pdf
0 Replies | 2020-01-23 04:53:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3078497648239136, "perplexity": 3912.4405895297205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608295.52/warc/CC-MAIN-20200123041345-20200123070345-00425.warc.gz"} |
https://rebelsky.cs.grinnell.edu/~rebelsky/Courses/CSC207/2019S/01/labs/debugging.html | # Lab: Debugging with Eclipse
Summary
We begin to explore the ways in which we can use a debugger to better understand flaws in our code.
Repository
https://github.com/Grinnell-CSC207/lab-unit-testing-2019
## Preparation
In the laboratory on unit testing, you forked and cloned the repository https://github.com/Grinnell-CSC207/lab-unit-testing-2019. You’ll work with that same repository. (So return to the directory if you have it, and make a new copy if you don’t.)
## Exercises
### Exercise 1: Removing A’s
As you may have noted in the laboratory on unit testing, the procedure SampleMethods.removeAs is not quite successful in its attempt to remove all copies of the letter ‘a’ from its parameter string.
If you haven’t yet written your test cases, here’s one.
@Test
public void testRemoveAs() {
  assertEquals("",
               SampleMethods.removeAs(""),
               "empty string");
  assertEquals("hello",
               SampleMethods.removeAs("hello"),
               "no as");
  assertEquals("",
               SampleMethods.removeAs("a"),
               "eliminate one a");
  assertEquals("",
               SampleMethods.removeAs("aaaa"),
               "eliminate many as");
  assertEquals("pin",
               SampleMethods.removeAs("pain"),
               "eliminate one a, short string");
  assertEquals("lphbet",
               SampleMethods.removeAs("alphabet"),
               "eliminate many as, medium string");
  assertEquals("BCDEFGHIJKLMNOPQ",
               SampleMethods.removeAs("aBaaCDaaaEFGaaaaHIJKaaaaLMNaaaOPaaQa"),
               "eliminate many as, silly string");
  assertEquals("bbb",
               SampleMethods.removeAs("aaabbbaaa"),
               "eliminate prefix and suffix as");
} // testRemoveAs
You may be able to tell by inspection why the method fails. But let’s assume that you don’t.
Open the code for removeAs and right click in the grey bar to the left of the code to set a breakpoint at the start of the method.
Create a new main class that duplicates the failed calls to removeAs from your unit tests.
Select Run > Debug As > Java Application.
A dialog box should pop up asking you to confirm switching to the Java perspective.
If all goes well, Eclipse should stop at the point that you inserted a breakpoint.
a. What do you expect to happen if you click the “Resume” button - the button that looks like a green triangle. (Note that in the future, you can also hit F8.)
As you may have noted, Eclipse resumed computation and ran until the completion of this program. (Presumably, with incorrect output.) To see the results, you may need to switch back to the Java perspective. You can get that perspective by clicking on the downward arrow in the upper-right-corner of the screen.
c. Start the program again. This time, let’s single step through the procedure, using the “Step Over” button (also F6). See if you can identify where the code goes wrong.
d. Correct the code to the best of your ability, remove the breakpoint, run the unit tests again, and see if your code passes all of the tests.
If so, go on to the next exercise. If not, repeat the debugging steps until you find the next bug.
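For reference, here is one common way this kind of bug arises, together with a corrected version. This is a hypothetical reconstruction; the repository's actual SampleMethods.removeAs may be written differently.

```java
public class SampleMethodsSketch {
  // Buggy version: deleting in place while advancing the index skips the
  // character that slides into the deleted slot, so runs of 'a's survive.
  static String removeAsBuggy(String str) {
    StringBuilder result = new StringBuilder(str);
    for (int i = 0; i < result.length(); i++) {
      if (result.charAt(i) == 'a') {
        result.deleteCharAt(i); // next iteration skips the shifted character
      }
    }
    return result.toString();
  }

  // Corrected version: build a new string, copying only non-'a' characters.
  static String removeAs(String str) {
    StringBuilder result = new StringBuilder();
    for (int i = 0; i < str.length(); i++) {
      char c = str.charAt(i);
      if (c != 'a') {
        result.append(c);
      }
    }
    return result.toString();
  }
}
```

Single-stepping through the buggy loop makes the skipped index visible immediately, which is exactly what the debugger steps above let you observe.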
### Exercise 2: Removing B’s
The removeBs procedure has much the same goals as removeAs although it uses a different (but still buggy) approach.
Use JUnit and the Eclipse debugger to identify and correct the errors.
Note: Your goal is to correct the errors in this approach. Inserting slightly modified code from removeAs is not an acceptable strategy.
### Exercise 3: Exponentiation
The SampleMethods.expt method computes x^p using a divide-and-conquer approach.
• x^0 = 1
• x^(2k) = x^k * x^k
• x^(2k) = (x^2)^k
• x^(k+1) = x * x^k

Some people combine the last two when dealing with an odd exponent.
This approach requires only log_2(p) multiplications to raise x to the p-th power, while the naive loop requires p multiplications. (Of course, if you have a book of tables, or functions that simulate those tables, you can compute x^p in two table lookups.)
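The identities above translate directly into code. Here is a correct reference implementation (a sketch; the lab's actual SampleMethods.expt is organized differently and contains the bug you are about to hunt):

```java
public class ExptReference {
  // Divide-and-conquer exponentiation: O(log p) multiplications.
  static int expt(int x, int p) {
    if (p == 0) {
      return 1;                    // x^0 = 1
    } else if (p % 2 == 0) {
      return expt(x * x, p / 2);   // x^(2k) = (x^2)^k
    } else {
      return x * expt(x, p - 1);   // x^(k+1) = x * x^k
    }
  }
}
```
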
It’s a nice approach, but have we implemented it correctly?
a. If you haven’t done so already, write unit tests for SampleMethods.expt(int,int).
assertEquals(1024, expt(2, 10), "1K");
b. Run the test. It will likely fail.
c. Since the test failed, it will be useful to write a short experiment to just do that one call.
/**
 * A quick experiment with the expt method.
 */
import java.io.PrintWriter;

public class ExptExpt {
  static final int base = 2;
  static final int expt = 10;

  public static void main(String[] args) throws Exception {
    PrintWriter pen = new PrintWriter(System.out, true);
    pen.println(base + "^" + expt + " = " + SampleMethods.expt(base, expt));
    pen.close();
  } // main(String[])
} // class ExptExpt
d. Set a breakpoint at the start of the expt method. (Make sure that you choose the right one. There are two!)

e. Start the debugger. It should bring you to the first line of expt.

f. What do you expect to happen if you click the “Resume” button? (The button that looks like a green triangle.)

g. You may have discovered that instead of returning to the call in the unit test, the debugger continued executing the code until the next call to expt, which is a recursive call. Hit the “Resume” button another time.

h. You are now three levels deep in the recursive call stack for expt. In the “Debug” pane, navigate between them to see the changing values of x and p.

i. Single step through the code to see if you can identify where the error occurs.

j. Since intermediate values are not clearly represented in the code, you may find it difficult, if not impossible, to quickly identify the error. So what next? You could explicitly insert temporary variables for the recursive calls: instead of calling return in each case, set a local variable (e.g., result) and return it at the end. Or you could get Eclipse to behave better.
Choose one approach and see if you can identify the error. Get help if you’re not sure which approach you should use or if you still can’t identify the error after trying additional approaches.
## For those with extra time
### Extra 1: Exponentiation
Consider the expt(double, int) method. As you might have noted, it doesn’t work any more correctly than the old version of expt.
One issue we may hit in unit testing is that doubles are approximate. Hence, slightly different orders of computation can make slight differences in the result (e.g., in practice Math.sqrt(2)*Math.sqrt(2) is often not the same as Math.sqrt(2*2), even though they are logically the same).
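The standard fix in tests is to compare doubles within a tolerance rather than with exact equality; JUnit's assertEquals has overloads that take a delta argument for exactly this reason. The idea, as a minimal sketch:

```java
public class DoubleCompare {
  // Exact == comparison on doubles is fragile; compare within a tolerance.
  static boolean nearlyEqual(double a, double b, double epsilon) {
    return Math.abs(a - b) <= epsilon;
  }

  public static void main(String[] args) {
    double viaSqrt = Math.sqrt(2) * Math.sqrt(2);
    System.out.println(viaSqrt == 2.0);                  // often false
    System.out.println(nearlyEqual(viaSqrt, 2.0, 1e-9)); // true
  }
}
```

In a JUnit test the same idea is written as assertEquals(expected, actual, delta).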
Write appropriate unit tests for this alternate version. Then determine if your corrections from the exercise above suffice. If not, use the debugger to figure out why. | 2020-10-30 11:33:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39288651943206787, "perplexity": 1870.4249004272015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910204.90/warc/CC-MAIN-20201030093118-20201030123118-00271.warc.gz"} |
https://computers.tutsplus.com/tutorials/how-to-use-automation-apps-to-quickly-type-international-characters--cms-29767 | # How to Use Automation Apps to Quickly Type International Characters

If you’re in the American or English bubble, it’s easy to think that the QWERTY keyboard is completely sufficient for everyone’s computer needs. It’s got all the letters and common punctuation marks there for you to use, right? Right. Except, there’s a problem; they’re only the letters and common punctuation marks for English. Even other languages that use the Latin characters, like German, French or Spanish, have a few more common symbols like the umlaut (ü), acute accent (á) and cedilla (ç) that don’t feature on the QWERTY keyboard.

Although they’re not directly on the keyboard, you can type any of these symbols on a Mac, but the process is slow and breaks your typing flow. To type a letter with an accent, hold down that letter key on the keyboard for about a second, then select the alternate letter, either with your mouse or by pressing the corresponding number key. This is fine if you’re only typing the occasional accented letter, but if you need to type them on a daily basis this is going to get annoying fast.

Another bad option is to get a regional keyboard. The AZERTY keyboard used in France makes accents much quicker to type (Shift-2 gets you an é for example), but then you’re stuck using an AZERTY keyboard. This works if you’re living in France, but if you’re anywhere that QWERTY is the standard, switching between AZERTY and QWERTY every time you need to use a different computer is really not that feasible.

The best solution is to use one of the great automation apps available for macOS. You can use either TextExpander or Keyboard Maestro depending on your exact needs.
## TextExpander Public Snippet Groups

I'll begin by looking at TextExpander. The basics of TextExpander are that you type an abbreviation, say .pn, and it gets expanded into a Snippet, in this case it would probably be my phone number. I’ve covered TextExpander in a lot of depth before so if you’re completely unfamiliar with the app, check out my series on it starting with TextExpander: An Introduction.

Since I wrote those articles TextExpander has moved from a paid upgrade model to a subscription service. It now costs $40 a year or $4.16 a month. There’s a 30-day free trial so you can check it out before committing to a subscription. Sign up and download TextExpander before continuing.

While the idea of paying monthly may turn you off the app, one of the new features they’ve introduced is Public Snippet Groups that solve the international character problem. A Public Snippet Group is a collection of related snippets that someone has put together and made available to other TextExpander users. There are groups for things like common typos, Emoji and, critically for us, accented words and common typos in a few foreign languages.

To add a Public Group to TextExpander, head to the Public Groups page on TextExpander’s website, find the group you want and click Subscribe. I’m using Accented Words for this example. Click Subscribe to Group and it will automatically sync with the TextExpander app on the Mac.

Now when you type a word that takes an accent without the accents, TextExpander automatically adds them. For example, if you type smorgasbord it’s automatically changed to smörgåsbord.

The one problem with using TextExpander is that you’re limited to the words in a Public Group or the Snippets you add yourself. If you need to type the same collection of words over and over again, it works great, but if you need something more flexible we need to look elsewhere.

## Make Your Own Text Expansion in Keyboard Maestro

Keyboard Maestro is my favourite Mac app.
It’s able to automate pretty much any feature you want. Along with other instructors, I’ve covered it quite a lot here at Tuts+. If it’s new to you, the best place to start is with Keyboard Maestro I: Introduction. It’s $36 but it’s worth every penny. There’s also a free trial so you can check it out before buying.
One of the best features of Keyboard Maestro is that you can create keyboard shortcuts that can do almost anything, including insert text. I’ve covered how to do that before so read that before continuing if you’re not familiar with the app.
I'll use the three accented French es—é, è and ê—as an example. I want to create a shortcut that I can quickly type that will insert the right accent.
Open Keyboard Maestro and create a new Macro. Let’s call it é.
Give it a new Typed String Trigger and enter 'e. Make sure Simulate 2 Deletes Before Executing is checked.
Add a New Action and select Insert Text by Typing from the Text category. For the text you want to insert, add the accented e: é.
And that’s it. Now when you type 'e it automatically gets replaced with é. Next, repeat the process for the other two es: I’d suggest using e' for è and 'e' for ê.
To finish, create similar shortcuts for the other special characters you need to type regularly.
The big advantage of using Keyboard Maestro like this is it’s flexible. You can insert an accented letter or other symbol at any time with a few quick keystrokes. There’s no holding down letters so it doesn’t break your flow and you’re not relying on an incomplete wordlist.
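As an aside, the replacement logic itself is simple. Here is an illustrative sketch in code (Keyboard Maestro does this at the system level; this is purely to show why the trigger choice matters: the three-character trigger 'e' has to be checked before the two-character triggers it contains):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccentExpander {
  // Replacement table mirroring the shortcuts above; insertion order matters,
  // so the longest trigger is checked first.
  static final Map<String, String> RULES = new LinkedHashMap<>();
  static {
    RULES.put("'e'", "\u00ea"); // ê
    RULES.put("'e", "\u00e9");  // é
    RULES.put("e'", "\u00e8");  // è
  }

  static String expand(String text) {
    for (Map.Entry<String, String> rule : RULES.entrySet()) {
      text = text.replace(rule.getKey(), rule.getValue());
    }
    return text;
  }
}
```
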
## Wrapping Up
Both methods I’ve covered today have their place. TextExpander is quicker and easier to start with as long as there is a Public Group with the right Snippets. If there’s not, it will take you quite a while to build one up.
Keyboard Maestro is much more flexible. You can quickly type the characters you need at any time. You could even build a huge wordlist if you wanted to, although it would take a lot of work. | 2021-05-14 07:43:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3216567039489746, "perplexity": 1493.2438233547448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00108.warc.gz"} |
https://bitworking.org/news/2003/10/proposed_changes_to_draft_gregorio_07/ | # Proposed changes to draft-gregorio-07
I've been discussing search as being the most angst filled facet. After having tossed around a couple ideas, here is a concrete proposal for how to change the spec in the next revision. While I'm at it let's slip in another proposal for the 'createEntry' facet too.
First, a quick review: the Introspection file lists all the facets that an implementation of the AtomAPI implements.
<?xml version="1.0" encoding='utf-8'?>
<introspection xmlns="http://purl.org/atom/ns#" >
<create-entry>http://example.org/reilly/</create-entry>
<user-prefs>http://example.org/reilly/prefs</user-prefs>
<search-entries>http://example.org/reilly/search</search-entries>
<edit-template>http://example.org/reilly/templates</edit-template>
<categories>http://example.org/reilly/categories</categories>
</introspection>
The 'search-entries' facet, which is described in Section 5.5 of the draft RFC, gets split into two different facets in the Introspection file. Remove <search-entries/> and add in two new elements: <recent-entries/> and <browse-entries/>.
### recent-entries
The recent-entries facet allows the client to retrieve information about the last N entries posted to the site. This will function just like the 'search-entries' facet except that it will only accept one search parameter, 'atom-last'. The 'atom-last' query parameter is set to the number of recent entries to return. It returns a file in the same format as is currently specified for the 'search-entries' facet.
For example, if the 'recent-entries' element has the URI http://example.org/recent/ for a value, then doing a GET on the URI:
http://example.org/recent/?atom-last=2
Will retrieve descriptions of the last 2 Entries. The results returned would look like:
HTTP/1.1 200 Ok
Content-Type: application/x.atom+xml
<?xml version="1.0" encoding='utf-8'?>
<recent-entries xmlns="http://purl.org/atom/ns#" >
<entry>
<title>My First Post</title>
<id>http://example.org/reilly/1</id>
</entry>
<entry>
<title>My Second Post</title>
<id>http://example.org/reilly/2</id>
</entry>
</recent-entries>
Now there have been questions regarding the use of id and title in the search results. The use of id isn't consistent with the use in the Atom format, or at least it could conflict with the use that the server implementation had chosen. Title is also a bit problematic in that an Entry may not have a title. Here is a proposed alternative:
HTTP/1.1 200 Ok
Content-Type: application/x.atom+xml
<?xml version="1.0" encoding='utf-8'?>
<recent-entries xmlns="http://purl.org/atom/ns#" >
<resource>
<description>My First Post</description>
<link>http://example.org/reilly/1</link>
</resource>
<resource>
<description>My Second Post</description>
<link>http://example.org/reilly/2</link>
</resource>
</recent-entries>
description is a string that is to be displayed to the user when choosing an Entry to edit. It should contain enough information that the user can adequately distinguish between Entries. It could contain the title of the Entry if it had one, but that is just suggested practice and the server completely determines the content of this element.
The link element contains the 'editEntry' URI for the Entry being described.
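As a quick sanity check of this format, a client can pull the descriptions out of the response with any namespace-aware XML parser. A rough sketch (assuming the element names and namespace shown in the example above):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class RecentEntriesClient {
  static final String ATOM_NS = "http://purl.org/atom/ns#";

  // Extract the text of every description element from a recent-entries document.
  static List<String> descriptions(String xml) {
    try {
      DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
      factory.setNamespaceAware(true);
      Document doc = factory.newDocumentBuilder()
          .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
      NodeList nodes = doc.getElementsByTagNameNS(ATOM_NS, "description");
      List<String> out = new ArrayList<>();
      for (int i = 0; i < nodes.getLength(); i++) {
        out.add(nodes.item(i).getTextContent());
      }
      return out;
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```
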
### browse-entries
The browse-entries facet contains a URI of a file in 'archive' format. This is the alternate search mechanism I discussed in Reconsidering Search (Kinda) in the AtomAPI and elaborated on further in a message to the atom-syntax mailing list.
Both the 'recent-entries' facet and the 'browse-entries' facet are optional. This lets the implementation choose which, if any, searching mechanism works best.
### Hinting at a location
The last proposed change is for the 'create-entry' facet. In this case I propose that the <link> element of the POSTed Entry can optionally be filled in with a relative URI path. The value of the <link> element can serve as a hint to the server on what URI to assign the Entry. This could be used by Blosxom to determine the directory to place the entry. The link element is optional and the server may ignore its value when processing a POST to create a new Entry. Here is an example of a POST to create an Entry with the link tag filled in:
POST /some-atom-cgi-handler.cgi HTTP/1.1
Content-Type: application/x.atom+xml
<entry>
<title>Mac OS: less crap</title>
<link>apple/</link>
<content type="application/xhtml+xml" mode="xml">
<div xmlns="http://www.w3.org/1999/xhtml">
The NYT has a glowing
</div>
</content>
<issued>2003-10-23T08:17:00-04:00</issued>
<modified>2003-10-23T08:17:00-04:00</modified>
</entry>
Looking good.
I'm not sure why recent-entries should be treated as a special kind of query, it seems like building in a limitation. Will we also need a last-weeks-entries element, for instance?
The modified entry list looks an improvement, but if you're talking about resources and their description, surely this could be modified a little so that it's also valid RDF.
Nice to see the introspection example is now valid RDF, btw.
Personally I'm not sure about the hint - it seems to me that decisions about location should be entirely down to the server, and if it goes in as a hint, it's only a matter of time before people build systems that rely on it.
Posted by Danny on 2003-10-24
Danny,
Recent-entries is just a list of entries in reverse chronological order, so if you want to go back further in time you request more entries. What I am trying to get to is the simplest thing that could work for clients and not be too much of a burden for servers to implement.
So what are the 'little' changes I can make to the entry list to make it valid RDF?
The problem is that there are systems such as Blosxom that need the location to be specified. Any suggestions on where else that information could go?
Posted by Joe on 2003-10-24
I'm quite concerned that the content and structure of the recent-entries and browse-entries data is so vaguely defined and explicitly left up to the server. This makes life difficult for a client app that wants to download a blog to local storage and allow the user to view and edit it. Such a client will want to give the user a lot of control over how to view data.
By contrast, the browse-entries format is about giving the server control, which means that a client app isn't sure what it's going to get. It really just wants a no-nonsense list of entries with all available metadata that it can use to populate its local database and UI. The more the server tries to get clever about what information to show and how to organize it, the more likely that the client app is going to have trouble extracting what it needs.
This also goes for the completely vague "description" field in the recent-entries results. A client app isn't going to know what to do with this. I guess the best guess is to shove it into the subject field of the incomplete entry; but that leads to the prospect that, after the user decides to view the entry and the client downloads its entire data, the subject will change suddenly. This doesn't make a lot of sense to the user.
IMHO Atom should just expose the raw data behind the blog to an application. Server-side fanciness and formatting should be left up to the CGI code that generates the HTML.
Posted by Jens Alfke on 2003-10-27
"IMHO Atom should just expose the raw data behind the blog to an application."
We now have three suggested interfaces for how to choose an Entry to edit: the original multi-parameter search facet, the simplified 'recent-entries' search facet, and finally a static page navigation (either in the RESTLog archive format I proposed or in Sam Ruby's suggested navigation).
Do you have a concrete suggestion for an alternative mechanism?
Posted by Joe on 2003-10-27
My understanding is that the browse-entries facet provides a way to navigate entries in multiple dimensions (category, timeline, etc.) which are optionally and creatively determined and supported by the server implementation. Is this understanding correct?
It also seems, however, that some of us (including myself) see the need for browsing (searching/navigating) entries remotely (the entries on the server) and locally (entries in the user's local storage) uniformly, using the same set of APIs. Local entries include archived entries, drafted but not yet published entries, and also entries that are created and available on the server. It is, of course, possible, but the client implementation and the server implementation could not then be decoupled.
I have not thought of an concrete alternative (though still trying...)
Posted by anonymous on 2003-10-27
Posted by anonymous on 2005-03-06 | 2023-03-30 23:49:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2648916244506836, "perplexity": 1712.3562114653982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00773.warc.gz"} |
http://tiku.21cnjy.com/?mod=quest&channel=4&xd=2&catid=10660 | ## New Target (Go for it) Edition Test Questions
• --Your father’s never visited Hainan, __________ he?
--__________. How he wishes to go there again!
A. has, Yes, he has. B. is, Yes, he is. C. hasn't, No, he hasn't. D. has, No, he hasn't.
• Write the phonetic symbols for the underlined letters.
(1) wait / / (2) silk / / (3) snake / / (4) pink / / (5) light / / (6) change / / (7) yellow / / (8) usually / / (9) quiet / / (10) couch / /
• In the modern world, advertisements are everywhere. Some people like them and some dislike them. Act as a critic: share your views on the advantages and disadvantages of advertising. (About 80 words.)
• Based on the passage and the Chinese hints given, write the correct form of each word missing from the passage. Fill in only one word per blank.
At school many things happen to us. We may feel 1 (excited) when we succeed in a school play. We may feel sorry if we lose an important 2 (match). We want to keep the memory in our lives. How to keep memories? Our teacher, Mr. Smith, has taught us how to 3 (remember) things to make our own yearbook. A yearbook is a 4 (kind) of book which is used to record wonderful moments. It's usually made at the end of the year. Last 5 (December), we began to make our yearbook. 6 (First) we chose the persons that had done something 7 (special), then some 8 (students) interviewed them, some 9 (wrote) down their stories, others took photos of them. Finally our teacher helped us to put the things 10 (together). Then we had our first yearbook.
• After many years of study, you must have accumulated many good learning methods. Please introduce five successful learning methods.
• _____________ your grandfather live? He lives next to the library. A. What does B. How is C. Where does D. How old is
• Now, please look at this picture. This girl is my sister. Her name is Sally! She is 13 years old. She is in Class Two, Grade Seven. This boy is my aunt's son. He is my cousin. His name is Jeff. He is an English boy. He is 12 years old. His mother is my aunt. My aunt and uncle are both teachers. This jacket is Jeff's. Its color is black. You can see a green pencil case and a blue notebook on the chair. My sister, Jeff and I are in the same school.
【小题1】How old is Sally?
【小题2】Translate the underlined part into Chinese:
【小题3】What color is Jeff's jacket?
【小题4】Are Jeff’s mother and father teachers?
【小题5】Where are the green pencil case and the blue notebook?
• According to the Chinese hints or the English first letters given, write the correct form of each missing English word in the passage.
Last week, my friends and I talked about the rules in our school. Here are some of our ideas.
At our school, we have to wear uniforms every 【小题1】d______. The problem is that all my classmates think the uniforms are 【小题2】丑的______.We think young people should【小题3】l______ smart and so we would like to wear our own clothes. Our teachers believe that if we did that, we would concentrate more on our clothes than our studies. We don’t【小题4】同意________.We would feel more comfortable and that is 【小题5】g______ for studying. If we can’t do that, we should be【小题6】允许__________to design our own uniforms. That would be a good way to keep both teachers and 【小题7】学生___________ happy.
Besides, vacations should be longer. At present they’re too【小题8】s______. Longer vacations would give us time to do things like volunteering. Last summer I had a(n) 【小题9】机会_________ to volunteer at the local hospital, 【小题10】b________ I couldn’t because I had to go back to school.
• My parents usually take a walk ______ dinner to keep healthy. A. at B. of C. during D. after
• Hobo, _____ wake me up ________ you finish building another house for me. A.doesn’t, until 3 ^) A6 G" B" a4 A3 D B.not, till; N% [( a4 D- G+ `$K& A C.don’t, until! R! X" X L0 A& H" L D.not, until+ c$ T' W( j0 d% f\$ P | 2019-06-25 09:51:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34039628505706787, "perplexity": 12552.43325211112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999817.30/warc/CC-MAIN-20190625092324-20190625114324-00245.warc.gz"} |
https://bayeconsoft.com/html_documentationse15.html | ### 2.11 Plotting
BayES provides functions for producing elementary graphics. The following types of plots are supported:
1. histograms using the hist() function
2. scatter plots (y versus x) using the scatter() function
3. correlograms (acf plots) using the acf() function
4. line plots (y versus x or the values of y versus their row index) using the plot() function
5. kernel density estimates using the kden() function
Section B.16 provides extensive documentation on the plotting functions. The remainder of this section gives a general overview of how plots are handled in BayES and presents some simple examples.
When a plotting function is called in BayES in its simple form, a new figure window is created and the corresponding plot is drawn within this window. Figure windows are named consecutively as "Figure 1", "Figure 2", etc., and these names can be used to interact with them, for example to close them programmatically or save their plots in any of the supported graphics formats. For example, the statement:
close("Figure 2");
will close the figure window with "Figure 2" appearing on its title bar and release the memory occupied by this figure. To prevent extreme use of memory resources for presenting plots, BayES restricts the maximum number of figure windows that are open simultaneously to 20. This number can be changed using the maxfigures() function.
Once a figure window is closed, its name will be reused. For example, if there are currently two figure windows open with titles "Figure 1" and "Figure 3" (the user closed the figure window with title "Figure 2"), the next time a plotting function is used, the title of the new figure window will be "Figure 2". The statement:
close(all);
closes all currently open figure windows and releases resources.
The titles of figure windows can also be used to export the associated plots to the following graphics formats:
1. encapsulated postscript (.eps)
2. portable network graphics (.png)
3. joint photographic experts group (.jpeg)
This is achieved using the export() function, by passing the name of the figure window that contains the plot to be exported as the first argument of the export() function. Section B.4 provides extensive documentation on the export() function.
The five basic plotting functions mentioned above differ in the number of numerical arguments they take, but all of them have the following optional arguments:
• "title"
• "xlabel"
• "ylabel"
• "grid"
• "colors"
These arguments, if provided, must be given after the numerical arguments of the respective plotting function, separated by commas.8 The values of the first four options should be strings and the value of the "colors" option a matrix. The first three options, as presented above, specify the title of the plot and the labels on the ‘x’ and ‘y’ axes, respectively. The value of the "grid" option must be either "on" or "off", with the first value requesting that a grid is drawn on the graph. The last option specifies the colors to be used in the graph and its value must be a matrix with three columns and, possibly, multiple rows. The values in each row represent a color in RGB (red-green-blue) format and should be between zero and one. The first row specifies the background color of the graph9 and the second the color of the axes and text labels. The remaining rows specify the colors to be used when plotting the data.
BayES provides support for figure windows which can contain multiple plots. A call to the multiplot() function will initialize a figure window which can store multiple plots. Subsequent calls to the five basic plotting functions, accompanied by calls to the subplot() function, can be used to populate the spaces of this window with actual plots. See section B.16 for more details.
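As a rough sketch of that workflow (the calls below are illustrative only — the exact argument lists of multiplot() and subplot() are an assumption here; see section B.16 for the real signatures):

```
// Initialize a figure window with room for a 1x2 grid of plots
// (the row/column arguments are assumed -- consult section B.16)
multiplot(1, 2);
// Direct the next plot to the first panel, then draw into it
subplot(1);
hist(x, 20, "title"="Histogram of x");
// Second panel
subplot(2);
kden(x, "title"="Kernel density of x");
```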
Example 2.11 demonstrates how to plot a histogram of a set of values contained in a vector, while example 2.12 shows how to overlay kernel density estimates of the values contained in two vectors. Finally, example 2.13 demonstrates how to plot a set of functions. Sample "4-Plotting data.bsf", located at "$BayESHOME/Samples/1-GettingStarted", contains some more complete examples of using the plotting functions.
Example 2.11
// Set the seed for the random-number
// generator and draw 500 numbers from
// a Gamma(4,2) distribution
seed(42);
x = gamrnd(4,2,500,1);
// Plot a histogram of the values in x
hist(x, 20,
"title"="Histogram of x",
"grid"="on" );
// Export the graph as eps
export( "Figure 1", "./Hist.eps",
"width"=420, "height"=280 );
Example 2.12
// Draw two sets of 500 numbers from
// a Gamma(4,2) distribution
seed(42);
x = gamrnd(4,2,500,1);
y = gamrnd(4,2,500,1);
// Overlay the kernel density estima-
// tes for the values in x and y
kden( [ x, y ],
"title"="Kernel density estimate",
"grid"="on" );
// Export the graph as eps
export( "Figure 1", "./Kden.eps",
"width"=420, "height"=280 );
Example 2.13
// Define x-axis values
x = range(0.01, 4, 0.05);
// Plot the Gamma pdf with varying
// shape parameter
y1 = gampdf(x, 2, 3);
y2 = gampdf(x, 3, 3);
y3 = gampdf(x, 4, 3);
y4 = gampdf(x, 5, 3);
myPlot = plot( [ y1, y2, y3, y4 ], x,
"title" = "Gamma pdfs",
"xlabel" = "x", "ylabel" = "pdf",
"grid"="on" );
// Export the graph as eps
export( myPlot, "./Gampdf.eps",
"width"=420, "height"=280 );
8Optional arguments passed to the plotting functions can be provided in any order, but always come in pairs (eg. "title"="my title").
9Subplots within graphs must have the same background color. The overall background color in graphs that contain multiple subplots is the background color specified for the subplot at the upper left corner of the graph. | 2021-08-03 03:37:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24411465227603912, "perplexity": 604.8738549854385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154420.77/warc/CC-MAIN-20210803030201-20210803060201-00402.warc.gz"} |
http://www.transtutors.com/questions/need-homework-help-for-business-income-tax-1938.htm | # Need homework help for Business Income Tax
I am having trouble figuring out the three attached multiple choice problems for my weekly homework.
Document Preview:
1. Bjorn owns a 35% interest in an S Corporation that earned $200,000 in 2009. He also owns 10% of the stock in a C Corporation that earned $200,000 during the year. The S Corporation distributed $10,000 to Bjorn and the C Corporation paid dividends of $10,000 to Bjorn. How much income must Bjorn report from these businesses? A. $10,000 income from the S Corporation and $10,000 income from the C Corporation. B. $70,000 income from the S Corporation and $10,000 of dividend income from the C Corporation. C. $70,000 income from the S Corporation and $0 income from the C Corporation. D. $0 income from the S Corporation and $0 from the C Corporation. E. None of the above. 2. Elk, a C corporation, has $500,000 operating income and $350,000 operating expenses during the year. In addition, Elk has a $20,000 long-term capital gain and a $52,000 short-term capital loss. Elk’s taxable income is: A. $170,000. B. $150,000. C. $118,000. D. $98,000. E. None of the above. 3. Owl Corporation, a calendar year taxpayer, has a beginning balance in accumulated earnings and profits of $3.5 million and current earnings of $1 million. If Owl can justify accumulations for the needs of the business of $3.7 million, its accumulated earnings credit for ATI purposes is: A. $0. B. $200,000. C. $250,000. D. $3.7 million. E. None of the above.
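For what it's worth, the first two questions come down to simple arithmetic once the usual rules are applied (S-corp income passes through pro rata regardless of distributions; a C corporation may use capital losses only against capital gains). A quick sketch of that arithmetic — an illustration, not tax advice:

```python
# Q1: an S-corp shareholder reports his pro-rata share of income,
# regardless of distributions; a C-corp shareholder reports only
# the dividend actually paid.
bjorn_s_corp = 200_000 * 35 // 100   # 35% share
bjorn_c_corp = 10_000                # dividend received
print(bjorn_s_corp, bjorn_c_corp)    # 70000 10000 -> answer B

# Q2: the net $32,000 capital loss cannot offset ordinary income
# for a C corporation, so only operating income is taxed this year.
operating = 500_000 - 350_000
net_capital_gain = max(20_000 - 52_000, 0)
taxable = operating + net_capital_gain
print(taxable)                       # 150000 -> answer B
```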
Attachments:
• Bjorn owns a 35% interest in an S corporation that earned... (Solved) September 15, 2014
Bjorn owns a 35 % interest in an S corporation that earned $200,000 in 2010. He also owns 10 % of the stock in a... Solution Preview : Answer: Considering the above case it can be stated that Bjorn must report his$70,000 share (35% of $200,000) of the S corporation's income • corproation taxation October 12, 2010 CHAPTER 3 CORPORATIONS: SPECIAL SITUATIONS 1 . Green Corporation, a calendar year taxpayer, has alternative minimum taxable income (before the exemption amount)... • The Wendt Corporation had$10.5 million of taxable income. a. What is... May 07, 2016
The Wendt Corporation had $10 .5 million of taxable income. a. What is the company’s federal income tax bill for the year ? b. Assume the firm... • FEDERAL TAXATION CORPORATION November 29, 2010 of interior All 50 states 35 ,000,000 Total$ 205,000,000 Problem 40 LO.7 Hernandez which has been an S corporation since inception, is subject to tax in States Y and Z. On...
• The Talley Corporation had a taxable income of $365,000 from operations... May 07, 2016 The Talley Corporation had a taxable income of$365,000 from operations after all operating costs but before ( 1 ) interest charges of \$50,000, (2)... | 2017-08-23 00:25:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1927296221256256, "perplexity": 13317.609291638131}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.70/warc/CC-MAIN-20170823000718-20170823020718-00306.warc.gz"} |
https://mathoverflow.net/questions/362927/on-examples-of-subspaces-of-cx-for-which-state-spaces-are-choquet-simplices | # On examples of subspaces of $C(X)$ for which state spaces are Choquet Simplices
Let $$C(X)$$ be the Banach space of all real valued continuous functions on a compact Hausdorff space $$X$$. What are examples of uniformly closed subspaces $$\mathcal{A}$$ of $$C(X)$$ such that $$\mathcal{A}$$ separates points, contains the constants, and the state space of $$\mathcal{A}$$ is a Choquet simplex?
The state space of $$\mathcal{A}$$ viz. $$S_{\mathcal{A}}$$ is defined as $$\{\Lambda\in\mathcal{A}^*:\|\Lambda\|=1 ~\mbox{and}~ \Lambda (1)=1\}$$. By a Choquet Simplex we mean a compact convex subset $$K$$ of a locally convex topological vector space $$E$$ such that for each $$p\in K$$, there exists a unique measure $$\lambda$$ on $$K$$ such that $$f(p)=\int_K f(t)d \lambda (t)$$, $$\forall~ f\in E^*$$. When $$K$$ is metrisable then $$\lambda$$ can be assumed to satisfy $$S(\lambda)\subseteq ext (K)$$ but for non metrisable case $$S(\lambda)\subseteq \overline{ext(K)}$$, in some sense these measures are 'maximal'. $$ext(K)$$ represents the set of all extreme points of $$K$$.
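For context, the baseline example is $$\mathcal{A}=C(X)$$ itself: by the Riesz representation theorem its state space is the set of regular Borel probability measures on $$X$$, which is a Choquet simplex (in fact a Bauer simplex, since its extreme points — the point masses $$\delta_x$$ — form a closed set). In symbols:

```latex
% States on C(X) correspond to regular Borel probability measures on X:
S_{C(X)} \;\cong\; P(X) \;=\; \{\mu \ \text{regular Borel} : \mu \ge 0,\ \mu(X) = 1\},
% and P(X) is a Choquet simplex with ext(P(X)) = \{\delta_x : x \in X\}.
```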
• Yes, but I need an example with Real scalar. The space $C(X)$ I considered with Real scalars. – Tanmoy Paul Jun 16 '20 at 6:58 | 2021-08-01 03:11:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9483965635299683, "perplexity": 80.7738277856893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00092.warc.gz"} |
https://tex.stackexchange.com/questions/338005/typesetting-a-big-sum-with-index-next-to-it?noredirect=1 | # Typesetting a big sum with index next to it [duplicate]
How can I typeset a big summation with the index next to it (like the one in the photo below) instead of underneath? I need help please :)
• This yields a small summation not a big one . Besides, I use align environment. – Hussein Eid Nov 7 '16 at 18:41
• \sum\nolimits_{n} – egreg Nov 7 '16 at 18:43
• although this is most peculiar, it can be obtained by \displaystyle\sum\nolimits_n – barbara beeton Nov 7 '16 at 18:45
• I am deleting my earlier comment, since I misunderstood and provided a misleading comment. – Steven B. Segletes Nov 7 '16 at 18:46
The place (above/below or right) for the indices of large operators can be set explicitly using \limits or \nolimits:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
E &= \sum\nolimits_n f(x) \\
E &= 42
\end{align}
\end{document}
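For completeness, the same trick also works in inline (text-style) math, where the sum would otherwise come out small; combining \displaystyle with \nolimits gives a display-sized sum with the index at the side, as suggested in the comments:

```latex
\documentclass{article}
\begin{document}
A big sum with a side index in running text:
$\displaystyle\sum\nolimits_{n} f(x)$.
\end{document}
```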
Code:
\documentclass[]{amsart}
\begin{document}
$E=\sum_n{f(x)}$
\end{document}
This yields:
• This yields a small summation not a big one . Besides, I use align environment. – Hussein Eid Nov 7 '16 at 18:41 | 2021-06-19 19:55:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939928650856018, "perplexity": 3361.595735650019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649688.44/warc/CC-MAIN-20210619172612-20210619202612-00293.warc.gz"} |
https://gamedev.stackexchange.com/questions/167640/how-do-i-compile-a-namespace-to-a-dll-in-unity | # How do I compile a namespace to a .DLL in Unity?
I'm developing a core library that I will use in my other games. I've only heard of using Unity .DLLs recently, and have spent some time putting my core scripts into a namespace. If I understand .DLLs correctly, I should be able to take that .DLL, plop it into one of my other projects, and then immediately start using it.
The question is, how do I compile that namespace into a .DLL that I can use in other projects? I'm using Visual Studio and Unity 2018.2.17f1.
• Does this help? Google is your friend. xinyustudio.wordpress.com/2015/12/09/… – Almo Jan 31 '19 at 14:32
• @Almo Ew, MonoDevelop. That said, it's a similar process in VS. Later this year I think there will be a way to do it inside an existing Unity project (or better management of it) with their new package manager. Jan 31 '19 at 14:57
• Did you follow the "Step by step guide" in the Unity documentation? Did you run into any specific snags in that process? Jan 31 '19 at 16:06
• Using DLLs in Unity for a while now, my advice is to think really hard about whether you need it. The main difference between a (managed) DLL file and just *.cs files is that you can't change the DLL. For every change in there, you'll have to switch to your library project, make that change, build it, copy the new version to Unity, have Unity reload it, and then adjust the Unity code accordingly. No quick refactor=>rename anymore. Plus if you want to have MonoBehaviours in there, all of them are listed as direct children of the DLL on the same level and sorted alphabetically (no folders etc). Jan 31 '19 at 18:02
• Furthermore, the connection with those MonoBehaviours is kinda brittle (I think only by class name?). So whatever advantage you get from DLLs needs to outweigh all of that. One example of that is when you sell it in the asset store, because then you want to prevent 3 hours of troubleshooting only for the customer to say "oh yeah I made this little change, that can't be it right?". If you just have code that you want to reuse in another project, an asset package or literally copy-pasting is easier. And your library probably will grow from project to project. Jan 31 '19 at 18:08
## 1 Answer
Yes, that's basically how it works. But DLLs aren't exclusive to Unity: they are a widespread concept for reusable code components in many different programming languages, and there are many differences in how .dlls are treated depending on the platform.
1. TL;DR:
.NET was originally implemented for Windows only. .NET by itself is not a specific programming language but a whole infrastructure that can be compared to how Java works. The whole idea is to execute code on a virtual CPU. This might not appear like a big deal at first, but it is.
By concept, .NET separates a program and its execution environment. While the .NET platform (the CLR which runs written programs to be more precise) must be installed on a system, programs remain platform independent. This allows you to write code on say Windows and reuse it on mac OS or Linux for example. While this is old news for .NET developers, this might be news for people who know .NET only from Unity.
Let's talk about Mono. Originally, there were no platforms that supported .NET other than different versions of Windows (Windows RT, Windows Mobile, etc.). After a while, some people started to write a runtime which was capable of executing .NET code without relying on Windows. This led to an abstraction of the underlying operating system. You can think of Mono as a cross-platform version of .NET.
Meanwhile, Microsoft is really invested in open source and dropped the 'Windows-only' part of .NET some years ago. That's the story behind .NET Core. Core has basically the same goal as the Mono project but is developed by Microsoft.
Why the history lesson? Because in the context of Unity this really is news. Until the 2018 release, Unity was based on a very old implementation of the Mono runtime. .NET compatibility was there, but only up to version 3.5. That version was released in late 2007 and was never updated in Unity until 2018. This means that hardly any new C# language features were available to Unity developers for over a decade. That's huge.
If you're still reading this, I might as well go ahead and bother you with .NET Standard. Late 2018, .NET Standard 2.1 was released. This is nothing but a definition of language features which must be supported by a .NET runtime (Mono or Microsoft or others) in order to be compatible with a specific version. So instead of .NET 3.5, .NET 4.0 and so on, all runtimes - it doesn't matter if Core, not Core, Microsoft or Mono - are measured against .NET Standard 1.x, .NET Standard 2.0 and .NET Standard 2.1.
You don't have to know all of this information, but why not understand the technology you're working with? You'll see some .NET specific stuff in Unity, so you might as well know the story behind it. In Unity, check the 'Other Settings' panel under Edit / Project Settings / Player to see your .NET configuration:
We finally have support for modern .NET and can use the latest C# language features. Awesome. This is important because you have to consider these things not just when consuming a .dll but especially when writing one. By selecting the right .NET version for your Class Library project in Visual Studio, you define whether your DLL is compatible with Unity and other .NET components. So take that into consideration when creating a new Class Library project. Without this knowledge you'll have a hard time understanding the differences between the many different project templates:
• Portable Library
• Multiplatform Library
• .NET Standard Library
• Class Library
• Library
Make sure to understand which project template is compatible to which .NET version. If you select the wrong library type, Unity won't be able to understand the DLL and that will cost you some research time to figure out why.
Finally the answer
Just make sure to select a compatible Class Library project template and write down your code. The whole namespace thing: A DLL always exports the namespaces just as you use them:
namespace I.Like.Big.Namespaces.And.I.Can.Not {
public class Lie {
}
}
class Whatever { // no access modifier: defaults to internal, so it is not visible outside the DLL
}
There's nothing more to it. After building your DLL, navigate to your output folder (should be bin/ in your project's root directory) and copy the .dll file into your Unity project (Assets/Scripts/_dlls/ for example). Open Unity and wait until the newly added DLL is detected and there you go. Now you can import and use code from your lib.
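Once Unity has imported the DLL, consuming the library is just a normal using directive. A minimal sketch (reusing the namespace and class from the example above; the consumer class name is made up for illustration):

```csharp
using I.Like.Big.Namespaces.And.I.Can.Not;

public class Consumer
{
    private void Demo()
    {
        // Public types exported by the DLL behave like any other referenced type
        var lie = new Lie();
    }
}
```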
While DLLs are a good way of organizing your project, there are some things to consider before using them. Be sure you really need DLLs in your project. You won't be able to access Unity classes out of the box for example. There are ways of adding certain dependencies and build complex Libraries even specifically for Unity, but that complicates your overall project setup.
• How would this process change if I wrote up the namespace + code BEFORE creating the .DLLs? Would I just have to copy over the code to the new project? Jan 31 '19 at 19:23 | 2021-12-01 00:35:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23478540778160095, "perplexity": 1772.774181723163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359082.76/warc/CC-MAIN-20211130232232-20211201022232-00404.warc.gz"} |
https://help.syscad.net/Reaction_Editor | # Reaction Editor
# Introduction
With all the features it provides, the SysCAD Reaction Editor makes it easier to create and edit reaction files. Therefore, we highly recommend our SysCAD users install it.
Short video clips demonstrating these features are available in the Tutorial Section of the Help documentation, or directly on YouTube.
# Set up the Reaction Editor
## Installing the SysCAD Reaction Editor
1. When SysCAD is installed there is an option to install the Reaction Editor. If this option was selected then there is nothing more to do.
2. If Reaction Editor wasn't installed as part of the SysCAD install then the user needs to obtain a copy of the Reaction Editor installation file. They can obtain this:
• Run the Reaction Editor installer file (.msi) and follow the steps outlined in the dialog boxes. If the default settings are used, the Reaction Editor will be installed in the folder C:\Program Files\SysCAD.
3. The Reaction Editor uses the .net framework (version 4.5).
## Reaction Editor Options
On the main Reaction Editor Edit menu, there is an Options field. This field has two options which can be enabled or disabled:
• Autoload All On Startup: If enabled, all reaction files in the current project will be loaded when the Reaction Editor is first launched.
• Enforce Charge Balance: If enabled, charge imbalances in reactions will be reported as errors. This option is only available in recent versions of the Reaction Editor (v1.2 or later).
# Reaction Files
## Location of Reaction Files
When a new project is created, a sub-folder called Reactions is automatically added; any reactions created within the project will be automatically added to this folder. The picture shown here is for the Demo Nickel Copper Project. This folder contains the file 'SpeciesData.ini', which holds all of the information about the species used within the project. The SysCAD Reaction Editor uses the information in this file to display the available species, determine if reactions are balanced, autobalance reactions if required, etc. The SysCAD Reaction Editor will automatically save any new reaction files into this folder. Note: If the user adds or deletes species from the SysCAD project while the Reaction Editor is open, the user will need to:
• restart the SysCAD project to update the 'SpeciesData.ini' file
• restart the Reaction Editor to update the species list
## Creating a New Reaction File
A reaction file can be automatically created when you press the "Edit Rct File" button.
• The RCT_Name can be left empty, in this case, the reaction file name created will be "UnitName".rct, so for the example used in the picture above, it will be Ni_Diss_2.rct.
• The RCT_Name can also be typed in prior to pressing the "Edit Rct File" button. If the file does not exist, it will create one for you.
• The first time a reaction file is created in a project, the Reaction Editor will start with a blank reaction.
• Please see Adding a New Reaction for information on how to add the reactants and products.
## Opening Existing Reaction Files
Once SysCAD has been set to use Reaction Editor (see How to Set SysCAD to use the SysCAD Reaction Editor), reaction files can be opened from SysCAD by pressing the 'Edit RCT' button.
The SysCAD Reaction Editor will start up and shows all reaction files in the current project with the selected one being in the current window. See example below:
Number Description
The existing reactions are displayed in the reaction Window.
The reaction editing panel will show details of the selected reaction, such as reactants, products, reaction extent, heat of reaction override and so on. Use this to edit the reaction.
The species used in the currently selected reaction are in bold and in blue in the species list in the right hand side window. The species used in other reactions in the current reaction file are in bold and in black in the species list in the right hand side window.
The filter can be used to shorten the species list. For example, typing in (aq) will show only the aqueous species, and typing in Ni will display only species containing Ni.
The Title bar shows the current reaction file name.
All the reaction file in the current project are listed here.
• Files in grey are in the same project but have not been opened in the Reaction Editor; click on a file to open it.
• Files in black are opened in the Reaction Editor with no unsaved modifications.
• Files in blue are opened in the Reaction Editor with unsaved modifications.
This shows the species database location. See note in Location of Reaction Files
The log window will display any messages/warnings/errors found by the Reaction Editor. To clear the log go to View-Clear Log.
## Saving and Closing Reaction Files
If a file has been changed it will be blue in the left hand side window and have an '*' next to the file name on the top of the reaction window. If there are any unsaved changes to the file, the user will be asked if they wish to save the file prior to it being closed. Reaction files can be individually closed either by using the menu command File-Close or clicking on the X in the top right corner of the reaction window.
# Editing the Reactions
For example, let's add the following reaction, note it is in an unbalanced state:
$\ce{ FeSO4(aq) + O2(g) + H2SO4(aq) \; = \; Fe2[SO4]3(aq) + H2O(l) }$
The image below shows a new reaction being added:
This list continues from the one in Opening Existing Reaction Files and refers to the numbered callouts in the picture above.
• Reactant box - drag the species from the species list here. You do not have to type the + signs when using drag and drop; simply drag and drop all the reactants. For our example, the reactants are shown with a green dot in the picture above; we also need to drag in O2(g), which is not shown in the picture.
• Product box - drag the species from the species list here. Again, there is no need to type the + signs; simply drag and drop all the products. For our example, the products are shown with a purple dot in the picture above.
• Balance button - once all the reactants and products are placed in the boxes, press it to balance the reaction. See Balancing the Reaction.
• Reaction Extents - specify the reaction extent type, values, and the extent species. Please see Reaction Extents for more detail.
• Heat of Reaction - SysCAD normally uses the Heats of Formation of the products and reactants to calculate the Heat of Reaction. However, if a Heat of Formation value is missing from the species database for any compound involved in the reaction, the user will need to tick the Override tick box and specify a user defined Heat of Reaction.
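SysCAD's default Heat of Reaction calculation described above is Hess's law: ΔH_rxn = Σ ΔHf(products) − Σ ΔHf(reactants). As an illustration only — using textbook values for C(s) + O2(g) → CO2(g), not values read from a SysCAD species database — a minimal sketch:

```python
# Hess's law: dH_rxn = sum(Hf of products) - sum(Hf of reactants).
# Standard heats of formation in kJ/mol (textbook values, illustrative only).
hf = {"C(s)": 0.0, "O2(g)": 0.0, "CO2(g)": -393.5}

reactants = {"C(s)": 1, "O2(g)": 1}   # C(s) + O2(g) ->
products = {"CO2(g)": 1}              # CO2(g)

dH = (sum(n * hf[s] for s, n in products.items())
      - sum(n * hf[s] for s, n in reactants.items()))
print(dH)  # -393.5 kJ/mol, i.e. exothermic
```

If any heat of formation is missing from the database, this sum cannot be formed, which is exactly why the Override box exists.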
Once a reaction has been entered, its status will be shown just above the log window on the status line.
Notes:
1. Users should FIRST drag species into the Reactant and Product boxes and THEN adjust the number of moles of each species reacting, or use the automatic balance functionality.
2. The finished reaction may look like this
or the one shown in Opening Existing Reaction Files, depending on the stoichiometric coefficient of the first species in the reaction when the button is pressed. See Balancing the Reaction for more information.
## Balancing the Reaction
Normally one of two things will happen when the button is pressed: the user will either be shown the balanced reaction or be asked to specify extra species. We will explain these using two examples:
### Example 1 Sufficient Species
For example, in the above topic, we have added the Ferrous to Ferric reaction,
1. $\ce{ FeSO4(aq) + O2(g) + H2SO4(aq) \; = \; Fe2[SO4]3(aq) + H2O(l) }$
• This reaction is not balanced, users will find a message displayed in the Status bar: Status: Imbalanced. Excess product elements: {Fe = 1}, {S = 1}, {O = 3}.
2. To balance the reaction, either
• manually enter the number of moles of each species in the reaction;
• Or press the button for a suggestion from SysCAD.
3. When the button is pressed, the Reaction Editor will attempt to balance the active reaction with the specified Reactants and Products.
• The Autobalancer will display both the original and the balanced reaction.
• The user may then either:
• accept the changes by pressing 'OK', or
• reject the changes by pressing 'Cancel'
4. Note that the Autobalancer will balance the equation to match the number of moles specified for the first reactant, as shown in the picture above. If the user prefers whole numbers, cancel the balance and either adjust the number of moles for the first species or move the species with the lowest number of moles to be the first species.
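Balancing amounts to finding the smallest positive integer coefficients that conserve every element. SysCAD's actual Autobalancer algorithm is not documented here; the brute-force sketch below (illustrative only) reproduces the coefficients for the Ferrous to Ferric example:

```python
from itertools import product

# Element counts for a*FeSO4 + b*O2 + c*H2SO4 -> d*Fe2(SO4)3 + e*H2O
reactants = [{"Fe": 1, "S": 1, "O": 4},   # FeSO4(aq)
             {"O": 2},                     # O2(g)
             {"H": 2, "S": 1, "O": 4}]     # H2SO4(aq)
products = [{"Fe": 2, "S": 3, "O": 12},    # Fe2[SO4]3(aq)
            {"H": 2, "O": 1}]              # H2O(l)
elements = ["Fe", "S", "O", "H"]

def balanced(coeffs):
    # True when every element count matches on both sides.
    lhs, rhs = coeffs[:3], coeffs[3:]
    return all(
        sum(n * sp.get(el, 0) for n, sp in zip(lhs, reactants))
        == sum(n * sp.get(el, 0) for n, sp in zip(rhs, products))
        for el in elements)

# Smallest solution (by total moles) over coefficients 1..12.
solution = min((c for c in product(range(1, 13), repeat=5) if balanced(c)),
               key=sum)
print(solution)  # (4, 1, 2, 2, 2): 4 FeSO4 + O2 + 2 H2SO4 = 2 Fe2[SO4]3 + 2 H2O
```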
### Example 2 Insufficient Species
For example,
1. If a user simply enters Ferrous and Ferric as reactant and product, and nothing else, and then presses the button,
2. As we are missing sulfuric acid, water and oxygen, the "Autobalancer Additional Species" dialog box opens:
3. When this happens, the user can either:
• use a previously specified Set of species, or set up a new Set of species for balancing reactions.
4. In the dialog box above there are no previously specified species Sets that are useful for this reaction, so we will define a new Species Set:
• Type in a name in the Species Group box
• Type in the name of the required additional species, including phase
• You CANNOT drag species into this dialog box, but can select from a drop list after the first letter is entered as shown below:
• After all the additional species are entered, you can press OK and the reaction can now be balanced:
NOTES:
• When a species set is saved, users can use it again to define other reactions.
• The species sets can be managed via the reactions editor - Edit menu.
• If a species set is no longer required, press the Delete button to remove it.
• For a new group, simply change the name in the species group and modify the species in the group as we have done in Example 2 above.
• This functionality is especially useful when the user has a number of reactions that involve the same reagent and by-product.
For example, using the Sulfuric Acid species set (with additional species H2SO4(aq) and H2O(l)), we can simply drag in the main reactant and product, and the autobalancer will fill in the additional species from the species set:
• Please see Video Link for a video showing you how to set up and use a Species Group in the Reaction Editor.
### Adding a New Source
This list continues from those in Opening Existing Reaction Files and Adding a New Reaction, and refers to the numbered callouts in the pictures.
• The source species can be added in a number of ways:
1. Drag the species from the species list and drop it into the Source Species box
2. Right click on the species in the species list and select Set as Source
3. Type it in manually.
• Any species being used as a source will have an icon displayed next to it.
• You may only add a Source once to a reaction file, but you can add multiple species to a single Source.
• The source will always be reaction 1.
Select what the source should be treated as: Source only, Recycle only or Source and recycle. Please see Reaction Source for more information.
### Adding a New Sink
This is similar to Adding a New Source. The differences are listed here:
• Click on Add Other and select Sink.
• Any species being used as a sink will have an icon displayed next to it.
• You may only add a Sink once to a reaction file, but you can add multiple species to a single Sink.
• The Sink will always be the last reaction.
### Adding Heat Exchange
• To add Heat Exchange, click on Add Other and select Heat Exchange.
• In the Heat Exchange box, select the appropriate option from the drop down list and then specify the value of associated variables depending on options chosen.
• These options are explained in Reaction Heat Exchange.
### Remove a Reaction, Source, Sink or Heat Exchange
• To remove a reaction, source, sink or heat exchange, simply select it in the reaction window and click Remove.
• If you only wish to disable the reaction, source, sink or heat exchange, clear the Enabled box at the bottom of the reaction window.
## Specifying the Reaction Order
By default, SysCAD reactions are solved sequentially. Alternatively, the user can choose to specify two or more simultaneous reactions; simultaneous reactions require the Sequences Enabled option to be on.
• Reactions are solved in the order in which they appear in the reaction file. To change the order, select the desired reaction in the reaction window and use the up and down arrow buttons to move it to the desired place in the reaction file.
Notes:
• If included, sources will always be first, sinks will always be last or second last and heat exchange will always be last (as shown above). These cannot be re-ordered.
• The reaction file in the picture shown above started with an existing reaction file, then new reactions, sink and HX options were added for demonstration purposes.
• Any reaction marked with "*" is new or has been modified, and the new information has not been saved yet.
• If a reaction was saved previously and its order has now changed, a number in parentheses (x) shows the order change. In the example above, reaction 5 was saved as reaction 2 before the move.
• When the reaction file is saved, these markers will disappear.
## Other Features
### Clone a Reaction
The user may wish to clone a reaction when a very similar reaction is required. Select the required reaction in the reaction window and click Clone. A new reaction will be created at the bottom of the list which can be edited as for other reactions.
NOTE: when editing a cloned reaction, be sure to check and edit the extent species as this may have changed, especially if the "old" compound has been deleted.
### Copying Reactions from other opened files in the Reaction Editor
Reactions can be easily copied from one reaction file to another using the Reaction Editor.
If the reaction files are in the same project, and they are both opened in the reaction editor, then
• With the Files pane open, drag the reaction to be copied with the left mouse, hold down and drop it into the other reaction file.
### Copying Reactions from another instance of Reactions Editor
To copy a reaction from a reaction file in a different project, open the reaction file in a different instance of the Reaction Editor.
Select the reaction to be copied, then press <Ctrl+C> or right click and select Copy.
Go to the other instance of the Reaction Editor, right click in the reaction window of the desired reaction file and select Paste.
NOTE:
• Existing Reaction Files can be opened from the SysCAD Reaction Editor without starting the SysCAD project.
• Start up the Reaction Editor from the start menu. Menu command <File> - <Open> and select the reaction file to open.
• Alternatively, with one instance of the reaction editor opened, Menu command <File> - <Open Folder> and select the reactions folder for a particular project. The files will be opened in another instance of the Reaction Editor.
## Options Tab
For every reaction file there is an Options tab. Click on the tab to access it. An example is shown below.
This tab allows the user to enable the following:
### Sequences Enabled
• This causes the reactions to be solved simultaneously, if required. This is NOT the recommended solution method - please see Reaction Block - Solving Reactions for more information.
• With the Sequences Enabled option ticked, each reaction will automatically be assigned a Sequence number of 1 - which means that all of the reactions are placed in sequence 1 and solved simultaneously.
• The user may change the sequence number of a reaction on the Reactions tab and hence create groups of reactions. Each group of reactions is solved simultaneously. In the example shown below, we have moved one of the reactions to a second sequence for demonstration purposes.
### Always Use First Reactant for Extent
• This allows the user to specify that the first reactant is always to be used for the extent for the reactions in this reaction file.
• Normally when a reaction is inserted the default extent species used is the first reactant, but the user may change this. If this option is enabled, then the user cannot change the extent species.
### Description
• Add any comments to the description box for this reaction file.
• Any comments which were previously in a reaction file created without the Reaction Editor will be automatically moved to the description box. The date that the file was first saved using the Reaction Editor will automatically be noted in the description box as shown in the example above.
# Troubleshooting
1. If you are getting this error message during the reaction editor start up: "An item with the same key has already been added", it may be caused by issues with the species database definition. Please check the message window - species database tab for any errors.
https://tex.stackexchange.com/questions/120735/typesetting-a-table-to-look-like-a-spreadsheet | # Typesetting a table to look like a spreadsheet
Continuing in my efforts to create particular computing environments in LaTeX, I'm now trying to coerce a table to look like a spreadsheet. It needs
• All columns other than the first having equal width (say 3cm)
• First row and column having a gray background
• First column and first row right aligned
• All other rows and columns left aligned
I can probably do this with a mess of stuff from the array and colortbl packages, but so far my efforts have been less than successful. (If I get the alignment right, then the background doesn't fill the cell properly, for example).
Here's what I have so far, which is almost good, except for the formatting:
\begin{tabular}[h]{>{\columncolor[gray]{.9}}c|*{5}{>{\hfill}p{2cm}|}}
\hline
\rowcolor[gray]{.9}&A&B&C&D&E\\
\hline
1&0&1&2&3&4\\
\hline
2&185&&&&\\
\hline
3&-31&&&&\\
\hline
4&-39&&&&\\
\hline
5&-367&&&&\\
\hline
6&-1159&&&&\\
\hline
\end{tabular}
The main difficulty seems to be formatting the first row differently from all other rows. Is there a LaTeX environment which allows you to changing formatting mid-table?
One possibility using TikZ:
\documentclass{article}
\usepackage[margin=1cm]{geometry}
\usepackage{tikz}
\usetikzlibrary{matrix}
\begin{document}
\tikzset{
table/.style={
matrix of nodes,
row sep=-\pgflinewidth,
column sep=-\pgflinewidth,
nodes={rectangle,draw=black,text width=3cm,align=left},
text depth=0.5ex,
text height=1.75ex,
nodes in empty cells
},
row 1/.style={nodes={fill=gray!10,align=right}},
column 1/.style={nodes={fill=gray!10,text width=1cm,align=right}}
}
\begin{tikzpicture}
\matrix (mat) [table]
{
& 20 & 30 & 40 & 50 & 60 \\
80 & 78 & 79 & 80 & 81 & 82 \\
80 & 78 & 79 & 80 & 81 & 82 \\
80 & 78 & 79 & 80 & 81 & 82 \\
80 & 78 & 79 & 80 & 81 & 82 \\
80 & 78 & 79 & 80 & 81 & 82 \\
};
\end{tikzpicture}
\end{document}
And a possibility using longtable, array and colortbl:
\documentclass{article}
\usepackage[margin=1cm]{geometry}
\usepackage{array}
\usepackage{longtable}
\usepackage[table]{xcolor}
\newcolumntype{L}[1]{>{\raggedright\arraybackslash}p{#1}}
\newcolumntype{R}[1]{>{\raggedleft\arraybackslash}p{#1}}
\begin{document}
{
\renewcommand\arraystretch{1.25}
\begin{longtable}{|>{\columncolor{gray!10}}R{1cm}*{5}{|L{3cm}}|}
\hline
\rowcolor{gray!10}& \hfill20 & \hfill30 & \hfill40 & \hfill50 & \hfill60 \\
\hline
80 & 78 & 79 & 80 & 81 & 82 \\
\hline
80 & 78 & 79 & 80 & 81 & 82 \\
\hline
80 & 78 & 79 & 80 & 81 & 82 \\
\hline
80 & 78 & 79 & 80 & 81 & 82 \\
\hline
80 & 78 & 79 & 80 & 81 & 82 \\
\hline
\end{longtable}
}
\end{document}
In the last code I used longtable just in case a multi-page table is required (if this is not so, one can simply use tabular).
• That's a great idea - I never would have thought of using TiKZ for typesetting matrices! – Alasdair Jun 24 '13 at 2:43
• @Alasdair I've updated my answer with a TikZ-free option. – Gonzalo Medina Jun 24 '13 at 2:46
• Very nice indeed. However, I'm happy to use TiKZ - it's my preferred option for diagrams now. Why do you need longtable? – Alasdair Jun 24 '13 at 2:59
• @Alasdair longtable is just in case the table should span more than one page. – Gonzalo Medina Jun 24 '13 at 3:02
An approach using "cals":
\documentclass{minimal}
\usepackage{cals}
\usepackage{xcolor}
\begin{document}
\begin{calstable}
\makeatletter
\colwidths{{5mm}{20mm}{20mm}{20mm}{20mm}{20mm}}
\alignR
\brow
\def\bgcolor{gray!20}
\def\cals@bgcolor{\bgcolor}
\cell{}\cell{A}\cell{B}\cell{C}\cell{D}\cell{E}
\erow
\brow
\def\cals@bgcolor{\bgcolor}\cell{1}\def\cals@bgcolor{}
\cell{0}\cell{1}\cell{2}\cell{3}\cell{4}
\erow
\brow
\def\cals@bgcolor{\bgcolor}\cell{2}\def\cals@bgcolor{}
\cell{185}
\cell{}\cell{}\cell{}\cell{}
\erow
\brow
\def\cals@bgcolor{\bgcolor}\cell{3}\def\cals@bgcolor{}
\cell{-31}
\cell{}\cell{}\cell{}\cell{}
\erow
\brow
\def\cals@bgcolor{\bgcolor}\cell{4}\def\cals@bgcolor{}
\cell{-39}
\cell{}\cell{}\cell{}\cell{}
\erow
\brow
\def\cals@bgcolor{\bgcolor}\cell{5}\def\cals@bgcolor{}
\cell{-367}
\cell{}\cell{}\cell{}\cell{}
\erow
\brow
\def\cals@bgcolor{\bgcolor}\cell{6}\def\cals@bgcolor{}
\cell{-1159}
\cell{}\cell{}\cell{}\cell{}
\erow
\end{calstable}
\end{document}
https://physics.stackexchange.com/tags/conventions/new | The Stack Overflow podcast is back! Listen to an interview with our new CEO.
# Tag Info
4
$10^{-10}m$ actually. According to Wikipedia, it is metric but not SI. SI is a subset of the metric system. "The ångström is not a part of the SI system of units, but it can be considered part of the metric system." Like many units, it is named after a person and this person happened to be Swedish so it is not very surprising that his name contains ...
1
The SI system (aka metric system) has, by definition, seven "base units" that can be scaled by a standardized list of prefixes. The base unit for length is the meter. There is no $10^{-10}$ prefix, so you cannot express 1 angstrom as 1 of any such combination. You can convert it, of course, to any such combination, for example the 0.1 nm that you listed ...
0
Depends on how you are defining "weight". If you mean weight as in the vector or scalar of the gravity acting on an object, Marco's answer is what you want. There is a third meaning of weight. This is essentially "what a scale measures". This definition also fits with how people typically experience the effects of weight. If we use that definition, then ...
1
This is an arbitrary convention that was fixed historically around the time that Einstein published the general theory of relativity. It's similar to the right-hand rule for defining torque, or the convention that the charge of the electron is negative. Although it's arbitrary, it's fixed, so different authors do not use different conventions as a matter of ...
2
There are two objects in relativity that contain the same information but are of different mathematical nature - vectors and covectors. A covector is the dual vector to the original vector. That is, it is a member of a different vector space, but (in a space endowed with a metric) there exists a natural one-to-one correspondence between vectors and covectors. Now, since ...
6
For all practical purposes the weight will remain 1000 pounds when you freeze it. In theory the mass of the water will reduce somewhat through cooling because it will contain less energy when frozen, but the effect would be utterly negligible. Although the weight will be the same, water expands in volume by about a tenth as it freezes (which is why, for ...
0
You can of course find this on wiki...or many other places. Today, charge is related to the elementary charge e. The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere (A)". Conversely, a current of one ampere is one coulomb (C) of charge going past a given point per second: $1\ A = 1 \frac{C}{s}$ The ...
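The quoted relation $1\,A = 1\,C/s$ means charge is current multiplied by time for a steady current; a sketch with illustrative numbers (the elementary charge value is the exact SI one):

```python
# Charge carried by a steady current: Q = I * t  (since 1 A = 1 C/s).
current_amperes = 2.0   # illustrative value
time_seconds = 3.0      # illustrative value
charge_coulombs = current_amperes * time_seconds
print(charge_coulombs)  # 6.0 C

# Since the 2019 SI redefinition, the coulomb is fixed by the elementary charge:
e = 1.602176634e-19  # C, exact in SI
print(1.0 / e)       # ~6.24e18 elementary charges per coulomb
```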
0
Written as scalars, the DE looks like: $$m\frac{\text{d}^2x}{\text{d}t^2}+k^2x=0$$ Or: $$x''(t)+\frac{k^2}{m}x(t)=0$$ Set: $$\omega^2=\frac{k^2}{m}$$ The DE then solves to: $$x(t)=A\cos(\omega t+\varphi)$$ Where $A$ and $\varphi$ are determined from the initial conditions (not specified here). Using this notation the angular velocity $\omega$ then ...
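The closed-form solution above can be sanity-checked numerically; the amplitude, frequency, phase, sample points and step size below are arbitrary choices, not values from the question:

```python
import math

A, omega, phi = 2.0, 3.0, 0.5   # arbitrary amplitude, angular frequency, phase

def x(t):
    # Proposed solution x(t) = A*cos(omega*t + phi).
    return A * math.cos(omega * t + phi)

def second_derivative(f, t, h=1e-4):
    # Central finite difference approximation of f''(t).
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

# x''(t) + omega^2 * x(t) should vanish up to discretization error.
for t in (0.0, 0.7, 1.9):
    print(t, second_derivative(x, t) + omega**2 * x(t))
```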
1
But how can energy be negative? Your question is not entirely clear to me, but is it correctly understood that you essentially are asking why electric potential energy can be negative? The answer is that the value of potential energy doesn't matter. Only the difference matters. If you compare the amount of stored potential energy with another amount of ...
0
It may depend on the metric convention in the papers. In the West Coast metric $g_{\mu\nu}= {\rm diag}(+,-,-,-)$ the flat space Dirac equation is usually written as $$(-i\gamma^\mu\partial_\mu +m)\psi=0.$$ In the East Coast metric $g_{\mu\nu}= {\rm diag}(-,+,+,+)$ it is $$(\gamma^\mu\partial_\mu+M)\psi=0.$$ In both cases $\gamma^\mu\gamma^\nu+\dots$
3
check the definitions of $\Gamma^{\mu}$, $D_{\mu}$, and the signature of $g^{\mu\nu}$. I think you will find that the equations are actually the same, possibly modulo an overall sign.
1
A statvolt is a statcoulomb per centimeter, because the electrostatic potential of a point charge in Gaussian units is $\varphi=q/r$.
0
Conventional current: current flowing from the positive to the negative terminal of a body due to the flow of positive charge is known as conventional current.
0
Note that these are slightly modified conventions which I developed after practising a few questions. For mirrors, simply use coordinate system conventions (fixing the pole of the mirror as the origin), but for lenses, as far as I know, this doesn't hold so good. You will have to check the direction of incident light to the lens. In the direction of ...
0
A negative sign is used for virtual images. For mirrors, if the reflected ray comes to where the radius of curvature is and forms a real image, give a plus sign before i. Otherwise, give a minus sign.
0
Our universe seems to tell a story that is independent of the words in which we have always chosen to express it. – Kate Becker I like to shorten it down to: The world works the same, regardless of how we speak about it. Mathematical formulations also count as a way of speaking about it. Math is a human invention - it is a "language" - that we use to ...
1
We can say "light travels in water 1.333 times slower than in vacuum", OR "light travels in water at 0.75c". Both ways of describing the speed of light in a medium are valid. Besides, more valid definitions of light speed in a medium can be constructed, so what? It changes nothing at all. EDIT Several alternative refractive index definitions: $n=\lambda_0/\dots$
7
All one would have to do is replace $n$ with ${1\over n}$ everywhere. Changing the definition will not affect the physics in any way.
2
What is the historical reason why this unit of measure was adopted as a submultiple of the International System? The SI prefixes denote factors of ten (deci, deka), hundred (centi, hecto), and thousand (milli, kilo). Beyond that, the prefixes go in steps of 1000. According to the wikipedia article, micro- ($10^{-6}$) dates to 1873; nano- and pico- ($10^{-\dots}$ ...
1
I'm getting the same result, which is not correct since this is a left-handed system. What am I missing here? You are not missing anything. It is correct; you should be getting the same result. You have changed both the handedness of the coordinate system and the roles of the vectors. Those two changes cancel out so that in the end you get the same result. ...
1
In a left-handed system (which is the one on the right), the relation that connects your basis vectors $\textbf{e}_1, \textbf{e}_2, \textbf{e}_3$ (that signify $\textbf{i}$, $\textbf{j}$ and $\textbf{k}$ respectively) is: $$\textbf{e}_i \times \textbf{e}_j = \sum_{k=1}^3\epsilon_{ijk} \textbf{e}_k$$ where $\epsilon_{ijk}$ is the Levi-Civita symbol...
0
In a left-handed system, positive rotation is clockwise about the axis of rotation. Did you take this into account properly? And did you use the left hand? It seems to work for me. The method you cited is for a right-handed coordinate system. It is discussed in your reference but a specific method is not given for a left-handed system.
0
A wrong assumption was made in the question asked. An element $A\in SL(2,\mathbb{C})$ that induces the Lorentz transformation $X \rightarrow A X A^\dagger$ is a spinor transformation for a right-handed spinor (not left-handed, as claimed). Rather, a left-handed spinor transforms with an element $A\in SL(2,\mathbb{C})$ that induces the Lorentz ...
0
As stated in some of the comments above, signs are a matter of convention, but they are not arbitrary! Consistency is the key. Since $e^{\pm i \frac{1}{2} \vec{\phi} \cdot \vec{\sigma} + \frac{1}{2} \vec{\beta} \cdot \vec{\sigma}}$ and $e^{\pm i \frac{1}{2} \vec{\phi} \cdot \vec{\sigma} - \frac{1}{2} \vec{\beta} \cdot \vec{\sigma}}$ (same sign for the ...
0
The naming of the two types of electricities is not wrong and it cannot be a matter of human convention at all. (In this answer I will use the words "plus" and "minus" instead of "positive" and "negative".) "Plus" is the effect towards outside (expansion, blowing, explosion, yang), "minus" is the effect towards inside (contraction, suctioning, implosion, ...
0
To be consistent with the vector notation, when $r$ points to the center of mass from the center of rotation it is $$v = \omega \, r$$ in scalar form and $$\vec{v} = \vec{\omega} \times \vec{r}$$ in vector form, where $\times$ is the vector cross product.
6
If you're seeing web sites disagreeing about something very basic like this, why not just look it up in a reliable source like a textbook? The relation is $v=\omega\times r$. You can verify this using the right-hand rule.
1
This seems like a misapplication of the concept of pushforward and pullback. Carroll is speaking the language of physicists, but I think in the language of a modern differential geometer, tensors such as the metric do not transform under a change of coordinates. The tensor is invariant, but its components can be expressed in different bases. Assuming I'm ...
0
I.e. potential difference = change in potential / charge. $\Delta V$ is the difference in potential (voltage) between points a and b. $\Delta U$ is the change in potential energy, not the difference in potential. Regarding the second equation, the first part of the equation $$\Delta V=\frac{W}{Q}$$ comes from the definition of potential difference, ...
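The answers above quote $\vec{v} = \vec{\omega} \times \vec{r}$; a quick pure-Python check with made-up numbers (rotation about the z-axis, position on the x-axis) shows the scalar form $v = \omega\, r$ drop out:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

omega = (0.0, 0.0, 2.0)  # 2 rad/s about the z-axis (illustrative)
r = (3.0, 0.0, 0.0)      # 3 m from the axis, along x (illustrative)

v = cross(omega, r)
print(v)  # (0.0, 6.0, 0.0): tangential, with speed omega*r = 6 m/s
```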
https://gitter.im/FreeCodeCamp/HelpJavaScript/archives/2016/04/13 | 13th
Apr 2016
Vik
@vvang044
Apr 13 2016 00:00 UTC
string.charCodeAt( ) and string.fromCharcode is confusing
marvin veasy
@juniorveasy
Apr 13 2016 00:01 UTC
i got nothing @ndburrus man
Coy Sanders
@coymeetsworld
Apr 13 2016 00:01 UTC
helps to look at the examples @vvang044
Moisés Man
@moigithub
Apr 13 2016 00:01 UTC
sometimes w3school is easier to read @vvang044
Coy Sanders
@coymeetsworld
Apr 13 2016 00:01 UTC
'ABC'.charCodeAt(0); // returns 65
Moisés Man
@moigithub
Apr 13 2016 00:01 UTC
MDN is too much technical
Coy Sanders
@coymeetsworld
Apr 13 2016 00:01 UTC
it returns 65 because thats the ASCII code for the letter 'A'
Vik
@vvang044
Apr 13 2016 00:02 UTC
yeah that part of 65 i have no clue
Coy Sanders
@coymeetsworld
Apr 13 2016 00:03 UTC
each character has a numerical code to represent it
Moisés Man
@moigithub
Apr 13 2016 00:03 UTC
String.fromCharCode(65)<-- return "A"
(notice the Uppercase S from string.. and camelCase on fromCharCode)
Coy Sanders
@coymeetsworld
Apr 13 2016 00:03 UTC
look at the chart i just provided, 65 is A, 66 B, 67 C...etc
the way to convert that code back to a character to put in a String, you use fromCharCode
Norvin Burrus
@ndburrus
Apr 13 2016 00:04 UTC
@juniorveasy alright - let's start here:
```js
if (strokes == 1) {
  return "Hole-in-one!";
}
```
Vik
@vvang044
Apr 13 2016 00:04 UTC
Norvin Burrus
@ndburrus
Apr 13 2016 00:05 UTC
@juniorveasy can u see the code?
githubusr1
@githubusr1
Apr 13 2016 00:08 UTC
HI, GOOD MORNING
VioletLove
@violetlove26
Apr 13 2016 00:09 UTC
// Only change code below this line
var remainder; 11 % 3 = 2;
What do I have wrong here?
Limelightbuzz
@Limelightbuzz
Apr 13 2016 00:11 UTC
I am converting the switch statement to a look up table, and I am having trouble getting it to work. can someone tell me what I am doing wrong?
// Setup
function phoneticLookup(val) {
var result = "";
// Only change code below this line
var lookup = {
"bravo": "Boston",
charlie: "Chicago",
delta: "Denver",
echo: "Easy",
foxtrot: "Frank"
};
// Only change code above this line
return result;
}
// Change this value to test
phoneticLookup("charlie");
Norvin Burrus
@ndburrus
Apr 13 2016 00:14 UTC
@Limelightbuzz which challenge is that?
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:14 UTC
hi people
can somebody help me in javascript ?
where do i belong exercise
Norvin Burrus
@ndburrus
Apr 13 2016 00:14 UTC
@Limelightbuzz .. be consistent with ur use of parens
henrywashere
@henrywashere
Apr 13 2016 00:15 UTC
soooo....would this code be correct @coymeetsworld ??
var sentence = "The" + ' ' + myAdjective + ' ' + myNoun + ' ' + myVerb + ' ' + myAdverb;
Norvin Burrus
@ndburrus
Apr 13 2016 00:15 UTC
@Limelightbuzz u used parens with alpha & bravo, but discontued with the others
Limelightbuzz
@Limelightbuzz
Apr 13 2016 00:15 UTC
using objects for Lookups, @ndburrus
Norvin Burrus
@ndburrus
Apr 13 2016 00:15 UTC
@Limelightbuzz discontinued*
Coy Sanders
@coymeetsworld
Apr 13 2016 00:15 UTC
close @henrywashere
look at what it outputs, the words will be squashed together
you want to add spaces between them
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:16 UTC
function getIndexToIns(arr, num) {
// Find my place in this sorted array.
var myArr = arr.slice(0,arr.length);
var args = Array.prototype.slice.call(arguments);
var orderedArray =[];
myArr.sort(function(a,b){
return a-b;
});
return arr.indexOf(num);
}
//Menor que el ultimo y mayor que el primero
getIndexToIns([10, 20, 30, 40, 50], 30);
Coy Sanders
@coymeetsworld
Apr 13 2016 00:16 UTC
+ '' + does nothing
also may want to add more Strings like "The" in your sentence
Norvin Burrus
@ndburrus
Apr 13 2016 00:16 UTC
@Limelightbuzz see the issue?
Coy Sanders
@coymeetsworld
Apr 13 2016 00:16 UTC
to make it sound more like English
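A minimal sketch of what @coymeetsworld is suggesting, with hypothetical word values (not @henrywashere's actual ones): the spaces live inside the string literals.

```javascript
var myNoun = "dog";
var myAdjective = "big";
var myVerb = "ran";
var myAdverb = "quickly";
// literal spaces between the words keep them from being squashed together
var sentence = "The " + myAdjective + " " + myNoun + " " + myVerb + " " + myAdverb + ".";
```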
henrywashere
@henrywashere
Apr 13 2016 00:16 UTC
gotcha
Limelightbuzz
@Limelightbuzz
Apr 13 2016 00:16 UTC
@ndburrus i tried it both ways
Norvin Burrus
@ndburrus
Apr 13 2016 00:17 UTC
@Limelightbuzz ok, but in the code u posted, i'm suggesting that u be consistent with parenthesis usage... :)
Limelightbuzz
@Limelightbuzz
Apr 13 2016 00:19 UTC
okay, @ndburrus . here is the code updated
Norvin Burrus
@ndburrus
Apr 13 2016 00:19 UTC
Limelightbuzz
@Limelightbuzz
Apr 13 2016 00:19 UTC
// Setup
function phoneticLookup(val) {
var result = "";
// Only change code below this line
var lookup = {
"bravo":"Boston",
"charlie":"Chicago",
"delta":"Denver",
"echo":"Easy",
"foxtrot":"Frank"
};
// Only change code above this line
return result;
}
// Change this value to test
phoneticLookup("charlie");
Norvin Burrus
@ndburrus
Apr 13 2016 00:20 UTC
@Limelightbuzz ok! is it working?
Limelightbuzz
@Limelightbuzz
Apr 13 2016 00:20 UTC
no. I've tried it without parenthesis, also
Norvin Burrus
@ndburrus
Apr 13 2016 00:21 UTC
@Limelightbuzz please place a comma after Frank
Limelightbuzz
@Limelightbuzz
Apr 13 2016 00:22 UTC
I'll try it later, I'm heading to a JS meetup @ Galvanize.
Thank you, again!
henrywashere
@henrywashere
Apr 13 2016 00:22 UTC
huh, thats weird...i added space between the quotation marks but it wont go through
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:22 UTC
hello somebody can help me in js?
Norvin Burrus
@ndburrus
Apr 13 2016 00:22 UTC
@Limelightbuzz - i think you changed the result line - s/b: result = lookup[val];
@Limelightbuzz oh, i c. u didn't assign a value to result
Limelightbuzz
@Limelightbuzz
Apr 13 2016 00:23 UTC
i didn't. might be a bug. I'll follow up when i get there
Norvin Burrus
@ndburrus
Apr 13 2016 00:24 UTC
@Limelightbuzz i just gave u the answer... add that statement above return reslt, and u should be fine.
@Limelightbuzz resuilt*
@Limelightbuzz result*
@Limelightbuzz the val ("charlie") input will be evaluated in the key/value storage, and assigned the value for "charlie" - which is "Chicago"
@Limelightbuzz is it getting clearer?
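Putting @ndburrus's suggestion together, the lookup-table version of the challenge code would look roughly like this (the missing piece was the `result = lookup[val];` assignment):

```javascript
function phoneticLookup(val) {
  var result = "";
  var lookup = {
    bravo: "Boston",
    charlie: "Chicago",
    delta: "Denver",
    echo: "Easy",
    foxtrot: "Frank"
  };
  result = lookup[val]; // the key/value storage does the work of the old switch
  return result;
}
```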
@carlospulido which challenge r u working on?
brb
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:28 UTC
Return the lowest index at which a value (second argument) should be inserted into an array (first argument) once it has been sorted.
For example, getIndexToIns([1,2,3,4], 1.5) should return 1 because it is greater than 1 (index 0), but less than 2 (index 1).
Likewise, getIndexToIns([20,3,5], 19) should return 2 because once the array has been sorted it will look like [3,5,20] and 19 is less than 20 (index 2) and greater than 5 (index 1).
function getIndexToIns(arr, num) {
// Find my place in this sorted array.
var myArr = arr.slice(0,arr.length);
var args = Array.prototype.slice.call(arguments);
var orderedArray =[];
myArr.sort(function(a,b){
return a-b;
});
return arr.indexOf(num);
}
//Menor que el ultimo y mayor que el primero
getIndexToIns([10, 20, 30, 40, 50], 30);
Norvin Burrus
@ndburrus
Apr 13 2016 00:31 UTC
@carlospulido im not there yet carlos - sorry.... which challenge is that anyway?
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:32 UTC
@ndburrus Where do I belong
Norvin Burrus
@ndburrus
Apr 13 2016 00:33 UTC
@carlospulido i cant help with that one, sorry. which exercise is it - i cant see the entire text (1st screen/portion)
@carlospulido oh, hey. it scrolls...
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:33 UTC
@ndburrus
Return the lowest index at which a value (second argument) should be inserted into an array (first argument) once it has been sorted.
For example, getIndexToIns([1,2,3,4], 1.5) should return 1 because it is greater than 1 (index 0), but less than 2 (index 1).
Likewise, getIndexToIns([20,3,5], 19) should return 2 because once the array has been sorted it will look like [3,5,20] and 19 is less than 20 (index 2) and greater than 5 (index 1).
Norvin Burrus
@ndburrus
Apr 13 2016 00:35 UTC
@carlospulido sorry carlos - i havent done that one yet. i found the challenge though. "javascript lingop: finding and indexing data in arrays"
lingo*
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:36 UTC
ok @ndburrus dont worry ;)
:)
Tyler Moeller
@TylerMoeller
Apr 13 2016 00:37 UTC
@carlospulido I don't see you adding num to the array before sorting and getting the index
henrywashere
@henrywashere
Apr 13 2016 00:41 UTC
thanks @coymeetsworld
CamperBot
@camperbot
Apr 13 2016 00:41 UTC
henrywashere sends brownie points to @coymeetsworld :sparkles: :thumbsup: :sparkles:
:star: 535 | @coymeetsworld | http://www.freecodecamp.com/coymeetsworld
henrywashere
@henrywashere
Apr 13 2016 00:41 UTC
finally finished this damn exercise
Moisés Man
@moigithub
Apr 13 2016 00:41 UTC
@carlospulido this is NOT needed var args = Array.prototype.slice.call(arguments);
because getIndexToIns([10, 20, 30, 40, 50], 30); u only pass 2 arguments/parameters
and those 2 parameters are NAMED on the function header function getIndexToIns(arr, num) {
(also u not using args anywhere)
John Drevniok
@johndrevniok
Apr 13 2016 00:42 UTC
is a for each loop still usable in javascript?
Tyler Moeller
@TylerMoeller
Apr 13 2016 00:42 UTC
Moisés Man
@moigithub
Apr 13 2016 00:43 UTC
@carlospulido u also NOT using ur orderedArray variable
John Drevniok
@johndrevniok
Apr 13 2016 00:45 UTC
@TylerMoeller Thanks. I guess the the specific for each is deprecated (with the space between) https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for_each...in
CamperBot
@camperbot
Apr 13 2016 00:45 UTC
johndrevniok sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles:
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:45 UTC
yes i deleted that variable now
CamperBot
@camperbot
Apr 13 2016 00:45 UTC
:star: 492 | @tylermoeller | http://www.freecodecamp.com/tylermoeller
Tyler Moeller
@TylerMoeller
Apr 13 2016 00:46 UTC
@johndrevniok You taught me something new, thanks :) Never learned that one.
CamperBot
@camperbot
Apr 13 2016 00:46 UTC
tylermoeller sends brownie points to @johndrevniok :sparkles: :thumbsup: :sparkles:
:star: 126 | @johndrevniok | http://www.freecodecamp.com/johndrevniok
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:48 UTC
@moigithub and now?
function getIndexToIns(arr, num) {
// Find my place in this sorted array.
arr.slice(0,arr.length);
num = Array.prototype.slice.call(arguments);
arr.push(num);
arr.sort(function(a,b){
return a-b;
});
return arr.indexOf(num);
}
//Menor que el ultimo y mayor que el primero
getIndexToIns([10, 20, 30, 40, 50], 10);
Tyler Moeller
@TylerMoeller
Apr 13 2016 00:48 UTC
@carlospulido You are very close - just add num to the array with an arr.push, then arr.sort like you're already doing with myArr, and then return arr.indexOf(num).
Moisés Man
@moigithub
Apr 13 2016 00:48 UTC
u forgot push num..
ahh i see it
remove this num = Array.prototype.slice.call(arguments);
and this arr.slice(0,arr.length);
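Combining @TylerMoeller's and @moigithub's advice, a sketch of the whole thing: push num, sort numerically, return its index (note this mutates the arr argument).

```javascript
function getIndexToIns(arr, num) {
  arr.push(num);             // add num to the array
  arr.sort(function (a, b) { // numeric (not lexicographic) sort
    return a - b;
  });
  return arr.indexOf(num);   // first position of num after sorting
}
```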
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:51 UTC
thanks @moigithub
CamperBot
@camperbot
Apr 13 2016 00:51 UTC
carlospulido sends brownie points to @moigithub :sparkles: :thumbsup: :sparkles:
:star: 892 | @moigithub | http://www.freecodecamp.com/moigithub
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:51 UTC
jajaja
i mixin it with challenge seek and destroy in my brain :)
now run !!
;)
bitgrower
@bitgrower
Apr 13 2016 00:53 UTC
interesting gitter change ... I like it ... but it will take me a few minutes to get used to not typing the first name I see ...
Moisés Man
@moigithub
Apr 13 2016 00:56 UTC
@carlospulido https://gitter.im/FreeCodeCamp/Espanol
you can also ask there :)
no translator needed :P
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:57 UTC
thanks Moisés
I've joined many times but there's almost never anyone there :)
Moisés Man
@moigithub
Apr 13 2016 00:57 UTC
you're welcome :)... now "a meme a la tutu" :)
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:57 UTC
? meme ??
tutu?
Moisés Man
@moigithub
Apr 13 2016 00:57 UTC
dormir (to sleep)
Carlos Pulido
@carlosfrontend
Apr 13 2016 00:58 UTC
ahhh
aqui se dice a dormir o a sobar
ir a la cama === irse al sobre
jejej
para los niños pequeños se dice mimir es parecido
Si mañana más y mejor gracias por tu ayuda me voy a meme a la tutu :)
good bye all people !
bitgrower
@bitgrower
Apr 13 2016 01:12 UTC
hasta la vista, @carlospulido ... you may have me pulling out my spanish books ... :)
Tien Anh Nguyen
@tienanh2007
Apr 13 2016 01:36 UTC
help validate US Telephone Numbers
CamperBot
@camperbot
Apr 13 2016 01:36 UTC
# Problem Explanation:
• The task is not that hard to understand; implementing it is the hardest part. You have to validate a US phone number. This means a certain number of digits is required; while you don't need to put the country code, you will still need the area code and use one of the few formats allowed.
:pencil: read more about algorithm validate us telephone numbers on the FCC Wiki
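One possible regex for the common formats — a sketch, not the only accepted solution. The hypothetical pattern below covers an optional leading 1, an area code with or without parentheses, and space or dash separators:

```javascript
function telephoneCheck(str) {
  // optional country code 1, area code (555) or 555,
  // then 3 digits, then 4 digits, with optional space/dash separators
  var re = /^1?[\s\-]?(\(\d{3}\)|\d{3})[\s\-]?\d{3}[\s\-]?\d{4}$/;
  return re.test(str);
}
```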
Louis
@samfisher6
Apr 13 2016 01:45 UTC
can someone give a pointer i don't want the answer
i reversed the strings but It's not marking it as correct
bitgrower
@bitgrower
Apr 13 2016 01:47 UTC
you need to put your code within the function, and then use the parameter passed to the function @samfisher6
CodeFay
@CodeFay
Apr 13 2016 01:48 UTC
@samfisher6 you got the basic idea, but the question is asking each string to be reversed, not an array of strings. Just like @bitgrower described
bitgrower
@bitgrower
Apr 13 2016 01:50 UTC
yep -- no need to create your own array of strings ... the challenge will send you, in the parameter passed, the string (just one word for the first 3) that needs to be reversed ...
Louis
@samfisher6
Apr 13 2016 01:54 UTC
sorry I'm not sure i understand
Norvin Burrus
@ndburrus
Apr 13 2016 01:55 UTC
anyone have questions on javascript challenges? i'm not finished with the js section, but can assist with those i've completed....
Jared Pranke
@EtherWolf
Apr 13 2016 01:56 UTC
This message was deleted
CodeFay
@CodeFay
Apr 13 2016 02:00 UTC
@samfisher6 Have you encountered the "Profile Lookup" under Basic Javascript? (It's before the algorithms, so I'm hoping you may have)... that kind of shows an example of how they are calling the function vs hardcoding the array of what they're testing
@samfisher6 Another way to think about it is... for the reverseString function, you want to write a function that you can call, no matter what your input is. Vs hardcoding your query into the function
so... if you wanted to add two numbers , the code would say something like "return a + b;", but the function would be addNumbers(a,b)
Louis
@samfisher6
Apr 13 2016 02:02 UTC
ah ok thanks
CodeFay
@CodeFay
Apr 13 2016 02:02 UTC
:) no worries! happy to help
Jas
@JB2016
Apr 13 2016 02:05 UTC
help Split Strings
CamperBot
@camperbot
Apr 13 2016 02:05 UTC
## :point_right: challenge split strings with split [wiki]
You can use the .split() method to split a string into an array.
split uses the argument you give it to split the string.
array = string.split(' ');
:pencil: read more about challenge split strings with split on the FCC Wiki
Jas
@JB2016
Apr 13 2016 02:05 UTC
help Split Strings with split
CamperBot
@camperbot
Apr 13 2016 02:05 UTC
## :point_right: challenge split strings with split [wiki]
You can use the .split() method to split a string into an array.
split uses the argument you give it to split the string.
array = string.split(' ');
:pencil: read more about challenge split strings with split on the FCC Wiki
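A couple of quick examples of the behavior the wiki describes:

```javascript
// split on a space to break a sentence into words
var words = "Split me into words".split(" ");
// the separator can be any string, e.g. a comma
var csv = "a,b,c".split(",");
```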
reedhammond
@reedhammond
Apr 13 2016 02:05 UTC
Use the \d selector to select the number of numbers in the string, allowing for the possibility of one or more digit.
// Setup
var testString = "There are 3 cats but 4 dogs.";
// Only change code below this line.
var expression = /\3+4/g; // Change this line
// Only change code above this line
// This code counts the matches of expression in testString
var digitCount = testString.match(expression).length;
Yomi
@Joll59
Apr 13 2016 02:06 UTC
@CodeFay I am actaully on"Profile Lookup" in basic javascript
reedhammond
@reedhammond
Apr 13 2016 02:06 UTC
I can not figure out what I am doing wrong
Yomi
@Joll59
Apr 13 2016 02:06 UTC
can i borrow a second set of eyes
Jas
@JB2016
Apr 13 2016 02:06 UTC
@reedhammond \d+
reedhammond
@reedhammond
Apr 13 2016 02:07 UTC
/\d+3+4/g;
LIke that?
Jas
@JB2016
Apr 13 2016 02:07 UTC
@reedhammond na just \d no 3+4
Coy Sanders
@coymeetsworld
Apr 13 2016 02:07 UTC
remove +3+4
\d matches a digit
Jas
@JB2016
Apr 13 2016 02:07 UTC
@reedhammond we're counting any digits
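So the corrected expression for the example string would be along these lines:

```javascript
var testString = "There are 3 cats but 4 dogs.";
var expression = /\d+/g; // \d matches a digit, + allows one or more per match
var digitCount = testString.match(expression).length; // matches "3" and "4"
```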
Tiffany White
@twhite96
Apr 13 2016 02:07 UTC
@ndburrus I am at Return Early Pattern for Functions
reedhammond
@reedhammond
Apr 13 2016 02:07 UTC
Thanks so much!
Tiffany White
@twhite96
Apr 13 2016 02:08 UTC
And I am a bit stuck
Yomi
@Joll59
Apr 13 2016 02:08 UTC
for (var a = 0; a < contacts.length; a++)
{
if (firstName == contacts[a].firstName && contacts[a].hasOwnProperty(prop))
{
return contacts[a][prop];}
else if (firstName != contacts[a].firstName)
{
return "No such contact";}
else if (prop != contacts[a].prop)
{
return "No such property";}
}
please take a look anyone and tell me where i am screwing this thing up. This is for Profile LookUp
Norvin Burrus
@ndburrus
Apr 13 2016 02:08 UTC
@twhite96 Hi Tiffany, ok let's take a look
Jas
@JB2016
Apr 13 2016 02:09 UTC
@reedhammond can you put @JB2016 then thanks :)))
CamperBot
@camperbot
Apr 13 2016 02:09 UTC
jb2016 sends brownie points to @reedhammond :sparkles: :thumbsup: :sparkles:
:warning: could not find receiver for reedhammond
Tiffany White
@twhite96
Apr 13 2016 02:09 UTC
@ndburrus I know I don't need to initialize these variables
I don't want to define them
Jas
@JB2016
Apr 13 2016 02:09 UTC
@Joll59 try just a simple if (contacts[a][prop])
Tiffany White
@twhite96
Apr 13 2016 02:10 UTC
But they are asking if a or b is less than 0
Norvin Burrus
@ndburrus
Apr 13 2016 02:10 UTC
@twhite96 ok, so far, so good...
Tiffany White
@twhite96
Apr 13 2016 02:10 UTC
And I am confused as to what to put in the console.log statement
Jas
@JB2016
Apr 13 2016 02:10 UTC
@Joll59 you shouldn't need .hasOwnProp
Alex
@MisterM22
Apr 13 2016 02:10 UTC
hint
CamperBot
@camperbot
Apr 13 2016 02:10 UTC
:construction: Spoilers are only in the Bonfire's Custom Room :point_right:
Tiffany White
@twhite96
Apr 13 2016 02:11 UTC
@ndburrus I have set each variable less than 0 but I know that isn't the right thing to do
Norvin Burrus
@ndburrus
Apr 13 2016 02:11 UTC
@twhite96 ok, we're only going to focus on the code between the comments to be changed?
Tiffany White
@twhite96
Apr 13 2016 02:11 UTC
@ndburrus yes
Norvin Burrus
@ndburrus
Apr 13 2016 02:12 UTC
@twhite96 ok, so, based on what the challenge is asking for - a/b conditions, which function do u suppose u'd like to use?
Tiffany White
@twhite96
Apr 13 2016 02:13 UTC
@ndburrus so I should add my own function in here?
Norvin Burrus
@ndburrus
Apr 13 2016 02:13 UTC
@twhite96 just a moment - u said u are considering setting values for a/b. wait - is that really what ur after - or are u seeking a comparison result?
Louis
@samfisher6
Apr 13 2016 02:14 UTC
thanks everyone that took me a while
Tiffany White
@twhite96
Apr 13 2016 02:14 UTC
@ndburrus here are the instructions: Modify the function abTest so that if a or b are less than 0 the function will immediately exit with a value of undefined.
Hint
Remember that undefined is a keyword, not a string.
Norvin Burrus
@ndburrus
Apr 13 2016 02:14 UTC
let's think about the objective. u want to see what? can u post ur code?
Tiffany White
@twhite96
Apr 13 2016 02:15 UTC
Here is the code:
// Setup
function abTest(a, b) {
// Only change code below this line
console.log(-a < 0 || -b < 0 );
// Only change code above this line
return Math.round(Math.pow(Math.sqrt(a) + Math.sqrt(b), 2));
}
// Change values below to test your code
abTest(2,2);
Norvin Burrus
@ndburrus
Apr 13 2016 02:15 UTC
@twhite96 ok, lets take a look...
Tiffany White
@twhite96
Apr 13 2016 02:16 UTC
// Setup
function abTest(a, b) {
// Only change code below this line
console.log(-a < 0 || -b < 0 );
// Only change code above this line
return Math.round(Math.pow(Math.sqrt(a) + Math.sqrt(b), 2));
}
// Change values below to test your code
abTest(2,2);
Norvin Burrus
@ndburrus
Apr 13 2016 02:16 UTC
@twhite96 hmmm... that looks like one way to proceed. question: why are a & b negative?
Tiffany White
@twhite96
Apr 13 2016 02:17 UTC
I just tested it a few minutes ago. They aren't supposed to be @ndburrus
Norvin Burrus
@ndburrus
Apr 13 2016 02:17 UTC
@twhite96 ok, so lets get rid of that/those
Tiffany White
@twhite96
Apr 13 2016 02:17 UTC
@ndburrus using this without the -a/b doesn't pass all the tests
@ndburrus abTest(-2,2) doesn't pass neither does abTest(2,-2)
The rest do
Norvin Burrus
@ndburrus
Apr 13 2016 02:19 UTC
@twhite96 i approached this one differently using if, else if statements. but, the challenge specifically states (or requests) that a & b be evaluated against/compared to 0.
Tiffany White
@twhite96
Apr 13 2016 02:19 UTC
Okay
Norvin Burrus
@ndburrus
Apr 13 2016 02:20 UTC
@twhite96 let's focus on interpreting the challenge and not the results. the results will follow a proper understanding of the challenge, won't it?
Tiffany White
@twhite96
Apr 13 2016 02:20 UTC
@ndburrus I suppose
Norvin Burrus
@ndburrus
Apr 13 2016 02:20 UTC
@twhite96 so, what types of values should a & b be? (hint: positive or negative)
@twhite96 based on the challenge text
@twhite96 instructions (1st & 2nd lines)
Tiffany White
@twhite96
Apr 13 2016 02:21 UTC
So it should be negative @ndburrus
Norvin Burrus
@ndburrus
Apr 13 2016 02:22 UTC
@twhite96 are a & b positive or negative in the challenge?
Tiffany White
@twhite96
Apr 13 2016 02:22 UTC
They are both positive-- abTest(2,2)
Norvin Burrus
@ndburrus
Apr 13 2016 02:22 UTC
@twhite96 don't guess. just look at the red a & b. did they put a negative sign on those?
Tiffany White
@twhite96
Apr 13 2016 02:23 UTC
They are positive.
Norvin Burrus
@ndburrus
Apr 13 2016 02:23 UTC
@twhite96 ok, bingo. using ur method - do those values work?
Tiffany White
@twhite96
Apr 13 2016 02:23 UTC
No
Norvin Burrus
@ndburrus
Apr 13 2016 02:23 UTC
@twhite96 hmmm...moment
Yomi
@Joll59
Apr 13 2016 02:23 UTC
@JB2016 i made the changes and still not working..... this is what it looks like now.
ffs
for (var a = 0; a < contacts.length; a++)
{if (contacts[a].firstName == firstName)
{if (contacts[a][prop])
{return contacts[a][prop];}
else {return "No such property";}}
else {return "No such contact";}
}
lol
Tiffany White
@twhite96
Apr 13 2016 02:24 UTC
Your return statements shouldn't be inside those brackets @Joll59
Norvin Burrus
@ndburrus
Apr 13 2016 02:24 UTC
@twhite96 ok, it looks like u aren't requesting that anything be done if the condition(s) is/are met.
Tiffany White
@twhite96
Apr 13 2016 02:25 UTC
Ahhhh @ndburrus
Norvin Burrus
@ndburrus
Apr 13 2016 02:25 UTC
@twhite96 what would u like to be done if the conditions are met?
Tiffany White
@twhite96
Apr 13 2016 02:25 UTC
I need the function to exit with an undefined
Norvin Burrus
@ndburrus
Apr 13 2016 02:25 UTC
@twhite96 the comp did the evaluation, just like u asked it to. then it did nothing - because u didn't request any action, no?
Yomi
@Joll59
Apr 13 2016 02:26 UTC
@twhite96 what do you mean?
Tiffany White
@twhite96
Apr 13 2016 02:26 UTC
Yeah but I don't understand how to get the function to quit with undefined unless I add another return statement to return undefined @ndburrus
Norvin Burrus
@ndburrus
Apr 13 2016 02:26 UTC
@twhite96 i (and the comp) - see no request/function that produces undefined
@twhite96 hmmmm... that sounds like an idea worth trying....
Tiffany White
@twhite96
Apr 13 2016 02:27 UTC
@Joll59 Usually return statements aren't inside brackets. For instance your code should look like this:
tekac
@tekac
Apr 13 2016 02:27 UTC
^
Norvin Burrus
@ndburrus
Apr 13 2016 02:28 UTC
@twhite96 i dont see any code - or example
Jas
@JB2016
Apr 13 2016 02:29 UTC
Hey, does anyone know why for Factorialize a Number: factorialize(0) should return 1.
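By convention 0! is defined as 1 (the empty product), and that is also what makes the usual recursive base case work — a sketch:

```javascript
function factorialize(num) {
  if (num === 0) {
    return 1; // 0! is defined as 1, and this stops the recursion
  }
  return num * factorialize(num - 1);
}
```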
Norvin Burrus
@ndburrus
Apr 13 2016 02:29 UTC
@twhite96 i haven't used ur methodology, but i think ur right that u need the return statement to get an "undefined" result
tekac
@tekac
Apr 13 2016 02:30 UTC
doesn't just return; work as undefined? am I worng?
wrong*
cannelflow
@cannelflow
Apr 13 2016 02:30 UTC
@twhite96 if(a is less than 0 or b is less than 0) return undefined @twhite96
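@cannelflow's pseudocode written out — one way to do it, using a single combined `||` guard rather than separate else-if branches:

```javascript
function abTest(a, b) {
  if (a < 0 || b < 0) {
    return undefined; // exit early; a bare `return;` also yields undefined
  }
  return Math.round(Math.pow(Math.sqrt(a) + Math.sqrt(b), 2));
}
```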
Norvin Burrus
@ndburrus
Apr 13 2016 02:31 UTC
@twhite96 btw, i used return statements inside curly braces/brackets in if/else if statements
Tiffany White
@twhite96
Apr 13 2016 02:31 UTC
for (var a = 0; a < contacts.length; a++) {
  if (contacts[a].firstName == firstName) {
    if (contacts[a][prop]) {
      return contacts[a][prop];
    } else {
      return "No such property";
    }
  } else {
    return "No such contact";
  }
}
tekac
@tekac
Apr 13 2016 02:31 UTC
shouldn't it be a if then else if and then else?
I'm new.. but just my 2cents
cannelflow
@cannelflow
Apr 13 2016 02:32 UTC
Gerard Jorgensen
@gerardjorgensen
Apr 13 2016 02:32 UTC
Hey I'm having trouble with the order the code runs on codepen. Test2 should print first but I guess codepen goes through that later??? Here is my codepen link http://codepen.io/GerardJ/pen/RaQPQm?editors=1010
tekac
@tekac
Apr 13 2016 02:32 UTC
holy moly @cannelflow
Yomi
@Joll59
Apr 13 2016 02:33 UTC
@twhite96 did you delete some of the brackets or just move them around so they arent on same line?
Norvin Burrus
@ndburrus
Apr 13 2016 02:33 UTC
js
// Setup
function abTest(a, b) {
// Only change code below this line
if (a < 0) {
return undefined;
}
else if (b < 0) {
return undefined;
}
// Only change code above this line
return Math.round(Math.pow(Math.sqrt(a) + Math.sqrt(b), 2));
}
// Change values below to test your code
abTest(-2,2);
Jas
@JB2016
Apr 13 2016 02:33 UTC
@cannelflow Thank you
CamperBot
@camperbot
Apr 13 2016 02:33 UTC
jb2016 sends brownie points to @cannelflow :sparkles: :thumbsup: :sparkles:
:star: 1059 | @cannelflow | http://www.freecodecamp.com/cannelflow
Norvin Burrus
@ndburrus
Apr 13 2016 02:33 UTC
@twhite96 take a look
Tiffany White
@twhite96
Apr 13 2016 02:33 UTC
@Joll59 Apparently you can have return statements inside brackets. Your code is really hard to read though
Norvin Burrus
@ndburrus
Apr 13 2016 02:34 UTC
@twhite96 can u try it in ur code?
cannelflow
@cannelflow
Apr 13 2016 02:35 UTC
@ndburrus i think this won't pass abTest(2,-2)
Tiffany White
@twhite96
Apr 13 2016 02:35 UTC
@ndburrus yeah I did
It works
Jas
@JB2016
Apr 13 2016 02:35 UTC
@Joll59 you're so close
Norvin Burrus
@ndburrus
Apr 13 2016 02:35 UTC
@twhite96 Shazzam!!!
Jas
@JB2016
Apr 13 2016 02:35 UTC
for (var a = 0; a < contacts.length; a++)
{if (contacts[a].firstName == firstName)
{if (contacts[a][prop]){return contacts[a][prop];}
else {return "No such property";}}
else {return "No such contact";}
}
ignore that accidentally hit enter
Norvin Burrus
@ndburrus
Apr 13 2016 02:36 UTC
@twhite96 ...so u got it?
Tiffany White
@twhite96
Apr 13 2016 02:36 UTC
@JB2016 why is the code all on one line like that? It makes it hard to read
Yomi
@Joll59
Apr 13 2016 02:36 UTC
@JB2016 i knew i was close, it bothers me that i didnt figure out already
@twhite96 for you
Tiffany White
@twhite96
Apr 13 2016 02:36 UTC
@ndburrus yep. I wasn’t sure about the if else if thing. I guess I should have known
Jas
@JB2016
Apr 13 2016 02:37 UTC
for (var a = 0; a < contacts.length; a++)
{if (contacts[a].firstName == firstName)
{if (contacts[a][prop]){
return contacts[a][prop];
} else {
return "No such property";}
}
}
return "No such contact";
Norvin Burrus
@ndburrus
Apr 13 2016 02:37 UTC
@twhite96 head up, u learned something!
Tiffany White
@twhite96
Apr 13 2016 02:37 UTC
@Joll59 I had never seen return statements inside of brackets. Newb. Ha. 😁
Jas
@JB2016
Apr 13 2016 02:38 UTC
@twhite96 wasn't my code - just copied and pasted and accidentally hit enter
Tiffany White
@twhite96
Apr 13 2016 02:38 UTC
@ndburrus this frustrates me that I didn’t think of that already
@JB2016 no worries
Norvin Burrus
@ndburrus
Apr 13 2016 02:38 UTC
@twhite96 the example shows a return statement inside curly braces/brackets.... :)
Yomi
@Joll59
Apr 13 2016 02:38 UTC
@JB2016 i am trying to figure out what you did differently
Jas
@JB2016
Apr 13 2016 02:39 UTC
return "No such contact" outside of for loop
Tiffany White
@twhite96
Apr 13 2016 02:39 UTC
@ndburrus why would you do that, though?
Norvin Burrus
@ndburrus
Apr 13 2016 02:39 UTC
@twhite96 no frustration allowed - ur in ur happy place.... :)
@twhite96 ...do what?
Tiffany White
@twhite96
Apr 13 2016 02:39 UTC
@ndburrus haha. It was such a simple thing
@ndburrus put a return statement in brackets
Jas
@JB2016
Apr 13 2016 02:39 UTC
Yomi
@Joll59
Apr 13 2016 02:40 UTC
i see, i was thinking too literally when i created the code. Thank you
Norvin Burrus
@ndburrus
Apr 13 2016 02:40 UTC
@twhite96 well, the brackets identify whatever it is u would like to have done if the conditions are met, don't they?
Yomi
@Joll59
Apr 13 2016 02:41 UTC
@JB2016, that still doesnt clear the test
lol
Norvin Burrus
@ndburrus
Apr 13 2016 02:41 UTC
brb
Yomi
@Joll59
Apr 13 2016 02:41 UTC
I swear this thing i trying to haunt me
Tiffany White
@twhite96
Apr 13 2016 02:41 UTC
@ndburrus I suppose.
Jas
@JB2016
Apr 13 2016 02:41 UTC
@Joll59 don't worry I did the same thing, was so annoyed that if (contacts[a][prop]) simply returned true or false and if false return "No such prop" but it helps that you know that boolean expression now :)
@Joll59 I think I missed a }
@Joll59
for (var a = 0; a < contacts.length; a++)
{if (contacts[a].firstName == firstName)
{if (contacts[a][prop]){
return contacts[a][prop];
} else {
return "No such property";}
}
}
return "No such contact";
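The overall shape being worked out here, as a self-contained sketch (the sample contacts are made up; note the truthiness check means a falsy property value would also report "No such property"):

```javascript
var contacts = [
  { firstName: "Akira", lastName: "Laine", number: "0543236543" },
  { firstName: "Harry", lastName: "Potter", number: "0994372684" }
];

function lookUpProfile(firstName, prop) {
  for (var i = 0; i < contacts.length; i++) {
    if (contacts[i].firstName === firstName) {
      if (contacts[i][prop]) {
        return contacts[i][prop];
      }
      return "No such property";
    }
  }
  // only reached after every contact has been checked
  return "No such contact";
}
```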
Yomi
@Joll59
Apr 13 2016 02:44 UTC
@JB2016 i didn't
Apr 13 2016 02:44 UTC
@gerardjorgensen - you are seeing the effects of the lag that usually takes place in an AJAX request. The call to navigator.geolocation.getCurrentPosition has a callback function that is activated when that function returns. I don't think that they really emphasize that when they introduce you to that function. But what happens is that it posts a request to (wherever it goes) and some time later it returns the values and calls your anonymous function. But in the meantime, the straight-line code in your ready function has executed so it prints "test1" first. Finally, when the getCurrentPosition returns the value, your callback is invoked.
Jas
@JB2016
Apr 13 2016 02:44 UTC
@Joll59
for (var a = 0; a < contacts.length; a++){
if (contacts[a].firstName == firstName){
if (contacts[a][prop]){
return contacts[a][prop];
} else {
return "No such property";}
}
}
return "No such contact";
Apr 13 2016 02:45 UTC
@gerardjorgensen - so you have to wait for that event to complete before you have the latitude and longitude to be able to use it in the call to the weather API.
I hope that is a clear explanation?
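The ordering described here can be sketched with a stand-in for the async call (getCurrentPositionFake and the stored callback are hypothetical names, just to make the timing visible):

```javascript
var log = [];
var savedCallback = null;

// stand-in for an async API like navigator.geolocation.getCurrentPosition:
// it only stores the callback; the "response" arrives later
function getCurrentPositionFake(callback) {
  savedCallback = callback;
}

getCurrentPositionFake(function (position) {
  log.push("test2: got latitude " + position.latitude);
});
log.push("test1"); // straight-line code keeps running first

// some time later the request completes and the callback fires
savedCallback({ latitude: 42 });
```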
Norvin Burrus
@ndburrus
Apr 13 2016 02:45 UTC
@twhite96 ok, that didn't sound like u are convinced... take another look at the Example:
Jas
@JB2016
Apr 13 2016 02:45 UTC
Yomi
@Joll59
Apr 13 2016 02:45 UTC
@JB2016 lol, i still hadnt moved the return statement to outside the for loop
no need, i got it
@JB2016 thank you sir
CamperBot
@camperbot
Apr 13 2016 02:46 UTC
joll59 sends brownie points to @jb2016 :sparkles: :thumbsup: :sparkles:
Jas
@JB2016
Apr 13 2016 02:46 UTC
@Joll59 classic
Yomi
@Joll59
Apr 13 2016 02:46 UTC
or maam
CamperBot
@camperbot
Apr 13 2016 02:46 UTC
:star: 255 | @jb2016 | http://www.freecodecamp.com/jb2016
Tiffany White
@twhite96
Apr 13 2016 02:46 UTC
@ndburrus no I get it. Just not sure why I can’t do that now
I guess I’m just not there yet
Yomi
@Joll59
Apr 13 2016 02:46 UTC
@twhite96 thank you for the tips
Norvin Burrus
@ndburrus
Apr 13 2016 02:46 UTC
js
function myFun() {
console.log("Hello");
return "World";
console.log("byebye")
}
myFun();
CamperBot
@camperbot
Apr 13 2016 02:46 UTC
:star: 237 | @twhite96 | http://www.freecodecamp.com/twhite96
joll59 sends brownie points to @twhite96 :sparkles: :thumbsup: :sparkles:
Tiffany White
@twhite96
Apr 13 2016 02:46 UTC
I don’t understand when I should use it and when I shouldn’t @ndburrus
You’re welcome @Joll59
Norvin Burrus
@ndburrus
Apr 13 2016 02:47 UTC
@twhite96 do what now? ... {see the return inside of brackets, in the example?}
Tiffany White
@twhite96
Apr 13 2016 02:47 UTC
@ndburrus yeah
Norvin Burrus
@ndburrus
Apr 13 2016 02:49 UTC
@twhite96 Definition and Usage
The return statement stops the execution of a function and returns a value from that function.
@twhite96 there are some examples of return statements at http://www.w3schools.com/jsref/jsref_return.asp
brb
Apr 13 2016 02:55 UTC
@twhite96 - if you don't mind my chiming in - you use it when you need to! Sometimes the function is supposed to check the arguments to make sure that they are not undefined, or that they are valid numbers. If they are not the type of value required, usually you return an undefined or null or something like that, perhaps an error message. (I think that there are some challenges or algorithms that use that type of requirement.) In the case of this "contacts" processing, you could make a more complicated type of control mechanism to make it so that you would only have one return statement, but it is much more direct to return from the function when you know that you have the correct condition - either the property that was being requested, the "no such property" status, or lastly the "no such contact" status. As I said, you could write it to have only one return statement, but you would have to add additional logic to the code to make it work.
Tiffany White
@twhite96
Apr 13 2016 02:56 UTC
CamperBot
@camperbot
Apr 13 2016 02:56 UTC
twhite96 sends brownie points to @khaduch :sparkles: :thumbsup: :sparkles:
Sumit Roy
@sroy8091
Apr 13 2016 03:02 UTC
//Setup
var contacts = [
{
"firstName": "Akira",
"lastName": "Laine",
"number": "0543236543",
"likes": ["Pizza", "Coding", "Brownie Points"]
},
{
"firstName": "Harry",
"lastName": "Potter",
"number": "0994372684",
"likes": ["Hogwarts", "Magic", "Hagrid"]
},
{
"firstName": "Sherlock",
"lastName": "Holmes",
"number": "0487345643",
"likes": ["Intriguing Cases", "Violin"]
},
{
"firstName": "Kristian",
"lastName": "Vos",
"number": "unknown",
"likes": ["Javascript", "Gaming", "Foxes"]
}
];
function lookUpProfile(firstName, prop){
// Only change code below this line
for(var i=0; i<contacts.length; i++){
if(contacts[i][firstName]===firstName){
for(var j=0; j<contacts[i].length; j++){
if(contacts[i][j]==prop){
return contacts[i][prop];
}
else{
return "No such property";
}
}
}
else{
return "No such contact";
}
}
// Only change code above this line
}
// Change these values to test your function
lookUpProfile("Akira", "likes");
whats wrong in this code?
goodm0urning
@goodm0urning
Apr 13 2016 03:04 UTC
Anyone here familiar with JSON?
Jas
@JB2016
Apr 13 2016 03:04 UTC
@sroy help code
Norvin Burrus
@ndburrus
Apr 13 2016 03:05 UTC
@sroy8091 what challenge is that?
Sumit Roy
@sroy8091
Apr 13 2016 03:05 UTC
profile lookup
Jas
@JB2016
Apr 13 2016 03:05 UTC
@sroy8091 if you scroll up the solution is there
@sroy8091 someone just asked the same question
goodm0urning
@goodm0urning
Apr 13 2016 03:06 UTC
I'm trying to do the Record Collection challenge, and only need to figure out how to add something to the end of the "tracks" array
// Setup
var collection = {
2548: {
album: "Slippery When Wet",
artist: "Bon Jovi",
tracks: [
"Let It Rock",
"You Give Love a Bad Name"
]
},
2468: {
album: "1999",
artist: "Prince",
tracks: [
"1999",
"Little Red Corvette"
]
},
1245: {
artist: "Robert Palmer",
tracks: [ ]
},
5439: {
album: "ABBA Gold"
}
};
// Keep a copy of the collection for tests
var collectionCopy = JSON.parse(JSON.stringify(collection));
// Only change code below this line
function updateRecords(id, prop, value) {
if (value !== "" && prop !== "tracks") {
collection[id][prop] = value;
}
else if (value !== "" && prop === "tracks") {
collection.prop.push(value);
}
else if (value === ""){
delete collection[id][prop];
}
return collection;
}
// Alter values below to test your code
updateRecords(2548, "tracks", "ABBA");
Jas
@JB2016
Apr 13 2016 03:06 UTC
tekac
@tekac
Apr 13 2016 03:07 UTC
@goodm0urning use brackets for the id and props
collection[id][prop].push(value);
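To show why the brackets matter, a tiny runnable sketch (the `albums` object, `id`, and the track values here are hypothetical, just shaped like the challenge data):

```javascript
// Hypothetical mini version of the challenge's collection object
var albums = { 2548: { tracks: ["Let It Rock"] } };
var id = 2548;
var prop = "tracks";
// albums.prop would look for a key literally named "prop";
// bracket notation substitutes the variables' values instead
albums[id][prop].push("You Give Love a Bad Name");
```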
Yomi
@Joll59
Apr 13 2016 03:08 UTC
@sroy8091 I also think you may have too many for statements....I could be wrong since I am super noob.
Sumit Roy
@sroy8091
Apr 13 2016 03:08 UTC
i'm not understanding what r u saying? @JB2016
goodm0urning
@goodm0urning
Apr 13 2016 03:08 UTC
AH! Snap! Thanks @tekac
CamperBot
@camperbot
Apr 13 2016 03:08 UTC
goodm0urning sends brownie points to @tekac :sparkles: :thumbsup: :sparkles:
:star: 231 | @tekac | http://www.freecodecamp.com/tekac
Yomi
@Joll59
Apr 13 2016 03:09 UTC
@sroy8091 if you scroll up you'll see my code and @JB2016 solution. You are very freaking close.
Apr 13 2016 03:09 UTC
@sroy8091 - a few things that you need to adjust...
• contacts is an array of objects, so your first for loop to look through that array is right on the money
• this is not correct: if(contacts[i][firstName]===firstName){. You can just access the object property for firstName with contacts[i].firstName, although I think that if you did this: contacts[i]["firstName"] it would be equivalent. The previous method is more clean-looking.
• since the elements of the array are objects, you cannot (and do not) use a for loop to process the properties. They mention the .hasOwnProperty() method. You can use that directly to look for the property that they request.
• with the code formatted as it is, and those errors, I cannot tell if you have the other common problem, which is that many who struggle with this challenge have the return "No such contact"; within the for loop that is processing the contacts array. That can be problematic if you are required to search beyond the first contact - you need to see all of the contacts in the array before you can determine that you cannot find the one that exists.
I hope that helps?
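Putting @khaduch's bullet points together, a sketch of what the corrected function could look like (the two-entry `contacts` array here is a shortened stand-in for the challenge's full data):

```javascript
// Shortened stand-in for the challenge's contacts array
var contacts = [
  { firstName: "Akira", likes: ["Pizza"] },
  { firstName: "Harry", likes: ["Magic"] }
];

function lookUpProfile(firstName, prop) {
  for (var i = 0; i < contacts.length; i++) {
    if (contacts[i].firstName === firstName) {
      // No inner loop needed: .hasOwnProperty() checks for the property directly
      if (contacts[i].hasOwnProperty(prop)) {
        return contacts[i][prop];
      }
      return "No such property";
    }
  }
  // Only after checking every contact can we say it doesn't exist
  return "No such contact";
}
```

Note that "No such contact" is returned after the loop, so every contact gets examined first.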
Jas
@JB2016
Apr 13 2016 03:10 UTC
@sroy8091 and you have two for statements
function lookUpProfile(firstName, prop){
// Only change code below this line
for(var i=0; i<contacts.length; i++){
if(contacts[i][firstName]===firstName){
for(var j=0; j<contacts[i].length; j++){ // and remove this
if(contacts[i][j]==prop){
return contacts[i][prop];
}
else{
return "No such property";
}
}
}
else{
return "No such contact";
} // remove these
}
// Only change code above this line
}
@goodm0urning use push :)
Has anyone done palidromes yet?
Sumit Roy
@sroy8091
Apr 13 2016 03:14 UTC
CamperBot
@camperbot
Apr 13 2016 03:14 UTC
sroy8091 sends brownie points to @jb2016 and @khaduch and @joll59 :sparkles: :thumbsup: :sparkles:
:star: 237 | @joll59 | http://www.freecodecamp.com/joll59
:star: 257 | @jb2016 | http://www.freecodecamp.com/jb2016
MBJ
@mbjusa
Apr 13 2016 03:29 UTC
hello world
CamperBot
@camperbot
Apr 13 2016 03:29 UTC
## welcome to FreeCodeCamp @mbjusa!
ulucay
@ulucay
Apr 13 2016 03:32 UTC
guys
Gerard Jorgensen
@gerardjorgensen
Apr 13 2016 03:34 UTC
@khaduch Oh ok that makes sense I'll try to find ways to somehow wait till it returns something. Thank you!
CamperBot
@camperbot
Apr 13 2016 03:34 UTC
gerardjorgensen sends brownie points to @khaduch :sparkles: :thumbsup: :sparkles:
Robert Friedman
@robfr77
Apr 13 2016 03:37 UTC
Can anyone suggest why this is not passing Finders Keepers?
function findElement(arr, func) {
var result = "";
arr = arr.filter(func);
arr = arr.splice(0,1);
result += arr;
if (result.length > 0) {
return result;
}
else {
return undefined;
}
}
findElement([1, 3, 5, 8, 9, 10], function(num) { return num % 2 === 0; });
Coy Sanders
@coymeetsworld
Apr 13 2016 03:40 UTC
just return arr[0]; @robfr77
you don't need to splice the array or check for the length
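For reference, a sketch of the simplified function following that advice (not necessarily the only passing solution):

```javascript
// Simplified per the advice above: filter keeps the matches, [0] takes the first
function findElement(arr, func) {
  var matches = arr.filter(func);
  // an empty array's [0] is undefined, which is exactly the "not found" value the challenge wants
  return matches[0];
}
```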
Robert Friedman
@robfr77
Apr 13 2016 03:42 UTC
derp
Coy Sanders
@coymeetsworld
Apr 13 2016 03:42 UTC
:)
Robert Friedman
@robfr77
Apr 13 2016 03:43 UTC
@coymeetsworld :D
Guillermo Agudelo
@guillermo7227
Apr 13 2016 03:49 UTC
Hellow fellow campers. Could someone please help me pass this challenge https://www.freecodecamp.com/challenges/smallest-common-multiple . Here is the code I'm using http://pastebin.com/AwPU4X2p . It does well when range of numbers is ten or less, for instance, f(1,5), but when the range is greater than then, f(1,13), it enter into a kind of infinite loop and I'm not sure why. Could you please take a look at it?
*ten not then
Sean
@ofperfection
Apr 13 2016 03:52 UTC
function fearNotLetter(str) {
console.clear();
var newStr = str.slice();
//create reference to test against
var checker = newStr.split("");
console.log("this is the check split array " + checker);
var last = checker.length-1;
var checkerLast = checker[last];
console.log("this is how long it thinks the checker is minus one " + last);
var alphabet = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z"];
var start = alphabet.indexOf(checker[0]);
console.log("this is the start variable " + start);
var end = alphabet.indexOf(checkerLast);
console.log("this is the end variable " + end);
var verify = alphabet.slice(start,end+1);
//if the variable is z, allow for a full slice
if(end>24 || end<0){
verify = alphabet.slice(start);
}
console.log("this is the verify variable " + verify);
var same = verify.join("");
console.log("this is the same variable " + same);
if(str==same){
return undefined;
}
//start checking for differences between verify and checker
// A = [1, 2, 3, 4];
//B = [1, 3, 4, 7];
var diff = checker.filter(function(x) { return verify.indexOf(x) < 0 ;});
console.log(diff);
return diff;
}
fearNotLetter("bcd");
can anyone help me use the filter function?
and tried multiple iterations
but the console.log isn't returning anything
and I want to learn the .filter method instead of using nested for loops
hrokr
@hrokr
Apr 13 2016 03:56 UTC
@goodm0urning @tekac -- I'm on the same exercise and have an (almost working) solution of a different sort:
// Setup
var collection = {
2548: {
album: "Slippery When Wet",
artist: "Bon Jovi",
tracks: [
"Let It Rock",
"You Give Love a Bad Name"
]
},
2468: {
album: "1999",
artist: "Prince",
tracks: [
"1999",
"Little Red Corvette"
]
},
1245: {
artist: "Robert Palmer",
tracks: [ ]
},
5439: {
album: "ABBA Gold"
}
};
// Keep a copy of the collection for tests
var collectionCopy = JSON.parse(JSON.stringify(collection));
// Only change code below this line
function updateRecords(id, prop, value) {
if (value !=="")
{
if (collection[id][prop] !=="tracks")
{collection[id][prop]= value;}
else
{collection[id][prop].push(value);}
}
else
{delete collection[id][prop];}
return collection;
}
// Alter values below to test your code
updateRecords(5439, "artist", "ABBA");
Coryphaeus
@cvdeby
Apr 13 2016 03:58 UTC
@SeanSaibot If inside the filter you return false, that value won't be included. It's so simple. Filters are not so complicated)
var arr = ['a', 'b', 'c'];
return arr.filter(function (value) {
if (value === 'a') {
return false;
} else {
return true;
}
});
// return ['b', 'c']
Sean
@ofperfection
Apr 13 2016 04:00 UTC
how would you test for any value in an array?
hrokr
@hrokr
Apr 13 2016 04:02 UTC
hrokr @hrokr Apr 12 23:56
@goodm0urning @tekac --but it won't let me push a value onto an array. It instead replaces the array with the value. Any thoughts?
Sean
@ofperfection
Apr 13 2016 04:02 UTC
got a code example?
Coryphaeus
@cvdeby
Apr 13 2016 04:07 UTC
@SeanSaibot I didn't understand you. Inside the filter you have the current value. If the current iteration returns true, you'll see this value inside the new array; if false - vice versa. Filters can't do anything more. The above example is enough.
@SeanSaibot Instead of the 'alphabet' variable try to use RegEx (regular expressions). Maybe it'll be a little difficult for a beginner, but they are so useful and work faster.
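As another alternative to the `alphabet` array (a different technique than the regex suggestion above, named plainly as such): comparing consecutive character codes also avoids the lookup table. A hedged sketch:

```javascript
// One way to find the missing letter without an alphabet array:
// compare consecutive character codes (a sketch, not the only approach)
function fearNotLetter(str) {
  for (var i = 1; i < str.length; i++) {
    // If the gap between neighbouring letters is more than 1, a letter was skipped
    if (str.charCodeAt(i) - str.charCodeAt(i - 1) > 1) {
      return String.fromCharCode(str.charCodeAt(i - 1) + 1);
    }
  }
  return undefined; // no gap found
}
```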
Vyani
@vyani
Apr 13 2016 04:19 UTC
Hi guys, anyone familiar with For Loops?
I've the the solution for Odd Numbers in Github, but still don't get it
var myArray = [];
for(var i = 1; i < 10; i += 2){
myArray.push(i);
}
// Only change code above this line.
if(typeof(myArray) !== "undefined"){(function(){return myArray;})();}
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:20 UTC
@hrokr try prop !== tracks in your first nested if statement
hrokr
@hrokr
Apr 13 2016 04:22 UTC
I had tried that one but tried it again -- no love. :-(
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:23 UTC
works for me @hrokr when I changed it
hrokr
@hrokr
Apr 13 2016 04:23 UTC
@Jlipschitz --- such that it would look like this if (collection[id][prop] !== tracks)?
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:24 UTC
no it would look like if( prop !== "tracks") @hrokr
Frank XC
@tenkdayz
Apr 13 2016 04:25 UTC
@vyani how does the function look
Vyani
@vyani
Apr 13 2016 04:26 UTC
This message was deleted
Sean
@ofperfection
Apr 13 2016 04:26 UTC
yeah @cvdeby I saw regex was recommended but tbh I don't have any idea how to work with regex whereas I have a small idea of how to work with arrays.
Vyani
@vyani
Apr 13 2016 04:26 UTC
@tenkdayz
// Setup
var myArray = [];
// Only change code below this line.
var myArray = [];
for(var i = 1; i < 10; i += 2){
myArray.push(i);
}
// Only change code above this line.
if(typeof(myArray) !== "undefined"){(function(){return myArray;})();}
Frank XC
@tenkdayz
Apr 13 2016 04:27 UTC
@vyani there are 2 myArray? ..
delete the one above the for loop.. that might work
Vyani
@vyani
Apr 13 2016 04:28 UTC
This message was deleted
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:28 UTC
@hrokr it's just a matter of comparison. when comparing the object property, it looks like it's not the same as comparing to a string like "tracks" unless the code was in JSON format like below
var collection = {
2548: {
"album": "Slippery When Wet",
"artist": "Bon Jovi",
"tracks": [
"Let It Rock",
"You Give Love a Bad Name"
]
},
2468: {
"album": "1999",
"artist": "Prince",
"tracks": [
"1999",
"Little Red Corvette"
]
},
1245: {
"artist": "Robert Palmer",
"tracks": [ ]
},
5439: {
"album": "ABBA Gold"
}
};
were you able to get it to work?
Vyani
@vyani
Apr 13 2016 04:30 UTC
@tenkdayz thanks, will do that. but what about the code at the very bottom line? can you help to explain? I see that code in github without sufficient explanation for me :(
CamperBot
@camperbot
Apr 13 2016 04:30 UTC
vyani sends brownie points to @tenkdayz :sparkles: :thumbsup: :sparkles:
:star: 369 | @tenkdayz | http://www.freecodecamp.com/tenkdayz
hrokr
@hrokr
Apr 13 2016 04:31 UTC
@Jlipschitz -- that didn't work either.
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:32 UTC
Frank XC
@tenkdayz
Apr 13 2016 04:33 UTC
@vyani
if(typeof(myArray) !== "undefined"){(function(){return myArray;})();}
that part it's just to make sure myArray is defined.. if so return it.
hrokr
@hrokr
Apr 13 2016 04:34 UTC
@Jlipschitz -- sure but I've done about four changes so give a sec and I'll do the one I think you were mentioning.
@Jlipschitz -- OK, I think you were saying this:
// Setup
var collection = {
2548: {
album: "Slippery When Wet",
artist: "Bon Jovi",
tracks: [
"Let It Rock",
"You Give Love a Bad Name"
]
},
2468: {
album: "1999",
artist: "Prince",
tracks: [
"1999",
"Little Red Corvette"
]
},
1245: {
artist: "Robert Palmer",
tracks: [ ]
},
5439: {
album: "ABBA Gold"
}
};
// Keep a copy of the collection for tests
var collectionCopy = JSON.parse(JSON.stringify(collection));
// Only change code below this line
function updateRecords(id, prop, value) {
if (value !=="")
{
if(prop !== "tracks")
{collection[id][prop] = value;}
else
{collection[id][prop].push(val);}
}
else
{delete collection[id][prop];}
return collection;
}
// Alter values below to test your code
updateRecords(5439, "artist", "ABBA");
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:37 UTC
@hrokr you have a typo in your first else statement
Eldar Tinjić
@EldarT90
Apr 13 2016 04:38 UTC
is it possible to append something to property ?
hrokr
@hrokr
Apr 13 2016 04:38 UTC
@Jlipschitz --- Yep. that's it. You are the man. Thanks!
CamperBot
@camperbot
Apr 13 2016 04:38 UTC
hrokr sends brownie points to @jlipschitz :sparkles: :thumbsup: :sparkles:
:star: 102 | @jlipschitz | http://www.freecodecamp.com/jlipschitz
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:38 UTC
hrokr
@hrokr
Apr 13 2016 04:40 UTC
@Jlipschitz -- I bet not as much as I am. That thing was giving me fits for four days. I was reading MDN (also with not much love) just after I had posted.
Eldar Tinjić
@EldarT90
Apr 13 2016 04:41 UTC
need help with appending the string to a property
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:41 UTC
MDN always gives me jitters when I see prototype and how it's originally created
Eldar Tinjić
@EldarT90
Apr 13 2016 04:41 UTC
and i dont know if its even possible
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:43 UTC
@EldarT90 if you have a property "string": "hello" you want to append something to "string"? i probably can't assist tbh but curious
Eldar Tinjić
@EldarT90
Apr 13 2016 04:43 UTC
@Jlipschitz yeah something like that i have element a and property is href , so i want to add link to it href with variable (wikipedia challenge)
Nick Rameau
@R4meau
Apr 13 2016 04:44 UTC
@EldarT90
$(element).attr("href", "http://newlink.com");
Eldar Tinjić
@EldarT90
Apr 13 2016 04:45 UTC
@R4meau hmm tnx let me try that
CamperBot
@camperbot
Apr 13 2016 04:45 UTC
eldart90 sends brownie points to @r4meau :sparkles: :thumbsup: :sparkles:
:star: 314 | @r4meau | http://www.freecodecamp.com/r4meau
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:45 UTC
ahh that I could of helped with! but @R4meau beat me to it
Nick Rameau
@R4meau
Apr 13 2016 04:45 UTC
@Jlipschitz ;)
Eldar Tinjić
@EldarT90
Apr 13 2016 04:48 UTC
@R4meau yeah it works really good
vivekraj
@vivekraj-kr
Apr 13 2016 04:48 UTC
Hi
Nick Rameau
@R4meau
Apr 13 2016 04:48 UTC
@EldarT90 You're welcome.
@vivekraj-kr Hello there.
vivekraj
@vivekraj-kr
Apr 13 2016 04:49 UTC
Hello im new here started learn oop in js
Nick Rameau
@R4meau
Apr 13 2016 04:49 UTC
@vivekraj-kr Welcome to FCC.
vivekraj
@vivekraj-kr
Apr 13 2016 04:49 UTC
ok thank you
Im stuck on "constructor" section in freecodecamp exercises. could you pls help me out.
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:51 UTC
@vivekraj-kr ask away and we'll try to help
vivekraj
@vivekraj-kr
Apr 13 2016 04:51 UTC
var Car = function() {
  //Change this constructor
  this.wheels = 4;
  this.seats = 1;
  this.engines = 1;
};
//Try it out here
var myCar = new Car(3,1,2);
this is the code snippet and i need to invoke that new object
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:54 UTC
@vivekraj-kr create your new car and change afterwards. you won't be able to change like Car(3,1,2) because your constructor doesn't take in arguments
vivekraj
@vivekraj-kr
Apr 13 2016 04:54 UTC
ohh ok thank.
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:54 UTC
var Car = function() {
  //Change this constructor
  this.wheels = 4;
  this.seats = 1;
  this.engines = 1;
};
//Try it out here
var myCar = new Car();
// myobj.property = something
// myobj.property2 = somethingElse
vivekraj
@vivekraj-kr
Apr 13 2016 04:54 UTC
Ill try it and let you know.
Thank you it works.
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:56 UTC
awesome! @vivekraj-kr
vivekraj
@vivekraj-kr
Apr 13 2016 04:56 UTC
That was a blunder mistake ryt?
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 04:59 UTC
happens to us all! you would of been right though by calling myCar(1, 2, 3) if the constructor was originally setup like:
var Car = function(arg1, arg2, arg3) {
  this.wheels = arg1;
  this.seats = arg2;
  this.engines = arg3;
};
Frank XC
@tenkdayz
Apr 13 2016 05:00 UTC
@Jlipschitz would this work also?
var Car = function() {
  this.wheels = arguments[0];
  this.seats = arguments[1];
  this.engines = arguments[2];
};
guess so right?
Alexander Berezkin
@Leidone
Apr 13 2016 05:01 UTC
You should be getting the length of lastName by using .length like this: lastName.length.
// Example
var firstNameLength = 0;
var firstName = "Ada";
firstNameLength = firstName.length;
// Setup
var lastNameLength = 8;
var lastName = "Lovelace";
// Only change code below this line.
lastNameLength = lastName.length;
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 05:02 UTC
that would work @tenkdayz . I guess it's a matter of which way is easier to read? interesting
what's your question @Leidone ?
Coryphaeus
@cvdeby
Apr 13 2016 05:09 UTC
help roman
CamperBot
@camperbot
Apr 13 2016 05:09 UTC
## :point_right: algorithm roman numeral converter [wiki]
# Problem Explanation:
• You will create a program that converts an integer to a Roman Numeral.
:pencil: read more about algorithm roman numeral converter on the FCC Wiki
Justin
@daemedeor
Apr 13 2016 05:10 UTC
@tenkdayz it would work BUT i would suggest never to do that, it should be obvious to people that you can pass in arguments
Nick Rameau
@R4meau
Apr 13 2016 05:10 UTC
@daemedeor I agree. But since this is just a challenge and not a production code, it's completely fine.
Coryphaeus
@cvdeby
Apr 13 2016 05:11 UTC
It is my solution for Roman nums)
function convertToRoman(num) {
  var romanDigitsPrimary = ["I", "X", "C", "M"]; // 1, 10, 100, 1000
  var romanDigitsSecondary = ["V", "L", "D", "ↁ"]; // 5, 50, 500
  var arr = ("" + num).split("").reverse();
  function getRoman(digit, primary, secondary, third) {
    primary = "" + primary;
    secondary = "" + secondary;
    third = "" + third;
    switch(digit) {
      case 1: return third + primary;
      case 2: return third + primary.repeat(2);
      case 3: return third + primary.repeat(3);
      case 4: return primary + secondary;
      case 5: return secondary;
    }
  }
  var result = "";
  for(var i = 0; i < arr.length; i++) {
    if(arr[i] > 0 && arr[i] < 6) {
      result = getRoman(parseInt(arr[i]), romanDigitsPrimary[i], romanDigitsSecondary[i], "") + result; // 1-5
    } else if (arr[i] >= 6 && arr[i] < 10) {
      result = getRoman(parseInt(arr[i]) - 5, romanDigitsPrimary[i], romanDigitsPrimary[i+1], romanDigitsSecondary[i]) + result; // 6-9
    }
  }
  return result;
}
convertToRoman(127); // 36
Justin
@daemedeor
Apr 13 2016 05:11 UTC
@R4meau except building good habits is just as important as otherwise
@cvdeby congrats :)
Coryphaeus
@cvdeby
Apr 13 2016 05:12 UTC
@daemedeor Thanks)
CamperBot
@camperbot
Apr 13 2016 05:12 UTC
cvdeby sends brownie points to @daemedeor :sparkles: :thumbsup: :sparkles:
:star: 327 | @daemedeor | http://www.freecodecamp.com/daemedeor
Nick Rameau
@R4meau
Apr 13 2016 05:12 UTC
@daemedeor Indeed.
Justin
@daemedeor
Apr 13 2016 05:13 UTC
as a thought experiment its nice to play around, see what works and what doesn't but as far as promoting as "what's different" and "they are functionally the same" and are "good habits" that is as far as you get
Nick Rameau
@R4meau
Apr 13 2016 05:15 UTC
@cvdeby I would suggest to declare getRoman outside of convertToRoman. Good habit.
Justin
@daemedeor
Apr 13 2016 05:16 UTC
ya what r4meau says
jorgon1022
@jorgon1022
Apr 13 2016 05:18 UTC
hi
any of you guys available to help a fellow camper?
Frank XC
@tenkdayz
Apr 13 2016 05:19 UTC
@daemedeor but for editing purposes it's better imo.. what if you want to add this.color ?
Justin
@daemedeor
Apr 13 2016 05:19 UTC
@tenkdayz no.... its not, not to do arguments[0]
Alexander Berezkin
@Leidone
Apr 13 2016 05:19 UTC
@Jlipschitz You should be getting the length of lastName by using .length like this: lastName.length.
Nick Rameau
@R4meau
Apr 13 2016 05:20 UTC
@jorgon1022 Go ahead and ask your question mate.
jorgon1022
@jorgon1022
Apr 13 2016 05:22 UTC
So I am doing a challenge. The challenge is Escape Sequences with Strings. But I am not sure that I am understanding the question correctly
Greg Duncan
@GregatGit
Apr 13 2016 05:22 UTC
@jorgon1022 just put your question up - everyone will be all over it
jorgon1022
@jorgon1022
Apr 13 2016 05:22 UTC
Encode the following sequence, separated by spaces: backslash tab tab carriage-return new-line and assign it to myStr
//Encode the following sequence, separated by spaces: backslash tab tab carriage-return new-line and assign it to myStr
Coryphaeus
@cvdeby
Apr 13 2016 05:23 UTC
@jorgon1022 Instead of word 'tab' put "\t" and so for each one word.
jorgon1022
@jorgon1022
Apr 13 2016 05:25 UTC
@cvdeby Thank you . You gave me the answer. I appreciate it.
CamperBot
@camperbot
Apr 13 2016 05:25 UTC
jorgon1022 sends brownie points to @cvdeby :sparkles: :thumbsup: :sparkles:
:star: 311 | @cvdeby | http://www.freecodecamp.com/cvdeby
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 05:26 UTC
@Leidone was "lovelace" originally set to 8, could be that it should be 0? looks good to me. don't see why it wouldn't take it
Justin
@daemedeor
Apr 13 2016 05:28 UTC
@Jlipschitz also don't put arg1, arg2, arg3 as parameters, you should strive to be as self documenting as possible and make variables that are semantic ^.^
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 05:28 UTC
just looked and it should be var lastNameLength = 0
duly noted!
I was just showing him lol @daemedeor
Justin
@daemedeor
Apr 13 2016 05:30 UTC
@Jlipschitz fair enough i've used my fair share of bad variables names when making examples XD except i havent usd one as bad as ids (not here in an exercise book)
Alexander Berezkin
@Leidone
Apr 13 2016 05:35 UTC
@Jlipschitz thanks
CamperBot
@camperbot
Apr 13 2016 05:35 UTC
leidone sends brownie points to @jlipschitz :sparkles: :thumbsup: :sparkles:
:star: 105 | @jlipschitz | http://www.freecodecamp.com/jlipschitz
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 05:36 UTC
i think you've definitely right though that i should apply the same standards, even if it's something simple it's still good habit building. don't see any negative to that! good thing i stopped declaring variables a,b,c, one, two, three, etc lol @daemedeor
Justin
@daemedeor
Apr 13 2016 05:38 UTC
@Jlipschitz usually i only use single variable letters in 2 situations: maths(and not always all the time either) and loops, anything outside of those you should never ever use single letters
Jamie Lipschitz
@Jlipschitz
Apr 13 2016 05:40 UTC
yeah learned that the hard way when trying to back track lol
Ashutosh Parmar Jain
@AshutoshParmar
Apr 13 2016 06:09 UTC
Francis Yvan Jubera
@StoneDisk
Apr 13 2016 06:17 UTC
Hi guys I need help on Chunky Monkey challenge, is someone available?
Frank XC
@tenkdayz
Apr 13 2016 06:20 UTC
@StoneDisk go ahead
Ivan
@elementWebDev
Apr 13 2016 06:20 UTC
Return Early Pattern for Functions? have tried several different things... not sure what to do
// Setup
function abTest(a, b) {
  // Only change code below this line
  // Only change code above this line
  return Math.round(Math.pow(Math.sqrt(a) + Math.sqrt(b), 2));
}
// Change values below to test your code
abTest(2,2);
Frank XC
@tenkdayz
Apr 13 2016 06:21 UTC
@oghosting what does it have to return?
Francis Yvan Jubera
@StoneDisk
Apr 13 2016 06:22 UTC
My code is producing empty element arrays:
Ivan
@elementWebDev
Apr 13 2016 06:22 UTC
if a or b are less than 0 the function will immediately exit with a value of undefined
Frank XC
@tenkdayz
Apr 13 2016 06:23 UTC
@oghosting else what?
kirbyedy
@kirbyedy
Apr 13 2016 06:23 UTC
else undefined
Kristoforus Rua
@kru
Apr 13 2016 06:23 UTC
can you post your code here @StoneDisk
Ivan
@elementWebDev
Apr 13 2016 06:24 UTC
undefined k..
Francis Yvan Jubera
@StoneDisk
Apr 13 2016 06:25 UTC
Sorry but how do I post my code in Gitter?
kirbyedy
@kirbyedy
Apr 13 2016 06:25 UTC
help format
CamperBot
@camperbot
Apr 13 2016 06:25 UTC
## :point_right: code formatting [wiki]
### Multi line Code
js ⇦ Type 3 backticks and then press [shift + enter ⏎] (type js or html or css)
<paste your code here>, then press [shift + enter ⏎]
⇦ Type 3 backticks, then press [enter ⏎]
### Single line Code
This an inline <paste code here> code formatting with a single backtick() at start and end around the code.
See also: ☛ How to type Backticks | ☯ Compose Mode | ❄ Gitter Formatting Basics
Frank XC
@tenkdayz
Apr 13 2016 06:25 UTC
if a||b === undefind return undefined else return undefined?
Kristoforus Rua
@kru
Apr 13 2016 06:25 UTC
nah thats will do it
kirbyedy
@kirbyedy
Apr 13 2016 06:26 UTC
@tenkdayz no he has to test against the 0
Kristoforus Rua
@kru
Apr 13 2016 06:26 UTC
let c = test;
Kevin Myrick
@aphextwin234
Apr 13 2016 06:26 UTC
@StoneDisk there is a shortcut follow the @camperbot
kirbyedy
@kirbyedy
Apr 13 2016 06:27 UTC
@oghosting if a or b is less than zero, return
Kevin Myrick
@aphextwin234
Apr 13 2016 06:27 UTC
then people can view this array,
Ivan
@elementWebDev
Apr 13 2016 06:32 UTC
// Setup
function abTest(a, b) {
  // Only change code below this line
  if (a||b < 0) {
    return undefined;
  }
  // Only change code above this line
  return Math.round(Math.pow(Math.sqrt(a) + Math.sqrt(b), 2));
}
// Change values below to test your code
abTest(-2, 8);
not happening
When a return statement is reached, the execution of the current function stops and control returns to the calling location.
Example
function myFun() {
  console.log("Hello");
  return "World";
  console.log("byebye")
}
myFun();
The above outputs "Hello" to the console, returns "World", but "byebye" is never output, because the function exits at the return statement.
Instructions
Modify the function abTest so that if a or b are less than 0 the function will immediately exit with a value of undefined.
Markus Kiili
@Masd925
Apr 13 2016 06:35 UTC
@oghosting I actually like this solutions the most, leaving returning undefined for the engine:
function abTest(a, b) {
  // Only change code below this line
  if (a>0 && b>0)
  // Only change code above this line
  return Math.round(Math.pow(Math.sqrt(a) + Math.sqrt(b), 2));
}
Francis Yvan Jubera
@StoneDisk
Apr 13 2016 06:36 UTC
here is my code:
var group = [];
var extra = arr.length % size;
var round = Math.floor((arr.length / size) + extra);
var pos = 0;
for (var x = 1; x <= round; x++) {
  for (var y = 0; y < size; y++) {
    if (x === round) {
      group.push(arr.slice(pos));
    }
    group.push(arr.slice(pos, pos + size));
    pos += size;
  }
}
Jas
@JB2016
Apr 13 2016 06:38 UTC
help Check for Palindromes
CamperBot
@camperbot
Apr 13 2016 06:38 UTC
## :point_right: algorithm check for palindromes [wiki]
# Explanation:
Our goal for solving this problem is tidying up the string passed in, and checking whether it is in fact a palindrome.
• If you are unsure of what a palindrome is, it is a word or phrase that when reversed spells the same thing forwards or backwards. A simple example is mom, when you reverse the letters, it spells the same thing! Another example of a palindrome is race car. When we take out anything that is not a character it becomes racecar which is the same spelled forwards or backwards!
Once we have determined whether it is a palindrome or not we want to return either true or false based on our findings.
Ivan
@elementWebDev
Apr 13 2016 06:38 UTC
wow... thank you @Masd925
CamperBot
@camperbot
Apr 13 2016 06:38 UTC
oghosting sends brownie points to @masd925 :sparkles: :thumbsup: :sparkles:
:star: 1356 | @masd925 | http://www.freecodecamp.com/masd925
Ivan
@elementWebDev
Apr 13 2016 06:39 UTC
lol
Francis Yvan Jubera
@StoneDisk
Apr 13 2016 06:41 UTC
I tried removing excess empty array elements using pop() but test still failed.
Kevin Myrick
@aphextwin234
Apr 13 2016 06:44 UTC
are you trying to put those variables into your array?
@StoneDisk I don't know this challenge, I'm not very far.
@StoneDisk
Francis Yvan Jubera
@StoneDisk
Apr 13 2016 06:49 UTC
@aphextwin234, round variable is supposed to calculate the exact array size and pos variable is used to set the element position for group array.
mohitjarvissharma
@mohit-jarvis-sharma
Apr 13 2016 06:50 UTC
in celcius to farenheit conversion my code seems right but the result is null always
fahrenheit = (celcius*9)/5 + 32;
Blauelf
@Blauelf
Apr 13 2016 06:51 UTC
@Masd925 Shouldn't it be if (a>=0 && b>=0) then, as 0 can be considered non-negative (at least the square root for 0 is the same whether it is -0 or +0)
Francis Yvan Jubera
@StoneDisk
Apr 13 2016 06:53 UTC
@krua, that is the code that makes excess empty array elements.
Markus Kiili
@Masd925
Apr 13 2016 06:53 UTC
@Blauelf Yes, you are correct.
Blauelf
@Blauelf
Apr 13 2016 06:57 UTC
@StoneDisk I don't think that you need nested loops if you use slice. And your variable round is not that good, why not simply use pos as the iterator variable and do for (var pos = 0; pos < arr.length; pos += size) {? If slice has not enough elements, its result will be shorter, so you don't even have to care about that special case of extra elements, they'll be included in the last batch.
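A sketch of the single-loop approach Blauelf describes (the function name matches the Chunky Monkey challenge; the rest is illustrative):

```javascript
// Sketch of the suggestion above: one loop, letting slice handle the leftovers
function chunkArrayInGroups(arr, size) {
  var groups = [];
  for (var pos = 0; pos < arr.length; pos += size) {
    // slice never reads past the end, so the final group is simply shorter
    groups.push(arr.slice(pos, pos + size));
  }
  return groups;
}
```

No nested loop and no special case for extra elements: they end up in the last, shorter batch automatically.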
Francis Yvan Jubera
@StoneDisk
Apr 13 2016 07:04 UTC
@Blauelf It worked, thanks a lot :smile:
CamperBot
@camperbot
Apr 13 2016 07:04 UTC
stonedisk sends brownie points to @blauelf :sparkles: :thumbsup: :sparkles:
:star: 1606 | @blauelf | http://www.freecodecamp.com/blauelf
mohitjarvissharma
@mohit-jarvis-sharma
Apr 13 2016 07:06 UTC
here is my code
var myStr; // Change this line
myStr = "I am a \"double quoted\" string inside \"double quotes\" ";
it gives an output of
myStr = "I am a \"double quoted\" string inside \"double quotes\" ";
why is it giving myStr = also :;/
Dominic Barretto
@Dominicbarretto
Apr 13 2016 07:08 UTC
Hey Guys I am doing the "counting cards" challenge following is my switch statment
switch(card) {
  case 2:
  case 3:
  case 4:
  case 5:
  case 6:
    count++;
    sString = count + " Bet";
    break;
  case 7:
  case 8:
  case 9:
    sString = count + " Hold";
    break;
  case 10:
  case 'J':
  case 'Q':
  case 'K':
  case 'A':
    count--;
    sString = count + " Hold";
    break;
}
i am not getting proper result for the following values
Cards Sequence 2, J, 9, 2, 7 should return "1 Bet"
Cards Sequence 2, 2, 10 should return "1 Bet"
Here is my function which has the switch statement in it
function cc(card) {
  // Only change code below this line
  var sString = "";
  switch(card) {
    case 2:
    case 3:
    case 4:
    case 5:
    case 6:
      count++;
      sString = count + " Bet";
      break;
    case 7:
    case 8:
    case 9:
      sString = count + " Hold";
      break;
    case 10:
    case 'J':
    case 'Q':
    case 'K':
    case 'A':
      count--;
      sString = count + " Hold";
      break;
  }
  return sString;
  // Only change code above this line
}
Blauelf
@Blauelf
Apr 13 2016 07:11 UTC
@Dominicbarretto Separate the card->count part from the count->return value part. Don't set the return value in your switch statement. The count->return value part does not care about the card, it depends only on count.
Dominic Barretto
@Dominicbarretto
Apr 13 2016 07:12 UTC
sorry i didnt get u @Blauelf
Blauelf
@Blauelf
Apr 13 2016 07:13 UTC
Pseudo-code
1. Examine card, change count accordingly
2.
Examine count, return the right answer
Those should be separate parts in your code, both in the same function, but separate. The return value depends on count only, so it should not be assembled in a path that depends on card.
Dominic Barretto
@Dominicbarretto
Apr 13 2016 07:20 UTC
but why dose it work for the other condtions ie
Cards Sequence 2, 3, 4, 5, 6 should return "5 Bet"
Cards Sequence 7, 8, 9 should return "0 Hold"
Cards Sequence 10, J, Q, K, A should return "-5 Hold"
Cards Sequence 3, 7, Q, 8, A should return "-1 Hold"
Cards Sequence 3, 2, A, 10, K should return "-1 Hold"
Blauelf
@Blauelf
Apr 13 2016 07:22 UTC
Just because a code sometimes returns the right result does not mean it's correct. If I am in Hamburg or London and I predict "at least some rain in the day", I'm right in most cases, yet that prediction is not that useful.
Dominic Barretto
@Dominicbarretto
Apr 13 2016 07:29 UTC
@Blauelf i did it ur way n it worked! but my question is why didnt it worked my way?
@Blauelf n e ways thanks!
CamperBot
@camperbot
Apr 13 2016 07:30 UTC
dominicbarretto sends brownie points to @blauelf :sparkles: :thumbsup: :sparkles:
:star: 1607 | @blauelf | http://www.freecodecamp.com/blauelf
mostlind
@mostlind
Apr 13 2016 07:37 UTC
Why should factorialize(0) return 1?
Markus Kiili
@Masd925
Apr 13 2016 07:40 UTC
@mostlind Factorial of 0 is defined that way.
mostlind
@mostlind
Apr 13 2016 07:42 UTC
@Masd925 weird.. so I just have to put an if to check if the argument is 0?
Markus Kiili
@Masd925
Apr 13 2016 07:42 UTC
@mostlind Yes.
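Blauelf's two-step pseudo-code for the counting cards challenge could be sketched like this (assuming the challenge's global `count` variable; the exact phrasing of the return strings follows the test cases quoted above):

```javascript
// Sketch of the two-step structure: card -> count, then count -> return value
var count = 0; // global, as in the challenge

function cc(card) {
  // Step 1: examine the card, adjust count
  switch (card) {
    case 2: case 3: case 4: case 5: case 6:
      count++;
      break;
    case 10: case 'J': case 'Q': case 'K': case 'A':
      count--;
      break;
    // 7, 8, 9 leave count unchanged
  }
  // Step 2: the return value depends on count alone, not on the card
  if (count > 0) {
    return count + " Bet";
  }
  return count + " Hold";
}
```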
mostlind @mostlind Apr 13 2016 07:42 UTC
thanks @Masd925

CamperBot @camperbot Apr 13 2016 07:42 UTC
mostlind sends brownie points to @masd925 :sparkles: :thumbsup: :sparkles:
:star: 1357 | @masd925 | http://www.freecodecamp.com/masd925

Blauelf @Blauelf Apr 13 2016 07:44 UTC
@Dominicbarretto Consider the sequences 2, "A" / 7, 8 / "A", 2 — all of them result in no change in count (after the second card, that is), yet your code would have returned the very same number of count sometimes with Bet, sometimes with Hold. According to the task description, the decision Bet/Hold should be a function of count only.
@mostlind There is one way to put 0 elements in an order, same for 1, but two for two, six ways for three elements, ... It makes a lot of sense if you consider that n! = n * (n-1)!, which means that 1! = 1 * 0!, or 0! = 1. And you don't have to treat 0 as a special case in all ways: you'll have to if you do it recursively, but you don't if you use a simple loop.

vetoCode @vetoCode Apr 13 2016 07:48 UTC
can someone help me? why fires the if and the else at the same time?! http://codepen.io/vetoCode/pen/GZQEgp?editors=0010

Olawale Akinseye @brainyfarm Apr 13 2016 07:58 UTC

Eldar Tinjić @EldarT90 Apr 13 2016 08:00 UTC
why nothing is showed in first box

Olawale Akinseye @brainyfarm Apr 13 2016 08:01 UTC
@mostlind, setting it as 0 would ensure that the law of exponent works consistently, and when finding permutations, as @Blauelf answered earlier, you only have one way to organise 0 element(s)
Hi @EldarT90!

Justin @daemedeor Apr 13 2016 08:02 UTC
@EldarT90 you need to actually call twitch() and then you need to add jquery

Olawale Akinseye @brainyfarm Apr 13 2016 08:02 UTC
Sorry for digression @EldarT90, are you done with the Wikipedia viewer?

Eldar Tinjić @EldarT90 Apr 13 2016 08:02 UTC
@brainyfarm @daemedeor hey guys ^^
@brainyfarm yes, you wanna check how it looks like ?
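As Blauelf notes, a loop-based factorialize needs no special case for 0: the loop body simply never runs and the initial value 1 falls out as 0!. A minimal sketch:

```javascript
function factorialize(num) {
  var result = 1;              // 0! === 1 comes out naturally
  for (var i = 2; i <= num; i++) {
    result *= i;
  }
  return result;
}

factorialize(0); // 1
factorialize(5); // 120
```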
Justin @daemedeor Apr 13 2016 08:03 UTC
hi @EldarT90 :D

Eldar Tinjić @EldarT90 Apr 13 2016 08:03 UTC
@brainyfarm http://codepen.io/EldarT/pen/qZxJZp - its basically 99% done

Justin @daemedeor Apr 13 2016 08:04 UTC
@EldarT90 nice i enjoyed your design :)

Eldar Tinjić @EldarT90 Apr 13 2016 08:04 UTC
@daemedeor thanks mate, however from where should i call the function? i want it to be "executed" asap, not on click like in the previous exercise

CamperBot @camperbot Apr 13 2016 08:04 UTC
eldart90 sends brownie points to @daemedeor :sparkles: :thumbsup: :sparkles:
:star: 337 | @daemedeor | http://www.freecodecamp.com/daemedeor

Justin @daemedeor Apr 13 2016 08:05 UTC
@EldarT90 just right below where twitch is defined, so just twitch(). you could also make it self invoking

Olawale Akinseye @brainyfarm Apr 13 2016 08:05 UTC
@EldarT90, is your image broken? And you did a good job man!

Eldar Tinjić @EldarT90 Apr 13 2016 08:05 UTC
@brainyfarm its not broken, you mean the wikipedia logo ?

Justin @daemedeor Apr 13 2016 08:06 UTC
the search text is a bit hard to read though

Eldar Tinjić @EldarT90 Apr 13 2016 08:06 UTC
@daemedeor like this? http://codepen.io/EldarT/pen/VaXLGV i called, but its not working

Olawale Akinseye @brainyfarm Apr 13 2016 08:06 UTC
@EldarT90, imgur would not work on codepen.io.

Justin @daemedeor Apr 13 2016 08:06 UTC
@EldarT90 yea but then you dont have jquery added

Eldar Tinjić @EldarT90 Apr 13 2016 08:07 UTC
@daemedeor aaa
@daemedeor sec

Olawale Akinseye @brainyfarm Apr 13 2016 08:07 UTC
You can add http://crossorigin.me before the image to make it work or host on Dropbox @EldarT90.

Eldar Tinjić @EldarT90 Apr 13 2016 08:07 UTC
@brainyfarm :/ but it works for me, so is it just me or ?
Olawale Akinseye @brainyfarm Apr 13 2016 08:08 UTC
It does not work for me @EldarT90 :P

Eldar Tinjić @EldarT90 Apr 13 2016 08:08 UTC
@brainyfarm check again i added that crossorigin thingy ^^

Justin @daemedeor Apr 13 2016 08:09 UTC
hmmm for the wiki image, it popped up fine for me

Eldar Tinjić @EldarT90 Apr 13 2016 08:09 UTC
@daemedeor i added jquery, still not working =(

Olawale Akinseye @brainyfarm Apr 13 2016 08:10 UTC

Ashutosh Parmar Jain @AshutoshParmar Apr 13 2016 08:10 UTC

Olawale Akinseye @brainyfarm Apr 13 2016 08:11 UTC
It only worked with crossorigin.me for me @daemedeor :P

Justin @daemedeor Apr 13 2016 08:11 UTC
shrug

Olawale Akinseye @brainyfarm Apr 13 2016 08:11 UTC
Could you share your code @AshutoshParmar?

Eldar Tinjić @EldarT90 Apr 13 2016 08:12 UTC
@brainyfarm check again ^^

Justin @daemedeor Apr 13 2016 08:12 UTC
@EldarT90 there's no stream property on your object

Eldar Tinjić @EldarT90 Apr 13 2016 08:12 UTC
@daemedeor but what about plain "blabla", not even that is showing up
@daemedeor which makes me think some syntax is wrong

Justin @daemedeor Apr 13 2016 08:12 UTC
once js breaks, it stops executing

Ashutosh Parmar Jain @AshutoshParmar Apr 13 2016 08:13 UTC

```
// Setup
var collection = {
  2548: {
    album: "Slippery When Wet",
    artist: "Bon Jovi",
    tracks: ["Let It Rock", "You Give Love a Bad Name"]
  },
  2468: {
    album: "1999",
    artist: "Prince",
    tracks: ["1999", "Little Red Corvette"]
  },
  1245: {
    artist: "Robert Palmer",
    tracks: []
  },
  5439: {
    album: "ABBA Gold"
  }
};
// Keep a copy of the collection for tests
var collectionCopy = JSON.parse(JSON.stringify(collection));

// Only change code below this line
function updateRecords(id, prop, value) {
  return collection;
}

// Alter values below to test your code
updateRecords(5439, "artist", "ABBA");
```

Justin @daemedeor Apr 13 2016 08:13 UTC
so you need to remove the data.streams[0] @EldarT90

Eldar Tinjić @EldarT90 Apr 13 2016 08:13 UTC
@daemedeor aa, sec

Ashutosh Parmar Jain @AshutoshParmar Apr 13 2016 08:13 UTC
@brainyfarm

Eldar Tinjić @EldarT90 Apr 13 2016 08:13 UTC
@daemedeor even when i remove, i dont see blabla :/ sorry i see sorry

Justin @daemedeor Apr 13 2016 08:13 UTC
:D here ya go

Olawale Akinseye @brainyfarm Apr 13 2016 08:14 UTC
@AshutoshParmar, now you need to do some testing + conditional statements.

Justin @daemedeor Apr 13 2016 08:14 UTC
gl;hf; XD

am i calling it on wrong way?

Olawale Akinseye @brainyfarm Apr 13 2016 08:15 UTC
Working fine now @EldarT90 :+1:

Eldar Tinjić @EldarT90 Apr 13 2016 08:15 UTC
@brainyfarm okay ^^

Justin @daemedeor Apr 13 2016 08:16 UTC
@EldarT90 are you sending the right headers? with the api key and all

Eldar Tinjić @EldarT90 Apr 13 2016 08:16 UTC
@daemedeor we need key for this api also ?
@daemedeor like the weather one or you mean keyvalue

Justin @daemedeor Apr 13 2016 08:17 UTC
@EldarT90 i think you do... but also this: Returns a stream object if live. so it might not be live?

Eldar Tinjić @EldarT90 Apr 13 2016 08:17 UTC
@daemedeor i chose 24/7 channel as testing
@daemedeor for testing* it doesnt say anything about api key, only streaming keys but thats for embedded streaming i think

Justin @daemedeor Apr 13 2016 08:19 UTC
hmmm

Dave @copendaven Apr 13 2016 08:22 UTC
can someone explain this in english please? I thought I had the language down but apparently not… "If you specify any object, including a Boolean object whose value is false, as the initial value of a Boolean object, the new Boolean object has a value of true."

```
var myFalse = new Boolean(false); // initial value of false
```

Justin @daemedeor Apr 13 2016 08:22 UTC
@EldarT90 it looks like the right api call but......

Eldar Tinjić @EldarT90 Apr 13 2016 08:23 UTC
@daemedeor now i even try with direct link to streamer they provided there http://codepen.io/EldarT/pen/VaXLGV

Justin @daemedeor Apr 13 2016 08:24 UTC
@EldarT90 it says this: Returns a stream object if live.
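Dave's quoted sentence about new Boolean is really about object truthiness: every object, including a Boolean object that wraps false, is truthy, so using one as the initial value of another Boolean object yields true. A quick demonstration (variable names are just illustrative):

```javascript
var myFalse = new Boolean(false); // an object, not the primitive false

// The primitive value inside is false...
myFalse.valueOf();                 // false

// ...but the object itself, like all objects, is truthy:
Boolean(myFalse);                  // true

// So wrapping it again gives a Boolean whose value is true:
new Boolean(myFalse).valueOf();    // true
```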
damn wait

Vlad Serebriakov @Vargentum Apr 13 2016 08:25 UTC
Can anybody explain the test case with Object.keys in https://www.freecodecamp.com/challenges/make-a-person ? I've defined methods of Person in the prototype, so they aren't visible from Object.keys. But if I do this, and it has 6 methods, I need some helper props like this._firstName and this._lastName to glue methods together. So there will be 8 keys. Any ideas?

Justin @daemedeor Apr 13 2016 08:26 UTC
@EldarT90 i have to think that you do need some authorization

Shivam Arora @shivamarora13 Apr 13 2016 08:26 UTC
how can I get value from a bootstrap modal?

Eldar Tinjić @EldarT90 Apr 13 2016 08:26 UTC
@daemedeor hmm

Frank XC @tenkdayz Apr 13 2016 08:27 UTC
@EldarT90 your link should be "https://api.twitch.tv/kraken/channels" + userName

Justin @daemedeor Apr 13 2016 08:27 UTC
@tenkdayz you're forgetting a slash i'm sure

Frank XC @tenkdayz Apr 13 2016 08:27 UTC
true

Justin @daemedeor Apr 13 2016 08:28 UTC
i haven't played around with the twitch api so i'll just leave you guys to figure it out

Rada @Radascript Apr 13 2016 08:28 UTC
hey guys, can someone explain why my loop skips over the second value? I made a copy array specifically so that shifting didn't mess up the indexes, but it seems to do so anyway:

Justin @daemedeor Apr 13 2016 08:28 UTC
XD

Eldar Tinjić @EldarT90 Apr 13 2016 08:28 UTC

Rada @Radascript Apr 13 2016 08:28 UTC

```
function dropElements(arr, func) {
  var arrCopy = arr;
  for (i = 0; i < arr.length; i++) {
    console.log(arr[i]);
    console.log(func(arr[i]));
    if (func(arr[i]) === false) {
      arrCopy.shift();
      console.log(arrCopy);
    } else {
      console.log("result " + arrCopy);
      return arrCopy;
    }
  }
}

dropElements([1, 2, 3, 4], function(n) { return n >= 3; });
```

Eldar Tinjić @EldarT90 Apr 13 2016 08:28 UTC
wont work :/

Rada @Radascript Apr 13 2016 08:29 UTC
sorry since question got lost: can someone explain why my loop skips over the second value?
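On Vlad's Object.keys question: the usual trick for the Make a Person challenge is to keep the name in a closure variable inside the constructor instead of this._firstName-style properties, so that the only own keys are the 6 methods. A sketch under that assumption:

```javascript
var Person = function (firstAndLast) {
  // kept in the closure, not on `this`, so Object.keys sees only methods
  var fullName = firstAndLast;

  this.getFullName = function () { return fullName; };
  this.getFirstName = function () { return fullName.split(" ")[0]; };
  this.getLastName = function () { return fullName.split(" ")[1]; };
  this.setFullName = function (name) { fullName = name; };
  this.setFirstName = function (first) {
    fullName = first + " " + fullName.split(" ")[1];
  };
  this.setLastName = function (last) {
    fullName = fullName.split(" ")[0] + " " + last;
  };
};

var bob = new Person("Bob Ross");
Object.keys(bob).length; // 6 — only the methods are own properties
```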
I made a copy array specifically so that shifting didn't mess up the indexes, but it seems to do so anyway

Shivam Arora @shivamarora13 Apr 13 2016 08:29 UTC
@Radascript declare var i

Justin @daemedeor Apr 13 2016 08:29 UTC
@shivamarora13 not an issue but you're right, one that should be corrected

Shivam Arora @shivamarora13 Apr 13 2016 08:29 UTC
@daemedeor I know, just told it, when saw it.

Rada @Radascript Apr 13 2016 08:30 UTC
@shivamarora13 you mean outside the for loop?

Frank XC @tenkdayz Apr 13 2016 08:30 UTC
@EldarT90 yes.. then create a var = data

Shivam Arora @shivamarora13 Apr 13 2016 08:30 UTC
no, like for(var i = 0; i<1; i++) @Radascript

Eldar Tinjić @EldarT90 Apr 13 2016 08:30 UTC
@tenkdayz its created by function ?
@tenkdayz i mean it is created already

Rada @Radascript Apr 13 2016 08:31 UTC
@shivamarora13 just tried it, same results in console

Justin @daemedeor Apr 13 2016 08:31 UTC
@Radascript are you sure you want to return arrCopy after its true?

Shivam Arora @shivamarora13 Apr 13 2016 08:31 UTC
@Radascript results might not change, but it was also a problem in your code.

Frank XC @tenkdayz Apr 13 2016 08:32 UTC
ok yes.. then use data to query the info you want to use from the obj

Rada @Radascript Apr 13 2016 08:32 UTC
@shivamarora13 I've never actually seen them use var within the for loop hm not sure that's the convention

Frank XC @tenkdayz Apr 13 2016 08:33 UTC
try data.display_name

Rada @Radascript Apr 13 2016 08:33 UTC
@daemedeor pretty sureee it's the Drop It challenge
Help Drop It

CamperBot @camperbot Apr 13 2016 08:33 UTC
## :point_right: algorithm drop it [wiki]
# Explanation:
Basically while the second argument is not true, you will have to remove the first element from the left of the array that was passed as the first argument.
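Two things bite Rada's dropElements code above: `var arrCopy = arr` copies only the reference (so shift() mutates the very array being iterated), and shifting while i advances skips elements. A sketch that avoids both, using slice() for a real copy and a while loop instead of an index:

```javascript
function dropElements(arr, func) {
  var arrCopy = arr.slice();   // a real copy, not just another reference
  // keep dropping from the front until func is satisfied (or nothing is left)
  while (arrCopy.length > 0 && !func(arrCopy[0])) {
    arrCopy.shift();
  }
  return arrCopy;
}

dropElements([1, 2, 3, 4], function (n) { return n >= 3; }); // [3, 4]
```

Because the copy is independent, the caller's array is left untouched.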
Eldar Tinjić @EldarT90 Apr 13 2016 08:33 UTC
@tenkdayz im not sure what you meant by that :/

Frank XC @tenkdayz Apr 13 2016 08:34 UTC

Justin @daemedeor Apr 13 2016 08:35 UTC
@Radascript i'm sure there's a problem.......?

Frank XC @tenkdayz Apr 13 2016 08:35 UTC
download a tidy api plugin for your browser to understand it

Eldar Tinjić @EldarT90 Apr 13 2016 08:35 UTC
@tenkdayz aa it works
@tenkdayz i found game and put game directly, and it says "Music"

Frank XC @tenkdayz Apr 13 2016 08:36 UTC
@EldarT90 yes thats all information about that streamer..

Justin @daemedeor Apr 13 2016 08:36 UTC
@Radascript no its definitely the convention to use for(var i = 0; i < arr.length; i++), you can declare it before your loop but frequently you can find it like that

Eldar Tinjić @EldarT90 Apr 13 2016 08:36 UTC
@tenkdayz yes i knew about this from previous exercises but the problem was i couldnt "target" it, i was using some other attribute first than game, but its direct
@tenkdayz anyway tnx alot, now i will try to isolate the ones i need

CamperBot @camperbot Apr 13 2016 08:37 UTC
eldart90 sends brownie points to @tenkdayz :sparkles: :thumbsup: :sparkles:
:star: 374 | @tenkdayz | http://www.freecodecamp.com/tenkdayz

Rada @Radascript Apr 13 2016 08:37 UTC
@daemedeor ah ok I'll start doing that ty

Frank XC @tenkdayz Apr 13 2016 08:37 UTC
@EldarT90 the most relevant properties are data.display_name, data.logo and data.status

Eldar Tinjić @EldarT90 Apr 13 2016 08:38 UTC
@tenkdayz oke ty once again. also, after i set it up for 1 box (streamer) i will have to make a loop for the other 16, i dont think its really efficient to do this for 16 individually, right ?

Frank XC @tenkdayz Apr 13 2016 08:39 UTC
yes, you will have to create an obj for each one and push those objs into an array at the end of the loop.

hjernefrys @hjernefrys Apr 13 2016 08:48 UTC
Hi, can someone help me to understand what's going on with my code, "Check for Palindromes". It passes all the tests except: "palindrome("A man, a plan, a canal. Panama") should return true." and "palindrome("My age is 0, 0 si ega ym.") should return true." Here is the code I have so far:

```
function palindrome(str) {
  // Good luck!
  var myArray = [];
  reverseString = str.toLowerCase();
  reverseString = str.replace(/[^a-z0-9]/g, "");
  originalString = str.toLowerCase();
  originalString = str.replace(/[^a-z0-9]/g, "");
  originalString = str.split("");
  // Reverses the string:
  for (var x = reverseString.length - 1; x >= 0; x--) {
    myArray.push(reverseString[x]);
  }
  originalString = myArray.join("");
  if (reverseString === originalString) {
    return true;
  } else {
    console.log(reverseString + " original: " + originalString);
    return false;
  }
}

palindrome("A man, a plan, a canal. Panama");
```

Rada @Radascript Apr 13 2016 08:51 UTC
@hjernefrys first thing I notice is make sure you are declaring variables (reverseString and originalString) and make sure you are console.logging often to see that each action is functioning properly. your second reverseString action is messing it up:

```
reverseString = str.replace(/[^a-z0-9]/g, "");
console.log("now it's " + reverseString);
```

negyvenketto @negyvenketto Apr 13 2016 08:55 UTC
@hjernefrys you need

```
reverseString = str.toLowerCase();
reverseString = reverseString.replace(/[^a-z0-9]/g, "");
```

hjernefrys @hjernefrys Apr 13 2016 08:55 UTC
aha, I'll try to change it

negyvenketto @negyvenketto Apr 13 2016 08:56 UTC
@Radascript you were faster :D

Rada @Radascript Apr 13 2016 08:56 UTC
@hjernefrys you can even tie them into one sentence if you'd like:

```
var reverseString = str.toLowerCase().replace(/[^a-z0-9]/g, "");
```

@negyvenketto it's not a race :D :P

hjernefrys @hjernefrys Apr 13 2016 08:57 UTC
@Radascript is that considered good practice? I find the code looks cleaner otherwise

negyvenketto @negyvenketto Apr 13 2016 08:58 UTC
@Radascript i know :D

hjernefrys @hjernefrys Apr 13 2016 08:58 UTC
thanks @Radascript and @negyvenketto it passed now

CamperBot @camperbot Apr 13 2016 08:58 UTC
hjernefrys sends brownie points to @radascript and @negyvenketto :sparkles: :thumbsup: :sparkles:
:star: 297 | @radascript | http://www.freecodecamp.com/radascript
:star: 352 | @negyvenketto | http://www.freecodecamp.com/negyvenketto

Eldar Tinjić @EldarT90 Apr 13 2016 08:59 UTC
ok so first problem is how to take username and put it in the json function, when i need to have the json function to get the username. how do i bypass this crap?

Rada @Radascript Apr 13 2016 08:59 UTC
@hjernefrys I have no idea actually. I'm pretty new, we should ask someone with more experience

Eldar Tinjić @EldarT90 Apr 13 2016 08:59 UTC
or actually i dont, i have an array, but should i put the whole array as username variable

Coy Sanders @coymeetsworld Apr 13 2016 09:00 UTC
think its fine to chain the functions together IMO

Eldar Tinjić @EldarT90 Apr 13 2016 09:00 UTC
and then go with a for loop ?

Coy Sanders @coymeetsworld Apr 13 2016 09:00 UTC
as long as its readable

negyvenketto @negyvenketto Apr 13 2016 09:01 UTC
@hjernefrys i'm pretty new as well, but i asked a senior developer often for code review, and it seems to me that the shorter, the more compact, the better

Mooli @Mooli88 Apr 13 2016 09:01 UTC
finished with the twitch tv zipline. please let me know what you think http://codepen.io/Mooli88/full/eZVbJw/

Rada @Radascript Apr 13 2016 09:01 UTC
I solved my Drop It challenge, now staring at the SteamRoll challenge trying to figure out how to start

hjernefrys @hjernefrys Apr 13 2016 09:01 UTC
@negyvenketto ok I see. Many languages have pretty strict guidelines on how to properly format the code

Rada @Radascript Apr 13 2016 09:02 UTC
@Mooli88 holy crap this makes me feel like I half-assed mine. Looks great

Justin @daemedeor Apr 13 2016 09:02 UTC
@hjernefrys its okay to chain it, but don't chain it too long, if its too long you can just newline it

Coy Sanders @coymeetsworld Apr 13 2016 09:03 UTC
yeah looks nice @Mooli88

Rada @Radascript Apr 13 2016 09:03 UTC
man this makes me wonder if I should fluff up my ziplines more than I do. That's all I did for the twitch one: http://codepen.io/RadaCodes/pen/ONzVdb

negyvenketto @negyvenketto Apr 13 2016 09:03 UTC
@hjernefrys would you like other ideas on how to make your palindrome code shorter/cleaner?

Justin @daemedeor Apr 13 2016 09:04 UTC

```
return string.split('').reverse().join('') === string;
```

done sorta haha

Mooli @Mooli88 Apr 13 2016 09:04 UTC
@Radascript lol thanks. i think i messed up with the account status. not sure how to tell if its closed or not

CamperBot @camperbot Apr 13 2016 09:04 UTC
mooli88 sends brownie points to @radascript :sparkles: :thumbsup: :sparkles:
:star: 298 | @radascript | http://www.freecodecamp.com/radascript

hjernefrys @hjernefrys Apr 13 2016 09:04 UTC
@negyvenketto sure, go for it!

Brendan Kinahan @BKinahan Apr 13 2016 09:05 UTC
@daemedeor var?

Justin @daemedeor Apr 13 2016 09:05 UTC
@BKinahan im tired

Brendan Kinahan @BKinahan Apr 13 2016 09:05 UTC
oh, edited

hjernefrys @hjernefrys Apr 13 2016 09:05 UTC
@daemedeor that reverse method is quite convenient :-P

Mooli @Mooli88 Apr 13 2016 09:06 UTC
@Radascript maybe consider to add target="_blank" since clicking on one of the channels doesn't do anything

Justin @daemedeor Apr 13 2016 09:06 UTC
@hjernefrys more so like this ....
```
var replacedString = // regex here too lzy to put;
return replacedString.split('').reverse().join('') === replacedString;
```

Rada @Radascript Apr 13 2016 09:07 UTC
@hjernefrys yeah looking back through older challenges you see all these things you coulda done easier. it's fun

Justin @daemedeor Apr 13 2016 09:07 UTC
2 lines worth of js to do the palindrome XD yippeeee

herochua @herochua Apr 13 2016 09:08 UTC
:( I did 1 whole block code for it haha

Eldar Tinjić @EldarT90 Apr 13 2016 09:08 UTC

```
$("#box"+[i]).append("<h1>" + data.display_name + "</h1>");
```

Rada @Radascript Apr 13 2016 09:08 UTC
@Mooli88 ohhh that's right tyty

Eldar Tinjić @EldarT90 Apr 13 2016 09:08 UTC
is this correct way @daemedeor
Apr 13 2016 09:09 UTC
fixxed
hjernefrys
@hjernefrys
Apr 13 2016 09:09 UTC
it's certainly shorter, but perhaps writing longer code has its advantages for the beginner as well, as it's easier to walk through each step and spot errors
Eldar Tinjić
@EldarT90
Apr 13 2016 09:09 UTC
@daemedeor when it comes to [i] addition to that
Mooli
@Mooli88
Apr 13 2016 09:09 UTC
can someone explain to me this bit
User Story: I will see a placeholder notification if a streamer has closed their Twitch account (or the account never existed). You can verify this works by adding brunofin and comster404 to your array of Twitch streamers.
Justin
@daemedeor
Apr 13 2016 09:10 UTC
@EldarT90 sure if its there
Eldar Tinjić
@EldarT90
Apr 13 2016 09:10 UTC
@daemedeor but it wont work, can you check it plz ? http://codepen.io/EldarT/pen/VaXLGV
Justin
@daemedeor
Apr 13 2016 09:10 UTC
@hjernefrys its better to understand than to do clever solutions like mine... it comes from experience
negyvenketto
@negyvenketto
Apr 13 2016 09:11 UTC
@hjernefrys ok, so:
1) you can define originalString and reverseString in this order, and reverse the original, that way you need the lowercase and regex only once.
2) you can reverse a string as @daemedeor has written it :point_up:
3) you can return a logical expression: when it's true, it will return true, when it's false, it will return false.
so the code becomes:
function palindrome(str) {
// Good luck!
var originalString = str.toLowerCase().replace(/[^a-z0-9]/g,"");
var reverseString = originalString.split('').reverse().join('');
return originalString === reverseString;
}
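For a quick check, negyvenketto's cleaned-up version (reproduced here) handles the two test cases that tripped up the original code:

```javascript
function palindrome(str) {
  var originalString = str.toLowerCase().replace(/[^a-z0-9]/g, "");
  var reverseString = originalString.split('').reverse().join('');
  return originalString === reverseString;
}

palindrome("A man, a plan, a canal. Panama"); // true
palindrome("My age is 0, 0 si ega ym.");      // true
palindrome("not a palindrome");               // false
```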
Justin
@daemedeor
Apr 13 2016 09:12 UTC
@negyvenketto i don't even bother with the making of a variable for the split reverse and join
negyvenketto
@negyvenketto
Apr 13 2016 09:12 UTC
it can be made even more compact, but i think this is pretty neat and easy to understand
usharya
@usharya
Apr 13 2016 09:12 UTC
hello everybody :)
Justin
@daemedeor
Apr 13 2016 09:12 UTC
hi @usharya
@negyvenketto you also have an extra array instantiated for near no reason XD
negyvenketto
@negyvenketto
Apr 13 2016 09:13 UTC
@daemedeor oh, yeah, i forgot to delete it :D
Flurb
@Flurb
Apr 13 2016 09:13 UTC
Anyone here knows how to convert ISO-8859-1 to UTF-8? I tried every page Google gives me for it
Anyone has experience with it?
negyvenketto
@negyvenketto
Apr 13 2016 09:13 UTC
@usharya hi :)
Eldar Tinjić
@EldarT90
Apr 13 2016 09:17 UTC
@daemedeor i mean if you know to what am i referring
Justin
@daemedeor
Apr 13 2016 09:17 UTC
@EldarT90 looking through it now, it looks like you have #img11?
@EldarT90 but you also have closure problems, since the last time i is defined, it'll be 5, so you should either do a self-invoking function and pass a parameter, a new scope (like with a .forEach) or use a named function to call that
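Justin's closure point can be shown without the Twitch code: with var, every callback created in the loop closes over the same i, so by the time a callback runs, i already holds its final value. A forEach (or an IIFE) gives each iteration its own binding. A minimal sketch, using stored functions instead of async calls so the effect is easy to inspect:

```javascript
// Broken pattern: all three functions close over the SAME `i`
var brokenFns = [];
for (var i = 0; i < 3; i++) {
  brokenFns.push(function () { return i; });
}
brokenFns.map(function (f) { return f(); }); // [3, 3, 3], not [0, 1, 2]

// Fixed: forEach calls its callback with a fresh `index` each time
var fixedFns = [];
["a", "b", "c"].forEach(function (value, index) {
  fixedFns.push(function () { return index; });
});
fixedFns.map(function (f) { return f(); }); // [0, 1, 2]
```

The same reasoning applies when the stored function is an AJAX success handler that writes into `#box` + i.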
karim khalfaoui
@Kaiiim
Apr 13 2016 09:24 UTC
hi everyone, can you help me please, i dont understand why my code doesn't works, for " Find the Longest Word in a String "
function findLongestWord(str) {
var i = str.split(' ');
var j = 0;
for ( j = 0; j < i.length; j++)
{
if (i[0].length < i[+1].length)
{
return i[+1].length;
}
else {
return i[0].length;
}
}
return i;
}
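For comparison with Kaiiim's attempt, a working findLongestWord has to track the maximum across all words instead of returning on the first comparison (and return a length computed in the loop, not inside an if/else that only looks at two fixed words). One possible sketch:

```javascript
function findLongestWord(str) {
  var words = str.split(' ');
  var longest = 0;
  for (var j = 0; j < words.length; j++) {
    if (words[j].length > longest) {
      longest = words[j].length;   // remember the longest length seen so far
    }
  }
  return longest;
}

findLongestWord("The quick brown fox jumped over the lazy dog"); // 6
```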
Brendan Kinahan
@BKinahan
Apr 13 2016 09:25 UTC
re: the palindrome discussion earlier, I think I have shortened mine a bit:
palindrome = s => s.toLowerCase().match(/[^_\W]/g).every((v,i,a)=>v==a[a.length-i-1]);
Chris Cullen
@123xylem
Apr 13 2016 09:26 UTC

```
// Example
function ourFunction(ourMin, ourMax) {
  return Math.floor(Math.random() * (ourMax - ourMin + 1)) + ourMin;
}
ourFunction(1, 9);

// Only change code below this line.
function randomRange(myMin, myMax) {
  if (Math.floor(Math.random() * (myMin + myMax)) <= myMin &&
      (Math.floor(Math.random() * (myMin + myMax) >= myMax))) {
    return (Math.floor(Math.random() * (myMin + myMax)));
  }
  else Math.floor(Math.random() * (myMin + myMax));
}

// Change these values to test your function
var myRandom = randomRange(9, 15);
```

Any idea why this isnt working? I need to make a number more than myMin and less than myMax
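The formula in the example already does the whole job, so a working randomRange needs no if/else at all; the branching above just re-rolls a new random number in each condition. A sketch:

```javascript
function randomRange(myMin, myMax) {
  // Math.random() is in [0, 1). Scaling by the count of possible integers,
  // (myMax - myMin + 1), and flooring gives 0 .. (myMax - myMin);
  // adding myMin shifts that to myMin .. myMax inclusive.
  return Math.floor(Math.random() * (myMax - myMin + 1)) + myMin;
}

var myRandom = randomRange(9, 15); // always an integer from 9 to 15
```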
Eldar Tinjić
@EldarT90
Apr 13 2016 09:26 UTC
@daemedeor well i can put .length, its not an issue, issue is to make 1 element work, then its easy to edit details :D
@daemedeor and 1 (any element) can work with streamers [0] or [5] or whatever, but NOT with for loop :/
so problem is in FOR loop
but i dont know where
Brendan Kinahan
@BKinahan
Apr 13 2016 09:26 UTC
@123xylem format your code, it's impossible to really read with the gitter markdown formatting like that with the italics instead of *
Olawale Akinseye
@brainyfarm
Apr 13 2016 09:27 UTC
wiki repeat
CamperBot
@camperbot
Apr 13 2016 09:27 UTC
# Problem Explanation:
• This task requires us to look at each possible permutation of a string. This is best done using a recursion function. Being able to build a function which collects all permutations of a string is a common interview question, so there is no shortage of tutorials out there on how to do this, in many different code languages.
• This task can still be daunting even after watching a tutorial. You will want to send each new use of the function 3 inputs: 1. a new string (or character array) that is being built, 2. a position in your new string that's going to be filled next, and 3. an idea of what characters (more specifically positions) from the original string have yet to be used. The pseudo code will look something like this:
```
var str = ???;
perm(current position in original string, what's been used in original string, current string build thus far) {
  if (current string is finished) {
    print current string;
  } else {
    for (var i = 0; i < str.length; i++) {
      if (str[i] has not been used) {
        put str[i] into the current position;
        mark str[i] as used
        perm(current position in original string, what's been used in original string, current string build thus far)
        unmark str[i] as used because another branch in the tree for i + 1 will still likely use it;
      }
```
Justin
@daemedeor
Apr 13 2016 09:35 UTC
@EldarT90 sorry, getting tired, hopefully someone can sort it out... my hint is function closure and probably console logging some responses
Kevin
@KevinBruland
Apr 13 2016 09:44 UTC
Is the "Passing Values to Functions with Arguments " section bugged currently?
Eldar Tinjić
@EldarT90
Apr 13 2016 09:44 UTC
@daemedeor oke mate dont worry thanks for your help, i also have to go afk for some hours
CamperBot
@camperbot
Apr 13 2016 09:44 UTC
eldart90 sends brownie points to @daemedeor :sparkles: :thumbsup: :sparkles:
:star: 340 | @daemedeor | http://www.freecodecamp.com/daemedeor
Eldar Tinjić
@EldarT90
Apr 13 2016 09:44 UTC
@daemedeor take care and cya later
Chris Cullen
@123xylem
Apr 13 2016 09:45 UTC
return Math.floor(Math.random() * ((myMax + 1) - myMin)) + myMin;
Can some1 explain how this equals a number <=MyMax and >=MyMin???
I dont get it
Kevin
@KevinBruland
Apr 13 2016 09:48 UTC
try substituting in numbers and it might help visualize why, @123xylem
Rujool Doshi
@rujool
Apr 13 2016 09:51 UTC
@123xylem
Lets say Math.random() returns x:
0 <= x < 1
Multiplying all sides by (myMax+1) - myMin:
0 <= x*((myMax+1)-myMin) < (myMax+1)-myMin
Adding myMin:
myMin <= x*((myMax+1)-myMin) < myMax + 1
taking floor:
Math.floor(myMin) <= Math.floor(x*((myMax+1)-myMin)) <= myMax
where x is Math.random()
Chris Cullen
@123xylem
Apr 13 2016 09:55 UTC
Eurgh... I still cant grasp it... I used to be good at maths
:) This makes me feel so stupid lol
Blauelf
@Blauelf
Apr 13 2016 09:56 UTC
@rujool If you don't want to use code blocks for that, you can escape * in chat by prepending a backslash like \* (same for _).
Rujool Doshi
@rujool
Apr 13 2016 10:00 UTC
@123xylem sorry in the second last step onwards the middle part will be x*((myMax+1)-myMin)+myMin
Chris Cullen
@123xylem
Apr 13 2016 10:00 UTC
lets say mymin=5 myMax=10... can some1 run through Math.floor(Math.random() * ((myMax + 1) - myMin)) + myMin; How that is always between 5-10 inclusive?
Rujool Doshi
@rujool
Apr 13 2016 10:01 UTC
Lets say Math.random() = 0.3
Then floor(0.3*((10+1)-5))+5
= floor(0.3*(11-5))+5
= floor(0.3*6)+5
= floor(1.8) + 5
= 1 + 5
= 6
Mohamed Ameen
@pmohdameen
Apr 13 2016 10:03 UTC
hey there,
I have a small doubt.
how to remove space from a string using String.replace() ?
Blauelf
@Blauelf
Apr 13 2016 10:03 UTC
0 <= x < 1
0 <= x * (10 - 5 + 1) < 10 - 5 + 1
5 <= x * 6 + 5 < 11
5 <= trunc(x * 6 + 5) <= 10 :)
Rujool Doshi
@rujool
Apr 13 2016 10:04 UTC
@Blauelf well explained
Blauelf
@Blauelf
Apr 13 2016 10:05 UTC
@pmohdameen "foo bar".replace(" ", "") maybe? What do you want to achieve with that?
Mohamed Ameen
@pmohdameen
Apr 13 2016 10:05 UTC
I want to remove white spaces along with special characters
Blauelf
@Blauelf
Apr 13 2016 10:05 UTC
@rujool Just following your path (and using \* instead of just *)
@pmohdameen Learn some regular expression then :)
help js regex
CamperBot
@camperbot
Apr 13 2016 10:06 UTC
## :point_right: js regex resources [wiki]
See also: :clipboard: Tutorials | :syringe: Testing | :soccer: Games | :newspaper: Blogs | :package: Software
Mohamed Ameen
@pmohdameen
Apr 13 2016 10:06 UTC
.replace(/ /g,'')
@Blauelf :) yeah. still this aint working .replace(/ /g,'')
Rujool Doshi
@rujool
Apr 13 2016 10:07 UTC
@pmohdameen u want to replace special characters also, so put them in the regex
UDAY PRAPHULLA MALANGAVE
@malangaveuday
Apr 13 2016 10:08 UTC
not case-sensitive and string will be random
example ["like", "Like"] ==> true
["like","keli"] => true
my code :

```
function mutation(arr) {
  return (arr[0].toLowerCase().indexOf(arr[1].toLowerCase())) !== -1;
}
mutation(["hello", "hey"]);
```

its not working for the second example
Mohamed Ameen
@pmohdameen
Apr 13 2016 10:09 UTC
@rujool I tried this
replace(/\s+/g, '');
still aint working
:(
Flurb
@Flurb
Apr 13 2016 10:09 UTC
Anyone here knows how to convert ISO-8859-1 to UTF-8? I tried every page when I google to it
Anyone has experience with it?
Rujool Doshi
@rujool
Apr 13 2016 10:09 UTC
@pmohdameen that would only work for whitespaces r8? which other characters do you want to replace?
Flurb
@Flurb
Apr 13 2016 10:09 UTC
Problem is: Our server responds with Latin1
And our frontend expects UTF-8
Blauelf
@Blauelf
Apr 13 2016 10:10 UTC
@pmohdameen Bad internet connection, so I'll write it again: Build a global (flag g) regex matching the to-be-removed characters. And remember that strings are immutable, so replace returns another string, you have to use that return value.
Flurb
@Flurb
Apr 13 2016 10:10 UTC
Best solution is ofcourse fix it in the backend, but in frontend is maybe faster
Mohamed Ameen
@pmohdameen
Apr 13 2016 10:11 UTC
@Blauelf @rujool will this help ?
val.replace(/^[^a-zA-Z0-9]|[^a-zA-Z0-9]$/g, '');

Blauelf
@Blauelf
Apr 13 2016 10:12 UTC
@Flurb Can you live with some characters not being able to be encoded in ISO-8859-1? So no names (where people might have Spanish or Turkish names, or even Eastern European or Cyrillic)

Rujool Doshi
@rujool
Apr 13 2016 10:13 UTC
@pmohdameen You have to remove the ^ and $ from the beginning and end respectively. Also use only one bracket
@pmohdameen Whats the difference between the two brackets
Chris Cullen
@123xylem
Apr 13 2016 10:14 UTC
Math.floor(Math.random() * ((myMax + 1) - myMin)) + myMin; — Is this easy to understand for you guys? Imagine you're explaining it to a 10 year old... Could you try explaining it again? :)
Blauelf
@Blauelf
Apr 13 2016 10:14 UTC
@pmohdameen No need to be at the start or end of the string, so just val.replace(/[^a-zA-Z0-9]+/g, '') (also, remember you need that assignment, as replace cannot change the immutable string)
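Blauelf's two points (one global character class, and assigning the return value since strings are immutable) look like this in practice; val is just an example variable:

```javascript
var val = "Hello, world! 123";

// replace() returns a NEW string; the original is unchanged
var cleaned = val.replace(/[^a-zA-Z0-9]+/g, '');

val;     // "Hello, world! 123"  (untouched)
cleaned; // "Helloworld123"      (spaces and punctuation removed)
```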
Mohamed Ameen
@pmohdameen
Apr 13 2016 10:15 UTC
@rujool @Blauelf :) let me try,
Blauelf
@Blauelf
Apr 13 2016 10:15 UTC
@rujool There's no difference between the brackets, just that one will only match at string start, the other only at string end. No idea why it's done like that.
UDAY PRAPHULLA MALANGAVE
@malangaveuday
Apr 13 2016 10:15 UTC
matching strings in array
no case-sensitive and string will be random
example ["like", "Like"] ==> true
["like","keli"] => true
my code :

```
function mutation(arr) {
  return (arr[0].toLowerCase().indexOf(arr[1].toLowerCase())) !== -1;
}
mutation(["hello", "hey"]);
```

its not working for the second example
Flurb
@Flurb
Apr 13 2016 10:16 UTC
@Blauelf
Its something like this:
Sîne klâwen durh die wolken sint geslagen
After trying to convert, it shows:
S�ne kl�wen durh die wolken sint geslagen
Sîne klâwen durh die wolken sint geslagen is what Im getting back from the backend
Not converting turns into: S�ne kl�wen durh die wolken sint geslagen
That wrong :smile:
Blauelf
@Blauelf
Apr 13 2016 10:18 UTC
@malangaveuday That would only work if the whole string would be included in the other string. You'll have to check for individual characters to all be included in the first string.
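Blauelf's suggestion, checking that every individual character of the second word appears somewhere in the first, can be sketched like this (one common way to pass the Mutations challenge):

```javascript
function mutation(arr) {
  var source = arr[0].toLowerCase();
  var target = arr[1].toLowerCase();
  // every character of target must occur somewhere in source
  return target.split('').every(function (ch) {
    return source.indexOf(ch) !== -1;
  });
}

mutation(["like", "Like"]); // true
mutation(["like", "keli"]); // true  (order doesn't matter)
mutation(["hello", "hey"]); // false ("y" is missing)
```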
Flurb
@Flurb
Apr 13 2016 10:19 UTC
I wanna have UTF-8
Because frontend expects that
UDAY PRAPHULLA MALANGAVE
@malangaveuday
Apr 13 2016 10:25 UTC
@Blauelf Thank you, I will try to search any solution for this.
CamperBot
@camperbot
Apr 13 2016 10:25 UTC
malangaveuday sends brownie points to @blauelf :sparkles: :thumbsup: :sparkles:
:star: 1608 | @blauelf | http://www.freecodecamp.com/blauelf
Chris Cullen
@123xylem
Apr 13 2016 10:31 UTC
Math.floor(Math.random() * ((myMax + 1) - myMin)) + myMin; ---------------------------------------does the max+1 always guarantee you can get a number == max?
Blauelf
@Blauelf
Apr 13 2016 10:31 UTC
@Flurb I understand the problem, and the solution I am most familiar with is making it all UTF-8, in every place, everywhere (has quite some performance impact on counting characters). Those codes for î and â somehow look the same to me (I think it's the UTF-8 encoding of that unknown-character sign below), so it looks as if encoding already broke things completely.
@123xylem It does guarantee that you get (myMax - myMin + 1) different states. If your first state is myMin, then the last is myMax. Those should be integers in any case ;)
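Put together as a function, the formula under discussion looks like this (plain JavaScript; inclusive of both ends when myMin and myMax are integers):

```javascript
function randomInRange(myMin, myMax) {
  // Math.random() is in [0, 1), so the product is in [0, myMax + 1 - myMin);
  // flooring gives an integer in [0, myMax - myMin], and adding myMin shifts it up.
  return Math.floor(Math.random() * (myMax + 1 - myMin)) + myMin;
}
```

The `+ 1` is exactly what makes `myMax` itself reachable; without it the largest possible result would be `myMax - 1`.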
Flurb
@Flurb
Apr 13 2016 10:33 UTC
We use a database that doesn't use UTF-8, so thats not a solution
Blauelf
@Blauelf
Apr 13 2016 10:35 UTC
@Flurb The codes you showed us already have the information destroyed, can you track where that happens? At which point î and â are still different?
Flurb
@Flurb
Apr 13 2016 10:35 UTC
Its one sample code.
We have a JSON response
in ISO-8859-1
Lets say 'r'
What I do is unescape(encodeURIComponent(r))
response of r is Sîne klâwen durh die wolken sint geslagen is what Im getting back from the backend
output of unescape(encodeURIComponent(r)) is S�ne kl�wen durh die wolken sint geslagen
Thats it :package:
Thats it :smile: lol
Chris Cullen
@123xylem
Apr 13 2016 10:38 UTC
@Blauelf if it was just --- Math.floor(Math.random() * ((myMax + 1) ---- Would that always be max or less inclusive/
?
Blauelf
@Blauelf
Apr 13 2016 10:38 UTC
Have you tried using decodeURIComponent instead of unescape?
@123xylem Your parentheses don't match (missing two closing ones at least)
If you add two closing ), you effectively get the same code as for myMin=0
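For reference, the legacy escape/decodeURIComponent round-trip that is often used to repair UTF-8 text mis-decoded as Latin-1; whether it applies here depends on where the bytes actually get mangled in Flurb's pipeline:

```javascript
// "Sîne" is what appears when the UTF-8 bytes of "Sîne" (0xC3 0xAE for î)
// are decoded as Latin-1. escape() re-serializes each character as a %XX
// byte escape, and decodeURIComponent() reads those bytes back as UTF-8.
function latin1ToUtf8(misdecoded) {
  return decodeURIComponent(escape(misdecoded));
}
```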
Flurb
@Flurb
Apr 13 2016 10:41 UTC
Turns out into S�ne kl�wen durh die wolken sint geslagen
Haha
Damn, tricky shizzle
Chris Cullen
@123xylem
Apr 13 2016 10:41 UTC
I know, I'm just trying to understand the code
piece by piece
Math.floor(Math.random() * ((myMax + 1) - myMin)) + myMin;
Justin
@daemedeor
Apr 13 2016 10:41 UTC
@Flurb what was the solution?
Chris Cullen
@123xylem
Apr 13 2016 10:41 UTC
That doesnt make intuitive sense to me so im trying to break it down
Flurb
@Flurb
Apr 13 2016 10:41 UTC
I dont have the solution yet :smile:
E YG
@laed37
Apr 13 2016 10:42 UTC
Tip for those working on their intermediate front end development projects...CONSOLE LOGGING IS A GODSEND TO TROUBLESHOOT YOUR PROBLEMS. I'm sure all the experts here can agree with that haha
Justin
@daemedeor
Apr 13 2016 10:42 UTC
@Flurb oh hmmm
@laed37 ya it is... did you also know you can do console.table and console.warn
E YG
@laed37
Apr 13 2016 10:43 UTC
I spent almost 5 hours trying to fix my quote generator, it wasn't until in the last 10 mins I decided to console log each function's output to see the problem.
Justin
@daemedeor
Apr 13 2016 10:43 UTC
and also @laed37 did you know you can print out objects with phrases like console.log("hey",a.anotherObj,"this works");
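All of the console helpers mentioned here are standard in browsers and Node; a tiny sketch (the object `a` is made up for illustration):

```javascript
var a = { anotherObj: { id: 1, name: 'demo' } }; // hypothetical object, just for the demo
console.log('hey', a.anotherObj, 'this works');  // logs the live object between the strings
console.warn('heads up');                        // same as log, but styled/routed as a warning
console.table([{ x: 1 }, { x: 2 }]);             // renders array/object data as a table
```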
usharya
@usharya
Apr 13 2016 10:43 UTC
for (var i = 0; i <= 5; i++) {
myArray.push();
}
I am trying to push 1-5 to myArray, but its not working?
Brendan Kinahan
@BKinahan
Apr 13 2016 10:43 UTC
@usharya push needs an argument
Blauelf
@Blauelf
Apr 13 2016 10:43 UTC
@usharya You push nothing. Try myArray.push(i);
usharya
@usharya
Apr 13 2016 10:44 UTC
Ahh Thank you once more @Blauelf @BKinahan
You guys are online most of the times?
CamperBot
@camperbot
Apr 13 2016 10:44 UTC
usharya sends brownie points to @blauelf and @bkinahan :sparkles: :thumbsup: :sparkles:
:star: 1609 | @blauelf | http://www.freecodecamp.com/blauelf
:star: 1273 | @bkinahan | http://www.freecodecamp.com/bkinahan
Brendan Kinahan
@BKinahan
Apr 13 2016 10:44 UTC
@laed37 congrats, you have learned to debug
Justin
@daemedeor
Apr 13 2016 10:44 UTC
console and all its methods are insane asylumns
XD
@Flurb are you using jquery?
E YG
@laed37
Apr 13 2016 10:46 UTC
my quote generator was sort of working before but it didnt always return a quote, turns out I 'forgot' that the filter function for my json iterates through each object and I was generating a new random number for each object (thirty in total)...all I did was move the random number var out of the filter function.
i feel accomplished lol...even though if I showed you guys the code.. you'd probably spot the issue in a heartbeat
Justin
@daemedeor
Apr 13 2016 10:47 UTC
@laed37 its a process
Flurb
@Flurb
Apr 13 2016 10:47 UTC
@Justin no React
jQuery is oldschool :smile:
Justin
@daemedeor
Apr 13 2016 10:48 UTC
@Flurb hmmmm did you try setting the header to contentType: "application/x-www-form-urlencoded;charset=ISO-8859-15",?
cannelflow
@cannelflow
Apr 13 2016 10:48 UTC
hey @BKinahan you there
long time WB
Brendan Kinahan
@BKinahan
Apr 13 2016 10:48 UTC
@cannelflow howdy
@cannelflow how's it going?
cannelflow
@cannelflow
Apr 13 2016 10:49 UTC
@BKinahan good @BKinahan finished d3
Brendan Kinahan
@BKinahan
Apr 13 2016 10:49 UTC
nice :)
cannelflow
@cannelflow
Apr 13 2016 10:49 UTC
@BKinahan need some help with leaflet though
can you help
Brendan Kinahan
@BKinahan
Apr 13 2016 10:50 UTC
probably not, but maybe :D
Flurb
@Flurb
Apr 13 2016 10:51 UTC
@Justin frontend expects UTF-8. We have meta charset="utf-8"
Thats the problem
Justin
@daemedeor
Apr 13 2016 10:51 UTC
@Flurb when you grab it!
oh wait
hmm
Flurb
@Flurb
Apr 13 2016 10:52 UTC
Backend is ISO-8859-1, frontend is UTF-8
Justin
@daemedeor
Apr 13 2016 10:52 UTC
can't you change from charset utf-8?
cannelflow
@cannelflow
Apr 13 2016 10:52 UTC
this is what i got so far https://jsfiddle.net/cannelflow/fzrp4uvc/ @BKinahan need bigger radius and some help on implementing tool tip
Chris Cullen
@123xylem
Apr 13 2016 10:52 UTC
Math.floor(Math.random() * ((myMax + 1) - myMin)) + myMin; Can i just ask... How fast would you need to be able to work this out if you were making a number between min max inclusive? It would take me a long time to work it out...
Mohamed Ameen
@pmohdameen
Apr 13 2016 10:52 UTC
@Blauelf thanks mahn
CamperBot
@camperbot
Apr 13 2016 10:52 UTC
pmohdameen sends brownie points to @blauelf :sparkles: :thumbsup: :sparkles:
Flurb
@Flurb
Apr 13 2016 10:52 UTC
That fucks a whole lot up haha
CamperBot
@camperbot
Apr 13 2016 10:52 UTC
:star: 1610 | @blauelf | http://www.freecodecamp.com/blauelf
Justin
@daemedeor
Apr 13 2016 10:52 UTC
@Flurb hmmmmmmmmmm solutions solutions
Flurb
@Flurb
Apr 13 2016 10:52 UTC
I just want to convert haha
Justin
@daemedeor
Apr 13 2016 10:54 UTC
@Flurb the thing is ... there is no simple convert like i said XD
or are you pulling straight
Flurb
@Flurb
Apr 13 2016 10:54 UTC
Backend is Latin1 (ISO-8859-1)
So UTF-8 has all the characters of Latin1, except on other places
Justin
@daemedeor
Apr 13 2016 10:55 UTC
no no i mean like language
Flurb
@Flurb
Apr 13 2016 10:55 UTC
Java
Spring
Justin
@daemedeor
Apr 13 2016 10:55 UTC
oh okay
@Flurb did you look up java solutions converting it to utf-8?
usharya
@usharya
Apr 13 2016 10:56 UTC
how will you make the for loop for "Iterate Through an Array with a For Loop"?
wiki Iterate Through an Array with a For Loop
CamperBot
@camperbot
Apr 13 2016 10:57 UTC
# Challenge: Iterate Through an Array with a For Loop
A common task in Javascript is to iterate through the contents of an array. One way to do that is with a for loop. This code will output each element of the array arr to the console:
var arr = [10,9,8,7,6];
for (var i=0; i < arr.length; i++) {
console.log(arr[i]);
}
Remember that Arrays have zero-based numbering, which means the last index of the array is length - 1. Our condition for this loop is i < arr.length, which stops when i is at length - 1.
usharya
@usharya
Apr 13 2016 11:03 UTC
how will you make the for loop for "Iterate Through an Array with a For Loop"?
anyone there?
usharya
@usharya
Apr 13 2016 11:08 UTC
anyway, my code
var total = 0;
for (i = 0; i < myArr.length; i++){
total = total + myArr;
}
Rujool Doshi
@rujool
Apr 13 2016 11:09 UTC
@usharya u have to access the element of the array myArr in each iteration, try to figure that out.
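The hint amounts to indexing the array with the loop counter; a sketch with made-up numbers:

```javascript
var myArr = [2, 3, 4, 5, 6]; // example data; the challenge supplies its own array
var total = 0;
for (var i = 0; i < myArr.length; i++) {
  total = total + myArr[i]; // myArr[i] is one element; bare myArr is the whole array
}
```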
usharya
@usharya
Apr 13 2016 11:11 UTC
thats where I am having the problem. I don't know what to do :(
Ultras05
@Ultras05
Apr 13 2016 11:15 UTC
i need some help
function convertToF(celsius) {
// Only change code below this line
celsius = 9/5;
// Only change code above this line
return fahrenheit ;
}
// Change the inputs below to test your code
convertToF(20);
function convertToF(celsius) {
// Only change code below this line
// Only change code above this line
return fahrenheit;
}
// Change the inputs below to test your code
convertToF(30);
tom
@tpercival01
Apr 13 2016 11:22 UTC
Can anyone help me with "Make object properties private"?
dennis-noah
@dennis-noah
Apr 13 2016 11:24 UTC
hey everyone
I am struggling with a not that difficult javascript task
can anyone help me?
Rujool Doshi
@rujool
Apr 13 2016 11:24 UTC
@usharya read up on how to access array elements in javascript
dennis-noah
@dennis-noah
Apr 13 2016 11:24 UTC
var result = "";
// Your code below this line
myNoun="dog";
myVerb="ran";
// Your code above this line
return result;
}
// Change the words here to test your function
wordBlanks(myNoun + "" + myAdjective + "" + myVerb + "" + myAdverb);
this is my code
first of all, it tells me : referenceerror: myNoun is not defined even though I clearly define it?
kirbyedy
@kirbyedy
Apr 13 2016 11:26 UTC
in order to define it you need a var infront
but anyway the approach you are taking is not good
dennis-noah
@dennis-noah
Apr 13 2016 11:26 UTC
@kirbyedy what should I change?
@kirbyedy but it was already defined in the function
kirbyedy
@kirbyedy
Apr 13 2016 11:26 UTC
it should be similar to this line: myNoun + "" + myAdjective + "" + myVerb + "" + myAdverb);
dennis-noah
@dennis-noah
Apr 13 2016 11:26 UTC
Nari Roh
@NariRoh
Apr 13 2016 11:27 UTC
help Golf Code
CamperBot
@camperbot
Apr 13 2016 11:27 UTC
# Details
We will now use our knowledge about else if statements and comparison with equality, less and greater operators.
In the game of golf each hole has a par for the average number of strokes needed to sink the ball. Depending on how far above or below par your strokes are, there is a different nickname.
Your function will be passed a par and strokes. Return strings according to this table (based on order of priority - top (highest) to bottom (lowest)):
Strokes Return
1 "Hole-in-one!"
<= par - 2 "Eagle"
par - 1 "Birdie"
par "Par"
par + 1 "Bogey"
par + 2 "Double Bogey"
>= par + 3 "Go Home!"
par and strokes will always be numeric and positive.
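One way to express that priority table as an else-if chain, checked top to bottom (a sketch of the challenge, not an official solution):

```javascript
function golfScore(par, strokes) {
  if (strokes === 1) return "Hole-in-one!";
  else if (strokes <= par - 2) return "Eagle";
  else if (strokes === par - 1) return "Birdie";
  else if (strokes === par) return "Par";
  else if (strokes === par + 1) return "Bogey";
  else if (strokes === par + 2) return "Double Bogey";
  else return "Go Home!"; // strokes >= par + 3
}
```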
Ultras05
@Ultras05
Apr 13 2016 11:27 UTC
function convertToF(celsius) {
// Only change code below this line
// Only change code above this line
return fahrenheit;
}
// Change the inputs below to test your code
convertToF(30);
dennis-noah
@dennis-noah
Apr 13 2016 11:27 UTC
@kirbyedy thats what I did isnt it?: wordBlanks(myNoun + "" + myAdjective + "" + myVerb + "" + myAdverb);
kirbyedy
@kirbyedy
Apr 13 2016 11:27 UTC
not really
// Your code below this line
myNoun="dog";
myVerb="ran";
// Your code above this line
return result;
this is what you wrote
note the return
dennis-noah
@dennis-noah
Apr 13 2016 11:28 UTC
@kirbyedy if I define it with var it tells me myNoun is already defined
kirbyedy
@kirbyedy
Apr 13 2016 11:28 UTC
as I said the approach is wrong from the start
but assign it to the result
dennis-noah
@dennis-noah
Apr 13 2016 11:29 UTC
@kirbyedy but it says your code below this line
kirbyedy
@kirbyedy
Apr 13 2016 11:29 UTC
because as you see in the code that i pasted you are returning the result
dennis-noah
@dennis-noah
Apr 13 2016 11:29 UTC
@kirbyedy ahh
Ultras05
@Ultras05
Apr 13 2016 11:29 UTC
I need some help plz
dennis-noah
@dennis-noah
Apr 13 2016 11:30 UTC
@Ultras05 with what?
@kirbyedy but it still says myNoun is not defined
Ultras05
@Ultras05
Apr 13 2016 11:30 UTC
function convertToF(celsius) {
// Only change code below this line
// Only change code above this line
return fahrenheit;
}
// Change the inputs below to test your code
convertToF(30);
dennis-noah
@dennis-noah
Apr 13 2016 11:30 UTC
@kirbyedy myNoun = "dog"
it says not defined all the time
@Ultras05 do you not understand the task?
Ultras05
@Ultras05
Apr 13 2016 11:30 UTC
Yup
dennis-noah
@dennis-noah
Apr 13 2016 11:31 UTC
Aleksa Rakic
@aleksarakic
Apr 13 2016 11:31 UTC
Can someone explain me what is happening in this code?
var an_obj = { 100: 'a', 2: 'b', 7: 'c' };
console.log(Object.keys(an_obj)); // console: ['2', '7', '100']
Why its [2,7,100] and not [100,2,7]?
Justin
@daemedeor
Apr 13 2016 11:32 UTC
@aleksarakic object keys don't inherently have an order
so it's arbitrary
dennis-noah
@dennis-noah
Apr 13 2016 11:33 UTC
@Ultras05 ?
Ultras05
@Ultras05
Apr 13 2016 11:33 UTC
function convertToF(celsius) {
// Only change code below this line
// Only change code above this line
return fahrenheit;
}
// Change the inputs below to test your code
convertToF(30);
dennis-noah
@dennis-noah
Apr 13 2016 11:33 UTC
@kirbyedy pleasee help me. I really dont get it
kirbyedy
@kirbyedy
Apr 13 2016 11:34 UTC
erase what you have, reset the challenge
try to assign your arguments to the result
your last line says 'return result'
dennis-noah
@dennis-noah
Apr 13 2016 11:35 UTC
@kirbyedy thank you :* I got it
CamperBot
@camperbot
Apr 13 2016 11:35 UTC
dennis-noah sends brownie points to @kirbyedy :sparkles: :thumbsup: :sparkles:
:star: 848 | @kirbyedy | http://www.freecodecamp.com/kirbyedy
dennis-noah
@dennis-noah
Apr 13 2016 11:35 UTC
:package:
:panda_face:
kirbyedy
@kirbyedy
Apr 13 2016 11:35 UTC
paste it here
dennis-noah
@dennis-noah
Apr 13 2016 11:35 UTC
@kirbyedy it worked
kirbyedy
@kirbyedy
Apr 13 2016 11:36 UTC
let me see what you did
dennis-noah
@dennis-noah
Apr 13 2016 11:36 UTC
@kirbyedy okay wait
var result = "";
// Your code below this line
result = myNoun + " " + myAdjective + " " + myVerb + " " + myAdverb;
// Your code above this line
return result;
}
// Change the words here to test your function
wordBlanks("dog", "big", "ran", "quickly");
kirbyedy
@kirbyedy
Apr 13 2016 11:36 UTC
good
dennis-noah
@dennis-noah
Apr 13 2016 11:36 UTC
@kirbyedy Thanks for your help, appreciate it!
CamperBot
@camperbot
Apr 13 2016 11:36 UTC
dennis-noah sends brownie points to @kirbyedy :sparkles: :thumbsup: :sparkles:
:warning: dennis-noah already gave kirbyedy points
kirbyedy
@kirbyedy
Apr 13 2016 11:36 UTC
:thumbsup:
Ultras05
@Ultras05
Apr 13 2016 11:37 UTC
@dennis-noah ?
dennis-noah
@dennis-noah
Apr 13 2016 11:37 UTC
@Ultras05 if you want my help send me the description of the task. I know how it goes but I forgot the formula for the Celsius/Fahrenheit conversion
@kirbyedy By the way, nice page! Did you make it during your training here?
Aleksa Rakic
@aleksarakic
Apr 13 2016 11:38 UTC
@daemedeor i am trying to understand that, but I cant :) How it is arbitrary? When it is ordered this odd way, and when it is 'normal'?
Ultras05
@Ultras05
Apr 13 2016 11:38 UTC
convertToF(0) should return a number
convertToF(-30) should return a value of -22
convertToF(-10) should return a value of 14
convertToF(0) should return a value of 32
convertToF(20) should return a value of 68
convertToF(30) should return a value of 86
here's the description
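For reference, the conversion the challenge wants is F = C * 9/5 + 32; a sketch:

```javascript
function convertToF(celsius) {
  // Fahrenheit is Celsius scaled by 9/5 with the freezing point shifted to 32.
  var fahrenheit = celsius * 9 / 5 + 32;
  return fahrenheit;
}
```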
Apr 13 2016 11:38 UTC
Change the provided string from double to single quotes and remove the escaping.
var myStr = '<a href = "http://www.example.com" target = "_blank" >Link</a>';
kirbyedy
@kirbyedy
Apr 13 2016 11:39 UTC
@dennis-noah well I did some html before, so I am not totally new to this, but still need lot to learn
Apr 13 2016 11:39 UTC
Change the provided string from double to single quotes and remove the escaping.
var myStr = '<a href = "http://www.example.com" target = "_blank" >Link</a>';
can help me
Aleksa Rakic
@aleksarakic
Apr 13 2016 11:45 UTC
var an_obj = { 100: 'a', 2: 'b', 7: 'c' };
console.log(Object.keys(an_obj)); // console: ['2', '7', '100']
Why not [100,2,7]? When are keys ordered 'normal' way and when like this?
dennis-noah
@dennis-noah
Apr 13 2016 11:48 UTC
$(document).ready(function){$("h1").addClass("animated bounce");
};
where do I have a missing or wrong token here?
Aleksa Rakic
@aleksarakic
Apr 13 2016 11:48 UTC
anyone?
kirbyedy
@kirbyedy
Apr 13 2016 11:49 UTC
dennis-noah
@dennis-noah
Apr 13 2016 11:49 UTC
• $(document).ready(function){$("h1").addClass("animated bounce");
});
$(function() {$("h1").addClass("animated bounce");
});
Dmitry Frolov
@ayatorii
Apr 13 2016 11:51 UTC
Hi guys
cannelflow
@cannelflow
Apr 13 2016 11:52 UTC
@dennis-noah which exercise?
@dennis-noah i think you need to target something else
Dmitry Frolov
@ayatorii
Apr 13 2016 11:52 UTC
I have a brief question about javascript objects
Olawale Akinseye
@brainyfarm
Apr 13 2016 11:52 UTC
Go on @ayatorii
cannelflow
@cannelflow
Apr 13 2016 11:52 UTC
@ayatorii yes you can ppl will help if they can
Shivam Arora
@shivamarora13
Apr 13 2016 11:53 UTC
when I pass idStore = '1'
if (idStore === '2' || '4' || '6' || '8') {
$("#5").replaceWith('<p id="5">' + comp + '</p>'); } else if (idStore === '1' || '3' || '9' || '5') {$("#7").replaceWith('<p id="7">' + comp + '</p>');
}
with this code
than also, only the if loop runs, and not the else if,
even though I think, else if loop should run
can anyone tell why?
Dmitry Frolov
@ayatorii
Apr 13 2016 11:54 UTC
following Javascript lessons here on FreeCodeCamp i noticed, that properties in objects sometimes typed as strings in quotes, and sometimes without them. What does it mean?
I understand that values can be anything, i.e. arrays, strings etc
Olawale Akinseye
@brainyfarm
Apr 13 2016 11:54 UTC
The correct way to do it @dennis-noah is:
$(document).ready(function{$("h1").addClass("animated bounce");
});
You added an unnecessary ) after function
cannelflow
@cannelflow
Apr 13 2016 11:54 UTC
@ayatorii without strings are variable i guess
Dmitry Frolov
@ayatorii
Apr 13 2016 11:55 UTC
in lessons i encountered this: "Hunter" : Doberman, ... And this: 16: Ibragimobich, 27: ...
does it make difference to how i get a value from a property of an object?
Islam Ibakaev
@dagman
Apr 13 2016 11:56 UTC
@aleksarakic the documentation says that this method traverse object keys in same order like for...in method do.
Dmitry Frolov
@ayatorii
Apr 13 2016 11:57 UTC
var myDog = {
"name": "Coder",
"legs": 4,
"tails": 1,
"friends": ["Free Code Camp Campers"]
};
var testObj = {
12: "Namath",
16: "Montana",
19: "Unitas"
};
myDog has properties as strings, testObj has props as numbers
Islam Ibakaev
@dagman
Apr 13 2016 11:58 UTC
@aleksarakic the documentation says that for...in traverse object keys in arbitrary order
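For what it's worth, modern engines follow a specified order for ordinary objects: integer-like keys come first in ascending numeric order, then the remaining string keys in insertion order. That is why the example comes out sorted; a sketch (relying on key order is still best avoided):

```javascript
var an_obj = { 100: 'a', 2: 'b', 7: 'c' };
// All keys look like array indices, so they come back in ascending numeric order.
var keys = Object.keys(an_obj); // ['2', '7', '100']

var mixed = { b: 1, 10: 2, a: 3, 2: 4 };
// Index-like keys first (ascending), then the other keys in insertion order.
var mixedKeys = Object.keys(mixed); // ['2', '10', 'b', 'a']
```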
Dmitry Frolov
@ayatorii
Apr 13 2016 11:58 UTC
Does it make any difference in JavaScript?
Shivam Arora
@shivamarora13
Apr 13 2016 11:58 UTC
When I pass idStore === '1',
console.log(idStore === '2' || '4' || '6' || '8');
this returns me, 4, how I can't understand.
Can anyone tell?
dennis-noah
@dennis-noah
Apr 13 2016 11:59 UTC
@brainyfarm I did it like this: $(document).ready(function() {$("h1").addClass("text-center");
});
Theodore P.
@Ierofantis
Apr 13 2016 11:59 UTC
i have two problems in my tic tac toe project. First of all my reload function for the reload button(the last function)is not working and secondly when I push for example the 'o' that i already add It changes to x.
http://codepen.io/Ierofantis/pen/aNVMQd
Islam Ibakaev
@dagman
Apr 13 2016 11:59 UTC
@shivamarora13 what the whole code look like?
dennis-noah
@dennis-noah
Apr 13 2016 12:00 UTC
@brainyfarm http://codepen.io/Dennis_Noah/pen/PNQpwE my tribute page it's finished :D
Olawale Akinseye
@brainyfarm
Apr 13 2016 12:00 UTC
Good job @dennis-noah :+1:
Arryn
@arnoac
Apr 13 2016 12:00 UTC
So I have this code:
switch(val){
case 1:
return 'alpha';
break;
case 2:
return 'beta';
break;
case 3:
return 'gamma';
break;
case 4:
return 'delta';
break;
}
it runs but the breaks give warnings that they are unreachable after return, why's that?
Apr 13 2016 12:00 UTC
@shivamarora13 don't you mean to write
console.log(idStore === '2' || idStore === '4' || idStore === '6' || idStore === '8');
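The reason the original logs '4': the || operator returns its first truthy operand, and a non-empty string like '4' is truthy, so the chain yields '4' whenever the comparison is false. Each value needs its own comparison, or a membership test; a sketch:

```javascript
var idStore = '1';
// '1' === '2' is false, so evaluation falls through to the next operand,
// and the truthy string '4' becomes the whole expression's value.
var broken = (idStore === '2' || '4' || '6' || '8');        // '4'
// Either spell out every comparison...
var fixed = (idStore === '2' || idStore === '4' ||
             idStore === '6' || idStore === '8');           // false
// ...or test membership in a list.
var viaList = ['2', '4', '6', '8'].indexOf(idStore) !== -1; // false
```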
kirbyedy
@kirbyedy
Apr 13 2016 12:00 UTC
dont use return @arnoac
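The warning is about dead code: return exits the function immediately, so a break after it can never run. Either keep the returns and drop the breaks, or assign to a variable and break; both options sketched below (the function names mirror the freeCodeCamp challenge):

```javascript
// Option 1: return directly; no breaks needed.
function caseInSwitch(val) {
  switch (val) {
    case 1: return 'alpha';
    case 2: return 'beta';
    case 3: return 'gamma';
    case 4: return 'delta';
  }
}

// Option 2: break out of the switch and return once at the end.
function caseInSwitch2(val) {
  var answer = '';
  switch (val) {
    case 1: answer = 'alpha'; break;
    case 2: answer = 'beta';  break;
    case 3: answer = 'gamma'; break;
    case 4: answer = 'delta'; break;
  }
  return answer;
}
```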
dennis-noah
@dennis-noah
Apr 13 2016 12:00 UTC
@brainyfarm thanks :D
CamperBot
@camperbot
Apr 13 2016 12:00 UTC
dennis-noah sends brownie points to @brainyfarm :sparkles: :thumbsup: :sparkles:
:star: 1824 | @brainyfarm | http://www.freecodecamp.com/brainyfarm
Arryn
@arnoac
Apr 13 2016 12:00 UTC
@kirbyedy then what do i use instead
Shivam Arora
@shivamarora13
Apr 13 2016 12:00 UTC
Olawale Akinseye
@brainyfarm
Apr 13 2016 12:01 UTC
Are you trying to animate h1 and h2 @dennis-noah ?
https://andernesser.wordpress.com/2017/04/29/why-is-keplers-second-law-true/

# Why Is Kepler's Second Law True?
In the previous post, I mentioned sharing some things I have learned whilst preparing notes for my novels. Today’s topic relates to the orbital motion of planets.
Consider a very simple model of a point moving along the circumference of a circle. We give no reason for the point to change its speed, so let us assume it stays constant. Now, a circle seems an unlikely thing, a very special type of ellipse, one whose two foci coincide. So let us alter the model by pulling the two foci apart, turning the circle into a proper ellipse. Is the point still moving at constant speed? Why would it not be? Intuition might say that if this were a physical system in which the point is pulled toward one focus, the point would keep moving at a constant speed (unless some external force altered the system). So this simple model, one might have expected, applies to the orbits of planets around a sun. But when Johannes Kepler analyzed Tycho Brahe's big data set on planetary motion, he discovered that the planets travel faster near the Sun and slower when far from the Sun. This is called Kepler's Second Law of Planetary Motion.
Isaac Newton was able to prove that this law is true, and it holds for all two-body systems bound together in gravitational orbits. But wait, how could Newton deductively prove an observational discovery which seems dependent on the contingent nature of a physical system? Here is an attempt to outline why Kepler’s Second Law is a matter of deductive reasoning, largely independent from the exceptional physical nature of gravity, or planets, or stars.
But first let us acknowledge that we shall be talking about a toy model of a planetary system; we shall not be considering relativistic effects, quantum effects (i.e., a system of objects with planet-like orbits cannot exist at subatomic scales), or more mysterious galactic-scale effects. Additionally, for simplicity we can assume that a low-mass planet is orbiting a high-mass star so that one focus of the elliptical orbit aligns with the star’s center of mass. (In reality, this is never the case. Observe the diagrams below.
The size of the white dots indicates the relative mass of the objects in orbit. The objects orbit the center of mass of the system, the barycenter, not the center of mass of the star. As the difference in mass between the two objects increases, the barycenter approaches the center of mass of the more massive object. In discussing Kepler’s law, the important thing is that we place the origin of our coordinate system at this barycenter, which is one focus of an ellipse.)
With the preliminary qualifications of the model out of the way, let us turn to the argument itself. (Please note that I am keeping the tone informal and conversational; this is not a formal mathematical proof.) Every argument begins with a set of assumptions (not to be restricted to the colloquial usage of “assumption”, as argumentative assumptions may be empirically sourced facts). We shall assume that angular momentum is conserved, a fact which can be explained in the following way. First, note that the mass scalar ($m$) multiplied by the acceleration vector ($\vec{a}$) equals the force vector ($\vec{F}$) (Newton’s Second Law of Motion):
$\vec{F}=m\vec{a}$
Second, recall Newton's Law of Gravity, which states that the force equals the gravitational constant multiplied by the two objects' masses, divided by the square of the distance, and directed back along the unit position vector (hence the minus sign):
$\vec{F}=-\frac{GMm}{r^{2}}\hat{r}$
So these two expressions of force are equal to each other (writing $\hat{r}=\vec{r}/r$, which turns the $r^{2}$ into an $r^{3}$):
$m\vec{a}=-\frac{GMm}{r^{3}}\vec{r}$
Divide both sides of the equation by the mass of the planet to simplify the formula:
$\vec{a}=-\frac{GM}{r^{3}}\vec{r}$
Now it is easy to see that you can collect the quotient in front of the position vector into a single scalar factor (it varies with $r$, but it is a scalar all the same). So the acceleration vector equals the position vector times a scalar:
$\vec{a}=c\vec{r}$
And vectors which are multiples of each other are parallel. The cross product of parallel vectors equals a zero vector:
$\vec{a}\times\vec{r}=\vec{0}$
which is
$\ddot{\vec{r}}\times\vec{r}=\vec{0}$
Now, take a moment to consider the derivative of the cross product of the position vector and the velocity vector:
$D_{t}[\vec{r}\times\dot{\vec{r}}]$
This derivative equals two summands: velocity cross velocity, and position cross acceleration:
$\dot{\vec{r}}\times\dot{\vec{r}}+\vec{r}\times\ddot{\vec{r}}$
The cross product of identical vectors is a zero vector, so the augend is zero:
$\vec{0}+\vec{r}\times\ddot{\vec{r}}$
And we just concluded above that the addend is zero. So the sum is a zero vector:
$\vec{0}+\vec{0}=\vec{0}$
And if a function’s derivative is zero, then it is a constant function. We shall call this constant vector $\vec{L}$: because it is their cross product, $\vec{L}$ is perpendicular to the position vector and the velocity vector. So the orbiting objects move orthogonally to $\vec{L}$. Since $\vec{L}$ never changes, the objects’ movements are restricted to a plane. The orbit is never warped in a third dimension; in other words, angular momentum is conserved.
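The constancy of $\vec{L}$ is easy to check numerically. The following sketch (toy units with $GM=1$; the initial conditions are arbitrary) integrates a two-body orbit with semi-implicit Euler and tracks the $z$-component of $\vec{r}\times\dot{\vec{r}}$:

```javascript
// Semi-implicit Euler integration of a toy two-body orbit with GM = 1.
// L = x*vy - y*vx is the z-component of r x v; it should stay constant.
function simulateL(steps, dt) {
  var x = 1.0, y = 0.0;    // start on the x-axis at r = 1
  var vx = 0.0, vy = 1.2;  // not the circular speed, so the orbit is an ellipse
  var Lmin = Infinity, Lmax = -Infinity;
  for (var i = 0; i < steps; i++) {
    var r3 = Math.pow(x * x + y * y, 1.5);
    vx += (-x / r3) * dt;  // a = -(GM / r^3) * r, with GM = 1
    vy += (-y / r3) * dt;
    x += vx * dt;
    y += vy * dt;
    var L = x * vy - y * vx;
    if (L < Lmin) Lmin = L;
    if (L > Lmax) Lmax = L;
  }
  return { Lmin: Lmin, Lmax: Lmax };
}
```

Semi-implicit Euler happens to conserve angular momentum exactly for central forces (each update leaves $x\,v_{y}-y\,v_{x}$ unchanged), so the spread between Lmin and Lmax stays at floating-point noise.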
With the establishment of the momentum conservation assumption, we can enter the heart of the argument. The vector function denoting the orbiting object’s position can be expressed in polar coordinates as cosine of the angle ($\varphi$) times the unit vector $\hat{i}$ plus sine of the angle times the unit vector $\hat{j}$ all multiplied by the length of that position vector:
$r(\cos\varphi\hat{i}+\sin\varphi\hat{j})$
Next, we find velocity by differentiating the position vector. Remember that on an ellipse $r$ changes with time too, so the product rule gives two terms: $\dot{\vec{r}}=\dot{r}(\cos\varphi\hat{i}+\sin\varphi\hat{j})+r\dot{\varphi}(-\sin\varphi\hat{i}+\cos\varphi\hat{j})$. Then find the cross product of the position and velocity vectors. The first term of the velocity is parallel to $\vec{r}$, so it drops out of the cross product; if you do the algebra on the second term, you should see that the product equals the distance squared times the time derivative of the angle times unit vector $\hat{k}$:
$\vec{r}\times\dot{\vec{r}}=r^{2}\dot{\varphi}\hat{k}$
Well, earlier we had already decided that this cross product equals $\vec{L}$, so this expression also equals $\vec{L}$:
$\vec{L}=r^{2}\dot{\varphi}\hat{k}$
Because $\hat{k}$ is just a unit vector, the magnitude of $\vec{L}$ is $r$ squared times the time derivative of the angle:
$L=r^{2}\dot{\varphi}$
Now consider the variable angle $\varphi$ at two specific angles, $\alpha$ and $\beta$. We know that the area bounded by these angles should be half the integral of $r$ squared times the differential of $\varphi$ from $\alpha$ to $\beta$:
$A=\frac{1}{2}\intop_{\alpha}^{\beta}r^{2}d\varphi$
If the angle $\varphi$ equals $\alpha$ at time $t_{0}$ and reaches $\beta$ at time $t_{1}$, we can re-write the integral like this: half the integral of $r$ squared times the time-derivative of the angle times the differential of time from $t_{0}$ to $t_{1}$:
$A=\frac{1}{2}\intop_{t_{0}}^{t_{1}}r^{2}\frac{d\varphi}{dt}dt$
We already learned that the integrand is the length of $\vec{L}$:
$\frac{1}{2}\intop_{t_{0}}^{t_{1}}Ldt$
So this integral equals half of $L$ multiplied by the time difference:
$\frac{1}{2}L(t_{1}-t_{0})=A$
Now you can easily see that for any two time intervals of equal length, the area is the same, as swept out by the distance line from the orbiting object to its barycenter. Hence the famous refrain “equal areas in equal times.” But equal areas mean unequal arc lengths along the ellipse traveled in equal times, and hence the planets change their orbital speeds.
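One can also verify "equal areas in equal times" numerically: integrate a toy orbit with $GM=1$ and accumulate the swept area $\frac{1}{2}r^{2}\,d\varphi$ as little triangles over consecutive, equal-length time windows (a sketch; the initial conditions are arbitrary):

```javascript
// Integrate a toy elliptical orbit (GM = 1) and accumulate the area swept
// by the position vector in consecutive, equal-length time windows.
function sweptAreas(windows, stepsPerWindow, dt) {
  var x = 1.0, y = 0.0, vx = 0.0, vy = 1.2; // an arbitrary bound (elliptical) orbit
  var areas = [];
  for (var w = 0; w < windows; w++) {
    var area = 0.0;
    for (var i = 0; i < stepsPerWindow; i++) {
      var r3 = Math.pow(x * x + y * y, 1.5);
      vx += (-x / r3) * dt;
      vy += (-y / r3) * dt;
      var nx = x + vx * dt, ny = y + vy * dt;
      area += 0.5 * Math.abs(x * ny - y * nx); // little triangle swept this step
      x = nx;
      y = ny;
    }
    areas.push(area);
  }
  return areas;
}
```

The point speeds up near the focus and slows down far from it, yet every window sweeps the same area.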
So how much of this argument relies on empirical observation, and how much on armchair reasoning? You can see that the main body of the argument relies on properties of vectors. So let us go back further, to the assumptions. Again, we used vector properties to obtain conservation of momentum, and the main argument's deductions are a consequence of this conservation, but we really started with Newton's Second Law of Motion and his Law of Gravity. Note that most of the empirical details, such as the masses of the objects and the value of the gravitational constant ($G$), disappear into the factor we labeled $c$. It is not only that the values of the variables and constants are irrelevant; even much of the detail in the law itself is abstracted into one simple term. I did not expect that so much of Kepler's Second Law relies not on the contingent properties of gravitation, but on the geometry of vectors, which is basically logic.
— Ander Nesser, the 29th of April, 2017
References:
https://plato.stanford.edu/entries/kepler/#CopRefThrPlaLaw
Newton’s original proof is in Book 1, Section 2 of his Principia: http://www.17centurymaths.com/contents/newtoncontents.html
The mathematics of the above argument is based on Lecture 14 of the course Understanding Multivariable Calculus by Prof. Bruce Edwards, University of Florida: http://www.thegreatcourses.com/courses/understanding-multivariable-calculus-problems-solutions-and-tips.html
The images of orbits are drawn with Celestia: https://celestiaproject.net/
The barycenter animations are provided by Wikipedia: https://en.wikipedia.org/wiki/Barycenter
The gif animating Kepler's Second Law was made by Antonio González Fernández of the Engineering School of the University of Seville: https://en.wikipedia.org/wiki/File:Kepler-second-law.gif
https://www.esaral.com/q/two-identical-charged-spheres-are-suspended-by-strings-of-equal-lengths-76035 | # Two identical charged spheres are suspended by strings of equal lengths.
Question:
Two identical charged spheres are suspended by strings of equal lengths. The strings make an angle of $30^{\circ}$ with each other. When suspended in a liquid of density $0.8 \mathrm{~g} \mathrm{~cm}^{-3}$, the angle remains the same. If density of the material of the sphere is $1.6 \mathrm{~g} \mathrm{~cm}^{-3}$, the dielectric constant of the liquid is :
1. 1
2. 4
3. 3
4. 2
Correct Option: , 4
Solution:
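A sketch of the standard argument (my own derivation outline, not taken verbatim from the page): in air, $\tan\theta \propto F_e/(mg)$; in the liquid the Coulomb force is reduced by the dielectric constant $K$ and the weight is reduced by the buoyancy factor $(1 - \sigma/\rho)$, where $\sigma$ is the liquid density and $\rho$ the sphere density. The angle staying the same forces $K = \rho/(\rho - \sigma)$:

```python
rho_sphere = 1.6  # g/cm^3, density of the sphere material
rho_liquid = 0.8  # g/cm^3, density of the liquid
# Same angle in air and in liquid:
#   F/(mg) = (F/K) / (m*g*(1 - rho_liquid/rho_sphere))
# => K = rho_sphere / (rho_sphere - rho_liquid)
K = rho_sphere / (rho_sphere - rho_liquid)
print(K)  # 2.0, matching option 4
```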
https://blog.tomrochette.com/agi/genetics-based-agi | # Genetics based AGI
Created: August 24, 2015 / Updated: January 12, 2020 / Status: unknown / 13 min read (~2575 words)
### Things to explore
• DNA is the software of life. If that is true, who wrote the code?
• How does the reproduction of cells (of an embryo) work in terms of computation?
• If DNA is considered as the storage/tape of a Turing machine, can it be considered to be expandable? What are the other similar properties of DNA and Turing machines?
• Isomorphism between DNA and programs
• DNA is code, and it most likely didn't start at the length it is now.
• Can we build a source tree, à la git, that would explain our evolution?
## Overview
One may extrapolate that the big bang is similar to the generation of random code. Everything that followed it is simply random permutation/mutation of that initial randomness, which ended up producing something coherent/structured. Like a well-programmed neural network, given enough time, randomness will at some point generate patterns. However, we know from the study of undirected/blind (as opposed to directed/guided) program generation, such as through a linear method, that the space of program strings (chains of characters that may or may not produce an executable program) is immense compared to the space of valid programs. It can easily be compared to the problem of finding a needle in a haystack.
In a similar way, given hundreds of thousands of valid programs, if we want to find a particular one that does a number of specific things, then we're making it harder for a search/filtering algorithm to return us an appropriate one. If one can express what one wants the program to do in a boolean fashion, such as "the program does/does not do this", then with every added expression the number of potential programs doubles. For instance, if you want 8 specific things, then there are $2^8$ potential programs, but only 1 that does what you want. You want to add 1 more thing? Then you've effectively doubled the number of potential programs, while still looking for a single specific program.
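A toy enumeration of that doubling (illustrative only; $n = 8$ matches the example above):

```python
from itertools import product

n = 8
# Each yes/no requirement splits candidates into behaviour classes;
# with n requirements there are 2**n classes, and only one class
# satisfies every requirement at once.
classes = list(product([False, True], repeat=n))
matching = [c for c in classes if all(c)]
print(len(classes), len(matching))  # 256 1
```

Adding a ninth requirement would double `len(classes)` to 512 while `matching` stays a single class.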
We, as human beings, are a gigantic assembly of billions of cell-sized machines. Each and every cell contains its own copy of the program (DNA) executed by every one of these machines, a program which is itself about 3 billion nucleotides long. As there are 4 valid nucleotides/bases, there are $(2^2)^{(3 \times 10^9)}$ possible combinations/programs. However, considering that we (humans) all share a ton of similar attributes (we all have two eyes, two ears, two arms, two legs and so on), it makes sense to assume that a lot of our DNA code is shared.
In the body, or more specifically in each cell, the DNA is used as the source from which mRNA is transcribed and then translated into a protein.
Our genetic software (DNA) itself only changes/evolves through the combination of two parent chromosomes.
Some properties:
• Code rarely changes (only when a new "program"/human is created)
• New code is the combination of two existing codes
• Two code bases are mostly similar (99.5% similar1)
• A large amount of DNA is shared with other animals2, which could imply that we either developed shared code base (evolved from similar ancestors) or that we ended up developing similar code bases independently
• A certain amount of DNA is considered noncoding, meaning that they do not "execute" into proteins, and can be considered to be passive data
• A certain amount of this noncoding DNA is considered Junk DNA, the equivalent of dead code/data
• The 5' and 3' sections of the mRNA could be compared to the preamble and epilogue of functions in assembly; they serve to indicate the beginning and end of blocks of information/instructions
• If DNA is a string, then it most likely has a grammar (and its own language)
• Must follow some syntactic rules or else it is incorrect (see protein folding)
• Evolution is Nature's nondeterministic way to test out DNA machines, some survive (are born, live and die of old age), others don't (are not born)
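The "mostly similar" bullet can be made precise with a position-wise similarity measure (a sketch; the toy sequences below are made up for illustration, not real genome fragments):

```python
def similarity(a, b):
    """Fraction of positions at which two equal-length sequences agree."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Two made-up 20-base sequences differing at a single position:
sim = similarity("ACGTACGTACGTACGTACGT", "ACGTACGTACTTACGTACGT")
print(sim)  # 0.95
```

Real genome comparisons use alignment rather than position-by-position matching, since insertions and deletions shift positions, but the "99.5% similar" figure is a statement of the same kind.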
## Reasoning on AGI
Given that programs are part of the integer space (with their lengths in the same space), one can be certain that some program within this infinite space will be considered an AGI. It is highly likely that many such programs exist. In fact, if one such program exists, an infinity of its variants will exist as well (longer programs containing this "seed AI"). Assuming that programs containing this seed AI code can also exhibit the same functionalities, the assumption that an infinity of those programs exist holds. Otherwise, it means that even though the seed AI code is present within the program's string (the integer represented as a sequence of integers in the 0-9 range), it cannot be activated/executed. For example, given a C program that is a seed AI, any junk at its beginning or end may render the program uncompilable (or unexecutable, given the integer could be an executable binary). For the sake of analysis, we'll prefer to work within a language that considers this seed AI string as active wherever it is found.
If we accept that "a seed AI program exists" as a fact (human beings being an instance), then the obvious next question is "what is the length of the shortest seed AGI?" The answer is likely to be language specific. For instance, our DNA is believed to be the equivalent of Nature's AGI program. DNA is itself about 3 billion nucleotides. As there are 4 valid nucleotides/bases, there are $(2^2)^{(3 \times 10^9)}$ possible combinations/programs, a single program being approximately 3 Gbases, 6 Gbits or 750 MBytes (approximately 3.75 MBytes being different between individuals). What those 750 MB of code and data allow us to do is to construct a huge variety of cells/proteins that end up having lives of their own.
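The back-of-the-envelope sizes above check out (the figures are the ones quoted in the text):

```python
from math import log10

bases = 3 * 10**9          # nucleotides in the human genome
bits = 2 * bases           # 4 possible bases -> 2 bits each
megabytes = bits / 8 / 10**6
digits = bases * log10(4)  # number of decimal digits in 4**bases
print(megabytes)           # 750.0 (the "750 MBytes" above)
print(round(digits))       # ~1.8 billion decimal digits in the program count
```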
If we take this amount of information as a basis to determine the size of a potential human-like AGI, we have to ask ourselves if what we "really" need is a subset of this information, or all of it is needed. In the former case, then we can hope to reduce our search space considerably, in the latter, it means that we at least have an upper bound for something that should produce human-like intelligence levels, given the appropriate environment simulation.
This "upper bound" or threshold has a couple of interesting properties. Let's consider the smallest AGI being a program of length $l$. This means that for all programs $p$ smaller than $l$, in other words $|p| < l$ (where $|p|$ is the length of program $p$), the probability that we execute a program $p_{AGI}$ that is AGI is $P(\mathrm{p\ exhibits\ AGI} \mid |p| < l) = 0$; in other words, we will at best observe sub-AGI intelligence but not AGI itself.
On the other hand, for any program larger than or equal to $l$, we may assume that it is sufficient for a program $p$ to contain the program $p_{AGI}$ somewhere in its string definition. In other terms, if this program $p$ contains a substring (from index $a$ to $b$) $p_{a,b}$ that is the AGI program $p_{AGI}$ ($p_{a,b} = p_{AGI}$), then $P(\mathrm{p\ exhibits\ AGI} \mid |p| \ge l \wedge p_{a,b} = p_{AGI}) = 1$. Finally, we may ask ourselves what is the probability of finding an AGI program, given a program of length $|p|$ and a known seed AGI program $p_{AGI}$ which is a subprogram/substring of $p$: $P(\mathrm{p\ exhibits\ AGI} \mid |p| \ge l \wedge p_{AGI}) =\ ?$. More interestingly, we can ask what is the probability of finding an AGI program, given that we "know" the minimal program length of an existing AGI but do not have the code: $P(\mathrm{p\ exhibits\ AGI} \mid |p| \ge l) =\ ?$.
Since we said that for a program to exhibit AGI it would have to contain a seed AGI as a substring of itself, we can simplify $\mathrm{p\ exhibits\ AGI}$ to $p_{a,b} = p_{AGI}$ (letting $p_{AGI}$ be the shortest AGI program). $P(p_{a,b} = p_{AGI} \mid |p| \ge l \wedge p_{a,b} = p_{AGI}) = 1$ is now obvious, since the evidence contains $p_{a,b} = p_{AGI}$. The question we asked becomes $P(p_{a,b} = p_{AGI} \mid |p| \ge l) =\ ?$, which means "given that our program $p$ is longer than an expected seed AI of length $l$, what is the probability that a part of its code (a substring) is $p_{AGI}$?" As the seed length $l$ increases, this probability decreases.
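A toy Monte Carlo version of that last question (the alphabet, lengths and seed string here are tiny, arbitrary stand-ins for the real sizes): the chance that a random string contains a fixed "seed" substring, compared with the simple union upper bound $(L - l + 1)/4^l$.

```python
import random

random.seed(0)
ALPHABET = "ACGT"
seed_prog = "ACGT"   # a fixed "seed" substring of length l = 4
L = 32               # candidate "program" length
trials = 20_000

hits = sum(
    seed_prog in "".join(random.choices(ALPHABET, k=L))
    for _ in range(trials)
)
p_est = hits / trials
p_bound = (L - len(seed_prog) + 1) / 4 ** len(seed_prog)
print(p_est, round(p_bound, 4))  # Monte Carlo estimate vs. the union bound
```

Lengthening `seed_prog` by one symbol divides the bound by 4, which is the quantitative form of "the probability decreases as $l$ increases".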
### Observations
• If we assume there is only 1 program of length $l$ that exhibits AGI, then all "variants" of the program must be of length $l_{variant} > l$; in other words, they must be at least one symbol longer, containing code that isn't part of the AGI program (dead code, similar to "junk DNA").
### Applications
• Given that we established there are $(2^2)^{(3 \times 10^9)}$ potential programs of the same length as the human genome (a genome which also generates variants of similar programs, namely different individuals with varying capabilities), and that we estimate about 108 billion individuals have lived so far (assuming they all had unique DNA and that their DNA was of this exact length, which isn't the case)3, the fraction of this program space explored so far is:
$\frac{108 \times 10^9}{(2^2)^{(3 \times 10^9)}} = \frac{108 \times 10^9}{4^{3 \times 10^9}} = \frac{1.08 \times 10^{11}}{9.6357 \times 10^{1806179973}} \approx 1.1 \times 10^{-1806179963}$
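The number $4^{3 \times 10^9}$ is far too large to compute directly, but the ratio can be checked in log space; the exponent comes out near $-1.8 \times 10^9$:

```python
from math import log10

lived = 108e9                  # estimated individuals who have ever lived
log_programs = 3e9 * log10(4)  # log10 of 4**(3 * 10**9)
log_fraction = log10(lived) - log_programs
print(log_fraction)  # about -1806179963, i.e. roughly 10**-1806179963
```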
## Questions
• Is the DNA/genome the same length for all individuals?
• If it is the case
• How is that possible?
• Why does it have to be the exact same length?
• If it is not the case
• What is the impact of the missing/added parts?
Answer: DNA length varies amongst individuals. This is mostly due to the large amount of non-coding DNA.
A major discrepancy in DNA length would cause infertility even if we assume it does not cause somatic defects.
Source:
http://typo3master.com/mean-square/info-squared-error-function.php

# Squared Error Function

The mean squared error (MSE) is a risk function, corresponding to the expected value of the squared error (quadratic) loss. For a random sample of size $n$ from a population, the sample variance satisfies $\frac{(n-1)S_{n-1}^{2}}{\sigma ^{2}}\sim \chi _{n-1}^{2}$.

The minimum mean squared error (MMSE) estimator of $X$ given an observation $Y$ is the conditional expectation $\hat{X}_{M}=E[X|Y]$, which has the lowest MSE among all possible estimators. By the orthogonality principle, the estimation error $\tilde{X} = X - \hat{X}_M$ satisfies $E[\tilde{X} \cdot g(Y)]=0$ for any function $g(Y)$, so $\tilde{X}$ and $\hat{X}_M$ are uncorrelated; by iterated expectations, $\hat{X}_M$ is an unbiased estimator of $X$.

Minimizing the squared error also corresponds to maximizing the likelihood under Gaussian noise, and the squared error has nice mathematical properties (it is smooth, and its gradient is simple). However, its heavy weighting of large errors, undesirable in many applications, has led researchers to use alternatives such as the mean absolute error, or measures based on the median.
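A small self-contained check of the mean-versus-median point (illustrative, using synthetic Gaussian data): the average squared error is minimized at the sample mean, while the average absolute error is minimized at the sample median.

```python
import random
import statistics

random.seed(1)
data = [random.gauss(3.0, 1.0) for _ in range(10_000)]

def mse(c):
    return sum((x - c) ** 2 for x in data) / len(data)

def mae(c):
    return sum(abs(x - c) for x in data) / len(data)

m, med = statistics.mean(data), statistics.median(data)
# Nudging away from the mean/median can only increase the respective loss:
print(all(mse(m) <= mse(m + d) for d in (-0.5, -0.1, 0.1, 0.5)))    # True
print(all(mae(med) <= mae(med + d) for d in (-0.5, -0.1, 0.1, 0.5)))  # True
```

This is the practical face of the trade-off above: the squared-error minimizer (the mean) is pulled toward outliers, while the absolute-error minimizer (the median) is not.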
https://www.mathway.com/examples/precalculus/sequences-and-series/finding-the-sum-of-the-series?id=1050 | # Precalculus Examples
This is an arithmetic sequence, since there is a common difference $d$ between consecutive terms: adding $d$ to the previous term gives the next term.

The formula for the $n$th term of an arithmetic sequence is $a_n = a_1 + (n-1)d$. Substitute the values of $a_1$ and $d$, then simplify (apply the distributive property and combine like terms) to find $a_n$.

The sum of the first $n$ terms is $S_n = \frac{n}{2}(a_1 + a_n)$. To evaluate it, substitute the known values of $n$, $a_1$ and $a_n$, cancel any common factors, multiply, and convert the resulting fraction to a decimal if needed. (The specific numbers of this example were rendered as images and did not survive extraction.)
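Since the original numbers did not survive, here is the same procedure with made-up values ($a_1 = 2$, $d = 3$, $n = 10$ are arbitrary illustrative choices):

```python
a1, d, n = 2, 3, 10                 # made-up example values
an = a1 + (n - 1) * d               # nth term: a_n = a_1 + (n-1)d
s_n = n * (a1 + an) / 2             # sum of first n terms: S_n = n/2 (a_1 + a_n)
print(an, s_n)                      # 29 155.0
# cross-check by summing the terms directly:
print(sum(a1 + k * d for k in range(n)))  # 155
```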