url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://en.wikipedia.org/wiki/Talk:Geodetic_datum
|
# Talk:Geodetic datum
WikiProject Maps (Rated C-class, High-importance)
This article is within the scope of WikiProject Maps, a collaborative effort to improve the coverage of Maps and Cartography on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
C This article has been rated as C-Class on the project's quality scale.
High This article has been rated as High-importance on the project's importance scale.
WikiProject Geographical coordinates
Geodetic datum is of interest to WikiProject Geographical coordinates, which encourages the use of geographical coordinates in Wikipedia. If you would like to participate, please visit the project page, where you can join the project and see a list of open tasks.
## Untitled
Agreed Budgiekiller 14:45, September 1, 2005 (UTC)
## Datum Merged
The article Datum has been successfully merged into this article. Also there is a proposed plan to merge this article into Geodesy. I agree to merge this article into the sub-section in Geodesy called Geodetic datums. --Zer_T 01:35, 16 December 2005 (UTC)
## Datums
As I am not a cartographer I will not edit the page. More than one datum requires the plural, which is data. Reg nim 20:40, 19 December 2006 (UTC)
see: [1], datums is the correct plural in this context.EricR 20:57, 19 December 2006 (UTC)
Maybe the page has been edited by others to change it, becaue I find the word data in some places where I would expect datums (e.g. geodetic data). Or maybe I've misunderstood something. — Preceding unsigned comment added by 217.12.16.53 (talkcontribs) 06:51, 25 August 2009
Geodetic data refers to the geographical coordinates of every place on Earth, but geodetic datums refer to the several reference surfaces along which geodetic data are measured. — Joe Kress (talk) 17:16, 25 August 2009 (UTC)
Saying that a datum is a reference system is like saying that a telephone is a communication device -- it just doesn't tell you much. Is a datum a mathematical representation (ellipsoid)? If not, how does it differ from an ellipsoid representation? — Preceding unsigned comment added by 2600:1014:B12B:9DD9:AD12:18BE:CD58:1195 (talk) 12:33, 29 June 2015 (UTC)
## Geoid plus other references required =
There should be discussion of the GEOID in this article for better background. Also, there are several publically available documents, and spreadsheets enabling the conversions from XYZ to Lat Long Height. (polar coords, with deference to the ellipsoid)
references to Peter Dana's other work would be useful
14:50, 3 April 2007 (UTC)
## Great article: need addition to formulae sections
It would be helpful to add a note to the sections with the geodetic-geocentric-cartesian transformations to indicate what % of error is introduced by the equations. Users will then know whether they can expect tranformations using these formulae to be accurate within 1m, 1cm, etc. DiscipleOfChrist 16:37, 15 May 2007 (UTC)
It would be helpful to define h in the formulae for conversion from geodetic coordinates to ECEF. Is it PR in the diagram (height above mean sea level), or QR (height above the point where the normal intersects the axis)?
## Geodetic vs Geocentric latitude
Whether is clear the definition of Geodetic and Geocentric latitude and the related conversion formulas, it is not clear which of the two definitions is assumed as reference for the WGS84 standard. In the pdf (linked to this page) describing the standard, both definitions are used for different computations. It seems that in all computations related to gravity, the geocentric latitude is more convenient than geodetic, where in all local computations related to targets localization the geodetic one is mostly used, likely because targets which share the same projection on the (ellipsoidal) Earth surface but having different altitudes, will not change their latitude, leaving the independency among the Lat/Long coordinates and the Altitude. However, it is elsewhere reported that the GPS (based upon the WGS84) does use the geocentric latitude definition. Someone knows specifically (apart from the obvious remark that the World GEODETIC system should follow Geodetic Lat/Long coordinates) which is the official Lat/Long definition for WGS84? 81.208.53.251 12:50, 31 July 2007 (UTC)Paolo, Rome 31-07-200781.208.53.251 12:50, 31 July 2007 (UTC)
### Which latitude is which?
In all navigation applications (including GPS) latitude is geodetic latitude. This is because geodetic latitude is measured with respect to the horizon whereas geocentric latitude is requires knowledge of the centre of Earth (not really possible for a ship at sea). Moreover if you measure the angle between the pole star and the horizon (something we can not do in the southern hemisphere!) you can determine your geodetic latitude. The navigator will drop the term geodetic and just say latitude. Hope this helps. —Preceding unsigned comment added by 192.43.227.18 (talk) 09:48, 7 September 2007 (UTC)
I have looked at the diagram on this page for years, trying to make sense of it, and failed to do so. A quick search and I came across this one http://kartoweb.itc.nl/geometrics/Bitmaps/latitude33.gif that puts the whole idea of geocentric vs geodetic into context, and simply makes a lot of sense. Does anyone agree that something like this would be a better choice on this page, or am I totally missing something? — Preceding unsigned comment added by 217.137.225.85 (talk) 10:12, 30 September 2015 (UTC)
## Error in matlab file.
The equations for e and north in the xyz2enu.m function are reversed it should read as follows:
n = -sin(lambda).*(X-Xr) + cos(lambda).*(Y-Yr);
e = -sin(phiP).*cos(lambda).*(X-Xr) - sin(phiP).*sin(lambda).*(Y-Yr) + cos(phiP).*(Z-Zr);
u = cos(phiP).*cos(lambda).*(X-Xr) + cos(phiP).*sin(lambda).*(Y-Yr) + sin(phiP).*(Z-Zr);
A sanity check shows that the angle subtended by traveling north is independent of longitude, where as the angle subtended by traveling east is a function of latitude the closer on is to the poles the larger the angle is for a given distance along the geoid.
If there is agreement then the page should be updated.
nav_dude
4/10/09 —Preceding unsigned comment added by 128.170.230.31 (talk) 01:41, 11 April 2009 (UTC)
## Reasons for different datums
On this page, the reasons given are a bit inelegant. Yes, it is due to local surveying, but the page should recognize that with surveying technology prior to satellite geodesy it was monumentally difficult to link continents together. I suggest that we amend that sentence to talk about the history just enough to make it clear that geodetic surveying had to be local, and fit to local abberations in the geoid. —Preceding unsigned comment added by NChrisman (talkcontribs) 17:58, 24 April 2009 (UTC)
## Merge of Geodetic system and Datum (geodesy) into Geodetic datum
I noticed that there were two articles on this subject. This one is the nicer one, and the other one is the older one. I also noticed that none of theese are named by one of the terms I see most frequently, wich is Geodetic datum, the other one beeing simply datum. However 'Geodetic datum' exist, as redirect page.
I suggest that Datum (geodesy) is merged with Geodetic system, that the redirect page Geodetic datum is deleted, and finally, that the merged article is moved to Geodetic datum.
I welcome any comments and further suggestions on this. Johan G (talk) 23:49, 4 July 2009 (UTC)
I agree that the articles should be merged. If no editor objects to your proposed merger, because this article is larger than datum (geodesy), any info in that article not already here should be copied here (both are about the same age, created 2003 and 2004). The lead should be changed to indicate that the article includes both geodetic datums and their conversion. To move the merged page to its new name you must place your request on Wikipedia:Requested moves. An administrator will then move the page to its new name. That process will turn the orginal pages into redirects to the new name. Talk:Datum (geodesy) will remain there, so a note should be placed here to that effect. I am not sure whether the Datum disambiguation page should remain or its two other entries placed at the top of this page. — Joe Kress (talk) 05:59, 5 July 2009 (UTC)
I agree. The two articles cover the same subject. SV1XV (talk) 18:10, 18 July 2010 (UTC)
Done for those reasons and also, as each article on this topic was already marked in need of cleanup, so now the cleanup only needs to be done once. Cesiumfrog (talk) 22:40, 31 October 2012 (UTC)
## ECEF to ENU may be in error
The text and code compute a geocentric lat/lon for use in the rotation matrix. I'm pretty sure that it should be geodetic. —Preceding unsigned comment added by Gadflies (talkcontribs) 20:13, 30 July 2009 (UTC)
## Geodetic coordinates: what about longitude?
The article says that "locations near the surface are described in terms of latitude (φ), longitude (λ) and height (h). The ellipsoid is completely parameterised by the semi-major axis a and the flattening f."
However, if only a and f are given, how do we know where the longitude 0 is? If I'm not mistaken, we also have to add a reference meridian. --Jonas Wagner (talk) 23:10, 10 December 2009 (UTC)
## Matlab sourcecode
I removed the matlab code. The science and the math belong, but not the sample code, IMHO, for these reasons and perhaps more.
1. First and foremost, Wikipedia is not a source code respository.
2. Even if the case can be made for a short snippet of code to illustrate some point, the code in question is too long and distracts from the article.
3. It is only illustrative to people who know that particular system. Even if it is in widespread use, it is not universal, nor timeless, unlike the mathematics portion of the article.
4. Computer source code is copyrighted, and the provenance of this code is unknown.
--Kbh3rdtalk 01:10, 18 December 2009 (UTC)
## Bowring's iterative formula
I see this entire subsection has recently been added. I would tend to remove it entirely for various reasons.
First, it seems in a previous discussion, matlab code was removed. I would assume the code here should be removed for the same reason.
Second, without the code, the remaining text makes no sense to me. It is possible that there are actually some interesting points, but I can't make them out. Here is the text without code, with my confusion inline and emphasized:
It is efficient to solve the irrational equation (what irrational equation?) by using Newton-Raphson method which was actually derived by Bowring (does this mean what it says? Bowring, not Newton or Raphson derived the Newton-Raphson method?)[1] [2] [3] [4] [5]:
Removed block of code
, where RE = ${\displaystyle a}$ and E2 = ${\displaystyle e^{2}}$. (referred to symbols in code)
No better transformation of above variable ${\displaystyle q\ (\triangleq p/z\tan \phi -1)}$ has been discovered with regard to the convergent performance in Newton-Raphson method. The slight improvement thus can be achieved by using Halley's method[5] and/or the division-free method[5] (If there is no better transformation, what slight improvement can there be?).
Note that Bowring formalized his algorithm which consumes extra trigonometric functions in the iteration as follows: (So which of the two blocks of code is Bowring's method?)
Removed block of code
. (Random Period)
Thus unless there is an objection, I will be removing this in the near future. If someone can interpret the meaning of this and rewrite it that might work too, assuming there are salient points, but I am not confident enough in my impression of what the original author meant to even attempt such an endevor.
--Senior Fire (talk) 02:25, 28 May 2010 (UTC)
Deleted as there were no objections. -Senior Fire (talk) 07:24, 30 May 2010 (UTC)
### Refs for this section
1. ^ Bowring, B. R. (1976), Transformation from spatial to geographical coordinates. Surv Rev 23(181):323–327.
2. ^ Bowring, B. R. (1985), The accuracy of geodetic latitude and height equations. Surv Rev 28(218):202-206.
3. ^ Laskowski, P. (1991), Is Newton’s iteration faster than simple iteration for transformation between geocentric and geodetic coordinates? BullGeod 65:14–17
4. ^ Fukushima, T. (1999), Fast Transform from Geocentric to Geodetic Coordinates. J Geod 79(12):603–610.
5. ^ a b c Fukushima, T. (2006), Tansformation from Cartesian to geodetic coordinates accelerated by Halley’s method. J Geod 79(12):689–693.
## Ferrari's solution
There are a large number of symbols introduced in these equations, none of which are defined (outside of needing to read the cited closed-access article). Without more detail on these equations and their variables, the equations are not very useful to those wanting to understand their meaning who aren't already familiar with all of the notation. —Preceding unsigned comment added by 134.102.219.52 (talk) 13:34, 27 October 2010 (UTC)
## Not particularly clear
I have a strong math, science, and engineering background. I've read probably thousands of Wikipedia articles on matters of science and nature... and this one is -- as of the date of my comment (Jan 2011) -- one of the most poorly written I've ever encountered.
Perhaps a definition of "geodetic system" would be helpful. (I have to say, the topic is VERY foggy to me... as it seems EVERYTHING is an approximation. If there is one absolutely correct system against which other projections' and systems' errors can be calculated... then why not just us the "absolute system" all the time? And if there is no "absolute system" and everything is an approximation, then by what means do we compute "error" of various projections, systems, and techniques? Hmph.)
Anyway, back to my criticism of the text of the article. Examples from the start of the article:
1) "The systems are needed because the earth is not a perfect sphere. Neither is the earth an ellipsoid." This conveys almost no content: the earth is not a cube or a dodecahedron or a jelly donut either. So why -- exactly -- is a geodetic system needed because the earth is NOT these things?
2) "This can be verified by differentiating the equation for an ellipsoid and solving for dy/dx. It is a constant multiplied by x/y." Umm... what am I verifying with this differentiation? That the earth is not a sphere? That a sphere is not an ellipsoid? That the result of the differentiation differs from actual measurements? That a system is needed?
I don't know how to add the "clean up required" flag to a page... but the current state of this article has inspired me to find out so that I can add it to this page.
Dharasty (talk) 20:01, 6 January 2011 (UTC)
I strongly agree with the above sentiments. Geodesy is a mature (but difficult) subject and short wiki articles cannot hope to replace standard texts, but it is clear that the author of the initial paragraphs does not understand the essence of the subject. The mathematics of the second paragraph is very confusing. (Quite apart from the fact that mathematical details within the lead are not advisable if the general reader is to derive any satisfaction.) This should really be developed into a discussion of mean earth reference ellipsoids. There are many
A geodetic reference system is essentially neither more nor less than a conventional framework to which the coordinates of any point may be referenced; the shape of the Earth is not part of the specification. The specification must be precise but it will always contain some arbitrarily fixed parameters.
The wiki articles covering geodesy are fairly poor on the whole. They should at least reflect the content of established text books in the field such as Physical Geodesy (Bernhard Hofmann-Wellenhof and Helmut Moritz, 2006 pbk edition) or Geodesy (Wolfgang Torge, 3rd ed, pbk). (The latter is reasonably priced). There is a good short survey in the Supplement to the Astronomical Almanac (Seidelmann) and an even shorter survey on the web at [2]]. Too often wiki contributors are writing up what they understand of a subject without reflecting the consensus of prevailing texts: this is contrary to wiki guidelines. Peter Mercator (talk) 22:19, 13 January 2011 (UTC)
Thirded that this article needs major work. The introduction is almost nonsensical. "The USGS uses a spherical harmonic expansion to approximate the earth's surface. It has about one hundred thousand terms. This problem has applications to moving Apollo asteroids. Some of them are loose rock and spinning. Their surface will be determined by the solution to this differential equation." "An interesting experiment will be to spin a mass of water in the space station and accurately measure its surface and do this for various angular velocities. Also, we can accurately measure Jupiter's surface using our telescopes. We can accurately determine earth's surface by using GPS." How do these all relate to each other or help introduce the concept? It jumps around so much and doesn't explain the relationships. It's a very confusing and poorly written article. — Preceding unsigned comment added by 162.18.92.17 (talk) 18:21, 24 August 2012 (UTC)
I would like to remove the Motivating theory section. I work with geodetic datums for a living and can't make any sense out of it. The last sentence may be referring to a geoid model. USGS doesn't create them; NGS (National Geodetic Survey) and NGA (National Geo-spatial Intelligence Agency) create them for the US. I don't feel it worth it to try to salvage the section. Melitak (talk) 23:44, 25 September 2013 (UTC)
Oh yeah. intro: first sentence "Geodetic systems or geodetic datums are used in …" doesn't even say what the heck it is. Just where it's used. Then, there's this sentence: "A datum (plural datums) is a set of values used to define a specific geodetic system." So, it's circularly defined and therefore meaningless. That's why you can't understand it - it's logically incomprehensible. further down: A geodetic datum (plural datums, not data) is a reference from which measurements are made. In surveying and geodesy, a datum is a set of reference points on the Earth's surface against which position measurements are made and (often) an associated model of the shape of the Earth (reference ellipsoid) to define a geodetic coordinate system. from section 1: In surveying and geodesy, a datum is a reference point or surface against which position measurements are made, and an associated model of the shape of the earth for computing positions. Ah, now I know what it is. more clues, section 6: A reference datum is a known and constant surface which is used to describe the location of unknown points on the earth. K. One sentence summary. Simplest most obvious example. Two other examples to illustrate breadth of concept. And we have a new opening paragraph. Rip out some other crap… add concrete example of a reference point… done OsamaBinLogin (talk) 15:26, 10 July 2014 (UTC)
What's with the figure/diagram on the right at the top? It talks about a normal (and shows an angle), but you could say what a normal is -- you could at least draw the tangent to show the normal. Maybe the normal that you're talking about isn't what I learned in Calculus (perpendicular to the tangent plane). What are the points G, A, F, B, I and P? Some labeled points don't help if you don't explain them. 70.197.228.2 (talk) 18:35, 2 September 2014 (UTC)
True it could be better explained. The line segment IP is perpendicular to the surface of the ellipsoid at point P, so can be considered normal to the surface. --Mark viking (talk) 20:03, 2 September 2014 (UTC)
## Datums History and Purpose
Datums can be a complex subject. In talking to the non-expert, some simple examples would be useful to put the technical details in context.
Datums grew out of the necessity to use maps from different map makers at the same time. The easiest way to see the basic issue is to consider longitude. Before 1884 (conference of Washington) most major countries had their own 0 of longitude (prime meridian). The French through the Paris Observatory, the British through their observatory (in Greenwich) etc. Of course it was more complex.
Even within a country maps were made locally for local use. In the next county someone else might make maps. Each map maker would choose a few points and try and establish what their coordinates were. Then they would survey from those points. For a long time this worked "good enough". These maps were on "Local Datums". Where the maps met or overlapped, you would have to slide and rotate one to fit the other.
But then national mapping projects were established, one early one was in France. The local maps were adjusted and new coordinates for a few reference points in each region found that best fit together. These were national or regional datums.
Except for earth scientist, connecting these together was not of high importance. It was academic. Then came World War I. Suddenly different armies needed maps not only to plan attacks, but to shoot artillery beyond the line of sight. Datums and datum shifts became important.
Today with satellite positioning systems world wide datums are used. These are sometimes called Geodetic Reference Systems. Of course GPS has one. And Glonass (the Russian system satellite navigation system) has a slightly different one.
The main issue though is older maps. In some areas of the world the basic GPS reference datum and maps datums can differ by a few kilometers. Modern GPS receivers usually have and advanced setup option to set the Datum to adjust for this.
So Datums for maps are "local" reference systems. For the car driver this isn't important. For the open ocean sailor it can be.
JR C — Preceding unsigned comment added by Oldprof5 (talkcontribs) 22:41, 28 October 2012 (UTC)
## ECEF to ENU may be in error (II)
I believe that is something wrong in the matrix that defines the transformation from the ECEF to ENU coordinates. The error might be the exchange between latitude and longitude.
Please check the following webpage Transformations between ECEF and ENU coordinates in equation 6. In this source λ is the latitude and φ is the longitude.
--193.137.43.157 (talk) 11:10, 23 April 2013 (UTC)
Based on the diagram, rather than the text, λ is the longitude and φ is the latitude. The text is just unclear when it says "If (λ,φ) are taken as the spherical latitude and longitude." It should say "If (λ,φ) are taken as the spherical longitude and latitude, respectively." Melitak (talk) —Preceding undated comment added 17:19, 26 September 2013 (UTC)
## Datums or data
Isn't "data" the plural of "datum", e.g. shouldn't we refer to "vertical data" not "vertical datums"? --Bermicourt (talk) 14:51, 3 November 2013 (UTC)
## Japan Geodesic Datum
I am a bit curious why there is not artice or mention about the Japan datum or Tokyo datum, JGD2000 ? http://fr.slideshare.net/danialabid95/japanese-geodetic-datum Rom4in (talk) 07:48, 10 June 2015 (UTC)
This is probably because no one has written about it yet. Your reference has some good infomration. The Datum article on the Japanese wikipedia also has a section on the Japan datum. Feel free to contribute if you like. --Mark viking (talk) 19:19, 29 June 2015 (UTC)
|
2016-08-30 11:22:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6540431380271912, "perplexity": 1683.0721784719728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982976017.72/warc/CC-MAIN-20160823200936-00198-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2569952/show-that-the-sum-1-p-1-p1-1-pn-n-p-in-mathbbn-has-even-de
|
# Show that the sum $1/p + 1/(p+1) + …+1/(p+n), n,p \in \mathbb{N}$ has even denominator in simple reduced form.
Unable to guess that how the simple reduced form, which I hope is the irreducible form, has an even denominator. It also means that the numerator is odd, while denominator is even.
So, working backwards from the result to be proven,the form is : $\frac{2k'+1}{2k} \exists k,k' \in \mathbb{N}$ and $k' \lt k$ as my argument below shows.
The sum of all terms would have the initial denominator as: $p(p+1)(p+2)...(p+n)$.
Nothing else comes to mind. I hope such type of question needs only induction, and any other approach will not be thinkable even.
Also, here strong form of induction seems the key, although I don't know how to apply that and am trying the simple (weak) form of induction. Hence, the starting case would be the sum of two terms, i.e. $1/p + 1/(p+1) = 2p+1/(p.(p+1))$. The numerator is odd, and the product of two consecutive numbers will be even too. Hence, base case is proved.
Will the assumption for the inductive hypothesis holds for any $n=m$. I feel that there is involved a sort of circular reasoning here, as need to check for all possible values of $m$ that the hypothesis is true. This means that need take an odd value of $m$, say $3$ and need not take any even term as the base case had $2$ terms.
For $m=3$ (i.e., three terms as the simplest case above base), the value of the sum (for any value $p$) is : $1/p + 1/(p+1) + 1/(p+2)$ => $((p+1)(p+2) + (p)(p+2) + (p)(p+1))/(p(p+1)(p+2))$. But, need consider this expression for both $p=2k$ (even) and $p = 2k +1$ (odd), for any integer $k$.
Case (i) $p= 2k : \frac{((2k+1)(2k+2) + (2k)(2k+2) + (2k)(2k+1))}{(2k(2k+1)(2k+2))}$
=> $\frac{(odd.even + even.even + even.odd)}{(even.odd.even)}$
Need consider the fact that the denominator is having an extra multiplier as compared to any term in the numerator. But, to turn this fact into an additional result that simple reduced form, will cancel out the even term in the numerator while still leaving out the denominator as even, is difficult. It also means that need prove that there is an additional even term in the denominator, as compared to the numerator.
While this proving may seem logical in case (i), but in case (ii) where the denominator has only one even term (i.e., $2k+2$), it seems illogical.
Case (ii) $p= 2k+1 : \frac{((2k+2)(2k+3) + (2k+1)(2k+3) + (2k+1)(2k+2))}{((2k+1)(2k+2)(2k+3))}$
=> $\frac{(even.odd + odd.odd + odd.even)}{(odd.even.odd)}$
I hope that this approach, either is done wrong by me, or induction would not work here.
• What happens when n < 0? Is p a prime number? – arriopolis Dec 17 '17 at 1:43
• @arriopolis In fact, what happens if $n=0$? – Alexander Burstein Dec 17 '17 at 1:45
• @arriopolis Nothing stated like that, the problem is from Uspensky, Heaslet's book on NT. Nothing stated like that in the chapter of which it is an exercise question, regarding $p$ being a prime. – jiten Dec 17 '17 at 1:50
• @arriopolis Sorry for late editing, although nothing stated in the book of that sort, I have assumed it to be for Naturals. – jiten Dec 17 '17 at 2:11
Let us assume $n\geq 1$.
If $2^m$ ($m\geq 1$) is the largest power of $2$ which divides some element among $p,p+1,\ldots,p+n$, it divides exactly one element. By letting $M=\text{lcm}(p,p+1,p+2,\ldots,p+n) = 2^m\cdot D$ we have that
$$2^{m-1}D\sum_{k=0}^{n}\frac{1}{p+k}\in\mathbb{Z}+\frac{1}{2}$$ and the claim simply follows.
• @jiten: consider, for instance, $p=5$ and $n=6$. The largest power of $2$ dividing some element of $\{5,6,7,8,9,10,11\}$ is $2^3$, and the only element of $\{5,6,7,8,9,10,11\}$ which is a multiple of $2^3$ is $8$. It follows that $\frac{1}{5}+\frac{1}{6}+\ldots+\frac{1}{11}$ multiplied by $5\cdot 3\cdot 7\cdot 9\cdot 5\cdot 11$ is some integer multiple of $\frac{1}{4}$ plus $\frac{1}{8}$, i.e. a number with an even denominator. – Jack D'Aurizio Dec 17 '17 at 1:59
• Thanks a lot for giving such a fitting approach, but still unable to understand as to how it would be applicable. I tried a simpler example, but unable to complete it due to even terms, and am confused. Rather than taking it long, I would request you to complete the below : $p=2, n=2 , \frac{1}{2} + \frac{1}{3} + \frac{1}{4}$. The largest power of $2$ is $2^2=4$. It follows that $\frac{13}{12}$ multiplied by ... – jiten Dec 17 '17 at 2:43
• @jiten: $3$ has an even denominator. – Jack D'Aurizio Dec 17 '17 at 2:44
• @jiten: both approaches work and they are equivalent to a classical argument for showing (through $\nu_2$) that $H_n\not\in\mathbb{Z}$ for any $n>1$. Have a look at dropbox.com/s/auxzc1w0mubpx55/… – Jack D'Aurizio Dec 17 '17 at 3:11
|
2021-01-27 06:55:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934808373451233, "perplexity": 217.32092889869764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821253.82/warc/CC-MAIN-20210127055122-20210127085122-00134.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/trigonometry/trigonometry-7th-edition/chapter-6-section-6-2-more-on-trigonometric-equations-6-2-problem-set-page-333/50
|
## Trigonometry 7th Edition
$\theta=86.4^o\approx 86^o$
$\;\;\;cos(\theta)=\frac{r^4}{R^4}$ At $\;\;\;\;\;r=2\;\;\;\;\;\;\;\;\;and\;\;\;\;\;\;\;\;\;\;R=4$ $cos(\theta)=\frac{r^4}{R^4}=\frac{16}{256}=\frac{1}{16}$ $cos(\theta)=\frac{1}{16}$ $\theta=cos^{-1}(\frac{1}{16})$ $\theta=86.4^o\approx 86^o$
|
2019-11-15 20:45:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7907517552375793, "perplexity": 2318.6252459927555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668712.57/warc/CC-MAIN-20191115195132-20191115223132-00325.warc.gz"}
|
https://groupprops.subwiki.org/w/index.php?title=Dihedral_group&diff=45754&oldid=32472
|
# Difference between revisions of "Dihedral group"
WARNING: POTENTIAL TERMINOLOGICAL CONFUSION: Please don't confuse this with dicyclic group (also called binary dihedral group)
This article defines a group property: a property that can be evaluated to true/false for any given group, invariant under isomorphism
View a complete list of group properties
VIEW RELATED: Group property implications | Group property non-implications |Group metaproperty satisfactions | Group metaproperty dissatisfactions | Group property satisfactions | Group property dissatisfactions
This is a family of groups parametrized by the natural numbers, viz, for each natural number, there is a unique group (upto isomorphism) in the family corresponding to the natural number. The natural number is termed the parameter for the group family
## Definition
The dihedral group of degree $n$ and order $2n$, denoted sometimes as $D_n$, sometimes as $\operatorname{Dih}_n$, and sometimes as $D_{2n}$ (this wiki uses $D_{2n}$) is defined in the following equivalent ways:
$\langle x,a \mid a^n = x^2 = e, xax^{-1} = a^{-1} \rangle$
• (For $n \ge 3$): It is the group of symmetries of a regular $n$-gon in the plane, viz., the plane isometries that preserves the set of points of the regular $n$-gon.
The dihedral groups arise as a special case of a family of groups called von Dyck groups. They also arise as a special case of a family of groups called Coxeter groups.
Note that for $n = 1$ and $n = 2$, the geometric description of the dihedral group does not make sense. In these cases, we use the algebraic description.
The infinite dihedral group, which is the $n = \infty$ case of the dihedral group and is denoted $D_\infty$ and is defined as:
$\langle x,a \mid x^2 = e, xax^{-1} = a^{-1} \rangle$.
## Particular cases
### For small values
Note that all dihedral groups are metacyclic and hence supersolvable. A dihedral group is nilpotent if and only if it is of order $2^k$ for some $k$. It is abelian only if it has order $2$ or $4$.
Order of group Degree (size of regular polygon it acts on) Common name for the group Comment
2 1 Cyclic group:Z2 Not usually considered a dihedral group.
4 2 Klein four-group elementary abelian group that is not cyclic
6 3 symmetric group:S3 metacyclic, hence supersolvable but not nilpotent
8 4 dihedral group:D8 nilpotent but not abelian
10 5 dihedral group:D10 metacyclic, hence supersolvable but not nilpotent
12 6 dihedral group:D12 direct product of the dihedral group of order six and the cyclic group of order two.
16 8 dihedral group:D16 nilpotent but not abelian
## Arithmetic functions
$n$ here denotes the degree, or half the order, of the dihedral group, which we denote as $D_{2n}$.
Function Value Explanation
order $2n$ Cyclic subgroup of order $n$, quotient of order $2$.
exponent $\operatorname{lcm}(n,2)$ Exponent of cyclic subgroup is $n$, elements outside have order $2$.
derived length $2$ for $n \ge 3$, $1$ for $n = 1,2$
nilpotency class $k$ when $n = 2^k$, none otherwise Nilpotent only if $n$ is a power of $2$.
max-length 1 + sum of exponents of prime divisors of $n$ The dihedral groups are solvable.
composition length 1 + sum of exponents of prime divisors of $n$ The dihedral groups are solvable.
number of subgroups $\sigma(n) + d(n)$ $\sigma(n)$ is the divisor sum function and $d(n)$ is the divisor count function.
number of conjugacy classes $(n+3)/2$ if $n$ is odd, $(n+6)/2$ if $n$ is even.
number of conjugacy classes of subgroups $3d(n) - d(m)$ where $m$ is the largest odd divisor of $n$ $d(n)$ subgroups inside the cyclic part. For odd divisors, one external conjugacy class of subgroups per divisor; for even divisors, one external conjugacy class per divisor.
## Group properties
Property Satisfied Explanation
Abelian group False for $n \ge 3$ For $n \ge 3$, the elements $a,x$ do not commute.
Nilpotent group True for $n$ a power of $2$, false otherwise
Metacyclic group True
Supersolvable group True
Solvable group True
T-group True for $n$ odd or twice an odd number, false for $n$ a multiple of $4$
Rational group True for $n \le 4$, false for $n \ge 5$
Rational-representation group True for $n \le 4$, false for $n \ge 5$
Ambivalent group True Elements in the cyclic subgroup are conjugate via $x$, elements outside have order two.
## Elements
Further information: element structure of dihedral groups
## Subgroups
Further information: Subgroup structure of dihedral groups
There are two kinds of subgroups:
• Subgroups of the form $\langle a^d \rangle$, where $d|n$. The number of such subgroups equals the number of positive divisors of $n$, sometimes denoted $\tau(n)$. The subgroup generated by $a^d$ is a cyclic group of order $n/d$.
• Subgroups of the form $\langle a^d, a^r x \rangle$, where $d|n$ and $0 \le r < d$. The number of such subgroups equals the sum of all positive divisors of $n$, sometimes denoted $\sigma(n)$. The subgroup of the above form is a dihedral group of order $2n/d$.
In particular, all subgroups of the dihedral group are either cyclic or dihedral.
Also note that the dihedral group has subgroups of all orders dividing its order. This is true more generally for all finite supersolvable groups. Further information: Finite supersolvable implies subgroups of all orders dividing the group order
## Supergroups
### Groups having the dihedral group as quotient
The dicyclic group, also called the binary dihedral group, of order $4n$, has the dihedral group of order $2n$ as a quotient -- in fact the quotient by its center, which is a cyclic subgroup of order two. the presentation for the dicyclic group is given by:
$\langle a,x \mid a^n = x^2 = (ax)^2 \rangle$.
Dicyclic groups whose order is a power of $2$ are termed generalized quaternion groups.
|
2020-02-27 05:52:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 77, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8814653754234314, "perplexity": 498.0402519483903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146647.82/warc/CC-MAIN-20200227033058-20200227063058-00305.warc.gz"}
|
https://www.physicsforums.com/threads/difference-between-arctan-x-and-cot-x.326081/
|
Difference between arctan(x) and cot(x)
1. Jul 20, 2009
paridiso
If arctan(x) = tan(x)^-1 and since tan(x) = sin(x)/cos(x) is not arctan(x) = cot(x). I know something's not right here but what is it?
Thanks.
2. Jul 20, 2009
rock.freak667
No no.
Cot(x)=1/tan(x)
arctan(x)=tan-1(x) $\neq$1/tan(x)
3. Jul 21, 2009
RoyalCat
Just to clarify, that notation doesn't mean the multiplicative inverse in this context, but the inverse function.
$$tan(x)=y$$
$$Arctan(y)=x$$
With whatever restrictions are necessary so that those are functions and not just relations.
|
2018-02-26 02:03:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.792590320110321, "perplexity": 4476.453805371276}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817908.64/warc/CC-MAIN-20180226005603-20180226025603-00702.warc.gz"}
|
https://barneyshi.me/2022/01/01/Total-Hamming-Distance/
|
# Leetcode 477 - Total Hamming Distance
Note:
• Every time use mask to seprate nums into two groups based on the bit is 0 or 1.
• Then at that position, there would be a * b combinations.
• Update ans and our mask to keep going.
Question:
The Hamming distance between two integers is the number of positions at which the corresponding bits are different.
Given an integer array nums, return the sum of Hamming distances between all the pairs of the integers in nums.
Example:
Code:
|
2022-10-06 10:19:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2866547405719757, "perplexity": 1103.693759663137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00186.warc.gz"}
|
https://mathoverflow.net/questions/310001/does-zorns-lemma-imply-a-physical-prediction
|
# Does Zorn's Lemma imply a physical prediction? [duplicate]
A friend of mine joked that Zorn's lemma must be true because it's used in functional analysis, which gives results about PDEs that are then used to make planes, and the planes fly. I'm not super convinced. Is there a direct line of reasoning from Zorn's Lemma to a physical prediction? I'm thinking something like "Zorn's Lemma implies a theorem that says a certain differential equation has a certain property, and that equation models a phenomena that indeed has that property".
## marked as duplicate by Gerald Edgar, j.c., Asaf Karagila, Andrés E. Caicedo, QfwfqSep 7 '18 at 9:49
• All functional analysis which has applications in real life is done over separable spaces, and for those you can do without any use of Zorn's lemma. – Wojowu Sep 6 '18 at 14:40
• from Spontaneous Phenomena, by F. Topsoe, Academic Press 1990: THESIS 22: Those who seek a phenomenon which exactly follows a mathematical model, seek in vain. – Gerald Edgar Sep 6 '18 at 14:58
• @Wojowu: I don't think that's quite accurate --- what about nonseparable spaces like $L^\infty[0,1]$ and $B(H)$? However, I do support the broader point that AC is not needed for anything you would do with these spaces in applications. – Nik Weaver Sep 6 '18 at 15:44
• The good news is that Zorn's lemma is true anyways (as the axiom of choice is) :-) – GH from MO Sep 6 '18 at 16:22
• Roughly speaking, Zorn's lemma is the guarantee that transfinitely long algorithms containing arbitrary choices can be run for any ordinal length. In practical applications such as PDE, such algorithms are rarely needed beyond ordinals such as $\omega^2$ or at worst $\omega^\omega$ (maybe $\epsilon_0$ in some extreme cases). So Zorn is overkill most of the time, but it is more convenient to invoke this lemma than to keep careful track of what ordinal strength of transfinite induction one is using at any given time. – Terry Tao Sep 7 '18 at 1:26
There are a lot of argument that can be applied here, and the question linked in the comment already give several of these, but there is one that I really like, and which I don't remember having seen a lot. Of course no argument of this sort can be fully rigorous as it always start from some assumption on what physics is supposed to be about and what is the real world... so this is only one answer among many possible.
Our standing assumption is "in physics and real work application we only care about observable things".
The short version of the argument is that every relevant rules of physics or theorem in physics, should be written in terms of geometric sequent (in the sense of geometric logic), as only those corresponds to statement about observable things. If this is the case then Barr's theorem show that any such theorem you can prove from your rules using the axiom of choice, you can also prove it without using neither the axiom of choice nor the law of excluded middle. So, AC (and the law of excluded middle) have no "observable" consequences.
Let me clarify what I mean by that:
If $x$ is some physical quantity (a real parameter like the mass, or the speed, or position or temperature of something in some units) then proposition like "x<10" are propositions that I call 'observable', because if they are true there is a finite time experiment that can prove it: If $x$ is indeed <10 then a good enough approximation of the value of $x$ will prove it. (I'm ignoring quantum mechanics, which has more to do with the fact that position speed and so one cannot really defined rather than they cannot be observed with arbitrary precision, in Quantum mechanics, in this case the observable property would be about probability of some event occurring... it might require a probabilistic refinement of the discussion here though.)
By opposition, the statement "$x \leqslant 10$" is not observable in the same sense, because if it happens that $x$ is really equal to $10$, then no measure of $x$ with no given precision would be able to prove that $x \leqslant 10$, you will always get that $x$ is in some open interval around 10.
Now in logical terms, if you have certain observable propositions, you can take a finite "AND" , an infinite "OR", apply some existential quantification to them and obtain another observable proposition, but the negation, the infinite "And", or the implication will in general take you out of the realm of observable propositions.
In categorical logic, the propositions that are formed from certain 'atomic' propositions using infinite OR, finite AND, and existential quantification are called "Geometric propositions".
One call a "Geometric sequent" something of the form $\forall x_1,x_2,\dots,x_n, P \Rightarrow Q$. with $P$ and $Q$ geometric proposition
I claim that any rule or theorem of physique should have this form, i.e. they should say that "if some observations are made then I know I will be able to make some other observations". (this also include thing like $P \Rightarrow False$, i.e. "I'll never make such observations".
Barr's theorem shows that if from some axioms that are geometric sequent, and using all of classical logic and the axiom of choice (in particular Zorn lemma), you can deduce some other some geometric sequent, then there exists a similar proof that does not use neither the axiom of choice nor the law of excluded middle.
So in the end, you can freely use the axiom of choice wherever you want and you know that any theorem about thing you can actually observe in the real world will have a constructive proof.
• Note also that given any measurement process, and any $t$, there exists an $\epsilon>0$ such if $10-\epsilon < x<10$, it will take more than $t$ seconds to determine that $x<10$. – Acccumulation Sep 6 '18 at 22:05
• Is every observable property equivalent to membership of some open set? Conversely, is membership of every open set an observable property? – afuous Sep 7 '18 at 0:28
• The "observable" are always open subsets of a topology (due to their stability under finite intersection and arbitrary union) and yes in the case of a real number that you can measure up to arbitrary precision this is the usual topology of R. – Simon Henry Sep 7 '18 at 0:44
• Regarding further read I don't know. Barr theorem is a result about covering of toposes which appear in a lot of books on topos theory, but you'll have first to familiarise with topos theory. I have heard that there are non-topos theoric proof, but I don't now where. The reste regarding physics is more of a personal interpretation, I don't know if someone has written about it. – Simon Henry Sep 7 '18 at 0:48
The tongue-in-cheek argument "Zorn's lemma must be true because it's used in functional analysis, which gives results about PDEs that are then used to make planes, and the planes fly" is essentially correct except for the first step. The part of functional analysis/PDE theory which has relevance to planes flying can be done without the axiom of choice or to be more precise without the uncountable choice portion of it. In fact, to use math for plane design you will probably need to run simulations on the computer. Good luck trying to simulate a solution produced by the axiom of choice.
Also, "Zorn's Lemma implies a theorem that says a certain differential equation has a certain property, and that equation models a phenomena that indeed has that property" might not exactly be the formulation of the right question here. One can probably write silly proofs of $1+1=2$ using Zorn's Lemma. Perhaps "a theorem which cannot be established without Zorn's Lemma says a certain differential equation has a certain property, and that equation models a phenomena that indeed has that property" might be a bit better.
Here's a set-theoretic answer, paralleling Simon Henry's use of Barr's theorem. Shoenfield's absoluteness theorem implies (among other things) that no $\Pi^1_2$ property can depend on choice: if $\varphi$ is a $\Pi^1_2$ sentence, then $\varphi$ is provable in ZFC iff $\varphi$ is provable in ZF. (It also rules out any reliance on cardinal arithmetic - the continuum hypothesis isn't going to cause engine failure.)
The specific definition of $\Pi^1_2$ is a bit involved, but it is a very broad class of sentences. For example, whether or not a differential equation over $\mathbb{R}$ has a continuous solution is $\Pi^1_2$ (in fact, even weaker - just $\Sigma^1_1$), as is every natural property about differential equations I've heard of.
Certainly $\Pi^1_2$ is sufficiently broad that the burden of proof would lie squarely with anyone claiming that there is a physically meaningful sentence which is not $\Pi^1_2$.
Pretty much all mathematics of relevance to physics can apparently be formalised in a strongly finitist foundation. Working in
... a fragment of quantifier-free primitive recursive arithmetic (PRA) with the accepted functions limited to elementary recursive functions. Elementary recursive functions are the functions constructed from some base arithmetic functions by composition and bounded primitive recursion.
Feng Ye's Strict Finitism and the Logic of Mathematical Applications (free draft copy, see also this review (paywall)) builds enough mathematics to treat pretty much all of applied mathematics/theoretical physics (say, that part of theoretical physics that is experimentally confirmed :-)
So as others have pointed out, logical axioms of the strength of full Zorn's lemma are far from necessary.
|
2019-11-11 23:01:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8415675759315491, "perplexity": 427.86204991340924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00121.warc.gz"}
|
https://2021.help.altair.com/2021/feko/topics/feko/user_guide/appendix/api_postfeko_auto_generated/object/reference_api_postfeko_object_spiceprobequantity_feko_r.htm
|
# SpiceProbeQuantity
The SPICE probe quantity properties.
## Example
app = pf.GetApplication()
app:NewProject()
app:OpenFile(FEKO_HOME..[[/shared/Resources/Automation/SpiceProbeTest.fek]])
-- Add a cartesian graph.
-- Add a SPICE probe trace.
-- Adjust quantity properties of the plot.
spiceProbesPlot.Quantity.ValuesNormalised = true
spiceProbesPlot.Quantity.ValuesScaledToDB = true
## Usage locations (object properties)
The following objects have properties using the SpiceProbeQuantity object:
## Property List
ComplexComponent
The complex component of the value to plot, specified by the ComplexComponentEnum, e.g. Magnitude, Phase, Real, Imaginary. (Read/Write ComplexComponentEnum)
PhaseUnwrapped
Specifies whether the phase is unwrapped before plotting. This property is only valid when the ComplexComponent is Phase. (Read/Write boolean)
Type
The type of quantity to be plotted, specified by the SpiceProbeValueTypeEnum, e.g. Current or Voltage. (Read/Write SpiceProbeValueTypeEnum)
ValuesNormalised
Specifies whether the quantity values must be normalised to the range [0,1] before plotting. This property can be used together with dB scaling. This property is not valid when the ComplexComponent is Phase. (Read/Write boolean)
ValuesScaledToDB
Specifies whether the quantity values are scaled to dB before plotting. This property is only valid when the ComplexComponent is Magnitude. (Read/Write boolean)
## Property Details
ComplexComponent
The complex component of the value to plot, specified by the ComplexComponentEnum, e.g. Magnitude, Phase, Real, Imaginary.
Type
ComplexComponentEnum
Access
PhaseUnwrapped
Specifies whether the phase is unwrapped before plotting. This property is only valid when the ComplexComponent is Phase.
Type
boolean
Access
Type
The type of quantity to be plotted, specified by the SpiceProbeValueTypeEnum, e.g. Current or Voltage.
Type
SpiceProbeValueTypeEnum
Access
|
2022-12-06 14:01:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31752321124076843, "perplexity": 8551.859297491828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00192.warc.gz"}
|
https://help.myeasa.com/6_1/doku.php?id=admin:keypair_keytool
|
Contents
Generate a key pair and a self-signed SSL certificate using Java's command line → keytool
• open the command line console and navigate to the directory where keytool.exe is located, for a standard installation its
• C:\EASA\EASAx.x\jre\bin
• type the following command
• keytool –genkey –keyalg RSA –alias tomcat –keystore easastore.jks –storepass 123123 –validity 360 –keysize 2048
Blue text indicates values that may be customized
• -keyalg is the encryption algorithm to be used, choose from
• RSA DSA EC DES DESede)
• -alias the name of the self-signed certificate
• -keystore the name of the keystore file which will be created with the self-signed certificate
• (.jks extension required)
• -storepass the password for the keystore file (and by default for the certificate)
• -validity the number of days before the certificate will expire
• -keysize the key size in bits depending on the type of encryption that is used
• (2048 for RSA, 1024 for DSA, 256 for EC, 56 for DES and 168 for DESede)
Fill in the prompts for your organization information. When it asks for your first and last name, enter the domain name of the server or in our case we will use the name of the machine where EASA Server is installed.
Now we export the newly created certificate inside easastore.jks so we may import it to cacerts file later.
To export the certificate run
keytool –export –alias tomcat –file tomcat.crt –keystore easastore.jks
tomcat is the alias we set before in the previous command
tomcat.crt is the name of the certificate file. It can be .cer or .crt
easastore.jks is the keystore we created before in the previous command
It will ask for password and will export the certificate to a file
• Copy the file
• <EASAROOT>\jre\lib\security\cacerts
• to the same location as the keystore and the certificate, in this case
• C:\EASA\EASAx.x\jre\bin
Run the following command to import the certificate to the tomcat file:
keytool –import –trustcacerts –alias tomcat –file tomcat.crt –keystore cacerts
tomcat is the alias we set before in the previous command
tomcat.crt is the name of the certificate file. It can .cer or .crt
cacerts is the EASA tomcat keystore
It will ask for cacerts keystore password, default changeit is the default
Once we have easastore.jks with our key pair and cacerts contains our self-signed certificate, skip to Enable TLS using a Certificate Authority and Keystore Explorer
|
2021-09-16 16:22:41
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031076192855835, "perplexity": 11828.127562827565}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053657.29/warc/CC-MAIN-20210916145123-20210916175123-00436.warc.gz"}
|
http://chiavellainox.it/quig/speedometer-in-simulink.html
|
This type of modeling is particularly useful for systems that have numerous possible operational modes based on discrete events. See the complete profile on LinkedIn and discover Greeshma's. Each version of the system includes an intelligent brick computer that controls the system, a set of modular sensors and motors, and Lego parts from the Technic line to create the mechanical systems. Put everything the devices do in an owner's guide and "instead of one paragraph, you'd have potentially another 20 or 30 pages. In MATLAB R2019a, it is now possible to create standalone Stateflow charts that can be executed in MATLAB. Weusetheoilpumpas theloadequipment. If you remember the Arduino WaterFlow Sensor Tutorial we implemented earlier, the main component of the Water Flow Sensor is the Hall Effect IC. Degree in Computer Sceince from University of Regina in 2014. That really wouldn't be realistic," says Richard Ruth, a black box. 0 RsLogix500 v6. The mask is a standard nebulizer mask that is used by asthma patients. 01 RsNetwork For Controlnet ROCKWELL SLC500 APS ROCKWELL PLC5 6200 APS ROCKWELL RSLOGIX5 V5 ROCKWELL RSEMULATE500 V4. The motor will run at full speed when the duty cycle is 100%. 729-733(5) Authors:. pdf), Text File (. 0 + RSSQL Rsview7. Transient stability analysis. BISTABLE TOUCH SWITCH 648. Energy is provided by the current energy. Please see the Cruise Control: System Modeling page for the derivation. 10Points / $20 22Points /$40 9% off 65Points / $100 33% off. BMTC the Transport corporation of bangalore is going to implement the cashless ticket system in bangalore city, this leads to change people mindset rapid move towards cashless principle and practices, although their is a higher demand in buses for getting proper denomination of changes, this feels a good for everyday traveler, Now a days many technology are used for cashless transaction and. System model and parameters. P = Pump = pressurized flow enters the PSU from the pump. File upload progressor. matlab video tracking based on particle filter. Short circuit analysis – unsymmetrical faults 3. Lego Mindstorms is a hardware and software structure which is produced by Lego for the development of programmable robots based on Lego building blocks. BMTC the Transport corporation of bangalore is going to implement the cashless ticket system in bangalore city, this leads to change people mindset rapid move towards cashless principle and practices, although their is a higher demand in buses for getting proper denomination of changes, this feels a good for everyday traveler, Now a days many technology are used for cashless transaction and. In this paper, I want to talk about the Passport 8500 X50 comparing to the Valentine One and Bel 985, which is known as the world’s best radar detector. View Abir Ahmed's profile on LinkedIn, the world's largest professional community. ROBUSTNESS ASSESSMENT OF ADAPTIVE FDI SYSTEM FOR ENGINE AIR PATH 181 1. Please ASK FOR ultrasonic sensor hcsro4 interfaced with atmega32 BY CLICK HEREOur Team/forum members are ready to help you in free of cost. This type of modeling is particularly useful for systems that have numerous possible operational modes based on discrete events. Giving Speed = 30 km/hr in the Speedometer, if the speed is between 25 km/hr - 40 km/hr 3rd Gear will activate as per the given condition. Simulink Instructor 62,479 views. IN CIRCUIT TRANSISTOR TESTER 649. by Randy Frank. been done on Matlab/Simulink [2] simulation models of complete energy chains for various applications [3], [4]. The transfer function model for the cruise control problem is given below. 729-733(5) Authors:. G6-HW-09005-E Page 5 of 25 1 Functional description 1. The trajectory function determines actor pose properties based on a set of waypoints and the speeds at which the actor travels between those waypoints. The result shows that the speedometer can show the corresponding information such as speed, time and date, store the messages in the memory, and transmit them to the USB Flash Drive correctly. September 9, Emulated Speedometer on an LCD. Divide this number by 5,280, which converts feet per hour to miles per hour. 10v in simulink. Getting Started with Simulink, Part 1: How to Build and Simulate a Simple Simulink Model - Duration: 9:03. SMS based remote SIM card’s address book access system. Pulse Width Modulation (PWM) is a fancy term for describing a type of digital signal. But the speedometer tells you that you are going 35km/h. This example model uses four Custom Gauge blocks and a MultiStateImage block to create a dashboard for the sf_car model like one you might see in a real car. The speedometer is used for maintaining an appropriate pace while driving a vehicle. the speedometer, and your brain have formed a control loop. in the Simulink / Stateflow model that have been defined to be of such custom types QGen shall use the corresponding target language types as well. The speedometer dial would look different in every car. Hotbird sexy channel frequency. Design a location tracking App using GPS in Android Studio This page shows the steps to create an App which uses Google Map to give the precise location of your phone. Sistem kontrol berkendaraan berarti kombinasi dari komponen-komponen tersebut yang menghasilkan berjalannya kendaraan pada lintasan yang diinginkan. The output pin provides a voltage output that is linearly proportional to the applied magnetic field. From the Simulink GUI Code generation can be launched directly from within a Simulink model in the MATLAB GUI. A minimum turn radius means that the turn envelope of the tractor should be as small as could possibly be achieved. The driver throotle knob or the brakes as necessary to adjust the speed. Ranatunga∗ and S. execute code triggered by changes in. The Simulink ® Scope block and DSP System Toolbox™ Time Scope block display time domain signals. The L298N module has a very famous L298 Motor driver IC which is the main part of this module. This type of speed control is called pulse-width modulation. MTK3329/MTK3339 command set sheet for changing the fix data rate, baud rate, sentence outputs, etc! LOCUS (built-in-datalogging system) user guide. At 999 steps the car was at 35-45kmph as shown in car speedometer, when accelerated on the steps,car was Plug In Hybrid, and 4x4 wheel drive. VS Visualizer includes heads up display (HUD) for 2D graphic controls (speedometer, ESC indicator, etc. BASIC LINEAR DESIGN 8. On Picture 1 you can see the completed and powered project. System model and parameters. A computer-implemented method for operating an autonomous or semi-autonomous vehicle may include identifying a vehicle operator and retrieving an associated vehicle operator profile. Design validation testing of vehicle instrument cluster using machine vision and hardware-in-the-loop Conference Paper · October 2008 with 1,043 Reads How we measure 'reads'. circa 8 anni ago | 1 answer | 0. Gear position is displayed below in the Simulink model. The sampled vehicle speed and engine RPM are read from the CAN bus and displayed on the speedometer and the. Here is one simple solution. Download Project: Fields with * are mandatory. Electric vehicle life cycle cost analysis : final research project report. Follow 76 views (last 30 days) Benjamin Bokser on 13 Aug 2017. - You watch the difference (error) between your speed and th. File upload progressor. This type of modeling is particularly useful for systems that have numerous possible operational modes based on discrete events. If you want a low-cost turnkey driving simulator from Mechanical Simulation, you can get CarSim DS or TruckSim DS. Could be run on hardware (EV3). US Shift Electronic Transmission Control Systems let you take charge of your 6R80, 4R70W, E4OD, 4R100, 4L60E, or 4L80E transmission and unlock its true potential. com, fiverr. To improve the accuracy and actual road equivalence of vehicle performance testing using test benches, a double-drum test bench that meets the test requirements of vehicle control system prototypes and in-use vehicles was designed. This is what is also known as Sensor Bias. h is in here: /usr/include/glib-2. System model and parameters. Model predictive control (MPC) is one of the most widely spread advanced control schemes in industry today. Figure 1 - MT Manager screenshot of the inertial data (angular velocity for all 3 axes). A Hall Effect Sensor works on the principle of, well, Hall Effect. Harmonic analysis 6. So I have been trying to create a display that plays the force value at the appropriate recorded time. Check the best results!. php on line 118. Common tasks include calling a Lookup Table block from Simulink to perform interpolation on a specific Stateflow variable. As of Release 2016a, Gauges Blockset is discontinued and no longer available for purchase. All other notes, including sharps and flats, have their own fixed frequency. 2 (a) Arduino Due. Speedgoat target computers are real-time computers fitted with a set of I/O hardware, Simulink ® programmable FPGAs, and communication protocol support. View Akash Kuber’s profile on LinkedIn, the world's largest professional community. To energise the four coils of the stepper motor we are using the digital pins 8,9,10 and 11. Baby & children Computers & electronics Entertainment & hobby. Today, I am going to share my knowledge about how can you make a simple program for DC Motor Speed Control using Arduino UNO. Cruise control as we know it today was invented in the late 1940s, when the idea of using an electrically-controlled device that could manipulate road speeds and adjust the throttle accordingly was conceived. View Prajwal Raj M B’S profile on LinkedIn, the world's largest professional community. This part of the system uses the quad op-amp SLG88104. Introduction to Stepper Motors. Datasheet for the PA6C (MTK3339) GPS module itself - used in version 2 of this module. City and County of Denver - Colorado | Charleston County - South Carolina | Dauphin County - Pennsylvania | Cass County - North Dakota. MATLAB 326,455 views. This stands for: T = Tank = flow return to the reservoir from this port. Traditional signal flow is handled in Simulink while changes in control configuration are implemented in Stateflow. Along with a network of selected partners, VI-grade also provides revolutionary turnkey solutions for static and dynamic driving simulation. execute code triggered by changes in. Put everything the devices do in an owner's guide and "instead of one paragraph, you'd have potentially another 20 or 30 pages. February 06, 2019. The on-board computer 114 may interface with the one or more sensors 120 within the vehicle 108 (e. más de 8 años ago | 1 answer | 0. Speedometer 2. STOPWATCH USING PIC16F676 PLEASE HELP! I am trying to built a stopwatch using pic16f676 and a 3 digit 7 segment common anode display. When ST pin is connected to 3. , physical part). 2015 Fall Meral Yıldırım, Kübra Yeniçeri, Oğuz Özdemir, 0266: Ahmet Uçar: Design a Data Acquisition and Distribution System communicated with MATLAB/Simulink. Learn more about gui, simulink, simulation, aerospace blockset, control, cockpit, aircraft MATLAB, Simulink. wheel, turn signal indicator, speedometer, and mirrors. 626 Hz and the E above it is at about 329. They reflect the quantity of work each course requires in relation to the total quantity of work required to complete a full year of academic study at the institution, that is, lectures, practical work, seminars. d r G y K Δ y m u But through feedback, the effect is more subtle… (d,Δ) affect y y affects y m So,… through feedback (d Δ)fft y m affects u, ) affect u. Inside each Matlab block is an if/then function, where if u > i, then y = 1, however, if u < i, then y = 0. This section offers a few tools that can be easily edited and further developed in Simulink. Capabilities and Features. File upload progressor. 5, 469-478 471 The characteristic speed of the vehicle can only ex-ist if the understeer coefficient of the vehicle is above zero [34]. Se Suriya Prasannas profil på LinkedIn, världens största yrkesnätverk. PID control is so universal that PI and PID loops can be small and fast like a current-regulating. or 4) add /usr/lib to your pkg path. If this option is not specified, then tachorpm computes the levels automatically using the histogram method, as in the statelevels function. Linear Control System Analysis and Design with MATLAB , Sixth Edition (Automation and Control Engineering, Book 53) Houpis C. For testing purposes, we would have to incorporate a position sensor, speedometer, and accelerometer. ) Hi-res professional icons for use in MATLAB, Simulink, presentations, web, etc. This type of modeling is particularly useful for systems that have numerous possible operational modes based on discrete events. Lego Mindstorms is a hardware and software structure which is produced by Lego for the development of programmable robots based on Lego building blocks. fanout reset - what is the difference between @posedge clk or negedge rst or posedge rest - Dear senior assemblers. 500+ Electronics tutorials and projects using microcontrollers, Arduino, PIC microcontroller,ARM and electronics components. View Prajwal Raj M B’S profile on LinkedIn, the world's largest professional community. Bus or Simulink. RENR1514_01 - Free download as PDF File (. 5 Ampere output load current and. GPS based vehicle travel location-logging system. 9944282: Autonomous vehicle automatic parking: 2018-04-17: Fields et al. Simulink uses the base workspace. Everything starts out nicely, but the real system doesn't change linearly with time, so during the timestep we accumulate errors. Custom connectors and interconnects, static grounding reels, pressure transducers, heat exchangers, cabin filters and motion control and cursor control devices also available. This program can be con- pressure sensor as a vertical speedometer. Through product demonstrations, you will see a high-level overview of the major capabilities and how you can use Simulink to design, simulate, implement, and test a. Главная / Лего Minecraft / LEGO Minecraft Пустынная станция Враждебные мобы появляются с невиданной раньше скоростью, и пустыня становится очень опасным местом. 01 RsNetwork For Controlnet ROCKWELL SLC500 APS ROCKWELL PLC5 6200 APS ROCKWELL RSLOGIX5 V5 ROCKWELL RSEMULATE500 V4. Alternatively, you can set up your model to accept a time series input and supply that from your function. and then proceeded to attempt to adapt it to Simulink with an S-Function builder block. Available immediately. Zobrazte si úplný profil na LinkedIn a objevte spojení uživatele Petr a pracovní příležitosti v podobných společnostech. Datasheet for the PA6B (MTK3329) GPS module itself - used in version 1 of this module. For example, DC and gearmotors that are moving loads may require close monitoring or adjustment of the output speed. Shirshendu - Writing a business proposal every time you Tulshi - Your data will be safe even after uploading Samsons - Anyone can design the company logo to be used. Community Toolboxes. This type of modeling is particularly useful for systems that have numerous possible operational modes based on discrete events. Pavel Roslovets Last seen: Today ETMC Exponenta (MathWorks official distributor in Russia) Icons (for App Designer, Simulink, presentations, etc. Co-Simulation of Full Vehicle Model in Adams and Anti-Lock Brake System Model in Simulink Master’s thesis in Applied Mechanics TOBIAS ERIKSSON Department of Applied Mechanics Division of Dynamics Chalmers University of Technology Abstract This document is a master’s thesis written at Chalmers University of Technology in collaboration with the. Vehicle owners may wish to check their speedometer’s accuracy after receiving a speeding fine, or when they fit different sized tyres to their vehicle. That was an overview of the top 3 Arduino accelerometer modules available out there! I hope you find this useful. Review of Recent Literature Relevant to the Environmental Effects of Marine and Hydrokinetic Energy Devices Task 2. The diagram below describes a Wheatstone bridge designed with MATLAB Simulink, The resistor designated [R x] can be replaced with a variable resistance material and used in the construction of a strain gauge. The transfer function model for the cruise control problem is given below. Each variable has a mean value \ (\mu\), which is the center of the random distribution (and its most likely state), and a variance \ (\sigma^2\), which is the uncertainty: In the above picture, position and velocity are. 4) Development and implementation of an intelligent energy management system. Design validation testing of vehicle instrument cluster using machine vision and hardware-in-the-loop Conference Paper · October 2008 with 1,043 Reads How we measure 'reads'. e speedometer receives the speed signal of the working sha throughthetransmissionbelt. matlab video tracking based on particle filter. Srinath Krishnamoorthy Posted by Srinath TK July 20, 2017 August 13, 2017 Posted in Artificial Intelligence , srinath krishnamoorthy Tags: AI , Ben Coppin , FIS , Fuzzy Inference Systems , Fuzzy Logic , Fuzzy Logic Controller , Matlab , membership function , programming , sciLab , Simulink , srinath krishnamoorthy Leave a comment on Fuzzy Logic. ) I need this data to overlay with a video. Follow 76 views (last 30 days) Benjamin Bokser on 13 Aug 2017. high quality at wholesale prices. Transient stability analysis. Full text of "Control System Engineering Stuff" See other formats. rs] has joined ##stm32 --- Day changed Fri Aug 02 2019 2019-08-02T00:00:28 -!- rajkosto [[email protected] Credit: 0:0:2 Marks: 50+50 All the experiments are based on MATLAB and Simulink. Simulink Model Stuck Initializing. The Synchronous Model of Computation Stavros Tripakis UC Berkeley [email protected] In place of Gauges Blockset, consider using the graphical controls and displays included in the Dashboard block library in Simulink. The equipment that you will need for this Arduino light sensor tutorial is pretty basic as I mentioned earlier. or 4) add /usr/lib to your pkg path. Delete the engine speed Toggle Switch. Connect the output of the function block to the torque input port, Tm, of the machine block. ) LEGO EV3 Spinner Speedometer (Stateflow). 8 degrees per step. The number of coils will differ based on type of stepper motor, but for now just understand that in a stepper motor the rotor consists of metal poles and each pole will be attracted by a set of coil in the stator. execute code triggered by changes in. Cursor control devices; VEHICULAR INSTRUMENTATION SYSTEMS - VIS. We would model the dynamics of the system using Simulink. Combine Stateflow® with Simulink® to efficiently model hybrid systems. Credit: 0:0:2 Marks: 50+50 All the experiments are based on MATLAB and Simulink. This stands for: T = Tank = flow return to the reservoir from this port. I want to measure the speed of moving object for example CAR; but without interfacing or changing anything in the car ; for example could not add encoder or coul. Our transmission controllers are compact, packed with features, and have a built-in display for tuning as well as user-friendly software for more in-depth tuning. The stepper motor will play a factor in the above equation. 2 Naming and File Conventions 528 Representation of Systems in MATLAB 528 10. The simplest approach is to estimate velocity = Δpos/Δt: measure the change in position and divide by the change in time. Get it as soon as Wed, May 13. Hello, I have some difficulties with measuring the time between two 'events' (without using the Gauge-blockset). Sistem kontrol loop terbuka ini memang lebih sederhana, murah, dan mudah dalam desainnya, akan tetapi akan menjadi tidak stabil dan seringkali memiliki tingkat kesalahan yang besar bila diberikan gangguan dari luar. net Stats to your Sidebar The Speedometer graphics were created by D TI Stat This plugin posts different charts from Yandex. Harmonic analysis 6. It will be helpful to vary the speed of the DC motor in either clockwise or in anti clockwise direction. I initially thought I'd like it better if it were matte black rather than silver, but actually it looks very nice on the bike as it matches the ring on the headlight. In this tutorial, you will learn how to use the rosserial communication, three HC-SR04 sensors, an Arduino and Raspberry Pi 4 to publish the ranges of the sensors. Trouble Shooting Directory for Three Phase Locomotives - eLocoS. The model described below represents a fuel control system for a gasoline engine. 3 Motor Sizing Calculation The most crucial part of any electric vehicle propulsion system is the traction motor. Create a Realistic Dashboard Using the Custom Gauge Block Open Script You can use the Custom Gauge block to create a dashboard of controls and indicators for your model that looks how it would in a real system. Get tips on how to customize the appearance of a Custom Gauge block in Simulink®, including the background image, how the needle appears, which path the needle takes, and whether an arc appears. 41 to 60 4. In other words, a SIMULINK block-diagram does all the programming for you, so that you are free to worry about other practical aspects of a control systems design and implementation. Please cover the shaded area with a picture. View Abir Ahmed's profile on LinkedIn, the world's largest professional community.$\begingroup@PeterCorke I don't have a speedometer but the speed in the range (7400 - 9250) RPM which gives a range of frequencies (123. Combine Stateflow® with Simulink® to efficiently model hybrid systems. DOWNLOAD PDF. Generators transform energy into electrical energy. The mask is a standard nebulizer mask that is used by asthma patients. Fpga schematic and hdl design tutorial iii latest version of the releva nt information to establish, before ordering, that the information being relied upon is current. IntEnumType objects. Ask Question Asked 3 years, 6 months ago. That paper details how the model does indeed produce all the relevant phenomena. Vertical speed h t (3) It is evident from Graph 1. That really wouldn't be realistic," says Richard Ruth, a black box. SteamTab Quad, Industrial Barcode Generator Free and more industrial formulation download. Description. Akash has 7 jobs listed on their profile. Learn how state-of-charge (SoC) algorithms are modeled in Simulink ®. wheel, turn signal indicator, speedometer, and mirrors. Starting in release R2011b, graphical properties of Simulink Scopes can be customized using the new Simulink Scope graphical property editor. Follow their code on GitHub. This type of modeling is particularly useful for systems that have numerous possible operational modes based on discrete events. Returns the length of the pulse in microseconds or gives up and returns 0 if no complete pulse was received within the timeout. how to load data to workspace through m-file. Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. The comments received sounded like: That is just a work-around, and not very convenient to be honest! One should be able to. Through product demonstrations, you will see a high-level overview of the major capabilities and how you can use Simulink to design, simulate, implement, and test a. The Simulink ® Scope block and DSP System Toolbox™ Time Scope block display time domain signals. Program (EMTP) in the field of power, and SIMULINK in the field of control. Ways to build a speedometer for a bike or skateboard. NASA Astrophysics Data System (ADS) DeVries, Phoebe M. IntEnumType objects. Images of Washington - Washington DC symbolizes the values that make the United States of America th By Always Great Software, Inc. View Akash Kuber’s profile on LinkedIn, the world's largest professional community. You can drag the center marker to align the center of your arc with the center of the arc in your background image. Simply speaking, a Hall Effect Sensor or IC detects motion, position or change in magnetic field strength of either a permanent magnet, an electromagnet or any. SPEEDOMETER DESCRIPTION The USB DrDAQ, complete with Buffer and 2200 series scope, represents a miniature, but fully functional electronic lab which allows very fast development of design ideas, as will be shown in the development and simulation of a Wheelchair Speedometer/Recorder. Datasheet for the PA6C (MTK3339) GPS module itself - used in version 2 of this module. Главная / Лего Minecraft / LEGO Minecraft Пустынная станция Враждебные мобы появляются с невиданной раньше скоростью, и пустыня становится очень опасным местом. The car was then driven at a constant speed, according to its speedometer. Imagine you are driving a car, trying to reach and maintain speed of 50 kilometers per hour. John Samuel et al. Displays, message centers, keypa. The diagram below describes a Wheatstone bridge designed with MATLAB Simulink, The resistor designated [R x] can be replaced with a variable resistance material and used in the construction of a strain gauge. First the brief and concise introduction of capacitive and inductive circuits is provided explaining the effect of introducing each of them in a resistive circuit. Get tips on how to customize the appearance of a Custom Gauge block in Simulink®, including the background image, how the needle appears, which path the needle takes, and whether an arc appears. 0 + RSSQL Rsview7. Hall-effect probes are more expensive and sophisticated instruments used in scientific laboratories for things like measuring magnetic field strength with very high precision. We also sell Speedometer speed sensors (not included with speedometer). At the moment I'm trying to get depth data from the HCSR04 ultrasonic sonar sensor connected to the raspberrypi as shown below: I'm following this video for all the parameters and yet cannot get in data at the output. So it will be triggered only when leading edge of aluminium foil comes in front of the sensor. 3 Motor Sizing Calculation The most crucial part of any electric vehicle propulsion system is the traction motor. theautochannel. 3V, an electrostatic force is exerted on the accelerometer beam internally. i have only one question how did you calculate the 100000 in rps and the 600000 in rpm , iam working on a licence degree project that its a little bit the same of that but in matlab so iam searching for how to calculate rps or rpm iam working with LM393 speed sensor ,it give us only the the 1 or 0 it high or low in your code. ) Hi-res professional icons for use in MATLAB, Simulink, presentations, web, etc. Combine Stateflow® with Simulink® to efficiently model hybrid systems. Bus or Simulink. Hello Serena, I guess you could try this: 1) check if the file glibconfig. CarSim and TruckSim are being used in a wide variety of driving simulators. Save on Modelling And Design By. The Synchronous Model of Computaon Stavros Tripakis UC Berkeley EE 249 Lecture - Sep 15, 2009 1. Autonomous vehicle software version assessment: 2018-04-17: Fields et al. I am trying to implement a transfer function for a rotary encoder to link the position of my mechanical system with the encoder measured position. In earlier model vehicles, this pedal was connected to a cable that ran from the pedal to the engine. CANoe is the comprehensive software tool for development, test and analysis of individual ECUs and entire ECU networks. This type of modeling is particularly useful for systems that have numerous possible operational modes based on discrete events. The output pin provides a voltage output that is linearly proportional to the applied magnetic field. The system is also intended to allow a MATLAB/Simulink interface for processing recorded data as well as implementing control laws programmed using the auto-code. EE217 COMPUTER AIDED POWER SYSTEM ANALYSIS LAB Credit: 0:0:2 Marks: 50+50 All the experiments are based on MATLAB and Simulink 1. Simulink uses the base workspace. We can use this model for desktop simulations where we can, for example, reproduce diverse usage cycles and environmental conditions to evaluate the system's response to a potentially unsafe condition; for example, a temperature, voltage, or current outside the recommended limits. place the probe as far into the free-stream as possible. Double-click the function block, and enter the expression for torque as a function of speed: 3. It offers 1. Home › consider the circuit in the diagram with sources of emf listed › consider the circuit in the diagram with sources of emf listed below › consider the circuit in the diagram with sources of emf listed below. After completion of our chart, we can see the inputs power and speed, outputs gear and top speed(Z). Speedometer, fuel gauge and Odometer. Hall-effect probes are more expensive and sophisticated instruments used in scientific laboratories for things like measuring magnetic field strength with very high precision. DC brush/brushless servo motors, controllers, gearboxes, encoders, brakes; DYNAMIC FLUID SOLUTIONS. 0 2) if not? it will be here: /usr/lib/glib-2. In this book, Tewari emphasizes the physical principles and engineering applications of modern control system design. Car to Arduino Communication: CAN Bus Sniffing and Broadcasting With Arduino: From Wikipedia, the Controller Area Network (CAN) bus is a "vehicle bus standard designed to allow microcontrollers and devices to communicate with each other within a vehicle without a host computer. DIY Beaglebone CAN Bus Cape: This is an addendum to my other tutorial exploring the Tesla Model S CAN bus, for that tutorial I needed some way to connect and send messages over the CAN system, but didn't want to pay and arm and a leg for a fancy Serial "Cape". 2 Million at KeywordSpace. Stepper motor doesn’t rotate continuously, they rotate in steps. Muhammad has 6 jobs listed on their profile. Speedometer, fuel gauge and Odometer. MoodThingy MoodThingy is a widget that any blogger can use to track the emotional feedback of an individual blog post or article Bikemap Speedometer Widget Adds a Speedometer including your Bikemap. 1 answer Question. Then, we could measure the input control to the system and the output trajectory, velocity, and acceleration. See the complete profile on LinkedIn and discover Umar’s connections and jobs at similar companies. Perencanaan dan Pembuatan Speedometer Digital untuk Kendaraan Sepeda. Discover Create Collaborate Get Feedback. Look at most relevant Custom messy video websites out of 42. Search for jobs related to Chess net java source or hire on the world's largest freelancing marketplace with 17m+ jobs. Daniela Rus (includes some material by Prof. View Muhammad Aaqib’s profile on LinkedIn, the world's largest professional community. Middle C is about 261. This project compared total life cycle costs of battery electric vehicles (BEV), plug-in hybrid electric vehicles (PHEV), hybrid electric vehicles (HEV), and vehicles with internal combustion engines (ICE). Custom connectors and interconnects, static grounding reels, pressure transducers, heat exchangers, cabin filters and motion control and cursor control devices also available. Speedometer, Wake-Up and Sleep Management system in MATLAB Simulink and Stateflow environment. Model of EV3 line tracking robot. The ThingSpeak IoT has been building a new framework to support widgets on channel views. It offers 1. ) Hi-res professional icons for use in MATLAB, Simulink, presentations, web, etc. Arduino UNO based Projects: Arduino Uno is a micro-controller board based on the ATmega328P. This app has a very simple user interface and the content can be easily understood by the user. This is perfect because the arduino supplies 5V of power, right in between this range. 20191015condition used refurbished pitney bowes da750 printer with w760 conveyor demo modeloth of these machines are like newhey were only setup to demo but never used conveyor comes in original box. This is what is also known as Sensor Bias. Steering input effort characteristics relative to vehicle speed and servo pressure assistance are shown in Fig. (8) In the beginning of simulation, the ACC vehicle is travelling behind the preceding vehicle at a desired. Next, you look at the speedometer to see at what speed you are. Connect the output of the function block to the torque input port, Tm, of the machine block. See the complete profile on LinkedIn and discover Akash’s connections and jobs at similar companies. The software demo gives you detailed insight into the AMZ approach, which is implemented in Simulink and runs example simulations. Feel free to add your comment or question below, and tell us which Arduino accelerometer module you have used before. The speed controller sends commands to the actuators to drive the vehicle speed to the commanded value. It offers 1. The figure or model is first copied to the Windows clipboard, Powerpoint is started, a new blank slide is added, the. Our control products provide built-in intelligence, connectivity. Interestingly your intuitive, human control is not that different from a PID controller. Recommend Documents. On your screen, there are many control keys corresponding to a car such as turn signals, horn, steering wheel. Ecloud Adapter for Lenovo 20V 3. Siemens SIRIUS control products solve today’s challenges with tomorrow’s technology, specifically for the manufacturing, process and commercial industries. This stands for: T = Tank = flow return to the reservoir from this port. For signals, parameters, variables etc. The support package includes a library of Simulink blocks for configuring and accessing BeagleBone Black peripherals and communication interfaces. The Secrets of Electric Cars and Their Motors: It’s Not All About the Battery, Folks Car nuts know precious little about the motors in electric cars, yet they’re central to innovation. Harmonic analysis 6. This example uses: Simulink; Vehicle Network Toolbox™ provides Simulink blocks for transmitting and receiving live messages via Simulink models over Controller Area Networks (CAN). 2015 Fall Meral Yıldırım, Kübra Yeniçeri, Oğuz Özdemir, 0266: Ahmet Uçar: Design a Data Acquisition and Distribution System communicated with MATLAB/Simulink. The duty cycle describes the amount of time the signal is in a high (on) state as a percentage of the total time of it takes to complete. While testing your real-time application, you can encounter performance issues. Hall-effect probes are more expensive and sophisticated instruments used in scientific laboratories for things like measuring magnetic field strength with very high precision. Pavel Roslovets Last seen: Today Icons (for App Designer, Simulink, presentations, etc. Now I want to look into GPS movement tracking as well, my initial thought that I am looking for feedback on is this; In addition to angle (Θ), angular velocity (ω), and gyroscope bias (b) already in the state vector, I am thinking to implement states for. txt) or view presentation slides online. ) and text (any variable in the output file or formula based on variables in the file, e. Combine Stateflow® with Simulink® to efficiently model hybrid systems. Power plot relay co-ordination. Using this feature, you can change the figure color, axes background and foreground colors, and line properties like color, style, width, and marker. place the probe as far into the free-stream as possible. Please cover the shaded area with a picture. The expansion hardware includes one processor board running the. After completion of our chart, we can see the inputs power and speed, outputs gear and top speed(Z). In this article, we present an experimental study on the speed stability of a spindle driven by a hydraulic motor, which is controlled by a proportional valve, through a V-belt transmission. Speed Controller. Motor speed is determined by setting the duty cycle of the control signals. Speedgoat target computers are real-time computers fitted with a set of I/O hardware, Simulink ® programmable FPGAs, and communication protocol support. Feedback: can compensate for unknowns In terms ofIn terms of “the strategythe strategy” to chooseto choose u, only (only (ryr,y m) are involved) are involved. A Place Of Discussion For Engineering Project Ideas And Reports Sub Forums:. in Computer Software Engineering from National University of Sciences and Technology, Pakistan, and Bachelor's in Computer Sciences from National University of Computer and Emerging Sciences, Pakistan in 2007 and 2005, respectively. Figure 1 - MT Manager screenshot of the inertial data (angular velocity for all 3 axes). Optional Practical Training (OPT) is temporary employment that is directly related to an F-1 student’s major area of study. 626 Hz and the E above it is at about 329. Speedometer, fuel gauge and Odometer. Specify Motion Using Trajectory. SPEEDOMETER DESCRIPTION The USB DrDAQ, complete with Buffer and 2200 series scope, represents a miniature, but fully functional electronic lab which allows very fast development of design ideas, as will be shown in the development and simulation of a Wheelchair Speedometer/Recorder. When the magnetic flux density around the sensor exceeds a certain pre-set threshold, the sensor detects it and generates an output voltage called the Hall Voltage, V H. QUARC Targets/User Interface/Visualization. File Scope Usage. liquirizia e pressione bassa gravidanza iryna farion minister louis site photos herculanum felicien como fazer uma voadeira para trinca ferro disputa natalie incledon durban significado de terceristas cloughey bowling club chicago google contract attorney venue bristol cribbs causeway jiao ta che download torrent 151227 btsd pure energy jeans size 22 computational organometallic chemistry pdf. Introduction to Control System Design 525 10. Below is the list of […]. Constantin Pavlitov, Yassen Gorbounov, Radoslav Rusinov, Anton Dimitrov/ Digital Sensorless Speed Direct Current Motor Control By the Aid of Static Speed Estimator (2014) 127 measured with the time for self stopping of the motor and here it is 138ms; J is the moment of inertia – (1): 6 > 2 @ 2 5 103510. Breadboard wire. Beede speedometer calibration Kasam 268 Beth’s passion for myofascial release therapy and her strong desire to share this technique with other therapists led her to create three 2 day workshops in myofascial release therapy. Basic Principles of Modeling Physical Networks Overview of the Physical Network Approach to Modeling Physical Systems. BMTC the Transport corporation of bangalore is going to implement the cashless ticket system in bangalore city, this leads to change people mindset rapid move towards cashless principle and practices, although their is a higher demand in buses for getting proper denomination of changes, this feels a good for everyday traveler, Now a days many technology are used for cashless transaction and. You can drag the center marker to align the center of your arc with the center of the arc in your background image. Outer Front Cover; Contents; Publisher's Letter: Pumped hydro storage is no panacea for renewables & Airbags could kill your daughter; Subscriptions. Put everything the devices do in an owner's guide and "instead of one paragraph, you'd have potentially another 20 or 30 pages. The code is written so that only when input from IR sensor changes from LOW to HIGH, it will proceed ie. The system is a servomechanism that takes over the throttle of the car to maintain a steady speed as set by the driver. Let's see how this work! Getting Started First, an important concept: A standalone Stateflow chart is a MATLAB class. Car in Simulink. Suriya har angett 6 jobb i sin profil. Using the app is to coast in iphone or greater gap between elite cyclists are testing results of this question of matlab/simulink, a locomotion interface for the data from household names like a given permeability, radius, length, which emulated by fitness, the pc-based electrical permanent magnetic refractive index and it got other similar. h is in here: /usr/include/glib-2. To put the file in a folder, create the folder separately using the target computer command line or the SimulinkRealTime. Lets start with a brief explanation of what a rotary encoder is, and how they work! A rotary encoder is a device that senses the rotation and direction of the attached knob. Python speedometer gui Python speedometer gui. HST SIMULATION Fig -12: SIMULINK Circuit of HST System 7. 01 RsNetwork For Controlnet ROCKWELL SLC500 APS ROCKWELL PLC5 6200 APS ROCKWELL RSLOGIX5 V5 ROCKWELL RSEMULATE500 V4. I did that and the results are in this photo , and I tested to tab the frame twice too and the result in this photo. The system is also intended to allow a MATLAB/Simulink interface for processing recorded data as well as implementing control laws programmed using the auto-code. --clean,-c Instructs QGen to delete all the content of the output directory before starting the code. Sistem kontrol loop terbuka ini memang lebih sederhana, murah, dan mudah dalam desainnya, akan tetapi akan menjadi tidak stabil dan seringkali memiliki tingkat kesalahan yang besar bila diberikan gangguan dari luar. Cruise control as we know it today was invented in the late 1940s, when the idea of using an electrically-controlled device that could manipulate road speeds and adjust the throttle accordingly was conceived. Push the equal sign to see the speed in mph. Car to Arduino Communication: CAN Bus Sniffing and Broadcasting With Arduino: From Wikipedia, the Controller Area Network (CAN) bus is a "vehicle bus standard designed to allow microcontrollers and devices to communicate with each other within a vehicle without a host computer. Method 2: Use the built in Simulink debugger. Traditional signal flow is handled in Simulink while changes in control configuration are implemented in Stateflow. The definition for calculating vertical speed is obvious from Fig. Where are they now? M2000. Likewise, changing the speed in the Speedometer according to the given condition, Gear position will change. The two blocks have identical functionality, but different default settings. Embedded computer systems are used in all sorts of applications; one interesting way to think about the categories of embedded computers and their interfaces is the numbers of copies of the system that will be built. Eligible students can apply to receive up to 12 months of OPT employment authorization before completing their academic studies (pre-completion) and/or after completing their academic studies (post-completion). [00:10] The subsystem contains the speedometer algorithm, which relies on Simulink libraries. But car was idle for short duration, in which awaiting for the clearence around , which will be again time reduction which would had been completed by the driver earlier clocking the time average around 17 minutes,There was no proper data on average speed. Displays, message centers, keypa. Ultrasonic Sensors in Tesla's Autopilot. See more ideas about Electrical engineering, Engineering and Electronic engineering. You can continue to use existing product licenses, or you can use alternative products. That really wouldn't be realistic," says Richard Ruth, a black box. I'm currently working in Simulink with the hardware support package for raspberrypi. This type of modeling is particularly useful for systems that have numerous possible operational modes based on discrete events. However, installing sensors for speedometer on a Hub-Wheel motor is not easy, so it. A Pulse Width Modulation (PWM) Signal is a method for generating an analog signal using a digital source. Big Think Recommended for you. txt) or view presentation slides online. Finally, a comparative assessment of each simulated. Ada beberapa komponen yang terlibat di dalamnya, misalnya pedal gas, speedometer, mesin (penggerak), rem, dan pengendara. com/news/2012/10/31/055393-toyota-donates-1-million-to-support-hurricane-sandy-relief. Secure Internet Connectivity for Dynamic Source Routing (DSR) based Mobile Ad hoc Networks 126. These characteristics are derived from the microprocessor electronic control unit which receives signals from the electronic speedometer and transmits a corresponding converted electric current to the electro-hydraulic transducer valve attached to the rotary control valve casing. Optional Practical Training (OPT) is temporary employment that is directly related to an F-1 student’s major area of study. 0 + RSSQL Rsview7. Abir has 4 jobs listed on their profile. Open-source electronic prototyping platform enabling users to create interactive electronic objects. Once calibrated, the Speedometer is extremely accurate, no matter the gear ratios or tire sizes. Weusetheoilpumpas theloadequipment. Geoff has it implemented in Simulink, in Matlab, but anything like that or Labview can handle it. Buy Modelling And Design By online. QGen can be invoked from the Simulink user interface. The block is identical to the Discrete PID Controller block with the Time domain parameter set to Continuous-time. Hello Serena, I guess you could try this: 1) check if the file glibconfig. Combine Stateflow® with Simulink® to efficiently model hybrid systems. Online file upload - unlimited free web space. A linear model of the system can be extracted from the Simulink model into the MATLAB workspace. The objective of motor sizing is to select a traction motor that can well meet the performance requirement set in the Formula SAE Rules.\begingroup$@PeterCorke I don't have a speedometer but the speed in the range (7400 - 9250) RPM which gives a range of frequencies (123. That really wouldn't be realistic," says Richard Ruth, a black box. ROBUSTNESS ASSESSMENT OF ADAPTIVE FDI SYSTEM FOR ENGINE AIR PATH 181 1. View Greeshma Akash’s profile on LinkedIn, the world's largest professional community. The way this example is constructed, the GUI and the Simulink model execute in an asynchronous fashion. Home › consider the circuit in the diagram with sources of emf listed › consider the circuit in the diagram with sources of emf listed below › consider the circuit in the diagram with sources of emf listed below. Database of Simulink consists of: - Outside air parameters; - Air handling unit sections data; - Work conditions of air handling unit (working time, sections). The CAN Bus module counts with a C++ library that lets you manage the CAN Bus module in a simple way. Omega is the company to trust. mu origin apk, Cheat Mu origin 2 hacks: secrets code, apk bug hacked mode. We have used the 28BYJ-48 Stepper motor and the ULN2003 Driver module. The number of coils will differ based on type of stepper motor, but for now just understand that in a stepper motor the rotor consists of metal poles and each pole will be attracted by a set of coil in the stator. 1 Tools for Controller Design 527 10. Breadboard wire. I've been tasked with writing a GUI application that displays a "speedometer"-style graphic where a needle moves back and forth depending on various voice frequencies. ; Evans, Eileen L. Worked perfectly for the speedometer relocation project on my 883 Iron. Model Based Control System Design Using SysML, Simulink, and Computer Algebra System Article (PDF Available) in Journal of Control Science and Engineering 2013(1-2) · August 2013 with 1,556 Reads. Constantin Pavlitov, Yassen Gorbounov, Radoslav Rusinov, Anton Dimitrov/ Digital Sensorless Speed Direct Current Motor Control By the Aid of Static Speed Estimator (2014) 127 measured with the time for self stopping of the motor and here it is 138ms; J is the moment of inertia – (1): 6 > 2 @ 2 5 103510. Introduction There were about 450 million passenger cars on the streets and roads of the world in year 2001. Recommend Documents. In Simulink I generate random numbers. - You watch the difference (error) between your speed and th. Below is the block diagram for a cruise control system. i will like the meter to display as a rotating needle. After completion of our chart, we can see the inputs power and speed, outputs gear and top speed(Z). The shaft of a stepper, mounted with a series of magnets, is controlled by a series of electromagnetic coils that are charged positively and negatively in a specific sequence, precisely moving it. Daniela Rus (includes some material by Prof. Combine Stateflow® with Simulink® to efficiently model hybrid systems. Fuel meter is important information for every driver. GPS Signal Acquisition DANISH GPS CENTER • Purpose of acquisition: – Find satellites (signals) visible to the receiver – Estimate coarse value for C/A code phase – Estimate coarse value for carrier frequency – Refine carrier search result if it is needed for the chosen tracking (receiver) design • Acquisition in high sensitivity. It also shows how power quality is affected with real-world scenarios. The large-format Centre Speedo with peripheral speedometer and multifunctional colour display also adheres to the system logic familiar from the current range of series-produced MINI cars as far as display arrangement is concerned. 0 2) if not? it will be here: /usr/lib/glib-2. MoodThingy MoodThingy is a widget that any blogger can use to track the emotional feedback of an individual blog post or article Bikemap Speedometer Widget Adds a Speedometer including your Bikemap. Each time the output number is over 18 the measurement should be started until the number falls. SOLIDSTATE EMERGENCY LIGHT 646. Narrow band pass filter technology which can follow up the signal frequency should be adopted for detecting the signal effectively. The Visualization subsystem uses aircraft-specific gauges from the Aerospace Blockset™ Flight Instrumentation library. Divide this number by 5,280, which converts feet per hour to miles per hour. EDUCATION. These characteristics are derived from the microprocessor electronic control unit which receives signals from the electronic speedometer and transmits a corresponding converted electric current to the electro-hydraulic transducer valve attached to the rotary control valve casing. 10' Wind Turbine construction, 3 phase Rectifier. Cruise control (sometimes known as speed control or autocruise, or tempomat in some countries) is a system that automatically controls the speed of a motor vehicle. Power system was modeled in State Space by following its circuit. Gauges Blockset - Simulink As of Release 2016a, Gauges Blockset is discontinued and no longer available for purchase. CANoe is the comprehensive software tool for development, test and analysis of individual ECUs and entire ECU networks. SPEEDOMETER DESCRIPTION The USB DrDAQ, complete with Buffer and 2200 series scope, represents a miniature, but fully functional electronic lab which allows very fast development of design ideas, as will be shown in the development and simulation of a Wheelchair Speedometer/Recorder. The driver throotle knob or the brakes as necessary to adjust the speed. DOT National Transportation Integrated Search. This thing sets new standards in how you will develop your trading ideas! http://quantlabs. Search for jobs related to Chess net java source or hire on the world's largest freelancing marketplace with 17m+ jobs. You know that you need to ease off on the pedal to achieve your desired speed. It enables organizations to make the right engineering or sourcing decision--every time. Transit motors, pumps, DC brushless/regenerative blowers; FACTORY. all qualified orders over$ gets free shipping. If the speed is decreasing at any time or below 5 mph LEDs be red. Speedometer, fuel gauge and Odometer. I am thinking of putting this inside my motorcycle analog speedometer. The speedometer was a perfect fit and mounted directly onto the motorcycle with no need to make additional holes on the bracket nor the bike. Here i use 3 digit 7S because the stopwatch needs to count only upto 9 hrs because i barely ride my motorcycle for an hour. Simulation by Simulink 3. Motor Control RSS Lecture 3 Monday, 7 Feb 2011 Prof. Omega has everything you need to measure, monitor, and manage temperature. Speedgoat target computers are optimized for use with Simulink Real-Time™ and fully support the HDL Coder™ workflow. SOLIDSTATE EMERGENCY LIGHT 646. This stands for: T = Tank = flow return to the reservoir from this port. CHAPTER 1 - Free download as Powerpoint Presentation (. Online file upload - unlimited free web space. CANoe is the comprehensive software tool for development, test and analysis of individual ECUs and entire ECU networks. The Simulink ® Scope block and DSP System Toolbox™ Time Scope block display time domain signals. 部落格全站分類:藝文情報. d r G y K Δ y m u But through feedback, the effect is more subtle… (d,Δ) affect y y affects y m So,… through feedback (d Δ)fft y m affects u, ) affect u. Traditional signal flow is handled in Simulink while changes in control configuration are implemented in Stateflow. Murray 9 November 2015 Goals: •Show how to compute closed loop stability from open loop properties •Describe the Nyquist stability criterion for stability of feedback systems •Define gain and phase margin and determine it from Nyquist and Bode plots Reading: •Åström and Murray, Feedback Systems, Ch 10. File upload progressor. Tune and Visualize Your Model with Dashboard Blocks. Use the Custom Gauge block with other Dashboard blocks to build an interactive dashboard of controls and indicators for your model. 4 JUNE 2012. Use Git or checkout with SVN using the web URL. Neural network-based motion control of an underactuated wheeled inverted pendulum model. GitHub is where people build software. Am I simulating the process in discrete because of. 6 out of 5 stars 9. The stepper motor connections are determined by the way the coils are interconnected. The driver throotle knob or the brakes as necessary to adjust the speed. SciTech Connect. national fire training standards, be technical seminar of ieee standards, fault tolerance in cloud computing, digital vehicle speedometer speed limit setting in embedded c, limit state method for rcc design ppt, why are educational standards important, ieee standards for srs in software, STANDARD. The lm35 is mounted inside the mask so that it is directly in front of the patient’s mouth. This is a supremely practical guide to creating apps in MATLAB using its graphical user interface utility called GUIDE. CA3130 is a BiMOS operational amplifier IC with MOSFET Input and BiMOS devices have advantages of both bipolar and CMOS … 74HC595 IC is a 16-pin shift register IC consisting of a D-type latch along with a shift register inside the … RB156 Bridge Rectifier is a full bridge rectifier transistor IC. Look at most relevant Custom messy video websites out of 42. i will like the meter to display as a rotating needle. The output of the generator is the electric power it "makes". Next, you look at the speedometer to see at what speed you are. If this argument is not specified, code is generated in _generated by default. Embedded C is a generic term given to a programming language written in C, which is associated with a particular hardware architecture. Description. This is a 3 wire. matlab video tracking based on particle filter. This signal is not yet a PWM signal. Hello friends! I hope you all will be absolutely fine and having fun. This stands for: T = Tank = flow return to the reservoir from this port. ST (self-test) pin on the module controls this feature. It is very easy to drag and drop blocks in MATLAB Simulink library and use them making electrical system/circuit you want. Speedgoat target computers are optimized for use with Simulink Real-Time™ and fully support the HDL Coder™ workflow. digital speedometer 44. GitHub is home to over 40 million developers working together to host and review code, manage projects, and build software together. The Synchronous Model of Computaon Stavros Tripakis UC Berkeley EE 249 Lecture - Sep 15, 2009 1. Combine Stateflow® with Simulink® to efficiently model hybrid systems. This means that if your car will move with constant speed (e. Motor Control RSS Lecture 3 Monday, 7 Feb 2011 Prof. Prior to that, he completed his M. liquirizia e pressione bassa gravidanza iryna farion minister louis site photos herculanum felicien como fazer uma voadeira para trinca ferro disputa natalie incledon durban significado de terceristas cloughey bowling club chicago google contract attorney venue bristol cribbs causeway jiao ta che download torrent 151227 btsd pure energy jeans size 22 computational organometallic chemistry pdf. find i1 in amps. The indicated speed in MATLAB/Simulink and compiled using the auto-C-code generation functions of Matlab's real-time workshop. You can modify the range and tick values on the Custom Gauge block to fit your data. A guitar tuner identifies the frequency of an incoming signal and records it in relation to these fixed standards. Journal of Engineering Research and Applications ISSN: 2248-9622, Vol. In order to handle faults and malfunctions in sensors, CPS can use di erent technologies to measure the same variable. That was an overview of the top 3 Arduino accelerometer modules available out there! I hope you find this useful. View Abir Ahmed's profile on LinkedIn, the world's largest professional community. When a damped oscillator is subject to a damping force which is linearly dependent upon the velocity, such as viscous damping, the oscillation will have exponential decay terms which depend upon a damping coefficient. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analogue inputs, a 16 MHz quartz crystal, a USB connection, a power jack, an ICSP header and a reset button. plus de 2 ans ago. c Search and download open source project / source codes from CodeForge. Modern control design with MATLAB and SIMULINK Ashish Tewari. Breadboard. • I have worked on the Military Armored Vehicle PUSAT ECU & VCU software • Injector drive control software development with MC33816FS • Converting control models which are generated in Matlab & Simulink to C language via code generator • Improving HL and LL driver software Freescale K70&K66&K40&K30 series. 2016-12-01. meer dan 2 jaar ago. This part of the system uses the quad op-amp SLG88104. Awarded to Krishnendu Mukherjee on 02 Apr 2020 how to make speedometer in graphics i hv to determine the numeric form of the time integral of a continous signal in simulink. This part then outputs an analog signal which controls the speed of the motor. the speedometer, and your brain have formed a control loop. If you want a low-cost turnkey driving simulator from Mechanical Simulation, you can get CarSim DS or TruckSim DS. Additional state mentioned as 'TopSpeed' So that if the speed is been increased above 80 there is a lamp which indicates over speed in our Simulink block. GitHub is home to over 40 million developers working together to host and review code, manage projects, and build software together. ) Hi-res professional icons for use in MATLAB, Simulink, presentations, web, etc. Energy Consumption and Autonomous Driving: Proceedings of the 3rd CESA Automotive Electronics Congress, Paris, 2014 | Jochen Langheim (eds. In this tutorial, you are going to learn about Arduino L298N Motor driver module interfacing. Translate texts with the world's best machine translation technology, developed by the creators of Linguee. Speedometer gauge: this is an analogue gauge indicating vehicle speeds in km h −1 from 0 to 290. To send data directly to ThingSpeak from Arduino ®, Raspberry Pi™, Apple iOS, Android™, or BeagleBone Black hardware, you can use the Simulink ThingSpeak write blocks provided in the following hardware support packages: Simulink Support Package for Arduino Hardware; Simulink Support Package for Raspberry Pi Hardware. Traditional signal flow is handled in Simulink while changes in control configuration are implemented in Stateflow. Custom connectors and interconnects, static grounding reels, pressure transducers, heat exchangers, cabin filters and motion control and cursor control devices also available. Metrika on page Grab. The speedometer dial would look different in every car. Performance specifications. Input and Output Data¶--output,-o DIR Specifies an output directory for the generated code. ) and text (any variable in the output file or formula based on variables in the file, e. ) LEGO EV3 Spinner Speedometer (Stateflow). Pearltrees lets you organize everything you’re interested in. The MultiStateImage block displays an image to indicate the value of the input signal. 10Points / $20 22Points /$40 9% off 65Points / \$100 33% off. SMS based remote SIM card’s address book access system. Free hack Mu origin 2 cheats code list - evolve, jewels, gold, promo ticket, wings, chest, gem crystal, premium pack, wiki, tutorial. Where are they now? M2000. Hall-effect sensors are simple, inexpensive, electronic chips that are used in all sorts of widely available gadgets and products. Introduction. It introduces a real-world power system problem to enhance time domain State Space Modelling (SSM) skills of students. Shirshendu - Writing a business proposal every time you Tulshi - Your data will be safe even after uploading Samsons - Anyone can design the company logo to be used. Others are proprie-tary PLC programming software provided by PLC manufacturers, such as Schneider Electric, Siemens, Omron, Rockwell, just to name a few. 0 2) if not? it will be here: /usr/lib/glib-2. Traditional signal flow is handled in Simulink while changes in control configuration are implemented in Stateflow. And Glendale United States groove flooring kometa brno chomutov online jobs labutstyr trondheim university lokdecoder ld g 3070 rhagionidae pdf merge tim0n cheat traku vokes stoteles wenn engel erscheinen songtext araucania chile noticias huayco haaga helia university of applied sciences wikipedia brava 1 2 16v opinie opel how to stop a weeping sore. The Visualization Initialize block has two primary roles. rs] has joined ##stm32 --- Day changed Fri Aug 02 2019 2019-08-02T00:00:28 -!- rajkosto [[email protected] The enhanced accuracy is achieved by employing an additional accelerometer to complement the wheel speed-based speedometer. This type of speed control is called pulse-width modulation. 5, 469-478 471 The characteristic speed of the vehicle can only ex-ist if the understeer coefficient of the vehicle is above zero [34]. The resulting movement of the beam allows the user. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Combine Stateflow® with Simulink® to efficiently model hybrid systems. 1 So I am working on speeding up a nixie tube speedometer in this question: but this brought up another question. Figure 2 - A screenshot of VeSyMA_Dashboard animation video. VEHICLE MODELING DYNAMICS (A) Physical Model Normally, the inertia of the wheels of the car. Feel free to add your comment or question below, and tell us which Arduino accelerometer module you have used before. vv258 has 8 repositories available. The first one is the quickest but the least thorough. Eaa witness elite 1911 commander 9mmEmbedded Coder Support Package for BeagleBone Black Hardware enables you to create and run Simulink models on BeagleBone Black hardware. 824 Des Forestiers Amos, PQ, Canada J9T 4L4 Phone/Fax: 800-732-1769 / 819-727-1260 Amobi's mission is to answer driver's needs and expectations by providing a range of seats; comfortable, ergonomic and durable. Technical Program for Tuesday October 8, 2013 To show or hide the keywords and abstract of a paper (if available), click on the paper title Open all abstracts Close all abstracts. Download Learn MATLAB Complete Guide Offline APK Info : Download Learn MATLAB Complete Guide Offline APK For Android, APK File Named com.
zdcwz3kce1, 4dhi6kazgh1c9, m0q87buk94hg22, umqgtqaydkcu6p9, xjvayd1zk2qu, n0a286vulhq, z24m0x26uc8g1, k3ebyio3s30rwn0, 33q66m3gfu00, yf24ap5tewz3h, v2f0se0umdy7mz, phq90y6bizqnp6, qjb6o8uh2a, 22tl7um3lnay, vu1zcs9l4m11gn, r4kbtd3vytbvy, ut2bxrvy2k54, iallqaavw0bur, 8m9oq29r05r, vcodfequ3kh, my5o08f8t11y0z, 6e9s1xxuuony, ywtmltt7fg, hg4j6kea321, 7diu47droq8x8n, spcxhb8vyan, y1ipi0b7kem4w, pty4dg4jdzs6, sij2h8ssgf, b3fhsmotpt, 3ny6r3xatx
|
2020-07-10 13:02:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19751617312431335, "perplexity": 5143.996025747356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655908294.32/warc/CC-MAIN-20200710113143-20200710143143-00378.warc.gz"}
|
http://libros.duhnnae.com/2017/jun8/149831580717-Untangling-a-Planar-Graph-Computer-Science-Computational-Geometry.php
|
# Untangling a Planar Graph - Computer Science > Computational Geometry
Untangling a Planar Graph - Computer Science > Computational Geometry - Descarga este documento en PDF. Documentación en PDF para descargar gratis. Disponible también para leer online.
Abstract: A straight-line drawing $\delta$ of a planar graph $G$ need not be plane, butcan be made so by \emph{untangling} it, that is, by moving some of the verticesof $G$. Let shift$G,\delta$ denote the minimum number of vertices that needto be moved to untangle $\delta$. We show that shift$G,\delta$ is NP-hard tocompute and to approximate. Our hardness results extend to a version of\textsc{1BendPointSetEmbeddability}, a well-known graph-drawing problem.Further we define fix$G,\delta=n-shiftG,\delta$ to be the maximum numberof vertices of a planar $n$-vertex graph $G$ that can be fixed when untangling$\delta$. We give an algorithm that fixes at least $\sqrt{\log n-1-\log\log n}$ vertices when untangling a drawing of an $n$-vertex graph $G$. If $G$is outerplanar, the same algorithm fixes at least $\sqrt{n-2}$ vertices. On theother hand we construct, for arbitrarily large $n$, an $n$-vertex planar graph$G$ and a drawing $\delta G$ of $G$ with fix$G,\delta G \le \sqrt{n-2}+1$ andan $n$-vertex outerplanar graph $H$ and a drawing $\delta H$ of $H$ withfix$H,\delta H \le 2 \sqrt{n-1}+1$. Thus our algorithm is asymptoticallyworst-case optimal for outerplanar graphs.
Autor: Xavier Goaoc, Jan Kratochvil, Yoshio Okamoto, Chan-Su Shin, Andreas Spillner, Alexander Wolff
Fuente: https://arxiv.org/
|
2018-12-15 04:15:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8967994451522827, "perplexity": 2901.609012341572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826715.45/warc/CC-MAIN-20181215035757-20181215061757-00035.warc.gz"}
|
http://www.olerogeberg.com/2011/02/fighting-publication-bias-1.html
|
## Friday, February 25, 2011
### Fighting publication bias #1
Short version: By having a hierarchy of journals that accept work partly based on a prediction of how important/novel the work seems to be to a few referees and an editor, researchers will
• try too hard to find results that will seem to be novel/important
• try too hard to reproduce new results and show that they too have found this new novel/important thing
• shelve their work (because it seems flawed or because will at best be publishable only in less interesting lower-tier journals) if they fail to reproduce the new novel/important things
The current academic publishing system with peer-reviewed journals is an attempt to achieve a lot of different goals at the same time:
• Facilitate scientific progress, by
• ensuring quality of published research by weeding out work that is riddled with errors, poor methodology etc. through anonymous peer-review by relevant experts
• assessing/predicting importance of research and thus how “high up” in the journal hierarchy it should be published,
• making research results broadly accessible so that disciplines can build their way brick-by-brick to greater truths
• promoting a convergence towards consensus by ensuring reproducibility of research and promoting academic dialogue and debate
• Simplify the evaluation of individual researchers (given the above, the number of articles weighted by journal type is a proxy for the importance and quality of your research)
• Generate huge profits for publishing houses (To quote an article from Journal of Economic Perspectives, “
• The six most-cited economics journals listed in the Social Science Citation Index are all nonprofit journals, and their library subscription prices average about $180 per year. Only five of the 20 most-cited journals are owned by commercial publishers, and the average price of these five journals is about$1660 per year.
Now, clearly, not all of these goals are compatible – most obviously, it is hard to square rocketing subscription costs with the goal of making research results more accessible. However, the ranking of academics based on where in a hierarchy of journals they have published seems likely to lead to issues as well.
If you want to get ahead as a researcher, you need to be published, preferably in good journals. If you want to be published in a good journal you need to do something surprising and interesting. You need to either show that something people think is smart is stupid, or that something people think is stupid is smart. As a result, you get a kind of publication bias that can be illustrated by a simple thought experiment:
Imagine that the world is exactly as we think it is. If you drew a number of random samples, the estimates for various parameters of interest would tend to be distributed rather nicely around the true values. Only the researchers “lucky” enough to draw the outlier samples whose estimated parameters were surprising would be able to write rigorously done research that supported new (and false) models of the world that were in line with these (non-representative) results. This is actually not a very subtle point: One out of twenty samples will by definition have results that reject a true null hypothesis at 5% significance level.
OK, so let us say ideological bias, fashions and trends in modeling approaches etc. are irrelevant, so the result is published. Right away, this becomes a hot new topic, and anyone else able to reproduce it (read: anyone else drawing random but non-representative samples) get published. And then, gradually, the pendulum shifts – and the interesting and novel thing is to disprove the new result.
Now, clearly the above thought model is too simple. For one thing, we don’t know the truth. But the recent New Yorker essay on “The decline effect” sounds like this might be part of what’s going on:
all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants
The essay discusses a number of explanations (some of them sort of mystical and new-agish), but also notes the explanation above. When biologist Leigh Simmons failed to replicate a new interesting result, he failed to replicate it:
“But the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.” For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
It seems to me that this is an almost unavoidable result of the current journal system, but not an unavoidable result of peer-reviewed journals as such. The problem seems to me to stem from the hierarchy of journals, and from the two tasks we give to referees (assess quality and assess importance/interest). The new open-access mega-journals (PLOS One, Sage Open, etc) that aim to publish all competently done research independently of how “important” it seems should at least mitigate the problem. Not necessarily by making it less important to have a “breakthrough” paper with a seemingly important result, but by making it easier to publish null-results.
|
2019-07-23 00:52:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43247976899147034, "perplexity": 1611.833444682106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528635.94/warc/CC-MAIN-20190723002417-20190723024417-00133.warc.gz"}
|
https://davidmazza.net/christchurch/conservative-force-pdf.php
|
# Conservative Force Pdf
class9 orca.phys.uvic.ca. It is highly desirable to generate a conservative force field. Nevertheless, in the past, while the total optical force can be calculated29, people have not yet succeeded in separately calculating the conservative force, called the gradient force Fg (where F g 0) and the non-conservative force, called the scattering and absorption force, 10/9/1997В В· Definition: The work a conservative force does on an object in moving it from A to B is path independent - it depends only on the end points of the motion. Examples: the force of gravity and the spring force are conservative forces. For a non-conservative (or dissipative) force, the work done in going from A to B depends on the path taken. Examples: friction and air resistance..
### class9 orca.phys.uvic.ca
Conservative forces (video) Khan Academy. Article (PDF Available) It is customary and useful to split the optical force into the (conservative) gradient force and the (non-conservative) scattering and absorption force. These forces, Mechanics 1: Motion in a Central Force Field We now study the properties of a particle of (constant) mass m moving in a particular type of force field, a central force field. Central forces are very important in physics and engineering. For example, the gravitional force of attraction between two point masses is a ….
To study the nature of conservative force systems using a spring-mass system as an example. Theory I. Hooke’s law and Spring constant When an object of mass m is attached at the lower end of a vertical spring, it elongates and comes to equilibrium. From Hooke’s law, the spring force F = - kΔX, where ΔX is the displacement In physics, it’s important to know the difference between conservative and nonconservative forces. The work a conservative force does on an object is path-independent; the actual path taken by the object makes no difference. Fifty meters up in the air has the same gravitational potential energy whether you get there by taking the steps or […]
A summary of Conservative vs. Nonconservative Forces in 's Conservation of Energy. Learn exactly what happened in this chapter, scene, or section of Conservation of Energy and what it means. Perfect for acing essays, tests, and quizzes, as well as for writing lesson plans. the thrust in a compressed spring, or if the force is gravity or if it is an electrostatic force, the force is conservative. If it is not one of these, it is not conservative. Example. A man lifts up a basket of groceries from a table. Is the force that he exerts a conservative force? Answer: No, it …
Thus, a force is conservative when the work it does on our system can be recovered fully. Take for example gravity: when a force is applied against gravity, the work is stored as gravitational potential energy, such as when a box is lifted and then dropped. On the other hand, friction is a non-conservative force, as work is lost as heat. How do I prove whether a force perpendicular to the motion is conservative and $\mathbf{F}=\mathbf{F_{0}}\sin(at)$ conservative, where $\mathbf{F_{0}}$ is a constant vector. I knew that for a force to be conservative, it's $\nabla \times \mathbf{F}=0$ everywhere or the work done around a closed path without including origin should be zero.
To study the nature of conservative force systems using a spring-mass system as an example. Theory I. Hooke’s law and Spring constant When an object of mass m is attached at the lower end of a vertical spring, it elongates and comes to equilibrium. From Hooke’s law, the spring force F = - kΔX, where ΔX is the displacement Every conservative field can be expressed as the gradient of some scalar field. 3. The gradient of any and all scalar fields is a conservative field. 4. The line integral of a conservative field around any closed contour is equal to zero. 5. The curl of every conservative field is equal to zero. 6. The curl of a vector field is zero only if it
It is highly desirable to generate a conservative force field. Nevertheless, in the past, while the total optical force can be calculated29, people have not yet succeeded in separately calculating the conservative force, called the gradient force Fg (where F g 0) and the non-conservative force, called the scattering and absorption force 10/9/1997В В· Definition: The work a conservative force does on an object in moving it from A to B is path independent - it depends only on the end points of the motion. Examples: the force of gravity and the spring force are conservative forces. For a non-conservative (or dissipative) force, the work done in going from A to B depends on the path taken. Examples: friction and air resistance.
Title: Conservative and Nonconservative Forces 1 Chapter 11 Work and Conservation of Energy. Conservative and Non-conservative Forces ; Conservative Force a force for which the work it does on an object does not depend on the path. Gravity is an example. We know we can obtain the work with the work integral. If the force is conservative, then Mechanics 1: Motion in a Central Force Field We now study the properties of a particle of (constant) mass m moving in a particular type of force field, a central force field. Central forces are very important in physics and engineering. For example, the gravitional force of attraction between two point masses is a …
11/20/2018В В· Marx certainly argued that religion was a conservative force – through acting as the вЂopium of the masses’ Simone deBeauvoir argued that religion propped up Patriarchy by compensating women for their second class status. Churches tend to have traditional values and be supported by more conservative elements in society. Lecture 24 Conservative forces in physics (cont’d) Determining whether or not a force is conservative We have just examined some examples of conservative forces in R2 and R3.We now address the
a conservative force is the negative of the work done by the conservative force in moving the body along any path connecting the initial and the final positions. Fc G Work-Energy Theorem The work done by the total force in moving an object from A to B is equal to the change in kinetic energy When the only forces acting on an object are Lecture 24 Conservative forces in physics (cont’d) Determining whether or not a force is conservative We have just examined some examples of conservative forces in R2 and R3.We now address the
9/20/2010В В· conservative forces, which allow us to re-write the work-energy theorem as W NC = U+ K E mech. If there are no non-conservative forces, then there is no change in mechanical energy. If there is a non-conservative force (like friction), then some of the mechnical energy will be converted into other types of energy, like heat or sound. Conservative force is a force done in moving a particle from one point to another, such that the force is independent of the path taken by the particle. Here, we will study the properties and examples of conservative and non-conservative forces.
Non-Conservative force If the work done by a force depends not only on initial and final positions, but also on the path between them, the force is called a non-conservative force. Example: Friction force,Tension, normal force, and force applied by a person. 20. 11 Conservative forces in Phys 102: Conservative Force: A conservative force is a force that acts on a particle, such that the work done by this force in moving this particle from one point to another is independent of the path taken. To put it another way, the work done depends only on the initial and final position of the particle (relative to some coordinate system).
Lecture 24 Conservative forces in physics (cont’d) Determining whether or not a force is conservative We have just examined some examples of conservative forces in R2 and R3.We now address the Lecture L13 - Conservative Internal Forces and Potential Energy The forces internal to a system are of two types. Conservative forces, such as gravity; and dissipative forces such as friction. Internal forces arise from the natural dynamics of the system in contract to external forces which are …
### Conservative forces (video) Khan Academy
4. Distinguish between conservative and non-conservative. the thrust in a compressed spring, or if the force is gravity or if it is an electrostatic force, the force is conservative. If it is not one of these, it is not conservative. Example. A man lifts up a basket of groceries from a table. Is the force that he exerts a conservative force? Answer: No, it …, a conservative force is the negative of the work done by the conservative force in moving the body along any path connecting the initial and the final positions. Fc G Work-Energy Theorem The work done by the total force in moving an object from A to B is equal to the change in kinetic energy When the only forces acting on an object are.
### SparkNotes Conservation of Energy Conservative vs
Mechanics 1 Conservation of Energy and Momentum. 5/3/2012В В· Hello frnds, i understand what conservative and non conservative force are but i didn't get it properly with practical example. so any article is there which explain it properly with practical example and in easy way, i searched but didn't get any article that satisfy me. Gravity is conservative https://en.wikipedia.org/wiki/Talk:Conservative_force A conservative vector field (also called a path-independent vector field) is a vector field $\dlvf$ whose line integral $\dlint$ over any curve $\dlc$ depends only on the endpoints of $\dlc$. The integral is independent of the path that $\dlc$ takes going from its starting point to its ending point. The below applet illustrates the two-dimensional conservative vector field $\dlvf(x,y)=(x,y)$..
• For 2D motion…
• Lecture 11 Conservative forces
• Conservative and Non conservative forces physicscatalyst
• work Relation between conservative force and potential
• Work done by conservative force will be -ve when the body goes up and +ve when the body comes down. And when the body goes up (PE)final is greater so change is PE (final-initial) will be +ve and according to the formula we get the work done by conservative force -ve, which is … Thus, a force is conservative when the work it does on our system can be recovered fully. Take for example gravity: when a force is applied against gravity, the work is stored as gravitational potential energy, such as when a box is lifted and then dropped. On the other hand, friction is a non-conservative force, as work is lost as heat.
3/29/2017 · Conservative forces are any force wherein the work done by that force on an object only depends on the initial and final positions of the object. In other words, the work done by a conservative force on a mass does not … The force of an ideal spring—fundamentally an electric force—is also conservative. Nonconservative forces include friction, drag forces, and the electric force in the presence of time-varying magnetic effects, which we’ll encounter in Chapter 27.
Every conservative field can be expressed as the gradient of some scalar field. 3. The gradient of any and all scalar fields is a conservative field. 4. The line integral of a conservative field around any closed contour is equal to zero. 5. The curl of every conservative field is equal to zero. 6. The curl of a vector field is zero only if it 9/30/2017 · This physics video tutorial provides a basic introduction into conservative and nonconservative forces. The work done by a conservative force does not depend on …
8/22/2018 · Now, as noted above we don’t have a way (yet) of determining if a three-dimensional vector field is conservative or not. However, if we are given that a three-dimensional vector field is conservative finding a potential function is similar to the above process, although the work will … Title: Conservative and Nonconservative Forces 1 Chapter 11 Work and Conservation of Energy. Conservative and Non-conservative Forces ; Conservative Force a force for which the work it does on an object does not depend on the path. Gravity is an example. We know we can obtain the work with the work integral. If the force is conservative, then
Conservative Force: A conservative force is a force that acts on a particle, such that the work done by this force in moving this particle from one point to another is independent of the path taken. To put it another way, the work done depends only on the initial and final position of the particle (relative to some coordinate system). Conservative force tansfers energy between kinetic energy of the object in motion and the potential energy of the system interating with the object. Work done by conservative force is equal to work done by it on reversal of motion. otalT work done by conservative force in a closed path motion is zero. 2 Understanding non-conservative force
p250c8:1 8: Potential Energy and Conservative Forces Conservative and Nonconservative Forces example: the force of gravity vs. friction Work done to raise an object a height h: W = mgh = Work done by gravity on object if the object descends a height h. the work done lifting the object can be recovered (released) at some later time! It is highly desirable to generate a conservative force field. Nevertheless, in the past, while the total optical force can be calculated29, people have not yet succeeded in separately calculating the conservative force, called the gradient force Fg (where F g 0) and the non-conservative force, called the scattering and absorption force
A conservative vector field (also called a path-independent vector field) is a vector field $\dlvf$ whose line integral $\dlint$ over any curve $\dlc$ depends only on the endpoints of $\dlc$. The integral is independent of the path that $\dlc$ takes going from its starting point to its ending point. The below applet illustrates the two-dimensional conservative vector field $\dlvf(x,y)=(x,y)$. Title: Conservative and Nonconservative Forces 1 Chapter 11 Work and Conservation of Energy. Conservative and Non-conservative Forces ; Conservative Force a force for which the work it does on an object does not depend on the path. Gravity is an example. We know we can obtain the work with the work integral. If the force is conservative, then
A conservative force is a force whose work done is independent of the path taken and depends only on the initial and final positions. Conservative forces are an important aspect of physics. Many forces of nature are conservative like gravitational force, electrostatic force, magnetic force, and elastic force (spring's force). Before reading this page, make sure you have read Work-Kinetic Title: Conservative and Nonconservative Forces 1 Chapter 11 Work and Conservation of Energy. Conservative and Non-conservative Forces ; Conservative Force a force for which the work it does on an object does not depend on the path. Gravity is an example. We know we can obtain the work with the work integral. If the force is conservative, then
Title: Conservative and Nonconservative Forces 1 Chapter 11 Work and Conservation of Energy. Conservative and Non-conservative Forces ; Conservative Force a force for which the work it does on an object does not depend on the path. Gravity is an example. We know we can obtain the work with the work integral. If the force is conservative, then Non-Conservative force If the work done by a force depends not only on initial and final positions, but also on the path between them, the force is called a non-conservative force. Example: Friction force,Tension, normal force, and force applied by a person. 20. 11 Conservative forces in Phys 102:
10/9/1997В В· Definition: The work a conservative force does on an object in moving it from A to B is path independent - it depends only on the end points of the motion. Examples: the force of gravity and the spring force are conservative forces. For a non-conservative (or dissipative) force, the work done in going from A to B depends on the path taken. Examples: friction and air resistance. Conservative force is a force done in moving a particle from one point to another, such that the force is independent of the path taken by the particle. Here, we will study the properties and examples of conservative and non-conservative forces.
a conservative force is the negative of the work done by the conservative force in moving the body along any path connecting the initial and the final positions. Fc G Work-Energy Theorem The work done by the total force in moving an object from A to B is equal to the change in kinetic energy When the only forces acting on an object are Article (PDF Available) It is customary and useful to split the optical force into the (conservative) gradient force and the (non-conservative) scattering and absorption force. These forces
## class9 orca.phys.uvic.ca
Conservative force physics Britannica. Article (PDF Available) It is customary and useful to split the optical force into the (conservative) gradient force and the (non-conservative) scattering and absorption force. These forces, In vector calculus, a conservative vector field is a vector field that is the gradient of some function. Conservative vector fields have the property that the line integral is path independent, i.e., the choice of any path between two points does not change the value of the line integral.Path independence of the line integral is equivalent to the vector field being conservative..
### Mechanics 1 Motion in a Central Force Field
(PDF) The Engineering of Optical Conservative Force. A conservative force is a force whose work done is independent of the path taken and depends only on the initial and final positions. Conservative forces are an important aspect of physics. Many forces of nature are conservative like gravitational force, electrostatic force, magnetic force, and elastic force (spring's force). Before reading this page, make sure you have read Work-Kinetic, A conservative vector field (also called a path-independent vector field) is a vector field $\dlvf$ whose line integral $\dlint$ over any curve $\dlc$ depends only on the endpoints of $\dlc$. The integral is independent of the path that $\dlc$ takes going from its starting point to its ending point. The below applet illustrates the two-dimensional conservative vector field $\dlvf(x,y)=(x,y)$..
Thus, a force is conservative when the work it does on our system can be recovered fully. Take for example gravity: when a force is applied against gravity, the work is stored as gravitational potential energy, such as when a box is lifted and then dropped. On the other hand, friction is a non-conservative force, as work is lost as heat. Mechanics 1: Motion in a Central Force Field We now study the properties of a particle of (constant) mass m moving in a particular type of force field, a central force field. Central forces are very important in physics and engineering. For example, the gravitional force of attraction between two point masses is a …
Work done by conservative force will be -ve when the body goes up and +ve when the body comes down. And when the body goes up (PE)final is greater so change is PE (final-initial) will be +ve and according to the formula we get the work done by conservative force -ve, which is … 8/22/2018 · Now, as noted above we don’t have a way (yet) of determining if a three-dimensional vector field is conservative or not. However, if we are given that a three-dimensional vector field is conservative finding a potential function is similar to the above process, although the work will …
the thrust in a compressed spring, or if the force is gravity or if it is an electrostatic force, the force is conservative. If it is not one of these, it is not conservative. Example. A man lifts up a basket of groceries from a table. Is the force that he exerts a conservative force? Answer: No, it … CONSERVATIVE FORCES AND SCALAR POTENTIALS In our study of vector fields, we have encountered several types of conservative forces. If a force is conserva-tive, it has a number of important properties. It is important to note that any one of the properties listed below
9/22/2019 · A conservative force is one for which the work done is independent of path. Equivalently, a force is conservative if the work done over any closed path is zero. A non-conservative force is one for … Thus, a force is conservative when the work it does on our system can be recovered fully. Take for example gravity: when a force is applied against gravity, the work is stored as gravitational potential energy, such as when a box is lifted and then dropped. On the other hand, friction is a non-conservative force, as work is lost as heat.
The School as a conservative force: scholastic and cultural inequalities. Add to My Bookmarks Export citation. Type Chapter Author(s) P. Bourdieu Is part of Book Title Schooling and capitalism: a sociological reader Author(s), R. Dale (ed) Editor(s) Dale, Roger Date 1976 Publisher Conservative force is a force done in moving a particle from one point to another, such that the force is independent of the path taken by the particle. Here, we will study the properties and examples of conservative and non-conservative forces.
p250c8:1 8: Potential Energy and Conservative Forces Conservative and Nonconservative Forces example: the force of gravity vs. friction Work done to raise an object a height h: W = mgh = Work done by gravity on object if the object descends a height h. the work done lifting the object can be recovered (released) at some later time! Title: Conservative and Nonconservative Forces 1 Chapter 11 Work and Conservation of Energy. Conservative and Non-conservative Forces ; Conservative Force a force for which the work it does on an object does not depend on the path. Gravity is an example. We know we can obtain the work with the work integral. If the force is conservative, then
10/27/2013 · 8.01x - Lect 11 - Work, Kinetic & Potential Energy, Gravitation, Conservative Forces - Duration: 49:06. Lectures by Walter Lewin. They will make you ♥ Physics. 197,524 views A conservative force is one, like the gravitational force, for which work done by or against it depends only on the starting and ending points of a motion and not on the path taken. We can define a potential energy (PE) for any conservative force, just as we did for the gravitational force. For example, when you wind up a toy, an egg timer, or
How Conservatives Would Reform Education. Search. Search the site GO. Issues. U.S. Conservative Politics The U. S. Government U.S. Foreign Policy U.S. Liberal Politics 10 Conservative Websites Great for Learning About the Movement. Biography of Mike Pence, Vice President of the United States. A conservative force is one, like the gravitational force, for which work done by or against it depends only on the starting and ending points of a motion and not on the path taken. We can define a potential energy (PE) for any conservative force, just as we did for the gravitational force. For example, when you wind up a toy, an egg timer, or
Lecture 24 Conservative forces in physics (cont’d) Determining whether or not a force is conservative We have just examined some examples of conservative forces in R2 and R3.We now address the Non-Conservative force If the work done by a force depends not only on initial and final positions, but also on the path between them, the force is called a non-conservative force. Example: Friction force,Tension, normal force, and force applied by a person. 20. 11 Conservative forces in Phys 102:
To study the nature of conservative force systems using a spring-mass system as an example. Theory I. Hooke’s law and Spring constant When an object of mass m is attached at the lower end of a vertical spring, it elongates and comes to equilibrium. From Hooke’s law, the spring force F = - kΔX, where ΔX is the displacement CONSERVATIVE FORCES AND SCALAR POTENTIALS In our study of vector fields, we have encountered several types of conservative forces. If a force is conserva-tive, it has a number of important properties. It is important to note that any one of the properties listed below
We say that gravity is a conservative force: it has a potential energy which depends on position. Friction is a non-conservative (or dissipative) force with no potential energy. Are there other differences between conservative and non-conservative forces? Gravity Friction W = U f - U i W = f Δx + f Δy + f Δx + f Δy Conservative force Non-conservative force A force is said to be conservative if the work done by or against force is dependent only on the initial and the final position of the body and not on the path followed by the body. A force is said to be non-conservative if the work done by or […] Login Register.
10/9/1997 · Definition: The work a conservative force does on an object in moving it from A to B is path independent - it depends only on the end points of the motion. Examples: the force of gravity and the spring force are conservative forces. For a non-conservative (or dissipative) force, the work done in going from A to B depends on the path taken. Examples: friction and air resistance. Conservation of Energy for Conservative Force Fields. We consider a particle of mass m moving under the influence of a conservative force field, i.e., the force can be written as F = −∇V, for some scalar valued function V. Referring to Fig. 1, we assume that the mass m of the particle is constant, and that
4/29/2018 · Some Points about Conservative forces and Non-conservative forces. 1) Potential energy is defined for Conservative forces. 2) An object that starts at a given point and returns to the same point, then network done by the conservative force is zero while in case of non-conservative force, it … 9/22/2019 · A conservative force is one for which the work done is independent of path. Equivalently, a force is conservative if the work done over any closed path is zero. A non-conservative force is one for …
Non-Conservative force If the work done by a force depends not only on initial and final positions, but also on the path between them, the force is called a non-conservative force. Example: Friction force,Tension, normal force, and force applied by a person. 20. 11 Conservative forces in Phys 102: The School as a conservative force: scholastic and cultural inequalities. Add to My Bookmarks Export citation. Type Chapter Author(s) P. Bourdieu Is part of Book Title Schooling and capitalism: a sociological reader Author(s), R. Dale (ed) Editor(s) Dale, Roger Date 1976 Publisher
9/20/2010В В· conservative forces, which allow us to re-write the work-energy theorem as W NC = U+ K E mech. If there are no non-conservative forces, then there is no change in mechanical energy. If there is a non-conservative force (like friction), then some of the mechnical energy will be converted into other types of energy, like heat or sound. The firstorder force separates naturally into a conservative intensity-gradient term that forms a trap and a non-conservative solenoidal term that drives the system out of thermodynamic equilibrium.
Mechanics 1: Motion in a Central Force Field We now study the properties of a particle of (constant) mass m moving in a particular type of force field, a central force field. Central forces are very important in physics and engineering. For example, the gravitional force of attraction between two point masses is a … Thus, a force is conservative when the work it does on our system can be recovered fully. Take for example gravity: when a force is applied against gravity, the work is stored as gravitational potential energy, such as when a box is lifted and then dropped. On the other hand, friction is a non-conservative force, as work is lost as heat.
p250c8:1 8: Potential Energy and Conservative Forces Conservative and Nonconservative Forces example: the force of gravity vs. friction Work done to raise an object a height h: W = mgh = Work done by gravity on object if the object descends a height h. the work done lifting the object can be recovered (released) at some later time! To study the nature of conservative force systems using a spring-mass system as an example. Theory I. Hooke’s law and Spring constant When an object of mass m is attached at the lower end of a vertical spring, it elongates and comes to equilibrium. From Hooke’s law, the spring force F = - kΔX, where ΔX is the displacement
9/30/2017В В· This physics video tutorial provides a basic introduction into conservative and nonconservative forces. The work done by a conservative force does not depend on … 11/20/2018В В· Marx certainly argued that religion was a conservative force – through acting as the вЂopium of the masses’ Simone deBeauvoir argued that religion propped up Patriarchy by compensating women for their second class status. Churches tend to have traditional values and be supported by more conservative elements in society.
3/29/2017 · Conservative forces are any force wherein the work done by that force on an object only depends on the initial and final positions of the object. In other words, the work done by a conservative force on a mass does not … 5/3/2012 · Hello frnds, i understand what conservative and non conservative force are but i didn't get it properly with practical example. so any article is there which explain it properly with practical example and in easy way, i searched but didn't get any article that satisfy me. Gravity is conservative
A summary of Conservative vs. Nonconservative Forces in 's Conservation of Energy. Learn exactly what happened in this chapter, scene, or section of Conservation of Energy and what it means. Perfect for acing essays, tests, and quizzes, as well as for writing lesson plans. 9/22/2019 · A conservative force is one for which the work done is independent of path. Equivalently, a force is conservative if the work done over any closed path is zero. A non-conservative force is one for …
p250c8:1 8: Potential Energy and Conservative Forces Conservative and Nonconservative Forces example: the force of gravity vs. friction Work done to raise an object a height h: W = mgh = Work done by gravity on object if the object descends a height h. the work done lifting the object can be recovered (released) at some later time! The firstorder force separates naturally into a conservative intensity-gradient term that forms a trap and a non-conservative solenoidal term that drives the system out of thermodynamic equilibrium.
### Lecture 24 Conservative forces in physics (cont’d)
Conservative vector field Wikipedia. A conservative force is a force that acts on a particle, such that the work done by this force in moving this particle from one point to another is independent of the path taken. To put it another way, the work done depends only on the initial and final position of the particle (relative to some coordinate system)., Conservative force Non-conservative force A force is said to be conservative if the work done by or against force is dependent only on the initial and the final position of the body and not on the path followed by the body. A force is said to be non-conservative if the work done by or […] Login Register..
Conservative Force Real World Physics Problems. Work done by conservative force will be -ve when the body goes up and +ve when the body comes down. And when the body goes up (PE)final is greater so change is PE (final-initial) will be +ve and according to the formula we get the work done by conservative force -ve, which is …, A conservative force is a force that acts on a particle, such that the work done by this force in moving this particle from one point to another is independent of the path taken. To put it another way, the work done depends only on the initial and final position of the particle (relative to some coordinate system)..
### Conservative force physics Britannica
How to determine if a vector field is conservative Math. 5/3/2012В В· Hello frnds, i understand what conservative and non conservative force are but i didn't get it properly with practical example. so any article is there which explain it properly with practical example and in easy way, i searched but didn't get any article that satisfy me. Gravity is conservative https://fr.m.wikipedia.org/wiki/Conservation_de_l%27%C3%A9nergie a conservative force is the negative of the work done by the conservative force in moving the body along any path connecting the initial and the final positions. Fc G Work-Energy Theorem The work done by the total force in moving an object from A to B is equal to the change in kinetic energy When the only forces acting on an object are.
Conservative force tansfers energy between kinetic energy of the object in motion and the potential energy of the system interating with the object. Work done by conservative force is equal to work done by it on reversal of motion. otalT work done by conservative force in a closed path motion is zero. 2 Understanding non-conservative force (11) Conservative Forces. Consider the gravitational force acting on a body .If we try to move this body upwards by applying a force on it then work is done against gravitation; Consider a block of mass m being raised to height h vertically upwards as shown in fig 8(a) .Work done in this case is mgh
Lecture 24 Conservative forces in physics (cont’d) Determining whether or not a force is conservative We have just examined some examples of conservative forces in R2 and R3.We now address the The force of an ideal spring—fundamentally an electric force—is also conservative. Nonconservative forces include friction, drag forces, and the electric force in the presence of time-varying magnetic effects, which we’ll encounter in Chapter 27.
Every conservative field can be expressed as the gradient of some scalar field. 3. The gradient of any and all scalar fields is a conservative field. 4. The line integral of a conservative field around any closed contour is equal to zero. 5. The curl of every conservative field is equal to zero. 6. The curl of a vector field is zero only if it Conservation of Energy for Conservative Force Fields. We consider a particle of mass m moving under the influence of a conservative force field, i.e., the force can be written as F = −∇V, for some scalar valued function V. Referring to Fig. 1, we assume that the mass m of the particle is constant, and that
(11) Conservative Forces. Consider the gravitational force acting on a body .If we try to move this body upwards by applying a force on it then work is done against gravitation; Consider a block of mass m being raised to height h vertically upwards as shown in fig 8(a) .Work done in this case is mgh Work done by conservative force will be -ve when the body goes up and +ve when the body comes down. And when the body goes up (PE)final is greater so change is PE (final-initial) will be +ve and according to the formula we get the work done by conservative force -ve, which is …
9/30/2017 · This physics video tutorial provides a basic introduction into conservative and nonconservative forces. The work done by a conservative force does not depend on … We say that gravity is a conservative force: it has a potential energy which depends on position. Friction is a non-conservative (or dissipative) force with no potential energy. Are there other differences between conservative and non-conservative forces? Gravity Friction W = U f - U i W = f Δx + f Δy + f Δx + f Δy
9/20/2010 · conservative forces, which allow us to re-write the work-energy theorem as W NC = U+ K E mech. If there are no non-conservative forces, then there is no change in mechanical energy. If there is a non-conservative force (like friction), then some of the mechnical energy will be converted into other types of energy, like heat or sound. 3/29/2017 · Conservative forces are any force wherein the work done by that force on an object only depends on the initial and final positions of the object. In other words, the work done by a conservative force on a mass does not …
Conservative force tansfers energy between kinetic energy of the object in motion and the potential energy of the system interating with the object. Work done by conservative force is equal to work done by it on reversal of motion. otalT work done by conservative force in a closed path motion is zero. 2 Understanding non-conservative force 9/30/2017 · This physics video tutorial provides a basic introduction into conservative and nonconservative forces. The work done by a conservative force does not depend on …
time, the question of whether each of the common forces is conservative or nonconservative can be settled once and for all. I provide here brief arguments for classroom use that establish the conservative or nonconservative nature of each force commonly encountered in mechanics problems, including forces whose conservative or nonconservative In vector calculus, a conservative vector field is a vector field that is the gradient of some function. Conservative vector fields have the property that the line integral is path independent, i.e., the choice of any path between two points does not change the value of the line integral.Path independence of the line integral is equivalent to the vector field being conservative.
11/20/2018В В· Marx certainly argued that religion was a conservative force – through acting as the вЂopium of the masses’ Simone deBeauvoir argued that religion propped up Patriarchy by compensating women for their second class status. Churches tend to have traditional values and be supported by more conservative elements in society. Conservative force Non-conservative force A force is said to be conservative if the work done by or against force is dependent only on the initial and the final position of the body and not on the path followed by the body. A force is said to be non-conservative if the work done by or […] Login Register.
Title: Conservative and Nonconservative Forces 1 Chapter 11 Work and Conservation of Energy. Conservative and Non-conservative Forces ; Conservative Force a force for which the work it does on an object does not depend on the path. Gravity is an example. We know we can obtain the work with the work integral. If the force is conservative, then It is highly desirable to generate a conservative force field. Nevertheless, in the past, while the total optical force can be calculated29, people have not yet succeeded in separately calculating the conservative force, called the gradient force Fg (where F g 0) and the non-conservative force, called the scattering and absorption force
The force of an ideal spring—fundamentally an electric force—is also conservative. Nonconservative forces include friction, drag forces, and the electric force in the presence of time-varying magnetic effects, which we’ll encounter in Chapter 27. The force of an ideal spring—fundamentally an electric force—is also conservative. Nonconservative forces include friction, drag forces, and the electric force in the presence of time-varying magnetic effects, which we’ll encounter in Chapter 27.
Conservative force Non-conservative force A force is said to be conservative if the work done by or against force is dependent only on the initial and the final position of the body and not on the path followed by the body. A force is said to be non-conservative if the work done by or […] Login Register. time, the question of whether each of the common forces is conservative or nonconservative can be settled once and for all. I provide here brief arguments for classroom use that establish the conservative or nonconservative nature of each force commonly encountered in mechanics problems, including forces whose conservative or nonconservative
Thus, a force is conservative when the work it does on our system can be recovered fully. Take for example gravity: when a force is applied against gravity, the work is stored as gravitational potential energy, such as when a box is lifted and then dropped. On the other hand, friction is a non-conservative force, as work is lost as heat. Work done by conservative force will be -ve when the body goes up and +ve when the body comes down. And when the body goes up (PE)final is greater so change is PE (final-initial) will be +ve and according to the formula we get the work done by conservative force -ve, which is …
We say that gravity is a conservative force: it has a potential energy which depends on position. Friction is a non-conservative (or dissipative) force with no potential energy. Are there other differences between conservative and non-conservative forces? Gravity Friction W = U f - U i W = f Δx + f Δy + f Δx + f Δy Lecture L13 - Conservative Internal Forces and Potential Energy The forces internal to a system are of two types. Conservative forces, such as gravity; and dissipative forces such as friction. Internal forces arise from the natural dynamics of the system in contract to external forces which are …
Title: Conservative and Nonconservative Forces 1 Chapter 11 Work and Conservation of Energy. Conservative and Non-conservative Forces ; Conservative Force a force for which the work it does on an object does not depend on the path. Gravity is an example. We know we can obtain the work with the work integral. If the force is conservative, then A conservative force is a force that acts on a particle, such that the work done by this force in moving this particle from one point to another is independent of the path taken. To put it another way, the work done depends only on the initial and final position of the particle (relative to some coordinate system).
We say that gravity is a conservative force: it has a potential energy which depends on position. Friction is a non-conservative (or dissipative) force with no potential energy. Are there other differences between conservative and non-conservative forces? Gravity Friction W = U f - U i W = f Δx + f Δy + f Δx + f Δy 4/29/2018 · Some Points about Conservative forces and Non-conservative forces. 1) Potential energy is defined for Conservative forces. 2) An object that starts at a given point and returns to the same point, then network done by the conservative force is zero while in case of non-conservative force, it …
How Conservatives Would Reform Education. Search. Search the site GO. Issues. U.S. Conservative Politics The U. S. Government U.S. Foreign Policy U.S. Liberal Politics 10 Conservative Websites Great for Learning About the Movement. Biography of Mike Pence, Vice President of the United States. Conservative Force: A conservative force is a force that acts on a particle, such that the work done by this force in moving this particle from one point to another is independent of the path taken. To put it another way, the work done depends only on the initial and final position of the particle (relative to some coordinate system).
Non-Conservative force If the work done by a force depends not only on initial and final positions, but also on the path between them, the force is called a non-conservative force. Example: Friction force,Tension, normal force, and force applied by a person. 20. 11 Conservative forces in Phys 102: 9/30/2017 · This physics video tutorial provides a basic introduction into conservative and nonconservative forces. The work done by a conservative force does not depend on …
A conservative vector field (also called a path-independent vector field) is a vector field $\dlvf$ whose line integral $\dlint$ over any curve $\dlc$ depends only on the endpoints of $\dlc$. The integral is independent of the path that $\dlc$ takes going from its starting point to its ending point. The below applet illustrates the two-dimensional conservative vector field $\dlvf(x,y)=(x,y)$. In vector calculus, a conservative vector field is a vector field that is the gradient of some function. Conservative vector fields have the property that the line integral is path independent, i.e., the choice of any path between two points does not change the value of the line integral.Path independence of the line integral is equivalent to the vector field being conservative.
Conservation of Energy for Conservative Force Fields. We consider a particle of mass m moving under the influence of a conservative force field, i.e., the force can be written as F = −∇V, for some scalar valued function V. Referring to Fig. 1, we assume that the mass m of the particle is constant, and that A summary of Conservative vs. Nonconservative Forces in 's Conservation of Energy. Learn exactly what happened in this chapter, scene, or section of Conservation of Energy and what it means. Perfect for acing essays, tests, and quizzes, as well as for writing lesson plans.
the thrust in a compressed spring, or if the force is gravity or if it is an electrostatic force, the force is conservative. If it is not one of these, it is not conservative. Example. A man lifts up a basket of groceries from a table. Is the force that he exerts a conservative force? Answer: No, it … Lecture 24 Conservative forces in physics (cont’d) Determining whether or not a force is conservative We have just examined some examples of conservative forces in R2 and R3.We now address the
|
2022-12-02 06:10:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6630598902702332, "perplexity": 311.2046876901786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710898.93/warc/CC-MAIN-20221202050510-20221202080510-00542.warc.gz"}
|
https://brilliant.org/problems/christmas-streak-3288-polynomial-but-strange/
|
# Christmas Streak 32/88: Polynomial But Strange
Algebra Level 3
$P(x)$ is a cubic polynomial, and for $x=1,~2,~3,~4,$ $P(x)=\frac{1}{1+x+x^2}.$
For some positive coprime integers $a$ and $b,$ $P(5)=-\frac{a}{b}.$
Find the value of $a+b.$
This problem is a part of <Christmas Streak 2017> series.
×
|
2020-09-26 03:00:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7079045176506042, "perplexity": 1736.9184251141533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400232211.54/warc/CC-MAIN-20200926004805-20200926034805-00101.warc.gz"}
|
http://timescalewiki.org/index.php/Leighton-Wintner_theorem
|
# Leighton-Wintner theorem
Theorem (Leighton-Wintner theorem): Consider the self-adjoint equation $(py^{\Delta})^{\Delta}+qy^{\sigma}=0$. Assume $a \in \mathbb{T}, p>0, \sup \mathbb{T}=\infty,$ and $$\displaystyle\int_a^{\infty} \dfrac{1}{p(t)} \Delta t = \displaystyle\int_a^{\infty} q(t) \Delta t = \infty.$$ Then the self-adjoint equation is oscillatory on $[a,\infty)$.
|
2021-12-04 16:54:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8957242965698242, "perplexity": 2431.5157614031027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362999.66/warc/CC-MAIN-20211204154554-20211204184554-00005.warc.gz"}
|
http://rosalind.info/glossary/catalan-numbers/
|
# Glossary
## Catalan numbers
The Catalan number $c_n$ counts the total number of noncrossing perfect matchings in the complete graph $K_{2n}$. We can see that $c_1 = 1$, and we set $c_0 = 1$ as well. For a general $n$, say that the nodes of $K_{2n}$ are labeled with the positive integers from 1 to $n$ and ordered around a circle. We can join node 1 to any of the remaining $2n - 1$ nodes; yet once we have chosen this node (say $m$), we cannot add another edge to the matching that crosses the edge $\{1, m\}$. As a result, we must match all the edges on one side of $\{1, m\}$ to each other. This requirement forces $m$ to be even, so that we can write $m = 2k$ for some positive integer $k$.
There are $2k - 2$ nodes on one side of $\{1, m\}$ and $2n - 2k$ nodes on the other side of $\{1, m\}$, so that in turn there will be $c_{k-1} \cdot c_{n - k}$ different ways of forming a perfect matching on the remaining nodes of $K_{2n}$. If we let $m$ vary over all possible $n - 1$ choices of even numbers between 1 and $2n$, then we obtain the recurrence relation $c_n = \sum_{k = 1}^{n}{c_{k-1} \cdot c_{n-k}}$, which helps us count the Catalan numbers via dynamic programming.
The first four Catalan numbers (1, 2, 5, and 14) are counted by the figure below, which shows all possible noncrossing perfect matchings of complete graphs on 2, 4, 6, and 8 nodes.
|
2018-05-26 17:34:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 24, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8156713843345642, "perplexity": 77.90588139016765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867841.63/warc/CC-MAIN-20180526170654-20180526190654-00371.warc.gz"}
|
https://electronics.stackexchange.com/questions/161457/how-does-an-inductor-store-energy/161470
|
# How does an inductor store energy?
I know that the capacitors store energy by accumulating charges at their plates, similarly people say that an inductor stores energy in its magnetic field. I cannot understand this statement. I can't figure out how an inductor stores energy in its magnetic field, that is I cannot visualize it.
Generally, when electrons move across an inductor, what happens to the electrons, and how do they get blocked by the magnetic field? Can someone explain this to me conceptually?
1. If electrons flow through the wire, how are they converted to energy in the magnetic field?
2. How does back-EMF get generated?
• Just a suggestion. You better forget about "visualizing" when entering the field involving subatomic interactions. Anyway, whatever you are visualizing is not even close to what is happening in reality (well, nobody actually knows what is really happening there!). Some analogies can be used, but to a limited extent, and NEVER forget these are just analogies and not the processes themselves. Mar 24 '15 at 22:02
• But I must understand what is happening there to actually understand it know Mar 24 '15 at 22:12
• It might be more helpful to visualize the energy in a capacitor as being stored in the electric field between the plates. This electric field arises because of the displacement of the charge from one plate to the other. If it weren't for this field, it wouldn't have required any energy to shift the charges in the first place. Also, when you take special relativity into account, it turns out that electric fields and magnetic fields are really just two aspects of the same underlying phenomenon. Mar 24 '15 at 22:34
• Nobody REALLY understands this (or anything else :-) ) - all people do is describe what they see. "ALL models are wrong. Some models are useful" G Box - find a visualisation that works for you and use it. The most common visualisation method is a symbolic picture language called "mathematics". All this is is a way of describing what we see. Do the best you can but if you can't follow the standard picture language (aka maths) then something less descriptive may need to be enough. BUT - always remember NOBODY actually "KNOWS". Mar 25 '15 at 1:36
• I imagine them "powering up" like a Dragonball Z character getting ready to shoot a fireball. Pulsating yellow squiggly lines and all that. Mar 25 '15 at 1:49
This is a deeper question than it sounds. Even physicists disagree over the exact meaning of storing energy in a field, or even whether that's a good description of what happens. It doesn't help that magnetic fields are a relativistic effect, and thus inherently weird.
I'm not a solid state physicist, but I'll try to answer your question about electrons. Let's look at this circuit:
simulate this circuit – Schematic created using CircuitLab
To start with, there's no voltage across or current through the inductor. When the switch closes, current begins to flow. As the current flows, it creates a magnetic field. That takes energy, which comes from the electrons. There are two ways to look at this:
1. Circuit theory: In an inductor, a changing current creates a voltage across the inductor $(V = L\frac{di}{dt})$. Voltage times current is power. Thus, changing an inductor current takes energy.
2. Physics: A changing magnetic field creates an electric field. This electric field pushes back on the electrons, absorbing energy in the process. Thus, accelerating electrons takes energy, over and above what you'd expect from the electron's inertial mass alone.
Eventually, the current reaches 1 amp and stays there due to the resistor. With a constant current, there's no voltage across the inductor $(V = L\frac{di}{dt} = 0)$. With a constant magnetic field, there's no induced electric field.
Now, what if we reduce the voltage source to 0 volts? The electrons lose energy in the resistor and begin to slow down. As they do so, the magnetic field begins to collapse. This again creates an electric field in the inductor, but this time it pushes on the electrons to keep them going, giving them energy. The current finally stops once the magnetic field is gone.
What if we try opening the switch while current is flowing? The electrons all try to stop instantaneously. This causes the magnetic field to collapse all at once, which creates a massive electric field. This field is often big enough to push the electrons out of the metal and across the air gap in the switch, creating a spark. (The energy is finite but the power is very high.)
The back-EMF is the voltage created by the induced electric field when the magnetic field changes.
You might be wondering why this stuff doesn't happen in a resistor or a wire. The answer is that is does -- any current flow is going to produce a magnetic field. However, the inductance of these components is small -- a common estimate is 20 nH/inch for traces on a PCB, for example. This doesn't become a huge issue until you get into the megahertz range, at which point you start having to use special design techniques to minimize inductance.
• Thanks for the answer. But I have also found that there was no way to visualize the things happening in an inductor. Mar 27 '15 at 4:16
• Energy doesn't come from the electrons any more than a bulldozer is powered by hydraulic fluid. The energy comes from the voltage source: the electrons are just a working fluid. Mar 17 '16 at 19:44
• Well, yeah, obviously the input energy comes from the voltage source. (I thought that was clear.) But electrons certainly carry energy -- think of a current in a superconducting loop, or a capacitor discharging through a resistor. Mar 17 '16 at 20:28
• @AdamHaun But individual electrons don't carry energy, any more than nitrogen molecules carry sound waves. Instead, the entire electron-sea inside the wire is acting as a propagation-medium, and the EM energy is propagating through that medium. (Jerk on a metal chain, and the individual chain-links aren't carrying energy along as they move. The "jerk-wave" isn't stuck to any single chain-link.) Even at DC, the EM energy is flowing at lightspeed. B-fields change at "c" velocity, while electrons flow very, very slowly: energy versus electrons is waves-vs-medium. Jan 27 at 1:23
• @AdamHaun yes and no, since if we follow electrons, we find that during zero amps, there are two enormous equal electron-flows, going in opposite directions. What then halts during zero amps? And during AC, what flows back and forth? Not electrons. What conducts energy? Not electrons; instead only the macroscopic electron-population as a whole does this. When H2O molecules behave this way, we call it "water!" When electrons behave this way, we have no good word for it. Electron-fluid? The Sea Of Charge? Heh, the "electron-stuff" acts like a long narrow piston, and energy can be sent along it. Jan 28 at 6:46
This is my way of visualizing the concept of inductor and capacitor. The way is to visualize potential energy and kinetic energy, and understanding the interaction between these two forms of energy.
1. Capacitor is analogous to a spring, and
2. Inductor is analogous to a water wheel.
Now see the comparisons. Spring energy is $\frac{1}{2}kx^2$, whereas capacitor energy is $\frac{1}{2}CV^2$. So, capacitance, $C$ is analogous to the spring constant, $k$. Capacitance voltage,$V$, is analogous to spring displacement, $x$. Electric field across the capacitance is analogous to the force generated across the spring. What happens is that the kinetic energy of electrons are stored in the capacitor as potential energy. The resultant potential energy difference is the voltage which is kind of a pressure in the form of electric field. So, the capacitor always pushes the electrons back because of its potential energy.
Next, the kinetic energy of a water wheel can be expressed as $\frac{1}{2}I\omega^2$, where $I$ is the moment of inertia and $\omega$ is the angular frequency. Whereas, the energy stored in an inductor is $\frac{1}{2}Li^2$, where $i$ is the current. Thus, current is analogous to velocity which it is as $i = \frac{dq}{dt}$.
When current flows through a wire, the moving electrons create a magnetic field around the wire. For a straight wire, the generated magnetic field will not effect the electrons in that wire or at least can be ignored in most cases. However, if we wind the wires several thousands times such that the generated magnetic field affects the wire electrons themselves, then any change in the velocity will be opposed by the force from the magnetic field. Thus, the overall force, $F$, electrons face is expressed by $\mathbf{F} = q\mathbf{E} + q\mathbf{v} \times \mathbf{B}$. The potential energy in a capacitor is stored in the form of electric field, and the kinetic energy in an inductor is stored in the form of magnetic field.
In summary, inductor acts as inertia which reacts against the change in velocity of electrons, and capacitor acts as spring which reacts against the applied force.
Using the above analogies, you can easily find why the phase relationships between voltage and current are different for inductors and capacitors. This analogy also helps to understand energy exchange mechanism between a capacitor and an inductor such as in a LC oscillator.
For further thinking, ask the following questions. How the kinetic energy in a mechanical system is stored? When we are running, where and how is the kinetic energy stored? When we are running, are we creating a field that interacts on our moving body?
One way to conceptualize it is to imagine it to be similar to inertia of the current through the inductor. A good way to illustrate it is with the idea of a hydraulic ram pump:
In a hydraulic ram pump, water flows through a large pipe, into a fast acting valve. When the valve closes, the inertia of the heavy flowing mass of water causes a sudden huge increase in water pressure at the valve. This pressure then forces water upwards through a one way valve. As the energy from the water ram dissipates, the main fast acting valve opens, and the water builds up some momentum in the main pipe, and the cycle repeats again. See the wiki page for an illustration.
This is exactly how boost converters work, only with electricity instead of water. The water flowing through the pipe is equivelant to an inductor. Just like the water in the pipe resists changes in flow, the inductor resists change in current.
• Only charge pumps don't use inductors, they use capacitors. Apr 12 '16 at 13:31
• I think @whatsisname means a boost converter, not a charge pump. I'll edit. Aug 19 '16 at 16:44
A capacitor can store energy: -
Energy = $\dfrac{C\cdot V^2}{2}$ where V is applied voltage and C is capacitance.
For an inductor it is this: -
Energy = $\dfrac{L\cdot I^2}{2}$ where L is inductance and I is the current flowing.
Me in particular, I always have trouble visualizing charge and voltage but I never have trouble visualizing current (except when it comes to realizing that current is flow of charge). I accept that voltage is what it is and just live with that. Maybe I think too hard. Maybe you do too?
I end up going back to basics and this for me, is as far as I want to go back because I'm not a physicist. Basics: -
Q = CV or $\dfrac{dQ}{dt} = C\cdot\dfrac{dV}{dt}$ = current, I
What this tells me is that for a given rate of change of voltage across a capacitor, there is a current OR, if you force a current thru a capacitor there will be a ramping voltage.
There is a similar formula for an inductor which basically tells you that for a given voltage placed across the terminals, the current will ramp up proportionately: -
V = $L\dfrac{di}{dt}$ when V is applied to the terminals and
V = $-L\dfrac{di}{dt}$ when computing the back emf due to external flux collapsing or flux from another coil changing.
These two formulas explain to me what goes on.
Picture a series circuit comprising an ideal capacitor, C, an ideal inductor, L, and a switch. The inductor has a soft magnetic core, such that the strength of its magnetic field is proportional to the current flowing through it. The capacitor dielectric is perfect and thus there are no losses.
Initially, let's assume the switch is open and all initial conditions are zero. That is, there is zero charge on the capacitor, zero current through the inductor and hence the magnetic field in the core is zero. We give the capacitor an initial charge to V volts using a battery.
The switch is now closed, at t=0, and L and C form a simple series circuit. At all values of time after switch closure, the capacitor voltage must equal the inductor voltage (Kirchoff's voltage law). So what happens????
1. At t=o, the voltage across C is V, so the voltage across L must also be V. Therefore the rate of change of current, di/dt, from C to L, must be such that Ldi/dt = V. Thus, the rate of change of current is quite large, but the current itself, at the instant t=0 is i=0, and di/dt = V/L
2. As time progresses, the voltage across C decreases (as the charge flows out) and the rate of change of current necessary to maintain the inductor voltage at the same level as the capacitor voltage decreases. The current is still increasing, but its gradient is decreasing.
3. As the current inceases, the strength of the magnetic field in the inductor core increase (field strength is proportional to current).
4. At the point where the capacitor has lost all its charge, the capacitor voltage is zero, the current is at its maximum value (it's been increasing since t=0), but the rate of change, di/dt, is now zero since the inductor does not need to generate a voltage to balance the capacitor voltage. Also at this point the magnetic field is at its maximum strength (actually, energy stored is LI^2/2, where I is the maximum current and this equates to the original energy in C = CV^2/2
5. Now there is no more energy left in the capacitor, so it is unable to supply any current to maintain the inductor's magnetic field. The magnetic field starts to collapse, but in so doing it creates a current that tends to oppose the collapsing magnetic field (Lenz's law). This current is in the same direction as the original current flowing in the circuit but it now acts to charge the capacitor in the opposite direction (i.e. whereas the top plate may have originally been positive, now the bottom plate is being charged positive).
6. The inductor is now in the driving seat. It's generating a current, i, in response to the collapsing magnetic field and, because this current is decreasing from its original value (I), a voltage is generated with magnitude, Ldi/dt (opposite polarity to previous).
7. This regime continues until the magnetic field has completely dissipated, having transferred its energy back to the capacitor, albeit with opposite polarity, and the whole operation starts again but this time the capacitor forces current around the circuit in the opposite direction to previous.
8. The above represents the positive half-cycle of the current waveform and step 7 is the begining of the negative half-cycle. One complete discharge-charge waveform is one cycle of a sinusoidal waveorm. If the L and C components are perfect or 'ideal' there is no energy loss and the voltage and current sinusoids continue to infinity.
So I think it's clear that the magnetic field has the ability to store energy. However it is not as capable of long term storge as a capacitor, as the opportunities for, and mechanisms of energy leakage are manifold. Interesting to note that early computer memory was made of inductors wound around ferrite toroidal cores (one toroid per bit!!), but these needed electronic refreshing frequently to retain the stored data.
May be we can visualize it in this way. Inductors are made by making conductor turns over a magnetic core or just air. Unlike a capacitor, in which a dielectric substance is sandwiched between conductors plates. every atom acts as a current carrying loop. It is so because, electron revolve in a circular path. This give rise to magnetic dipoles (atoms) inside substances. Initially all the magnetic dipoles are randomly directed inside a substance, making the resultant direction of magnetic field lines to be null. Current flows due to flow of electrons. In a circuit consisting of an inductor, there is a specific direction of current flow (or electron flow) through the inductor. as such, this current tries to align the magnetic dipoles in a specific direction.
The reluctance of the magnetic dipoles to get aligned in a specific direction, is responsible for the opposition of current. the opposition can be called as back emf.
This opposition offered is different for different material. hence, we have different reluctance values. the inductor is said to be saturated when all the magnetic dipoles are aligned in the specific direction which is given by Fleming's Right Hand Thumb Rule. the direction of opposition is given by Lenz's Law (the direction of back emf).
These magnetic dipoles are only responsible for the storage of magnetic energy. Assume this inductor connected to a closed circuit without any current supply. now the aligned magnetic dipoles try to retain their initial position, because of the absence of current. This results in the flow of current. it can be said that the, energy stored in the the inductor is due to the temporary alignment of these dipoles. but few magnetic dipoles can not attain their initial configuration. hence, we say pure inductor is not present practically.
Scientists know that the electric fields and magnetic fields are co-related. This was first confirmed by Oersted by his experiment with a magnetic compass. even scientist believe that magnetic behavior is exhibited by individual electrons too, due to their spin about their own axis.
• Please use proper punctuation when posting, Thanks Aug 19 '16 at 17:30
Let's not talk about fields at all. Let's talk instead first about what voltage is. Electrons really don't like to be near each other. The electrical force is incredibly strong. Let me give you an example of this. If 1 Ampere of current passed through a wire this would mean that 1 Coloumb of electrical charge has passed through that wire in 1 second. Let us suppose that you were able to store all of these electrons that passed in one second on an electrically isolated metal sphere. Then you waited another second and stored the same amount of electrons on another isolated metal sphere. Now you have 1 Coulomb of electrons on one sphere and 1 Coulomb of electrons on the other sphere. As you know, like charges will repel each other. If I held these two spheres 1 meter apart how much force to you think one would apply on the other due to Coulomb repulsion? The answer is in Coulomb's constant, which is: $$\ 9 \cdot 10^9 N \cdot m^2 \cdot C^{-2}\$$.
Since we are 1m apart and since we have 1 Coulomb the force is 9 x 10^9 Newtons. This means that it will support 9 x 10^8 kg in Earth's gravity. Which is the weight of a very large building. This illustrates that excess electrons do not like to be near each other at all.
Voltage is the energy an excess electron has when it is added to an object. And you don't need many electrons at all to increase the voltage substantially. This means that objects, including metal wires, have a very very low capacity for excess electrons. What then is a capacitor? A capacitor has a high capacity for electrons so that when a battery adds electrons to a piece of wire that has a capacitor on the end the voltage does not increase as much per each electron. This is not due to the fact that a capacitor has a plate (no matter how large it is) : a single plate has a very very LOW capacity for extra electrons. The secret to a capacitor is the opposing plate that is very close to it. What happens is that any excess electrons on the plate are attracted to the opposing plate from which electrons have been removed by the battery. This means that the overall energy per excess electron is reduced and you can fit in more electrons per unit voltage increase. Capacitors therefore cannot have an air gap between them because the forces are so great. They need to have a solid between them to prevent the plates from collapsing into each other.
Now we come to the inductor. This is a crazy thing. There is no such thing as a magnetic field. It is just a Coulomb attraction. But this Coulomb attraction only occurs when current is flowing in this case. How can this happen? Well remember that the Coulomb force is incredibly strong so its effects can be seen from quite subtle changes in electron density that we cannot see. And now for the crux of the matter. The subtle changes are, in fact, due to Einstein's relativity. Electrons have an average spacing in a wire and this average spacing is the same as the average spacing of the positive charges. When a current flows you might think that the average spacing stays the same but now you have to take into account length contraction. To an outside observer any moving object will appear to be shorter and this is what happens to (the space between) electrons. With a coil of wire, on the opposite sides of the circle the electrons are flowing in the opposite direction. One side see the other as having a greater density of electrons than positive charges due to relativity. This creates a repulsion between the electrons in wires having opposing current directions and increases their energy (i.e. voltage). The voltage therefore rises much faster than for an ordinary wire. People therefore think of inductors as opposing current flow. But what it really happening is that the voltage increased very quickly and more so if a greater current flows. You might have noticed that all text books treat magnetism in a mathematical way and never really point out the actual particle responsible. Well its the electron and the force is due to relativity, and the force is most definitely Coulombic. This is true even in permanently magnetized materials (but that is another discussion). Forget fields, they are a mathematical construct for people who do not want to understand the world.
• Welcome to EE.SE! Please format your post into paragraphs. Currently, it is very difficult to read. Aug 21 '18 at 15:37
• Use 2 x <Enter> for paragraph breaks. Aug 21 '18 at 15:39
All these answers are wonderful, but to answer the question about back emf, the key points to keep in mind:
1. A changing B field induces an E field.
2. E is related to ε (emf) through: ε = W/q -> W = ∮F⋅ds -> W/q = -∮(F/q)⋅ds -> E = F/q -> W/q = -∮E⋅ds (where s is an infinitesimal distance in direction of motion)
So when there’s a changing magnetic field, there is an induced E field, and hence there will be an induced voltage (emf).
1. ε = -∮(E_ind)⋅ds = -∂(Φ_B)/∂t = -(d/dt)(∫Β⋅dA) Remember, it’s the B field changing here, so: = -(∂Β/∂t)A
The reason for it opposing the constant voltage source (e.g., a battery) is simply because F (proportional to E) points perpendicularly to B and I:
1. F = Ids × B. (Current times ds, an infinitesimal piece of wire in the direction of I — current can only flow through the wire)
(Direction given by right hand rule)
This force adds a velocity component to the charges in the current in the direction of F. In turn, this new velocity component now creates a force component mutually orthogonal to the new component and B field, which is in the direction opposing the original flow of current, or opposing the original supplied voltage, and hence why it is called a “back emf”.
It is this back emf that slows the charge (it doesn’t block them).
Visualizing b-fields and inductors? Like this video?
Go find professor J. Belcher's videos from MIT e&m course "TEAL project," with EM animated visualizations. Every coil is like a stack of metal flywheels, where the electrons inside the turns of wire are like the atoms in the moving flywheels.
Also: A Tour of Classical Electromagnetism (w/MPG animations) http://web.mit.edu/8.02t/www/802TEAL3D/visualizations/guidedtour/Tour.htm
Try these MPEG collections:
found that there was no way to visualize things happening in an inductor
NOT TRUE! Instead, engineering authors never try doing that, partly because animated diagrams don't work in paper textbooks. But also I've repeatedly found that animated diagrams are seen as a "For Dummies" technique, and attract scoffing, since they make things far too easy to understand. No math rigor, far too much "physics for poets." Engineering course work is expected to be complex and difficult: we're supposed to learn the math-models alone, never seeing any simple straightforward animations based on those same equations. (Heh, if we NEEDED a non-math visual version, then maybe we're in the wrong degree program, and should take simpler classes? Screw that: instead I want to teach this material to everyone, little kids and grandfathers. Use the math-models to create 3D animations, interactive visual simulations where we can SEE the currents inside the wires, etc..)
|
2021-09-16 10:43:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6497352719306946, "perplexity": 490.13082340097344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00333.warc.gz"}
|
https://en.wikipedia.org/wiki/Coase_conjecture
|
# Coase conjecture
The Coase conjecture, developed first by Ronald Coase, is an argument in monopoly theory. The conjecture sets up a situation in which a monopolist sells a durable good to a market where resale is impossible and faces consumers who have different valuations. The conjecture proposes that a monopolist that does not know individuals' valuations will have to sell its product at a low price if the monopolist tries to separate consumers by offering different prices in different periods. This is because the monopoly is, in effect, in price competition with itself over several periods and the consumer with the highest valuation, if he is patient enough, can simply wait for the lowest price. Thus the monopolist will have to offer a competitive price in the first period which will be low. The conjecture holds only when there is an infinite time horizon, as otherwise a possible action for the monopolist would be to announce a very high price until the second to last period, and then sell at the static monopoly price in the last period. The monopolist could avoid this problem by committing to a stable linear pricing strategy or adopting other business strategies.[1]
## Simple two-consumer model
Imagine there are consumers, called ${\displaystyle X}$ and ${\displaystyle Y}$ with valuations of good with ${\displaystyle x}$ and ${\displaystyle y}$ respectively. The valuations are such as ${\displaystyle x. The monopoly cannot directly identify individual consumers but it knows that there are 2 different valuations of a good. The good being sold is durable so that once a consumer buys it, he or she will still have it in all subsequent periods. This means that after the monopolist has sold to all consumers, there can be no further sales. Also assume that production is such that average cost and marginal cost are both equal to zero.
The monopolist could try to charge at a ${\displaystyle price=y}$ in the first period and then in the second period ${\displaystyle price=x}$, hence price discriminating. This will not result in consumer ${\displaystyle Y}$ buying in the first period because, by waiting, she could get price equal to ${\displaystyle x}$. To make consumer ${\displaystyle Y}$ indifferent between buying in the first period or the second period, the monopolist will have to charge a price of ${\displaystyle price=dx+(1-d)y}$ where ${\displaystyle d}$ is a discount factor between 0 and 1. This price is such as ${\displaystyle dx+(1-d)y.
Hence by waiting, ${\displaystyle Y}$ forces the monopolist to compete on price with its future self.
## n consumers
Imagine there are ${\displaystyle n}$ consumers with valuations ranging from ${\displaystyle y}$ to a valuation just above zero. The monopolist will want to sell to the consumer with the lowest valuation. This is because production is costless and by charging a price just above zero it still makes a profit. Hence to separate the consumers, the monopoly will charge first consumer ${\displaystyle (1-d^{n})y}$ where ${\displaystyle n}$ is the number of consumers. If the discount factor is high enough this price will be close to zero. Hence the conjecture is proved.
|
2017-02-26 15:54:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 18, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5784295797348022, "perplexity": 1057.40526284955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00378-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.enotes.com/homework-help/y-3-x-2-graph-function-state-domain-range-818438
|
# `y=-3/(x+2)` Graph the function. State the domain and range.
To be able to graph the given function `y=-3/(x+2)` , we need to solve for the possible location of vertical asymptote.
Vertical asymptote exist at x=a that will satsify `D(x)=0` on a rational function `f(x)=(N(x))/(D(x))` .
To solve for the verical asymptote, we equate the expression at denominator side to 0 and solve for x.
`x+2=0`
`x +2-2=0-2`
`x=-2`
A vertical asymptote exist along `x=-2` .
To solve for horizontal asymptote for a given function: `f(x) = (ax^n+...)/(bx^m+...)` , we follow the conditions:
when `n lt m` horizontal asymptote: `y=0`
`n=m ` horizontal asymptote: ` y =a/b`
`ngtm ` horizontal asymptote: `NONE`
The function `y=-3/(x+2)` is the same as `y=(-3x^0)/(x^1+2)` .
Then, `n=0` and `m=1` satisfies the condition: n<m.
Therefore, a horizontal asymptote exist at `y=0` (along x-axis).
To solve for possible y-intercept, we plug-in `x=0 ` and solve for `y ` .
`y=-3/(0+2) `
`y=-3/2 or -1.5 `
Then, y-intercept is located at a point `(0,-1.5) ` .
To solve for possible x-intercept, we plug-in `y=0 ` and solve for x.
`0=-3/(x+2)`
`0*(x+2)=-3 `
`0=-3 `
The x's get cancelled out. Thus, there is no x-intercept.
The y-intercept `(0,-1.5)` indicates that the graph is below the x-axis. Given that we can not cross the x-axis due to the horizontal asymptote, it follows that the graph approach the vertical asymptote in downward direction from right and upward direction from the left.
Solve for additional points as need to sketch the graph.
When `x=-5` , then `y =-3/(-5+2)=1` . point: `(-5,1)`
When `x=-3` , then `y =-3/(-3+2)=3` . point: `(-1,3)`
When `x=1` , then` y =-3/(1+2)=-1` . point: `(1,-1)`
Applying the listed properties of the function, we plot the graph as:
The domain of the function is based on the possible values of x.
Domain: `(-oo, -2)uu(-2,oo)`
`x=-2` excluded due to the vertical asymptote
The range of the function is based on the possible values of y.
Range: `(-oo,0)uu(0,oo)`
`y=0` is excluded due to the horizontal asymptote.
Approved by eNotes Editorial Team
|
2022-11-27 11:59:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8733949065208435, "perplexity": 1983.3320403440619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710237.57/warc/CC-MAIN-20221127105736-20221127135736-00457.warc.gz"}
|
https://www.zbmath.org/?q=an%3A1029.83027
|
# zbMATH — the first resource for mathematics
Black-hole thermodynamics and Riemann surfaces. (English) Zbl 1029.83027
In this paper the author uses the analytic continuation procedure proposed in his earlier works [Adv. Theor. Math. Phys. 4, 929-979 (2000; Zbl 1011.81068) and Classical Quantum Gravity 19, 2399-2424 (2002; Zbl 1010.83040)] to study the thermodynamics of black holes in $$2+1$$ dimensions. A general black hole in $$2+1$$ dimensions has $$g$$ handles hidden behind $$h$$ horizons. The result of the analytic continuation of a black hole spacetime is a hyperbolic $$3$$-manifold having the topology of a handlebody. The boundary of this handlebody is a compact Riemann surface of genus $$G=2g+h-1$$.
##### MSC:
83C57 Black holes 53B35 Local differential geometry of Hermitian and Kählerian structures 80A10 Classical and relativistic thermodynamics
Full Text:
|
2021-06-15 21:44:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5050824880599976, "perplexity": 717.7716787220777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621627.41/warc/CC-MAIN-20210615211046-20210616001046-00559.warc.gz"}
|
http://tex.stackexchange.com/questions/55127/inline-mathmode-goes-out-the-page-margin-in-lyx
|
# Inline mathmode goes out the page margin in LyX
I'm using LyX for my document writing. Some of my inline mathmode formulas go out the page margin (see attached fig). I wonder how can I force these formulas to stay inside the page margin. Thanks in advance for your help and time.
Edit Code
where $vec\left(\mathbf{Y}\right)=vec\left(\begin{bmatrix}\mathbf{y}^{\left(1\right)} & \ldots & \mathbf{y}^{\left(j\right)} & \ldots & \mathbf{y}^{\left(t\right)}\end{bmatrix}\right)\equiv\mathbf{y}^{*}$;
$\mathbf{I}\otimes\mathbf{X}\equiv\mathbf{X}^{*}$; $vec\left(\mathbf{B}\right)=vec\left(\begin{bmatrix}\boldsymbol{\beta}^{\left(1\right)} & \ldots & \boldsymbol{\beta}^{\left(j\right)} & \ldots & \boldsymbol{\beta}^{\left(t\right)}\end{bmatrix}\right)\equiv\mathrm{\bm{\beta}}^{*}$;
$\mathbf{I}\otimes\mathbf{Z}\equiv\mathbf{Z}^{*}$; $vec\left(\mathbf{U}\right)=vec\left(\begin{bmatrix}\mathbf{u}^{\left(1\right)} & \ldots & \mathbf{u}^{\left(j\right)} & \ldots & \mathbf{u}^{\left(t\right)}\end{bmatrix}\right)\equiv\mathbf{u}^{*}$;
and $vec\left(\mathbf{E}\right)=vec\left(\begin{bmatrix}\mathbf{e}^{\left(1\right)} & \ldots & \mathbf{e}^{\left(j\right)} & \ldots & \mathbf{e}^{\left(t\right)}\end{bmatrix}\right)\equiv\mathbf{e}^{*}$.
Thus the univariate linear mixed model involving all variables can
be obtained from multivariate linear mixed model
-
it looks like this particular expression is pretty long; I know this doesn't answer your question, but have you considered displaying it instead, perhaps using align? – cmhughes May 9 '12 at 18:22
Inline math can only be broken at relation signs. So you have to rewrite your sentences or what @cmhughes and I recommend: Use align. – Marco Daniel May 9 '12 at 18:40
As @MarcoDaniel says, TeX has a hard time with this paragraph, where only a few feasible break points are present: in the second line only three and the unbreakable parts are very long. So a display seems the best solution: your readers will be grateful. It would be better to write \mathit{vec} rather than vec or maybe \mathrm{vec} if it's an operator; better yet, \operatorname{vec} (with the amsmath package). – egreg May 9 '12 at 21:20
@egreg: Thanks for your nice suggestions. Would you mind to give an example of \operatorname{vec} (with the amsmath package). I'm using LyX. Thanks – MYaseen208 May 9 '12 at 21:24
@MYaseen208 Look in the site for operatorname: there is plenty of examples. – egreg May 9 '12 at 21:26
The ; and and are really part of the sentence structure, not the mathematics,so TeX can do a better job if you code this as a sentence with multiple inline fragments. Also it still is hard so I have used \sloppy to tell LaTeX to allow white space to stretch more than usual. It still looks pretty hard to read and I would definitely consider setting this as a display using an AMS alignment, but to get it inline:
\documentclass{article}
\renewcommand\vec[1]{\mathop{\mathrm{vec}}(#1)}
\begin{document}
\large
where $\vec{\mathbf{Y}} = \vec{[\mathbf{y}^{1} \ldots \mathbf{y}^{j} \ldots \mathbf{y}^{t}]} \cong \mathbf{y}^*; \mathbf{I}\otimes\mathbf{X}\cong\mathbf{X}^*; \vec{\mathbf{B}} = \vec{[\beta^{1} \ldots \beta^{j} \ldots \beta^{t}]} \cong \beta^*; \mathbf{I}\otimes\mathbf{Z}\cong\mathbf{Z}^*; \vec{\mathbf{U}} = \vec{[\mathbf{u}^{1} \ldots \mathbf{u}^{j} \ldots \mathbf{u}^{t}]} \cong \mathbf{u}^*; \mbox{ and } \vec{\mathbf{E}} = \vec{[\mathbf{e}^{1} \ldots \mathbf{e}^{j} \ldots \mathbf{e}^{t}]} \cong \mathbf{e}^*;$
Thus the invariate linear mixed model involving all variables
can be obtained from multivariate linear mixed
\bigskip
{\sloppy where
$\vec{\mathbf{Y}} = \vec{[\mathbf{y}^{1} \ldots \mathbf{y}^{j} \ldots \mathbf{y}^{t}]} \cong \mathbf{y}^*$;
$\mathbf{I}\otimes\mathbf{X}\cong\mathbf{X}^*; \vec{\mathbf{B}} = \vec{[\beta^{1} \ldots \beta^{j} \ldots \beta^{t}]} \cong \beta^*$;
$\mathbf{I}\otimes\mathbf{Z}\cong\mathbf{Z}^*$;
$\vec{\mathbf{U}} = \vec{[\mathbf{u}^{1} \ldots \mathbf{u}^{j} \ldots \mathbf{u}^{t}]} \cong \mathbf{u}^*$;
and
$\vec{\mathbf{E}} = \vec{[\mathbf{e}^{1} \ldots \mathbf{e}^{j} \ldots \mathbf{e}^{t}]} \cong \mathbf{e}^*;$
Thus the invariate linear mixed model involving all variables
can be obtained from multivariate linear mixed\par}
\end{document}
-
Thanks a lot @David for your answer. I also add the LaTex code that I'm using. – MYaseen208 May 9 '12 at 21:12
Great. This works like a charm. Much appreciated. – MYaseen208 May 9 '12 at 21:17
Note you should not just use vec for vec as TeX will set that as the product v e c You should use specific markup for operators like \log or here vec. – David Carlisle May 9 '12 at 21:20
|
2015-08-03 06:44:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598293304443359, "perplexity": 2431.844690997986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989510.73/warc/CC-MAIN-20150728002309-00136-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/gravitation-between-the-moon-and-the-earth-physics-project.946696/
|
# Gravitation between the Moon and the Earth: physics project
## Homework Equations
f = ma
m1 = mass of moon
m2 = mass of earth
## The Attempt at a Solution
Ok this is crunch time here and i am NOT Kobe Bryant
I have chosen gravitation between the moon and earth for this project. I will start with the net force on the moon as
∑FSystem = (Gm1m2)/(R2) = ma
i will only be focusing on the moving body (the moon) so to find the acceleration i will use
(Gm1m2)/(R2*m1) = a
ok. Now this project is in excel. So i will use rienmann sums to derive my velocity and position graphs.
This is where i am stuck, i will integrate the acceleration which is
(Gm1m2)/(R2*m1) = a
now the radius is changing with respect to time between the earth and the moon, now i need to figure out how to find the radius with respect to time as well as figure out my constants of integration (which i think will be the previous velocity in the row on top in excel. i will start with some random velocity and position and integrate them.
#### Attachments
• 73.2 KB Views: 1,047
Related Introductory Physics Homework Help News on Phys.org
tnich
Homework Helper
## Homework Statement
View attachment 225233
## Homework Equations
f = ma
m1 = mass of moon
m2 = mass of earth
## The Attempt at a Solution
Ok this is crunch time here and i am NOT Kobe Bryant
I have chosen gravitation between the moon and earth for this project. I will start with the net force on the moon as
∑FSystem = (Gm1m2)/(R2) = ma
i will only be focusing on the moving body (the moon) so to find the acceleration i will use
(Gm1m2)/(R2*m1) = a
ok. Now this project is in excel. So i will use rienmann sums to derive my velocity and position graphs.
This is where i am stuck, i will integrate the acceleration which is
(Gm1m2)/(R2*m1) = a
now the radius is changing with respect to time between the earth and the moon, now i need to figure out how to find the radius with respect to time as well as figure out my constants of integration (which i think will be the previous velocity in the row on top in excel. i will start with some random velocity and position and integrate them.
I think the assumption that only the moon is moving will turn out to be an issue. That would violate conservation of momentum. The problem statement says that you can fix one of the objects (at a constant position). Be aware that the earth and the moon revolve about a common point. If you consider the earth to be fixed, you will probably have to justify your assumption and explain why momentum in not conserved in your answer for goal #5.
I think it would help you to think of the the force equation for your chosen problem in terms of the vector ##\hat r## between the Earth and the Moon. Remember that the unit vector associated with that vector is ##\frac {\hat r} {|\hat r|}##.
isukatphysics69
tnich
Homework Helper
I think the assumption that only the moon is moving will turn out to be an issue. That would violate conservation of momentum. The problem statement says that you can fix one of the objects (at a constant position). Be aware that the earth and the moon revolve about a common point. If you consider the earth to be fixed, you will probably have to justify your assumption and explain why momentum in not conserved in your answer for goal #5.
I think it would help you to think of the the force equation for your chosen problem in terms of the vector ##\hat r## between the Earth and the Moon. Remember that the unit vector associated with that vector is ##\frac {\hat r} {|\hat r|}##.
You might avoid that problem altogether by looking at the orbit of a spacecraft around the earth instead. In that case, the difference in the masses of the two bodies is so large that you could reasonable assume that the earth is fixed.
isukatphysics69
BvU
Homework Helper
this is crunch time
As my favorite IT guru says: Lack of planning on your part does not constitute an emergency on my part !
Not a good idea. The moon is known for a long time as approximately at the same distance from the earth and certainly not heading straight toward or away from it.
1. Homework Statement
You left out part 1. For a reason ?
Re 3. Choices: You pick earth and moon. OK, fine. You pick earth as not moving ? Not so good ! Why do you think we have high tide twice a day ?
isukatphysics69
You might avoid that problem altogether by looking at the orbit of a spacecraft around the earth instead. In that case, the difference in the masses of the two bodies is so large that you could reasonable assume that the earth is fixed.
I have already told my professor that i will be doing the moon and earth so it is to late to change. I will fix the earth at a constant position.
So the acceleration in the x direction will be
(Gm1m2)/(R2*m1) * (Xposition/r)= a
the acceleration in the y direction will be
(Gm1m2)/(R2*m1) * (Yposition/r)= a
sin = opp/hyp
rather than trig angles i will use the coordinates
i need to get the R value with respect to time, so at any given time T i need to find the radius R, i don't know if i should use the eqation of a circle around the fixed point or not because the moons orbit is an ellipse.
As my favorite IT guru says: Lack of planning on your part does not constitute an emergency on my part ! Not a good idea. The moon is known for a long time as approximately at the same distance from the earth and certainly not heading straight toward or away from it.
You left out part 1. For a reason ?
Re 3. Choices: You pick earth and moon. OK, fine. You pick earth as not moving ? Not so good ! Why do you think we have high tide twice a day ?
well i need to start with some velocity because i think i need that as my constant of integration
#### Attachments
• 37.3 KB Views: 784
kuruman
Homework Helper
Gold Member
View attachment 225234
well i need to start with some velocity because i think i need that as my constant of integration
Yes you do, but what is that velocity going to be? If it's not right, your integrated trajectory may predict that the Moon will leave its orbit or (worse for us) collide with the Earth.
isukatphysics69
Yes you do, but what is that velocity going to be? If it's not right, your integrated trajectory may predict that the Moon will leave its orbit or (worse for us) collide with the Earth.
yes i will use the real velocity 3683 kilometers per hour
Ok my partner has went AWOL and has shut off phone I'm all on my own for this doing it right now
Ok i have my dt values which will be every 600 seconds, now heres the thing, i need to integrate the acceleration which is
(Gm1m2)/(R2*m1) = a
but the only thing changing in here is the R value with respect to time. Is there some kind of rotational formula that i should use for the radius of moon around the earth?
So R = sqrt((x2 - x1)2 + ((y2 - y1)2))
My thinking right now is i need some dx and dy values for the moon, put that formula in column F and change the dx and dy with respect to time then integrate the
(Gm1m2)/(R2*m1) = a
#### Attachments
• 29.7 KB Views: 364
BvU
Homework Helper
As they say: dang happens.
From your posting of part 1 it is clear the exercise doesn't want you to have only one body moving. (Good thing I remarked on that)
Earth/moon sytem is well known, so start with (looking up or calcualting) reasonable starting values: radial positions and tangential velocities.
You need to keep track of 2 positions and 2 velocities. Force is radial, so angular momentum is conserved.
isukatphysics69
As they say: **** happens.
From your posting of part 1 it is clear the exercise doesn't want you to have only one body moving. (Good thing I remarked on that)
Earth/moon sytem is well known, so start with (looking up or calcualting) reasonable starting values: radial positions and tangential velocities.
You need to keep track of 2 positions and 2 velocities. Force is radial, so angular momentum is conserved.
bro please see post above. i am keeping earth at a fixed position which was an option
it will be more complicated if both are moving
omg i have to use angular momentum? i just read about that yesterday i hardly understand it
found video
BvU
Homework Helper
Can make do without it for quite a while. If you do everything well, it is a result of your calculations.
isukatphysics69
Can make do without it for quite a while. If you do everything well, it is a result of your calculations.
ok, do you agree with my post 10 thinking?
i need to keep earth in a fixed position or it will get complicated.
i am thinking i do what i say in post 10 and then integrate acceleration with Riemann sum formula.
after i integrate with Riemann sum i will tack on the constant of integration being vvelocity of row above
**** has just hit the fan
BvU
Homework Helper
earth at a fixed position which was an option
I can understand you don't go for the bonus. So: a small moon (dutch word for satellite is 'artificialmoon' when transposed litterally) it is. Only one position (x,y or r, theta) and one velocity (also two components). The motion is in a plane (no forces perpendicular to the ##\vec r, \vec v## plane).
Post 10 is ok, but r(t) alone doesn't cut it.
Think circular motion: orbit is circular because of a constant centripetal force (your gMm/r^2)
Funny how these four stars appear automatically
isukatphysics69
I can understand you don't go for the bonus. So: a small moon (dutch word for satellite is 'artificialmoon' when transposed litterally) it is. Only one position (x,y or r, theta) and one velocity (also two components). The motion is in a plane (no forces perpendicular to the ##\vec r, \vec v## plane).
Post 10 is ok, but r(t) alone doesn't cut it.
Think circular motion: orbit is circular because of a constant centripetal force (your gMm/r^2)
Funny how these four stars appear automatically
ok so you are saying acentripical = (Gm1m2)/(R2*m1)
a centripetal = v^2/r
i could've sworn my prof said not to use a centripetal but chances are i miss undersood him because it makes sense.
dang has hit the fan but my mentality right now is Kobe Bryant 2009 world cup
BvU
Homework Helper
Prof is right: centripetal is a crutch, a result force (in this case from gravity, the only real force present, that you are dealing with by integrating) -- but it's very useful for reverse engineering to get initial conditions that come close to the actual motion ...
isukatphysics69
Prof is right: centripetal is a crutch, a result force (in this case from gravity, the only real force present, that you are dealing with by integrating) -- but it's very useful for reverse engineering to get initial conditions that come close to the actual motion ...
(Gm1m2)/(R2*m1) = acentripital
(Gm1m2)/(R2*m1) = v2/R
(Gm1m2)/(R1*m1) = v2initial
sqrt((Gm1m2)/(R1*m1)) = vinitial
= 32202 km/h
BvU
Homework Helper
Yes. -- provided you use appropriate units for all variables
isukatphysics69
Yes. -- provided you use appropriate units for all variables
Ok i will recheck units later, i am worried about the rienmann sums integration right now, this is my first time using excel
Now i will need to determine my dx and dy values for the moon rotating around the fixed point 0,0
i have my velocity as roughly 32000km/h
i cannot use trig functions here so i think i may have to start with an arbitrary x and y coordinate for the moon. rather than trig function i can use
dx = 32000*(x/R)
dy = 32000*(y/R)
my time intervals are every ten minutes
Currently stuck, thinking
#### Attachments
• 33.9 KB Views: 323
BvU
Homework Helper
As I said, R and v alone are not enough.
i have my velocity as roughly 32000km/h
From $${GMm\over R^2} = {\ mv^2\over R}\quad ???$$
You can check it by calculating how long one month is
Start in the xy plane with moon at ##(R, 0)## velocity ##(0, v_y)##
(with ##v_y## a better value that what you have now )
(the pink ones you stil have to calculate)
and at t = 600:
the new x ##\ ## is the x ##\ ## just above + vx * dt
the new y ##\ ## is the y ##\ ## just above + vy * dt
the new vx is the vx just above + ax * dt
the new vy is the vy just above + ay * dt
and the other ones you calculate
this is called Euler integration. See where you end up after a month....
#### Attachments
• 2 KB Views: 598
isukatphysics69
As I said, R and v alone are not enough.
From $${GMm\over R^2} = {\ mv^2\over R}\quad ???$$
You can check it by calculating how long one month is
Start in the xy plane with moon at ##(R, 0)## velocity ##(0, v_y)##
(with ##v_y## a better value that what you have now )
View attachment 225305
(the pink ones you stil have to calculate)
and at t = 600:
the new x ##\ ## is the x ##\ ## just above + vx * dt
the new y ##\ ## is the y ##\ ## just above + vy * dt
the new vx is the vx just above + ax * dt
the new vy is the vy just above + ay * dt
and the other ones you calculate
this is called Euler integration. See where you end up after a month....
ok let me try again thank you for coming back be right back
my prof said i cannot use angles i must use x/r y/r which is what i think you mean
|
2021-01-26 21:53:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7595416307449341, "perplexity": 960.5766997100435}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704803737.78/warc/CC-MAIN-20210126202017-20210126232017-00399.warc.gz"}
|
https://docs.itascacg.com/itasca900/flac3d/docproject/source/options/dynamic/considerations/considerations.html
|
# Dynamic Modeling Considerations
There are three aspects that the user should consider when preparing a FLAC3D model for a dynamic analysis: 1) dynamic loading and boundary conditions; 2) wave transmission through the model; and 3) mechanical damping. This section provides guidance on addressing each aspect when preparing a FLAC3D data file for dynamic analysis. Solving Dynamic Problems and Verification Problems illustrate the use of most of the features discussed here.
FLAC3D models a region of material subjected to external and/or internal dynamic loading by applying a dynamic input boundary condition at either the model boundary or internal gridpoints. Wave reflections at model boundaries may be reduced by specifying quiet (viscous) or free-field boundary conditions. The types of dynamic loading and boundary conditions are shown schematically in Figure 1; each condition is discussed in the following sections.
### Application of Dynamic Input
In FLAC3D, the dynamic input can be applied in one of the following ways:
• an acceleration history;
• a velocity history;
• a stress (or pressure) history; or
• a force history.
Dynamic input is usually applied to the model boundaries with the zone face apply command. Accelerations, velocities, and forces can also be applied to interior gridpoints by using the zone gridpoint fix command. Note that the free-field boundary, shown in Figure 1, is not required if the only dynamic source is within the model (see Free-Field Boundaries).
The history function for the input is treated as a multiplier on the value specified with the zone face apply command. The history multiplier is assigned with the history keyword and can be in one of two forms:
With table input, the multiplier values and corresponding time values are entered as individual pairs of numbers in the specified table; the first number of each pair is assumed to be a value of dynamic time. The time intervals between successive table entries need not be the same for all entries. The table import command allows files containing time histories (such as earthquake records) to be imported into a specified FLAC3D table. If a FISH function is used to provide the multiplier, the function must access dynamic time within the function, using the FLAC3D scalar variable zone.dynamic.time.total, and compute a multiplier value that corresponds to this time. The example of Shear Wave Propagation in a Vertical Bar provides an example of dynamic loading derived from a FISH function.
Dynamic input can be applied in the $$x$$-, $$y$$-, or $$z$$-direction corresponding to the $$x$$-, $$y$$-, $$z$$-axes for the model, or in the normal and shear directions to the model boundary.
One restriction when applying velocity or acceleration input to model boundaries is that these boundary conditions cannot be applied along the same boundary as a quiet (viscous) boundary condition (compare Figure 1-a to Figure 1-b), since the effect of the quiet boundary would be nullified. To input seismic motion at a quiet boundary, a stress boundary condition is used (i.e., a velocity record is transformed into a stress record and applied to a quiet boundary). A velocity wave may be converted to an applied stress using the formula
(1)$\sigma_n = 2 (\rho\ C_p)\ v_n$
or
(2)$\sigma_s = 2 (\rho\ C_s)\ v_s$
where: $$\sigma_n$$ = applied normal stress; $$\sigma_s$$ = applied shear stress; $$\rho$$ = mass density; $$C_p$$ = speed of $$p$$-wave propagation through medium; $$C_s$$ = speed of $$s$$-wave propagation through medium; $$v_n$$ = input normal particle velocity; and $$v_s$$ = input shear particle velocity.
$$C_p$$ is given by
(3)$C_p = \sqrt{K+4G/3\over\rho}$
and $$C_s$$ is given by
(4)$C_s = \sqrt{G/\rho}$
The formulae assume plane-wave conditions. The factor of two in Equations (1) and (2) accounts for the fact that the applied stress must be double that which is observed in an infinite medium, since half the input energy is absorbed by the viscous boundary. The formulation is similar to that of Joyner and Chen (1975).
The following example illustrates wave input at a quiet boundary:
### Baseline Correction
If a “raw” acceleration or velocity record from a site is used as a time history, the FLAC3D model may exhibit continuing velocity or residual displacements after the motion has finished. This arises from the fact that the integral of the complete time history may not be zero. For example, the idealized velocity waveform in Figure 2a may produce the displacement waveform in Figure 2b when integrated. The process of “baseline correction” should be performed, although the physics of the FLAC3D simulation usually will not be affected if it is not done. It is possible to determine a low frequency wave (for example, Figure 2c) which, when added to the original history, produces a final displacement that is zero (Figure 2d). The low frequency wave in Figure 2c can be a polynomial or periodic function, with free parameters that are adjusted to give the desired results. An example baseline correction function of this type is given as a FISH function in Dynamic Conditions and Simulation.
Baseline correction usually applies only to complex waveforms derived, for example, from field measurements. When using a simple, synthetic waveform, it is easy to arrange the process of generating the synthetic waveform to ensure that the final displacement is zero. Normally, in seismic analysis, the input wave is an acceleration record. A baseline-correction procedure can be used to force both the final velocity and displacement to be zero. Earthquake engineering texts should be consulted for standard baseline correction procedures.
An alternative to baseline correction of the input record is the application of a displacement shift at the end of the calculation if there is a residual displacement of the entire model. This can be done by applying a fixed velocity to the mesh to reduce the residual displacement to zero. This action will not affect the mechanics of the deformation of the model.
Computer codes to perform baseline corrections are available from several websites (for example, http://nsmp.wr.usgs.gov/processing.html).
### Quiet Boundaries
The modeling of geomechanics problems involves media which, at the scale of the analysis, are better represented as unbounded. Deep underground excavations are normally assumed to be surrounded by an infinite medium, while surface and near-surface structures are assumed to lie on a half-space. Numerical methods relying on the discretization of a finite region of space require that appropriate conditions be enforced at the artificial numerical boundaries. In static analyses, fixed or elastic boundaries (e.g., represented by boundary-element techniques) can be realistically placed at some distance from the region of interest. In dynamic problems, however, such boundary conditions cause the reflection of outward propagating waves back into the model, and do not allow the necessary energy radiation. The use of a larger model can minimize the problem, since material damping will absorb most of the energy in the waves reflected from distant boundaries. However, this solution leads to a large computational burden. The alternative is to use quiet (or absorbing) boundaries. Several formulations have been proposed. The viscous boundary developed by Lysmer and Kuhlemeyer (1969) is used in FLAC3D. It is based on the use of independent dashpots in the normal and shear directions at the model boundaries. The method is almost completely effective at absorbing body waves approaching the boundary at angles of incidence greater than 30°. For lower angles of incidence, or for surface waves, there is still energy absorption, but it is not perfect. However, the scheme has the advantage that it operates in the time domain. Its effectiveness has been demonstrated in both finite-element and finite-difference models (Kunar et al. 1977). A variation of the technique proposed by White et al. (1977) is also widely used.
More efficient energy absorption (particularly in the case of Rayleigh waves) requires the use of frequency-dependent elements, which can only be used in frequency-domain analyses (e.g., Lysmer and Waas 1972). These are usually termed “consistent boundaries” and involve the calculation of dynamic stiffness matrices coupling all the boundary degrees-of-freedom. Boundary-element methods may be used to derive these matrices (e.g., Wolf 1985). A comparative study of the performance of different types of elementary, viscous, and consistent boundaries was documented by Roesset and Ettouney (1977).
The quiet-boundary scheme proposed by Lysmer and Kuhlemeyer (1969) involves dashpots attached independently to the boundary in the normal and shear directions. The dashpots provide viscous normal and shear tractions given by
(5)$t_n = - \rho\;C_p\;v_n$
(6)$t_s = - \rho\;C_s\;v_s$
where $$v_n$$ and $$v_s$$, are the normal and shear components of the velocity at the boundary, $$\rho$$ is the mass density, and $$C_p$$ and $$C_s$$ are the $$p$$- and $$s$$-wave velocities.
These viscous terms can be introduced directly into the equations of motion of the gridpoints lying on the boundary. A different approach, however, was implemented in FLAC3D, whereby the tractions $$t_n$$ and $$t_s$$ are calculated and applied at every timestep in the same way boundary loads are applied. This is more convenient than the former approach, and tests have shown that the implementation is equally effective. The only potential problem concerns numerical stability, because the viscous forces are calculated from velocities lagging by half a timestep. In practical analyses to date, no reduction of timestep has been required by the use of the nonreflecting boundaries. Timestep restrictions demanded by small zones are usually more important.
Dynamic analysis starts from some in-situ condition. If a fixed boundary is used while generating the static stress state, this boundary condition can be replaced by quiet boundaries; the boundary gridpoints will be freed, and the boundary reaction forces will beautomatically calculated and maintained throughout the dynamic loading phase. However, changes in static loading during the dynamic phase should be avoided. For example, if a tunnel is excavated after quiet boundaries have been specified on the bottom boundary, the whole model will start to move upward. This is because the total gravity force no longer balances the total reaction force at the bottom that was calculated when the boundary was changed to a quiet one.
Quiet boundary conditions can be applied in the global coordinate directions, or along inclined boundaries, in the normal and shear directions using the zone face apply command with appropriate keywords (quiet-normal, quiet-dip, quiet-strike for one component or quiet for all three components). When using the zone face apply command to install a quiet boundary condition, remember that the material properties used in Equations (5) and (6) are obtained from the zones immediately adjacent to the boundary. Thus, appropriate material properties for boundary zones must be in place at the time the zone face apply command is given, in order for the correct properties of the quiet boundary to be stored.
Quiet boundaries are best-suited when the dynamic source is within a grid. Quiet boundaries should not be used alongside boundaries of a grid when the dynamic source is applied as a boundary condition at the top or base, because the wave energy will “leak out” of the sides. In this situation, free-field boundaries (described below) should be applied to the sides.
### Free-Field Boundaries
Numerical analysis of the seismic response of surface structures such as dams requires the discretization of a region of the material adjacent to the foundation. The seismic input is normally represented by plane waves propagating upward through the underlying material. The boundary conditions at the sides of the model must account for the free-field motion that would exist in the absence of the structure. In some cases, elementary lateral boundaries may be sufficient. For example, if only a shear wave were applied on the horizontal boundary, AC (shown in Figure 3), it would be possible to fix the boundary along AB and CD in the vertical direction only (see the example in Shear Wave Loading of a Model with Free-Field Boundaries). These boundaries should be placed at distances sufficient to minimize wave reflections and achieve free-field conditions. For soils with high material damping, this condition can be obtained with a relatively small distance (Seed et al. 1975). However, when the material damping is low, the required distance may lead to an impractical model. An alternative procedure is to “enforce” the free-field motion in such a way that boundaries retain their non-reflecting properties (i.e., outward waves originating from the structure are properly absorbed). This approach was used in the continuum finite-volume code NESSI (Cundall et al. 1980). A technique of this type involving the execution of free-field calculations in parallel with the main-grid analysis was developed for FLAC3D.
The lateral boundaries of the main grid are coupled to the free-field grid by viscous dashpots to simulate a quiet boundary (see Figure 3), and the unbalanced forces from the free-field grid are applied to the main-grid boundary. Both conditions are expressed in Equations (7), (8), and (9), which apply to the free-field boundary along one side-boundary plane with its normal in the direction of the $$x$$-axis. Similar expressions may be written for the other sides and corner boundaries:
(7)$F_x=-\rho C_p(v_x^m-v_x^{\rm ff}) A + F_x^{\rm ff}$
(8)$F_y=-\rho C_s(v_y^m-v_y^{\rm ff}) A + F_y^{\rm ff}$
(9)$F_z=-\rho C_s(v_z^m-v_z^{\rm ff}) A + F_z^{\rm ff}$
where: $$\rho$$ = density of material along vertical model boundary; $$C_p$$ = $$p$$-wave speed at the side boundary; $$C_s$$ = $$s$$-wave speed at the side boundary; $$A$$ = area of influence of free-field gridpoint; $$v_x^m$$ = $$x$$-velocity of gridpoint in main grid at side boundary; $$v_y^m$$ = $$y$$-velocity of gridpoint in main grid at side boundary; $$v_z^m$$ = $$z$$-velocity of gridpoint in main grid at side boundary; $$v_x^{\rm ff}$$ = $$x$$-velocity of gridpoint in side free field; $$v_y^{\rm ff}$$ = $$y$$-velocity of gridpoint in side free field; $$v_z^{\rm ff}$$ = $$z$$-velocity of gridpoint in side free field; $$F_x^{\rm ff}$$ = free-field gridpoint force with contributions from the $$\sigma_{xx}^{\rm ff}$$ stresses of the free-field zones around the gridpoint; $$F_y^{\rm ff}$$ = free-field gridpoint force with contributions from the $$\sigma_{xy}^{\rm ff}$$ stresses of the free-field zones around the gridpoint; and $$F_z^{\rm ff}$$ = free-field gridpoint force with contributions from the $$\sigma_{xz}^{\rm ff}$$ stresses of the free-field zones around the gridpoint.
In this way, plane waves propagating upward suffer no distortion at the boundary because the free-field grid supplies conditions that are identical to those in an infinite model. If the main grid is uniform, and there is no surface structure, the lateral dashpots are not exercised because the free-field grid executes the same motion as the main grid. However, if the main-grid motion differs from that of the free field (due to, say, a surface structure that radiates secondary waves), then the dashpots act to absorb energy in a manner similar to quiet boundaries.
In order to apply the free-field boundary in FLAC3D, the model must be oriented such that the base is horizontal and its normal is in the direction of the $$z$$-axis, and the sides are vertical and their normals are in the direction of either the $$x$$- or $$y$$-axis. If the direction of propagation of the incident seismic waves is not vertical, then the coordinate axes can be rotated such that the $$z$$-axis coincides with the direction of propagation. In this case, gravity will act at an angle to the $$z$$-axis, and a horizontal free surface will be inclined with respect to the model boundaries.
The free-field model consists of four plane free-field grids on the side boundaries of the model, and four column free-field grids at the corners (see Figure 4). The plane grids are generated to match the main-grid zones on the side boundaries, so that there is a one-to-one correspondence between gridpoints in the free field and the main grid. The four corner free-field columns act as free-field boundaries for the plane free-field grids. The plane free-field grids are two-dimensional calculations that assume infinite extension in the direction normal to the plane. The column free-field grids are one-dimensional calculations that assume infinite extension in both horizontal directions. Both the plane and column grids consist of standard FLAC3D zones, which have gridpoints constrained in such a way as to achieve the infinite extension assumption.
The model should be in static equilibrium before the free-field boundary is applied. The static equilibrium conditions prior to the dynamic analysis are transferred to the free field automatically when the zone dynamic free-field command is invoked. The free-field condition is applied to lateral boundary gridpoints. All zone data (including model types and current state variables) in the model zones adjacent to the free field are copied to the free-field region. Free-field stresses are assigned the average stress of the neighboring grid zone. The dynamic boundary conditions at the base of the model should be specified before applying the free-field. These base conditions are automatically transferred to the free field when the free field is applied. Note that the free field is continuous; if the main grid contains an interface that extends to a model boundary, the interface will not continue into the free field.
After the zone dynamic free-field command is issued, the free-field grid will plot automatically whenever the main grid is plotted.
Any model or nonlinear behavior may exist in the free field, as can fluid coupling and flow within the free field.
The following is a link to a simple example of the free-field boundary in use for the model shown in Figure 4.
### Deconvolution and Selection of Dynamic Boundary Conditions
Design earthquake ground motions developed for seismic analyses are usually provided as outcrop motions, often rock outcrop motions. [1] However, for FLAC3D analyses, seismic input must be applied at the base of the model rather than at the ground surface, as illustrated in Figure 5. The question then arises: “What input motion should be applied at the base of a FLAC3D model in order to properly simulate the design motion?”
The appropriate input motion at depth can be computed through a “deconvolution” analysis using a 1D wave propagation code such as the equivalent-linear program SHAKE or DEEPSOIL. This seemingly simple analysis is often the subject of considerable confusion resulting in improper ground motion input for FLAC3D models. The application of SHAKE for adapting design earthquake motions for FLAC3D input is described. Input of an earthquake motion into FLAC3D is typically done using one of the following:
• A rigid base, where an acceleration-time history is specified at the base of the FLAC3D mesh.
• A compliant base, where a quiet (absorbing) boundary is used at the base of the FLAC3D mesh.
For a rigid base, a time history of acceleration (or velocity or displacement) is specified for gridpoints along the base of the mesh. While simple to use, a potential drawback of a rigid base is that the motion at the base of the model is completely prescribed. Hence, the base acts as if it were a fixed displacement boundary reflecting downward-propagating waves back into the model. Thus, a rigid base is not an appropriate boundary for general application unless a large dynamic impedance contrast is meant to be simulated at the base (e.g., low velocity sediments over high velocity bedrock).
For a compliant base simulation, a quiet boundary is specified along the base of the FLAC3D mesh. See the section on quiet boundaries. Note that if a history of acceleration is recorded at a gridpoint on the quiet base, it will not necessarily match the input history. The input stress-time history specifies the upward-propagating wave motion into the FLAC3D model, but the actual motion at the base will be the superposition of the upward motion and the downward motion reflected back from the FLAC3D model.
SHAKE (Schnabel et al. 1972) is a widely used 1D wave propagation code for site response analysis. SHAKE computes the vertical propagation of shear waves through a profile of horizontal visco-elastic layers. Within each layer, the solution to the wave equation can be expressed as the sum of an upward-propagating wave train and a downward-propagating wave train. The SHAKE solution is formulated in terms of these upward- and downward-propagating motions within each layer, as illustrated in Figure 6.
The relation between waves in one layer and waves in an adjacent layer can be solved by enforcing the continuity of stresses and displacements at the interface between the layers. These well-known relations for reflected and transmitted waves at the interface between two elastic materials (Kolsky 1963) can be expressed in terms of recursion formulas. In this way, the upward- and downward-propagating motions in one layer can be computed from the upward and downward motions in a neighboring layer.
To satisfy the zero shear stress condition at the free surface, the upward- and downward-propagating motions in the top layer must be equal. Starting at the top layer, repeated use of the recursion formulas allows the determination of a transfer function between the motions in any two layers of the system. Thus, if the motion is specified at one layer in the system, the motion at any other layer can be computed.
SHAKE input and output is not in terms of the upward- and downward-propagating wave trains, but in terms of the motions at a) the boundary between two layers, referred to as a “within” motion, or b) at a free surface, referred to as an “outcrop” motion. The within motion is the superposition of the upward- and downward-propagating wave trains. The outcrop motion is the motion that would occur at a free surface at that location. Hence the outcrop motion is simply twice the upward-propagating wave-train motion. If needed, the upward-propagating motion can be computed by taking half the outcrop motion. At any point, the downward-propagating motion can then be computed by subtracting the upward-propagating motion from the within motion.
The SHAKE solution is in the frequency domain, with conversion to and from the time-domain performed with a Fourier transform. The deconvolution analysis discussed below illustrates the application of SHAKE for a linear, elastic case. See Comparison of FLAC3D to SHAKE for a Layered, Linear-Elastic Soil Deposit. SHAKE can also address nonlinear soil behavior approximately, through the equivalent linear approach. Analyses are run iteratively to obtain shear modulus and damping values for each layer that is compatible with the computed effective strain for the layer. See Comparison of FLAC3D to SHAKE for a Layered, Nonlinear-Elastic Soil.
DEEPSOIL (Hashssh et al 2017) is a one-dimensional site response analysis program that can perform: a) 1-D nonlinear time domain analyses with and without pore water pressure generation, b) 1-D equivalent linear frequency domain analyses including convolution and deconvolution, and c) 1-D linear time and frequency domain analyses.
Deconvolution for a Rigid Base — The deconvolution procedure for a rigid base is illustrated in Figure 7 for a two-dimensional FLAC simulation. The same procedure also applies to FLAC3D. The goal is to determine the appropriate base input motion to FLAC such that the target design motion is recovered at the top surface of the FLAC model. The profile modeled consists of three 20-m thick elastic layers with shear wave velocities and densities as shown in the figure. The SHAKE model includes the three elastic layers and an elastic half-space with the same properties as the bottom layer. The FLAC model consists of a column of linear elastic elements. The target earthquake is input at the top of the SHAKE column as an outcrop motion. Then, the motion at the top of the half-space is extracted as a within motion and is applied as an acceleration-time history to the base of the FLAC model. Mejia and Dawson (2006) show that the resulting acceleration at the surface of the FLAC model is virtually identical to the target motion. The SHAKE within motion is appropriate for rigid-base input because, as described above, the within motion is the actual motion at that location, the superposition of the upward- and downward-propagating waves.
Deconvolution for a Compliant Base — The deconvolution procedure for a compliant base is illustrated in Figure 8 for a FLAC simulation. Again, the same procedure applies to FLAC3D. The SHAKE and FLAC models are identical to those for the rigid body exercise, except that a quiet boundary is applied to the base of the FLAC mesh. For application through a quiet base, the upward-propagating wave motion (1/2 the outcrop motion) is extracted from SHAKE at the top of the half-space. This acceleration-time history is integrated to obtain a velocity, which is then converted to a stress history using Equation (4). Again, the resulting acceleration at the surface of the FLAC model is shown by Mejia and Dawson (2006) to be virtually identical to the target motion. As an additional check of the computed accelerations, they also show that the response spectra for both the compliant-base and rigid-base cases closely match the response spectra of the target motion.
For a deconvolution analysis, Idriss (2019 ASCE Terzaghi lecture) found that it is useful to get the strain-compatible properties by complete ting the analysis with a low cut-off frequency (say about 5 Hz, depending on the level of shaking), then using the resulting strain-compatible modulus and damping values for one iteration and desired cut-off frequency (typically 20 to 30 Hz).
Although useful for illustrating the basic ideas behind deconvolution, the previous example is not the typical case encountered in practice. The situation shown in Figure 9, where one or more soil layers (expected to behave nonlinearly) overlay bedrock (assumed to behave linearly), is more common. A FLAC or FLAC3D model for this case will usually include the soil layers and an elastic base of bedrock. To compute the correct FLAC3D compliant base input, a SHAKE model is constructed as shown in the figure. The SHAKE model includes a bedrock layer equal in thickness to the elastic base of the FLAC3D mesh, and an underlying elastic half-space with bedrock properties. The target motion is input to the SHAKE model as an outcrop motion at the top of the bedrock (point A). Designating this motion as outcrop means that the upward-propagating wave motion in the layer directly below point A will be set equal to 1/2 the target motion. The upward-propagating motion for input to FLAC3D is extracted at point B as 1/2 the outcrop motion.
For the compliant-base case, there is actually no need to include the soil layers in the SHAKE model, as these will have no effect on the upward-propagating wave train between points A and B. In fact, for this simple case, it is not really necessary to perform a formal deconvolution analysis, as the upward-propagating motion at point B will be almost identical to that at point A. Apart from an offset in time, the only differences will be due to material damping between the two points, which will generally be small for bedrock. Thus, for this very common situation, the correct input motion for FLAC3D is simply 1/2 of the target motion. (Note that the upward-propagating wave motion must be converted to a stress-time history using Equation (4), which includes a factor of 2 to account for the stress absorbed by the viscous dashpots.)
For a rigid-base analysis, the within motion at point B is required. Since this within motion incorporates downward-propagating waves reflected off the ground surface, the nonlinear soil layers must be included in the SHAKE model. However, soil nonlinearity will be modeled quite differently in FLAC3D and SHAKE. Thus, it is difficult to compute the appropriate FLAC3D input motion for a rigid-base analysis.
Another typical case encountered in practice is illustrated in Figure 10. Here, the soil profile is deep, and rather than extending the FLAC3D mesh all the way down to bedrock, the base of the model ends within the soil profile. Note that the mesh must be extended to a depth below which the soil response is essentially linear. Again, the design motion is input at the top of the bedrock (point A) as an outcrop motion, and the upward-propagating motion for input to FLAC3D is extracted at point B. As in the previous example, for a compliant-base analysis there is no need to include the soil layers above point B in the SHAKE model. These layers have no effect on the upward-propagating motion between points A and B. Unlike the previous case, the upward-propagating motion can be quite different at points A and B, depending on the impedance contrast between the bedrock and linear soil layer. Thus, it is not appropriate to skip the deconvolution analysis and use the target motion directly.
A rigid base is only appropriate for cases with a large impedance contrast at the base of the model. However, the use of SHAKE to compute the required input motion for a rigid base of a FLAC3D model leads to a good match between the target surface motion and the surface motion computed by FLAC3D only for a model that exhibits a low level of nonlinearity. The input motion already contains the effect of all layers above the base, because it contains the downward-propagating wave.
A different approach must be taken if a FLAC3D model with a rigid base is used to simulate more realistic systems (e.g., sites that exhibit strong nonlinearity, or the effect of a surface or embedded structure). In the first case, the real nonlinear response is not accounted for by SHAKE in its estimate of base motion. In the second case, secondary waves from the structure will be reflected from the rigid base, causing artificial resonance effects.
A compliant base is almost always the preferred option because downward-propagating waves are absorbed. In this case, the quiet-base condition is selected, and only the upward-propagating wave from SHAKE is used to compute the input stress history. By using the upward-propagating wave only at a quiet FLAC3D base, no assumptions need to be made about secondary waves generated by internal nonlinearities or structures within the grid, because the incoming wave is unaffected by these; the outgoing wave is absorbed by the compliant base.
Although the presence of reflections from a rigid base is not always obvious in complex nonlinear FLAC3D analyses, they can have a major impact on analysis results, especially when cyclic-degradation or liquefaction-soil models are employed. Mejia and Dawson (2006) present examples from two-dimensional FLAC simulations that illustrate the nonphysical wave reflections calculated in models with a rigid base. One example, shown in Figure 11, demonstrates the difficulty with a rigid boundary. The nonphysical oscillations that result from a rigid base are shown by comparison to results for a compliant base in Figure 12. The inputs in both cases (rigid and compliant) were derived by deconvoluting the same surface motion.
## Hydrodynamic Pressures
The dynamic interaction between water in a reservoir and a concrete dam can have a significant influence on the performance of the dam during an earthquake. Westergaard (1933) established a mathematical basis for procedures to represent this interaction, and this approach is commonly used in engineering practice. Although the advent of computers has enabled numerical solution of coupled differential equations of fluid-structure systems, the formula proposed by Westergaard is widely used for stability analysis of smaller dams and preliminary calculations in the design of large dams.
The hydrodynamic pressure acting on a rigid concrete dam over a reservoir height, $$H$$, is depicted in Figure 13. The pressure can be derived from the equation of motion for a fluid. The equation of motion for a fluid with small Reynold’s number can be written as
(10)$c^2 \left( \frac{\delta^2 \Phi}{\delta x_1^2} + \frac{\delta^2 \Phi}{\delta x_2^2} \right) = \frac{\delta^2 \Phi}{\delta t^2}$
where $$c$$ is the speed of sound in water and $$\Phi$$ is the velocity potential. The water pressure can be written as a function of the velocity potential:
$p = \rho_w \frac{\delta \Phi}{\delta t}$
where $$\rho_w$$ is the density of water.
• The water is assumed to be incompressible, which reduces Equation (10) to the Laplace equation: $$\frac{\delta^2 \Phi}{\delta x_1^2} + \frac{\delta^2 \Phi}{\delta x_2^2} = 0$$.
• The free surface of the reservoir is assumed to be at rest. Thus, $$\frac{\delta \Phi}{\delta t} = 0$$ at $$x_2 = H$$.
• The reservoir is assumed to be infinitely long. Therefore, as $$x_1 \rightarrow \infty, \Phi \rightarrow 0$$.
• Hydrodynamic motion is assumed to be horizontal only: $$\frac{\delta \Phi}{x_2} = 0$$ at $$x_2 = 0$$.
• The upstream face of the dam is vertical and the dam is rigid: $$\frac{\delta \Phi}{x_1} = f(t)$$ at $$x_1 = 0$$.
The solution of Equation (10) with the preceding assumptions can be obtained for an arbitrary acceleration, $$\ddot{x}_0 (t)$$, in the form of an infinite Fourier series:
(11)$p(0,x_2,t) = 8 \ddot{x}_0 \rho_w H \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{((2n-1)\pi)^2}\;e^{\frac{-(2n-1)x_1}{4H}} \cos\big(\frac{(2n-1)x_2}{4H})$
Equation (11) can be approximated as
(12)$p(0,x_2,t) = \rho_w \ddot{x}_0(t) H \frac{C_m}{2} \left( 1 - \frac{x^2_2}{H^2} + \sqrt{1-\frac{x^2_2}{H^2}} \right)$
where $$C_m$$ = 0.743 and $$\ddot{x}$$ is the horizontal acceleration at the dam face.
Equation (12) is implemented in FLAC3D by adjusting the grid point mass on the upstream face of the dam to account for the hydrodynamic pressure. The equivalent pressure, $$\bar{p}$$, resulting from the inertial forces associated with the gridpoint and the hydrodynamic pressure of the water in the reservoir, averaged over the area associated with the gridpoint, can be written as
$\bar{p}(0,x_2,t) = \rho_e\;c\;\ddot{x}_0 (t) \frac{A_g}{\Delta x_2}$
where $$A_g$$ is the area associated with the gridpoint, $$\Delta x_2$$ is the contact length on the upstream face of the dam through which the water load is applied for the gridpoint, and $$\rho_ec$$ is the equivalent density of the gridpoint and is given by
$\rho_ec = \rho_c + \rho_sc$
where
(13)$\rho_sc = \rho_w \frac{H \delta x_2}{A_g} \frac{C_m}{2} \left( 1 - \frac{x^2_2}{H_2} + \sqrt{1 - \frac{x^2_2}{H^2}} \right)$
$$\rho_c$$ is the density of concrete such that the gridpoint mass is given by $$m_g = A_g \rho_c$$. The scaled gridpoint mass, $$m_sg = A_g \rho_ec$$, is used only for the motion calculation in the horizontal direction; the effect of the increase mass does not influence the vertical forces.
The gridpoint mass is adjusted by adding the term (as determined by Equation (13)) to account for the hydrodynamic pressure. This adjustment is stored in a location that is modifiable using the FISH gridpoint function gp.mass.add. The conditions are applied using the zone face westergaard command.
The following simple example is provided that illustrates the effect of hydrodynamic pressures and compares the Westergaard approximation to the result of explicitly modeling the water using FLAC3D zones.
## Wave Transmission
### Accurate Wave Propagation
Numerical distortion of the propagating wave can occur in a dynamic analysis as a function of the modeling conditions. Both the frequency content of the input wave and the wave speed characteristics of the system will affect the numerical accuracy of wave transmission. Kuhlemeyer and Lysmer (1973) show that, for accurate representation of wave transmission through a model, the spatial element size, $$\Delta l$$, must be smaller than approximately one-tenth to one-eighth of the wavelength associated with the highest frequency component of the input wave. For instance,
(14)$\Delta l \le {\lambda \over 10}$
where $$\lambda$$ is the wavelength associated with the highest frequency component that contains appreciable energy.
### Filtering
For dynamic input with a high peak velocity and short rise time, the Kuhlemeyer and Lysmer requirement may necessitate a very fine spatial mesh and a corresponding small timestep. The consequence is that reasonable analyses may be prohibitively time- and memory-consuming. In such cases, it may be possible to adjust the input by recognizing that most of the power for the input history is contained in lower-frequency components (e.g., use “fft.fis” in “datafiles\FISH\Library”). By filtering the history and removing high frequency components, a coarser mesh may be used without significantly affecting the results.
The filtering procedure can be accomplished with a low-pass filter routine such as the fast Fourier transform technique. For example, the unfiltered velocity record shown in Figure 14 represents a typical waveform containing a very high frequency spike. The highest frequency of this input exceeds 50 Hz but, as shown by the power spectral density plot of Fourier amplitude versus frequency (Figure 15), most of the power (approximately 99%) is made up of components of frequency 15 Hz or lower. It can be inferred, therefore, that by filtering this velocity history with a 15 Hz low-pass filter, less than 1% of the power is lost. The input filtered at 15 Hz is shown in Figure 16, and the Fourier amplitudes are plotted in Figure 17. The difference in power between unfiltered and filtered input is less than 1%, while the peak velocity is reduced 38%, and the rise time is shifted from 0.035 to 0.09 seconds. Analyses should be performed with input at different levels of filtering to evaluate the influence of the filter on model results.
If a simulation is run with an input history that violates Equation (14), the output will contain spurious “ringing” (superimposed oscillations) that are nonphysical. The input spectrum must be filtered before being applied to a FLAC3D grid. This limitation applies to all numerical models in which a continuum is discretized; it is not just a characteristic of FLAC3D. Any discretized medium has an upper limit to the frequencies that it can transmit, and this limit must be respected for the results to be meaningful. Users of time-domain codes commonly apply sharp pulses or step waveforms to the numerical grid; this is not acceptable, since these waveforms have spectra that extend to infinity. It is a simple matter to apply, instead, a smooth pulse that has a limited spectrum. Alternatively, artificial viscosity may be used to spread sharp wave-fronts over several zones (see Artificial Viscosity), but this method strictly only applies to isotropic strain components.
Endnotes
|
2023-03-24 19:59:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5985243320465088, "perplexity": 832.6244182738708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945288.47/warc/CC-MAIN-20230324180032-20230324210032-00508.warc.gz"}
|
https://docs.zhinst.com/shfqa_user_manual/readout_pulse_generator.html
|
The Readout Pulse Generator generates the Readout pulses in the Qubit Readout mode. It consists of a Waveform Memory that stores up to 16 arbitrary waveforms (8 for the basic 2-channel SHFQA), and a Sequencer. The Sequencer not only controls the playback of the Readout Pulses, but also has control over the Qubit Measurement Unit, and the DIO and ZSync. All nodes that are used to program the Readout Pulse Generator can be found under /DEV…./QACHANNELS/n/GENERATOR/….
## Sequencer
### Features Overview
• Separate Sequencers for each Readout Channel.
• Control over all available complex-valued Waveform Memories (32 kSa for 8 memories, 64 kSa for 16 memories)
• Sequence branching
• Control over Qubit Measurement Unit
• Interface to DIO and ZSync
• High-level programming language
### Description
The Sequencer can be considered the central control unit of the SHFQA as it has access to the playback of the Waveform Memories to generate Readout Pulses, the start of the Integration of the Readout Signals from the experiment and the communication with additional devices through the DIO or ZSync. The programming language SeqC is based on C and specified in detail in SeqC language. In contrast to other AWG Sequencers, e.g. from the HDAWG, it does not provide writing access to the Waveform Memories and hence does not come with predefined waveforms. The stricter separation between Sequencer and Waveform Memory allows implementation of more advanced and application-specific features, e.g. Staggered Readout, while still providing hard real-time sequencing.
The Sequencer features a compiler which translates the high-level sequence program (SeqC) into machine instructions to be stored in the instrument sequencer memory. The sequence program is written using high-level control structures and syntax that are inspired by human language, whereas machine instructions reflect exactly what happens on the hardware level. Writing the sequence program using SeqC represents a more natural and efficient way of working in comparison to writing lists of machine instructions, which is the traditional way of programming AWGs. Concretely, the improvements rely on features such as:
• Waveform playback and sequencing in a single script
• Easily readable syntax and naming for run-time variables and constants
• Definition of user functions and procedures for advanced structuring
• Syntax validation
By design, there is no one-to-one link between the list of statements in the high-level language and the list of instructions executed by the Sequencer. In order to understand the execution timing, it’s helpful to consider the internal architecture of the Readout Pulse Generator, consisting of the Sequencer itself, and the Waveform Memory including a Waveform Player.
### Sequencer Operation
To operate the sequencer, an AWG-module first needs to be instantiated, e.g. through the a Python API:
awgModule = daq.awgModule()
awgModule.set('device', _device_name)
awgModule.set('index', qa_channel)
awgModule.execute()
After defining a SeqC program (here as awg_program) the sequence program can be uploaded to the device and the upload status checked using:
awgModule.set('compiler/sourcestring', awg_program)
print(awgModule.getString('compiler/statusstring'))
See the Tutorials or the "AWG module" in the Programming Manual for more information on how to use awgModule.
As soon as the Ready node is true, the compilation is successful and the program is transferred to the device. If the compilation fails, the Status node will display debug messages.
After successful uploading of a sequence to the instrument, the Sequencer can be started using the Enable node.
If the Sequencer should listen to a Trigger Input Signal, it can either directly wait for a ZSync Trigger, or access the Hardware Trigger Engine through two digital Auxiliary Triggers.
All nodes for the Sequencer can be accessed through the /DEV…./QACHANNELS/n/GENERATOR/… and /DEV…./QACHANNELS/n/GENERATOR/SEQUENCER/… node trees.
### SeqC
The syntax of the LabOne AWG Sequencer programming language is based on C, but with a few simplifications. Each statement is concluded with a semicolon, several statements can be grouped with curly brackets, and comment lines are identified with a double slash.
The following example shows some of the fundamental functionalities: repeated playback, triggering, and single/dual-channel waveform playback and readout. See Tutorials for a step-by-step introduction with more examples.
// repeat sequence 100 times
repeat (100) {
//play the pulse stored in Waveform Memory 0 and read out using Integration Weight 0
startQA(QA_GEN_0, QA_INT_0|QA_INT_1, true, 0, 0x0);
//wait 10 Sequencer samples
wait(10);
// wait for a Trigger over ZSync
waitZSyncTrigger()
//activate Generators 0,1, and 2 and readout with all Integration Weights
startQA(QA_GEN_0|QA_GEN_1|QA_GEN_2, QA_INT_ALL, true, 0, 0x0);
}
The following table lists the keywords used in the LabOne AWG Sequencer language.
Table 1. Programming keywords
Keyword Description
const
Constant declaration
var
Integer variable declaration
cvar
Compile-time variable declaration
string
Constant string declaration
true
Boolean true constant
false
Boolean false constant
for
For-loop declaration
while
While-loop declaration
repeat
Repeat-loop declaration
if
If-statement
else
Else-part of an if-statement
switch
Switch-statement
case
Case-statement within a switch
default
Default-statement within a switch
return
Return from function or procedure, optionally with a return value
The following code example shows how to use comments.
const a = 10; // This is a line comment. Everything between the double
// slash and the end of the line will be ignored.
/* This is a block comment. Everything between the start-of-block-comment and end-of-block-comment markers is ignored.
For example, the following statement will be ignored by the compiler.
const b = 100;
*/
#### Constants and Variables
Constants may be used to make the program more readable. They may be of integer or floating-point type. It must be possible for the compiler to compute the value of a constant at compile time, i.e., on the host computer. Constants are declared using the const keyword.
Compile-time variables may be used in computations and loop iterations during compile time, e.g. to create large numbers of waveforms in a loop. They may be of integer or floating-point type. They are used in a similar way as constants, except that they can change their value during compile time operations. Compile-time variables are declared using the cvar keyword.
Variables may be used for making simple computations during run time, i.e., on the instrument. The Sequencer supports integer variables, addition, and subtraction. Not supported are floating-point variables, multiplication, and division. Typical uses of variables are to step waiting times, to output DIO values, or to tag digital measurement data with a numerical identifier. Variables are declared using the var keyword.
The following code example shows how to use variables.
var b = 100; // Create and initialize a variable
// Repeat the following block of statements 100 times
repeat (100) {
b = b + 1; // Increment b
wait(b); // Wait 'b' cycles
}
The following table shows the predefined constants. These constants are intended to be used as arguments in certain run-time evaluated functions that encode device parameters with integer numbers.
Table 2. Mathematical Constants
Name Value Description
M_E
2.71828182845904523536028747135266250
e
M_LOG2E
1.44269504088896340735992468100189214
log2(e)
M_LOG10E
0.434294481903251827651128918916605082
log10(e)
M_LN2
0.693147180559945309417232121458176568
loge(2)
M_LN10
2.30258509299404568401799145468436421
loge(10)
M_PI
3.14159265358979323846264338327950288
pi
M_PI_2
1.57079632679489661923132169163975144
pi/2
M_PI_4
0.785398163397448309615660845819875721
pi/4
M_1_PI
0.318309886183790671537767526745028724
1/pi
M_2_PI
0.636619772367581343075535053490057448
2/pi
M_2_SQRTPI
1.12837916709551257389615890312154517
2/sqrt(pi)
M_SQRT2
1.41421356237309504880168872420969808
sqrt(2)
M_SQRT1_2
0.707106781186547524400844362104849039
1/sqrt(2)
Numbers can be expressed using either of the following formatting.
const a = 10; // Integer notation
const b = -10; // Negative number
const bin = 0b10101; // Binary integer
const f = 0.1e-3; // Floating point number.
const not_float = 10e3; // Not a floating point number
Booleans are specified with the keywords true and false. Furthermore, all numbers that evaluate to a nonzero value are considered true. All numbers that evaluate to zero are considered false.
Strings are delimited using "" and are interpreted as constants. Strings may be concatenated using the + operator.
string AWG_PATH = "awgs/0/";
string AWG_GAIN_PATH = AWG_PATH + "gains/0";
#### Waveform Playback and Predefined Functions
The following table contains the definition of functions for waveform playback and other purposes.
void setDIO(var value)
Writes the value as a 32-bit value to the DIO bus.
The value can be either a const or a var value. Configure the Mode setting in the DIO tab when using this command. The DIO interface speed of 50 MHz limits the rate at which the DIO output value is updated.
Parameter
• value: The value to write to the DIO (const or var)
var getDIO()
Reads a 32-bit value from the DIO bus.
Return
var getDIOTriggered()
Reads a 32-bit value from the DIO bus as recorded at the last DIO trigger position.
Return
void setTrigger(var value)
Sets the Sequencer Trigger output signal.
Allowed parameter values are 0 or 1. For higher integer values, only the least-significant bit will have an effect.
Parameter
• value: to be written to the trigger distribution unit
void wait(var cycles)
Waits for the given number of Sequencer clock cycles (4 ns per cycle). The execution of the instruction adds an offset of 2 clock cycles, i.e., the statement wait(3) leads to a waiting time of 5 * 4 ns = 20 ns.
Note: the minimum waiting time amounts to 3 cycles, which means that wait(0) and wait(1) will both result in a waiting time of 3 * 4 ns = 12 ns.
Parameter
• cycles: number of cycles to wait
Waits until the masked trigger input is equal to the given value.
Parameter
• mask: mask to be applied to the input signal
• value: value to be compared with the trigger input
void waitDIOTrigger()
Waits until the DIO interface trigger is active. The trigger is specified by the Strobe Index and Strobe Slope settings in the AWG Sequencer tab.
var getDigTrigger(const index)
Gets the state of the indexed Digital Trigger input (1 or 2).
The physical signal connected to the Digital Trigger input is to be configured in the Readout section of the Quantum Analyzer Setup tab.
Parameter
• index: index of the Digital Trigger input to be read; can be either 1 or 2
Return
trigger state, either 0 or 1
void error(string msg,…)
Throws the given error message when reached.
Parameter
• msg: Message to be displayed
void info(string msg,…)
Returns the specified message when reached.
Parameter
• msg: Message to be displayed
void playZero(var samples)
Zero Playback, which can be used to specify spacings in number of samples between the execution times of commands, such as startQA. Each playZero command blocks the execution of subsequent commands when a previous Zero Playback is already running. Note: the playback of actual waveforms with the startQA command happens in parallel to the Zero Playback, in contrast to the HDAWG and SHFSG!
Parameter
• samples: Number of samples for the spacing. The minimal spacing is 32 samples and the granularity is 16 samples.
void playZero(var samples, const rate)
Zero Playback, which can be used to specify spacings in number of samples between the execution times of commands, such as startQA. Each playZero command blocks the execution of subsequent commands when a previous Zero Playback is already running. Note: the playback of actual waveforms with the startQA command happens in parallel to the Zero Playback, in contrast to the HDAWG and SHFSG!
Parameter
• rate: Sample rate with which the spacing is specified. Divides the device sample rate by 2^rate. Note: this rate does not affect the sample rate of the QA waveform generator (startQA command).
• samples: Number of samples for the spacing. The minimal spacing is 32 samples and the granularity is 16 samples.
void waitDigTrigger(const index)
Waits for the reception of a trigger signal on the indexed Digital Trigger (index 1 or 2). The physical signals connected to the two AWG Digital Triggers are to be configured in the Trigger sub-tab of the AWG Sequencer tab. The Digital Triggers are configured separately for each AWG Core.
Parameter
• index: Index of the digital trigger input; can be either 1 or 2.
var getZSyncData(const data_type)
Read the last received message on ZSync. The argument specify which data the function should return.
Parameter
• data_type: Specifies which data the function should return: ZSYNC_DATA_RAW: Return the data received on the ZSync as-is without parsing. The structure of the message can change across different LabOne releases.
Return
void waitZSyncTrigger()
Waits for a trigger over ZSync.
Start the QA signal generation, readout and monitor.
Parameter
• monitor: Enable for QA monitor, default: false
• result_address: Set address associated with result, default: 0x0
• trigger: Trigger value, default: 0x0
• waveform_generator_mask: Waveform generator enable mask
• weighted_integrator_mask: Integration unit enable mask, default: QA_INT_ALL
#### Expressions
Expressions may be used for making computations based on mathematical functions and operators. There are two kinds of expressions: those evaluated at compile time (when the sequencer program is compiled on the computer), and those evaluated at run time.
Compile-time evaluated expressions only involve constants (const) or compile-time variables (cvar) and can be computed at compile time by the host computer. Such expressions can make use of standard mathematical functions and floating point arithmetic.
Run-time evaluated expressions involve variables (var) and are evaluated by the Sequencer on the instrument. Due to the limited computational capabilities of the Sequencer, these expressions may only operate on integer numbers and there are less operators available than at compile time.
The following table contains the list of mathematical functions supported at compile time.
Table 3. Mathematical Functions
Function Description
const abs(const c)
absolute value
const acos(const c)
inverse cosine
const acosh(const c)
hyperbolic inverse cosine
const asin(const c)
inverse sine
const asinh(const c)
hyperbolic inverse sine
const atan(const c)
inverse tangent
const atanh(const c)
hyperbolic inverse tangent
const cos(const c)
cosine
const cosh(const c)
hyperbolic cosine
const exp(const c)
exponential function
const ln(const c)
logarithm to base e (2.71828…)
const log(const c)
logarithm to the base 10
const log2(const c)
logarithm to the base 2
const log10(const c)
logarithm to the base 10
const sign(const c)
sign function -1 if x<0; 1 if x>0
const sin(const c)
sine
const sinh(const c)
hyperbolic sine
const sqrt(const c)
square root
const tan(const c)
tangent
const tanh(const c)
hyperbolic tangent
const ceil(const c)
smallest integer value not less than the argument
const round(const c)
round to nearest integer
const floor(const c)
largest integer value not greater than the argument
const avg(const c1, const c2,…)
mean value of all arguments
const max(const c1, const c2,…)
maximum of all arguments
const min(const c1, const c2,…)
minimum of all arguments
const pow(const base, const exp)
first argument raised to the power of second argument
const sum(const c1, const c2,…)
sum of all arguments
The following table contains the list of predefined mathematical constants. These can be used for convenience in compile-time evaluated expressions.
Table 4. Predefined Constants
Name Value Description
AWG_RATE_2000MHZ
0
Constant to set Sampling Rate to 2.0 GHz.
AWG_RATE_1000MHZ
1
Constant to set Sampling Rate to 1.0 GHz.
AWG_RATE_500MHZ
2
Constant to set Sampling Rate to 500 MHz.
AWG_RATE_250MHZ
3
Constant to set Sampling Rate to 250 MHz.
AWG_RATE_125MHZ
4
Constant to set Sampling Rate to 125 MHz.
AWG_RATE_62P5MHZ
5
Constant to set Sampling Rate to 62.5 MHz.
AWG_RATE_31P25MHZ
6
Constant to set Sampling Rate to 31.25 MHz.
AWG_RATE_15P63MHZ
7
Constant to set Sampling Rate to 15.63 MHz.
AWG_RATE_7P81MHZ
8
Constant to set Sampling Rate to 7.81 MHz.
AWG_RATE_3P9MHZ
9
Constant to set Sampling Rate to 3.9 MHz.
AWG_RATE_1P95MHZ
10
Constant to set Sampling Rate to 1.95 MHz.
AWG_RATE_976KHZ
11
Constant to set Sampling Rate to 976 kHz.
AWG_RATE_488KHZ
12
Constant to set Sampling Rate to 488 kHz.
AWG_RATE_244KHZ
13
Constant to set Sampling Rate to 244 kHz.
DEVICE_SAMPLE_RATE
<actual device sample rate>
ZSYNC_DATA_RAW
0
Constant to use as argument to getZSyncData.
#### Control Structures
Functions may be declared using the var keyword. Procedures may be declared using the void keyword. Functions must return a value, which should be specified using the return keyword. Procedures can not return values. Functions and procedures may be declared with an arbitrary number of arguments. The return keyword may also be used without arguments to return from an arbitrary point within the function or procedure. Functions and procedures may contain variable and constant declarations. These declarations are local to the scope of the function or procedure.
var function_name(argument1, argument2, ...) {
// Statements to be executed as part of the function.
return constant-or-variable;
}
void procedure_name(argument1, argument2, ...) {
// Statements to be executed as part of the procedure.
// Optional return statement
return;
}
An if-then-else structure is used to create a conditional branching point in a sequencer program.
// If-then-else statement syntax
if (expression) {
// Statements to execute if 'expression' evaluates to 'true'.
} else {
// Statements to execute if 'expression' evaluates to 'false'.
}
// If-then-else statement short syntax
(expression)?(statement if true):(statement if false)
// If-then-else statement example
const REQUEST_BIT = 0x0001;
const ACKNOWLEDGE_BIT = 0x0002;
const IDLE_BIT = 0x8000;
var dio = getDIO();
if (dio & REQUEST_BIT) {
dio = dio | ACKNOWLEDGE_BIT;
setDIO(dio);
} else {
dio = dio | IDLE_BIT;
setDIO(dio);
}
A switch-case structure serves to define a conditional branching point similarly to the if-then-else statement, but is used to split the sequencer thread into more than two branches. Unlike the if-then-else structure, the switch statement is synchronous, which means that the execution time is the same for all branches and determined by the execution time of the longest branch. If no default case is provided and no case matches the condition, all cases will be skipped. The case arguments need to be of type const.
// Switch-case statement syntax
switch (expression) {
case const-expression:
expression;
...
default:
expression;
}
// Switch-case statement example
switch (getDIO()) {
case 0:
startQA(QA_GEN_0, QA_INT_0, true, 0, 0x0);
case 1:
startQA(QA_GEN_1, QA_INT_1, true, 0, 0x0);
case 2:
startQA(QA_GEN_2, QA_INT_2, true, 0, 0x0);
default:
startQA(QA_GEN_3, QA_INT_3, true, 0, 0x0);
}
The for loop is used to iterate through a code block several times. The initialization statement is executed before the loop starts. The end-expression is evaluated at the start of each iteration and determines when the loop should stop. The loop is executed as long as this expression is true. The iteration-expression is executed at the end of each loop iteration. Depending on how the for loop is set up, it can be either evaluated at compile time or at run time. For a run-time evaluated for loop, use the var data type as a loop index. To ensure that a loop is evaluated at compile time, use the cvar data type as a loop index. Furthermore, the compile-time for loop should only contain waveform generation/editing operations and it can’t contain any variables of type var.
The following code example shows both versions of the loop.
// For loop syntax
for (initialization; end-expression; iteration-expression) {
// Statements to execute while end-expression evaluates to true
}
// For loop example (compile-time execution)
cvar i;
wave w_pulses;
for (i = 0; i < 10; i = i + 1) {
startQA(QA_GEN_0<<1, QA_INT_0, true, 0, 0x0);
}
// For loop example (run-time execution)
var k;
var j;
for (j = 9; j >= 0; j = j - 1) {
startQA(QA_GEN_0, QA_INT_0, true, 0, 0x0);
k += j;
}
The while loop is a simplified version of the for loop. The end-expression is evaluated at the start of each loop iteration. The contents of the loop are executed as long as this expression is true. Like the for loop, this loop comes in a compile-time version (if the end-expression involves only cvar and const) and in a run-time version (if the end-expression involves also var data types).
// While loop syntax
while (end-expression) {
// Statements to execute while end-expression evaluates to true
}
// While loop example
const STOP_BIT = 0x8000;
var run = 1;
var i = 0;
var dio = 0;
while (run) {
dio = getDIO();
run = dio & STOP_BIT;
dio = dio | (i & 0xff);
setDIO(dio);
i = i + 1;
}
The repeat loop is a simplified version of the for loop. It repeats the contents of the loop a fixed number of times. In contrast to the for loop, the repetition number of the repeat loop must be known at compile time, i.e., const-expression can only depend on constants and not on variables. Unlike the for and the while loop, this loop comes only in a run-time version. Thus, no cvar data types may be modified in the loop body.
// Repeat loop syntax
repeat (constant-expression) {
// Statements to execute
}
// Repeat loop example
repeat (100) {
setDIO(0x1);
wait(10);
setDIO(0x0);
wait(10);
}
## Waveform Memory
The Waveform Memory stores the different complex-valued arbitrary waveforms that are used to readout the qubits. They can be accessed through /DEV…./QACHANNELS/n/GENERATOR/WAVEFORMS/n/WAVE and have a maximal length of 4096 samples and a vertical range between -1 and 1 relative to the full scale of the Output Range.
|
2021-10-18 05:00:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24382483959197998, "perplexity": 4981.949389832992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00187.warc.gz"}
|
https://www.slendr.net/reference/subtract.html
|
Generate the difference between two slendr objects
## Usage
subtract(x, y, name = NULL)
## Arguments
x
Object of the class slendr
y
Object of the class slendr
name
Optional name of the resulting geographic region. If missing, name will be constructed from the function arguments.
## Value
Object of the class slendr_region which encodes a standard spatial object of the class sf with several additional attributes (most importantly a corresponding slendr_map object, if applicable).
|
2023-01-31 03:08:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2532433569431305, "perplexity": 7411.277486404349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00168.warc.gz"}
|
http://codereview.stackexchange.com/questions/42406/which-java-string-value-is-preferred-to-return-blank-or-null
|
# Which Java string value is preferred to return: blank or null?
I came across this snippet and am wondering if I should return blank in this case.
Which is more practical to return: blank or null?
public String getInfo() {
String result = "";
String uri = clientConfiguration.getUri();
String regex = "http://(.*)/consolidate";
Pattern pattern = Pattern.compile(regex);
Matcher matcher = pattern.matcher(uri);
if (matcher.find()) {
result = matcher.group(1);
}
return result;
}
Edit: Should I change it to:
if(matcher.find()) {
return matcher.group(1);
}
return null;
-
It is entirely dependent on the context (ie, callers); what is more, your regex is seriously flawed. Hint: use the URI class to parse URIs in general (and this includes URLs) – fge Feb 21 '14 at 10:33
The question is too generic in its current form to be answered as a code review; it may be appropriate for Programmers SE. Could you provide more information about what you are trying to accomplish, and what the code that will be calling this function will do with the result? – 200_success Feb 21 '14 at 10:42
Sorry my question is not so clear. Actually I would like to rewrite the last 4 lines as this: if(matcher.find()) { return matcher.group(1); } return null; The thing I wonder is Should I? – Truong Ha Feb 21 '14 at 10:45
The general question you ask "Which is better, "" or null?" is off-topic for CodeReview, but your code snippet has a number of reviewable items....
## Regex Usage
Compiled Pattern performance
String regex = "http://(.*)/consolidate";
Pattern pattern = Pattern.compile(regex);
Matcher matcher = pattern.matcher(uri);
if(matcher.find()){
result = matcher.group(1);
}
This snippet makes it look like you know what you are doing, with the Pattern compile, etc. But, there is no performance benefit in the way you have done this. Compiled Java Pattern instances are thread-safe, and compiling them for one-time-use is not useful. It is common practice to make the Pattern a static-final field:
private static final Pattern MYPATTERN = Pattern.compile("http://(.*)/consolidate");
Then you can re-use that compiled pattern as much as you like, in any method, in any thread, like:
Matcher matcher = MYPATTERN.matcher(uri);
if(matcher.find()){
result = matcher.group(1);
}
find vs. matches
Now, matcher.find() and matcher.matches() are different methods.
• find() will scan the input looking for any point inside the input where the pattern will match....
• matches() does just one scan, and it matches the entire input string against the entire pattern.
With the input junkhttp://a/consolidate/junk:
• find() will find your pattern
• matches() will not find your pattern
The pattern
Now, as for the actual Regex... it appears that you want your pattern to match the HTTP 'host' against which you have the 'consolidate' path.... but, your pattern will match a lot of things which I would consider to be unexpected... for example, your pattern will return the following:
http://myhost/consolidate from http://http://myhost/consolidate/consolidate
myhost/consolidate from http://myhost/consolidate/consolidate
myhost:8080 from http://myhost:8080/consolidate
from http:///consolidate
Each of the above input values will produce unexpected results.
## Correct Regex
There is not a correct regex for matching URL's.... even if a regex appears that it will match, it is still not the correct solution ;-)
## Solution
Use the java.net.URI class to validate your input. I have an example here, where I choose to return null if there is no configured URI, or throw an exception if there is a configured URI and that URI is not a valid value. This would be 'sensible' for many configurations, I expect.
public String getInfo() {
String urival = clientConfiguration.getUri();
if (urival == null || urival.isEmpty()) {
return null;
}
try {
URI uri = new URI(urival);
return uri.getHost();
} catch (URISyntaxException e) {
throw new IllegalStateException("The configured URI is not valid: " + e.getMessage(), e);
}
}
-
"There is not a correct regex for matching URL's..." I'm sure there is one somewhere out there which gets very close, but it most likely looks similar to the E-Mail address verification regular expression. – Bobby Feb 21 '14 at 13:51
@Bobby indeed, if the url language is regular then there will be a regex that accepts it – ratchet freak Feb 21 '14 at 15:21
Just to note: That doesn't mean that I think you should us regular expressions for that...you shouldn't...seriously... – Bobby Feb 21 '14 at 15:35
I agree with fge, it depends on the context. A third approach: it might be better to throw an exception in that case.
Another note: you don't need to declare the result variable at the beginning of the method, declare on the first use. Actually, you could completely get rid of it:
if (matcher.find()) {
return matcher.group(1);
}
return "";
(See also: Effective Java, Second Edition, Item 45: Minimize the scope of local variables It has a good overview on the topic.)
-
I would compress it even further, with return matcher.find()? matcher.group(1):"";. – AJMansfield Feb 21 '14 at 16:39
As "getInfo" does not hint at something functional or even crucial - only displaying the info, its usage should not fail with a NullPointerException or need to catch a thrown not-found exception. So neither return null here, nor throw an IllegalStateException, but return an empty string.
Better however would be to return an informative string "(Info missing)".
Should the result of getInfo() being used instead of only displayed, throw an exception. In this way the usage can be skipped. This already might be the case, if you embed the result is a surrounding text message. A comparable strategy is done for the hideous NumberFormatException.
Returning null, like in JDBC for field values, requires discipline by the user, and makes only sense in a context where more such "optional" data can be retrieved (= the null does not come unsuspected).
For good order: In Java 8 there also is the class Optional to indicate the optional presence of an object, and getting the object in a next step. (Overkill here)
-
Really, does every return value need to be assigned to a value? The code would be far clearer if you removed these spurious variables. result should be removed as well; just use two return statements. And for such a small piece of code, I would also suggest shortening matcher, both to avoid confusion with Matcher, and to allow further statements to be more concise.
/**
* @return The info, or an empty string if there is no info.
*/
public String getInfo() {
Matcher m = Pattern.compile("http://(.*)/consolidate")
.matcher(clientConfiguration.getUri());
return m.find()?m.group(1):"";
}
Some people insist one must use a full if statement, regardless of the occasion. In any case, don't use a result variable if returns will do, like this:
if(m.find()) return m.group(1);
return "";
As for returning "" or null, as long as you document it in your javadoc (with an @return), either way is fine.
-
+1 overall, but 1-liner if(m.find()) return m.group(1); means you can't put a debug breakpoint on the return. – rolfl Feb 21 '14 at 17:01
@rolfl although you can put a breakpoint on the whole thing, and then step through it from there, if needed, or just stick a line break in. – AJMansfield Feb 21 '14 at 17:05
If your question is whether you could refactor the code to return null instead of "", without knowing more about your code, I would say no.
Changing the return value from "" to null may break existing code calling your method, and expects it to always return a string value.
-
It is always better to return null, in place of blank if you are not getting any info. Your method name goes to getInfo(). Returning blank means you get some information but it is blank. But returning null means you did not get any information at all.
It is all upon you to decide what exactly you want to deduce from the returned value.
return null;
-
|
2015-06-30 03:32:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2910107970237732, "perplexity": 2706.307342853954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375091587.3/warc/CC-MAIN-20150627031811-00223-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://nroer.gov.in/55ab34ff81fccb4f1d806025/file/560a5a3081fccb5282ea79bd
|
Can you See the Pattern?:
Next
Chapter 07 of Math - Magic, the Mathematics textbook for class 05
License:[Source NCERT ] May 24, 2016, 10:30 p.m.
|
2021-04-17 21:36:32
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8411030769348145, "perplexity": 3728.0846782985436}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464045.54/warc/CC-MAIN-20210417192821-20210417222821-00608.warc.gz"}
|
https://aimath.org/pastworkshops/kroncoeff.html
|
# Combinatorics and complexity of Kronecker coefficients
November 3 to November 7, 2014
at the
American Institute of Mathematics, San Jose, California
organized by
Igor Pak, Greta Panova, and Ernesto Vallejo
## Original Announcement
This workshop will be devoted to the study of Kronecker coefficients which describe the decomposition of tensor products of irreducible representations of a symmetric group into irreducible representations. We concentrate on their combinatorial interpretation, computational aspects and applications to other fields.
The workshop will focus on:
• Finding combinatorial interpretation for the Kronecker coefficients. In terms of complexity theory this amounts to working on resolving whether the problem $KRON$ is in $\#P$. The aim will be to use complexity theory to find evidence for or against that.
• Determining the complexity of deciding the problem $KP$ of positivity of the Kronecker coefficients. Mulmuley's conjecture states that $KP$ is in $P$. The goal will be to either prove this conjecture or else show that, for example, $KP$ is $NP$--hard.
• Resolving combinatorial special cases. Among them are proving the Saxl conjecture that for every large enough symmetric group has an irreducible representation whose tensor square contains every irreducible representation as a constituent. Other interesting combinatorial aspects include the application of Kronecker coefficients to solving combinatorial problems of different origins, specifically proving unimodality results.
## Material from the workshop
A list of participants.
The workshop schedule.
A report on the workshop activities.
Papers arising from the workshop:
Proof of Stembridge's conjecture on stability of Kronecker coefficients
by Steven V Sam and Andrew Snowden, J. Algebraic Combin. 43 (2016), no. 1, 1-10 MR3439297
Membership in moment polytopes is in NP and coNP
by Peter Bürgisser, Matthias Christandl, Ketan D. Mulmuley and Michael Walter, SIAM J. Comput. 46 (2017), no. 3, 972–991 MR3662037
|
2022-07-01 01:52:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7355209589004517, "perplexity": 1012.2899295748201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103917192.48/warc/CC-MAIN-20220701004112-20220701034112-00756.warc.gz"}
|
https://archive.lib.msu.edu/crcmath/math/math/s/s381.htm
|
## Skewness
The degree of asymmetry of a distribution. If the distribution has a longer tail less than the maximum, the function has Negative skewness. Otherwise, it has Positive skewness. Several types of skewness are defined. The Fisher Skewness is defined by
(1)
where is the third Moment, and is the Standard Deviation. The Pearson Skewness is defined by
(2)
The Momental Skewness is defined by
(3)
The Pearson Mode Skewness is defined by
(4)
Pearson's Skewness Coefficients are defined by
(5)
and
(6)
The Bowley Skewness (also known as Quartile Skewness Coefficient) is defined by
(7)
where the s denote the Interquartile Ranges. The Momental Skewness is
(8)
An Estimator for the Fisher Skewness is
(9)
where the s are k-Statistic. The Standard Deviation of is
(10)
See also Bowley Skewness, Fisher Skewness, Gamma Statistic, Kurtosis, Mean, Momental Skewness, Pearson Skewness, Standard Deviation
References
Abramowitz, M. and Stegun, C. A. (Eds.). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing. New York: Dover, p. 928, 1972.
Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; and Vetterling, W. T. Moments of a Distribution: Mean, Variance, Skewness, and So Forth.'' §14.1 in Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed. Cambridge, England: Cambridge University Press, pp. 604-609, 1992.
|
2021-10-16 05:35:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645465612411499, "perplexity": 3389.3964909043907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583423.96/warc/CC-MAIN-20211016043926-20211016073926-00168.warc.gz"}
|
http://docs.simpeg.xyz/content/api/generated/SimPEG.maps.ComboMap.html
|
# SimPEG.maps.ComboMap#
class SimPEG.maps.ComboMap(maps, **kwargs)[source]#
Combination mapping constructed by joining a set of other mappings.
A ComboMap is a single mapping object made by joining a set of basic mapping operations by chaining them together, in order. When creating a ComboMap, the user provides a list of SimPEG mapping objects they wish to join. The order of the mappings in this list is from last to first; i.e. $$[\mathbf{f}_n , ... , \mathbf{f}_2 , \mathbf{f}_1]$$.
The combination mapping $$\mathbf{u}(\mathbf{m})$$ that acts on a set of input model parameters $$\mathbf{m}$$ is defined as:
$\mathbf{u}(\mathbf{m}) = f_n(f_{n-1}(\cdots f_1(f_0(\mathbf{m}))))$
Note that any time that you create your own combination mapping, be sure to test that the derivative is correct.
Parameters
mapslist of SimPEG.maps.IdentityMap
A list of SimPEG mapping objects. The ordering of the mapping objects in the list is from last applied to first applied!
Examples
Here we create a combination mapping that 1) projects a single scalar to a vector space of length 5, then takes the natural exponent.
>>> import numpy as np
>>> from SimPEG.maps import ExpMap, Projection, ComboMap
>>> nP1 = 1
>>> nP2 = 5
>>> ind = np.zeros(nP1, dtype=int)
>>> projection_map = Projection(nP1, ind)
>>> projection_map.shape
(5, 1)
>>> exp_map = ExpMap(nP=5)
>>> exp_map.shape
(5, 5)
Recall that the order of the mapping objects is from last applied to first applied.
>>> map_list = [exp_map, projection_map]
>>> combo_map = ComboMap(map_list)
>>> combo_map.shape
(5, 1)
>>> m = np.array([2.])
>>> combo_map * m
array([7.3890561, 7.3890561, 7.3890561, 7.3890561, 7.3890561])
Attributes
nP Number of parameters the mapping acts on. shape Dimensions of the mapping.
Methods
deriv(m[, v]) Derivative of the mapping with respect to the input parameters.
## Galleries and Tutorials using SimPEG.maps.ComboMap#
Maps: ComboMaps
Maps: ComboMaps
3D DC inversion of Dipole Dipole array
3D DC inversion of Dipole Dipole array
2D inversion of Loop-Loop EM Data
2D inversion of Loop-Loop EM Data
Tensor Meshes
Tensor Meshes
Cylindrical Meshes
Cylindrical Meshes
Tree Meshes
Tree Meshes
Joint PGI of Gravity + Magnetic on an Octree mesh using full petrophysical information
Joint PGI of Gravity + Magnetic on an Octree mesh using full petrophysical information
Joint PGI of Gravity + Magnetic on an Octree mesh without petrophysical information
Joint PGI of Gravity + Magnetic on an Octree mesh without petrophysical information
2.5D DC Resistivity and IP Least-Squares Inversion
2.5D DC Resistivity and IP Least-Squares Inversion
3D Least-Squares Inversion of DC and IP Data
3D Least-Squares Inversion of DC and IP Data
Least-Squares 1D Inversion of Sounding Data
Least-Squares 1D Inversion of Sounding Data
Sparse 1D Inversion of Sounding Data
Sparse 1D Inversion of Sounding Data
Parametric 1D Inversion of Sounding Data
Parametric 1D Inversion of Sounding Data
2.5D DC Resistivity Least-Squares Inversion
2.5D DC Resistivity Least-Squares Inversion
2.5D DC Resistivity Inversion with Sparse Norms
2.5D DC Resistivity Inversion with Sparse Norms
3D Least-Squares Inversion of DC Resistivity Data
3D Least-Squares Inversion of DC Resistivity Data
|
2022-12-02 10:17:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3031170666217804, "perplexity": 11319.992524975109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710900.9/warc/CC-MAIN-20221202082526-20221202112526-00341.warc.gz"}
|
https://tex.stackexchange.com/questions/530556/equation-left-alignment-and-centered-numbering
|
# Equation Left Alignment and centered numbering
I have the following equation and ended up feeling defeated after trying several environments to make it the way I want. To have the Bigg[ and \Bigg], I changed my preferred $$\begin{split}...\end{split}$$ to the align environment. This made one thing better and everything else horrible. I need to have the equation numbers in the center and left aligned equations for all lines (indeed, I could use \qquad to shift the lines coming after the first to make it look better). And, yes, I use a two column IEEEtrans document.
\begin{align}
\min \sum_{v\in \mathcal{V}} C^{AV}y_v + \sum_{\substack{a\in \mathcal{A},\\v\in \mathcal{V}}}
\Bigg[ C^{FUEL}T_{o_a d_a}x_{av}
\notag\\
+ \left(C^{FUEL}+C^{ZOV}\right)u_{av}
\notag\\
+C^{TAXI}\left(1-x_{av}\right)
\notag\\
+C^{PARK}_{d_a}\left(q_{av}-g_{av}\right)h_{av}\notag\\
+C^{PARK}_0\left(q_{av}-g_{av}\right)\left(1-h_{av}\right)\Bigg]
\notag\\
+\sum_{a\in \mathcal{A}}\left(C^{EARLY}e_a +C^{LATE}l_a\right).
\end{align}
Based on the accepted answer, I made some modifications and could achieve what I wanted to have. For future reference, here is the code:
\label[eq]{obj_fn} \begin{alignedat}{2} \min\phantom{ + } & \sum_{v\in \mathcal{V}} C^\textrm{AV}y_v + \smash[b]{\sum_{\substack{a\in \mathcal{A},\\v\in \mathcal{V}}}} \Bigl[C^\textrm{FUEL}T_{o_a d_a}x_{av}\\[2ex] +~ &\left(C^\textrm{FUEL}+C^\textrm{ZOV}\right)u_{av}\\[1ex] +~ & C^\textrm{TAXI}\left(1-x_{av}\right)\\[1ex] +~ & C^\textrm{PARK}_{d_a}\left(q_{av}-g_{av}\right)h_{av} \\[1ex] +~ & C^\textrm{PARK}_0\left(q_{av}-g_{av}\right)\left(1-h_{av}\right)\Bigr] \\[1ex] +~ & \sum_{a\in \mathcal{A}}\mathrlap{\left(C^\textrm{EARLY}e_a +C^\textrm{LATE}l_a\right).} \end{alignedat}
A proposition with alignedat. I used only \Big brackets, which I think are large enough. Inside these brackets, I grouped some lines. Last, words used as indices of exponents should be treated as roman text, to have the proper spacing between the letters in the words (a word is not a product of variables)
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\begin{alignedat}{2} \min\phantom{ + } & \sum_{v\in \mathcal{V}} C^\textrm{AV}y_v & + \smash[b]{\sum_{\substack{a\in \mathcal{A},\\v\in \mathcal{V}}}} \Bigl[ &C^\textrm{FUEL}T_{o_a d_a}x_{av} + \left(C^\textrm{FUEL}+C^\textrm{ZOV}\right)u_{av}\\[-1ex] & & &{} +C^\textrm{TAXI}\left(1-x_{av}\right) +C^\textrm{PARK}_{d_a}\left(q_{av}-g_{av}\right)h_{av} \\ & & &{} +C^\textrm{PARK}_0\left(q_{av}-g_{av}\right)\left(1-h_{av}\right)\Bigr] \\ + & \sum_{a\in \mathcal{A}}\mathrlap{\left(C^\textrm{EARLY}e_a +C^\textrm{LATE}l_a\right).} \end{alignedat}
\end{document}
I'd preserve the inner structure, with one part per summation (the second one split across lines)
\documentclass{IEEEtran}
\usepackage{amsmath}
\usepackage{lipsum} % for mock text
\begin{document}
\lipsum[1][1-3]
\begin{aligned} \min & \sum_{v\in \mathcal{V}} C^{AV}y_v \\ & +\sum_{\substack{a\in \mathcal{A},\\v\in \mathcal{V}}} \Bigl[\begin{aligned}[t] & C^{\mathrm{FUEL}}T_{o_a d_a}x_{av} \\ & + (C^{\mathrm{FUEL}}+C^{\mathrm{ZOV}})u_{av} \\ & + C^{\mathrm{TAXI}}(1-x_{av}) \\ & + C^{\mathrm{PARK}}_{d_a}(q_{av}-g_{av})h_{av} \\ & + C^{\mathrm{PARK}}_0(q_{av}-g_{av})(1-h_{av})\Bigr] \end{aligned} \\ & +\sum_{a\in \mathcal{A}}(C^{\mathrm{EARLY}}e_a +C^{\mathrm{LATE}}l_a). \end{aligned}
\lipsum
\end{document}
If you adopt Times also for the math material, you're possibly able to squeeze more material in a line:
\documentclass{IEEEtran}
\usepackage{amsmath}
\usepackage{newtxtext,newtxmath}
\usepackage{lipsum} % for mock text
\begin{document}
\lipsum[1][1-3]
\begin{aligned} \min & \sum_{v\in \mathcal{V}} C^{AV}y_v \\ & +\sum_{\substack{a\in \mathcal{A},\\v\in \mathcal{V}}} \Bigl[\begin{aligned}[t] & C^{\mathrm{FUEL}}T_{o_a d_a}x_{av} + (C^{\mathrm{FUEL}}+C^{\mathrm{ZOV}})u_{av} \\ & + C^{\mathrm{TAXI}}(1-x_{av}) + C^{\mathrm{PARK}}_{d_a}(q_{av}-g_{av})h_{av} \\ & + C^{\mathrm{PARK}}_0(q_{av}-g_{av})(1-h_{av})\Bigr] \end{aligned} \\ & +\sum_{a\in \mathcal{A}}(C^{\mathrm{EARLY}}e_a +C^{\mathrm{LATE}}l_a). \end{aligned}
\lipsum
\end{document}
In any case, the textual superscript should be in upright type, like C^{\mathrm{FUEL}}.
• I liked your inner structure solution. It looks much better. Thanks! – user8028576 Feb 29 at 21:25
• It's a pity that the first summation is so short; but two-column formatting is hard to cope with. – egreg Feb 29 at 21:27
|
2020-07-16 00:33:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 5, "x-ck12": 0, "texerror": 0, "math_score": 0.9999845027923584, "perplexity": 7704.933158365213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657176116.96/warc/CC-MAIN-20200715230447-20200716020447-00078.warc.gz"}
|
https://www.ias.ac.in/listing/articles/pram/087/01
|
• Volume 87, Issue 1
July 2016
• Editorial
• General editorial on publication ethics
• Energy distribution of cosmic rays in the Earth’s atmosphere and avionic area using Monte Carlo codes
Cosmic rays cause significant damage to the electronic equipments of the aircrafts. In this paper, we have investigated the accumulation of the deposited energy of cosmic rays on the Earth’s atmosphere, especially in the aircraft area. In fact, if a high-energy neutron or proton interacts with a nanodevice having only a few atoms, this neutron or proton particle can change the nature of this device and destroy it. Our simulation based on Monte Carlo using Geant4 code shows that the deposited energy of neutron particles ranging between 200MeV and 5 GeV are strongly concentrated in the region between 10 and 15 km from the sea level which is exactly the avionic area. However, the Bragg peak energy of proton particle is slightly localized above the avionic area.
• Influence of nuclear dissipation on fission dynamics of the excited nucleus $^{248}$Cf within a stochastic approach
A stochastic approach to fission dynamics based on two-dimensional Langevin equations was applied to calculate the anisotropy of the fission fragments angular distribution and average pre-scission neutron multiplicities for the compound nucleus 248Cf formed in the {16}$O+$^{232}$Th reactions. Postsaddle nuclear dissipation strength of$(12–14) \times 10^{21} s^{−1}$was extracted for Cf nucleus by fitting the results of calculations with the experimentaldata. Furthermore, it was found that the results of calculations for the anisotropy of the fission fragments angular distribution and pre-scission neutron multiplicities are very sensitive to the magnitude of post-saddle nucleardissipation. • Significance of power average of sinusoidal and non-sinusoidal periodic excitations in nonlinear non-autonomous system Additional sinusoidal and different non-sinusoidal periodic perturbations applied to the periodically forced nonlinear oscillators decide the maintainance or inhibitance of chaos. It is observed that the weak amplitude of the sinusoidal force without phase is sufficient to inhibit chaos rather than the other non-sinusoidal forces and sinusoidal force with phase. Apart from sinusoidal force without phase, i.e., from various non-sinusoidal forces and sinusoidal force with phase, square force seems to be an effective weak perturbation to suppress chaos. The effectiveness of weak perturbation for suppressing chaos is understood with the total power average of the external forces applied to the system. In any chaotic system, the total power average of the external forces isconstant and is different for different nonlinear systems. This total power average decides the nature of the force to suppress chaos in the sense of weak perturbation. This has been a universal phenomenon for all the chaoticnon-autonomous systems. The results are confirmed by Melnikov method and numerical analysis. With the help of the total power average technique, one can say whether the chaos in that nonlinear system is to be supppressed or not. • Effect of barium doping on the physical properties of zinc oxide nanoparticles elaborated via sonochemical synthesis method The aim of this work is to study the effect of barium (Ba) doping on the optical, morphological and structural properties of ZnO nanoparticles. Undoped and Ba-doped ZnO have been successfully synthesized via sonochemical method using zinc nitrate, hexamethylenetetramine (HMT) and barium chloride as startingmaterials. The structural characterization by XRD and FTIR shows that ZnO nanoparticles are polycrystalline with a standard hexagonal ZnO wurtzite crystal structure. Decrease in lattice parameters from diffraction data shows the presence of Ba$^{2+}$in the ZnO crystal lattice. The morphology of the ZnO nanoparticles has been determined by scanning electron microscopy (SEM). Incorporation of Ba was confirmed from the elemental analysis using EDX. Optical analysis depicted that all samples exhibit an average optical transparency over 80%, in the visible range. Room-temperature photoluminescence (PL) spectra detected a strong ultraviolet emission at 330 nmand two weak emission bands were observed near 417 and 560 nm. Raman spectroscopy analysis of Ba-doped samples reveals the successful doping of Ba ions in the host ZnO. • Stability analysis and quasinormal modes of Reissner–Nordstrøm space-time via Lyapunov exponent We explicitly derive the proper-time (τ ) principal Lyapunov exponent (λp) and coordinate-time (t ) principal Lyapunov exponent$(\lambda_c)$for Reissner–Nordstrøm (RN) black hole (BH). We also compute their ratio. For RN space-time, it is shown that the ratio is$(\lambda_{p}/\lambda_{c}) = r_{0}/\sqrt{r^{2}0 − 3Mr_{0} + 2Q^{2}}$for time-like circulargeodesics and for Schwarzschild BH, it is$(\lambda_{p}/\lambda_{c}) = \sqrt{r_{0}}/\sqrt{r_{0} − 3M}. We further show that their ratio $\lambda_{p}/\lambda_{c}$ may vary from orbit to orbit. For instance, for Schwarzschild BH at the innermost stable circular orbit (ISCO), the ratio is $(\lambda_{p}/\lambda_{c})_{|rISCO}=6M = \sqrt{2}$ and at marginally bound circular orbit (MBCO) the ratio is calculated to be $(\lambda_{p}/\lambda_{c})|_{rmb}=4M = 2$. Similarly, for extremal RN BH, the ratio at ISCO is $(\lambda_{p}/\lambda_{c})|_{rISCO}=4M = 2\sqrt{2}/\sqrt{3}$. We also further analyse the geodesic stability via this exponent. By evaluating the Lyapunov exponent, it is shown that in the eikonal limit, the real and imaginary parts of the quasinormal modes of RN BH is given by the frequency and instability time-scale of the unstable null circular geodesics.
• Experimental study of soft X-ray intensity with different anode tips in Amirkabir plasma focus device
To study the effect of different anode tip geometries on the intensity of soft X-rays emitted from a 4 kJ plasma focus device (PFD), we considered five different anode tips which were cylindrical-flat, cylindricalhollow, spherical-convex, cone-flat and cone-hollow tips. BPX-65 PIN diodes covered by four different filters are used to register the intensity of soft X-rays. The use of cone-flat anode tip has augmented the emitted X-ray three times compared to the conventional cylindrical-flat anode.
• Lifetime measurements in the yrast band of the gamma-soft nuclei $^{131}$Ce and $^{133}$Pr
Lifetimes of excited states in the yrast band of the gamma-soft nuclei $^{131}$Ce and 133Pr have been measured using the recoil distance Doppler shift and Doppler shift attenuation methods. The yrast bands in $^{131}$Ce and $^{133}$Pr are based on odd decoupled neutron $νh_{11/2}$ high $\Omega$ and proton $\pi h_{11/2}$ low $\Omega$ orbitals, respectively. Thetriaxiality parameter extracted from the experimentally deduced values of transition quadrupole moments, within the framework of cranked Hartree–Fock–Bogoliubov (CHFB) and total Routhian surface (TRS) calculatons, is$\gamma ~ −80{^o}$ for the band in $^{131}$Ce at high spins, while for the band in $^{133}$Pr, the value of $\gamma$ is close to $0^{o}$. Thisagrees well with the $\gamma$ shape polarization property of high and low $\Omega_{11/2}$ orbitals in these gamma-soft nuclei.
• The dependence of scattering length on van derWaals interaction and reduced mass of the system in two-atomic collision at cold energies
The static exchange model (SEM) and the modified static exchange model (MSEM) recently introduced by Ray in {\it Pramana – J. Phys.} 83, 907 (2014) are used to study the elastic collision between two hydrogen-like atoms when both are in ground states by considering the system as a four-body Coulomb system in the centre of mass frame, in which all the Coulomb interaction terms in direct and exchange channels are treated exactly. The SEM includes the non-adiabatic short-range effect due to electron exchange. The MSEM added init, the long-range effect due to induced dynamic dipole polarizabilities between the atoms e.g., the van der Waals interaction. Applying the SEM code in different H-like two-atomic systems, a reduced mass $(\mu)$ dependence on the scattering length is observed. Again, applying the MSEM code on H(1s)–H(1s) elastic scattering and varying the minimum values of interatomic distance $R_0$, the dependence of scattering length on the effective interatomic potential consistent with the existing physics is observed. Both these basic findings in low and cold energy atomic collision physics are quite useful and are being reported for the first time.
• Shock wave propagation in soda lime glass using optical shadowgraphy
Propagation of shock waves in soda lime glass, which is a transparent material, has been studied using the optical shadowgraphy technique. The time-resolved shock velocity information has been obtained (1) in single shot, using the chirped pulse shadowgraphy technique, with a temporal resolution of tens of picoseconds and (2) in multiple shots, using conventional snapshot approach, with a second harmonic probe pulse. Transient shock velocities of $(5–7) \times 10^{6}$ cm/s have been obtained. The scaling of the shock velocity with intensity in the $2 \times 10^{13}–10^{14}$ W/cm$^2$ range has been obtained. The shock velocity is observed to scale with laser intensity as $I^{0.38}$. The present experiments also show the presence of ionization tracks, generated probably due to X-ray hotspots from small-scale filamentation instabilities. The results and various issues involved in these experiments are discussed
• Quantum mechanics of $PT$ and non-$PT$ -symmetric potentials in three dimensions
With a view of exploring new vistas with regard to the nature of complex eigenspectra of a non-Hermitian Hamiltonian, the quasi-exact solutions of the Schrödinger equation are investigated for a shifted harmonic potential under the framework of extended complex phase-space approach. Analyticity property ofthe eigenfunction alone is found sufficient to throw light on the nature of the eigenvalues and eigenfunctions of a system. Explicit expressions of eigenvalues and eigenfunctions for the ground state as well as excited state including their $PT$-symmetric version are worked out.
• Effect of dust size distribution and dust charge fluctuation on dust ion-acoustic shock waves in a multi-ion dusty plasma
The effects of dust size distribution and dust charge fluctuation of dust grains on the small but finite amplitude nonlinear dust ion-acoustic shock waves, in an unmagnetized multi-ion dusty plasma which contains negative ions, positive ions and electrons, are studied in this paper. A Burgers equation and its stationary solutions are obtained by using the reductive perturbation method. The analytical and numerical results show that the height with polynomial dust size distribution is larger than that of the monosized dusty plasmas with the same dustgrains, but the thickness in the case of different dust grains is smaller than that of the monosized dusty plasmas. Furthermore, the moving speed of the shock waves also depend on different dust size distributions.
• Measurement of attenuation cross-sections of some fatty acids in the energy range 122–1330 keV
The mass attenuation coefficients $(\mu m)$ have been measured for undecylic acid (C$_{11}$H$_{22}$O$_2$), lauric acid (C$_{12}$H$_{24}$O$_2$), tridecylic acid (C$_{13}$H$_{26}$O$_2$), myristic acid (C$_{14}$H$_{28}$O$_2$), pentadecylic acid (C$_{15}$H$_{30}$O$_2$) andpalmitic acid (C$_{16}$H$_{32}$O$_2$) using $^{57}$Co, $^{133}$Ba, $^{137}$Cs, $^{60}$Co and $^{22}$Na emitted γ radiation with energies 122, 356,511, 662, 1170, 1275 and 1330 keV, respectively. The accurate values of the effective atomic number (Zeff), atomic cross-section $(\sigma t,)$, electronic cross-section $(\sigma e)$ and the effective electron density (Neff) have great significance in radiation protection and dosimetry. These quantities were obtained by utilizing experimentally measured values of mass attenuation coefficients $(\mu m)$. A NaI(Tl) scintillation detector with 8.2% (at 662 keV) resolution was used for detecting of attenuated γ -photons. The variation in Zeff and Neff of fatty acids with energy is discussed. The experimental and theoretical results are in good agreement within 2% deviation.
• Similarities between 2D and 3D convection for large Prandtl number
Using direct numerical simulations of Rayleigh–Bénard convection (RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close similarities between the 2D and 3D RBC, in particular, the kinetic energy spectrum $E^{u}(k) ∼ k^{−13/3}$, and the entropy spectrum exhibits a dual branch with a dominant $k^{−2}$ spectrum. We showed that the dominant Fourier modes in 2D and 3D flows are very close. Consequently, the 3D RBC is quasi-two-dimensional, which is the reason for the similarities between the 2D and 3D RBC for large and infinite Prandtl numbers.
• Signature splitting in two quasiparticle rotational bands of $^{180,182}$Ta
The signature splittings in $K^{\pi} =1^{+}: 7/2[404]_{\pi}\bigotimes 9/2[624]_{\nu}$ , $K^{\pi} =0^{−}: 9/2[514]_{\pi}\bigotimes 9/2[624]_{\nu}$ bands of $^{180}$Ta and $K^{\pi} = 0^{−}: 7/2[404]_{\pi}\bigotimes 7/2[503]_{\nu}$ , $K_{\pi} = 1^{−}: 5/2[402]_{\pi}\bigotimes 3/2[512]_{\nu$ , $K^{\pi} = 1^{+}: 7/2[404]_{\pi}\bigotimes9/2[624]_{\nu}$ bands of $^{182}$Ta are analysed within the framework of two-quasiparticle rotor model. The phase as well as magnitudeof the experimentally observed signature splitting in $K^{\pi} = 1^{+}$ band of $^{180}$Ta, which could not be explained in earlier calculations, is successfully reproduced. The conflict regarding placement of a 12$^+$ level in $K^{\pi} = 1^{+}: 7/2+[404]_{\pi}\bigotimes 9/2+[624]_{\nu}$ ground-state rotational band of $^{180}$Ta is resolved and tentative nature of $K^{\pi} = 0^{−}:7/2[404]_{\pi}\bigotimes7/2[503]_{\nu}$ ,$K^{\pi} = 1^{+}: 7/2[404]_{\pi}\bigotimes 9/2[624]_{\nu}$ bands observed in $^{182}$Ta is confirmed. As a future prediction for experimentalists, these two-quasiparticle structures observed in $^{180}$Ta and $^{182}$Ta are extended to higherspins.
• The modified simple equation method for solving some fractional-order nonlinear equations
Nonlinear fractional differential equations are encountered in various fields of mathematics, physics, chemistry, biology, engineering and in numerous other applications. Exact solutions of these equations play a crucial role in the proper understanding of the qualitative features of many phenomena and processes in various areas of natural science. Thus, many effective and powerful methods have been established and improved. In this study, we establish exact solutions of the time fractional biological population model equation and nonlinearfractional Klein–Gordon equation by using the modified simple equation method.
• Entanglement dynamics of two interacting qubits under the influence of local dissipation
We investigate the dynamics of entanglement given by the concurrence of a two-qubit system in the non-Markovian setting. A quantum master equation is derived, which is solved in the eigenbasis of the system Hamiltonian for X-type initial states. A closed formula for time evolution of concurrence is presented for a pure state. It is shown that under the influence of dissipation non-zero entanglement is created in unentangled twoqubit states which decay in the same way as pure entangled states. We also show that under real circumstances,the decay rate of concurrence is strongly modified by the non-Markovianity of the evolution.
• # Pramana – Journal of Physics
Current Issue
Volume 93 | Issue 6
December 2019
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019
|
2019-11-15 16:35:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6329507231712341, "perplexity": 1586.3836523898333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668682.16/warc/CC-MAIN-20191115144109-20191115172109-00339.warc.gz"}
|
https://www.gamedev.net/forums/topic/268388-best-container-for-fast-deletes/
|
• 11
• 9
• 10
• 9
• 10
# best container for fast delete's?
This topic is 4979 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
hi, in my engine i have a "master list" of all my objects which Update(). i store them all in a std::list. anyway, im constnatly adding things to this list and removing things from random positions. i figured a std::list was best for this situation. but i recently read that if i dont care about the order which they are stored (i dont), a std::vector is better. the trick is to put the items which are to be deleted in the back of the vector, then just pop_back(). is this true? if so, how exactly is it done? do i use 2 seperate vectors or something? just a little curiouse because things like this usually heavily effect performance. thanks for any help!
##### Share on other sites
One thing I did was to have an array of pointers. If I want to delete an object, I just move the last entry over the one I want to delete.
usedObjectCount--;
objects[deleteIndex] = objects[usedObjectCount];
While this will move an object over itself if it's the last entry, this isn't a big deal. There should be no negative side effects, and it sure beats throwing in an if statement.
objects[usedObjectCount] = newobjectptr;
usedObjectCount++;
##### Share on other sites
The usual trick when you don't care about the order is to swap the element you want to remove with the element at the last position. If you're just storing pointers to objects then thats just a pointer swap and dead fast. Then you can simply remove the last element, and since vectors are usually implemented as a dynamically growing array that just involves reducing the size of the container by one.
Personally I'd just bung them all in a set of some kind and leave it to do its thing. If it later turns up after profiling (and only if!) then you can think about doing some sneaky tricks.
##### Share on other sites
Well, if you use std::remove on your vector, or better yet, std::remove_if then it will return an iterator to the new end of the vector. From there you just do an erase on the vector. Of course, this works best if you just run the remove/remove_if once, and not once per element you want to remove. In the later case you would just use std::swap and swap the last item (end - 1) with the current and then pop_back.
##### Share on other sites
What Washu said. I can't image, though, that your container would have all of its objects that need erasing grouped together--would they be interspersed randomly throughout the container? If so, that's a lot of swaps. I'd think the list would be the way to go in this case. However, if by some magic on your side you'd be able to do a remove at the end of the container all in one contiguous chunk, vector should be faster.
edit: didn't even think about them being pointers. I swore I'd never post on Fridays anymore. Swapping should be dead fast, as the prior poster said, so remove/remove_if should serve you well.
edit2: one thing to be very careful about: if this container "owns" the pointers, you have to make sure you delete them before you use remove. remove may very well overwrite values in the "unused" portion of the vector, so you won't be able to recover the pointer after you remove it.
##### Share on other sites
hey guys, thanks for your replies.
im a little confused on exactly how to do this then.. you say to use remove or remove if, but this seems to just remove an item from a container.. wouldnt this be why im doing pop_back() ?? maybe an example would help... i was under the impresion i would swap() the to be deleted item with the back() item, then pop_back(), so where does remove come in? lastly, do you think this will definetly be faster then just using a std::list and removing from random spots? and yes, this is pointers im dealing with. thanks again!
myvec.erase(newend, myvec.end()); //newend is iterator returned from std::remove
|
2018-04-22 23:59:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2788366377353668, "perplexity": 1056.4332602763693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945668.34/warc/CC-MAIN-20180422232447-20180423012447-00404.warc.gz"}
|
https://ru.b-ok.org/book/1300430/e794d6
|
Главная Handbook of Numerical Heat Transfer
# Handbook of Numerical Heat Transfer
, ,
Categories: Technique\\Energy
Год: 2006
Издание: 2ed.
Издательство: Wiley
Язык: english
Страниц: 963
ISBN 10: 0471348783
File: DJVU, 9.17 MB
You can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.
1
### Mathematical and computational methods in biomechanics of human skeletal systems
Год: 2011
Язык: english
File: PDF, 7.45 MB
2
### Smoothed Particle Hydrodynamics: A Meshfree Particle Method
Год: 2003
Язык: english
File: DJVU, 3.52 MB
HANDBOOK OF NUMERICAL HEAT TRANSFER Second Edition
HANDBOOK OF NUMERICAL HEAT TRANSFER Second Edition Edited by W.J. MINKOWYCZ Department of Mechanical and Industrial Engineering University of Illinois at Chicago Chicago, Illinois E.M. SPARROW Department of Mechanical Engineering University of Minnesota-Twin Cities Minneapolis, Minnesota J.Y. MURTHY School of Mechanical Engineering Purdue University West Lafayette, Indiana With editorial assistance by JOHN P. ABRAHAM ® WILEY JOHN WILEY & SONS, INC.
CONTENTS PREFACE ix LIST OF CONTRIBUTORS xi PART ONE: FUNDAMENTALS 1 1 SURVEY OF NUMERICAL METHODS 3 J. Y. Murthy, W. J. Minkowycz, E. M. Sparrow, and S. Ft. Mathur 2 FINITE-DIFFERENCE METHOD 53 Richard H. Pletcher 3 FINITE-ELEMENT METHOD 91 Juan C. Heinrich 4 BOUNDARY-ELEMENT METHOD 125 A. J. Kassab, L C. Wrobel, R. A. Bialecki, and E. A. Divo 5 LARGE EDDY SIMULATION OF HEAT AND MASS TRANSPORT IN TURBULENT FLOWS 167 Cyrus K. Madnia, Farhad A. Jaberi, and Peyman Givi 6 CONTROL-VOLUME-BASED FINITE-DIFFERENCE AND FINITE-ELEMENT METHODS 191 B. R. Baliga and N. Atabaki v
VI CONTENTS 7 MESHLESS METHODS 225 D. W. Pepper 8 MONTE CARLO METHODS 249 A. Haji-Sheikh and J. ft Howell 9 DISCRETE-ORDINATES AND FINITE-VOLUME METHODS FOR RADIATIVE HEAT TRANSFER 297 J. C. Chal and S. V. Patankar 10 PRESSURE-BASED ALGORITHMS FOR SINGLE-FLUID AND MULTIFLUID FLOWS 325 F. Moukalled and M. Darwish 11 NUMERICAL MODELING OF HEAT TRANSFER IN WALL-ADJACENT TURBULENT FLOWS 369 T. J. Craft, S. E. Gant, A. V. Gerasimov, H. lacovides, and B. E. Launder 12 A CRITICAL SYNTHESIS OF PERTINENT MODELS FOR TURBULENT TRANSPORT THROUGH POROUS MEDIA 389 K. Vafai, A. Bejan, W. J. Minkowycz, and K. Khanafer 13 VERIFICATION AND VALIDATION OF COMPUTATIONAL HEAT TRANSFER 417 Dominique Pelletier and Patrick J. Roache 14 SENSITIVITY ANALYSIS AND UNCERTAINTY PROPAGATION OF COMPUTATIONAL MODELS 443 B. F. Blackwell and K. J. Dowding 15 COMPUTATIONAL GEOMETRY, GRID GENERATION, AND ADAPTIVE GRIDS 471 Graham F. Carey 16 HYBRID METHODS AND SYMBOLIC COMPUTATIONS 493 Renato M. Cotta and Mikhail D. Mikhailov PART TWO: APPLICATIONS 523 17 INVERSE PROBLEMS IN HEAT TRANSFER 525 Nicholas Zabaras
CONTENTS VII 18 MOVING-BOUNDARY PROBLEMS 559 Wei Shyy 19 NUMERICAL METHODS FOR PHASE-CHANGE PROBLEMS 593 V. ft Voller 20 COMPUTATIONAL TECHNIQUES FOR MICROSCALE HEAT TRANSFER 623 Da Yu "Robert" Tzou 21 MOLECULAR DYNAMICS METHOD FOR MICRO/NANO SYSTEMS 659 Shigeo Maruyama 22 EULERIAN-LAGRANGIAN SIMULATIONS OF PARTICLE/ DROPLET-LADEN TURBULENT FLOWS 697 F. Mashayek and W. J. Minkowycz 23 NUMERICAL MODELING OF MANUFACTURING PROCESSES 729 Yogesh Jaluria 24 COMPUTATIONAL METHODS IN MATERIALS PROCESSING 785 ft Pitchumani 25 THERMAL MODELING OF TECHNOLOGY INFRASTRUCTURE FACILITIES: A CASE STUDY OF DATA CENTERS 821 J. Rambo and Y. Joshi 26 NUMERICAL BIOHEAT TRANSFER 851 B. Rubinsky 27 HIGH-PERFORMANCE COMPUTING FOR FLUID FLOW AND HEAT TRANSFER 895 D. W. Pepper and J. M. Lombardo 28 OVERVIEW OF NUMERICAL METHODS AND RECOMMENDATIONS 921 S. ft Mathur, W. J. Minkowycz, E. M. Sparrow, and J. Y. Murthy INDEX 946
In the nearly two decades that have passed since the publication of the first edition of the Handbook of Numerical Heat Transfer, spectacular advances have been made in all facets of numerical heat transfer and fluid flow. Computational methodologies that were in their early stages of development two decades ago have now gained wide acceptance as powerful tools for problem solving. The seemingly endless increases in raw computing power have made million-element descritizations of complex geometries commonplace. The present edition of the Handbook is intended to mirror the present status of numerical heat transfer and fluid flow. To facilitate this intention, the Handbook is now subdivided into Fundamentals and Applications sections to enable users to more easily identify information relevant to their needs. Furthermore, the enlarged scope of the present edition is reflected in its 28-chapter coverage in contrast to the 22-chapter coverage of the first edition. Although all the information conveyed in the present Handbook is totally state-of-the-art, of special note are the methods that are new to the present edition. These include Large Eddy Simulation (Chapter 5), Meshless Methods (Chapter 7), the Boundary Element Method (Chapter 4), and Hybrid Methods (Chapter 16). Also new are the chapters on evaluation of computational models and numerical results (Chapters 13 and 14). Despite the rising interest in meshless methods, numerical simulations are, for the most part, still performed with meshed geometries. Grid generation, including adaptive grids, is a constantly evolving area of great importance to practical computation. The latest developments in this area are reported in Chapter 15. Whereas more complex problems are yielding to numerical simulation, other classes of problems have diminished in importance. As a case in point with regard to the latter, the boundary-layer model, which gives rise to parabolic equation systems, is not well suited for the types of problems that are currently relevant. In recognition of this reality, parabolic systems are not treated in the second edition of the Handbook, in clear contrast to the four-chapter treatment accorded parabolic systems in the first edition. Other numerical methods which have IX
X PREFACE seen diminished use since the publication of the first edition are perturbation methods and the finite-analytic method. Both of these methods were included in the first edition but do not appear in the current edition. In the era when the first edition was being written, the term finite difference was used to describe descretizations achieved either by applying truncated Taylor's series expansions to the governing differential equations or by applying the relevant conservation laws to elements of small but finite dimensions. The latter approach is now termed the finite-volume method, with the term finite difference being exclusively used to describe the Taylor's series approach. The finite-element method remains distinct as before. This edition of the Handbook accords separate chapters to the finite-difference, finite-element, and finite-volume methods (Chapters 2, 3 and 6, respectively). The Applications section of the present edition is a new feature. Applications that are currently attracting great attention include microscale and nanoscale processes (Chapters 20 and 21), biomedical processes (Chapter 26), and manufacturing and materials processing (Chapters 23 and 24). The use of hybrid Eulerian-Lagrangian methods for the study of flows conveying bubbles, droplets, and particles (Chapter 22) is a new applications area. The groundswell of interest in the management of information technology (IT) has prompted the inclusion of Chapter 25. Also new is the treatment of turbulent flow in porous media (Chapter 12). In addition to the new topics that have been detailed in the preceding paragraphs, the newest information on established methodologies is conveyed in the other chapters of the Handbook. These include Monte Carlo Methods (Chapter 8), numerical methods for radiative heat transfer (Chapter 9), pressure-velocity interactions (Chapter 10), and turbulence modeling (Chapter 11). Among the applications, updates are provided for the inverse problem (Chapter 17), moving boundary and phase-change problems (Chapters 18 and 19), and high-performance computing (Chapter 27). The timely publication of this edition of the Handbook is to be credited to the unprecedented cooperation of the contributors. The editorial and publication staff of John Wiley & Sons, and particularly, of the Executive Editor, Robert L. Argentieri, had a great deal to do with the sought-for quality and timeliness of the Handbook. The editors are especially grateful to Renata M. Szandra for her multifaceted excellence in coordinating and implementing all aspects of manuscript processing. W.J. Minkowycz E.M. Sparrow J.Y. Murthy
LIST OF CONTRIBUTORS N. Atabaki McGill University Department of Mechanical Engineering Heat Transfer Laboratory 817 Sherbrooke St. W. Montral, Quebec Canada H3A 2K6 514-398-6324 nataba @ po-box. mcgill.ca B.R. Baliga McGill University Department of Mechanical Engineering 817 Sherbrooke Street West Montreal, Quebec Canada H3A 2K6 514-398-6287 bantwal.baliga@mcgill.ca A. Bejan Duke University Department of Mechanical Engineering and Materials Science Box 90300 Durham, NC 27708-0300 919-660-5309 dalford @ duke. edu R.A. Bialecki Silesian University of Technology Institute of Thermal Technology Konarskiego 22 44-101 Gliwice Poland 48-32-237-2953 bialecki @ itc.ise.polsl.gliwice.pl B.F. Blackwell Sandia National Laboratories Validation and Uncertainty Quantification Department MS 0828 P.O. Box 5800 Albuquerque, NM 87185-0828 505-845-8844 bfblack® sandia.gov Graham F. Carey The University of Texas at Austin ICES 201 E. 24th St., ACES 6.430 1 University Station C0200 Austin, TX 78712-0227 512-471-4207 carey@cfdlab.ae.utexas.edu
XII LIST OF CONTRIBUTORS J.C. Chai Nanyang Technological University School of Mechanical & Production Engineering 50 Nanyang Avenue Singapore 639798 65-6790-4270 mckchai@ntu.edu.sg Renato M. Cotta Universidade Federal do Rio de Janeiro Prograrna de Engenharia Mecanica COPPE/UFRJ Cidade Universitaria- Caixa Postal 68503 Rio de Janeiro - RJ - 21945-970 Brazil 55-21-2562-8383 CottaRenato @ aol.com T.J. Craft UMIST Department of Mechanical, Aerospace and Manufacturing Engineering P.O. Box 88 Manchester, M60 1QD United Kingdom 44-161-200-8728 tim.craft@umist.ac.uk M. Darwish American University of Beirut Department of Mechanical Engineering PO Box 11-0236 Riad El Solh, Beirut 1107 2020 Lebanon 961-1-347952 darwish@aub.edu.lb E.A. Divo University of Central Florida Department of Engineering Technology 4000 Central Florida Boulevard Orlando, FL 32816-2450 407-823-5778 edivo@mail.ucf.edu K.J. Dowding Sandia National Laboratories Validation and Uncertainty Quantification Processes Department 9133 M.S. 0828 P.O. Box 5800 Albuquerque, NM 87185-0828 505-844-9699 kjdowdi@sandia.gov S.E. Gant UMIST Department of Mechanical, Aerospace and Manufacturing Engineering PO Box 88 Manchester, M60 1QD United Kingdom 44-161-200-4547 simon.gant@umist.ac.uk A.V. Gerasimov UMIST Department of Mechanical, Aerospace, and Manufacturing Engineering PO Box 88 Manchester M60 1QD United Kingdom 44-161-200-4547 aleksey.gerasimov@stud.umist.ac.uk Peyman Givi University of Pittsburgh Department of Mechanical Engineering 644 Benedum Hall Pittsburgh, PA 15261 412-624-9605 givi@engr.pitt.edu A. Haji-Sheikh The University of Texas at Arlington Department of Mechanical and Aerospace Engineering Arlington, TX 76019-0023 817-272-2010 Haji@mae.uta.edu Juan C. Heinrich University of New Mexico Department of Mechanical Engineering MSC01 1150 Albuquerque, NM 87131 505-277-2761 heinrich® unm.edu J.R. Howell The University of Texas at Austin
LIST OF CONTRIBUTORS XIII Department of Mechanical Engineering 1 University Station, C2200 Austin, TX 78712-0292 512-471-3095 jhowell@mail.utexas.edu H. Iacovides UMIST Department of Mechanical, Aerospace and Manufacturing Engineering P.O. Box 88 Manchester M60 1QD United Kingdom 44-161-200-3709 h.iaco vides @ umist.ac.uk Yogesh Jaluria Rutgers University Department of Mechanical and Aerospace Engineering New Brunswick, NJ 08903 732-445-3652 jaluria@jove.rutgers.edu Farhad A. Jaberi Michigan State University Department of Mechanical Engineering East Lansing, MI 48824-1226 517-432-4678 jaberi@egr.msu.edu Y. Joshi Georgia Institute of Technology Woodruff School of Mechanical Engineering 801 Ferst Drive N.W. Atlanta, GA 30332-0405 404-385-2810 yogendra.joshi@me.gatech.edu A.J. Kassab University of Central Florida Mechanical, Materials, and Aerospace Engineering Box 162450 Orlando, FL 32816-2450 407-823-5778 kassab@mail.ucf.edu K. Khanafer University of California, Riverside Department of Mechanical Engineering Riverside, CA 92521-0425 909-787-6428 khanafer@engr.ucr.edu B.E. Launder UMIST Department of Mechanical, Aerospace, and Manufacturing Engineering P.O. Box 88 Manchester M60 1QD United Kingdom 0161-200-3701 brian.launder@umist.ac.uk J.M. Lombardo University of Nevada National Supercomputing Center (NSCEE) for Energy and Environment Department of Mechanical Engineering 4505 S. Maryland Parkway Las Vegas, NV 89154-4028 702-895-4153 lombardo @ nscee.edu Cyrus K. Madnia University of Buffalo State University of New York Buffalo, NY 14260 716-645-2593 x2315 madnia@eng. buffalo. edu Shigeo Maruyama The University of Tokyo Department of Mechanical Engineering 7-3-1 Hongo, Bunkyo-ku Tokyo 113-8656 Japan 81-3-5841-6421 maruyama@photon.t.u-tokyo.ac.jp F. Mashayek University of Illinois at Chicago Department of Mechanical Engineering (MC 251) 842 W. Taylor Street Chicago, IL 60607 312-996-1154 mashayek@uic.edu S.R. Mathur Fluent Inc.
XIV LIST OF CONTRIBUTORS 10 Cavendish Court Lebanon, NH 03766 603-643-2600 sm@fluent.com Mikhail D. Mikhailov Universidade Federal do Rio de Janeiro Prograrna de Engenharia Mecanica COPPETUFRJ Cidade Universitaria- Caixa Postal 68503 Rio de Janeiro - RJ - 21945 Brazil 021-2809322 mikhailov@lttc.coppe.ufrj .br W.J. Minkowycz University of Illinois at Chicago Department of Mechanical and Industrial Engineering (MC 251) 842 West Taylor Street Chicago, IL 60607-7022 312-996-3467 wjm@uic.edu F. Moukalled American University of Beirut Department of Mechanical Engineering P.O. Box 11-0236 Riad El Solh, Beirut, 1107 2020 Lebanon 961-1-347952 memouk@aub.edu.lb J.Y. Murthy Purdue University School of Mechanical Engineering 1288 Mechanical Engineering Building West Lafayette, IN 47907-1288 765-494-5701 jmurthy@ecn.purdue.edu S.V. Patankar Innovative Research, Inc. 3025 Harbor Lane N, Suite 300 Plymouth, MN 55447 763-519-0105 patankar @inres.com Dominique Pelletier Mechanical Engineering Department Ecole Polytechnique de Montreal PO Box 6079, Station Centre-ville Montreal, Canada H3C 3A7 514-340-4711 #4102 Dominique.Pelletier@polymtl.ca D.W. Pepper University of Nevada Department of Mechanical Engineering 4505 Maryland Parkway Las Vegas, NV 89154-4027 702-895-1056 dwpepper@nscee.edu R. Pitchumani University of Connecticut Department of Mechanical Engineering 191 Auditorium Road Storrs, CT 06269-3139 860-486-0683 pitchu @ engr.uconn.edu Richard H. Pletcher Iowa State University Department of Mechanical Engineering 2025 Black Engineering Ames, IA 50011-2160 515-294-2656 pletcher@iastate.edu J. Rambo Georgia Institute of Technology G.W. Woodruff School of Mechanical Engineering Atlanta, GA 30332 404-385-1881 jeffrey.rambo@me.gatech.edu Patrick J. Roache 1215 Apache Drive Socorro, NM 87801-4434 505-838-1110 hermosa@swcp.com B. Rubinsky University of California Department of Mechanical Engineering 6161 Etcheverry Hall Berkeley, CA 94720 510-642-8220 rubinsky @ me.berkeley.edu
LIST OF CONTRIBUTORS XV Wei Shyy University of Michigan 3064 Francois Xavier Bagnoud Building 1320 Beal Avenue Ann Arbor, Michigan 48109-2140 734-936-0102 weishyy @ umich.edu E.M. Sparrow University of Minnesota-Twin Cities Department of Mechanical Engineering 111 Church Street, S.E. Minneapolis, MN 55455 612-625-5502 esparrow@umn.edu Da Yu "Robert" Tzou University of Missouri-Columbia Department of Mechanical Engineering Columbia, MO 65211 573-882-4060 TzouR@Missouri.edu K. Vafai University of California, Riverside Department of Mechanical Engineering A3 63 Bourns Hall Riverside, CA 92521-0425 909-787-2135 vafai @ engr.ucr. edu V.R. Voller University of Minnesota Department of Civil Engineering 500 Pillsbury Drive SE Minneapolis, MN 55455-0116 612-625-0764 volleOO 1 @ umn.edu L.C. Wrobel Brunei University Department of Mechanical Engineering Uxbridge, Middlesex UB8 3PH United Kingdom 01895-274-000 ext 2907 Luiz.Wrobel@brunel.ac.uk Nicholas Zabaras Cornell University Sibley School of Mechanical & Aerospace Engineering Materials Process Design and Control Laboratory 188 Frank H.T. Rhodes Hall Ithaca, NY 14853-3801 607-255-9104 zabaras @ Cornell, edu
PART ONE FUNDAMENTALS
CHAPTER 1 SURVEY OF NUMERICAL METHODS J. Y. MURTHY School of Mechanical Engineering Purdue University West Lafayette, Indiana, USA W. J. MINKOWYCZ Department of Mechanical Engineering The University of Illinois at Chicago Chicago, Illinois, USA E. M. SPARROW Department of Mechanical Engineering University of Minnesota Minneapolis, Minnesota, USA S. R. MATHUR Fluent Inc. 10 Cavendish Ct Lebanon, New Hampshire, USA 1.1 INTRODUCTION 4 1.2 GOVERNING EQUATIONS 5 1.2.1 Continuity Equation 5 1.2.2 Momentum Equation 5 1.2.3 Energy Equation 5 1.2.4 Species Transport 6 1.2.5 General Scalar Transport Equation 6 1.3 ANATOMY OF A NUMERICAL SOLUTION 7 1.3.1 Domain Discretization 7 1.3.2 Discretization of Governing Equation 10 1.3.3 Solution of Linear Equations 15 1.3.4 Nonlinearity and Coupling 17 1.3.5 Properties of Numerical Solution Procedure 17 1.3.6 Summary and Discussion 18 3
4 SURVEY OF NUMERICAL METHODS 1.4 COMPUTATIONAL TECHNIQUES FOR UNSTRUCTURED MESHES 18 1.4.1 Discretization of Convection-Diffusion Equation 18 1.4.2 Gradient Calculation 22 1.4.3 Summary and Discussion 25 1.5 HIGHER-ORDER SCHEMES FOR CONVECTION OPERATORS 25 1.5.1 Upwind-weighted Higher-order Schemes 26 1.5.2 Control of Spatial Oscillations 28 1.5.3 Summary and Discussion 31 1.6 LINEAR SOLVERS 31 1.6.1 Line Gauss-Seidel Method 32 1.6.2 Multigrid Methods 33 1.6.3 Gradient-search Techniques 37 1.7 COMPUTATION OF FLUID FLOW 38 1.7.1 Storage of Pressure and Velocity 38 1.7.2 Solution Methods 43 1.7.3 Density-based Schemes 45 1.8 CLOSURE 47 1.1 INTRODUCTION During the last three decades, numerical simulation has come to play an increasingly important role in the analysis and design of engineering products and processes. A variety of techniques have been developed, and several have reached sufficient maturity to warrant routine use. It is not uncommon today for the industrial thermal analyst to use computational fluid dynamics (CFD) and computational heat transfer (CHT) techniques to do preliminary design in applications as diverse as electronics cooling, underhood automotive cooling, glass processing, as well as food, pharmaceutical, and chemical processing, to name a few. Large-scale simulations involving tens of millions of unknowns are now routinely performed in many industries using both serial and parallel processing. Nevertheless, though the solutions to many problems, especially those involving single-phase nonreacting Newtonian flows, are now within reach, a variety of industrial thermal and fluid flow problems remain intractable. These include gas-solid and gas-liquid flows, phase change, reacting flows, flows of viscoelastic fluids and other fluids with complex rheologies, and complex turbulent flows, among others. The challenges in solving these flows are related to deficiencies in existing numerical methods, insufficient computational power, and an incomplete understanding of the underlying physical processes. The objective of this chapter is to survey the state of the art in computational fluid dynamics and heat transfer to arrive at an understanding of the types of numerical methods commonly used and the range of application of these techniques. The chapter is divided into two parts. It starts with a description of the typical governing equations for flow, heat, and mass transfer. An overview of the basic numerical solution process is then presented, including mesh generation, discretization of a typical governing equation, the solution of linear algebraic sets of equations, and the handling of nonlinearity and interequation coupling. The second half of the chapter addresses more advanced issues. An overview of unstructured-mesh methods is presented. Higher-order discretization methods are reviewed for both structured and unstructured meshes, as well as issues associated with the solution of linear algebraic equation sets for unstructured meshes. Finally, different approaches to the solution of compressible and incompressible
GOVERNING EQUATIONS 5 flows are reviewed. The chapter aims to give a broad overview of the central ideas, and subsequent chapters in the book amplify and expand on these ideas. 1.2 GOVERNING EQUATIONS Industrial CFD simulations typically involve the solution of flows with heat transfer, species transport, and chemical reactions. These types of flows are described by the equations of mass, momentum, and energy conservation. For turbulent flows, it is common to use the Reynolds- averaged form of the governing equations in conjunction with a suitable turbulence model. Additional equations, such as those for radiative transport or for specialized combustion models, are also used. Typical Reynolds-averaged governing equations for turbulent flow and heat and mass transfer are presented in the following sections. 1.2.1 Continuity Equation The Reynolds-averaged mixture continuity equation for the gas phase is ^(p) + V.(pV) = Sm (1.1) at Here t is time, p is the Reynolds-averaged mixture density, V is the Reynolds-averaged velocity vector, and Sm represents external mass sources. Typically, these would result from mass-transfer interactions from a dispersed phase such as spray droplets or coal particles. 1.2.2 Momentum Equation The Reynolds-averaged gas-phase momentum equation is |-(pV) + V. (pVV) + Vp = V. [(M + M()VV] + F (1.2) at Here, p is pressure, \i is the molecular viscosity, and \it is the turbulent viscosity, obtained from a turbulence model. F contains those parts of the stress term not shown explicitly, as well as other momentum sources, such as drag from the dispersed phase. 1.2.3 Energy Equation Heat transfer is governed by the energy conservation equation l-{pE) + V. (pV£) = V. [(k + k,)VT] + V. (t.V) - V. (pV) + Sr + Sh (1.3) at Here, k is the thermal conductivity and kt is the turbulent thermal conductivity resulting from the turbulence model, x is the stress tensor, p is the pressure, and E is the total energy per unit mass defined as V.V E = e(T)+— (1.4)
6 SURVEY OF NUMERICAL METHODS and e is the internal energy per unit mass. The terms on the LHS of Eq. (1.3) describe the temporal evolution and the convective transfer of total energy. The first three terms on the RHS represent the conductive transfer, viscous dissipation and pressure work, respectively. Sr is the volumetric source term due to radiative heat transfer. In the present form of the energy equation, reaction source terms are included in S/,, which also contains all other volumetric heat sources, including those due to the presence; of a dispersed phase. 1.2.4 Species Transport Under the dilute approximation, the Reynolds-averaged conservation equation for the mass fraction, m;, of specie / can be written as jt(pmi) + V« (p\mi) = V. ((pD + ^- J Vm,j + R, (1.5) Here, D is the diffusion coefficient of specie / in the mixture, am is the turbulent Schmidt number, and Ri is the volumetric source of the specie / due to chemical reactions. 1.2.5 General Scalar Transport Equation The equations governing the transport of mass, momentum, energy, and chemical species may be cast into the form of a generic scalar transport equation [1] as d(p4>) -^+V.(pV0) = V.(FV0) + S0 (1.6) Here, (p is the transport variable, V is the diffusion coefficient, and S,p is the source term. Each governing equation represents a different choice of <p, F, and S$. Table 1.1 shows the values of 4>, F, and 5^ corresponding to the governing equations shown in the previous sections. Different choices for these values may be made in the case of the energy equation. Here, the convective terms suggest a choice of 4> = E; however, the diffusion term is most naturally written in terms of the temperature and suggests 4> = T. For an incompressible substance or a perfect gas at low speeds, the equation of state de = CvdT may be invoked to obtain the choices listed in Table 1.1. A detailed discussion of alternative choices may be found in [2]. Once the governing equations are cast into the form of Eq. (1.6), a single numerical method may be devised to solve them. TABLE 1.1 Choice of <j>, T, and S$ for Governing Equations Equation Continuity X momentum Energy 4> l u E r 0 IJ- + IJ-t k + k, cv s$^m Fr V. (r.V) -V.(pV) + Sr + SA ANATOMY OF A NUMERICAL SOLUTION 7 It is important lo note that Eq. (1.6) has been written in cvnsvrmme form. In contrast, by using the continuity equation, we may write the twncaiisermtive form of Eq. (1.6) as Though Eq. (1.7) is mathematically equivalent lo Eq. (1.6). the two forms can yield numerical schemes with substantially different properties. Numerical schemes that seek lo preserve the conservation property in the discretization Mart with the conservative fonn. Eq. (1.6), as the basis. 1.3 ANATOMY OF A NUMERICAL SOLUTION In this section, the basic components of typical numerical solution procedures used lo discreiizc and solve (he general scalar transport equation are described. These include domain discretization, discretization of one or more governing equations of interest, anil, finally, the solution of the resulting discrete algebraic equations. 1.3.1 Domain Discretization The physical domain is discretized by meshing it. i.e.. by dividing the domain into smaller, usually polyhedral, volumes. Though many variants exist, for the purposes of this chapter, the terminology shown in Fig. I.I will be used to describe the meshes. The fundamental unit ofihe mesh is the cell (sometimes called the element). Associated with each cell is the cell ceniroid. A cell is surrounded by faces, which mccl al nodes or vertices. In three dimensions, the face is a surface surrounded by edges. In two dimensions, faces and edges are the same. A variety of mesh types are encountered in practice. These arc described next. f-\ \J CeJI " 1 Cell centroid Face *-- Nude tvertex) * FICLKE 1.1 Mesh icmiinukigy. 8 SURVEY OF NUMERICAL METHODS Regular and Body-fitted Meshes In many cases, our interest lies in analyzing domains thai arc regular in shape: rectangles, cubes, cylinders, and spheres. These shapes can be meshed by regular grids, as shown in Fig. 1.2a. The grid lines are orthogonal lo each other, and conform to the boundaries of the domain. These meshes are also sometimes called orthogonal meshes. For many practical problems, however, the domains of interest are irregularly shaped and regular meshes may not sufhce. An example is shown in Fig. 1.2b. Here, grid lines are noi necessarily orthogonal lo each other, and curve lo conform lo the irregular geometry. If regular grids are used in these geometries, stair stepping occurs at domain boundaries, as shown in Fig. 1.3a. When the physical phenomena at the boundary are important in determining the solution, e.g.. in flows dominated by wall shear, such an approximation of the boundary may not be acceptable. (a) (b) \ \ FIGURE 1.Z (a) Regular and <b) body-filled meshes (a) Block FIGURE 1.3 (a) Siair-siepped and (b) bJock-slrutiured meshes. ANATOMY OF A NUMERICAL SOLUTION 9 Structured, Block-structured, and Unstructured Mesh** Th, u u Fig. 1.2 are examples of structured meshes. HerTeve'y interSvertex t rhTn " " nected to the same number of neighbor vertices fZZI TTu u mam 1S C°n" i^FigT/b' H/eryeA!henCv:rrmf ^f^ ^ "^P16 °f 3 nonc°nfomal m^ » ^own or events.' SaVS^ mPiS 1? n "^ ^ ^ ^ ^ °f ^S^™8 <** vertices of cells sharing z^Zlll c^dde^ ^ "" C°nformal "^ ^ *e geometries that can be meshed, structured quadrilaterals and hexahedra ar we l-Tui d or flow with a dominant direction, such as boundary-layer flows. More recently, unsmSSd^SS are becoming necessary to handle the complex geometries that characterize 2S£ ^ 7g S,!nd tetrahedra 3re inCreasingly bd"g used' a«d techniques for tb^ gentaTon are rapidly reaching maturity [3]. Another recent trend is the use of hybrid meshes Kk prisms are used in boundary layers, transitioning to tetrahedra in the free "ream " ' Node-based and Cell-based Schemes Numerical methods that store their orimarv unknowns at the node or vertex locations are called node-based or vertex-ba Tschemes ThoS that store them at the cell centroid, or associate them with the cell, are called l^schlmes Finite-element methods are node-based [4], Many finite-volume methods are c 1 based [15 6 ' though node-based finite-volume schemes are also available [7] ' L FIGURE 1.4 (a) Unstructured and (b) nonconform^ meshes. 10 SURVEY OF NUMERICAL METHODS 1.3.2 Discretization of Governing Equation The most commonly used approaches to discretize the general scalar transport equation are the finite-difference, finite-volume, and finite-element techniques. These methods discretize the governing equations directly, using a variety of local profile assumptions or approximations, reducing the original partial differential equation into a set of coupled algebraic equations, which must then be solved. In contrast, the boundary-element technique [8], which has been used in a variety of heat conduction problems [9], invokes Green's identities to convert the original differential equation into an integral equation involving only surface quantities, which is then discretized and solved. A detailed description of the technique may be found in Chapter 4. Here attention is directed to techniques that directly discretize the governing equations. To illustrate the similarities and differences between finite-difference, finite-element, and finite-volume techniques, consider a one-dimensional scalar transport equation with a constant diffusion coefficient and no unsteady or convective terms dx\rd£)+s*=° (1.8) with boundary conditions 0(0) = <po and (p(L) = (pi. Equation (1.8), which has been written in conservation form, will be discretized using each of the three methods. Finite-difference Methods Finite-difference methods approximate the derivatives in the governing differential equation using truncated Taylor series expansions. First Eq. (1.8) is recast in nonconservation form: d24> (1.9) Next, the discretization of the diffusion term is carried out. Consider the one-dimensional mesh shown in Fig. 1.5. The unknown discrete values of <p are stored at the nodes shown. The Taylor series expansion for 4> can be written as , d(p\ Ax2 /d24>\ , ,. (1.10) and /d<p\ Ax2 /d24>\ , ,, (1.11) 1 -e- 2 -©- 3 -e- —- Ax Ax — FIGURE 1.5 One-dimensional mesh. ANATOMY OF A NUMERICAL SOLUTION 11 The term 0(Ax3) indicates that the terms that follow have a dependence on Ax" where n > 3. Subtracting Eq. (1.10) from Eq. (1.11) gives dd>\ 03 — d>\ -, — ' -- -— + 0(Ax2) (1.12) dx)2 2 Ax The addition of the two equations yields ' d24>\ 01 +03-202 , „, . 2 dx2J2 Ax2 + 0{Axz) (1.13) By including the diffusion coefficient and dropping terms of 0(Ax2) or smaller, the following equation is obtained: (d2d>\ 01+03-202 rUg)2 = r Ax2 ^ The second derivative of 0 has thus been evaluated to second order. The source term S$ is evaluated at the point 2 using S2 = S<p(4>i) (1.15) Substituting Eqs. (1.14) and (1.15) into Eq. (1.8) yields ^ = ^, + ^+* (L16) This is a discrete form of Eq. (1.8). By deriving a similar equation like for every grid point in the mesh, a set of algebraic equations in the discrete values of 0 are obtained. The value of 0 at each node is directly influenced only by its nearest neighbors; the use of a truncated Taylor series leads to this type of local dependence. At the boundaries, the discrete values of 0 may be obtained by discretizing the boundary conditions. The resulting equation set may be solved by a variety of methods, which are discussed later in this chapter. Finite-difference methods do not explicitly enforce the conservation principle in deriving discrete equations. Thus, energy balance may not be exactly satisfied for coarse meshes, though finite-difference methods that have the consistency property [10] are guaranteed to approach perfect conservation as the mesh is refined. As we will see, finite-difference methods yield discrete equations that look similar to finite-volume and finite-element methods for simple cases; however, this similarity not guaranteed in more complicated cases. Finite-element Methods To develop the finite-element method, the one-dimensional diffusion equation Eq. (1.8), is reconsidered. There are different kinds of finite-element methods of which the method of weighted residuals is one. Here, a popular variant of the method of weighted residuals, called the Galerkin finite-element method, is considered. More detailed information about this class of numerical techniques may be found in [4, 11, 12]. The starting point is, again, the nonconservative form of the governing equation (1.9). The computational domain is divided into N — 1 elements corresponding to N nodes; a typical
12 SURVEY OF NUMERICAL METHODS element;' is shown in Fig. 1.6. Let <j> be an approximation to (p. Since (p is only an approximation, it does not satisfy Eq. (1.9) exactly, so that there is a residual R d2<p (1.17) We wish to find (p such that / J do WRdx = 0 (1.18) where W is a weight function. Equation (1.18) requires that the residual R become zero in a weighted sense. To generate a set of discrete equations, a family of weight functions Wj, j = l,2,...,Nls used. Thus, / «/d< domain WjRdx=0 j = l,2,...,N (1.19) The weight functions Wj(x) are typically local in that they are nonzero in the vicinity of node j, but are zero everywhere else in the domain. Further, a shape function Nj(x) is assumed for 4>, which specifies how (p varies between nodes. Thus, <P(x) = J2N'(x)<Pi (1.20) The Galerkin finite-element method requires that the weight and shape functions be the same, i.e., Wi = Nj. Typically, the shape function variation is also defined locally, as shown for the case of a linear shape function in Fig. 1.6. Here, for xt < x < x;+i, N,(x) N,+i(x) = Xi + \ Xi + ] x - — X -xt Xi xi + \ — Xi i Element j' i + I i Element i i + 1 FIGURE 1.6 Linear shape functions on element j' and corresponding variation of (
ANATOMY OF A NUMERICAL SOLUTION 13 Furthermore, the source term S<j, is also interpolated on the domain from N S(x) = Y,Ni(x)StPJ (1.21) ;=i Thus, under the Galerkin finite-element formulation, Eq. (1.9) becomes fXL d2cp fXL / rNj(x)---^dx+ Nj(x)Sdx •Jxq ®x Jxq 0 j = \,2,...,N (1.22) The next step is to integrate the first term in Eq. (1.22) by parts. This procedure yields XT dcp FNj(x) dx rxL dN: dcp rXL / r—J-—dx+ Nj(x)Sdx=0 JXo dx dx JXQ j = \,2,...,N (1.23) Furthermore, Eq. (1.20) may be differentiated to yield dcp y^ dNi dx ~ ^-f dx (1.24) The first term in Eq. (1.23) is Nj(x0)q\ +Nj(xL)qN (1.25) Here q\ and qn are the heat fluxes into the domain at the boundaries. Here N\ (xq) = Nm (xl) = 1 by definition. If the shape function Nj(x) is local, it is nonzero only in the vicinity of the node j. Thus, Eq. (1.25) becomes Nj{x0)q\ + Nj(xL)qN = qx if j = 1 = 0 if 7=2 iV — 1 = qN if j = N (1.26) Thus, the overall equation may be written as fXL dN' J^dN' fXL J^. I r^E^~ j NJ(x)J2Ni(x)Sidx = Nj {x0)q\ + Nj (xL)qN j=\,2,...,N (1.27)
14 SURVEY OF NUMERICAL METHODS The discrete equation for a node j may thus be written as ]P Kij4>j + Sj =q\ if j = 1 0 if j = 2, ., qN if j = N ., N ■ (1.28) Here fXL dNj dN: , Ku--= -^-rdx Jxo dx dx Sj = - Nj(x)Y,Ni(x)Sidx Jxn : i (1.29) In the above equations, when </>o and 4>i are given, the equations at nodes j — 1 and j = N may be used to evaluate the fluxes q\ and q^. On the other hand, when q\ and qn are specified, the same equations are used to find 4>q and (pi. By choosing specific shape functions Ni(x), a coupled algebraic equation set may be derived for the nodal values </>;. Since Nt is local, the matrix Kjj is sparse. It is important to note that because the Galerkin finite-element method requires the residual to be zero only in a weighted sense, it does not enforce the conservation principle in its original form; like the finite-difference method, conservation is satisfied in the limit of a fine-enough mesh. Next, attention is turned to a method that employs conservation as a tool for developing discrete equations. Finite-volume Methods The finite-volume method (sometimes called the control-volume method) divides the domain in to a finite number of nonoverlapping cells or control volumes over which conservation of <p is enforced in a discrete sense. The starting point is the conservative form of the scalar transport equation (1.8). Consider a one-dimensional mesh, with cells as shown in Fig. 1.7. Discrete values of 4> are stored at cell centroids, which are denoted by W, P, and E. Far neighbor cells WW and EE are shown for later use. The cell faces of cell P are denoted by w and e. The face areas are assumed to be unity. The focus is on the cell associated with P. Equation (1.8) is integrated over the cell P, yielding Ja dx \ dx) Jw Sdx =0 (1.30) h Ax WW o w WW U P w O "e E e O EE ee O h Sx„, Sx, H FIGURE 1.7 Arrangement of control volumes.
ANATOMY OF A NUMERICAL SOLUTION 15 which can be integrated to give • / w Ju> rS) -(rzT) +'sdx=o (L3i) This equation can also be obtained by writing a diffusion flux balance over the cell P from first principles. Thus far, no approximation has been made. A profile assumption, i.e., an assumption about how cp varies between cell centroids, is now made. If it is assumed that cp varies linearly between cell centroids, we may write re(<pE-(pp) rw(4>P-(pw) _ 1- SAx = 0 (1.32) Sxe Sxw Here S is the average value of S in the control volume. Note that the above equation is no longer exact because of the approximation in assuming that cp varies in a piecewise linear fashion between cell centroids. Collecting terms, Eq. (1.32) becomes apcpp = a£(pE + a\v<Pw + b (1.33) where as aw aP b = = = = Sxe 1 w Sxw aE + ISAx aw (1.34) Equations similar to Eq. (1.33) may be derived for all cells in the domain, yielding a set of algebraic equations, as before. These may be solved using a variety of direct or iterative methods. Unlike finite-difference and finite-element methods, the finite volume discretization process starts with the statement of conservation over the cell. Cell values of cp that satisfy this conservation statement are then found. Thus, conservation is guaranteed for each cell, regardless of mesh size. Conservation does not guarantee accuracy, however; accuracy depends on the profile assumptions made. The solution for cp may be inaccurate, but will, nevertheless, be conservative. 1.3.3 Solution of Linear Equations Regardless of what method is used, the process of discretization leads to a coupled algebraic set of equations in the discrete values of cp, such as Eq. (1.33). These equations may be linear (i.e., the coefficients are independent of cp) or they may be nonlinear (i.e., the coefficients are functions of cp). The techniques for solving these equations are independent of the discretization method and represent the path to solution. If the problem is well-posed and the discrete equation set is linear, it is guaranteed that only one solution exists, and all linear solvers that converge to a solution will lead to the same discrete solution. The accuracy of the solution depends only on the accuracy of the discretization technique. Solution methods may be broadly classified as direct or iterative. Each class is considered in turn.
16 SURVEY OF NUMERICAL METHODS Direct Methods The discrete algebraic equations derived in the previous sections may be written as A0 = B (1.35) where A is the coefficient matrix, (p = [</>i> 4>2, ■ ■ -]T is a vector consisting of the discrete values of 4>, and B is the vector resulting from the source terms. Direct methods solve Eq. (1.35) using the standard methods of linear algebra. The simplest direct method is inversion, whereby (p is computed from 4> = A~]B (1.36) A solution for (p is guaranteed if A-1 can be found. However, the operation count for the inversion of an N x N matrix is 0(N3). Consequently, inversion is almost never employed in practical problems. More efficient methods for linear systems are available. For the discretization methods of interest here, A is sparse, and for structured meshes it is banded. For certain types of equations, for example, for pure diffusion, the matrix is symmetric. Matrix manipulation can take into account the special structure of A in devising efficient solution techniques for Eq. (1.35). A number of standard textbooks describe direct solution techniques, Ref. [10], for example. Iterative Methods Iterative methods are widely used in computational fluid dynamics. These methods employ a guess-and-correct philosophy, which progressively improves the guessed solution by repeated application of the discrete equations. Let us consider an extremely simple iterative method, the Gauss-Seidel method. Here, each grid point in the mesh is visited sequentially, and the value of <p is updated using aE<pE + aw<pw + b <pp = (1-37) aP The neighbor values, (pE and <pw, are required and are assumed known at prevailing values. Thus, points that have already been visited will have recently updated values of <p, and those that have not will have old values. The domain is swept over and over until convergence. A related technique, Jacobi iteration, employs only old values during the sweep, updating all grid points simultaneously at the end of the sweep. Convergence of the process is guaranteed for linear problems if the Scarborough criterion is satisfied, which requires \aE\ + \aw\ ^ . , „ . , . „ < 1 for all grid points \ap\ < 1 for at least one grid point (1.38) Matrices that satisfy the Scarborough criterion are said to be diagonally dominant. Direct methods do not require the Scarborough criterion to be satisfied; a solution to the linear set of equations can always be obtained as long as the coefficient matrix is not singular. The Gauss-Seidel scheme can be implemented with very little storage. All that is required is storage for the discrete values of (p at the grid points. The coefficients ap, aE, aw, and b can be computed on the fly if desired, since the entire coefficient matrix for the domain is not required when updating the value of <p at any grid point. Also, the iterative nature of the scheme makes it particularly suitable for nonlinear problems. If the coefficients depend on (p,
ANATOMY OF A NUMERICAL SOLUTION 17 they may be updated using prevailing values of <p as the iterations proceed. Furthermore, the Gauss-Seidel technique can be applied to sparse matrices with arbitrary fill patterns and does not require a band structure. Nevertheless, it is rarely used in practice because of slow convergence; techniques to accelerate its convergence are discussed in Section 1.6. An alternative to solving the linear system is use a time-advancement strategy. Here, even though the desire is to solve the steady-state problem, the problem is posed as unsteady, and the solution marched to steady state. If an explicit scheme is used [1], no linear solver is necessary; however, the time step is limited by the stability limits imposed by explicit schemes. If an implicit time-stepping strategy is used [1], linear solvers are again necessary. 1.3.4 Nonlinearity and Coupling In many engineering applications it is necessary to solve a number of governing equations simultaneously over the computational domain. In solving natural convection problems, for example, the flow field and the energy equation must be solved simultaneously. The solution of the flow field itself requires the simultaneous solution of the continuity and momentum equations. In addition, the governing equations may be nonlinear. The simplest approach to solving coupled sets of governing equations is the sequential approach [1]. Here each governing equation is discretized and solved in turn using the procedures described previously. Prevailing values of the other solution variables are used where necessary. Governing equations are iterated upon in this way until the solution is deemed converged. This approach has been used widely for the solution of incompressible flows using pressure-based algorithms. When the coupling between governing equations becomes strong, this type of sequential solution procedure can become untenably slow, and may even lead to instability and divergence. When computer memory and cost are not a limitation, it is possible to discretize all the governing equations at each node or cell centroid, and solve the complete nonlinear system using the Newton-Raphson method or other techniques [10, 13]. 1.3.5 Properties of Numerical Solution Procedure The discretization and solution procedures described here may be characterized by their accuracy, consistency, stability, and convergence characteristics. A discussion of these four characteristics now follows. Accuracy Errors in the computed solution may result from (1) modeling errors, i.e., errors engendered by incorrectly representing the physics in the governing equations, (2) a lack of convergence in the iterative solution procedure, or (3) the truncation error in the discretization procedure. As was seen in Section 1.3.2, d2(p/dx2 may be represented as (d2d>\ 0i+03-202 rUl)2 = r % (L39) The truncation error for this representation is 0(Ax2). The error decreases quadratically with Ax. A scheme whose truncation error is 0(Axn) is an nth-order scheme. Consistency A discretization scheme is consistent if the error in the solution tends to zero as Ax -* 0. If the truncation error is of the form 0(Axn), consistency is guaranteed. A numerical scheme for unsteady problems that has a truncation error 0(Ax/At), for example, would not be consistent unless Ax/ At -*■ 0. Consistency is an important property of the discretization since it ensures that refining the mesh (or the time step) will yield more accurate solutions.
COMPUTATIONAL TECHNIQUES FOR UNSTRUCTURED MESHES 19 FIGURE 1.8 Control volume in unstructured mesh. cells, as shown in Fig. 1.8, and conservation is enforced on these cells. In cell-based schemes, all transport variables are stored at cell centroids. One advantage of this arrangement is that conservation can be ensured for arbitrary control volumes with nonconformal interfaces without special interpolation techniques. Consider the mesh shown in Fig. 1.8, for example. Cell CI can be considered to have five faces, a-b-c-d-e, and no special treatment is required. Another advantage is that on triangular and tetrahedral meshes, the ratio of the number of cells to nodes is between three and five. As a result, cell-based storage enjoys better resolution than node-based storage for roughly the same amount of work, which is typically proportional to the number of cell faces. The basic development parallels the one-dimensional finite-volume example presented in Section 1.3.2 but special attention must be paid to mesh nonorthogonality and the lack of structure. Integrating Eq. (1.6) about the control volume CO yields dt <J>4>) AV0 + J2 Ff<t>f = J2Df+ (5* AV)o / / (1.40) where Ff is the mass flow rate out of CO at the face /, AVo is the volume of the cell CO, D f is the transport due to diffusion through the face /, and the summations are over the faces of the control volume. For the purposes of scalar transport, the mass flow rate Ff is assumed to be known. To obtain a set of algebraic equations, all other face quantities as well as volume integrals in Eq. (1.40) must be written in terms of the unknowns, i.e., values of (p at cell and boundary face centroids. Diffusion Term The diffusion term across a face is given by Df=T/V0.A (1.41) A is the area vector associated with the face /. Since the line joining the centroids (associated with the vector e, in Fig. 1.8) is not perpendicular to face /, the gradient of 4> normal to the face,
20 SURVEY OF NUMERICAL METHODS i.e., V0»A, cannot be written purely in terms of a gradient in the e, direction. Decomposing the gradient in directions parallel to es and tangent to A, and using consistent approximations for derivatives, it is possible to write the diffusion term Df as [6] d>\ — do A»A Df = T^L^l+Sf (1.42) ds A«e.s where /— — A«A\ <S/==r7(V0.A-V0.ei —J (1.43) Here V</> at the face is taken to be the average of the derivatives at the two adjacent cells, determined as discussed in Section 1.4.2. Thus, Df is seen to consist of a primary diffusion term consisting of the first term in Eq. (1.42), and a secondary diffusion term, Sf. For orthogonal meshes, A»es =0 and Sf is therefore zero. The primary component is expressed in terms of the difference of (f> values in the two cells adjacent to face / (i.e., (po and <t>\). The primary component is treated implicitly in the discrete equation for the two cells. Convection Term On a structured mesh, a first-order approximation for the value of 4> on the face e in Fig. 1.7 may be obtained using an upwind scheme as 4>e = (pp if Fe > 0 = <pE if Fe < 0 (1.44) Here Fe is the flow rate on the east face e, and is positive if the flow is in the positive x direction. A second-order central-difference approximation for 4>e may be written on a uniform mesh as 4>e = ^^ d.45) Similar schemes can be devised on unstructured meshes. For example, a first-order upwind approximation for <p at the face / can be taken to be the value at the upwind cell in Fig. 1.8: 4>f = ^upwind (1-46) Similarly for a uniform mesh, a central-difference approximate to (pf can be written as <Pf = ^ (1-47) Though higher-order schemes are generally preferred over first-order schemes in CFD, higher- order convection operators frequently result in a loss of boundedness unless specific steps are taken to limit spatial oscillations. A more complete discussion of interpolation schemes for convective operators may be found in Section 1.5.
COMPUTATIONAL TECHNIQUES FOR UNSTRUCTURED MESHES 21 Unsteady Term In the present numerical scheme, the unsteady term is discretized using backward differences. A first-order approximation is ytW (^)o+1 - W (M8) At higher-order representations of the unsteady term can be written using more levels of storage. For unsteady problems, the discretization of the convection, diffusion, and source terms may be carried out at the previous time level n, resulting in an explicit scheme. Alternatively, discretizing these terms at the time level n + 1 results in an implicit scheme. Schemes such as the Crank-Nicholson scheme employ averages of both time levels [1]. Source Term The source term S<j, is first written in linearized form as S(P=SC + SP4>0 (1.49) The forms of Sc and Sp are chosen from stability considerations [1]. As seen earlier, iterative linear solvers require diagonal dominance to converge. If such solvers are to be used, it is prudent to require Sp to be negative to improve diagonal dominance of the coefficient matrix resulting from the discretization process [1]. In most engineering problems, negative values of Sp arise naturally from the physical nature of the source term itself. A useful linearization process is to expand the source term in a truncated Taylor series about the current iterate, denoted by starred values: &)o = (5;)o+(jJ!)o(*°-*o) d-50) By comparing Eqs. (1.49) and (1.50), Sc and Sp may be written as sP=my We note that at convergence, (p = <t>*'< an^ the true value of Sis recovered. Thus, the linearization procedure changes the path to solution, but not the final solution itself. The linearized source term is used in Eq. (1.40). Discrete Equation Set Collection of all the terms results in a discrete equation for each cell involving face neighbors of the cell. Using the first-order upwind approximation with an implicit time-stepping scheme, the overall discrete equation may be written as apcpp = 2_Janb0nb + b (1.52) nb 22 SURVEY OF NUMERICAL METHODS where TfA-A a* = 77T- + Max[-F/,0] as A-es aP = £anb - Sl,AV0 + E&g* + (£ Ff + *^W nb Po"AV<, b = ScAV0 + J2(Sf)nb+\^4>no (1-53) nb Here nb denotes the cell-centroid values associated with the face neighbor cells. Since only face neighbors appear directly in the discrete equation, the resulting coefficient matrix is sparse. Other neighbor values appear indirectly in Sf in the computation of the gradient of cf>, but do not appear in the coefficient matrix. The superscript n + 1 has been dropped for clarity. Thus, the unsuperscripted terms are to be understood as being evaluated at time level n + 1. 1.4.2 Gradient Calculation Accurate computation of <p gradients is an important part of any unstructured mesh technique. Computation of secondary gradient terms requires the knowledge of gradients of (p at the cell centroids. Gradients are also required for the construction of higher-order convection operators (see Section 1.5) as well as in many physical models. For example, velocity derivatives are required to compute the production term in turbulence models or to compute the strain rate for non-Newtonian viscosity models. Unlike for structured grids, these cannot be obtained by simple finite differences. Classical finite-element methods and control-volume finite-element methods [7] address this by analytically differentiating the underlying shape functions. Cell-based finite-volume methods have typically employed two different approaches to gradient calculation, which are now presented. Gradient Theorem Approach One approach is suggested by the gradient theorem which states that for any closed volume AVo enclosed by surface A / V<pdV= / cpdA (1.54) JaVo ' a where dX is the outward-pointing incremental area vector. A discrete version of Eq. (1.54) may be written as where A/ is the outward-pointing face area vector for face /. As a first approximation, the face value 4>f may be computed as the average of the two cells values sharing the face so that 4>f = —z— (1-56) COMPUTATIONAL TECHNIQUES FOR UNSTRUCTURED MESHES 23 FIGURE 1.9 Arrangement of cells in unstructured mesh. Once the derivative has been obtained by using Eqs. (1.55) and (1.56), the initial approximation of the face average value of (p may be successively improved by reconstructing it from the cell value. Thus, from Fig. 1.9, <pf may be written as <Pf = (00 + V0O. Ar0) + (0i + V0, • Ar,) (1.57) By iteratively applying Eq. (1.57) to the gradient calculation in Eq. (1.55), the accuracy of the computed gradient may be improved. Iteration increases the effective stencil of 0 values appearing in the discrete equation and can lead to oscillatory results. In practice, the gradients used to reconstruct face values are limited to the bounds dictated by neighbor 0 values so as to avoid undershoots and overshoots in the solution. The concept of limiting is discussed in Section 1.5.2. Least-squares Approach The least-squares approach computes the gradient at a cell such that it reconstructs the solution in the neighborhood of the cell in a least-squares sense. For example, consider cell CO. It would be desirable to have the value of 0 computed at the centroid of neighbor cell Cj in Fig. 1.10 be equal to (pj. By assuming a locally linear variation of (p, one may write 00 + V0o*Ar/ = (pj (1.58) Here Ar;- is the vector from the centroid of cell CO to the centroid of cell Cj. Substituting for Ary in Eq. (1.58) yields Ax: dx + Ay; — o dy (1.59) for all cells Cj, j = I,..., J surrounding CO. It is convenient to assemble all the equations in matrix form as follows: Md= A0 (1.60) 24 SURVEY OF NUMERICAL METHODS FIGURE 1.10 Nodal locations and vectors used in least-squares calculation of cell gradient. Here M is the 7x2 matrix M = Ax\ Ayi Ax2 Ay2 Axj Ayj and d is the vector of the components of gradients of 4> at cell CO dy\ and A(p is the vector of the differences of (p A(p = (p2 -00 4>j - 4>o (l.6l) (1.62) (1.63) Equation (1.60) represents J equations in the two unknowns (d(p/dx)\0 and (90/9y)|o. Since, in general, J is larger than two, Eq. (1.60) is an overdetermined system. Physically, HIGHER-ORDER SCHEMES FOR CONVECTION OPERATORS 25 this means that a linear profile cannot be assumed for 0 in the vicinity of cell CO that exactly reconstructs the known solution at all of its neighbors. One can only hope to find a solution that fits the data in the best possible way, i.e., a solution for which the root mean square (rms) value of the difference between the neighboring cell values and the reconstructed values is minimized. From Eq. (1.59), the difference in the reconstructed value and the cell value for cell Cj is Ri = Ax/ — 1 ' dx 90 o dy -(4>j-4>o) (1.64) o The sum of the squares of the errors over all the neighboring cells is r = J2r2j (L65) The objective is to find (90/9jc)|o and (30/3y)|o such that R is minimized. By differentiating R with respect to (90/9x)|o and (90/9y)|o and equating to zero, we obtain MrMd = MrA0 (1.66) MrM is a 2 x 2 matrix that can easily be inverted analytically to yield the required gradient V0. The least-squares approach is easily extended to three dimensions. This method places no restrictions on cell shape, and does not require a structured mesh. 1.4.3 Summary and Discussion In this section, an overview of typical unstructured, cell-based, finite-volume techniques has been presented. These techniques involve conservation, albeit over arbitrary polyhedral cells. To obtain diagonal dominance in the linear system, the diffusion term in unstructured formulations is decomposed into primary and secondary terms, with the primary term being implicitly included in the coefficient matrix. First-order accurate convective operators are easily incorporated, but higher-order convective operators are more challenging to formulate, and remain an open research area; this aspect is discussed further in Section 1.5. The computation of gradients, again, is substantially more complicated than for structured meshes. The nominally linear algebraic equation set resulting from the discretization is sparse but not banded. Solution techniques to address this type of problem are discussed in Section 1.6. 1.5 HIGHER-ORDER SCHEMES FOR CONVECTION OPERATORS Over the last two decades, a great deal of effort has been devoted to improving the accuracy of convective operators for both structured and unstructured meshes. The first-order upwind and second-order central-difference schemes described in Section 1.4.1 are usually not suitable for practical use on moderate-sized meshes. Consider convection of a scalar 0 over the square domain shown in Fig. 1.11. The left and bottom boundaries are held at 0 = 0 and 0=1, respectively. The flow field in the domain is given by V= 1.0i+ l.Oj, so that the velocity vector is aligned with the diagonal, as shown. The objective is to compute the distribution of 0 in the domain using the upwind and central-difference schemes for the case when the flow Peclet number Pe = p\\\L/T —> oo. For this case, the solution is 0 = 1 below the diagonal and 0 = 0 26 SURVEY OF NUMERICAL METHODS 1.5 1.0 0 0.5 0.0 -0.5 Exact Upwind scheme Central difference 0.0 0.2 0.4 0.6 y 0.8 .0 0=1.0 FIGURE 1.11 Schematic of scalar transport in a square domain and computed variation of 0 along the vertical centerline. above the diagonal. Figure 1.11 shows the predicted <p values along the vertical center-line of the domain (x = 0.5) using 13 x 16 quadrilateral cells. The first-order upwind scheme smears the 4> profile so that there is a diffusion layer even when there is no physical diffusion. The central-difference scheme, on the other hand, shows unphysical oscillations in the value of <p. Though this example employs a Peclet number of infinity, similar problems manifest themselves for high Reynolds number flows on moderate-sized meshes in many practical simulations. Over the last two decades, a number of improvements to the discretization of the convective operator have been made. Two broad approaches have been taken. One approach has been to develop schemes that are substantially more accurate than the first- and second-order schemes described thus far. For applications such as direct numerical simulation (DNS) of turbulence and for computational aero-acoustics (CAA), ultra-high-accuracy methods using compact finite differences and spectral/spectral-element schemes have been developed [23-25]. The second approach has addressed more conventional applications. Here the focus has been on constructing higher-order upwind-weighted schemes by truncating Taylor series expansions to second order or higher. Methods have been developed to control spatial oscillations in these schemes while retaining formal higher-order accuracy. This latter class of schemes is now described. 1.5.1 Upwind-weighted Higher-order Schemes The upwind scheme may be interpreted as a truncation to O(Ax) of a Taylor series expansion for <p. If face e in Fig. 1.7 is considered for the case Fe > 0, such an expansion in the neighborhood of the upwind point P may be written, assuming a uniform mesh of size Ax, as d<b (x — xP)2 d2d> , 4>(x) = 4>P + (x- xP)-f- + 1TT + °<A*) ox 2! oxz (1.67) By retaining more terms in the Taylor series, a family of upwind-weighted higher-order schemes may be developed. HIGHER-ORDER SCHEMES FOR CONVECTION OPERATORS 27 Second-order Upwind Schemes A second-order upwind scheme may be derived by retaining the first two terms of the expansion in Eq. (1.67). Evaluating Eq. (1.67) at xe = xp + (Ajc)/2, we obtain Ax dd> 4>e=4>P + ^1T d-68) 2 ox This approximation has a truncation error of O(Ax)2. To write <pe in terms of cell centroid values, d<p/dx must be written in terms of cell centroid values. On a one-dimensional grid, the derivative at P may be written using either a forward, backward, or central difference formula to give three alternative second-order schemes. For example, if d<p/dx is written using 30 <Pp-<Pw (169) dx Ax we obtain 0e = 0P + ^^ (1.70) This is the basis of the Beam-Warming scheme [26]. Third-order Upwind Schemes Third-order accurate schemes may be derived by retaining the second derivative in the Taylor series expansion as (jc -xp)2 d2<p dx ' 2! ~d~x2 <P(x)=<pP + (x-xp)^- + -—^~ttj (1-71) Using cell-centroid values to write the derivatives d<p/dx and d2<p/dx2 we obtain + 0(Ax2) (1.72) 4>E -4>W , „, A 2x dx 2A;t and d2<P <Pe+<Pw-2<Pp +o(Ax2) (1?3) dx2 (Axf Inserting Eqs. (1.72) and (1.73) into Eq. (1.71) and rearranging yields 4>e + 4>p 4>e + 4>w - 2<t>p (1.74) This scheme is called the QUICK scheme (quadratic upwind interpolation for convective kinetics) [27]. These schemes are not truly multidimensional in that upwinding occurs along grid lines. Also, line structure is required in these schemes, making them unsuitable for use on unstructured meshes. 28 SURVEY OF NUMERICAL METHODS Extension to Unstructured Meshes Formulation of higher-order schemes for unstructured meshes is an area of active research and new ideas continue to emerge. A second-order accurate unstructured mesh scheme based on the ideas in the previous section is now presented. The starting point is the multidimensional equivalent of Eq. (1.68). Referring to Fig. 1.9, if Ff > 0, <p may be written using a Taylor series expansion about the upwind cell centroid as 4> (x, y) = 4>0 + (V0)o • Ar + 0(|Aif) (1.75) where Ar is Ar = u-*o)i+(:y-:yo)j (1.76) To find the face value cpf, Eq. (1.75) is evaluated at Ar = Aro, as shown in Fig. 1.9, to give 4>f = 00 + (V0)o • Ar0 + O(|Ar0|2) (1.77) As with structured meshes, (V0)o must be evaluated. This can be done using either of the techniques described in Section 1.4.2. 1.5.2 Control of Spatial Oscillations The schemes described in the previous section give higher-order accuracy but can still produce spatial oscillations in steady problems. If used in conjunction with the Euler explicit scheme [26] for time-stepping in unsteady problems, these schemes are unconditionally unstable. A number of research efforts have tried to remedy these problems, two of which are described below. Added Dissipation Schemes One technique to eliminate spatial oscillations is to use one of the higher-order schemes developed in the previous sections, but to damp out the oscillations through the explicit use of an artificial viscosity tailored to maintain the desired formal accuracy of the scheme [28]. In the case of the central-difference scheme, a dissipation term involving a discrete fourth derivative is used. Thus, referring to Fig. 1.7, <pe can be expressed as ^ = <PP + <PE + £(4) {(pEE _ 3(f)E + 3(f)p _ <hf) (178) This amounts to adding a term of the type Ax?'(d4(f)/dx4) to the governing equation. Since the additional term is 0(Ax3), it does not change the formal second-order accuracy of the central- difference scheme. Near discontinuities in <p, it is necessary to add a stronger dissipation and a second-order term is also introduced, which reduces the formal accuracy of the scheme to first order [28]. The resulting expression for <pe is 4>e = <Pp^4>E - e<2) (fa - <pP) + sf] (4>EE - 3<pE + 3<fp - <pw) (1-79) To use this type of idea successfully, it is necessary to choose the coefficients ee and e^ , and also to detect discontinuities and shocks, so that ee can be made small in the bulk of the flow. HIGHER-ORDER SCHEMES FOR CONVECTION OPERATORS 29 Flux Limiters The use of Eq. (1.68) does not guarantee that <pe is bounded by <pp and <pE, or by any other stencil in the neighborhood of face e, leading to spatial oscillations in the computed values of <p. Schemes employing flux limiters seek to overcome this problem by limiting the contribution of the gradient term using Ax dd> 6P+*e(re) —-Hp 2 ox (1.80) Here * is limiter function chosen to assure the boundedness of <p. The gradient (d(p/dx)\p depends on the scheme being implemented. For the Beam-Warming scheme, for example, t>p + Ve(re) Ax (pp — <pw (1.81) 2 Ax The limiter * is a function of the variable re, which is itself a function of differences of <p: re = ^ ~^P (1.82) A variety of limiter functions have been used in the literature, including the minmod, superbee, van Leer, and van Albada limiters [29, 30]. The corresponding functional variation is shown in Fig. 1.12. The advantage of using a limiter becomes readily apparent when considering the problem of linear advection of a square wave form with a uniform velocity u. Since there is no diffusion, the numerical scheme must preserve the shape of the wave form during its translation. Figure 1.13 shows the prediction of the Beam-Warming scheme with and without limiters. In the absence of limiters, oscillations in the shape of the wave begin to develop and grow with time, and are particularly evident at corners. These oscillations disappear when limiters are used. For a more detailed discussion of these methods, see [29, 30]. 2 1.5 ¥ 1 0.5 n * / /,' -W :/.'/ if 1 *-• *-• *-• minmod • • ■ • Superbee van Leer van Albada 1.1,1 FIGURE 1.12 Limiter functions. 30 SURVEY OF NUMERICAL METHODS 1.5 Beam-Warming Exact solution 0.5 -*»«•" -0.5 I J_ 0.2 0.4 0.6 x/L 0.8 1.5 minmod Exact solution 1 - 0.5 •W»—W» pllllMIW •—i -0.5 J_ 0.2 0.4 0.6 x/L 0.8 FIGURE 1.13 Linear advection of a square wave using Beam-Warming scheme (a) without limiter, (b) with minmod limiter, and (c) with superbee limiter. LINEAR SOLVERS 31 1.5 0.5 -0.5 • Superbee — Exact solution J i_ _L 0.2 0.4 0.6 x/L FIGURE 1.13 (continued). 1.5.3 Summary and Discussion In this section, we have reviewed widely used higher-order schemes for the convection operator. Schemes based on higher-order truncations of the Taylor series do yield more accurate schemes, but require special manipulation using either artificial dissipation or limiting to control spatial oscillations. As is clear from the development, the schemes described in this section require line structure. The extension to unstructured meshes remains an area of active research. 1.6 LINEAR SOLVERS Attention is now turned to another important aspect of the numerical process, namely the solution of linear algebraic equation sets. As discussed in Section 1.3.3, regardless of what discretization process is used, the result is a coupled algebraic set of equations in the discrete values of <p. The resulting coefficient matrices have two important characteristics. First, they are sparse, and in the case of structured meshes, they are banded. Second, the coefficient matrices are usually approximate; for nonlinear problems, for example, the coefficient matrix is updated repeatedly as a part of outer iteration loop to resolve nonlinearities. Over the last three decades, iterative methods have emerged as the preferred approach in CFD. They are naturally suited for handling nonlinearities since the coefficient matrix can be updated during the iterative process. In addition, operation counts as well as storage typically scale as O(N), where N is the number of unknowns. The specific solution techniques depend on whether the underlying mesh is structured or not. Special algorithms taking advantage of band structure are used for structured meshes. For unstructured meshes, matrix sparseness is exploited. Of course, linear solvers for unstructured meshes can be used for structured meshes as well. Typical solution techniques will now be considered. 32 SURVEY OF NUMERICAL METHODS 1.6.1 Line Gauss-Seidel Method The line Gauss-Seidel technique (LGS) is widely used with structured meshes. The central component of LGS is a direct solver for tridiagonal systems called the tridiagonal matrix algorithm (TDMA), which is applied iteratively along lines in the structured mesh. The procedure is also sometimes called the line-by-line TDMA. The TDMA is essentially a Gaussian-elimination procedure which takes advantage of the tridiagonal structure of the matrix. Tridiagonal Matrix Algorithm Consider the equation system ajcpj = bi<t>i+\ + Cjfr-i + di (1.83) This type of equation results from the discretization of a ID convection-diffusion equation using the techniques described previously. An equation of this type maybe written for each grid point i. For the first grid point, i = 1, c-t — 0, and for the last grid point, i = N, bi is zero. Thus, for point i = 1 01=^102+21 d-84) Equation (1.84) may now be used to eliminate 0i in favor of 02 in the equation for i = 2, resulting in 02 = ^203 + Ql (1.85) In general, 0, = Pi<pi+i + Qt (1.86) Here, Pi bi a, - CjPj-i di + c: Qi-i Qi = ^ (1.87) It should be noted that P\ = b\/a\, Q\ = d\/a\, and P^ = 0. The equation for the last point, i — N, yields <Pn = Qn (1-88) The implementation of the algorithm is done in two parts. In the forward step, the coefficients Pi and Qi, i = 1, 2, ..., N are found using Eq. (1.87) recursively, and <pN is calculated. In the backward sweep, Eq. (1.86) is used recursively for i — N — 1, N — 2, ..., 1 to recover 0,-. Line-by-line Algorithm For two- and three-dimensional structured meshes, the equation system is banded, but is not tridiagonal. In these cases, the TDMA is applied iteratively along LINEAR SOLVERS 33 w O w N O n P O s O s Ax E e O Ay FIGURE 1.14 Two-dimensional Cartesian mesh. lines. For two-dimensional structured meshes (see Fig. 1.14), the discrete equation for a point P may be written as aP(pp = aE<pE +aW(pw +aN<pN + as<ps +b (1-89) Here each grid point P is connected to its four neighbor points E, W, N, S. A tridiagonal system may be created along each line by assuming values on the neighbor lines to be temporarily known so that aP(pp = aN<pN +as<ps + b* (1-90) where b* = (b + a.E<p*E + aw<t>w)' an<^ me starred values are prevailing values of 4>. The procedure starts with a guess of all grid point values of 0. Starting with a vertical grid line / = 1, Eq. (1.90) is solved, with b* being evaluated from the current guess of <p. The TDMA is used along / = 1 to obtain <p values along the line. These are, of course, provisional since b* is based on guessed or prevailing values. The calculation now shifts to line / = 2, and the procedure is repeated; the most recently computed values on / = 1 are used to construct b*. All / lines are visited in this fashion. The same procedure is then applied in the J direction. Several such iterations may be done to obtain a converged solution for <f>. For three dimensions, grid planes are visited sequentially and iteratively, applying the LGS on each plane until overall convergence is obtained. Other iterative techniques for structured meshes include the alternating direction implicit (ADI) technique, which uses the TDMA in conjunction with a time-stepping scheme [31], incomplete lower-upper (ILU) decomposition [31], and the strongly implicit procedure (SIP) [32], 1.6.2 Multigrid Methods The LGS technique cannot be used for unstructured meshes since there are no easily identifiable lines in the domain. It may be recalled from Section 1.3.3 that the Gauss-Seidel technique does 34 SURVEY OF NUMERICAL METHODS not require line structure, and can be applied to sparse matrices with diagonal dominance, making it ideal for solving the sparse systems resulting from unstructured discretizations. However, the rate of convergence is too slow for practical use. Multigrid techniques may be used to accelerate Gauss-Seidel iteration, although techniques other than Gauss-Seidel may be used as the core solver in multigrid techniques as well. Convergence Behavior of Jacobi and Gauss-Seidel Techniques Although the Jacobi and Gauss-Seidel methods are easy to implement and are applicable for matrices with arbitrary fill patterns, their usefulness is limited by their slow convergence characteristics. The usual observation is that residuals drop quickly during the first few iterations but afterward the iterations "stall." This behavior is especially pronounced for large matrices. To demonstrate this behavior, a one-dimensional ID Laplace equation over a domain of length L is considered, that is, t^=0 (1.91) dxl Dirichlet boundary conditions are applied, so that 0(0) = 0(L) = 0. The exact solution to this problem is simply <p(x) = 0. The behavior of iterative schemes may be studied by starting with an arbitrary initial guess. The error at any iteration is then simply the current value of the variable <p. To distinguish the convergence characteristics for different error profiles, the current problem is solved with an initial guess given by kjixs , (1.92) Equation (1.92) represents Fourier modes and k is the wave number. Equation (1.91) is dis- cretized using the techniques described previously. Starting with Eq. (1.92) as the initial guess, the Gauss-Seidel method is applied for 50 iterations on a grid with N = 64. The maximum error in the solution is shown in Fig. 1.15a. With an initial guess corresponding to k = 1, the maximum error has reduced by less than 20% after 50 iterations. On the other hand, with a guess of k = 16, the error reduces by over 99% after merely 10 iterations. An arbitrary initial guess would contain more than one Fourier mode. To see what the scheme does in such cases, an initial guess consisting of modes corresponding to k = 2, 8, and 16 is used. For this situation (1.93) From Fig. 1.15b it can be seen that the error drops rapidly at first but then decreases much more slowly. The Gauss-Seidel scheme is very effective at reducing high wave-number errors. This accounts for the rapid drop in residuals at the beginning. Once the high-wave-number components are removed, only the smooth error profiles remain for which the scheme is not very effective and thus convergence stalls. Using this sample problem, another commonly encountered shortcoming of the Gauss-Seidel iterative scheme can be observed. It is found that convergence deteriorates as the grid is refined. Retaining the same form of initial guess and using k = 2, the previous problem is solved on three different grids, N = 32, 64, and 128. The resulting convergence plot, shown in Fig. 1.16, indicates that the rate of convergence becomes worse as the mesh is refined. On a finer grid, 1 3 /2ttxj^ sin L V l , 1 +sin (&7TXj S < L , \ + sin | 'I6nxi\~ LINEAR SOLVERS 35 (a) (b) 1.0 20 30 Iteration 20 30 Iteration FIGURE 1.15 Convergence of Gauss-Seidel method on N = 64 grid for (a) initial guesses consisting of single wave numbers and (b) initial guess consisting of multiple modes. it is possible to resolve more modes. The higher modes converge quickly but the lower modes appear more "smooth" and hence converge more slowly. The initial error profile behaves like a high-wave-number profile on a coarser grid, but like a low-wave-number profile in a finer grid. A quantitative analysis of these behaviors may be found in [33]. The multigrid method seeks to accelerate the convergence rate of iterative linear solvers by involving coarser grids. It is necessary that the accuracy of the final solution be determined only by the finest grid that is employed. This means that the coarse grids can provide only corrections or guesses to the fine-grid solution. As the fine-grid solution approaches the exact answer, the influence of any coarse levels should approach zero. Thus, it is enough to solve 36 SURVEY OF NUMERICAL METHODS 0.01 ' 0 10 20 30 40 50 Iteration FIGURE 1.16 Convergence of Gauss-Seidel method on different-sized grids for initial guess corresponding to k = 2 mode. only an approximate problem at the coarse levels since their solution will not govern the final accuracy that is achieved. Two different multigrid approaches are available in the literature. The first is the geometric multigrid or full approximation storage (FAS) procedure ([34, 35], for example). Here, a sequence is created of coarse multigrid meshes that are not necessarily nested. (A nested multigrid mesh is one in which each face of the coarse mesh is composed of the faces of original fine mesh.) The governing equations are discretized on each coarse level independently, and the solution errors at each level are used to accelerate the solution on finer levels. An alternative is the algebraic multigrid method [36, 37], which is now described. Algebraic Multigrid Method The algebraic multigrid (AMG) method is well-suited for unstructured meshes since it does not involve discretization of the governing equations on coarser grids. Instead, a hierarchy of coarse equation sets is constructed by grouping a number of fine-level discrete equations. Residuals from a fine-level relaxation sweep are "restricted" to form the source terms for the coarser-level correction equations. The solution from the coarser equations is in turn "prolongated" to provide corrections at the finer level. The use of different grid sizes permits the reduction of errors at all wavelengths using relatively simple smoothing operators. It is useful to represent the discrete equation at point i at a grid level / as J^M'u<l>j+Si=0 (1.94) ;' where j is the index of a neighbor cell. The algebraic multigrid method visits each ungrouped fine-level cell and groups it with n of its neighboring ungrouped cells for which the coefficient Mjj is the largest [37]. The AMG LINEAR SOLVERS 37 performs best when the group size, n, is 2. The coefficients for the coarse-level equations are obtained by summing the coefficients of the fine-level equations: M'ul = EE M'u (L95> ieG, jeGj where the superscripts denote the grid level and G/ is the set of fine-level cells that belong to the coarse-level group /. This results in a system of equations of the same form as the fine level (i.e., Eq. (1.52)), with 1/nth the number of unknowns Ml+l<plj+l -J2R'i=° (L96) ieGi where /?' is the residual in the fine-level equation at the current iteration = Mjjtf + Si (1.97) The value <p*1 is the current iterate. The process is repeated recursively until no further coarsening is possible. A variety of strategies, such as the V, W, and Brandt cycles [38] may be used to cycle between the grid levels. The solution at any level is obtained by a Gauss-Seidel iterative scheme and is used to correct the current iterate at the next finer level. Thus, for all i € Gf. <t>\ = til + <t>\+x (1.98) Intelligent mesh agglomeration strategies for creating coarse-level meshes are critical for obtaining significant convergence acceleration using multigrid schemes. Lonsdale [37] employed an agglomeration strategy that grouped together cells connected by the largest coefficients. This strategy has proven effective for a variety of problems involving high thermal conductivity ratios, large domain aspect ratios, and disparate grid sizes. For a more in-depth discussion of mesh agglomeration strategies, see [39], Algebraic multigrid methods used with sequential solution procedures have the advantage that the agglomeration strategy can be equation-specific; the discrete coefficients for the specific governing equation can be used to create coarse mesh levels. Since the coarsening is based on the coefficients of the linearized equations, it also changes appropriately as the solution evolves. This is especially useful for nonlinear and/or transient problems. In some heat transfer applications, however, the mutual coupling between the governing equations is the main cause of convergence degradation, and sequential solution procedures do not perform well. Typical examples include flows with large body forces such as high-Rayleigh number buoyant flows, or flows with large swirl numbers. Geometric or full-approximation storage multigrid methods that solve the coupled problem on a sequence of coarser meshes may offer better performance in such cases. 1.6.3 Gradient-search Techniques Gradient-search techniques have recently found increased use in CFD because of their ability to solve the equation sets resulting from unstructured discretizations. For symmetric positive- definite matrices, the original problem A0 = B can be shown to be equivalent to the minimization of a functional F defined as F = \4>TA(p - <pTB (1.99)
38 SURVEY OF NUMERICAL METHODS The method of steepest descent essentially finds the minimum of F by using search directions opposite to VF. This search process is usually too slow for practical use. In contrast, conjugate gradient methods [40] employ search directions that are conjugate to all previous search directions; preconditioning may be used to improve the speed of conjugate gradient techniques. Few of the linear systems resulting from CFD problems are either symmetric or positive-definite. Extensions of the method to address asymmetric matrices include biconjugate gradients [31], CGSTAB and BI-CGSTAB [41, 42], and GMRES [43]. 1.7 COMPUTATION OF FLUID FLOW The class of problems considered thus far involve convection and diffusion of a scalar in the presence of a known flow field. Even though the continuity and momentum equations have the same form as the general scalar transport equation, Eq. (1.6), a number of additional factors must be considered in the computation of the flow field. In three dimensions, the unknowns to be computed are the three velocity components and the pressure. The equations available for their computation are the three momentum equations and the continuity equation. A number of issues arise in the storage and computation of pressure and velocity, which are now discussed. 1.7.1 Storage of Pressure and Velocity For simplicity, we consider the uniform structured two-dimensional mesh shown in Fig. 1.14. The pressure p and the velocity vector V are assumed to be stored at the cell centroid. Following the practices outlined in previous sections, the discrete u- and u-momentum equations may be written as apllp ~ ^anb"nb + (Pv, ~ Pe)&y + bu nb aPvp --= ^2anbvnb + (ps - p„)Ax + bv (1.100) The summation over nb denotes a summation over the neighbors E, W, N, and S in Fig. 1.14. Here, the pressure gradient is written in terms of the values of pressure on the control volume faces. Since the pressure is stored at the cell centroids and not at the face, interpolation is necessary. For a uniform grid, pe may be found by linear interpolation between cell centroids from Pe = PE \ PP (1-101) The other face pressures may be similarly interpolated. Incorporating this assumption into the discrete momentum equations yields Ay apllp = 2^«nb"nb + (PW - Pe)-X~ +bu nb ^-^ Ax aPVp = 2^,anbVnb + (PS - P\)— + bv (1.102) nb
COMPUTATION OF FLUID FLOW 39 The pressure terms occurring in the momentum equations are seen to involve alternate pressures rather than adjacent pressures; the value pp does not appear in the equations at all. Next, attention is turned to the continuity equation. Discretizing the continuity equation, we obtain (pu)e Ay - (pu)w Ay + (pv)n Ax - (pv)s Ax = 0 (1.103) The face velocities are not available directly but must be interpolated from the cell centroid values to the face. For a uniform grid, for example, (pu)e may be found by linear interpolation as (pu)e = ^ >P 2 ^ >E (1.104) The other terms in Eq. (1.103) may similarly be interpolated. Gathering terms, the discrete continuity equation for the cell P is (pu)E Ay - (pu)w Ay + (pv)N Ax - (pv)s Ax = 0 (1.105) An examination of the discrete continuity equation for cell P reveals that it does not contain the velocity for cell P. Consequently, a checkerboarded velocity pattern of the type shown in Fig. 1.17 can be sustained by the continuity equation. If the momentum equations can sustain this pattern, the checkerboarding would persist in the final solution. Since the pressure gradient is not known a priori, but is computed as a part of the solution, it is possible to create pressure fields whose gradients exactly compensate the checkerboarding of momentum transport implied by the checkerboarded velocity field. Under these circumstances, the final pressure and velocity fields would exhibit checkerboarding, even though the discrete momentum and continuity equations are perfectly satisfied. In practice, perfect checkerboarding is rarely encountered because of irregularities in the mesh, boundary conditions, and physical properties. Instead, the tendency toward checkerboarding manifests itself in unphysical wiggles in the velocity and pressure fields. It should be emphasized that these wiggles are a property of the spatial discretization and would be obtained regardless of the method used to solve the discrete equations. A number of different remedies have emerged to address the checkerboarding problem, some of which are described below. Staggered Storage of Pressure and Velocity A popular remedy for checkerboarding on structured meshes, either regular or body-fitted, is the use of a staggered mesh [1]. A typical staggered mesh arrangement is shown in Fig. 1.18. We distinguish between the main cell or control volume and the staggered cell or control volume. The pressure is stored at centroids of the main cells. The velocity components are stored on the faces of the main cells as shown, 100 o 200 O 100 o 200 o FIGURE 1.17 Checkerboarded velocity field.
40 SURVEY OF NUMERICAL METHODS W IV « o 1 o 1 o p <t> o 1 o s \ o "e : e O 1 o - 1 o Main cell Cell for u velocity Cell for i» velocity FICUKE 1.18 Staggered mesh. and are associated with the staggered cells. The u velocity is stored on the e and w faces and the v velocity is stored on the « and s faces. Scalars such as enthalpy or species mass fraaion are stored at the centroids of the cell P. All properties, such as density and T, are stored at the main grid points. The cell P is used to distretize the continuity equation as (piij, Av - (/>»)„. At + (piFj„ Ajc - (pi;),. Ax = 0 (1.106) However, no further interpolation of velocity is necessary since discrete velocities are available directly where required. Thus, the possibility of velocity checkerboarding is eliminated. For the momentum equations. Ihc staggered control volumes are used to write momentum balances. The procedure is the same as that described previously, except (hat the pressure gradient term may be written directly in terms of the pressures on the faces of the momentum control volumes, without interpolating as in Eq. (1.101). Thus, for the discrete momentum equation for the velocity uf. the pre\sure term is (Pf - Pe) At Similarly, for the velocity u„. ihc pressure term is Ipp ~ l*«) Aj (1.107) (1.108) Thus, wiih the use of El)*. (1.107) and (1.108), there is no longer a dependence on alternate pressure values; adjacent pressure values appear in the balance and do not support pressure checkerboarding. It may be noted that Ihe mesh for the it-momentum equation consists of nonoverlapping cells that (ill the domain completely. This is also inie for the i> momentum equation and the continuity equation. The control volumes for «. i>. and p overlap each other, but this is of no consequence. Furthermore, since the velocities are avjilable on the main cell
COMPUTATION OF FLUID FLOW 41 faces, face flow rates can easily be computed where they are needed for the discretization of the convective terms in the scalar transport equation. For body-fitted meshes, components of either the covariant or the contravariant velocity vector are stored on the faces [5, 15]. In all other respects, the basic idea is the same as that described here. Unequal Order Schemes For unstructured meshes in either the finite-volume or the finite- element context, pressure-velocity staggering is difficult to implement because of geometric complexity. As a result, control-volume finite-element methods (CVFEM) as well as conventional finite-element methods have used unequal-order interpolation of pressure and velocity [7, 44, 45], Here, pressure is effectively interpolated to lower order than velocity. In CVFEM, this is accomplished by resolving the pressure on a macroelement, whereas the velocity is resolved on a subelement, which is formed by dividing the macroelement into smaller elements. Alternatively, a lower-order interpolation function may be used for pressure vis-a-vis velocity [44, 45], Colocated Schemes Both mesh staggering and unequal-order interpolation require bookkeeping and storage of extra geometric information. As a result, research has been directed to the development of colocated or equal-order interpolation schemes. Here, pressure and Cartesian velocity components are both stored at the cell centroid. However, the interpolation of the face velocity from cell-centered velocities is modified so as to remove checkerboarded pressure modes [17, 46, 47]. The modified interpolation is equivalent to an added dissipation that damps spatial wiggles in pressure and velocity; consequently, these schemes are sometimes referred to as added-dissipation schemes. Formulations for regular, body-fitted, and unstructured meshes have appeared in the literature [6, 17, 46]. A formulation for an equal-order CVFEM has been published in [47]. In the finite-element context, formulations interpolating velocity and pressure to equal order have been published [18, 48], In the discussion that follows, an orthogonal, one-dimensional, uniform mesh is used for clarity. The mesh and associated nomenclature are shown in Fig. 1.7. Adopting a linear interpolation of pressure between cell centroids, the discrete M-momentum equations for cells P and E may be written as V^ . ,« . Pw - Pe aPup = 2^flnb"nb +t>P -\ nb aEuE = Y, flnb«nb + buE + PP ~2PEE (1.109) nb For convenience, Eqs. (1.109) are recast as - ,jPw-Pe up = up + dp ue = uE +dE (1.110) where dP = \/ap and dE = l/aE, and Snb flnb"nb + b"p Up = aP aE=Enbanbunb+bl (inl) aE
COMPUTATION OF FLUID FLOW 43 1.7.2 Solution Methods Thus far, issues related to discretization of the continuity and momentum equations have been examined. Attention is now turned to the solution of these equations. One alternative is to employ a direct solution technique. The discrete continuity and momentum equations over the entire domain may be assembled into a large algebraic system of the form M(p=b (1.115) where M is a matrix of size N x N x 4, and A' is the number of grid points. For a colo- cated formulation, the unknowns consist of the three velocity components and pressure at the cell centroids of all A' cells. This approach has not thus far been tenable for most practical industrial problems with present-day computational power. However, the emergence of efficient multifrontal solvers [13] have made this approach viable for specialized applications, and the technique may find greater use in the future as computational power increases. For practical CFD problems, sequential iterative solution procedures are frequently adopted because of low storage requirements and reasonable convergence rate. However there is a difficulty associated with the sequential solution of the continuity and momentum equations for incompressible flows. To solve a set of discrete equations iteratively, it is necessary to associate the discrete set with a particular variable. For example, the discrete energy equation is used to solve for the temperature. Similarly, the discrete u-momentum equation is used to solve for the u velocity. If the continuity equation were to be used to solve for pressure, a problem would
|
2019-12-07 06:12:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5471611618995667, "perplexity": 1378.634875824607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540496492.8/warc/CC-MAIN-20191207055244-20191207083244-00014.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/103230/how-big-can-a-nebula-be
|
How big can a nebula be?
How big could a nebula be? If a spaceship were traveling 300,000 times the speed of light (assuming this were possible and had no other effects, such as time travel or time dilation) is it plausible that it would take several hours to traverse a distance equivalent to the average width of a nebula?
• The Orion Nebula is 24 light-years across. 24 years is 210,000 hours, so it's within the required order of magnitude. – AlexP Jan 26 '18 at 2:13
• List of the largest nebula – StephenG Jan 26 '18 at 3:15
• If you want to avoid paradoxes involving arriving at places before the light you saw when you left for them (and perhaps before they existed !) you would effectively need an infinite speed of light. If the speed of light is finite and you can travel faster than it, then you cannot avoid such paradoxes. – StephenG Jan 26 '18 at 3:20
• How would you define a "nebula"? There are many objects that might or might not be considered nebulae, depending on your choice of definition. – HDE 226868 Jan 26 '18 at 4:12
Here's the gist of my answer, for simplicity:
• The largest nebulae are HII regions, clouds of gas ionized by young hot stars forming inside them.
• We can calculate the radius of a sphere corresponding to the maximum distance at which neutral hydrogen gas can be ionized - a proxy for the size of the HII region.
• This method can be adapted for clusters of stars, not just individual ones.
• Basic assumptions about the masses of molecular clouds and the star-forming efficiency show that the maximum size of an HII region should be about 2150 light-years. This is a couple times the size of the largest known HII regions.
Essentially, yes, you can have extremely large nebulae that would take a long time to cross, even at exceptionally high speeds.
Large nebulae are HII regions
If you look at some of the largest nebulae currently known, you might notice that many of them, measuring hundreds of light-years in diameter, are HII regions. They're are stellar cradles, clouds of hydrogen ionized by the young, newly-formed stars inside them. Their evolution is governed by the emission from the hottest massive stars that provide the ionizing radiation, and will eventually disperse the clouds entirely. HII regions are good choices for large nebulae simply because they're extremely massive, and may contain dozens of stars.
Many of the largest nebulae are HII regions:
• The Tarantula Nebula
• The Carina Nebula
• NGC 604
HII regions aren't always the sites of starbirth; they can form (at smaller scales) around single stars. Barnard's Loop is a famous example of a large HII region that is thought to have formed from a supernova. However, the very largest HII regions are indeed these descendents of molecular clouds, containing clusters of young stars.
Strömgren spheres
A popular model of a (spherical) HII region is the Strömgren sphere. A Strömgren sphere is a cloud of gas embedded in a larger cloud. The external gas is neutral beyond a distance called the Strömgren radius; inside the Strömgren radius, the light from one or more stars ionizes the hydrogen, forming an HII region. We can calculate the Strömgren radius $R_S$ via a simple formula: $$R_S=\left(\frac{3}{4\pi}\frac{Q_*}{\alpha n^2}\right)^{1/3}$$ where $n$ is electron number density, $\alpha$ is called the recombination coefficient, and $Q_*$ is the number of photons emitted by the star per unit time. We might see a number density of $n\sim10^7\text{ m}^{-3}$ inside the nebula, and at temperatures of $T\sim10^4\text{ K}$, $\alpha(T)\approx2.6\times10^{-19}$. All that remains is to calculate $Q_*$, which can be found by the formula $$Q_*=\int_{\nu_0}^{\infty}\frac{L_{\nu}}{h\nu}d\nu$$ where we integrate the Planck function, weighted by frequency and multiplied by the surface area of the star, over all frequencies greater than $\nu_0=3.288\times10^{15}\text{ Hz}$, the lowest frequency that can still ionize hydrogen. $L_{\nu}$ is a function of the star's effective temperature $T_{eff}$. If you want to instead use the star's mass as a parameter, we know that that $T\propto M^{4/7}$ works as an approximation for many stars (and $R\propto M^{3/7}$). I've found that it works poorly on low-mass ($<0.3M_{\odot}$) stars, but there, it deviates only by a factor of 2, depending on your choice of proportionality constant.
Here's my results, plotting $R_S$ as a function of $M$:
This indicates that even single, massive stars can still produce HII regions up to 100 light-years in diameter, which is quite impressive.
Multiple stars and clusters
The above model assumes that there is only one star at the center of the sphere. However, most of the large HII regions I mentioned above have multiple stars - or even entire star clusters. Therefore, we need to figure out how large our HII region can be if we assume that it contains a cluster of hot, massive stars inside it. Adapting a model of Hunt & Hirashita 2018, let's say that the cluster is static - no stars are being born and no stars are dying. Additionally, assume that the cluster obeys some initial mass function $\phi(M)$ that describes how many stars are expected to have masses in a given range. We now have a more complicated expression for $Q$, the total number of ionizing photons emitted: $$Q=\int_0^{\infty}Q_*(M)\phi(M)dM$$ where we acknowledge that $Q_*$ is a function of stellar mass. This is still easily calculable for any cluster of $N$ stars, once you pick your IMF. We can then plug this values into our formula for $R_S$. The fact that $R_S\propto Q_*^{1/3}$ does mean that we need a large number of massive stars to reach diameters of $\sim1000$ light-years, but it's still quite possible.
Results for individual clusters
I applied the Salpeter IMF and the above formulae to a number of HII regions, most containing large numbers of stars. My (naive) assumptions actually gave me decent results (code here): $$\begin{array}{|c|c|c|c|}\hline \text{Name} & \text{Number of stars} & \text{Diameter (light-years)} & 2R_S\text{ (light-years)}\\\hline \text{Tarantula Nebula} & 500000^1 & 600 & 1257\\\hline \text{Carina Nebula} & 14000^2 & 460 & 382\\\hline \text{Eagle Nebula} & 8100 & 120 & 318\\\hline \text{Rosette Nebula} & 2500 & 130 & 215\\\hline \text{RCW 49} & 2200 & 350 & 206\\\hline \end{array}$$ 1 Space.com
2 NASA
With the exception of the Eagle Nebula, these are all within a factor of two from the accepted values. There are some things I could change that might increase the accuracy of my models:
• Assume a more precise IMF, like the Kroupa IMF
• Consider that some of these regions contain an inordinate amount of massive stars
• Account for stellar evolution; many of the stars here are not on the main sequence
Nevertheless, this is a start, and I invite you to play around with it a little.
Upper limits
One question still remains, however: How large can an HII region be? We've seen that star-forming regions of tens or hundreds of thousands of stars can ionize gas clouds hundreds of light-years across. Is there an upper limit to the number of stars produced in such a region, or even to the size of the star-forming region itself?
Consider the total mass of a stellar population with the Salpeter initial mass function $\phi(M)$: $$\mathcal{M}=\int M\phi(M)dM=\phi_0\int M\cdot M^{-2.35}dM$$ where $\phi_0$ is a proportionality constant (see the Appendix), and the integral is over the mass range of the population. If we can place an upper limit on $\mathcal{M}$, we can place an upper limit on $\phi_0$ (and $N$). The most massive giant molecular clouds have masses of $\sim10^{7\text{-}8}M_{\odot}$, and with a star formation efficiency of $\varepsilon\sim0.1$, we should expect $\mathcal{M}_{\text{max}}\sim10^{6}M_{\odot}$. This corresponds to $\phi_{0,\text{max}}\approx1.7\times10^5$. This turns out to be roughly a factor of 5 higher than $\phi_0$ for our model of the Tarantula Nebula. Now, $R_S\propto Q^{1/3}\propto\phi_0^{1/3}$, so we should expect an upper limit on the size of a hypothetical HII region to be $1257\cdot 5^{1/3}\approx2149$ light-years.
Appendix
The formula for $L_{\nu}$ is actually $L_{\nu}=(4\pi R_*^2)\cdot\pi I_{\nu}$, where $R_*$ is the radius of the star and $I_{\nu}$ is the Planck function. Therefore, $Q_*$ is, more, precisely, $$Q_*=4\pi^2R_*^2\int_{\nu_0}^{\infty}\frac{2h\nu^3}{c^2}\frac{1}{\exp(h\nu/(k_BT))-1}\frac{1}{h\nu}d\nu$$ The Salpeter IMF $\phi(M)$ is the function defined by $$\phi(M)\Delta M=\phi_0M^{-2.35}\Delta M$$ such that $$N(M_1,M_2)=\int_{M_1}^{M_2}\phi(M)dM$$ is the total number of stars with masses between $M_1$ and $M_2$ in a given population. $\phi_0$ is a normalization constant such that $\phi(M)$, integrated over the entire mass range, gives the correct total number of stars in the cluster being studied.
• I had squirrels eating tomatoes out of my garden so I bought this 155mm howitzer to deal with them... +1 for info :) – kingledion Sep 7 '18 at 16:12
The Tarantula nebula is the largest known nebula at 200 parsecs (650 ly) across.
At 300,000 times the speed of light, this would take just under 20 hours to cross.
Edit:
From another source, the Tarantula nebula's size is given at 40 arcminutes at 179 kly distance. I calculate that to be 2080 ly across. I suppose it depends on how you define the boundaries of the nebula. This would take 60 hours to cross at the given speed.
• " I suppose it depends on how you define the boundaries of the nebula." - exactly. Moon has atmosphere denser than nebulas are. With such things, borders are very much matter of definition. – Mołot Jan 26 '18 at 15:33
It's hard to say how large it conceivably could be since the definition of a "nebula" can be a bit... nebulous? Every galaxy has a very loose haze of particles around it and in principle what we call a "nebula" is just an unusually dense conglomeration of these particles. As such there's no strict upper-limit but anything sufficiently large will eventually be disturbed by nearby stars or other sources of gravity, causing them to either collapse or disperse; so they may exist but for shorter periods of time.
The largest named nebula is the Tarantula nebula at about a thousand light years across (NGC 604 in the Triangulum galaxy might be even larger, but this is a comparatively 'loose' collection of space dust). If you were travelling at 300,000 times light speed it would take 44 hours to cross, so a nebula even an eighth as wide (such as the image below of the Cygnus Loop) would still take several hours; easily fulfilling your criteria.
• The Tarantula Nebula is only $\sim650$ light-years across, not $1000$. – HDE 226868 Jan 26 '18 at 4:10
• It depends what your metric is for 'width'; I imagine there's some standardised measure of luminosity density (something like a FWHM on a Gaussian?) but NASA do indeed give the 1000ly figure, so I shan't change it. Link – neophlegm Jan 26 '18 at 7:53
|
2019-11-13 11:57:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8314704895019531, "perplexity": 655.4590924924009}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667260.46/warc/CC-MAIN-20191113113242-20191113141242-00277.warc.gz"}
|
http://www.ams.org/mathscinet-getitem?mr=1079888
|
MathSciNet bibliographic data MR1079888 03E05 (03E75 06F25 12J15) Ciesielski, Krzysztof $2\sp {2\sp \omega}$$2\sp {2\sp \omega}$ nonisomorphic short ordered commutative domains whose quotient fields are long. Proc. Amer. Math. Soc. 113 (1991), no. 1, 217–227. Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
|
2016-07-28 13:26:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956985116004944, "perplexity": 9731.369792688049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828282.32/warc/CC-MAIN-20160723071028-00207-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://math.meta.stackexchange.com/questions/29566/how-does-one-see-chatroom-comments-with-a-particular-tag
|
# How does one see chatroom comments with a particular tag?
This appears to be a new question.
I found out recently that one can use tags in chat. They're given by the code [tag:tag-name].
How does one see chatroom comments with a particular tag?
I'm guessing there'll be hidden gems to find, that's all.
## 1 Answer
One thing you can use to your advantage is that Search is looking at the HTML encoded version of a post, rather than what would be rendered.
Since every tag generates link of the form site/tagged/tagname you can try to search for "tagged/tagname". Notice that this will also return all oneboxed questions posted in chat with this tag. But if the phrase is quite common this can eliminated some false positives. (As a typical use case I could imagine that users of some room would agree to use some tag for messages related to some specific topic - so that this makes searching for them easier.)
Since this is dependent on the way the Stack Exchange chat is implemented in the moment, there is no guarantee that this won't change in the future. (But I have not heard about some big changes regarding chat in the near future.)
Some examples: duplicates in CRUDE, functional-analysis in the main chatroom, new-tag in Tagging. (You can compare this with results when searching simply for duplicates, functional-analysis, new-tag in the respective rooms.)
|
2021-09-23 20:51:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17639288306236267, "perplexity": 1190.045059177706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00533.warc.gz"}
|
https://www.ic.sunysb.edu/Class/phy141md/doku.php?id=phy131studiof15:lectures:chapter13&rev=1444463629
|
This is an old revision of the document!
Chapter 13 - Rotation II: A Conservation Approach
Moment of inertia for extended objects
So far our definition of moment of inertia is really only practical for systems comprised of one or more point like objects at a distance from an axis of rotation.
For extended objects we are much better served by considering an object as being made up of infinitesimally small mass elements, each a distance $R$ from the axis of rotation, and integrating over these mass elements to find the moment of inertia.
$I= \Sigma_{i} m_{i}R_{i}^{2}$ → $I=\int R^2\,dm$
Hoop, thin walled cylinder and solid cylinder
Considering rotation axis through the center of the circle
$I=\int R^2\,dm$
Hoop
$I=\frac{MR^2}{2\pi}\int_{0}^{2\pi}\,d\theta=MR^2$
Thin walled cylinder
$I=\frac{MR^2}{2\pi h}\int_{0}^{h}\int_{0}^{2\pi}\,d\theta\,dz=MR^2$
Solid Cylinder
$I=\frac{M}{\pi h R_{C}^2}\int_{0}^{R_{C}}\int_{0}^{h}\int_{0}^{2\pi}R^3\,d\theta\,dz\,dR=\frac{M}{\pi h R_{C}^2}\frac{ 2\pi h R_{C}^4}{4}=\frac{1}{2}MR_{C}^{2}$
Parallel axis theorem
Usually a rotation axis that passes through the center of mass of an object will be one of the easiest to find the moment of inertia for, because as we saw in the last lecture the center of mass usually reflects the symmetry of the object. More moments of inertia on wikipedia.
If we know the moment of inertia of an object around an axis that passes through it's center of mass there is a theorem that can help us find the moment of inertia around a different axis parallel to the axis through the COM.
If the axis of rotation is a distance $h$ from the axis through the COM then
$I=I_{COM}+Mh^{2}$
where M is the total mass of the object.
As an example, a solid sphere stuck to a turning pole will have moment of inertia $I=\frac{2}{5}MR^{2}+MR^2=\frac{7}{5}MR^2$
When can we approximate a mass as a point?
Frequently we will want to approximate a mass at some distance from it's center of rotation as a point mass. i.e we would like to simply write
$I=mR_{1}^{2}$
The parallel axis theorem tells us that in fact (if the mass is spherical and solid)
$I=mR_{1}^{2}+\frac{2}{5}mR_{2}^{2}$
The fractional error introduced by the above approximation is
$\frac{\frac{2}{5}mR_{2}^{2}}{mR_{1}^{2}+\frac{2}{5}mR_{2}^{2}}=\frac{\frac{2}{5}}{\frac{R_{1}^{2}}{R_{2}^2}+\frac{2}{5}}$
If we want to be accurate to say 1% we only need $\frac{R_{1}}{R_{2}}\approx\sqrt{40}\approx 6$
Atwood machine with rotation
We can apply our new knowledge about moment of inertia to our old friend the Atwood machine.If we take in to account the mass $M$ and radius $R$ of the pulley the tensions in the ropes on either side of the pulley need not be the same.
The sum of the torques on the pulley will be given by
$\Sigma \tau =(T_{2}-T_{1})R$
and as we saw that the moment of inertia for solid cylinder is $I=\frac{1}{2}MR^2$ we can find the angular acceleration of the pulley
$\large \alpha=\frac{\Sigma \tau}{I}=\frac{2(T_{2}-T_{1})}{MR}$
This can be related to the tangential acceleration of a point on the edge of the pulley by multiplying by R as $a=\alpha R$
$\large a=\frac{\Sigma \tau}{I}=\frac{2(T_{2}-T_{1})}{M}$
As we did before when we neglected rotation we should write Newton's Second Law for the two weights
$m_{2}a=m_{2}g-T_{2}$ → $T_{2}=m_{2}g-m_{2}a$
$m_{1}a=T_{1}-m_{1}g$ → $T_{1}=m_{1}g+m_{1}a$
$T_{2}-T_{1}=m_{2}g-m_{2}a-m_{1}g-m_{1}a$
$\frac{1}{2}Ma=m_{2}g-m_{2}a-m_{1}g-m_{1}a$
$\large a=g\frac{m_{2}-m_{1}}{\frac{1}{2}M+m_{1}+m_{2}}$
Rotational Kinetic Energy
Each piece of mass in a rotation problem that has velocity $v$ should have kinetic energy
$K=\frac{1}{2}mv^2$
In terms of the angular velocity this is
$K=\frac{1}{2}m\omega^2R^2$
and if we sum over all the masses
$K=\frac{1}{2}(\Sigma m_{i}R_{i}^2)\omega^{2}=\frac{1}{2}I\omega^2$
Conservation of energy with rotation
For a rolling object $v=\omega r$
The kinetic energy of a rolling object is therefore
$\large K=\frac{1}{2}mv^{2}+\frac{1}{2}I\omega^{2}=\frac{1}{2}mv^{2}+\frac{1}{2}I\frac{v^{2}}{r^2}$
The kinetic energy thus depends on the moment of inertia of the object.
Hoop and Disk
Suppose we release a hoop and disk from the top of a slope. They begin with the same potential energy, which one gets to the bottom of the slope first?
Hoop and disk solution
Hoop
$\large K=\frac{1}{2}mv^{2}+\frac{1}{2}I\omega^{2}=\frac{1}{2}mv^{2}+\frac{1}{2}mr^{2}\frac{v^{2}}{r^2}=mv^{2}$
$mgh=mv^{2}$
$v=\sqrt{gh}$
Disk
$\large K=\frac{1}{2}mv^{2}+\frac{1}{2}I\omega^{2}=\frac{1}{2}mv^{2}+\frac{1}{2}\frac{1}{2}mr^{2}\frac{v^{2}}{r^2}=\frac{3}{4}mv^{2}$
$mgh=\frac{3}{4}mv^{2}$
$v=\sqrt{\frac{4}{3}gh}$
Recall that for a sliding object (without friction) $v=\sqrt{2gh}$
Solid sphere $I=\frac{2}{5}mr^{2}$
Hollow sphere $I=\frac{2}{3}mr^{2}$
Work energy theorem for rotation
$W=\int \vec{F}\cdot\,d\vec{l}=\int F_{\perp}R\,d\theta=\int_{\theta_{1}}^{\theta_{2}}\tau\,d\theta$
$\tau=I\alpha=I\frac{d\omega}{dt}=I\frac{d\omega}{d\theta}\frac{d\theta}{dt}=I\omega\frac{d\omega}{d\theta}$
$W=\int_{\omega_{1}}^{\omega_{2}}I\omega\,d\omega=\frac{1}{2}I\omega_{2}^2-\frac{1}{2}I\omega_{1}^2$
Therefore the work done in rotating an object through an angle $\theta_{2}-\theta_{1}$ is equal to the change in the rotational kinetic energy of the object.
Power and Torque
$W=\int_{\theta_{1}}^{\theta_{2}}\tau\,d\theta$
$P=\frac{dW}{dt}=\tau\frac{d\theta}{dt}=\tau\omega$
This equation can help us understand the two “figures of merit” often given for a car engine, horsepower and torque.
Angular Momentum
Linear momentum
$\vec{p}=m\vec{v}$
By analogy we can expect angular momentum is given by
$L=I\omega$
Units $\mathrm{kgm^{2}/s}$
Newton's Second Law for translational motion
$\Sigma \vec{F} = m \vec{a}=\frac{d\vec{p}}{dt}$
By analogy we can expect Newton's Second Law for rotational motion is given by
$\Sigma \tau=I \alpha=\frac{dL}{dt}$
Conservation of angular momementum
In the absence of a net external torque
$\Sigma \tau=\frac{dL}{dt}=0$
and angular momentum is conserved.
$L=I\omega=\mathrm{constant}$
Changing the moment of inertia of a spinning object
Suppose I with two weights in my hands can be approximated by an 80 kg cylinder of radius 15 cm. My moment of inertia if I am spinning around an axis going down my center will be
$I=\frac{1}{2}MR^{2}=0.9\,\mathrm{kgm^2s^{-1}}$
With my arms (which we approximate as 3.5 kg and a length of 0.75 m from my shoulder) extended holding 2.3 kg weights my moment of inertia will be considerably higher. When I am holding my arms and weights out I should remove their mass from the cylinder
$I=\frac{1}{2}68.4\times0.15^2=0.77\,\mathrm{kgm^2s^{-1}}$
The moment of inertia of the two weights around the axis when my arms are extended is
$I=2\times2.3\times(0.9)^{2}=3.726\,\mathrm{kgm^2s^{-1}}$
The moment of inertia of an arm if it is rotated around it's center of mass is
$I=\frac{1}{12}\times3.5\times(0.75)^2=0.16\,\mathrm{kgm^2s^{-1}}$
But if it rotates around a point $0.375+0.15\,\mathrm{m}$ from it's center of mass then its moment of inertia is
$I=\frac{1}{2}\times3.5\times(0.75/2)^2+3.5\times(0.375+0.15)^2$
$=0.16+0.96=1.12\,\mathrm{kgm^2s^{-1}}$
So the total moment of inertia due to the extended arms, weights and the cylinder is
$I=0.77+3.726+2.24=6.736\,\mathrm{kgm^2s^{-1}}$
Consequence of conservation of angular momentum
From the previous calculation we have
With arms in $I=0.9\,\mathrm{kgm^2s^{-1}}$
With arms out $I=6.736\,\mathrm{kgm^2s^{-1}}$
If I am rotating with angular velocity $\omega$ with my arms out and I bring them in then my angular velocity afterward is $\omega'$ and from conservation of momentum
$L=I\omega=I'\omega'$
$\frac{\omega'}{\omega}=\frac{I}{I'}=\frac{6.916}{0.9}=7.5$
Let's see if it works!
Angular momentum as a vector cross product
The most general definition of angular momentum is as the cross product of the position vector $\vec{r}$ and the linear momentum of the object $\vec{p}$. Here we consider a single particle of mass $m$
$\vec{L}=\vec{r}\times\vec{p}$
We can relate the angular momentum to the torque by taking the derivative of $\vec{L}$ with respect to time
$\frac{d\vec{L}}{dt}=\frac{d}{dt}(\vec{r}\times\vec{p})=\frac{d\vec{r}}{dt}\times\vec{p}+\vec{r}\times\frac{d\vec{p}}{dt}=\vec{v}\times m\vec{v}+\vec{r}\times\frac{d\vec{p}}{dt}=\vec{r}\times\frac{d\vec{p}}{dt}$
In an inertial reference frame (non accelerating reference frame) $\Sigma \vec{F}=\frac{d\vec{p}}{dt}$
$\vec{r}\times\frac{d\vec{p}}{dt}=\vec{r}\times\Sigma\vec{F}=\Sigma\vec{\tau}$
$\Sigma\vec{\tau}=\frac{d\vec{L}}{dt}$
Angular momentum of a system of objects
For a system of objects the total angular momentum is given
$\frac{d\vec{L}}{dt}=\Sigma{\vec{\tau}_{ext}}$
following from the usual cancellation of internal forces between objects due to Newton's Third Law.
This equation, like the one before, is only true when $\vec{\tau}_{ext}$ and $\vec{L}$ are calculated about a point which is moving uniformly in an inertial reference frame.
If these quantities are calculated around a point that is accelerating the equation does not hold, except for in one special case, which is for motion around the center of mass of the system (proof in text). So we can say that
$\frac{d\vec{L}_{CM}}{dt}=\Sigma\vec{\tau}_{CM}$
even if the center of mass is accelerating. This is very important!
Angular momentum of rigid objects
If we have a rigid object rotating about an axis that has a fixed direction it is useful to know the component of angular momentum along the axis of the rotating object. Each piece of the object will have angular momentum $\vec{L}_{i}=\vec{r}_{i}\times\vec{p}_{i}$
We can express the component along the rotation axis of each of these individual angular momenta as
$L_{i\omega}=r_{i}p_{i}\cos\phi=m_{i}v_{i}r_{i}\cos\phi$
Using $r_{i}\cos\phi=R_{i}$ and $v_{i}=R_{i}\omega_{i}$ we find that
$L_{i\omega}=m_{i}R_{i}^{2}\omega$
Summing over the entire object
$L_{\omega}=(\Sigma_{i}m_{i}R_{i}^2)\omega=I\omega$
This looks like the intuitive relationship we used earlier, but we should be careful, this equation is for the component of angular momentum along the rotation axis. If the object is symmetric we can however reason that all components of angular momentum not along the axis cancel out and
$\vec{L}=I\vec{\omega}$
If the conditions discussed above are fulfilled we can also use $\Sigma\tau=\frac{dL}{dt}$ to show that
$\Sigma\tau_{axis}=\frac{d}{dt}(I\omega)=I\frac{d\omega}{dt}=I\alpha$
Some fun with a bike wheel
If I want to turn the bike wheel from spinning vertically to horizontally which way should I jerk the handle of the wheel.
A. Up B. Down C. To my left d. To my right
The gyroscope
As we saw with a spinning bike wheel it takes a large torque to reorient a large angular momentum vector. If a spinning object is mounted so that it is free to orient itself in any direction, as it is in a gyroscope then the direction of it's angular momentum is constant and this can be used a useful reference, for example in an airplane or boat.
In a gyroscope the freedom for the spinning wheel to orient is achieved by its suspension in pivoted supports called gimbals.
Gyroscope Precession
If we apply a torque to a gyroscope by hanging a weight from it's axis an initially surprising effect is observed. Rather than falling we see the gyroscope begins to undergo precession, i.e the direction of the axis of rotation begins to change. This result can be understood by looking at the direction of the torque that is generated.
$\vec{\tau}=\vec{r}\times m\vec{g}$
By inspection we can see that independent of the orientation of the gyroscopes axis the torque will always be directed parallel to the surface of the Earth and perpendicular to the initial angular momentum vector. This torque acts to change the direction, but not the magnitude of the angular momentum vector. We'll consider this problem in the mathematically easiest situation which is when the axis of rotation is horizontal (a more general derivation is in your textbook). We define an angle $\theta$ to describe the orientation of the angular momentum vector in the horizontal plane, and then express the
$dL=L\,d\theta$
If we define the angular velocity of the precession as $\Omega=\frac{d\theta}{dt}$ then
$\Omega=\frac{1}{L}\frac{dL}{dT}=\frac{\tau}{L}=\frac{mgr}{L}$
Gyroscope Nutation
If the precession above seems just a little too “magical” to you then you have good instincts! In actual fact to be able to precess the axis of the gyroscope actually has to drop a little, as there is an angular momentum associated with the precession. Without this drop it is impossible for the gyroscope to precess! Indeed, the axis of the gyroscope will actually drop beyond the equilibirum position and will be pulled back up, again overshooting and executing an oscillatory motion around the mean position of the plane of precession that we call nutation.
When the angular momentum of the gyroscope is high this drop is barely noticeable, because the angular velocity of precession, which is inversely proportional to $L$ ($\Omega=\frac{mgr}{L}$), is quite small. Nutation at high angular velocities is therefore a barely noticeable shaking of the axis of rotation. At lower angular momentum the nutation is much more obvious.
In the absence of friction the path that the tip of angular momentum vector would draw out is a cycloid (upside down), though as the gimbal bearings have quite a bit of friction the nutation oscillations are relatively quick to damp down leaving the smooth precession we discussed initially.
|
2020-01-18 13:25:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7589643597602844, "perplexity": 301.24630836623106}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592565.2/warc/CC-MAIN-20200118110141-20200118134141-00025.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Binomial_(Euclidean)/Sixth_Binomial/Example
|
Definition:Binomial (Euclidean)/Sixth Binomial/Example
Example
Let $a$ and $b$ be two (strictly) positive real numbers such that $a + b$ is a binomial.
By definition, $a + b$ is a sixth binomial if and only if:
$(1): \quad: a \notin \Q$
$(2): \quad: b \notin \Q$
$(3): \quad: \dfrac {\sqrt {a^2 - b^2}} a \notin \Q$
where $\Q$ denotes the set of rational numbers.
Let $a = \sqrt 7$ and $b = \sqrt 5$.
Then:
$\ds \frac {\sqrt {a^2 - b^2} } a$ $=$ $\ds \frac {\sqrt {7 - 5} } {\sqrt 7}$ $\ds$ $=$ $\ds \sqrt {\frac 2 7}$ $\ds \notin \Q$
Therefore $\sqrt 7 + \sqrt 5$ is a sixth binomial.
|
2021-08-02 06:41:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656597375869751, "perplexity": 308.7281734118941}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154304.34/warc/CC-MAIN-20210802043814-20210802073814-00322.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-3-2-x-15-12
|
# How do you solve (3/2)x - 15 = -12?
May 15, 2018
$x = 2$
#### Explanation:
$\frac{3}{2} x - 15 = - 12$
$\frac{3}{2} x = - 12 + 15$
$\frac{3}{2} x = 3$
$\frac{3}{2} x \times \frac{2}{3} = 3 \times \frac{2}{3}$
$x = 2$
May 15, 2018
X=2
#### Explanation:
I am not an expert at math, but I can help with this problem.
First you need to add 15 to both sides of the equation because there is a -15 on the side of the variable. So the -15 plus the 15 cancels out, so your left with $\frac{3}{2} x$ on the side with the variable. Then you add 15 to the -12 which is 3.
$\frac{2}{3} x - 15 = - 12$
$\frac{3}{2} x = 3$
Then you have to isolate the variable. Which means there are no numbers attached to the variable. So you divide both sides by $\frac{3}{2}$. So your left with just x on on side and on the other side you have $3 \div i \mathrm{de} \frac{3}{2}$ = 2. So you have $x = 2$.
Hope I helped! :)
|
2019-12-11 22:12:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5930670499801636, "perplexity": 265.1294020596947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540533401.22/warc/CC-MAIN-20191211212657-20191212000657-00386.warc.gz"}
|
https://symmetricblog.wordpress.com/tag/math-124/
|
## Posts Tagged ‘Math 124’
### An example of why lecturing does not work very well
March 2, 2013
We just started discussing confidence intervals in probability and statistics. As expected, students had a difficult time with it.
As usual, they read the section, answered some questions online, and came to class. In class, we worked on clicker questions. The first was basically:
Q: The 95% confidence interval for the population mean $\mu$ is [x,y]. Based on this interval:
1. There is a 95% chance that $\mu$ is in this interval.
2. 95% of the observations are in this interval.
3. This method of creating intervals works 95% of the time.
This is a tricky idea, but the third choice is the best answer of the three. In my second class, only 2 out 26 students got it correct. This was to be expected, though, since it is a tricky subject.
So I basically gave a 15-20 minute lecture as to why the third one was correct and the first two were wrong. Actually, it is more accurate to say that I repeated a six minute lecture three times about how to think about this.
We had two more clicker questions related to confidence intervals, and then I gave them the following question (perhaps you recognize it):
Q: The 95% confidence interval for the population mean $\mu$ is [x,y]. Based on this interval:
1. There is a 95% chance that $\mu$ is in this interval.
2. 95% of the observations are in this interval.
3. This method of creating intervals works 95% of the time.
The class was completely split into thirds as to which of the three answers was correct (to be fair, the question was only isomorphic to the first question, not equal).
I re-gave my two more variations of my six minute lecture explaining how to think of each of the three choices.
Then I re-gave the question, only with the following choices:
1. There is a 95% chance that $\mu$ is in the interval.
2. The probability that $\mu$ is in the interval is 0.95.
3. 95% of the observations are in this interval.
4. Exactly two of these answers are correct.
5. Each of the first three answers are correct.
6. None of the above answers are correct.
The correct answer is “None of the above,” of course. Three of the 26 students got it correct, even though I had literally just told them why the first three choices were wrong two minutes prior to voting.
This means one of two things. Either
1. Either learning is incredibly complex, and lecturing is not a good tool to help people understand, or
2. I suck at lecturing.
To be fair, Peer Instruction was not working, either. But it is surprising to me that Peer Instruction works as well as it does, and it is surprising to me that lectures fails as miserably as it does. The confidence interval lesson is a good reminder of the latter.
The point is not that my students are dumb—they are not. Nor is it that they are bad students—they are not. The point is that learning is difficult (especially with tricky ideas like “confidence intervals”), and one must be sensitive to this fact.
### Grading for Probability and Statistics
January 23, 2013
Here is what I came up with for grading my probability and statistics course. First, I came up with standards my students should know:
“Interpreting” standards (these correspond to expectations for a student who will earn a C for the course.
1. Means, Medians, and Such
2. Standard Deviation
3. z-scores
4. Correlation vs. Causation and Study Types
5. Linear Regression and Correlation
6. Simple Probability
7. Confidence Intervals
8. p-values
9. Statistical Significance
“Creating” standards (these correspond to a “B” grade):
1. Means, Medians, and Standard Deviations
2. Probability
3. Probability
4. Probability
5. Confidence Intervals
6. z-scores, t-scores, and p-values
7. z-scores, t-scores, and p-values
(I repeat some standards to give them higher weight).
1. Sign Test
2. Chi-Square Test
Here is how the grading works: students take quizzes. Each quiz question is tied to a standard. Here are examples of some quiz questions:
(Interpreting: Means, Medians, and Such) Suppose the mean salary at a company is $50,000 with a standard deviation of$8,000, and the median salary is $42,000. Suppose everyone gets a raise of$3,000. What is the best answer to the following question: what is the new mean salary at the company?
(Interpreting: Standard Deviation) Pick four whole numbers from 1, . . . , 9 such that the standard deviation is as large as possible (you are allowed to repeat numbers).
(Creating: Means, Medians, and Standard Deviations) Find the mean, median, and standard
deviation of the data set below. It must be clear how you arrived at the answer (i.e. reading the answer off of the calculator is not sufficient). Here are the numbers: 48, 51, 37, 23, 49.
Advanced standard questions will look similar to Creating questions.
At the end of the semester, for each standard, I count how many questions the students gets completely correct in each standard. If the number is at least 3 (for Creating and Advanced) or at least 4 (for Interpreting), the student is said to have “completed” that standard (the student may opt to stop doing those quiz questions once the student has “completed” the standard).
If a student has “completed” every standard within the Interpreting standards, we say the student has “completed” the Interpreting standards. Similarly with Creating and Advanced.
Here are the grading guidelines (an “AB” is our grade that is between an A and a B):
-A student gets at least a C for a semester grade if and only if the student “completes” the Interpreting standards and gets at least a CD on the final exam.
-A student gets at least a B for the semester grade if and only if the student “completes” the Interpreting and Creating standards and gets at least a BC on the final exam.
-A student gets an A for the semester grade if and only if the student “completes” all of the standards, gets at least an AB on the final exam, and completes a project.
The project will be to do some experiment or observational study that uses a z-test, t-test, chi-square test, or sign test. It can be on any topic they want, and they can choose to collect data or use existing data. The students will have a poster presentation at my school’s Scholarship and Creativity Day.
I would appreciate any feedback that you have, although we are 1.5 weeks into the semester, so I am unlikely to incorporate it.
|
2017-12-15 06:22:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5824127793312073, "perplexity": 872.7593621947449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567042.50/warc/CC-MAIN-20171215060102-20171215080102-00356.warc.gz"}
|
https://cs.stackexchange.com/questions/135332/proof-of-existence-of-l-in-r-setminus-p
|
# Proof of existence of $L\in R\setminus P$
I saw some proof but I didn't understood it, any simple one?
• Do you understand the time hierarchy theorem? – Dmitry Feb 10 at 21:34
• Thanks man! @Dmitry – ChaosPredictor Feb 11 at 6:19
Look at the time hierarchy theorem for an explanation. In particular, we know (using this theorem) that $$P\subsetneq E\subsetneq EXP\subsetneq R$$, and we could have added a lot more complexity classes in between them.
|
2021-04-10 12:42:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8900073766708374, "perplexity": 802.0620648120818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056869.3/warc/CC-MAIN-20210410105831-20210410135831-00262.warc.gz"}
|
https://mathvis.academic.wlu.edu/tag/ellipsoid/
|
The next quadratic surfaces I printed were an elliptic paraboloid and a regular paraboloid.
For the elliptic paraboloid I imported the surface from Mathematica.
I then optimized the polygons, extruded them by 0.20 cm to give the surface thickness. After that I used the boole tool to make the edge flat and added an equation through the surface.
I created the regular paraboloid from scratch in Cinema 4D using the same process as the cone. I used the formula spline $$x(t)=t, y(t)=t^2, z(t)=0$$ and then used the lathe tool with 60 rotation segments to rotate it 360 degrees. I optimized the polygons and extruded them to give the surface thickness. I also made sure to “boole” the edge to make it flat and added an equation.
I printed both paraboloids on the same build bed with the MakerBot 2X printer. They can be found on Thingiverse here and here.
|
2018-08-15 07:20:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3568729758262634, "perplexity": 1655.9835922832938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209980.13/warc/CC-MAIN-20180815063517-20180815083517-00041.warc.gz"}
|
https://brilliant.org/problems/sum-of-an-infinite-geometric-sum/
|
# Sum of an infinite geometric sum
Algebra Level 4
Let $$\mathcal{A}$$ be the set of first hundred natural numbers.
A function $$f(t)$$ is defined from $$\mathcal{A}$$ to $$\mathbb{R}$$ which denotes the sum of of infinite geometric series whose first term is $$\frac{t-1}{t!}$$ and the common ratio is $$\frac{1}{t}$$.
Let $$S(n)$$ denote the sum:
$S(n) = \sum_{r=1}^n | (r^2-3r+1)f(r) |$
Find the value of $$\frac{100^2}{100!} + S(100)$$.
×
|
2018-12-17 01:52:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9937415719032288, "perplexity": 134.91645736701395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828018.77/warc/CC-MAIN-20181216234902-20181217020902-00276.warc.gz"}
|
http://motls.blogspot.com/2011/07/next-week-big-higgs-secret-may-be.html
|
## Saturday, July 16, 2011 ... /////
### Next week, a big Higgs secret may be unmasked
On Thursday, July 21nd, the European Physical Society's (EPS) conference on high-energy physics (HEP), EPS-HEP2011, is getting started in Grenoble, Southeastern France.
Chances are higher than one year ago that we will be told something truly new, truly spectacular. Phil Gibbs has described his excited expectations, too. He even believes that the arXiv will be so overflooded by papers that physicists will start to send lots of papers to viXra, too. Well, I am not sure that it will be this big a game-changed but I do share his expectations that the conference will be a game-changer.
This is a serious conference and the folks at CERN will be eager to present their newest results from the LHC collider. Many of them should be based on 1 inverse femtobarn of data - a factor of 6 improvement over a small number of papers a month ago; a factor of 30 improvements over dozens of papers that built on the 2010 collisions only.
It's arguably more likely than not that during the conference, the main detectors at the LHC, namely ATLAS and CMS, will either announce strong evidence for the existence of some kind of a Higgs particle; or they will exclude the Higgs particle down to 135 GeV which would have far-reaching consequences, too.
A certainty that there is no Higgs heavier than 135 GeV would mean that the Standard Model almost certainly can't be correct because for such a light value of the Higgs mass, the vacuum becomes unstable according to this simplest consistent theory of the electroweak and strong interactions.
On the contrary, the existence of a Higgs particle whose mass is lighter than 135 GeV would be a rather strong argument in favor of the supersymmetry although other non-standard theories are in principle possible, too.
Preliminary hints suggest that the Higgs particles favor masses such as 115 GeV - the old legendary particle that LEP unluckily missed a decade ago - 140 GeV, and 205 GeV where some new bumps were recently observed.
One could even speculate that this innocent introduction to the concept of systematic uncertainties by Aidan Randle-Conde, a young American working for the LHC, is a masked rumor that those folks could be seeing a hint of a charged version of the Higgs particle which would also support the idea of supersymmetry. Recall that according to the standard terminology, even the minimal supersymmetric standard model requires five faces of the Higgs particle.
The existence of five God particles remains a speculation and the masses are nothing else than your humble correspondent's favorite values.
We shouldn't forget that it's still plausible that the situation of the Higgs sector will remain as ambiguous as it is today even after the conference. It wouldn't be the first time.
|
2017-05-24 02:13:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3833794593811035, "perplexity": 1054.7186784889598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607731.0/warc/CC-MAIN-20170524020456-20170524040456-00121.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=Cameron%20E.%20Freer
|
• ### Feedback computability on Cantor space(1708.01139)
April 29, 2019 cs.LO, math.LO
We introduce the notion of feedback computable functions from $2^\omega$ to $2^\omega$, extending feedback Turing computation in analogy with the standard notion of computability for functions from $2^\omega$ to $2^\omega$. We then show that the feedback computable functions are precisely the effectively Borel functions. With this as motivation we define the notion of a feedback computable function on a structure, independent of any coding of the structure as a real. We show that this notion is absolute, and as an example characterize those functions that are computable from a Gandy ordinal with some finite subset distinguished.
• ### On the computability of graphons(1801.10387)
Jan. 31, 2018 math.CO, math.PR, cs.LO, math.LO
We investigate the relative computability of exchangeable binary relational data when presented in terms of the distribution of an invariant measure on graphs, or as a graphon in either $L^1$ or the cut distance. We establish basic computable equivalences, and show that $L^1$ representations contain fundamentally more computable information than the other representations, but that $0'$ suffices to move between computable such representations. We show that $0'$ is necessary in general, but that in the case of random-free graphons, no oracle is necessary. We also provide an example of an $L^1$-computable random-free graphon that is not weakly isomorphic to any graphon with an a.e. continuous version.
• ### On computability and disintegration(1509.02992)
May 10, 2016 math.PR, cs.LO, math.LO, math.ST, stat.TH
We show that the disintegration operator on a complete separable metric space along a projection map, restricted to measures for which there is a unique continuous disintegration, is strongly Weihrauch equivalent to the limit operator Lim. When a measure does not have a unique continuous disintegration, we may still obtain a disintegration when some basis of continuity sets has the Vitali covering property with respect to the measure; the disintegration, however, may depend on the choice of sets. We show that, when the basis is computable, the resulting disintegration is strongly Weihrauch reducible to Lim, and further exhibit a single distribution realizing this upper bound.
• ### Towards common-sense reasoning via conditional simulation: legacies of Turing in Artificial Intelligence(1212.4799)
Oct. 9, 2013 cs.AI, math.LO, stat.ML
The problem of replicating the flexibility of human common-sense reasoning has captured the imagination of computer scientists since the early days of Alan Turing's foundational work on computation and the philosophy of artificial intelligence. In the intervening years, the idea of cognition as computation has emerged as a fundamental tenet of Artificial Intelligence (AI) and cognitive science. But what kind of computation is cognition? We describe a computational formalism centered around a probabilistic Turing machine called QUERY, which captures the operation of probabilistic conditioning via conditional simulation. Through several examples and analyses, we demonstrate how the QUERY abstraction can be used to cast common-sense reasoning as probabilistic inference in a statistical model of our observations and the uncertain structure of the world that generated that experience. This formulation is a recent synthesis of several research programs in AI and cognitive science, but it also represents a surprising convergence of several of Turing's pioneering insights in AI, the foundations of computation, and statistics.
• ### Randomness extraction and asymptotic Hamming distance(1008.0821)
Sept. 25, 2013 cs.IT, math.IT, cs.LO, math.LO, cs.CC
We obtain a non-implication result in the Medvedev degrees by studying sequences that are close to Martin-L\"of random in asymptotic Hamming distance. Our result is that the class of stochastically bi-immune sets is not Medvedev reducible to the class of sets having complex packing dimension 1.
• ### Computable de Finetti measures(0912.1072)
We prove a computable version of de Finetti's theorem on exchangeable sequences of real random variables. As a consequence, exchangeable stochastic processes expressed in probabilistic functional programming languages can be automatically rewritten as procedures that do not modify non-local state. Along the way, we prove that a distribution on the unit interval is computable if and only if its moments are uniformly computable.
• ### On the computability of conditional probability(1005.3014)
April 18, 2019 math.PR, cs.LO, math.LO, math.ST, stat.TH, stat.ML
As inductive inference and machine learning methods in computer science see continued success, researchers are aiming to describe ever more complex probabilistic models and inference algorithms. It is natural to ask whether there is a universal computational procedure for probabilistic inference. We investigate the computability of conditional probability, a fundamental notion in probability theory and a cornerstone of Bayesian statistics. We show that there are computable joint distributions with noncomputable conditional distributions, ruling out the prospect of general inference algorithms, even inefficient ones. Specifically, we construct a pair of computable random variables in the unit interval such that the conditional distribution of the first variable given the second encodes the halting problem. Nevertheless, probabilistic inference is possible in many common modeling settings, and we prove several results giving broadly applicable conditions under which conditional distributions are computable. In particular, conditional distributions become computable when measurements are corrupted by independent computable noise with a sufficiently smooth bounded density.
|
2019-09-23 06:18:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7522927522659302, "perplexity": 760.1781677416707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576047.85/warc/CC-MAIN-20190923043830-20190923065830-00552.warc.gz"}
|
https://www.mapleprimes.com/questions/235684-How-To-Prevent-The-Automatic-Simplification
|
# Question:How to prevent the automatic simplification of input expression in latex()?
## Question:How to prevent the automatic simplification of input expression in latex()?
Maple 2022
I input this codes:
latex((4*n-1)/9-7/16*n)
\frac{n}{144}-\frac{1}{9}
This is not the output I expected. I would like to obtain an expression similar to the one below.
\frac{4 n-1}{9}-\frac{7}{16}n
How can I achieve this, as Maple seems to have processed it internally.
Does maple over-understand the expression.
|
2023-03-21 11:19:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7058182954788208, "perplexity": 3453.0028188513415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00608.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/80242-find-all-solutions-print.html
|
# find all solutions
• Mar 23rd 2009, 04:21 PM
guardofthecolor4ever
find all solutions
tanxsinx-sinx=0
I can get to sinx(tanx-1)=0
and that sinx=0
where do i go from there???
• Mar 23rd 2009, 04:26 PM
e^(i*pi)
Quote:
Originally Posted by guardofthecolor4ever
tanxsinx-sinx=0
I can get to sinx(tanx-1)=0
and that sinx=0
where do i go from there???
sin(x)= 0 at $x = 0 \pm k\pi$ where k is an integer
Then do tan(x)-1=0
tan(x) = 1 at $\frac{\pi}{4} \pm k\pi$ where k is an integer
|
2016-09-28 00:53:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961652517318726, "perplexity": 6506.403809782726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661289.41/warc/CC-MAIN-20160924173741-00172-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://script.spoken-tutorial.org/index.php?title=Applications-of-GeoGebra/C3/Integration-using-GeoGebra/English&curid=12427&diff=45506&oldid=45504
|
# Difference between revisions of "Applications-of-GeoGebra/C3/Integration-using-GeoGebra/English"
Visual Cue Narration Slide Number 1 Title Slide Welcome to this tutorial on Integration using GeoGebra Slide Number 2 Learning Objectives In this tutorial, we will use GeoGebra to look at integration to estimate: Area Under a Curve (AUC) Area bounded by two functions Slide Number 3 System Requirement Here I am using: Ubuntu Linux Operating System version 16.04 GeoGebra 5.0.481.0-d Slide Number 4 Pre-requisites To follow this tutorial, you should be familiar with: GeoGebra interface Integration For relevant tutorials, please visit our website. Slide Number 5 Definite Integral Consider f is a continuous function over interval [a,b] above x-axis a is lower limit, b is upper limit $\underset{a}{\overset{b}{\int }}f\left(x\right)dx$ Area bounded by y=f(x), x=a, x=b and x-axis Definite Integral Consider f is a continuous function over interval a, b above the x-axis. a and b are called the lower and upper limits of the integral. Integral of f of x from a to b with respect to x is the notation for this definite integral. It is the area bounded by y equals f of x, x equals a, x equals b and the x-axis. Slide Number 6 Calculation of a Definite Integral Let us calculate the definite integral ${\int }_{-1}^{2}(-0.5x\hat{3}+2x\hat{2}-x+1)dx$ Let us calculate the definite integral of this function with respect to x. Open a new GeoGebra window. Let us open a new GeoGebra window. Type g(x)= - 0.5 x^3+ 2 x^2-x+1 in the input bar >> Enter. In the input bar, type the following line and press Enter. Point to the graph in Graphics view and its equation in Algebra view. Note the graph in Graphics view and its equation in Algebra view. Click on Slider tool and click in Graphics view. Type n in the Name field. Set 1 as Min, 50 as the Max and 1 as Increment >> OK Point to slider n in Graphics view. Using the Slider tool, create a number slider n in Graphics view. It should range from 1 to 50 in increments of 1. Drag slider n to 5. Drag the resulting slider n to 5. Click on Point on Object tool and click at (-1,0) and (2,0) to create A and B. Under Point, click on Point on Object and click at -1 comma 0 and 2 comma 0 to create A and B. Cursor on the GeoGebra interface. Let us look at a few ways to approximate area under the curve. These will include upper Riemann and trapezoidal sums as well as integration. We will first assign the variable label uppersum to the Upper Riemann Sum in GeoGebra. Type uppersum=Upp in the Input Bar. Show option. UpperSum( , , , ) Click on it. In the input bar, type uppersum is equal to capital U p p. The following option appears. Click on it. Type g instead of highlighted . Type g instead of highlighted . Press Tab to highlight . Press Tab to highlight . Type x(A). Type x A in parentheses. Similarly, type x(B) for End x-Value and n as Number of Rectangles >> Enter Similarly, type x B in parentheses for End x-Value and n as Number of Rectangles. Press Enter. Point to five rectangles between x Note that five rectangles appear between x equals -1 and 2. Under Move Graphics View, click on Zoom In >> click in Graphics view. Under Move Graphics View, click on Zoom In and click in Graphics view. Click on Move Graphics View and drag the background to see all the rectangles properly. Again click on Move Graphics View and drag the background to see all the rectangles properly. Point to upper sum area under the curve (AUC). The upper sum area under the curve (AUC) adds the area of all these rectangles. Point to the rectangles extending above the curve. It is an overestimation of the area under the curve. This is because some portion of each rectangle extends above the curve. Drag the background to move the graph to the left. Drag the background to move the graph to the left. Let us now assign the variable label trapsum to the Trapezoidal Sum. Type trapsum=Tra in the Input bar. In the input bar, type trapsum is equal to Capital T ra. Point to the menu that appears. A menu with various options appears. Select TrapezoidalSum( , , , ). Select the following option. We will type the same values as before and press Enter. In Algebra view, uncheck uppersum to hide it in Graphics view. Point to trapezoids. In Algebra view, uncheck uppersum to hide it in Graphics view. Note the shape of the trapezoids. Let us now look at the integral as the area under the curve. Finally, type Int in the Input Bar. Finally, in the input bar, type capital I nt. Point to the menu with various options. A menu with various options appears Select Integral( , , ). Select the following option. Enter g , x(A), x(B) Again, we will enter the same values as before. And Press Enter. In Algebra view, uncheck trapsum to hide it in Graphics view. In Algebra view, uncheck trapsum to hide it in Graphics view. Point to the integrated AUC. For the integral, the curve is the upper bound of the AUC from x equals -1 to 2. In Algebra view, uncheck integral a to hide it in Graphics view. In Algebra view, uncheck integral a to hide it in Graphics view. Click on Text tool under Slider tool. Under Slider, click on Text. Click in Graphics view to open a text box. Click in Graphics view to open a text box. In the Edit field, type Upper Sum = and in Algebra view, click on uppersum. Click again in the text box and press Enter. In the Edit field, type Upper space Sum equals and in Algebra view, click on uppersum. Click again in the text box and press Enter. Type Trapezoidal Sum = and in Algebra view, click on trapsum. Click again in the text box and press Enter. Type Trapezoidal space Sum equals and in Algebra view, click on trapsum. Click again in the text box and press Enter. Type Integral a equals and in Algebra view, click on a. Click OK in the text box. Type Integral a equals and in Algebra view, click on a. In the text box, click OK. Click on Move >> drag the text box in case you need to see it better. Click on Move and drag the text box in case you need to see it better. Now, click on the text box and click on the Graphics panel and select bold to make the text bold. Now, click on the text box, click on the Graphics panel and select bold to make the text bold. In Algebra view, check a, trapsum and uppersum to show all of them. In Algebra view, check a, trapsum and uppersum to show all of them. Point to text box and to slider n. Observe the values in the text box as you drag slider n. Point to Graphics view. Trapsum is a better approximation of AUC at high n values. Integrating such sums from A to B at high values of n will give us the AUC. Open a new GeoGebra window. Let us open a new GeoGebra window Cursor on GeoGebra interface. We will look at the relationship between differentiation and integration. Also we will look at finding the integral function through a point A 1 comma 3. Type f(x)=x^2+2 x+1 in the Input Bar >> Enter. In the input bar, type the following line and press Enter. Point to f of x. Let us call integral of f of x capital F of x. Type F(x)= Integral(f) in the Input Bar >> Enter. In the input bar, type the following line and press Enter. Point to the red integral curve of f(x) in Graphics view. Point to equation for F(x)=1/3 x3+ x2+x appears in Algebra view. The integral curve of f of x is red in Graphics view. Its equation for capital F of x appears in Algebra view. Confirm that this is the integral of f of x. Drag the boundary to see the equations properly. Drag the boundary to see the equations properly. Type h(x)=F'(x) in the Input Bar >> Enter. In the input bar, type the following and press Enter. Point to F'(x) and f(x). Note that this graph coincides with f of x. The equations for f of x and h of x are the same. Thus, we can see that integration is the inverse process of differentiation. Taking the derivative of an integral, gives back the original function. Click on Point tool and create point A at (1,3). Click on Point tool and create a point at 1 comma 3. Type i(x)=F(x)+k in the Input Bar >> Enter. In the input bar, type the following and press Enter. Click on Create Sliders in the window that pops up. Click on Create Sliders in the window that pops up. Point to slider k. A slider k appears. Double click on slider k. Set Min at 0, Max at 5 and Increment to 0.01. Close the Preferences window. Double click on slider k. Set Min at 0, Max at 5. Scroll right to set the Increment to 0.01. Close the Preferences box. Double click on i(x) in Algebra view and on Object Properties. In Algebra view, double-click on i of x and on Object Properties. Click on Color tab and select green. Close the Preferences box. Click on Color tab and select green. Close the Preferences box. Drag k to make i(x) pass through point A. Point to integral function (1/3)x3+x2+x+0.7. Drag k to make i of x pass through point A. Drag the boundary to see i of x properly. Drag the boundary to see i of x properly. Point to F(x)+0.7: the curve and equation. This function is capital F of x plus 0.7. Slide Number 7 Double Integrals Double integrals can be used to find: AUC along x and y axes’ directions The volume under a surface z=f(x,y) Double Integrals Double integrals can be used to find: The area under a curve along x and y axes' directions The volume under a surface z which is equal to f of x and y Slide Number 8 Double Integral-An Example Let us find the area between parabola x=y2 and the line y=x. The limits are from (0,0) to (1,1). This area can be expressed as the double integral =${\left({\int }_{0}^{1}{\int }_{y\hat{2}}^{y}dxdy\right)}^{}$= $\left({\int }_{0}^{1}{\int }_{x}^{x\hat{0.5}}dydx\right)$ Double Integral-An Example Let us find the area between a parabola x equals y squared and the line y equals x. The limits are from 0 comma 0 to 1 comma 1. This area can be expressed as the double integrals shown here. Observe the limits and the order of the integrals in terms of the variables. Let us open a new GeoGebra window. We will first express x in terms of y, for both functions. In the input bar, type x=y2 >> press Enter. In the input bar, type x equals y caret 2 and press Enter. Next, in the input bar, type y=x >> press Enter. Next, in the input bar, type y equals x and press Enter. Click on View tool and select CAS. Click on View tool and select CAS. In Algebra view, click top right button to close Algebra view. In Algebra view, click top right button to close Algebra view. Drag the boundary to make CAS view bigger. Drag the boundary to make CAS view bigger. In CAS view, type Int in line 1. Point to the menu that appears. In CAS view, type Int capital I in line 1. A menu with various options appears. Select IntegralBetween( , , , , ). Scroll down. Select the following option. Type y for the first function. Type y for the first function. Press Tab >> type y^2 for the second function. Press Tab and type y caret 2 for the second function. Press Tab >> type y as the variable. Press Tab and type y as the variable. Press Tab >> type 0 and 1 as start and end values of y. Press Tab and type 0 and 1 as start and end values of y. Press Enter. Press Enter. Point to the value of 1/6 below the entry. Point to the area between the parabola and the line from (0,0) to (1,1). A value 1 divided by 6 appears below the entry. This is the area between the parabola and the line from 0 comma 0 to 1 comma 1. Let us now express y in terms of x for both functions. Let us now express y in terms of x for both functions. In CAS view, type Int and observe the same menu as before. In CAS view, type Int capital I and choose the same option from the menu as before. Cursor in CAS view. Now, let us reverse the order of functions and limits. Type sqrt(x) for the first function and x for the second. Type the following and press Enter. Point to the input bar. You can also use the input bar instead of the CAS view. Under View, click on Algebra to see Algebra view again. Under View, click on Algebra to see Algebra view again. Drag the boundaries to make CAS view smaller. Drag the boundaries to make CAS view smaller. In the input bar, type Int. From the menu, select IntegralBetween( , , , ). Type y for the first function. Press Tab, type y caret 2 for the second function. Press Tab, type 0 as the Start Value and again press Tab to move to and type 1 as the End Value. Press Enter. This will also give you an area a of 0.17 or 1 divided by 6. In the input bar, type Int capital I. From menu, select the following option. Type y for the first function. Press Tab, type y caret 2 for the second function. Press Tab, type 0 as the Start x Value and again press Tab to move to and type 1 as the End x Value. Press Enter. This will also give you an area a of 0.17 or 1 divided by 6. Let us summarize. Slide Number 9 Summary In this tutorial, we have used GeoGebra to understand integration as estimation of: Area Under a Curve (AUC) Area bounded by two functions Slide Number 10 Assignment Calculate ${\int }_{0}^{0.5}f\left(x\right)dx$ where f(x) = 1/(1-x) Calculate ${\int }_{x\left(A\right)}^{x\left(B\right)}g\left(x\right)dx$ and ${\int }_{x\left(B\right)}^{x\left(C\right)}g\left(x\right)dx$ where g(x) = 0.5x3+2x2-x-3.75 A, B and C are points where the curve intersects x-axis (left to right); explain the results As an assignment: Calculate the integrals of f of x and g of x between the limits shown with respect to x. Explain the results for g of x. Slide Number 11 Assignment Calculate the area bounded by the following functions: y=4x-x2, y=x x2+y2=9, y=3-x y=1+x2, y=2x2 As another assignment: Calculate the shaded areas between these pairs of functions. Slide Number 12 About Spoken Tutorial project The video at the following link summarizes the Spoken Tutorial project. Please download and watch it. Slide Number 13 Spoken Tutorial workshops The Spoken Tutorial Project team: conducts workshops using spoken tutorials gives certificates on passing online tests. For more details, please write to us. Slide Number 14 Forum for specific questions: Do you have questions in THIS Spoken Tutorial? Please visit this site Choose the minute and second where you have the question Explain your question briefly Someone from our team will answer them Please post your timed queries on this forum. Slide Number 15 Acknowledgement Spoken Tutorial Project is funded by NMEICT, MHRD, Government of India. More information on this mission is available at this link. This is Vidhya Iyer from IIT Bombay, signing off. Thank you for joining.
|
2019-02-21 14:45:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2989041209220886, "perplexity": 1591.2367799245767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247504790.66/warc/CC-MAIN-20190221132217-20190221154217-00353.warc.gz"}
|
https://zbmath.org/?q=an:0767.33009
|
## On hypergeometric functions in several variables. I: New integral representations of Euler type.(English)Zbl 0767.33009
The author defines a class of power series whose coefficients are products of shifted factorials $$(\alpha;n)={{\Gamma(\alpha+n)} \over {\Gamma(n)}}$$ and proves that every member of this class admits an Euler integral representation and that it satisfies a holonomic system. Appell- Lauricella’s Horn’s and Aomoto-Gel’fand’s hypergeometric functions are members of this class. In fact, defining the hypergeometric series in §1 he discusses convergence and integral representation in theorem 2, in the proof of which a crucial role is played by Kummer’s trick and the twisted cycle $$\Delta^ m(w)$$ which is a higher dimensional version of classical double circuit and then establishes theorem 3 giving what may be termed as a better form of integral representation. In the second chapter of the paper applications and theorems 2 and 3 are given by obtaining new integral representations for Horn’s series $$G_ 3$$, $$H_ 5-H_ 7$$ and also by showing that $$F_ c$$ defined in §1 admits an Euler integral representation which is a generalization of that for $$F_ 4$$ by K. Aomoto [Group representations and systems of differential equations, Proc. Symp., Tokyo 1982, Adv. Stud. Pure Math. 4, 165-179 (1984; Zbl 0596.32015)] and that of $$F_ c$$ due to P. I. Pastro [Bull. Sci. Math., II. Ser. 113, No.1, 119-124 (1989; Zbl 0668.33003)]. The integral obtained by the author is in the generalised case, a product of powers of linear and quadratic polynomials which is in contrast with the integral representation of Aomoto-Gel’fand hypergeometric series whose integral is a product of powers of linear polynomials only. The paper is concluded by making a remark on the duality of the Aomoto-Gel’fand hypergeometric functions found by I. M. Gel’fand and M. I. Graev and by presenting a system of differential equations satisfied by hypergeometric series, called the hypergeometric system and by giving estimates of the rank (the dimension of the solution space) of the system.
### MSC:
33C70 Other hypergeometric functions and integrals in several variables 33C65 Appell, Horn and Lauricella functions
### Citations:
Zbl 0596.32015; Zbl 0668.33003
|
2022-10-02 10:45:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9189364910125732, "perplexity": 433.3624087253336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337307.81/warc/CC-MAIN-20221002083954-20221002113954-00206.warc.gz"}
|
http://codeforces.com/blog/entry/56028
|
### NBAH's blog
By NBAH, history, 11 months ago, translation, ,
## 895A - Pizza Separation
We can notice that if one of the sectors is continuous then all the remaining pieces also form a continuous sector.If angle of the first sector is equal to x then difference between angles of first and second sectors is |x - (360 - x)| = |2 * x - 360| = 2 * |x - 180|. So for each possible continuous sector we can count it's angle and update answer.
Time complexity O(n2) or O(n).
Solution
## 895B - XK Segments
First, we need to understand how to find the number of integers in [l, r] segment which are divisible by x. It is r / x–(l - 1) / x. After that we should sort array in ascending order. For each left boundary of the segment l = a[i] we need to find minimal and maximal index of good right boundaries. All right boundaries r = a[j] should satisfy the following condition a[j] / x–(a[i] - 1) / x = k. We already know (a[i] - 1) / x, a[j] / x is increasing while a[j] increases. So we can do binary search on sorted array to find minimal/maximal index of good right boundaries and that mean we can find the number of good right boundaries.
Time complexity O(n * log(n)).
Solution
## 895C - Square Subsets
We can notice that x is a perfect square of some integer if and only if each prime number enters decomposition of x into prime factors even times. There are only 19 prime numbers less than 70. Now we should find the bitmask for each integer in [1, 70] by the following way: There is 1 in bit representation of mask in k-th place if k-th prime number enters decomposition of that number odd times. Else there is 0. For each integer between 1 and 70 we need to find the number of ways we can take odd and even amount of it from a. Let f1[i], f0[i] be that number of ways relatively. Let dp[i][j] be the number of ways to choose some elements which are <= i from a, and their product has only those prime numbers in odd degree on whose index number j has 1 in binary representation. Initially dp[0][0] = 1.
dp[i + 1][j] + = dp[i][j] * f0[i + 1]
Time complexity is O(max*2^cnt(max)), where max is maximal integer a[i], and cnt(max) is the number of prime numbers less than max.
Solution
## 895D - String Mark
Suppose that we can calculate the function f(s) equal to the number of permutations of the string a strictly less than s. Then the answer is f(b) - f(a) - 1. Now we need to understand how to find f(s). First we should count the number of occurrences of each letter in the string a, cnt[26].Than we can iterate through the position of the first different symbol in the permutation a and the string s and update the number of remaining symbols cnt[26]. For each such position, we need to iterate through the symbol in the permutation of a which will stand in this position. It must be less than the character at this position in the s string. For each such situation we can calculate and add to the answer the number of different permutations that can be obtained using symbols not currently involved. Their number is stored in cnt[26]. In its simplest form, this solution works in O(n * k2), where k is the size of the alphabet. Such a solution can't pass the tests, but it can be optimized to O(n * k), and that is enough to solve the problem.
Time complexity O(n * k), where k is the size of alphabet.
Solution
## 895E - Eyes Closed
For each position we need to maintain mathematical expectation of the value on it. Initially, for position i, it is a[i]. Let's process the query of the first type. Each number from the interval [l1, r1] remains on its place with probability (r1 - l1) / (r1 - l1 + 1). The probability that it will be replaced by a number from [l2, r2] is 1 / (r1 - l1 + 1). The mathematical expectation of the number to which it will be replaced is the arithmetic mean of sum of the mathematical expectation of numbers in [l2, r2], let it be x. Then, to update the expectation of a number from [l1, r1], we need to multiply it by (r1 - l1) / (r1 - l1 + 1) and add x / (r1 - l1 + 1) to it. That is, the query of the first type is reduced to the query multiplying all the numbers in a segment and adding to them a number. To process the second type query, you must find the sum of the numbers in the segment. All these queries can be processed with the help of segment tree.
Time complexity O(x + q * log(n))
Solution
•
• +91
•
» 11 months ago, # | ← Rev. 2 → 0 Nice :) It was good problems!
» 11 months ago, # | +33 Hi! As a tester, I enjoyed solving the problems. Thanks to NBAH for problems.Problem E was nice segment tree with advanced lazy propagation problem, the special lazy propagation in this problem was instructive.Problem D was mixing of the known dp-on-digits idea and some combinatorics, it was a bit hard for this position.Problem C could be solved by a straight-forward meet-in-the-middle solution that is really hard for this position. Also, it could be solved with dp-on-masks. I think that this idea is a bit hard for div.2 C problem, too.To summarize, although the round was a bit hard, because of problems C and D, anyway, the problems were nice.
• » » 11 months ago, # ^ | +55 I don't think D was that hard for Div2 D. Neither the idea or the implementation of the procedure described in the editorial isn't very difficult, you don't have to do DP on digits. The required combinatorics knowledge wasn't unreasonable, too. D was arguably easier than C, I think.
• » » 11 months ago, # ^ | 0 Can you explain the meet in the middle soluton?
• » » » 11 months ago, # ^ | +8 Show each number as a 19-digit mask. There is at most 44 different masks (you can test). Divide this masks into two groups. For each group find all of the different masks they can make by xor of some subset. The answer can be calculated easily using meet in the middle.
• » » 11 months ago, # ^ | 0 Could someone please explain in problem C what exactly dp[i][j] is? I understood that i is the upper limit of the number but what exactly is j? Also, if i is the upper limit then shouldn't the dp array be ll dp[71][1<<20], so why does the author's solution set it as ll dp[2][1 << 20];. Please explain. Thanks.
• » » » 11 months ago, # ^ | +1 If you understand the definition of dp[71][1<<20], then you're on the right track! For this case, dp[i][j] is only dependent on dp[i-1][j] — previous step, therefore, we don't need to store all steps until i-1 (exclusively).
• » » » » 11 months ago, # ^ | 0 Oh! Silly me. Thanks. :)
» 11 months ago, # | +8 Would anyone mind explaining the solution to Problem C in more detail? I understand the bitmask, but I don't understand the definitions for f1[i], f0[i] and the dp[i][j] it uses.
• » » 11 months ago, # ^ | ← Rev. 3 → +2 Lol f1[i] is the number of ways to choose an odd number of numbers out of a set of a[i] numbers, and f0[i] is the number of ways to choose an even number of numbers out of a set of a[i] numbers.It ends up being:f1[i] = 0 if a[i] = 0 and f1[i] = 2a[i] - 1 otherwise.f0[i] = 1 if a[i] = 0 and f0[i] = 2a[i] - 1 otherwise.Basically know that for k ≥ 1, the number of ways to choose an odd number of elements out of a set of k elements is equal to the number of ways to choose an even number of elements out of a set of k elements, and they are both equal 2k - 1. Here is a proof.
• » » » 11 months ago, # ^ | ← Rev. 2 → 0 OK, but what does this have to do with the rest of the problem? Why is this being computed in the first place?
• » » » » 11 months ago, # ^ | ← Rev. 2 → +4 Because the mask j becomes if you take an odd number of x but stays as j if you take an even number of x.
• » » » » » 11 months ago, # ^ | 0 What do you mean by odd number of x ?
• » » » » 11 months ago, # ^ | 0 vb7401, did you get it? Because, I didn't.
• » » » » » 11 months ago, # ^ | 0 Its like basic knapsack, if you include x or don't. If you include x, the mask becomes j ^ mask[x] whereas if you don't include x, the mask remains j only. The point is if you include x , you can include odd number of times x , the mask will still remain j ^ mask[x] ( xor properties) and if you take even number of times of x , the mask will still be j.
• » » » 11 months ago, # ^ | 0 a[i] in your explanation is not the same array that was given in the problem right? The way I understand it is that it is a new array that counts number of occurrences of a given number i, which can be in the range of [1,70] which in the problem was referred to as ai (1 ≤ ai ≤ 70)
• » » » » 11 months ago, # ^ | 0 Yeah you are right, what a[i] means in my comment above is the number of times i appears in the input.
• » » 11 months ago, # ^ | ← Rev. 5 → +4 Although the answer is same, I reached at it in a different way.If there was a lower constraint on n, what would have been the solution?Note that, it does not matter in which order we process a[i] , the above dp holds.Let's come back to the original problem. Let's sort all the numbers. We will take advantage of the fact that there are only 70 distinct numbers and try to simulate the dp correspondigly.If the frequency of a number would have been 1,If the frequency of a number would have been 2,Extending,Now you easily shrink the first dimension from 105 to 70.
• » » » 11 months ago, # ^ | 0 can anyone explain the 2nd problem?
» 11 months ago, # | 0 Can anyone tell me why this submission 32697001 for problem C gets TLE? This 32697019 got ac, as you can see the only difference is the order i build dp states but the complexity remains the same
• » » 11 months ago, # ^ | 0 Hi, I think it has the same problem I had. It is calling fast_pow(2, freq[num] — 1), that is log(n) in EVERY state of the recursion, when it could be calculated at most 70 times.Hope this helps!
• » » » 11 months ago, # ^ | ← Rev. 3 → 0 Hi,Both submissions call fast pow(2, freq[num] — 1) in every state of the recursion, and one of them got accepted, so i don't think that should be a problem.Thanks for the reply anywayEDIT: Actually, precalculating powers of two up to n gives ac in first submission.
» 11 months ago, # | +11 I find it a pretty nice coincidence that both today's C and the F from the last educational round required to calculate C(n, 0) + C(n, 2) + C(n, 4) + ... as a subproblem.In the educational round, I spent some time thinking about the sum, eventually arriving to the conclusion that it's equal with 2^(n-1). Today's round, I simply brute forced the sum, without too much thought. When I looked at other submissions, I saw the 2^(n-1) term in them, and I was something like "hmmm... okay... I solved this just 3 days ago and I've already forgotten about it".
» 11 months ago, # | +19 C can be solved in a more (maybe) straightforward way.Let dp[i][mask] be the number of way to choose some subset of first i elements and their product has j-th prime with odd degree(if j-th bit of mask is 1). Directly implementing this solution results in a O(N × 219) solution which is too slow.However, for each number x ≤ 70, only 2, 3, 5, 7 can have power more than 1. If we group up all the number whose prime divisor contains 11, we can have a smaller dp state as dp[mask][11?] denoting the parity of current product on 2, 3, 5, 7, 11. After going through all these numbers, we can get rid of everything about 11 and only store the information of dp[mask][0]. Then, considering all the number whose prime divisor contains 13, and so on.The time complexity is O(N × 25). In this way, we can even solve the problem with different weight on each element(i.e. sum of total weight of choosing a subset whose product is a square number).
• » » 11 months ago, # ^ | 0 That idea make me remember this problem.https://community.topcoder.com/stat?c=problem_statement&pm=12074
» 11 months ago, # | +76 C can be solved by equation set module 2。 like a_11x_1+a_12x_2...+a_1nx_n=0 a_ij dones whether the jth number have the ith prime odd times or even times,x_i dones whether the ith number will be chosed. And the solution for these equations is the answer for the problem.It can be solved in O(n*19) using bitmask.
• » » 11 months ago, # ^ | 0 You mean the number of solutions of this system of equations gives the answer, right?! How do you find it?
• » » » 11 months ago, # ^ | 0 If M is the matrix whose i-th column is xi, then (1+) the answer is the cardinal of the kernel of M, which is (rank-nullity theorem). The rank can be computed using Gaussian elimination in .
» 11 months ago, # | 0 Can B be solved using two pointers. If so, then how?
• » » 11 months ago, # ^ | 0 I think yes. First you sort array a. Then for every a[i], you have pointer l which a[l] is the first element has (a[l] — 1) / x == a[i] / x — k, and a[l] <= a[i]; and pointer r which a[r] is the last element has (a[r] — 1) / x == a[i] / x — k and a[r] <= a[i]. You can find r by brute-force from the current l, and for next i, you can find l by brute-forces from the last l
• » » 11 months ago, # ^ | 0
» 11 months ago, # | 0 I still don't get C. :-(
» 11 months ago, # | 0 How to slove B?
• » » 11 months ago, # ^ | 0 Read this solution. Ask me, what you don't understand.Div 2B Solution
• » » » 11 months ago, # ^ | 0 Why order? this is a continuous interval and ((a[i] — 1) / x) what does it mean?
• » » » » 11 months ago, # ^ | +1 Why order? We have sorted the array, so we have an increasing function => the interval is continuous. (a[i] — 1) / x what does it mean? Assuming that a[i] is a left border we can calculate how many integers divisible by x there are in interval [0; a[i]]. It will be a[i] / x obviously. We can do the same operation with the right border. Okay, now we want to calculate amount of integers on a segment [l; r]. It seems like it will be a[right] / x — a[left] / x, but it's wrong. For example: x = 3, a[left] = 3, a[right] = 5. We can do the following operation and get 0, but we need 1. Therefore we need to use a[i] — 1 to prevent this situation when the a[left] divisible by x.
• » » » » » 11 months ago, # ^ | 0 Thank you! You are sooooooo cute~
• » » 11 months ago, # ^ | ← Rev. 3 → 0 My solution with only binsearch http://codeforces.com/contest/895/submission/32701257
• » » » 11 months ago, # ^ | 0 Could you further explain this answer to me? What's the reasoning behind: vector::iterator l = lower_bound(a.begin(), a.end(), max((long long)a[i], (long long)x * (k + (a[i] - 1) / x))); vector::iterator r = lower_bound(a.begin(), a.end(), max((long long)a[i], (long long)x * (k + 1 + (a[i] - 1) / x))); Also, why do you use lower_bound? Why not upper_bound?
• » » » » 11 months ago, # ^ | 0 Just cos u need to find all numbers in [max((long long)a[i], (long long)x * (k + (a[i] — 1) / x)); max((long long)a[i], (long long)x * (k + 1 + (a[i] — 1) / x))), not in() (We include lower bound)
• » » » 11 months ago, # ^ | 0 max() call in second search is redundant. a[j] in that equation is always greater than or equal to a[i]
• » » » » 11 months ago, # ^ | 0 If k == 0 then x * (k + (a[i] — 1) / x) can < a[i].
• » » » » » 11 months ago, # ^ | ← Rev. 3 → 0 I mean second search, first one is correct. Given that x >= 1 and k >= 0: x * (k + 1 + (a[i] - 1) / x) >= x + x * ((a[i] - 1)/x) >= x + (a[i] - 1 - (a[i] - 1)%x) = a[i] - ((a[i] - 1)%x + 1) + x >= a[i]
• » » » » » » 11 months ago, # ^ | 0 You are right.
» 11 months ago, # | 0 What do odd and even numbers have to do with the dp tranisition ?
• » » 11 months ago, # ^ | 0 If u use i even number of times, mask will not change because even number of x means that you will multiply with a square. If u use i odd number of times, you must change mask, because it wont stay same.
» 11 months ago, # | 0 3 rounds in a row we have a task on bin_pow. Coincidence?:D
» 11 months ago, # | ← Rev. 2 → +4 C can be solved much faster by considering prime factorisation exponents as vector space over and then answer is just 2n - b - 1 where b is size of basis.
• » » 11 months ago, # ^ | 0 http://codeforces.com/contest/895/submission/32697113 Is this what you are referring to?
• » » 11 months ago, # ^ | 0 What is the principle?
• » » 10 months ago, # ^ | 0 Your idea is great. Could you give more details or code here? Thanks a lot!
» 11 months ago, # | 0 I think D can be solved in O(n*logk) if we use Fenwick Tree to keep the number of each letters,where k is the size of the alphabet.It doesn't work better for this problem,but things will be different if k is up to 10^5 or more.My submission
» 11 months ago, # | 0 Can anyone please, tell me the technique to solve problem A in O(n)?
• » » 11 months ago, # ^ | 0 Use prefix sum and 2 pointers.
• » » » 11 months ago, # ^ | 0 Thanks :) and Can you please explain, uses of those pointers?
• » » » 11 months ago, # ^ | 0 Got that Idea. Thanks again :)
• » » » » 11 months ago, # ^ | 0 can you please explain me the the problem A? what is a continuous sector? thanks in advance...
• » » » » » 11 months ago, # ^ | 0 it means one cake should be consist of a[i],a[i+1]....a[j-1],a[j] or a[i],a[i+1]...a[n],a[1],a[2]...a[j]
• » » » » 10 months ago, # ^ | 0 Could you tell me how to Use prefix sum and 2 pointers? Thanks a lot
• » » » » » 10 months ago, # ^ | 0 You can google it "2 pointer algorithm codeforces" :)
» 11 months ago, # | 0 Can anyone explain, why in Problem A on the test 8: 5 110 90 70 50 40 The answer is 40? We can take 90 40 50 and 110 70 — 180 and 180, so the answer is 0, isn't it? Or did I misunderstand it?
• » » 11 months ago, # ^ | 0 Test 8 picture
• » » 11 months ago, # ^ | 0 Because, in the question it is said that you have to take continuous sector
• » » » 11 months ago, # ^ | 0 Ahhh, I see. Thanks :)
» 11 months ago, # | ← Rev. 2 → 0 b
» 11 months ago, # | 0 can anyone please tell me about the query which i will present you below.. read it carefully""""I solved problem A. by this way... let us consider an example where INPUT: 4 170 30 150 10 (0) (1) (2) (3) here i numbered these pieces with the indices of this sector array representing the sector angle for each individual piece, (in to which the pizza was cut) . now i arrange these pieces in a fashion like this: starting from the +x axis extreme right going anticlockwise through each sector angle and then making the 4 pieces in anticlockwise sense such that the order follows (0)->(1)->(2)->(3)->(0)now the possible combinations for the 2 continuous sectors can be like this if starting from (0) (i.e first piece) By considering the clockwise format sector-1 sector-2 (0) (3) (2) (1) stick to the clockwise format only (0) (3) (2) (1) (0) (3) (2) (1) Note that sector-1 will not contains the all 4 pieces then in that case min diff will be max.(which will not be the ans) similarly if starting pos is (1),(2)or (3) all possible combinations can be obtained But the point here is .. i am just considered the clockwise direction initially and wrote my code and it gets accepted verdict . here is link http://codeforces.com/contest/895/submission/32719630 But now just figure out this thing when i consider the anti-clockwise direction of these cutted pieces the possible permutations for the case when starting done at (0) (i.e 1 piece is) like this : sector-1 sector-2 (0) (1) (2) (3) same case just ignore as above mentioned (0) (1) (2) (3) (0) (1) (2) (3) I am not checking for these cases it might be possible that i get more minimum value here than the previous cases Can plzz anyone tell me why my code gets accepted even though i not checked for anticlockwise direction ... plzz clear this doubt it just sucks me :/""""
• » » 11 months ago, # ^ | ← Rev. 2 → 0 As I understood, and as I finally get "Full solution": For exmaple, we have 3 110 80 170 (Answer 20) We should take different variants of pieces: 110 110 80 110 80 170 80 80 170 170 And for each variant, we calculate sum and compare with the minimum: if (min < sum-(360-sum)) min = sum-(360-sum)Tried to explain my solution on my bad english :D
• » » » 11 months ago, # ^ | 0 Maybe it isn't a good idea, but i can't see editorial for this problems :( Screenshot
• » » » » 11 months ago, # ^ | 0 yes
• » » 11 months ago, # ^ | 0 There is more combinations those your solution will check, when the while loop completes its iteration.Such as,For clockwise (3) (2) (1) (0) (3) (2) (1) (0) (3) (2) (1) (0) It will continue till it reaches (1) and these cases are also included in anti-clockwise.
• » » » 11 months ago, # ^ | 0 ya i just recognize this thing .. thnks for ur help bro can u otimize my solution or can u tell me the one given in editorial !
• » » » » 11 months ago, # ^ | 0 Check My Solution
» 11 months ago, # | +3 Am I getting this logic right for Problem B? Realize that we can find the numbers divisible by x in [l, r] by using the formula r/x — (l — 1)/x. Its (l — 1) to prevent an off by one error when l % x == 0. Sort the array (in ascending order). Iterate through the array and take a[i] to be the left bound. Since we are given a left bound we can find the lowest and highest indices that still satisfy the eq given in step 1. (use binary search at this step?) Knowing the indices we can calculate the actual number of valid right bounds for every left bound and we have our answer. Is this logic correct? I think I am still getting TLE because I am not using Binary Search. I got a bit lost in the editorial starting at step 3.
» 11 months ago, # | 0 For the problem B,Let us suppose an additional constrain is added that we can only consider pairs such that i <=j So in addition to a[i] <= a[j] , i <=j . Then the problem can be solved with a BST yes?
» 11 months ago, # | 0 In The Problem A Test case no 50. input is: 7 41 38 41 31 22 41 146There have minimal difference is 6. 41+41+41+38+22=183 146+31=177 So, 183-177=6. But How to ans is 14?Anyone Explain please!!
• » » 11 months ago, # ^ | ← Rev. 2 → 0 Because the numbers you have chosen are not continuous.
• » » » 11 months ago, # ^ | 0 Thanks
» 11 months ago, # | -18 THIS IS VERY IMPORTANT !!! I submitted my solution for problem A during the contest and it passed the pretests then I got RTE on test 49 during system test phase, after the contest I resubmitted the exact same code and I got ACCEPTED !!!! submission during contest : http://codeforces.com/contest/895/submission/32683171 submission after the contest : http://codeforces.com/contest/895/submission/32733925 PLEASE nbah CHECK THIS PROBLEM ! THANKS.
» 11 months ago, # | ← Rev. 2 → 0 For primes greater than 35 the only number that can affect the mask that correspond to that primes is exactly that prime, and those nombers doesn't affect the other bits on the mask. So, the unic solution for each of those numbers is to select en even ammount of them, and for primes <= 35 you can solve using the Editorial approach in O(70* 2 ^ 11) which is the overall complexity 32702420
» 11 months ago, # | ← Rev. 2 → 0 can you explain the sample test case 3 for problem E?after 2 moves [1 1 5 6 10][1 1 5 6 10]the mathematical expectation should be[2.6 3.6 4.6 5.6 6.6 4.4 5.4 6.4 7.4 8.4]then [1 1 3 6 9]the mathematical expectation of left should be 3.6 and the mathematical expectation of right should be 5.9so the answer of query [2 1 3] should be 2.6+3.6+4.6+5.9-3.6=13.1 why it is 14?
• » » 11 months ago, # ^ | 0 the mathematical expectation should be [2.6 3.6 4.6 5.6 6.6 4.4 5.4 6.4 7.4 8.4] I am afraid that this is incorrect.E[6:10] after first move = (40-8+3)/5 = 7E[1] after the first move = (1*0.8 + 8*0.2) = 2.4E[1] after the second move = (2.4*0.8 + 7*0.2) = 3.32
» 11 months ago, # | 0 How to solve A if the segments need not to be continuous ?
• » » 11 months ago, # ^ | 0 Use subset sum dp and check possible sum nearest to 180. If the sum is S, your answer will be 2|180 - S|.
» 11 months ago, # | 0 r / x–(l - 1) / x what does this mean?
» 11 months ago, # | 0 I am getting TLE in D, because my solution is running in O(n * k * k). I am not able to reduce it to O(n * k). The code given in the editorial is not clear to me. It doesn't look very intuitive. Can someone please help?
• » » 11 months ago, # ^ | ← Rev. 2 → 0 Check out my code, or someone else's.When trying to count the permutations of A less than B, fix the prefix that will be the same for both strings (N ways). Then, the next character of A has to be less than the character of B at that place. So, if you have countA[26], telling you how many of which character you have in A after the prefix, you can in O(k) add, for each character c < B[i], if countA[c] > 0, the number of ways to finish the string, which is just the number of permutations of the letters you have in countA[] (without c).You can make it so that you only call the fast_pow(x, y) function O(n) times, so the complexity is O(n(k + log mod)).You can also precompute the required modular inverses in O(n) with some maths knowledge, which leads to a O(nk) solution.
• » » » 11 months ago, # ^ | +5 Thanks a lot !! Beautifully explained :)
» 11 months ago, # | ← Rev. 2 → 0 in problem D's solution,most people use two array fac and ifac,fac[i]=i! ifac[i]=fac[i]^1e9+5.can anyone tell me what's the usage of ifac and why is it right? thanks in advance...
• » » 11 months ago, # ^ | 0
• » » » 11 months ago, # ^ | 0 thank you, dalao
» 11 months ago, # | 0 can anyone explain me hoe to solve C(square subsets) please!!
» 11 months ago, # | +1 For people struggling to convert O(n * k * k) solution to O(n * k) , this is a very clear submission that I happened to find. Hope it helps.Submission
» 11 months ago, # | ← Rev. 3 → 0 Verdict: wrong answer in test case 13 problem: 895C - Square Subsets submission: 32803738tried to solve it by recursive dp+bitmask. long long int produces MLE and int produces WA.would you mind giving me suggestion how to get rid of this situation? manually tested all the small test cases and they are ok.
• » » 11 months ago, # ^ | ← Rev. 3 → 0 Your big_mod function and the following two lines can overflow. long long int p1=((ncr[p]%mod)*(fun(p+1,m^mask[p])%mod))%mod; long long int p2=((ncr[p]%mod)*(fun(p+1,m)%mod))%mod; Add 1LL before the multiplication. fix: 32808994
• » » » 11 months ago, # ^ | 0 thank u very much, but may i ask u, how did those function cause overfolw? i am not sure about that.
• » » » » 11 months ago, # ^ | ← Rev. 2 → 0 mod = 1000000007 x%mod can be 1000000006; so x*x larger than 32 bit; add 1LL in front of it to force it to become a 64 bit integer first. hope it's clear.
• » » » » » 11 months ago, # ^ | 0 understood. thank u :)
» 11 months ago, # | ← Rev. 2 → 0 I have find my error now....
» 11 months ago, # | 0 Problem A failed Test 50 7 41 38 41 31 22 41 146 Output 6 Answer 14 Checker Log wrong answer 1st numbers differ — expected: '14', found: '6'But the right output shouldn' be 6 as the minimum ? A takes 146 + 31 = 177 B takes 41 * 3 +38 + 22 = 183
• » » 11 months ago, # ^ | 0 The elements should be continuous, 146 and 31 are not. Also they can be continuous in circular fashion (i.e 146 is next to 7)
• » » » 11 months ago, # ^ | 0 Got it. Thanks
» 11 months ago, # | 0 would anyone mind explaining solution of problem D with more details?I read the tutorial and some solutions several times, but still i don't get it :( may be it requires some algorithm or data structure i need to learn.any suggestion?
» 11 months ago, # | +3 In problem C; I think we can only use the prime number in [1,35],because if the prime number is bigger than 35,it means the number is equal it;So we can ignore them,and the solution will get Accepted(Its time complexity is O(n*cnt(2^(max/2))). Time complexity O(max*cnt(2^(max/2))) It is very quick! Sorry about for my poor English.
• » » 10 months ago, # ^ | 0 Could you give me more details or code about your idea? Thanks a lot
» 11 months ago, # | 0 I was struggling for two days now in order to understand problem C solution. Is there any prior knowledge (prior problem) that is needed to understand the solution easily ? The explanations given so far are not clear to me. Could somebody give a simple example with only 3 or 4 elements instead of 70 ?Thanks
• » » 11 months ago, # ^ | 0 are you familiar with bitmask dp? if not then first u need to learn it. bitmask dpthen the rest is based on some mathematical facts. you can read it from here
» 11 months ago, # | ← Rev. 2 → 0 Hi everyone! I ran into a weird bug in my code for problem D. This http://codeforces.com/contest/895/submission/32918069 solution gets accepted and this http://codeforces.com/contest/895/submission/32918020 does not. The only difference between the two is I REPLACED THE FORMAT OF STRING INPUT FROM SCANF TO CIN, not the other way around, and the corresponding data types from char array to string. Could anyone please help? Thanks in advance :)
• » » 10 months ago, # ^ | ← Rev. 3 → +3 For some weird reason, gcc decided not to hoist the strlen(a) call out of the for loop. Since strlen() is linear-time, that particular for loop is quadratic in the length of a. string::size() is O(1) because strings store their length.This is a standard optimization (though, to be honest, you should never rely on it), so I am not sure why gcc failed to do it. Clang does well here.
» 10 months ago, # | ← Rev. 2 → 0 In problem A In test 50 where the input is 741 38 41 31 22 41 146the output is 14 can anyone explain to me why the answer is not 6if we took one sector the pieces 146 + 31 = 177 and the other sector will be 183 the result should be 6 so why is this wrong and 14 correct ?Thanks in advance
• » » 10 months ago, # ^ | ← Rev. 2 → 0 We have to choose numbers continuously in cyclic form. 146 and 31 are not together
» 9 months ago, # | ← Rev. 2 → 0 In Problem DIV2/C (895C)"For each integer between 1 and 70 we need to find the number of ways we can take odd and even amount of it from a"what do you mean by "Number of ways we can take odd and even amount of it from a."UDP: I got it.
|
2018-10-19 18:55:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42578884959220886, "perplexity": 981.2138827559102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512421.5/warc/CC-MAIN-20181019170918-20181019192418-00401.warc.gz"}
|
https://physics.stackexchange.com/questions/139855/what-is-meant-by-the-spin-of-a-particle?r=SearchResults
|
# What is meant by the spin of a particle? [duplicate]
I have been studying that electrons have quantum number called spin quantum number(s), this number can have either +1/2 or -1/2 value. If s=+1/2, the spin is clockwise and if s=-1/2, the spin is anti clockwise about it's imaginary axis. But, I am facing some problems now with this concept, the problems are, photon have 1-spin, another recently discovered sub-atomic particle having spin 3 (para. 7), how physicists explain these spin?
• Depends on what you mean by "explain". Physics is descriptive. We can observe that quantum mechanical objects have an internal physical property that behaves very similar to angular momentum. We call this spin. On the theoretical level it turns out that spin emerges naturally from relativistic quantum theory. That's however, is not so much an explanation as it is a consistent result: we see that the world is relativistic and nature seems to implement many of the mathematical consequences that come with that. Oct 11, 2014 at 14:49
• Possible duplicates: physics.stackexchange.com/q/1/2451 and links therein. Oct 11, 2014 at 15:41
• In QM angular momentum is a change in the phase of the wave function under rotations, which can come about in not one way (normal angular momentum) but two ways (normal ang mom OR mixing up components of the wave function, spin, which explains the need for all the representation theory) physics.stackexchange.com/q/135885 Oct 11, 2014 at 15:45
• Subquestion: Do we know that anything is actually spinning, or is "spin" just a term applied to a measurable side-effect of the particle. Ie, is spin a physical phenomenon or a mathematical notation? Oct 11, 2014 at 17:32
Spin is best understood as an intrinsic angular momentum. It is probably easier to understand the concept for a charged particle. A classical charged particle moving along a circle has an angular momentum and the "circuit" has a magnetic moment. Further, the two are proportional to each other.
It is experimentally found that a charged particle like an electron has an magnetic moment, the way it has a charge and a mass. We therefore suggest that the electron also has an intrinsic angular momentum $\vec{S}$, proportional to its magnetic moment $\vec{\mu}$.
We also find experimentally that an electron orbital angular momentum $\vec{L}$ is not a conserved quantity but $\vec{J} = \vec{L} + \vec{S}$ is. Therefore, $\vec{S}$ is not just a mathematical convenience but a "real" angular momentum.
Spin arises from the need to represent the rotation group $\mathrm{SO}(3)$ upon our Hilbert space of states. We need such a representation because the rotations (together with space translations) correspond to the non-relativistic changes of reference frames.
Since states are only determined up to rays in the Hilbert space, the true space of states on which we must represent the group is the projective Hilbert space, and the projective representations of a Lie group are (under some conditions) in bijection to the linear representations of its covering group, which is $\mathrm{SU}(2)$.
It turns out that these representations can be labeled by an integer $s \in \mathbb{N}$ or a half-integer $s \in \mathbb{N} + \frac{1}{2}$. This number is what we call spin.
• Apin arises from an experimental need to conserve angular momentum, mathematical explanations for this are an ad hoc addition. Oct 11, 2014 at 18:31
• +1, sigh... I hope some day I'll finally find the time to learn at least some basic QFT... :( Oct 12, 2014 at 23:29
|
2023-03-22 02:08:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.778938889503479, "perplexity": 421.38929345287397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00151.warc.gz"}
|
http://tex.stackexchange.com/questions/121158/import-pdf-page-selected-by-label-instead-of-page-number
|
# Import pdf page selected by label instead of page number
For each figure in my main document, I import the corresponding page of a multipage .pdf containing all pictures (one picture per page).
Is there a way to give a label to each picture, in order to import a picture by giving its label instead of its page number ? This means that I would like to do something like
\includegraphics[pagelabel={Figure name}]{pictures.pdf}
\includegraphics[page=1]{pictures.pdf}
Here is a code producing my file with (some) figures :
\documentclass[multi=true,tikz,border={0pt 0pt 1cm 0pt}]{standalone}
\usepackage{amsmath,amssymb,amsrefs}
\usepackage{t1enc}%\usepackage[svgnames]{xcolor}
%\usepackage{tikz}
\usepackage{tcolorbox}
\usepackage{pgfplots}
\pgfplotsset{compat=1.8}
\newcommand{\all}{thick,width=\x,height=\y}
\newcommand{\size}{\small}
\begin{document}
\begin{tikzpicture} %%% I would like to give a label \label{Figure name}
\begin{axis}[\all,ybar=-10pt,enlargelimits=0.05,ylabel={Volume},ymin=1,xlabel={Price},ymax=20,nodes near coords,nodes near coords align={vertical},xtick={2,3,4,5,6,7,8,9,10,11,12}]\size
\addplot[black,fill=blue] coordinates {(2,7) (3,8) (4, 15) (5, 10) (6,12)};
\addplot[black,fill=red] coordinates { (8, 18) (9, 12) (10,15) (11,5) (12,3)};
\end{axis}
\end{tikzpicture}
\newpage
\begin{tikzpicture}
\begin{axis}[\all,ybar=-10pt,enlargelimits=0.05,ylabel={Volume},ymin=1,xlabel={Price},ymax=20,nodes near coords,nodes near coords align={vertical},xtick={2,3,4,5,6,7,8,9,10,11,12}]\size
\addplot[black,fill=blue] coordinates {(2,7) (3,8) (4, 15) (5, 10) (6,12)};
\addplot[black,fill=red] coordinates { (8, 8) (9, 12) (10,15) (11,5) (12,3)};
\addplot[black,fill=red, opacity=0.2] coordinates { (8, 18)};
\end{axis}
\end{tikzpicture} \end{document}
-
Put in the file figs.tex with the pictures labels, e.g. \label{fig:nameA} and \label{fig:nameB} on the pages. And then do this in your main document:
\documentclass{article}
\usepackage{graphicx,xr,refcount}
\externaldocument[A-]{figs}
\begin{document}
\includegraphics[page=\getpagerefnumber{A-fig:nameA}]{figs}
\includegraphics[page=\getpagerefnumber{A-fig:nameB}]{figs}
\end{document}
-
I see your answer is a little bit simpler than mine, and you beat me by 53 seconds. :^) – Steven B. Segletes Jun 26 '13 at 18:09
@StevenB.Segletes: I can't handle complicated code so I always find the simple solutions ;-). – Ulrike Fischer Jun 26 '13 at 18:10
EDITED to fully allow printing figures by reference label. (RE-EDITED to correct \label which had been pointing to the key, rather than figure number)
If you are willing to do things a little differently (i.e., using your figure source file directly, rather than by way of PDF import), here is an idea employing the figure (and table) deferral mechanism of the boxhandler package. I am assuming that you wish to use your graphics as figure floats.
Before getting into the specifics of this solution, I should just say in generality that boxhandler can save figures (graphics and captions) without printing them (using \holdFigures). When it finally does print them, it wants to do so in the order they were created (via \nextFigure). So the challenge to this solution is in fooling boxhandler's FigureClearedIndex count into printing out the stored images in the order eventually requested by the user, rather than in the order they were generated.
Now, to the solution. First, you put your figures in a separate file, like this (figs.tex) that can be \input. Note that I have created there, at the top of figs.tex, the commands \storethisfigure and \showreffigure which will be used to create and later recall-by-reference the figures (I had to use Heiko's refcount package to accomplish this task):
\usepackage{boxhandler}
\usepackage{refcount}
\newcounter{FigOutputCount}
\newcommand\showreffig[1]{%
\setcounter{FigureClearedIndex}{\getrefnumber{KEY#1}}%
\refstepcounter{FigOutputCount}\label{#1}%
\nextFigure%
}
\newcounter{reffigcounter}
\newcommand\storethisfigure[3]{%
\refstepcounter{reffigcounter}\label{KEY#1}%
\bxfigure{#2}{#3}%
}
\holdFigures
\storethisfigure{fg:large}{large figure caption}{%
\scalebox{.9}{%
\begin{tikzpicture} %%% I would like to give a label \label{Figure name}
\begin{axis}[\all,ybar=-10pt,enlargelimits=0.05,ylabel={Volume},ymin=1,xlabel={%
Price},ymax=20,nodes near coords,nodes near coords align={vertical},xtick={%
2,3,4,5,6,7,8,9,10,11,12}]\size
\addplot[black,fill=blue] coordinates {(2,7) (3,8) (4, 15) (5, 10) (6,12)};
\addplot[black,fill=red] coordinates { (8, 18) (9, 12) (10,15) (11,5) (12,3)};
\end{axis}
\end{tikzpicture}
}
}
\storethisfigure{fg:small}{small figure caption}{%
\scalebox{.6}{%
\begin{tikzpicture}
\begin{axis}[\all,ybar=-10pt,enlargelimits=0.05,ylabel={Volume},ymin=1,xlabel={%
Price},ymax=20,nodes near coords,nodes near coords align={vertical},xtick={%
2,3,4,5,6,7,8,9,10,11,12}]\size
\addplot[black,fill=blue] coordinates {(2,7) (3,8) (4, 15) (5, 10) (6,12)};
\addplot[black,fill=red] coordinates { (8, 8) (9, 12) (10,15) (11,5) (12,3)};
\addplot[black,fill=red, opacity=0.2] coordinates { (8, 18)};
\end{axis}
\end{tikzpicture}
}
}
The figs.tex file is \input in the preamble of your main document. Then, to recall the figures in whatever order you want, use \showreffig{label}. Note in this example, I first recall the image that was second in the file and later the first image.
\documentclass{article}
\usepackage{amsmath,amssymb,amsrefs}
\usepackage{t1enc}%\usepackage[svgnames]{xcolor}
%\usepackage{tikz}
\usepackage{tcolorbox}
\usepackage{pgfplots}
\pgfplotsset{compat=1.8}
\usepackage{lipsum}
\newcommand{\all}{thick,width=\x,height=\y}
\newcommand{\size}{\small}
\input{figs}
\begin{document}
\lipsum[4]
\showreffig{fg:small}
\lipsum[4]
\showreffig{fg:large}
In figure~\ref{fg:large}, we see the following. But in
figure~\ref{fg:small}...
\end{document}
The result looks like this:
As a side benefit, the boxhandler package gives great control over caption appearance.
-
This is a good solution ! In my case, I would prefer to compile the pictures and the main file separately to increase the speed. – Laurent Dudok de Wit Jun 26 '13 at 14:43
@user81566 Understood. In that case, Im guessing the figs file will have to output a special file that contains reference information, later read by the main tex file. – Steven B. Segletes Jun 26 '13 at 15:08
Exactly, I thought for example to add a hidden text, corresponding to the label of the picture. This hidden text could then be read as the pdf files are imported. This may be complicated to implement, but it could be useful when it works :) – Laurent Dudok de Wit Jun 26 '13 at 15:19
I am adding a second answer, because in the other answer, I showed how to solve this problem (create figures via \ref labels) if you were willing to recompile the figures with the main document. Here, I produce what the user actually wanted, which is to load the figures in from an external PDF file, also via the \ref label mechanism.
In this solution (called FBR for figure-by-ref), I also have a file that contains just the actual figure data. Here it is, called FBRfigs.tex and looks like this:
\storethisfigure{fg:eighteen}{18 peak sells}{%
\begin{tikzpicture} %%% I would like to give a label \label{Figure name}
\begin{axis}[\all,ybar=-10pt,enlargelimits=0.05,ylabel={Volume},ymin=1,xlabel={Price},ymax=20,nodes near coords,nodes near coords align={vertical},xtick={2,3,4,5,6,7,8,9,10,11,12}]\size
\addplot[black,fill=blue] coordinates {(2,7) (3,8) (4, 15) (5, 10) (6,12)};
\addplot[black,fill=red] coordinates { (8, 18) (9, 12) (10,15) (11,5) (12,3)};
\end{axis}
\end{tikzpicture}
}
\conditionalnewpage
\storethisfigure{fg:eight}{8 peak sells}{%
\begin{tikzpicture}
\begin{axis}[\all,ybar=-10pt,enlargelimits=0.05,ylabel={Volume},ymin=1,xlabel={Price},ymax=20,nodes near coords,nodes near coords align={vertical},xtick={2,3,4,5,6,7,8,9,10,11,12}]\size
\addplot[black,fill=blue] coordinates {(2,7) (3,8) (4, 15) (5, 10) (6,12)};
\addplot[black,fill=red] coordinates { (8, 8) (9, 12) (10,15) (11,5) (12,3)};
\addplot[black,fill=red, opacity=0.2] coordinates { (8, 18)};
\end{axis}
\end{tikzpicture}
}
Now, to create the desired PDF file with just the images in it, I must place a wrapper around this data file. I call the wrapper FBRpdfs.tex, and it looks like this. Note the wrapper uses information supplied by the user in his question.
\documentclass[multi=true,tikz,border={0pt 0pt 1cm 0pt}]{standalone}
\usepackage{amsmath,amssymb,amsrefs}
\usepackage{t1enc}%\usepackage[svgnames]{xcolor}
%\usepackage{tikz}
\usepackage{tcolorbox}
\usepackage{pgfplots}
\pgfplotsset{compat=1.8}
\newcommand{\all}{thick,width=\x,height=\y}
\newcommand{\size}{\small}
\newcommand\storethisfigure[3]{#3}
\newcommand\conditionalnewpage{\newpage}
\begin{document}
\input{FBRfigs}
\end{document}
Thus, pdflatex'ing FBRpdfs.tex produces FBRpdfs.pdf, containing the two images on two pages (Note that I now identify the images as "18" and "8" because I can no longer use \scalebox to shrink them, for some reason, with the standalone class, as I did in my other solution).
Now, for the solution on how to access these images by reference, I use a very similar logic to my earlier posted solution. Rather than using a different external file to store the reference information, however, I actually re-read the FBRfigs.tex file to get the reference label information, WHILE THROWING AWAY THE ACTUAL GRAPHICS CONTENT of that file! This means, I don't need all the tikz stuff, etc. and don't need to recompile the graphics. I am only using the file to get the label information, and I actually save the captions, too. Then, with the \includefig command that I define, I extract the graphic from the file, and place the re-read caption under it, as follows:
\documentclass{article}
\usepackage{lipsum}
\usepackage{boxhandler}
\usepackage{graphicx}
\usepackage{refcount}
\newcounter{FigureFileIndex}
\newcounter{FigOutputCount}
\newcounter{reffigcounter}
\newcommand\includefig[2][htbp]{%
\setcounter{FigureFileIndex}{\getrefnumber{KEY#2}}%
\refstepcounter{FigOutputCount}\label{#2}%
\bxfigure[#1]{\csname figurecaption\roman{FigureFileIndex}\endcsname}%
{\includegraphics[page=\value{FigureFileIndex}]{FBRpdfs.pdf}}%
}
%NOTE: NOTHING IS DONE BELOW WITH [EXPENSIVE] ARGUMENT #3
\newcommand\storethisfigure[3]{%
\refstepcounter{reffigcounter}\label{KEY#1}%
\expandafter\def\csname figurecaption\roman{reffigcounter}\endcsname{#2}%
}
\newcommand\conditionalnewpage{}
\input{FBRfigs}
\begin{document}
This is my introductory paragraph.
\includefig[ht]{fg:eight}
\lipsum[4]
\includefig[ht]{fg:eighteen}
In figure~\ref{fg:eighteen}, we see the following. But in
figure~\ref{fg:eight}...
\end{document}
Doing a pdflatex on FBR.tex (since the page= qualifier is only understood by pdflatex) produces the following:
There is only one strange thing that I note. I believe it arises from the refcount package and maybe someone knows the issue already. It is this: if a compile fails (due to typographical error, etc.) then all recompiles fail, even when the error is corrected and the .aux file is deleted. The only way, then, to make it work is to comment out all calls to \includefig (which is where the refcount stuff is accessed), recompile, then uncomment the \includefig calls, and recompile again. That is a major nuisance, but it is out of my control. I'm guessing refcount keeps some .aux-like file (not in my working directory) that needs to be reset, somehow. I just don't know.
-
I am willing to delete this solution, in light of Ulrike's (cough) "more streamlined" approach. unless @user81566 wishes to leave it as an exercise in excess. – Steven B. Segletes Jun 26 '13 at 18:13
|
2015-09-04 18:54:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8965580463409424, "perplexity": 2440.8872066521717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645359523.89/warc/CC-MAIN-20150827031559-00002-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://www.onemathematicalcat.org/Math/Geometry_obj/two_column_proof.htm
|
INTRODUCTION TO THE TWO-COLUMN PROOF
Deductive reasoning uses logic, and statements that are already accepted to be true, to reach conclusions.
The methods of mathematical proof are based on deductive reasoning.
A proof is a convincing demonstration that a mathematical statement is necessarily true.
Proofs can use:
• given information (information that is assumed to be true)
• definitions (Definitions are true, by definition!)
• postulates (statements that are assumed to be true, without proof)
• logical equivalences and tautologies (a truth table shows that these are always true)
• statements that have already been proved
In higher-level mathematics, proofs are usually written in paragraph form.
When introducing proofs, however, a two-column format is usually used to summarize the information.
True statements are written in the first column.
A reason that justifies why each statement is true in written in the second column.
This section gives you practice with two-column proofs.
You will be proving very simple algebraic statements—the goal is to practice with structure and style, and not be distracted by difficult content.
You will also practice with the methods of direct proof, indirect proof, and proof by contraposition.
Here are your first two-column proofs:
PROVE:
If $\,2x + 1 = 7\,$, then $\,x = 3\,$.
Use a direct proof.
PROOF:
STATEMENTS REASONS 1. Assume: $\,2x + 1 = 7\,$ hypothesis of direct proof 2. $2x = 6$ Addition Property of Equality; subtract $\,1\,$ from both sides 3. $x = 3$ Multiplication Property of Equality; divide both sides by $\,2$
PROVE:
If $\,2x + 1 = 7\,$, then $\,x = 3\,$.
Use an indirect proof.
In this case, an indirect proof is much longer than a direct proof.
Whenever you give a reason that uses anything except the immediately preceding step, then cite the step(s) that are being used.
PROOF:
STATEMENTS REASONS 1. Assume: $\,2x + 1 = 7\,$ AND $\,x\ne 3\,$ hypothesis of indirect proof 2. $2x + 1 = 7$ $(A\text{ and }B)\Rightarrow A$ 3. $2x = 6$ Addition Property of Equality; subtract $\,1\,$ from both sides 4. $x = 3$ Multiplication Property of Equality; divide both sides by $\,2$ 5. $x \ne 3$ $(A\text{ and }B)\Rightarrow B\,$ (step 1) 6. $x = 3\,$ and $\,x\ne 3\,$; CONTRADICTION (steps 4 and 5) 7. Thus, $\,x = 3\,$. conclusion of indirect proof
PROVE:
If $\,2x + 1 = 7\,$, then $\,x = 3\,$.
Use a proof by contraposition.
In this case, the proof seems somewhat convoluted.
For this statement, a direct proof is best.
PROOF:
STATEMENTS REASONS 1. Assume: $\,x\ne 3\,$ hypothesis of proof by contraposition 2. $2x \ne 6$ Multiplication Property of Equality; multiply both sides by $\,2$ 3. $2x + 1 \ne 7$ Addition Property of Equality; add $\,1\,$ to both sides
Master the ideas from this section
by practicing the exercise at the bottom of this page.
When you're done practicing, move on to:
the Pythagorean Theorem
On this exercise, you will not key in your answer.
However, you can check to see if your answer is correct.
(MAX is 14; there are 14 different problem types.)
|
2014-08-29 09:57:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5928900241851807, "perplexity": 1520.4687273817838}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832052.6/warc/CC-MAIN-20140820021352-00250-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://byjus.com/question-answer/a-small-amount-of-solution-containing-na-24-radionuclide-with-activity-a-2-0-10/
|
Question
# A small amount of solution containing $$Na^{24}$$ radionuclide with activity $$A= 2.0.10^3$$ disintegration per second was injected into the bloodstream of a man. The activity of 1 $$cm^3$$ of the blood sample taken t= 5.0 hours later turned out to be A'= 16 disintegration per minute per $$cm^3$$ . The half-life of the radionuclide is T= 15 hours. Find the volume of the man's blood.
Solution
## Let $$V$$ = volume of blood in the body of the human being. Then the total activity of the blood is $$A'V$$. Assuming all this activity is due to the injected $$Na^{24}$$ and taking account of the decay of this radionuclide, we get $$V A' = A\ e^{-\lambda\ t}$$ Now $$\lambda = \dfrac{ln\ 2}{15}$$ per hour , $$t = 15$$ hourThus $$V = \dfrac{A}{A'} e^{-ln\ 2/3} = \dfrac{2.0 \times 10^{3}}{(16 / 60)} e^{-ln\ 2/3} cc = 5.95$$ litre PhysicsNCERTStandard XII
Suggest Corrections
0
Similar questions
View More
Same exercise questions
View More
People also searched for
View More
|
2022-01-27 17:13:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8089243769645691, "perplexity": 3301.410503136798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305277.88/warc/CC-MAIN-20220127163150-20220127193150-00651.warc.gz"}
|
http://unapologetic.wordpress.com/2010/08/18/
|
# The Unapologetic Mathematician
## Stone’s Representation Theorem I
Today we start in on the representation theorem proved by Marshall Stone: every boolean ring $\mathcal{B}$ is isomorphic (as a ring) to a ring of subsets of some set $X$. That is, no matter what $B$ looks like, we can find some space $X$ and a ring $\mathcal{S}$ of subsets of $X$ so that $\mathcal{B}\cong\mathcal{S}$ as rings.
We start by defining the “Stone space” $S(\mathcal{B})$ of a Boolean ring $\mathcal{B}$. This is a representable functor, and the representing object is the two-point Boolean ring $\mathcal{B}_0$. That is, $S(\mathcal{B})=\hom_\mathbf{Rng}(\mathcal{B},\mathcal{B}_0)$, the set of (boolean) ring homomorphisms from $\mathcal{B}$ to $\mathcal{B}_0$. To be clear, $\mathcal{B}_0$ consists of the two points $\{0,1\}$, with the operations $\Delta$ for addition and $\cap$ for multiplication, and the obvious definitions of these operations. This is a contravariant functor — if we have a homomorphism of Boolean rings $f:\mathcal{B}\to\hat{\mathcal{B}}$ we get a function between the Stone spaces $S(f):S(\hat{\mathcal{B}})\to S(\mathcal{B})$, which takes a function $\lambda:\hat{\mathcal{B}}\to\mathcal{B}_0$ to the function $\lambda\circ f:\mathcal{B}\to\mathcal{B}_0$.
The Stone space isn’t just a set, though; it’s a topological space. We define the topology by giving a base of open sets. That is, we’ll give a collection of sets — closed under intersections — which we declare to be open, and we define the collection of all open sets to be given by unions of these sets. For each element $b\in\mathcal{B}$, we define a basic set like so:
$\displaystyle s(b)=\left\{\lambda\in S(\mathcal{B})\vert\lambda(b)=1\right\}$
To see that this collection of sets is closed under intersection, consider two such sets $s(b)$ and $s(b')$. I say that the intersection of these sets is the set $s(b\cap b')$. Indeed, if $\lambda\in s(b)$ and $\lambda\in s(b')$, then
$\displaystyle\lambda(b\cap b')=\lambda(b)\cap\lambda(b')=1\cap1=1$
Conversely, if $\lambda\in s(b\cap b')$, then $b\cap b'\subseteq b$. Thus
$\displaystyle1=\lambda(b\cap b')\subseteq\lambda(b)$
and so $\lambda(b)=1$, and $\lambda\in s(b)$. Similarly, $\lambda\in s(b')$. Thus we see that $s(b)\cap s(b')=s(b\cap b')$.
In fact, this map from $\mathcal{B}$ to the basic sets is exactly the mapping we’re looking for! We’ve already seen that our base is closed under intersection, and that the map $s$ preserves intersections. I say that we also have $s(b\Delta b')=s(b)\Delta s(b')$. If $\lambda\in s(b)$ but $\lambda\notin s(b')$, then $\lambda(b)=1$ and $\lambda(b')=0$. Then
$\displaystyle\lambda(b\Delta b')=\lambda(b)\Delta\lambda(b')=1\Delta0=1$
and similarly if $\lambda\in s(b')$ but $\lambda\notin s(b)$. Thus $s(b)\Delta s(b')\subseteq s(b\Delta b')$. Conversely, if $\lambda\in s(b\Delta b')$, then
$\displaystyle\lambda(b)\Delta\lambda(b')=\lambda(b\Delta b')=1$
and so either $\lambda(b)=1$ and $\lambda(b')=0$ or vice versa. Thus $s(b\Delta b')=s(b)\Delta s(b')$.
So we know that $s$ is a homomorphism of (boolean) rings. However, we don’t know yet that it’s an isomorphism. Indeed, it’s possible that $s$ has a nontrivial kernel — $s(b)$ could be $\emptyset\subseteq S(\mathcal{B})$ for some $b$. We must show that given any $b$ there is some $\lambda:\mathcal{B}\to\mathcal{B}_0$ so that $\lambda(b)=1$.
For a finite boolean ring $\mathcal{B}$ this is easy: we pick some minimal element $b'\subseteq b$ and define $\lambda(x)=1$ if and only if $b'\subseteq x$. Such a $b'$ exists because there’s at least one element below $b$$b$ itself is one — and there can only be finitely many so we can just take their intersection. Clearly $\lambda(b)=1$ by definition, and it’s straightforward to verify that $\lambda$ is a homomorphism of boolean rings using the fact that $b'$ is an atom of $\mathcal{B}$.
For an infinite boolean ring, things are trickier. We define the set $X^*$ of all functions $\mathcal{B}\to\mathcal{B}_0$, not just the ring homomorphisms. This is the product of one copy of $\mathcal{B}_0$ for every element of $\mathcal{B}$. Since each copy of $\mathcal{B}_0$ is a compact Hausdorff space, Tychonoff’s theorem tells us that $X^*$ is a compact Hausdorff space. If $\tilde{\mathcal{B}}$ is any finite subring of $\mathcal{B}$ containing $b$, let $X^*(\tilde{\mathcal{B}})$ be the collection of those functions $\lambda^*\in X^*$ which are homomorphisms when restricted to $\tilde{\mathcal{B}}$ and for which $\lambda^*(b)=1$.
I say that the class of sets of the form $X^*(\tilde{\mathcal{B}})$ has the finite intersection property. That is, if we have some finite collection of finite subrings $\tilde{\mathcal{B}}_1,\dots,\tilde{\mathcal{B}}_n$ and the finite subring $\tilde{\mathcal{B}}$ they generate, then we have the relation
$\displaystyle X^*(\tilde{\mathcal{B}})\subseteq\bigcap\limits_{i=1}^nX^*(\tilde{\mathcal{B}}_i)$
Indeed, $b$ is clearly contained in the generated ring $\tilde{\mathcal{B}}$. Further, if $\lambda^*$ is a homomorphism on $\tilde{\mathcal{B}}$ then it’s a homomorphism on each subring $\tilde{\mathcal{B}}_i$.
Okay, so since $\tilde{\mathcal{B}}$ is a finite boolean ring, the proof given above for the finite case shows that $X^*(\tilde{\mathcal{B}})$ is nonempty. Thus the intersection of any finite collection of sets $\{X^*(\tilde{\mathcal{B}}_i)\}$ is nonempty. And thus, since $X^*$ is compact, the intersection of all of the $\{X^*(\tilde{\mathcal{B}})\}$ is nonempty.
That is, there is some function $\lambda^*:\mathcal{B}\to\mathcal{B}_0$ which is a homomorphism of boolean rings on any finite boolean subring containing $b$, and with $\lambda^*(b)=1$. Given any other two points $b_1$ and $b_2$ there is some finite boolean subring containing $b$, $b_1$, and $b_2$, and so we must have $\lambda^*(b_1\cap b_2)=\lambda^*(b_1)\cap\lambda^*(b_2)$ and $\lambda^*(b_1\Delta b_2)=\lambda^*(b_1)\Delta\lambda^*(b_2)$ within this subring, and thus within the whole ring. Thus $\lambda^*$ is a homomorphism of boolean rings sending $b$ to $1$, which shows that $s(b)\neq\emptyset$.
Therefore, the map $s$ is a homomorphism sending the boolean ring $\mathcal{B}$ isomorphically onto the identified base of the Stone space $S(\mathcal{B})$.
August 18, 2010 Posted by | Analysis, Measure Theory | 6 Comments
|
2013-12-13 06:38:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 115, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888182282447815, "perplexity": 94.29947319749822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164903523/warc/CC-MAIN-20131204134823-00001-ip-10-33-133-15.ec2.internal.warc.gz"}
|
http://physicshelpforum.com/advanced-thermodynamics/13727-diffusion-equation.html
|
Physics Help Forum Diffusion equation
Sep 24th 2017, 06:10 AM #1
Junior Member
Join Date: Sep 2017
Posts: 5
Diffusion equation
Hello!
I'm a physics student and i have a problem i can't solve. I was wondering if there is anyone here who would be willing to help me solve this?
House: a room (see attachment) has perfectly isolated walls, except the two windows where a convective heat exchange takes place (with the same transfer coefficient).
Outside temperature in front of a sun-faced wall-sized panoramic window is T1, while at the back it is T2. Calculate the stationary temperature field inside the room. You can also play by adding an additional energy flux through the front
window due to a sunlight at an angle φ.
Thank you so much for your help!
Attached Files
room.pdf (85.0 KB, 1 views)
Sep 24th 2017, 07:35 AM #2 Senior Member Join Date: Aug 2010 Posts: 320 This has nothing at all to do with the "diffusion equation" If the temperature at one side (take it to be x= 0) is T1 and the temperature at the other side (take it to be x= d, the thickness of the glass) is T2 then the stationary temperature field inside the window is linear: T(x)= T1+ x(T2- T1)/d. When x=0, that is T(0)= T1+ 0(T2- T1)d= T1. When x= d that is T(d)= T1+ d(T2- T1)/d= T1+ T2- T1= T2. (Well, it does have a little to do with the "diffusion equation". The diffusion equation is $\displaystyle \frac{\partial^2 T}{\partial x^2}= \kappa\frac{\partial t}$. If the temperature field is "stationary" then its derivative with respect to t is 0 so the second derivative with respect to x must be 0 from which we conclude that the temperature is a linear function of x.)
Sep 25th 2017, 05:56 AM #3 Junior Member Join Date: Sep 2017 Posts: 5 Thank you for your answer. But isn't this the diffusion equation dT/dt=D (d^2 T)/dx^2 and if we are calculating stationary temperature the equation is actually Lapace ΔT=0 . I think it is not the same in the x and y direction because the windows aren't the same size and the sun is only shining in the room on one side. So T=X(x)Y(y), and we get 1/X * (d^2 X)/dx^2=-1/Y * (d^2 Y)/dy^2 =C But i am not sure what the boundary conditions should be?
Tags diffusion, equation, temperature
Thread Tools Display Modes Linear Mode
Similar Physics Forum Discussions Thread Thread Starter Forum Replies Last Post DzoptiC General Physics 0 Apr 22nd 2017 04:06 PM sector6 General Physics 0 Apr 5th 2016 02:02 AM MattBann Atomic and Solid State Physics 0 Jan 5th 2015 08:32 AM Steph Advanced Mechanics 3 Feb 26th 2013 09:21 AM petdem Thermodynamics and Fluid Mechanics 8 Feb 15th 2010 05:51 AM
|
2018-05-21 15:12:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.756635308265686, "perplexity": 1893.6669784918104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864405.39/warc/CC-MAIN-20180521142238-20180521162238-00499.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Real_Number/Axioms
|
# Definition:Real Number/Axioms
The properties of the field of real numbers $\struct {R, +, \times, \le}$ are as follows:
$(\R A0)$ $:$ Closure under addition $\displaystyle \forall x, y \in \R:$ $\displaystyle x + y \in \R$ $(\R A1)$ $:$ Associativity of addition $\displaystyle \forall x, y, z \in \R:$ $\displaystyle \paren {x + y} + z = x + \paren {y + z}$ $(\R A2)$ $:$ Commutativity of addition $\displaystyle \forall x, y \in \R:$ $\displaystyle x + y = y + x$ $(\R A3)$ $:$ Identity element for addition $\displaystyle \exists 0 \in \R: \forall x \in \R:$ $\displaystyle x + 0 = x = 0 + x$ $(\R A4)$ $:$ Inverse elements for addition $\displaystyle \forall x: \exists \paren {-x} \in \R:$ $\displaystyle x + \paren {-x} = 0 = \paren {-x} + x$ $(\R M0)$ $:$ Closure under multiplication $\displaystyle \forall x, y \in \R:$ $\displaystyle x \times y \in \R$ $(\R M1)$ $:$ Associativity of multiplication $\displaystyle \forall x, y, z \in \R:$ $\displaystyle \paren {x \times y} \times z = x \times \paren {y \times z}$ $(\R M2)$ $:$ Commutativity of multiplication $\displaystyle \forall x, y \in \R:$ $\displaystyle x \times y = y \times x$ $(\R M3)$ $:$ Identity element for multiplication $\displaystyle \exists 1 \in \R, 1 \ne 0: \forall x \in \R:$ $\displaystyle x \times 1 = x = 1 \times x$ $(\R M4)$ $:$ Inverse elements for multiplication $\displaystyle \forall x \in \R_{\ne 0}: \exists \frac 1 x \in \R_{\ne 0}:$ $\displaystyle x \times \frac 1 x = 1 = \frac 1 x \times x$ $(\R D)$ $:$ Multiplication is distributive over addition $\displaystyle \forall x, y, z \in \R:$ $\displaystyle x \times \paren {y + z} = \paren {x \times y} + \paren {x \times z}$ $(\R O1)$ $:$ Usual ordering is compatible with addition $\displaystyle \forall x, y, z \in \R:$ $\displaystyle x > y \implies x + z > y + z$ $(\R O2)$ $:$ Usual ordering is compatible with multiplication $\displaystyle \forall x, y, z \in \R:$ $\displaystyle x > y, z > 0 \implies x \times z > y \times z$ $(\R O3)$ $:$ $\struct {R, +, \times, \le}$ is Dedekind complete
|
2019-08-17 15:26:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925171732902527, "perplexity": 171.78219839720617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313428.28/warc/CC-MAIN-20190817143039-20190817165039-00517.warc.gz"}
|
https://www.math.ias.edu/seminars/abstract?event=135837
|
# Arnold diffusion and Mather theory
Emerging Topics Working Group Topic: Arnold diffusion and Mather theory Speaker: Ke Zhang Affiliation: University of Toronto Date: Wednesday, April 11 Time/Room: 2:00pm - 3:00pm/Simonyi Hall 101 Video Link: https://video.ias.edu/emergingtopics/2018/0411-KeZhang
Abstract: Arnold diffusion studies the problem of topological instability in nearly integrable Hamiltonian systems. An important contribution was made my John Mather, who announced a result in two and a half degrees of freedom and developed deep theory for its proof. We describe a recent effort to better conceptualize the proof for Arnold diffusion. Combining Mather's theory and classical hyperbolic methods, we define special cohomology classes called Aubry-Mather type, where each such cohomology is connected to a nearby one for a "residue perturbation" of the Hamiltonian. The question of Arnold diffusion then reduces to the question of finding large connected components of such cohomologies. This is a joint work with Vadim Kaloshin.
|
2018-07-22 04:42:56
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8310806155204773, "perplexity": 2201.3123857388746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593010.88/warc/CC-MAIN-20180722041752-20180722061752-00039.warc.gz"}
|
https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.smooth.data_runningmedian.html
|
# naginterfaces.library.smooth.data_runningmedian¶
naginterfaces.library.smooth.data_runningmedian(itype, y)[source]
data_runningmedian computes a smoothed data sequence using running median smoothers.
For full information please refer to the NAG Library document for g10ca
https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/g10/g10caf.html
Parameters
itypeint
Specifies the method to be used.
If , 4253H,twice is used.
yfloat, array-like, shape
The sample observations.
Returns
smoothfloat, ndarray, shape
Contains the smooth.
roughfloat, ndarray, shape
Contains the rough.
Raises
NagValueError
(errno )
On entry, .
Constraint: or .
(errno )
On entry, .
Constraint: .
Notes
Given a sequence of observations recorded at equally spaced intervals, data_runningmedian fits a smooth curve through the data using one of two smoothers. The two smoothers are based on the use of running medians and averages to summarise overlapping segments. The fit and the residuals are called the smooth and the rough respectively. They obey the following:
The two smoothers are:
1. 4253H,twice consisting of a running median of , then , then , then followed by hanning. Hanning is a running weighted average, the weights being , and . The result of this smoothing is then reroughed by computing residuals, applying the same smoother to them and adding the result to the smooth of the first pass.
2. 3RSSH,twice consisting of a running median of , two splitting operations named S to improve the smooth sequence, each of which is followed by a running median of , and finally hanning. The end points are dealt with using the method described by Velleman and Hoaglin (1981). The full smoother 3RSSH,twice is produced by reroughing as described above.
The compound smoother 4253H,twice is recommended. The smoother 3RSSH,twice is popular when calculating by hand as it requires simpler computations and is included for comparison purposes.
References
Tukey, J W, 1977, Exploratory Data Analysis, Addison–Wesley
Velleman, P F and Hoaglin, D C, 1981, Applications, Basics, and Computing of Exploratory Data Analysis, Duxbury Press, Boston, MA
|
2022-07-02 17:56:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8633947372436523, "perplexity": 3718.1530533180876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104189587.61/warc/CC-MAIN-20220702162147-20220702192147-00538.warc.gz"}
|
https://en.academic.ru/dic.nsf/enwiki/145001
|
# Stress (physics)
Stress (physics)
Stress is a measure of the average amount of force exerted per unit area. It is a measure of the intensity of the total internal forces acting within a body across imaginary internal surfaces, as a reaction to external applied forces and body forces. It was introduced into the theory of elasticity by Cauchy around 1822. Stress is a concept that is based on the concept of continuum. In general, stress is expressed as
:$sigma = frac\left\{F\right\}\left\{A\right\} ,$
where :$sigma$ is the average stress, also called engineering or nominal stress, and :$F$ is the force acting over the area $A$.
The SI unit for stress is the pascal (symbol Pa), which is a shorthand name for one newton (Force) per square metre (Unit Area). The unit for stress is the same as that of pressure, which is also a measure of Force per unit area. Engineering quantities are usually measured in megapascals (MPa) or gigapascals (GPa). In Imperial units, stress is expressed in pounds-force per square inch (psi) or kilopounds-force per square inch (ksi).
As with force, stress cannot be measured directly but is usually inferred from measurements of strain and knowledge of elastic properties of the material. Devices capable of measuring stress indirectly in this way are strain gauges and piezoresistors.
tress as a tensor
In its full form, linear stress is a rank-two tensor quantity, and may be represented as a 3x3 matrix. A tensor may be seen as a linear vector operator - it takes a given vector and produces another vector as a result. In the case of the stress tensor $sigma_\left\{ij\right\}$, it takes the vector normal to any area element and yields the force (or "traction") acting on that area element. In matrix notation:
:$F_i=sum_\left\{j=1\right\}^3 sigma_\left\{ij\right\} A_j$
where $A_j$ are the components of the vector normal to a surface area element with a length equal to the area of the surface element, and $F_i$ are the components of the force vector (or traction vector) acting on that element. Using index notation, we can eliminate the summation sign, since all sums will be the same over repeated indices. Thus:
:$F_i=sigma_\left\{ij\right\} A_j ,$
Just as it is the case with a vector (which is actually a rank-one tensor), the matrix components of a tensor depend upon the particular coordinate system chosen. As with a vector, there are certain invariants associated with the stress tensor, whose value does not depend upon the coordinate system chosen (or the area element upon which the stress tensor operates). For a vector, there is only one invariant - the length. For a tensor, there are three - the eigenvalues of the stress tensor, which are called the principal stresses. It is important to note that the only physically significant parameters of the stress tensor are its invariants, since they are not dependent upon the choice of the coordinate system used to describe the tensor.
If we choose a particular surface area element, we may divide the force vector by the area (stress vector) and decompose it into two parts: a normal component acting normal to the stressed surface, and a shear component, acting parallel to the stressed surface. An axial stress is a normal stress produced when a force acts parallel to the major axis of a body, e.g. column. If the forces pull the body producing an elongation, the axial stress is termed tensile stress. If on the other hand the forces push the body reducing its length, the axial stress is termed compressive stress. Bending stresses, e.g. produced on a bent beam, are a combination of tensile and compressive stresses. Torsional stresses, e.g. produced on twisted shafts, are shearing stresses.
In the above description, little distinction is drawn between the "stress" and the "stress vector" since the body which is being stressed provides a particular coordinate system in which to discuss the effects of the stress. The distinction between "normal" and "shear" stresses is slightly different when considered independently of any coordinate system. The stress tensor yields a stress vector for a surface area element at any orientation, and this stress vector may be decomposed into normal and shear components. The normal part of the stress vector averaged over all orientations of the surface element yields an invariant value, and is known as the hydrostatic pressure. Mathematically it is equal to the average value of the principal stresses (or, equivalently, the trace of the stress tensor divided by three). The normal stress tensor is then the product of the hydrostatic pressure and the unit tensor. Subtracting the normal stress tensor from the stress tensor gives what may be called the shear tensor. These two quantities are true tensors with physical significance, and their nature is independent of any coordinate system chosen to describe them. In fact, the extended Hooke's law is basically the statement that each of these two tensors is proportional to its strain tensor counterpart, and the two constants of proportionality (elastic moduli) are independent of each other. Note that In rheology, the normal stress tensor is called extensional stress, and in acoustics is called longitudinal stress.
Solids, liquids and gases have stress fields. Static fluids support normal stress but will flow under shear stress. Moving viscous fluids can support shear stress (dynamic pressure). Solids can support both shear and normal stress, with ductile materials failing under shear and brittle materials failing under normal stress. All materials have temperature dependent variations in stress related properties, and non-newtonian materials have rate-dependent variations.
Cauchy's stress principle
Cauchy's stress principle asserts that when a continuum body is acted on by forces, i.e. surface forces and body forces, there are internal reactions (forces) throughout the body acting between the material points. Based on this principle, Cauchy demonstrated that the state of stress at a point in a body is completely defined by the nine components $sigma_\left\{ij\right\}$ of a second-order Cartesian tensor called the Cauchy stress tensor, given by
:
where:$mathbf\left\{T\right\}^\left\{\left(mathbf\left\{e\right\}_1\right)\right\}$, $mathbf\left\{T\right\}^\left\{\left(mathbf\left\{e\right\}_2\right)\right\}$, and $mathbf\left\{T\right\}^\left\{\left(mathbf\left\{e\right\}_3\right)\right\}$ are the stress vectors associated with the planes perpendicular to the coordinate axis,:$sigma_\left\{11\right\}$, $sigma_\left\{22\right\}$, and $sigma_\left\{33\right\}$ are normal stresses, and :$sigma_\left\{12\right\}$, $sigma_\left\{13\right\}$, $sigma_\left\{21\right\}$, $sigma_\left\{23\right\}$, $sigma_\left\{31\right\}$, and $sigma_\left\{32\right\}$ are shear stresses.
The Voigt notation representation of the Cauchy stress tensor takes advantage of the symmetry of the stress tensor to express the stress as a 6-dimensional vector of the form:The Voigt notation is used extensively in representing stress-strain relations in solid mechanics and for computational efficiency in numerical structural mechanics software.
:
At the same time, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, i.e.
:$sigma_\left\{ij\right\}=sigma_\left\{ji\right\}$
:
However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, $K_\left\{n\right\} ightarrow 1$, e.g. Non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers.
Principal stresses and stress invariants
The components $sigma_\left\{ij\right\}$ of the stress tensor depend on the orientation of the coordinate system at the point under consideration. However, the stress tensor itself is a physical quantity and as such, it is independent of the coordinate system chosen to represent it. There are certain invariants associated with every tensor which are also independent of the coordinate system. For example, a vector is a simple tensor of rank one. In three dimensions, it has three components. The value of these components will depend on the coordinate system chosen to represent the vector, but the length of the vector is a physical quantity (a scalar) and is independent of the coordinate system chosen to represent the vector. Similarly, every second rank tensor (such as the stress and the strain tensors) has three independent invariant quantities associated with it. One set of such invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors. When the coordinate system is chosen to coincide with the eigenvectors of the stress tensor, the stress tensor is represented by a diagonal matrix:
:
where $sigma_1$, $sigma_2$, and $sigma_3$, are the principal stresses. These principal stresses may be combined to form three other commonly used invariants, $I_1$, $I_2$, and $I_3$ , which are the first, second and third stress invariants, respectively. The first and third invariant are the trace and determinant respectively, of the stress tensor. Thus, we have
:
Because of its simplicity, working and thinking in the principal coordinate system is often very useful when considering the state of the elastic medium at a particular point.
:
tress deviator tensor
The stress tensor $sigma_\left\{ij\right\}$ can be expressed as the sum of two other stress tensors:
# a mean hydrostatic stress tensor or volumetric stress tensor or mean normal stress tensor, $pdelta_\left\{ij\right\}$, which tends to change the volume of the stressed body; and
# a deviatoric component called the stress deviator tensor, $s_\left\{ij\right\}$, which tends to distort it.
:$sigma_\left\{ij\right\}= s_\left\{ij\right\} + pdelta_\left\{ij\right\}$
where $p$ is the mean stress given by
:$p=frac\left\{sigma_\left\{kk\left\{3\right\}=frac\left\{sigma_\left\{11\right\}+sigma_\left\{22\right\}+sigma_\left\{33\left\{3\right\}= frac\left\{1\right\}\left\{3\right\}I_1$
The deviatoric stress tensor can be obtained by subtracting the hydrostatic stress tensor from the stress tensor:
:
Invariants of the stress deviator tensor
As it is a second order tensor, the stress deviator tensor also has a set of invariants, which can be obtained using the same procedure used to calculate the invariants of the stress tensor. It can be shown that the principal directions of the stress deviator tensor $s_\left\{ij\right\}$ are the same as the principal directions of the stress tensor $sigma_\left\{ij\right\}$. Thus, the characteristic equation is
:$left| s_\left\{ij\right\}- lambdadelta_\left\{ij\right\} ight| = lambda^3-J_1lambda^2-J_2lambda-J_3=0$
where $J_1$, $J_2$ and $J_3$ are the first, second, and third deviatoric stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen. These deviatoric stress invariants can be expressed as a function of the components of $s_\left\{ij\right\}$ or its principal values $s_1$, $s_2$, and $s_3$, or alternatively, as a function of $sigma_\left\{ij\right\}$ or its principal values $sigma_1 ,$, $sigma_2 ,$, and $sigma_3 ,$ . Thus,::Because $s_\left\{kk\right\}=0 ,$, the stress deviator tensor is in a state of pure shear.
A quantity called the equivalent stress or von Mises stress is commonly used in solid mechanics. The equivalent stress is defined as:$sigma_e = sqrt\left\{3~J_2\right\} = sqrt\left\{ frac\left\{1\right\}\left\{2\right\}~left \left[\left(sigma_1-sigma_2\right)^2 + \left(sigma_2-sigma_3\right)^2 + \left(sigma_3-sigma_1\right)^2 ight\right] \right\}$
Octahedral stresses
Considering the principal directions as the coordinate axes, a plane which normal vector makes equal angles with each of the principal axes, i.e. having direction cosines equal to $|1/sqrt\left\{3\right\}|$, is called an octahedral plane. There are a total of eight octahedral planes (Figure 6). The normal and shear components of the stress tensor on these planes are called octahedral normal stress $sigma_\left\{oct\right\}$ and octahedral shear stress $au_\left\{oct\right\}$, respectively.
Knowing that the stress tensor of point O (Figure 6) in the principal axes is
:
the stress vector on an octahedral plane is then given by:
:
The normal component of the stress vector at point O associated with the octahedral plane is
:
which is the mean normal stress or hydrostatic stress. This value is the same in all eight octahedral planes.The shear stress on the octahedral plane is then
:
Analysis of stress
All real objects occupy a three-dimensional space. However, depending on the loading condition and viewpoint of the observer the same physical object can alternatively be assumed as one-dimensional or two-dimensional, thus simplifying the mathematical modelling of the object.
Uniaxial stress
If two of the dimensions of the object are very large or very small compared to the others, the object may be modelled as one-dimensional. In this case the stress tensor has only one component and is indistinguishable from a scalar. One-dimensional objects include a piece of wire loaded at the ends and viewed from the side, and a metal sheet loaded on the face and viewed up close and through the cross section.
When a structural element is elongated or compressed, its cross-sectional area changes by an amount that depends on the Poisson's ratio of the material. In engineering applications, structural members experience small deformations and the reduction in cross-sectional area is very small and can be neglected, i.e., the cross-sectional area is assumed constant during deformation. For this case, the stress is called engineering stress or nominal stress. In some other cases, e.g., elastomers and plastic materials, the change in cross-sectional area is significant, and the stress must be calculated assuming the current cross-sectional area instead of the initial cross-sectional area. This is termed true stress and is expressed as
:$sigma_mathrm\left\{true\right\} = \left(1 + varepsilon_e\right)\left(sigma_e\right) ,$,
where :$varepsilon_e$ is the nominal (engineering) strain, and :$sigma_e$ is nominal (engineering) stress.
The relationship between true strain and engineering strain is given by
:$varepsilon_mathrm\left\{true\right\} = ln\left(1 + varepsilon_e\right) ,$.
In uniaxial tension, true stress is then greater than nominal stress. The converse holds in compression.
Plane stress
A state of plane stress exist when one of the principal stresses is zero, stresses with respect to the thin surface are zero. This usually occurs in structural elements where one dimension is very small compared to the other two, i.e. the element is flat or thin, and the stresses are negligible with respect to the smaller dimension as they are not able to develop within the material and are small compared to the in-plane stresses. Therefore, the face of the element is not acted by loads and the structural element can be analyzed as two-dimensional, e.g. thin-walled structures such as plates subject to in-plane loading or thin cylinders subject to pressure loading.The stress tensor can then be approximated by:
: .
The corresponding strain tensor is:
:
in which the non-zero $varepsilon_\left\{33\right\}$ term arises from the Poisson's effect. This strain term can be temporarily removed from the stress analysis to leave only the in-plane terms, effectively reducing the analysis to two dimensions.
Plane strain
If one dimension is very large compared to the others, the principal strain in the direction of the longest dimension is constrained and can be assumed as zero, yielding a plane strain condition. In this case, though all principal stresses are non-zero, the principal stress in the direction of the longest dimension can be disregarded for calculations. Thus, allowing a two dimensional analysis of stresses, e.g. a dam analyzed at a cross section loaded by the reservoir.
Mohr's circle for stresses
Mohr's circle is a graphical representation of any 2-D stress state and was named for Christian Otto Mohr. Mohr's circle may also be applied to three-dimensional stress. In this case, the diagram has three circles, two within a third.
Mohr's circle is used to find the principal stresses, maximum shear stresses, and principal planes. For example, if the material is brittle, the engineer might use Mohr's circle to find the maximum component of normal stress (tension or compression); and for ductile materials, the engineer might look for the maximum shear stress.
Alternative measures of stress
The Cauchy stress is not the only measure of stress that is used in practice. Other measures of stress include the first and second Piola-Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor.
Piola-Kirchhoff stress tensor
In the case of finite deformations, the Piola-Kirchhoff stress tensors are used to express the stress relative to the reference configuration. This is in contrast to the Cauchy stress tensor which expresses the stress relative to the present configuration. For infinitesimal deformations or rotations, the Cauchy and Piola-Kirchoff tensors are identical. These tensors take their names from Gabrio Piola and Gustav Kirchhoff.
1st Piola-Kirchhoff stress tensor
Whereas the Cauchy stress tensor, $sigma_\left\{ij\right\}$, relates forces in the present configuration to areas in the present configuration, the 1st Piola-Kirchhoff stress tensor, $K_\left\{Lj\right\}$ relates forces in the "present" configuration with areas in the "reference" ("material") configuration. $K_\left\{Lj\right\}$ is given by
:$K_\left\{Lj\right\}=J X_\left\{L,i\right\} sigma_\left\{ij\right\} !$
where $J$ is the Jacobian, and $X_\left\{L,i\right\}$ is the inverse of the deformation gradient.
Because it relates different coordinate systems, the 1st Piola-Kirchhoff stress is a two-point tensor. In general, it is not symmetric. The 1st Piola-Kirchhoff stress is the 3D generalization of the 1D concept of engineering stress.
If the material rotates without a change in stress state (rigid rotation), the components of the 1st Piola-Kirchhoff stress tensor will vary with material orientation.
The 1st Piola-Kirchhoff stress is energy conjugate to the deformation gradient.
2nd Piola-Kirchhoff stress tensor
Whereas the 1st Piola-Kirchhoff stress relates forces in the current configuration to areas in the reference configuration, the 2nd Piola-Kirchhoff stress tensor $S_\left\{IJ\right\}$ relates forces in the reference configuration to areas in the reference configuration. The force in the reference configuration is obtained via a mapping that preserves the relative relationship between the force direction and the area normal in the current configuration.
:$S_\left\{IJ\right\}=J X_\left\{I,k\right\} X_\left\{J,l\right\} sigma_\left\{kl\right\} !$
This tensor is symmetric.
If the material rotates without a change in stress state (rigid rotation), the components of the 2nd Piola-Kirchhoff stress tensor will remain constant, irrespective of material orientation.
The 2nd Piola-Kirchhoff stress tensor is energy conjugate to the Green-Lagrange finite strain tensor.
ee also
* Bending
* Linear elasticity
* Residual stress
* Shot peening
* Strain
* Strain tensor
* Stress-energy tensor
* Stress-strain curve
* Stress concentration
* Von Mises stress
* Yield stress
* Yield surface
Books
* Dieter, G. E. (3 ed.). (1989). "Mechanical Metallurgy". New York: McGraw-Hill. ISBN 0-07-100406-8.
* Love, A. E. H. (4 ed.). (1944). "Treatise on the Mathematical Theory of Elasticity". New York: Dover Publications. ISBN 0-486-60174-9.
* Marsden, J. E., & Hughes, T. J. R. (1994). "Mathematical Foundations of Elasticity". New York: Dover Publications. ISBN 0-486-67865-2.
* L.D.Landau and E.M.Lifshitz. (1959). "Theory of Elasticity".
* [http://documents.wolfram.com/applications/structural/AnalysisofStress.html Stress analysis, Wolfram Research]
* [http://www.ihsesdu.com ESDU Stress Analysis Methods]
* [http://www.shodor.org/~jingersoll/weave/tutorial/node3.html True stress and true strain]
* [http://invsee.asu.edu/srinivas/stress-strain/phase.html Stress-Strain Curve for Ductile Material]
Wikimedia Foundation. 2010.
### Look at other dictionaries:
• Stress — may refer to: Mechanical * Stress (physics), the average amount of force exerted per unit area. * Yield stress, the stress at which a material begins to deform plastically. * Compressive stress, the stress applied to materials resulting in their… … Wikipedia
• Stress measures — The most commonly used measure of stress is the Cauchy stress. However, several other measures of stress can be defined. Some such stress measures that are widely used in continuum mechanics, particularly in the computational context, are [J.… … Wikipedia
• stress — n 1 Stress, strain, pressure, tension are comparable terms when they apply to the action or effect of force exerted within or upon a thing. Stress and strain are the comprehensive terms of this group and are sometimes used interchangeably {put… … New Dictionary of Synonyms
• Stress (biological) — Stress is a biological term which refers to the consequences of the failure of a human or animal body to respond appropriately to emotional or physical threats to the organism, whether actual or imagined. [ The Stress of Life , Hans Selye, 1956.] … Wikipedia
• Physics — (Greek: physis φύσις), in everyday terms, is the science of matter [R. P. Feynman, R. B. Leighton, M. Sands (1963), The Feynman Lectures on Physics , ISBN 0 201 02116 1 Hard cover. p.1 1 Feynman begins with the atomic hypothesis.] and its motion … Wikipedia
• Stress — Stress, n. [Abbrev. fr. distress; or cf. OF. estrecier to press, pinch, (assumed) LL. strictiare, fr. L. strictus. See {Distress}.] 1. Distress. [Obs.] [1913 Webster] Sad hersal of his heavy stress. Spenser. [1913 Webster] 2. Pressure, strain;… … The Collaborative International Dictionary of English
• Stress of voice — Stress Stress, n. [Abbrev. fr. distress; or cf. OF. estrecier to press, pinch, (assumed) LL. strictiare, fr. L. strictus. See {Distress}.] 1. Distress. [Obs.] [1913 Webster] Sad hersal of his heavy stress. Spenser. [1913 Webster] 2. Pressure,… … The Collaborative International Dictionary of English
• Stress of weather — Stress Stress, n. [Abbrev. fr. distress; or cf. OF. estrecier to press, pinch, (assumed) LL. strictiare, fr. L. strictus. See {Distress}.] 1. Distress. [Obs.] [1913 Webster] Sad hersal of his heavy stress. Spenser. [1913 Webster] 2. Pressure,… … The Collaborative International Dictionary of English
• Stress tensor — may refer to: Stress (mechanics), in classical physics Stress energy tensor, in relativistic theories Maxwell stress tensor, in electromagnetism See also Stress (disambiguation) Tensor (disambiguation) This disambiguation page lists artic … Wikipedia
• Stress (mechanics) — Continuum mechanics … Wikipedia
|
2020-07-05 21:19:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 69, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8672696352005005, "perplexity": 784.0915491908963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655888561.21/warc/CC-MAIN-20200705184325-20200705214325-00125.warc.gz"}
|
https://aman.ai/cs229/decision-trees/
|
## Decision Trees
• We now turn our attention to decision trees, a simple yet flexible class of We will first consider the non-linear, region-based nature of decision trees, continue on to define and contrast region-based loss functions, and close off with an investigation of some of the specific advantages and disadvantages of such methods. Once finished with their nuts and bolts, we will move on to investigating different ensembling methods through the lens of decision trees, due to their suitability for such techniques.
### Non-linearity
• Importantly, decision trees are one of the first inherently non-linear machine learning techniques we will cover, as compared to methods such as vanilla SVMs or GLMs. Formally, a method is linear if for an input $$x \in \mathbb{R}^{n}$$ (with interecept term $$x_{0}=1$$) it only produces hypothesis functions $$h$$ of the form:
$h(x)=\theta^{T} x$
• where $$\theta \in \mathbb{R}^{n}$$. Hypothesis functions that cannot be reduced to the form above are called non-linear, and if a method can produce non-linear hypothesis functions then it is also non-linear. We have already seen that kernelization of a linear method is one such method by which we can achieve non-linear hypothesis functions, via a feature mapping $$\phi(x)$$
• Decision trees, on the other hand, can directly produce non-linear hypothesis functions without the need for first coming up with an appropriate feature mapping. As a motivating (and very Canadien) example, let us say we want to build a classifier that, given a time and a location, can predict whether or not it would be possible to ski nearby. To keep things simple, the time is represented as month of the year and the location is represented as a latitude (how far North or South we are with $$-90^{\circ}, 0^{\circ}$$, and $$90^{\circ}$$ being the South Pole, Equator, and North Pole, respectively).
• A representative dataset is shown above left. There is no linear boundary that would correctly split this dataset. However, we can recognize that there are different areas of positive and negative space we wish to isolate, one such division being shown above right. We accomplish this by partitioning the input space $$\mathcal{X}$$ into disjoint subsets (or regions) $$R_{i}$$:
$\begin{array}{c} \mathcal{X}=\bigcup_{i=0}^{n} R_{i} \\ \text { s.t. } \quad R_{i} \cap R_{j}=\emptyset \text { for } i \neq j \end{array}$
• where $$n \in \mathbb{Z}^{+}$$
## Selecting Regions
• In general, selecting optimal regions is intractable. Decision trees generate an approximate solution via greedy, top-down, recursive partitioning. The method is top-down because we start with the original input space $$\mathcal{X}$$ and split it into two child regions by thresholding on a single feature. We then take one of these child regions and can partition via a new threshold. We continue the training of our model in a recursive manner, always selecting a leaf node, a feature, and a threshold to form a new split. Formally, given a parent region $$R_{p}$$, a feature index $$j$$, and a threshold $$t \in \mathbb{R}$$, we obtain two child regions $$R_{1}$$ and $$R_{2}$$ as follows:
$\begin{array}{l} R_{1}=\left\{X \mid X_{j}<t, X \in R_{p}\right\} \\ R_{2}=\left\{X \mid X_{j} \geq t, X \in R_{p}\right\} \end{array}$
• The beginning of one such process is shown below applied to the skiing dataset. In step a, we split the input space $$\mathcal{X}$$ by the location feature, with a threshold of 15, creating child regions $$R_{1}$$ and $$R_{2}$$. In step b, we then recursively select one of these child regions (in this case $$R_{2}$$) and select a feature (time) and threshold (3), generating two more child regions ($$R_{21}$$). and ($$R_{22}$$). In step c, we select any one of the remaining leaf nodes ($$R_{1}, R_{21}, R_{22}$$). We can continue in such a manner until we a meet a given stop criterion (more on this later), and then predict the majority class at each leaf node.
## Defining a Loss Function
• A natural question to ask at this point is how to choose our splits. To do so, it is first useful to define our loss $$L$$ as a set function on a region $$R$$. Given a split of a parent $$R_{p}$$ into two child regions $$R_{1}$$ and $$R_{2}$$, we can compute the loss of the parent $$L(R_{p})$$ as well as the cardinality-weighted loss of the children $$\frac{\left|R_{1}\right| L\left(R_{1}\right)+\left|R_{2}\right| L\left(R_{2}\right)}{\left|R_{1}\right|+\left|R_{2}\right|}$$. Within our greedy partitioning framework, we want to select the leaf region, feature, and threshold that will maximize our decrease in loss:
$L\left(R_{p}\right)-\frac{\left|R_{1}\right| L\left(R_{1}\right)+\left|R_{2}\right| L\left(R_{2}\right)}{\left|R_{1}\right|+\left|R_{2}\right|}$
• For a classification problem, we are interested in the misclassification loss $$L_{\text {misclass}}$$. For a region $$R$$ let $$\hat{p}_{c}$$ be the proportion of examples in $$R$$ that are of class $$c$$. Misclassification loss on $$R$$ can be written as:
$L_{\text {misclass}}(R)=1-\max _{c}\left(\hat{p}_{c}\right)$
• We can understand this as being the number of examples that would be misclassified if we predicted the majority class for region $$R$$ (which is exactly what we do). While misclassification loss is the final value we are interested in, it is not very sensitive to changes in class probabilities. As a representative example, we show a binary classification case below. We explicitly depict the parent region $$R_{p}$$ as well as the positive and negative counts in each region.
• The first split is isolating out more of the positives, but we note that:
$L\left(R_{p}\right)=\frac{\left|R_{1}\right| L\left(R_{1}\right)+\left|R_{2}\right| L\left(R_{2}\right)}{\left|R_{1}\right|+\left|R_{2}\right|}=\frac{\left|R_{1}^{\prime}\right| L\left(R_{1}^{\prime}\right)+\left|R_{2}^{\prime}\right| L\left(R_{2}^{\prime}\right)}{\left|R_{1}^{\prime}+\right| R_{2}^{\prime} \mid}=100$
• Thus, not only can we not only are the losses of the two splits identical, but neither of the splits decrease the loss over that of the parent.
• We therefore are interested in defining a more sensitive loss. While several have been proposed, we will focus here on the cross-entropy loss $$L_{\text {cross}}$$ :
$L_{c r o s s}(R)=-\sum_{c} \hat{p}_{c} \log _{2} \hat{p}_{c}$
• With $$\hat{p} \log _{2} \hat{p} \equiv 0$$ if $$\hat{p}=0$$. From an information-theoretic perspective, cross-entropy measure the number of bits needed to specify the outcome (or class) given that the distribution is known. Furthermore, the reduction in loss from parent to child is known as information gain.
• To understand the relative sensitivity of cross-entropy loss with respect to misclassification loss, let us look at plots of both loss functions for the binary classification case. For these cases, we can simplify our loss functions to depend on just the proportion of positive examples $$\hat{p}_i$$ in a region $$R_i$$:
$\begin{array}{l} L_{\text {misclass}}(R)=L_{\text {misclass}}(\hat{p})=1-\max (\hat{p}, 1-\hat{p}) \\ L_{\text {cross}}(R)=L_{\text {cross}}(\hat{p})=-\hat{p} \log \hat{p}-(1-\hat{p}) \log (1-\hat{p}) \end{array}$
• In the figure above on the left, we see the cross-entropy loss plotted over p. We take the regions $$(R_{p}, R_{1}, R_{2})$$ from the previous page’s example’s first split, and plot their losses as well. As cross-entropy loss is strictly concave, it can be seen from the plot (and easily proven) that as long as $${\hat{p}}_1 \neq \hat{p}_2$$ and both child regions are non-empty, then the weighted sum of the children losses will always be less than that of the parent.
• Misclassification loss, on the other hand, is not strictly concave, and therefore there is no guarantee that the weighted sum of the children will be less than that of the parent, as shown above right, with the same partition. Due to this added sensitivity, cross-entropy loss (or the closely related Gini loss) are used when growing decision trees for classification.
• Before fully moving away from loss functions, we briefly cover the regression setting for decision trees. For each data point $$x_{i}$$ we now instead have an associated value $$y_{i} \in \mathbb{R}$$ we wish to predict. Much of the tree growth process remains the same, with the differences being that the final prediction for a region $$R$$ is the mean of all the values:
$\hat{y}=\frac{\sum_{i \in R} y_{i}}{|R|}$
• And in this case we can directly use the squared loss to select our splits:
$L_{\text {squared}}(R)=\frac{\sum_{i \in R}\left(y_{i}-\hat{y}\right)^{2}}{|R|}$
## Other Considerations
• The popularity of decision trees can in large part be attributed to the ease by which they are explained and understood, as well as the high degree of interpretability they exhibit: we can look at the generated set of thresholds to understand why a model made specific predictions. However, that is not the full picture - we will now cover some additional salient points.
### Categorical Variables
• Another advantage of decision trees is that they can easily deal with categorical variables. As an example, our location in the skiing dataset could instead be represented as a categorical variable (one of Northern Hemisphere, Southern Hemisphere, or Equator (i.e. $$\operatorname{loc} \in{N, S, E}$$)). Rather than use a one-hot encoding or similar preprocessing step to transform the data into a quantitative feature, as would be necessary for the other algorithms we have seen, we can directly probe subset membership. The final tree in Section 2 can be re-written as:
• A caveat to the above is that we must take care to not allow a variable to have too many categories. For a set of categories $$S$$, our set of possible questions is the power set $$\mathcal{P}(S)$$, of cardinality $$2^{|S|}$$. Thus, a large number of categories makes question selectioin computationally intractable. Optimizations are possible for the binary classification, though even in this case serious consideration should be given to whether the feature can be re-formulated as a quantitative one instead as the large number of possible thresholds lend themselves to a high degree of overfitting.
### Regularization
• In Section 2 we alluded to various stopping criteria we could use to determine when to halt the growth of a tree. The simplest criteria involves “fully” growning the tree: we continue until each leaf region contains exactly one training data point. This technique however leads to a high variance and low bias model, and we therefore turn to various stopping heuristics for regularization. Some common ones include:
• Minimum Leaf Size - Do not split $$R$$ if its cardinality falls below a fixed threshold.
• Maximum Depth - Do not split $$R$$ if more than a fixed threshold of splits were already taken to reach $$R$$.
• Maximum Number of Nodes - Stop if a tree has more than a fixed threshold of leaf nodes.
• A tempting heuristic to use would be to enforce a minimum decrease in loss after splits. This is a problematic approach as the greedy, singlefeature at a time approach of decision trees could mean missing higher order interactions. If we require thresholding on multiple features to achieve a good split, we might be unable to achieve a good decrease in loss on the initial splits and therefore prematurely terminate. A better approach involves fully growing out the tree, and then pruning away nodes that minimally decrease misclassification or squared error, as measured on a validation set.
### Runtime
• We briefly turn to considering the runtime of decision trees. For ease of analysis, we will consider binary classification with $$n$$ examples, $$f$$ features, and a tree of depth $$d$$. At test time, for a data point we traverse the tree until we reach a leaf node and then output its prediction, for a runtime of $$O(d)$$. Note that if our tree is balanced than $$d=O(\log n)$$, and thus test time performance is generally quite fast.
• At training time, we note that each data point can only appear in at most $$O(d)$$ nodes. Through sorting and intelligent caching of intermediate values, we can achieve an amortized runtime of $$O(1)$$ at each node for a single data point for a single feature. Thus, overall runtime is $$O(n f d)-$$ a fairly fast runtime as the data matrix alone is of size $$n f$$.
• One important downside to consider is that decision trees can not easily capture additive structure. For example, as seen below on the left, a simple decision boundary of the form $$x_{1}+x_{2}$$ could only be approximately modeled through the use of many splits, as each split can only consider one of $$x_{1}$$ or $$x_{2}$$ at a time. A linear model on the other hand could directly derive this boundary, as shown below right.
• While there has been some work in allowing for decision boundaries that factor in many features at once, they have the downside of further increasing variance and reducing interpretability.
## Citation
If you found our work useful, please cite it as:
@article{Chadha2020DistilledDecisionTrees,
title = {Decision Trees},
journal = {Distilled Notes for Stanford CS229: Machine Learning},
year = {2020},
note = {\url{https://aman.ai}}
}
|
2022-12-06 02:57:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.656025230884552, "perplexity": 462.03842149561353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711069.79/warc/CC-MAIN-20221206024911-20221206054911-00691.warc.gz"}
|
https://www.nature.com/articles/s42005-019-0147-3?error=cookies_not_supported&code=7fa07b6c-b493-410f-8e55-b7997dec82c9
|
# End-to-end capacities of a quantum communication network
## Abstract
In quantum mechanics, a fundamental law prevents quantum communications to simultaneously achieve high rates and long distances. This limitation is well known for point-to-point protocols, where two parties are directly connected by a quantum channel, but not yet fully understood in protocols with quantum repeaters. Here we solve this problem bounding the ultimate rates for transmitting quantum information, entanglement and secret keys via quantum repeaters. We derive single-letter upper bounds for the end-to-end capacities achievable by the most general (adaptive) protocols of quantum and private communication, from a single repeater chain to an arbitrarily complex quantum network, where systems may be routed through single or multiple paths. We analytically establish these capacities under fundamental noise models, including bosonic loss which is the most important for optical communications. In this way, our results provide the ultimate benchmarks for testing the optimal performance of repeater-assisted quantum communications.
## Introduction
Today quantum technologies are being developed at a rapid pace1,2,3,4. In this scenario, quantum communications are very advanced, with the development and implementation of a number of point-to-point protocols of quantum key distribution (QKD)5, based on discrete variable (DV) systems6,7,8, such as qubits, or continuous variable (CV) systems, such as bosonic modes9,10. Recently, we have also witnessed the deployment of high-rate optical-based secure quantum networks11,12. These are advantageous not only for their multiple-user architecture but also because they may overcome the fundamental limitations that are associated with point-to-point protocols of quantum and private communication.
After a long series of studies that started back in 2009 with the introduction of the reverse coherent information of a bosonic channel13,14, ref. 15 finally showed that the maximum rate at which two remote parties can distribute quantum bits (qubits), entanglement bits (ebits), or secret bits over a lossy channel (e.g., an optical fiber) is equal to −log2(1 − η), where η is the channel’s transmissivity. This limit is the Pirandola–Laurenza–Ottaviani–Banchi (PLOB) bound15 and cannot be surpassed even by the most powerful strategies that exploit arbitrary local operations (LOs) assisted by two-way classical communication (CC), also known as adaptive LOCCs16.
To beat the PLOB bound, we need to insert a quantum repeater17 in the communication line. In information theory18,19,20,21, a repeater or relay is any middle node helping the communication between two end-parties. This definition is extended to quantum information theory, where quantum repeaters are middle nodes equipped with both classical and quantum operations, and may be arranged to compose linear chains or more general networks. In general, they do not need to have quantum memories (e.g., see ref. 22) even though these are generally required for guaranteeing an optimal performance.
In all the ideal repeater-assisted scenarios, where we can beat the PLOB bound, it is fundamental to determine the maximum rates that are achievable by two end-users, i.e., to determine their end-to-end capacities for transmitting qubits, distributing ebits, and generating secret keys. Finding these capacities not only is important to establish the boundaries of quantum network communications but also to benchmark practical implementations, so as to check how far prototypes of quantum repeaters are from the ultimate theoretical performance.
Here we address this fundamental problem. By combining methods from quantum information theory6,7,8,9,10 and classical networks18,19,20,21, we derive tight single-letter upper bounds for the end-to-end quantum and private capacities of repeater chains and, more generally, quantum networks connected by arbitrary quantum channels (these channels and the dimension of the quantum systems they transmit may generally vary across the network). More importantly, we establish exact formulas for these capacities under fundamental noise models for both DV and CV systems, including dephasing, erasure, quantum-limited amplification, and bosonic loss which is the most important for quantum optical communications. Depending on the routing in the quantum network (single- or multi-path), optimal strategies are found by solving the widest path23,24,25 or the maximum flow problem26,27,28,29 suitably extended to the quantum communication setting.
Our results and analytical formulas allow one to assess the rate performance of quantum repeaters and quantum communication networks with respect to the ultimate limits imposed by the laws of quantum mechanics.
## Results
### Ultimate limits of repeater chains
Consider Alice a and Bob b at the two ends of a linear chain of N quantum repeaters, labeled by r1, …, rN. Each point has a local register of quantum systems which may be augmented with incoming systems or depleted by outgoing ones. As also depicted in Fig. 1, the chain is connected by N + 1 quantum channels $$\{ {\cal{E}}_i\} = \{ {\cal{E}}_0, \ldots ,{\cal{E}}_i, \ldots ,{\cal{E}}_N\}$$ through which systems are sequentially transmitted. This means that Alice transmits a system to repeater r1, which then relays the system to repeater r2, and so on, until Bob is reached.
Note that, in general, we may also have opposite directions for some of the quantum channels, so that they transmit systems towards Alice; e.g., we may have a middle relay receiving systems from both Alice and Bob. For this reason, we generally consider the “exchange” of a quantum system between two points by either forward or backward transmission. Under the assistance of two-way CCs, the optimal transmission of quantum information is related to the optimal distribution of entanglement followed by teleportation, so that it does not depend on the physical direction of the quantum channel but rather on the direction of the teleportation protocol.
In a single end-to-end transmission or use of the chain, all the channels are used exactly once. Assume that the end-points aim to share target bits, which may be ebits or private bits30,31. The most general quantum distribution protocol $${\cal{P}}_{{\mathrm{chain}}}$$ involves transmissions which are interleaved by adaptive LOCCs among all parties, i.e., LOs assisted by two-way CCs among end-points and repeaters. In other words, before and after each transmission between two nodes, there is a session of LOCCs where all the nodes update and optimize their registers.
After n adaptive uses of the chain, the end-points share an output state $$\rho _{{\mathbf{ab}}}^n$$ with nRn target bits. By optimizing the asymptotic rate limnRn over all protocols $${\cal{P}}_{{\mathrm{chain}}}$$, we define the generic two-way capacity of the chain $${\cal{C}}(\{ {\cal{E}}_i\} )$$. If the target are ebits, the repeater-assisted capacity $${\cal{C}}$$ is an entanglement-distribution capacity D2. The latter coincides with a quantum capacity Q2, because distributing an ebit is equivalent to transmitting a qubit if we assume two-way CCs. If the target are private bits, $${\cal{C}}$$ is a secret-key capacity KD2 (with the inequality holding because ebits are specific private bits). Exact definitions and more details are given in Supplementary Note 1.
To state our upper bound for $${\cal{C}}(\{ {\cal{E}}_i\} )$$, we introduce the notion of channel simulation, as generally formulated by ref. 15 (see also refs. 32,33,34,35,36,37 for variants). Recall that any quantum channel $${\cal{E}}$$ is simulable by applying a trace-preserving LOCC $${\cal{T}}$$ to the input state ρ together with some bipartite resource state σ, so that $${\cal{E}}(\rho ) = {\cal{T}}(\rho \otimes \sigma )$$. The pair $$({\cal{T}},\sigma )$$ represents a possible “LOCC simulation” of the channel. In particular, for channels that suitably commute with the random unitaries of teleportation4, called “teleportation-covariant” channels15, one finds that $${\cal{T}}$$ is teleportation and σ is their Choi matrix $$\sigma _{\cal{E}}: = {\cal{I}} \otimes {\cal{E}}(\Phi )$$, where Φ is a maximally entangled state. The latter is also known as “teleportation simulation”.
For bosonic channels, the Choi matrices are energy-unbounded, so that simulations need to be formulated asymptotically. In general, an asymptotic state σ is defined as the limit of a sequence of physical states σμ, i.e., $$\sigma : = \mathop {\mathrm{{lim}}}\nolimits_\mu \sigma ^\mu$$. The simulation of a channel $${\cal{E}}$$ over an asymptotic state takes the form $$\left\Vert {{\cal{E}}(\rho ) - {\cal{T}}(\rho \otimes \sigma ^\mu )} \right\Vert_1\mathop { \to }\limits^\mu 0$$ where the LOCC $${\cal{T}}$$ may also depend on μ in the general case15. Similarly, any relevant functional on the asymptotic state needs to be computed over the defining sequence σμ before taking the limit for large μ. These technicalities are fully accounted in the Methods section.
The other notion to introduce is that of entanglement cut between Alice and Bob. In the setting of a linear chain, a cut “i” disconnects channel $${\cal{E}}_i$$ between repeaters ri and ri+1. Such channel can be replaced by a simulation with some resource state σi. After calculations (see Methods), this allows us to write
$${\cal{C}}(\{ {\cal{E}}_i\} ) \le E_{\mathrm{R}}(\sigma _i),$$
(1)
where ER(·) is the relative entropy of entanglement (REE). Recall that the REE is defined as38,39,40
$$E_{\mathrm{R}}(\sigma ) = \mathop {{\inf }}\limits_{\gamma \in {\mathrm{SEP}}} S(\sigma ||\gamma ),$$
(2)
where SEP represents the ensemble of separable bipartite states and $$S(\sigma ||\gamma ): = {\mathrm{Tr}}\left[ {\sigma (\mathrm{log}_2\sigma - \mathrm{log}_2\gamma )} \right]$$ is the relative entropy. In general, for any asymptotic state defined by the limit $$\sigma : = \mathrm{lim}_\mu \sigma ^\mu$$, we may extend the previous definition and consider
$$E_{\mathrm{R}}(\sigma ) = \mathop {{\rm{lim}}\,{\rm{inf}}}\limits_{\mu} E_{\mathrm{R}}({\sigma} ^{\mu} ) = \mathop {{\mathrm{inf}}}\limits_{{\gamma} ^{\mu} }\, \mathop {{\rm{lim}}\,{\rm{inf}}}\limits_{\mu} S({\sigma} ^{\mu} ||{\gamma} ^{\mu} ),$$
(3)
where γμ is a converging sequence of separable states15.
By minimizing Eq. (1) over all cuts, we may write
$${\cal{C}}(\{ {\cal{E}}_i\} ) \le \min _iE_{\mathrm{R}}(\sigma _i),$$
(4)
which establishes the ultimate limit for entanglement and key distribution through a repeater chain. For a chain of teleportation-covariant channels, we may use their teleportation simulation over Choi matrices and write
$${\cal{C}}(\{ {\cal{E}}_i\} ) \le \min _iE_{\mathrm{R}}(\sigma _{{\cal{E}}_i}).$$
(5)
Note that the family of teleportation-covariant channels is large, including Pauli channels (at any dimension)7 and bosonic Gaussian channels9. Within such a family, there are channels $${\cal{E}}$$ whose generic two-way capacity $${\cal{C}} = Q_2$$, D2 or K satisfies
$${\cal{C}}({\cal{E}}) = E_{\mathrm{R}}(\sigma _{\cal{E}}) = D_1(\sigma _{\cal{E}}),$$
(6)
where $$D_1(\sigma _{\cal{E}})$$ is the one-way distillable entanglement of the Choi matrix (defined as an asymptotic functional in the bosonic case15). These are called “distillable channels” and include bosonic lossy channels, quantum-limited amplifiers, dephasing and erasure channels15.
For a chain of distillable channels, we therefore exactly establish the repeater-assisted capacity as
$${\cal{C}}(\{ {\cal{E}}_i\} ) = \mathop {{\mathrm{min}}}\limits_i {\cal{C}}({\cal{E}}_i) = \mathop {{\mathrm{min}}}\limits_i E_{\mathrm{R}}(\sigma _{{\cal{E}}_i}).$$
(7)
In fact the upper bound (≤) follows from Eqs. (5) and (6). The lower bound (≥) relies on the fact that an achievable rate for end-to-end entanglement distribution consists in: (i) each pair, ri and $${\mathbf{r}}_{i + 1}$$, exchanging $$D_1(\sigma _{{\cal{E}}_i})$$ ebits over $${\cal{E}}_i$$; and (ii) performing entanglement swapping on the distilled ebits. In this way, at least $${\mathrm{min}}_{i} {D}_{1}(\sigma_{{\cal{E}}_i})$$ ebits are shared between Alice and Bob.
### Lossy chains
Let us specify Eq. (7) to an important case. For a chain of quantum repeaters connected by lossy channels with transmissivities $$\{ \eta _i\}$$, we find the capacity
$${\cal{C}}_{{\mathrm{loss}}} = - \log _2(1 - \eta _{{\mathrm{min}}}), \quad \eta_{\mathrm{min}} := \mathop{\mathrm{{min}}}\limits_i \eta _i.$$
(8)
Thus, the minimum transmissivity within the lossy chain establishes the ultimate rate for repeater-assisted quantum/private communication between the end-users. For instance, consider an optical fiber with transmissivity η and insert N repeaters so that the fiber is split into N + 1 lossy channels. The optimal configuration corresponds to equidistant repeaters, so that $$\eta _{{\mathrm{min}}} = \root {{N + 1}} \of {\eta }$$ and the maximum capacity of the lossy chain is
$${\cal{C}}_{{\mathrm{loss}}}(\eta ,N) = - \log_2\left( {1 - \root {{N + 1}} \of {\eta }} \right) .$$
(9)
This capacity is plotted in Fig. 2 and compared with the point-to-point PLOB bound $${\cal{C}}(\eta ) = {\cal{C}}_{{\mathrm{loss}}}(\eta ,0)$$. A simple calculation shows that if we want to guarantee a performance of 1 target bit per use of the chain, then we may tolerate at most 3 dB of loss in each individual link. This “3dB rule” imposes a maximum repeater-repeater distance of 15 km in standard optical fiber (at 0.2dB/km).
### Quantum networks under single-path routing
A quantum communication network can be represented by an undirected finite graph18 $${\cal{N}} = (P,E)$$, where P is the set of points and E the set of all edges. Each point p has a local register of quantum systems. Two points pi and pj are connected by an edge $$({\mathbf{p}}_i,{\mathbf{p}}_j) \in E$$ if there is a quantum channel $${\cal{E}}_{ij}: = {\cal{E}}_{{\mathbf{p}}_i{\mathbf{p}}_j}$$ between them. By simulating each channel $${\cal{E}}_{ij}$$ with a resource state $$\sigma _{ij}$$, we simulate the entire network $${\cal{N}}$$ with a set of resource states $$\sigma ({\cal{N}}) = \{ \sigma _{ij}\}$$. A route is an undirected path $${\mathbf{a}} - {\mathbf{p}}_i - \cdots - {\mathbf{p}}_j - {\mathbf{b}}$$ between the two end-points, Alice a and Bob b. These are connected by an ensemble of possible routes $$\Omega = \{ 1, \ldots ,\omega , \ldots \}$$, with the generic route ω involving the transmission through a sequence of channels $$\{ {\cal{E}}_0^\omega , \ldots ,{\cal{E}}_k^\omega \ldots \}$$. Finally, an entanglement cut C is a bipartition (A, B) of P such that $${\mathbf{a}} \in {\mathbf{A}}$$ and $${\mathbf{b}} \in {\mathbf{B}}$$. Any such cut C identifies a super Alice A and a super Bob B, which are connected by the cut-set $$\tilde C = \{ ({\mathbf{x}},{\mathbf{y}}) \in E:{\mathbf{x}} \in {\mathbf{A}},{\mathbf{y}} \in {\mathbf{B}}\}$$. See the example in Fig. 3 and more details in Supplementary Notes 2 and 3.
Let us remark that the quantum network is here described by an undirected graph where the physical direction of the quantum channels $${\cal{E}}_{ij}$$ can be forward (pipj) or backward (pjpi). As said before for the repeater chains, this degree of freedom relies on the fact that we consider assistance by two-way CC, so that the optimal transmission of qubits can always be reduced to the distillation of ebits followed by teleportation. The logical flow of quantum information is therefore fully determined by the LOs of the points, not by the physical direction of the quantum channel which is used to exchange a quantum system along an edge of the network. This study of an undirected quantum network under two-way CC clearly departs from other investigations41,42,43.
In a sequential protocol $${\cal{P}}_{{\mathrm{seq}}}$$, the network is initialized by a preliminary network LOCC, where all the points communicate with each other via unlimited two-way CCs and perform adaptive LOs on their local quantum systems. With some probability, Alice exchanges a quantum system with repeater pi, followed by a second network LOCC; then repeater pi exchanges a system with repeater pj, followed by a third network LOCC and so on, until Bob is reached through some route in a complete sequential use of the network (see Fig. 4). The routing is itself adaptive in the general case, with each node updating its routing table (probability distribution) on the basis of the feedback received by the other nodes. For large n uses of the network, there is a probability distribution associated with the ensemble Ω, with the generic route ω being used $$np_\omega$$ times. Alice and Bob’s output state $$\rho _{{\mathbf{ab}}}^n$$ will approximate a target state with $$nR_n$$ bits. By optimizing over $${\cal{P}}_{{\mathrm{seq}}}$$ and taking the limit of large n, we define the sequential or single-path capacity of the network $${\cal{C}}({\cal{N}})$$, whose nature depends on the target bits.
To state our upper bound, let us first introduce the flow of REE through a cut. Given an entanglement cut C of the network, consider its cut-set $$\tilde C$$. For each edge (x, y) in $$\tilde C$$, we have a channel $${\cal{E}}_{{\mathbf{xy}}}$$ and a corresponding resource state $$\sigma _{{\mathbf{xy}}}$$ associated with a simulation. Then we define the single-edge flow of REE across cut C as
$$E_{\mathrm{R}}(C): = \mathop {{\mathrm{max}}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{R}}(\sigma _{{\mathbf{xy}}}).$$
(10)
The minimization of this quantity over all entanglement cuts provides our upper bound for the single-path capacity of the network, i.e.,
$${\cal{C}}({\cal{N}}) \le \mathop {{\mathrm{min}}}\limits_C E_{\mathrm{R}}(C),$$
(11)
which is the network generalization of Eq. (4). For proof see Methods and further details in Supplementary Note 4.
In Eq. (11), the quantity $$E_{\mathrm{R}}(C)$$ represents the maximum entanglement (as quantified by the REE) “flowing” through a cut. Its minimization over all the cuts bounds the single-path capacity for quantum communication, entanglement distribution and key generation. For a network of teleportation-covariant channels, the resource state $$\sigma _{{\mathbf{xy}}}$$ in Eq. (10) is the Choi matrix $$\sigma _{{\cal{E}}_{{\mathbf{xy}}}}$$ of the channel $${\cal{E}}_{{\mathbf{xy}}}$$. In particular, for a network of distillable channels, we may also set
$${\cal{C}}({\cal{E}}_{{\mathbf{xy}}}) = E_{\mathrm{R}}(\sigma _{{\cal{E}}_{{\mathbf{xy}}}}) = D_1(\sigma _{{\cal{E}}_{{\mathbf{xy}}}}),$$
(12)
for any edge (x, y). Therefore, we may refine the previous bound of Eq. (11) into $${\cal{C}}({\cal{N}}) \le \mathop {{\mathrm{min}}}\nolimits_C {\cal{C}}(C)$$ where
$${\cal{C}}(C): = \mathop {\mathrm{max}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {\cal{C}}({\cal{E}}_{{\mathbf{xy}}})$$
(13)
is the maximum (single-edge) capacity of a cut.
Let us now derive a lower bound. First we prove that, for an arbitrary network, $$\mathop {\mathrm{min}}\nolimits_C {\cal{C}}(C) = \mathop {\mathrm{max}}\nolimits_\omega {\cal{C}}(\omega )$$, where $${\cal{C}}(\omega ): = \mathop {{\mathrm{min}}}\nolimits_i {\cal{C}}({\cal{E}}_i^\omega )$$ is the capacity of route ω (see Methods and Supplementary Note 4 for more details). Then, we observe that $${\cal{C}}(\omega )$$ is an achievable rate. In fact, any two consecutive points on route ω may first communicate at the rate $${\cal{C}}({\cal{E}}_i^\omega )$$; the distributed resources are then swapped to the end-users, e.g., via entanglement swapping or key composition at the minimum rate $$\mathop {\mathrm{min}}\nolimits_i {\cal{C}}({\cal{E}}_i^\omega )$$. For a distillable network, this lower bound coincides with the upper bound, so that we exactly establish the single-path capacity as
$${\cal{C}}({\cal{N}}) = \mathop {{\max }}\limits_\omega {\cal{C}}(\omega ) = \mathop {{\min }}\limits_C {\cal{C}}(C) = \mathop {{\min }}\limits_C E_{\mathrm{R}}(C) .$$
(14)
Finding the optimal route $$\omega _ \ast$$ corresponds to solving the widest path problem24 where the weights of the edges $$({\mathbf{x}},{\mathbf{y}})$$ are the two-way capacities $${\cal{C}}({\cal{E}}_{{\mathbf{xy}}})$$. Route $$\omega _ \ast$$ can be found via modified Dijkstra’s shortest path algorithm25, working in time $$O(\left| E \right|\mathop {{\log}}\nolimits_2 \left| P \right|)$$, where $$\left| E \right|$$ is the number of edges and $$\left| P \right|$$ is the number of points. Over route $$\omega _ \ast$$ a capacity-achieving protocol is non adaptive, with point-to-point sessions of one-way entanglement distillation followed by entanglement swapping4. In a practical implementation, the number of distilled ebits can be computed using the methods from ref. 44. Also note that, because the swapping is on ebits, there is no violation of the Bellman’s optimality principle45.
An important example is an optical lossy network $${\cal{N}}_{{\mathrm{loss}}}$$ where any route ω is composed of lossy channels with transmissivities $$\{ \eta _i^\omega \}$$. Denote by $$\eta _\omega : = \mathop {\mathrm{min}}\nolimits_i \eta _i^\omega$$ the end-to-end transmissivity of route ω. The single-path capacity is given by the route with maximum transmissivity
$${\cal{C}}({\cal{N}}_{{\mathrm{loss}}}) = - \log_2(1 - \eta _{\cal{N}}), \quad \eta _{\cal{N}}: = \mathop {\mathrm{max}}\limits_{\omega \in \Omega } \eta _\omega .$$
(15)
In particular, this is the ultimate rate at which the two end-points may generate secret bits per sequential use of the lossy network.
### Quantum networks under multi-path routing
In a network we may consider a more powerful routing strategy, where systems are transmitted through a sequence of multipoint communications (interleaved by network LOCCs). In each of these communications, a number M of quantum systems are prepared in a generally multipartite state and simultaneously transmitted to M receiving nodes. For instance, as shown in the example of Fig. 4, Alice may simultaneously sends systems to repeaters p1 and p2, which is denoted by $${\mathbf{a}} \to \{ {\mathbf{p}}_1,{\mathbf{p}}_2\}$$. Then, repeater p2 may communicate with repeater p1 and Bob b, i.e., $${\mathbf{p}}_2 \to \{ {\mathbf{p}}_1,{\mathbf{b}}\}$$. Finally, repeater p1 may communicate with Bob, i.e., $${\mathbf{p}}_1 \to {\mathbf{b}}$$. Note that each edge of the network is used exactly once during the end-to-end transmission, a strategy known as “flooding” in computer networks46. This is achieved by non-overlapping multipoint communications, where the receiving repeaters choose unused edges for the next transmissions. More generally, each multipoint communication is assumed to be a point-to-multipoint connection with a logical sender-to-receiver(s) orientation but where the quantum systems may be physically transmitted either forward or backward by the quantum channels.
Thus, in a general quantum flooding protocol $${\cal{P}}_{{\mathrm{flood}}}$$, the network is initialized by a preliminary network LOCC. Then, Alice a exchanges quantum systems with all her neighbor repeaters $${\mathbf{a}} \to \{ {\mathbf{p}}_k\}$$. This is followed by another network LOCC. Then, each receiving repeater exchanges systems with its neighbor repeaters through unused edges, and so on. Each multipoint communication is interleaved by network LOCCs and may distribute multi-partite entanglement. Eventually, Bob is reached as an end-point in the first parallel use of the network, which is completed when all Bob’s incoming edges have been used exactly once. In the limit of many uses n and optimizing over $${\cal{P}}_{{\mathrm{flood}}}$$, we define the multi-path capacity of the network $${\cal{C}}^{\mathrm{m}}({\cal{N}})$$.
As before, given an entanglement cut C, consider its cut-set $$\tilde C$$. For each edge (x, y) in $$\tilde C$$, there is a channel $${\cal{E}}_{{\mathbf{xy}}}$$ with a corresponding resource state $$\sigma _{{\mathbf{xy}}}$$. We define the multi-edge flow of REE through C as
$$E_{\mathrm{R}}^{\mathrm{m}}(C): = \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {E_{\mathrm{R}}} (\sigma _{{\mathbf{xy}}}),$$
(16)
which is the total entanglement (REE) flowing through a cut. The minimization of this quantity over all entanglement cuts provides our upper bound for the multi-path capacity of the network, i.e.,
$${\cal{C}}^{\mathrm{m}}({\cal{N}}) \le \mathop {\mathrm{min}}\limits_C E_{\mathrm{R}}^{\mathrm{m}}(C),$$
(17)
which is the multi-path generalization of Eq. (11). For proof see Methods and further details in Supplementary Note 5. In a teleportation-covariant network we may simply use the Choi matrices $$\sigma _{{\mathbf{xy}}} = \sigma _{{\cal{E}}_{{\mathbf{xy}}}}$$. Then, for a distillable network, we may use $$E_{\mathrm{R}}(\sigma _{{\cal{E}}_{{\mathbf{xy}}}}) = {\cal{C}}({\cal{E}}_{{\mathbf{xy}}})$$ from Eq. (12), and write the refined upper bound $${\cal{C}}^{\mathrm{m}}({\cal{N}}) \le \mathop {\mathrm{min}}\nolimits_C {\cal{C}}^{\mathrm{m}}(C)$$, where
$${\cal{C}}^{\mathrm{m}}(C): = \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {\cal{C}} ({\cal{E}}_{{\mathbf{xy}}})$$
(18)
is the total (multi-edge) capacity of a cut.
To show that the upper bound is achievable for a distillable network, we need to determine the optimal flow of qubits from Alice to Bob. First of all, from the knowledge of the capacities $${\cal{C}}({\cal{E}}_{{\mathbf{xy}}})$$, the parties solve a classical problem of maximum flow26,27,28,29 compatible with those capacities. By using Orlin’s algorithm47, the solution can be found in $$O(|P| \times |E|)$$ time. This provides an optimal orientation for the network and the rates $$R_{{\mathbf{xy}}} \le {\cal{C}}({\cal{E}}_{{\mathbf{xy}}})$$ to be used. Then, any pair of neighbor points, x and y, distill $$nR_{{\mathbf{xy}}}$$ ebits via one-way CCs. Such ebits are used to teleport $$nR_{{\mathbf{xy}}}$$ qubits from x to y according to the optimal orientation. In this way, a number nR of qubits are teleported from Alice to Bob, flowing as quantum information through the network. Using the max-flow min-cut theorem26,27,28,29,47,48,49,50,51,52,53, we have that the maximum flow is $$n{\cal{C}}^{\mathrm{m}}(C_{{\mathrm{min}}})$$ where $$C_{{\mathrm{min}}}$$ is the minimum cut, i.e., $${\cal{C}}^{\mathrm{m}}(C_{{\mathrm{min}}}) = \mathop {\mathrm{min}}\nolimits_C {\cal{C}}^{\mathrm{m}}(C)$$. Thus, that for a distillable $${\cal{N}}$$, we find the multi-path capacity
$${\cal{C}}^{\mathrm{m}}({\cal{N}}) = \mathop {\mathrm{min}}\limits_C {\cal{C}}^{\mathrm{m}}(C) = \mathop {\mathrm{min}}\limits_C E_{\mathrm{R}}^{\mathrm{m}}(C),$$
(19)
which is the multi-path version of Eq. (14). This is achievable by using a non adaptive protocol where the optimal routing is given by Orlin’s algorithm47.
As an example, consider again a lossy optical network $${\cal{N}}_{{\mathrm{loss}}}$$ whose generic edge (x, y) has transmissivity $$\eta _{{\mathbf{xy}}}$$. Given a cut C, consider its loss $$L_C: = \mathop {\prod}\nolimits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} (1 - \eta _{{\mathbf{xy}}})$$ and define the total loss of the network as the maximization $$L_{\cal{N}}: = \mathop {\mathrm{max}}\nolimits_C L_C$$. We find that the multi-path capacity is just given by
$${\cal{C}}^{\mathrm{m}}({\cal{N}}_{{\mathrm{loss}}}) = - \log_2L_{\cal{N}}.$$
(20)
It is interesting to make a direct comparison between the performance of single- and multi-path strategies. For this purpose, consider a diamond network $${\cal{N}}_{{\mathrm{loss}}}^\diamondsuit$$ whose links are lossy channels with the same transmissivity η. In this case, we easily see that the multi-path capacity doubles the single-path capacity of the network, i.e.,
$${\cal{C}}^{\mathrm{m}}({\cal{N}}_{{\mathrm{loss}}}^\diamondsuit ) = 2{\cal{C}}({\cal{N}}_{{\mathrm{loss}}}^\diamondsuit ) = - 2\log_2(1 - \eta ).$$
(21)
As expected the parallel use of the quantum network is more powerful than the sequential use.
### Formulas for distillable chains and networks
Here we provide explicit analytical formulas for the end-to-end capacities of distillable chains and networks, beyond the lossy case already studied above. In fact, examples of distillable channels are not only lossy channels but also quantum-limited amplifiers, dephasing and erasure channels. First let us recall their explicit definitions and their two-way capacities.
A lossy (pure-loss) channel with transmissivity $$\eta \in (0,1)$$ corresponds to a specific phase-insensitive Gaussian channel which transforms input quadratures $${\hat{\mathbf{x}}} = (\hat q,\hat p)^T$$ as $${\hat{\mathbf{x}}} \to \sqrt \eta {\hat{\mathbf{x}}} + \sqrt {1 - \eta } {\hat{\mathbf{x}}}_E$$, where E is the environment in the vacuum state9. Its two-way capacities (Q2, D2 and K) all coincide and are given by the PLOB bound15
$${\cal{C}}(\eta ) = - \log_2(1 - \eta ).$$
(22)
A quantum-limited amplifier with an associated gain g > 1 is another phase-insensitive Gaussian channel but realizing the transformation $${\hat{\mathbf{x}}} \to \sqrt g {\hat{\mathbf{x}}} + \sqrt {g - 1} {\hat{\mathbf{x}}}_E$$, where the environment E is in the vacuum state9. Its two-way capacities all coincide and are given by15
$${\cal{C}}(g) = - \log_2(1 - g^{ - 1}).$$
(23)
A dephasing channel with probability p ≤ 1/2 is a Pauli channel of the form $$\rho \to (1 - p)\rho + pZ\rho Z$$, where Z is the phase-flip Pauli operator7. Its two-way capacities all coincide and are given by15
$${\cal{C}}(p) = 1 - H_2(p),$$
(24)
where $$H_2(p): = - p\mathop {{\log}}\nolimits_2 p - (1 - p)\mathop {{\log}}\nolimits_2 (1 - p)$$ is the binary Shannon entropy. Finally, an erasure channel with probability $$p \le 1/2$$ is a channel of the form $$\rho \to (1 - p)\rho + p\left| e \right\rangle \left\langle e \right|$$, where $$\left| e \right\rangle \left\langle e \right|$$ is an orthogonal state living in an extra dimension7. Its two-way capacities all coincide to15,54,55
$${\cal{C}}(p) = 1 - p.$$
(25)
Consider now a repeater chain $$\{ {\cal{E}}_i\}$$, where the channels $${\cal{E}}_i$$ are distillable of the same type (e.g., all quantum-limited amplifiers with different gains gi). The repeater-assisted capacity can be computed by combining Eq. (7) with one of the Eqs. (22)–(25). The final formulas are shown in the first column of Table 1. Then consider a quantum network $${\cal{N}} = (P,E)$$, where each edge $$({\mathbf{x}},{\mathbf{y}}) \in E$$ is described by a distillable channel $${\cal{E}}_{{\mathbf{xy}}}$$ of the same type. For network $${\cal{N}}$$, we may consider both a generic route $$\omega \in \Omega$$, with sequence of channels $${\cal{E}}_i^\omega$$, and a entanglement cut C, with corresponding cut-set $$\tilde C$$. By combining Eqs. (14) and (19) with Eqs. (22)–(25), we derive explicit formulas for the single-path and multi-path capacities. These are given in the second and third columns of Table 1 where we set
$$\eta _{\cal{N}}: = \mathop {\mathrm{max}}\limits_{\omega \in \Omega } \mathop {\mathrm{min}}\limits_i \eta _i^\omega = \mathop {\mathrm{min}}\limits_C \mathop {\mathrm{max}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} \eta _{{\mathbf{xy}}},$$
(26)
$$g_{\cal{N}}: = \mathop {\mathrm{min}}\limits_{\omega \in \Omega } \mathop {\mathrm{max}}\limits_i g_i^\omega = \mathop {\mathrm{max}}\limits_C \mathop {\mathrm{min}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} g_{{\mathbf{xy}}},$$
(27)
$$p_{\cal{N}}: = \mathop {\mathrm{min}}\limits_{\omega \in \Omega } \mathop {\mathrm{max}}\limits_i p_i^\omega = \mathop {\mathrm{max}}\limits_C \mathop {\mathrm{min}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} p_{{\mathbf{xy}}},$$
(28)
$$L_{\cal{N}}: = \mathop {\mathrm{max}}\limits_C \mathop {\prod}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} (1 - \eta _{{\mathbf{xy}}}),$$
(29)
$$G_{\cal{N}}: = \mathop {\mathrm{max}}\limits_C \mathop {\prod}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} (1 - g_{{\mathbf{xy}}}^{ - 1}).$$
(30)
Let us note that the formulas for dephasing and erasure channels can be easily extended to arbitrary dimension d. In fact, a qudit erasure channel is formally defined as before and its two-way capacities are15,54,55
$${\cal{C}}(p) = (1 - p)\log_2d.$$
(31)
Therefore, it is sufficient to multiply by $$\mathop {{\log}}\nolimits_2 d$$ the corresponding expressions in Table 1. Then, in arbitrary dimension d, the dephasing channel is defined as
$$\rho \to \mathop {\sum}\limits_{k = 0}^{d - 1} {p_k} Z_d^k\rho (Z_d^\dagger )^k,$$
(32)
where pk is the probability of k phase flips and $$Z_d^k\left| i \right\rangle = {\mathrm{exp}}(2\pi ikd^{ - 1})\left| i \right\rangle$$. Its generic two-way capacity is15
$${\cal{C}}(p,d) = \log_2d - H(\{ p_k\} ),$$
(33)
where $$H(\{ p_k\} ): = - \mathop {\sum}\nolimits_k p_k \log_2p_k$$ is the Shannon entropy. Here the generalization is also simple. For instance, in a chain $$\{ {\cal{E}}_i\}$$ of such d-dimensional dephasing channels, we would have N + 1 distributions $$\{ p_k^i\}$$. We then compute the most entropic distribution, i.e., we take the maximization $$\mathop {\mathrm{max}}\nolimits_i H(\{ p_k^i\} )$$. This is the bottleneck that determines the repeater capacity, so that
$${\cal{C}}(\{ p_k^i\} ) = \log_2d - \mathop {\mathrm{max}}\limits_i H(\{ p_k^i\} ).$$
(34)
Generalization to dimension d is also immediate for the two network capacities $${\cal{C}}$$ and $${\cal{C}}^{\mathrm{m}}$$.
## Discussion
This work establishes the ultimate boundaries of quantum and private communications assisted by repeaters, from the case of a single repeater chain to an arbitrary quantum network under single- or multi-path routing. Assuming arbitrary quantum channels between the nodes, we have shown that the end-to-end capacities are bounded by single-letter quantities based on the relative entropy of entanglement. These upper bounds are very general and also apply to chains and networks with untrusted nodes (i.e., run by an eavesdropper). Our theory is formulated in a general information-theoretic fashion which also applies to other entanglement measures, as discussed in our Methods section. The upper bounds are particularly important because they set the tightest upper limits on the performance of quantum repeaters in various network configurations. For instance, our benchmarks may be used to evaluate performances in relay-assisted QKD protocols such as MDI-QKD and variants56,57,58. Related literature and other developments59,60,61,62,63,64,65,66 are discussed in Supplementary Note 6.
For the lower bounds, we have employed classical composition methods of the capacities, either based on the widest path problem or the maximum flow, depending on the type of routing. In general, these simple and classical lower bounds do not coincide with the quantum upper bounds. However this is remarkably the case for distillable networks, for which the ultimate quantum communication performance can be completely reduced to the resolution of classical problems of network information theory. For these networks, widest path and maximum flow determine the quantum performance in terms of secret key generation, entanglement distribution and transmission of quantum information. In this way, we have been able to exactly establish the various end-to-end capacities of distillable chains and networks where the quantum systems are affected by the most fundamental noise models, including bosonic loss, which is the most important for optical and telecom communications, quantum-limited amplification, dephasing and erasure. In particular, our results also showed how the parallel or “broadband” use of a lossy quantum network via multi-path routing may greatly improve the end-to-end rates.
## Methods
We present the main techniques that are needed to prove the results of our main text. These methods are here provided for a more general entanglement measure EM, and specifically apply to the REE. We consider a quantum network $${\cal{N}}$$ under single- or multi-path routing. In particular, a chain of quantum repeaters can be treated as a single-route quantum network.
For the upper bounds, our methodology can be broken down in the following steps: (i) Derivation of a general weak converse upper bound in terms of a suitable entanglement measure (in particular, the REE); (ii) Simulation of the quantum network, so that quantum channels are replaced by resource states; (iii) Stretching of the network with respect to an entanglement cut, so that Alice and Bob’s shared state has a simple decomposition in terms of resource states; (iv) Data processing, subadditivity over tensor-products, and minimization over entanglement cuts. These steps provide entanglement-based upper bounds for the end-to-end capacities. For the lower bounds, we perform a suitable composition of the point-to-point capacities of the single-link channels by means of the widest path and the maximum flow, depending on the routing. For the case of distillable quantum networks (and chains), these lower bounds coincide with the upper bounds expressed in terms of the REE.
### General (weak converse) upper bound
This closely follows the derivation of the corresponding point-to-point upper bound first given in the second 2015 arXiv version of ref. 15 and later reported as Theorem 2 in ref. 16. Consider an arbitrary end-to-end $$(n,R_n^\varepsilon ,\varepsilon )$$ network protocol $${\cal{P}}$$ (single- or multi-path). This outputs a shared state $$\rho _{{\mathbf{ab}}}^n$$ for Alice and Bob after n uses, which is ε-close to a target private state30,31 ϕn having $$nR_n^\varepsilon$$ secret bits, i.e., in trace norm we have $$\left\| {\rho _{{\mathbf{ab}}}^n - \phi ^n} \right\|_1 \le \varepsilon$$. Consider now an entanglement measure EM which is normalized on the target state, i.e.,
$$E_{\mathrm{M}}(\phi ^n) \ge nR_n^\varepsilon .$$
(35)
Assume that EM is continuous. This means that, for d-dimensional states ρ and σ that are close in trace norm as $$\left\Vert {\rho - \sigma } \right\Vert_1\, \le \varepsilon$$, we may write
$$\left| {E_{\mathrm{M}}(\rho ) - E_{\mathrm{M}}(\sigma )} \right| \le g(\varepsilon ){\mathrm{log}}_2d + h(\varepsilon ),$$
(36)
with the functions g and h converging to zero in ε. Assume also that EM is monotonic under trace-preserving LOCCs $$\bar \Lambda$$, so that
$$E_{\mathrm{M}}[\bar \Lambda (\rho )] \le E_{\mathrm{M}}(\rho ),$$
(37)
a property which is also known as data processing inequality. Finally, assume that EM is subadditive over tensor products, i.e.,
$$E_{\mathrm{M}}(\rho ^{ \otimes n}) \le nE_{\mathrm{M}}(\rho ).$$
(38)
All these properties are certainly satisfied by the REE ER and the squashed entanglement (SQ) ESQ, with specific expressions for g and h (e.g., these expressions are explicitly reported in Sec. VIII.A of ref. 16).
Using the first two properties (normalization and continuity), we may write
$$R_n^\varepsilon \le \frac{{E_{\mathrm{M}}(\rho _{{\mathbf{ab}}}^n) + g(\varepsilon )\log_2d + h(\varepsilon )}}{n},$$
(39)
where d is the dimension of the target private state. We know that this dimension is at most exponential in the number of uses, i.e., $$\log_2d \le \alpha nR_n^\varepsilon$$ for constant α (e.g., see ref. 15 or Lemma 1 in ref. 16). By replacing this dimensional bound in Eq. (39), taking the limit for large n and small ε (weak converse), we derive
$$\mathop {{\lim}}\limits_\varepsilon \mathop {{\lim}}\limits_n R_n^\varepsilon \le \mathop {{\lim}}\limits_n \frac{{E_{\mathrm{M}}(\rho _{{\mathbf{ab}}}^n)}}{n}.$$
(40)
Finally, we take the supremum over all protocols $${\cal{P}}$$ so that we can write our general upper bound for the end-to-end secret key capacity (SKC) of the network
$$E_{\mathrm{M}}^ \star ({\cal{N}}): = \mathop {{\sup}}\limits_{\cal{P}} \mathop {{\lim}}\limits_n \frac{{E_{\mathrm{M}}(\rho _{{\mathbf{ab}}}^n)}}{n}.$$
(41)
In particular, this is an upper bound to the single-path SKC $${\cal{K}}$$ if $${\cal{P}}$$ are single-path protocols, and to the multi-path SKC $${\cal{K}}^m$$ if $${\cal{P}}$$ are multi-path (flooding) protocols.
In the case of an infinite-dimensional state $$\rho _{{\mathbf{ab}}}^n$$, the proof can be repeated by introducing a truncation trace-preserving LOCC T, so that $$\delta _{{\mathbf{ab}}}^n = {\boldsymbol{T}}(\rho _{{\mathbf{ab}}}^n)$$ is a finite-dimensional state. The proof is repeated for $$\delta _{{\mathbf{ab}}}^n$$ and finally we use the data processing $$E_{\mathrm{M}}(\delta _{{\mathbf{ab}}}^n) \le E_{\mathrm{M}}(\rho _{{\mathbf{ab}}}^n)$$ to write the same upper bound as in Eq. (41). This follows the same steps of the proof given in the second 2015 arXiv version of ref. 15 and later reported as Theorem 2 in ref. 16. It is worth mentioning that Eq. (41) can equivalently be proven without using the exponential growth of the private state, i.e., using the steps of the third proof given in the Supplementary Note 3 of ref. 15.
### Network simulation
Given a network $${\cal{N}} = (P,E)$$ with generic point $${\mathbf{x}} \in P$$ and edge $$({\mathbf{x}},{\mathbf{y}}) \in E$$, replace the generic channel $${\cal{E}}_{{\mathbf{xy}}}$$ with a simulation over a resource state σxy. This means to write $${\cal{E}}_{{\mathbf{xy}}}(\rho ) = {\cal{T}}_{{\mathbf{xy}}}(\rho \otimes \sigma _{{\mathbf{xy}}})$$ for any input state ρ, by resorting to a suitable trace-preserving LOCC $${\cal{T}}_{{\mathbf{xy}}}$$ (this is always possible for any quantum channel15). If we perform this operation for all the edges, we then define the simulation of the network $$\sigma ({\cal{N}}) = \{ \sigma _{{\mathbf{xy}}}\} _{({\mathbf{x}},{\mathbf{y}}) \in E}$$ where each channel is replaced by a corresponding resource state. If the channels are bosonic, then the simulation is typically asymptotic of the type $${\cal{E}}_{{\mathbf{xy}}}(\rho ) = \mathop {{\lim}}\nolimits_\mu {\cal{E}}_{{\mathbf{xy}}}^\mu (\rho )$$ where $${\cal{E}}_{{\mathbf{xy}}}^\mu (\rho ) = {\cal{T}}_{{\mathbf{xy}}}^\mu (\rho \otimes \sigma _{{\mathbf{xy}}}^\mu )$$ for some sequence of simulating LOCCs $${\cal{T}}_{{\mathbf{xy}}}^\mu$$ and sequence of resource states $$\sigma _{{\mathbf{xy}}}^\mu$$.
Here the parameter μ is usually connected with the energy of the resource state. For instance, if $${\cal{E}}_{{\mathbf{xy}}}$$ is a teleportation-covariant bosonic channel, then the resource state $$\sigma _{{\mathbf{xy}}}^\mu$$ is its quasi-Choi matrix $$\sigma _{{\cal{E}}_{{\mathbf{xy}}}}^\mu : = {\cal{I}} \otimes {\cal{E}}_{{\mathbf{xy}}}({\mathrm{\Phi }}^\mu )$$, with $$\Phi ^\mu$$ being a two-mode squeezed vacuum state (TMSV) state9 whose parameter $$\mu = \bar n + 1/2$$ is related to the mean number $$\bar n$$ of thermal photons. Similarly, the simulating LOCC $${\cal{T}}_{{\mathbf{xy}}}^\mu$$ is a Braunstein-Kimble protocol67,68 where the ideal Bell detection is replaced by the finite-energy projection onto α-displaced TMSV states $$D(\alpha ){\mathrm{\Phi }}^\mu D( - \alpha )$$, with D being the phase-space displacement operator9.
Given an asymptotic simulation of a quantum channel, the associated simulation error is correctly quantified by employing the energy-constrained diamond distance15, which must go to zero in the limit, i.e.,
$$\left\| {{\cal{E}}_{{\mathbf{xy}}} - {\cal{E}}_{{\mathbf{xy}}}^\mu } \right\|_{\diamondsuit \bar N}\mathop { \to }\limits^\mu 0\,{\mathrm{for}}\,{\mathrm{any}}\,{\mathrm{finite}}\,\bar N{\mathrm{.}}$$
(42)
Recall that, for any two bosonic channels $${\cal{E}}$$ and $${\cal{E}}'$$, this quantity is defined as
$${\left\Vert {{\cal{E}} - {\cal{E}}'} \right\Vert}_{\diamondsuit \bar N}: = \mathop {{\mathrm{sup}}}\limits_{\rho _{AB} \in D_{\bar N}} \left\Vert {{\cal{I}}_A \otimes {\cal{E}}(\rho _{AB}) - {\cal{I}}_A \otimes {\cal{E}}' (\rho _{AB})} \right\Vert_1,$$
(43)
where $$D_{\bar N}$$ is the compact set of bipartite bosonic states with $$\bar N$$ mean number of photons (see ref. 69 for a later and slightly different definition, where the constraint is only on the B part). Thus, in general, if the network has bosonic channels, we may write the asymptotic simulation $$\sigma ({\cal{N}}) = \mathop {{\lim}}\nolimits_\mu \sigma ^\mu ({\cal{N}})$$ where $$\sigma ^\mu ({\cal{N}}): = \{ \sigma _{{\mathbf{xy}}}^\mu \} _{({\mathbf{x}},{\mathbf{y}}) \in E}$$.
### Stretching of the network
Once we simulate a network, the next step is its stretching, which is the complete adaptive-to-block simplification of its output state (for the exact details of this procedure see Supplementary Note 3). As a result of stretching, the n-use output state of the generic network protocol can be decomposed as
$$\rho _{{\mathbf{ab}}}^n = \bar \Lambda _{{\mathbf{ab}}}\left[ {\mathop { \otimes }\limits_{({\mathbf{x}},{\mathbf{y}}) \in E} \sigma _{{\mathbf{xy}}}^{ \otimes n_{{\mathbf{xy}}}}} \right] ,$$
(44)
where $$\bar \Lambda$$ represents a trace-preserving LOCC (which is local with respect to Alice and Bob). The LOCC $$\bar \Lambda$$ includes all the adaptive LOCCs from the original protocol besides the simulating LOCCs. In Eq. (44), the parameter nxy is the number of uses of the edge (x, y), that we may always approximate to an integer for large n. We have nxyn for single-path routing, and nxy = n for flooding protocols in multi-path routing.
In the presence of bosonic channels and asymptotic simulations, we modify Eq. (44) into the approximate stretching
$$\rho _{{\mathbf{ab}}}^{n,\mu } = \bar \Lambda _{{\mathbf{ab}}}^\mu \left[ {\mathop { \otimes }\limits_{({\mathbf{x}},{\mathbf{y}}) \in E} \sigma _{{\mathbf{xy}}}^{\mu \otimes n_{{\mathbf{xy}}}}} \right],$$
(45)
which tends to the actual output $$\rho _{{\mathbf{ab}}}^n$$ for large μ. In fact, using a “peeling” technique15,16 which exploits the triangle inequality and the monotonicity of the trace distance under completely-positive trace-preserving maps, we may write the following bound
$$\left\| {\rho _{{\mathbf{ab}}}^n - \rho _{{\mathbf{ab}}}^{n,\mu }} \right\|_1 \le \varepsilon ^\mu : = \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in E} {n_{{\mathbf{xy}}}} \left\| {{\cal{E}}_{{\mathbf{xy}}} - {\cal{E}}_{{\mathbf{xy}}}^\mu } \right\|_{\diamondsuit \bar N},$$
(46)
which goes to zero in μ for any finite input energy $$\bar N$$, finite number of uses n of the protocol, and finite number of edges |E| in the network (the explicit steps of the proof can be found in Supplementary Note 3).
### Stretching with respect to entanglement cuts
The decomposition of the output state can be greatly simplified by introducing cuts in the network. In particular, we may drastically reduce the number of resource states in its representation. Given a cut C of $${\cal{N}}$$ with cut-set $$\tilde C$$, we may in fact stretch the network with respect to that specific cut (see again Supplementary Note 3 for exact details of the procedure). In this way, we may write
$$\rho _{{\mathbf{ab}}}^n(C) = \bar \Lambda _{{\mathbf{ab}}}\left[ {\mathop { \otimes }\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} \sigma _{{\mathbf{xy}}}^{ \otimes n_{{\mathbf{xy}}}}} \right] ,$$
(47)
where $$\bar \Lambda _{{\mathbf{ab}}}$$ is a trace-preserving LOCC with respect to Alice and Bob (differently from before, this LOCC now depends on the cut C, but we prefer not to complicate the notation). Similarly, in the presence of bosonic channels, we may consider the approximate decomposition
$$\rho _{{\mathbf{ab}}}^{n,\mu }(C) = \bar \Lambda _{{\mathbf{ab}}}^\mu \left[ {\mathop { \otimes }\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} \sigma _{{\mathbf{xy}}}^{\mu \otimes n_{{\mathbf{xy}}}}} \right],$$
(48)
which converges in trace distance to $$\rho _{{\mathbf{ab}}}^n(C)$$ for large μ.
Let us combine the stretching in Eq. (47) with two basic properties of the entanglement measure EM. The first property is the monotonicity of EM under trace-preserving LOCCs; the second property is the subadditivity of EM over tensor-product states. Using these properties, we can simplify the general upper bound of Eq. (41) into a simple and computable single-letter quantity. In fact, for any cut C of the network $${\cal{N}}$$, we write
$$E_{\mathrm{M}}[\rho _{{\mathbf{ab}}}^n(C)] \le E_{\mathrm{M}}\left[ {\mathop { \otimes }\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} \sigma _{{\mathbf{xy}}}^{ \otimes n_{{\mathbf{xy}}}}} \right]$$
(49)
$$\le \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} n_{{\mathbf{xy}}}E_{\mathrm{M}}(\sigma _{{\mathbf{xy}}}),$$
(50)
where $$\bar \Lambda _{{\mathbf{ab}}}$$ has disappeared. Let us introduce the probability of using the generic edge (x, y)
$$p_{{\mathbf{xy}}}: = \mathop {{\lim}}\limits_n \frac{{n_{{\mathbf{xy}}}}}{n},$$
(51)
so that we may write the limit
$$\mathop {{\lim}}\limits_n \frac{{E_{\mathrm{M}}[\rho _{{\mathbf{ab}}}^n(C)]}}{n} \le \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {p_{{\mathbf{xy}}}} E_{\mathrm{M}}(\sigma _{{\mathbf{xy}}}).$$
(52)
Using the latter in Eq. (41) allows us to write the following bound, for any cut C
$$E_{\mathrm{M}}^ \star ({\cal{N}}) \le E_{\mathrm{M}}^ \star ({\cal{N}},C): = \mathop {{\mathrm{sup}}}\limits_{\{ p_{{\mathbf{xy}}}\} } \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {p_{{\mathbf{xy}}}} E_{\mathrm{M}}(\sigma _{{\mathbf{xy}}}).$$
(53)
In the case of bosonic channels and asymptotic simulations, we may use the triangle inequality
$$\left\Vert {\rho _{{\mathbf{ab}}}^{n,\mu } - \phi ^n} \right\Vert_1\, \le \left\Vert {\rho _{{\mathbf{ab}}}^{n,\mu } - \rho _{{\mathbf{ab}}}^n} \right\Vert_1 + \left\Vert {|\rho _{{\mathbf{ab}}}^n - \phi ^n} \right\Vert_1\, \le \varepsilon ^\mu + \varepsilon : = \Sigma ^\mu { \to } 0.$$
(54)
Then, we may repeat the derivations around Eqs. (39)–(41) for $$\rho _{{\mathbf{ab}}}^{n,\mu }$$ instead of $$\rho _{{\mathbf{ab}}}^n$$, where we also include the use of a suitable truncation of the states via a trace-preserving LOCC T (see also Sec. VIII.D of ref. 16 for a similar approach in the point-to-point case). This leads to the μ-dependent upper-bound
$$E_{\mathrm{M}}^ \star ({\cal{N}},\mu ): = \mathop {{\mathrm{sup}}}\limits_{\cal{P}} \,\mathop {{{\mathrm{lim}}}}\limits_n \frac{{E_{\mathrm{M}}(\rho _{{\mathbf{ab}}}^{n,\mu })}}{n}.$$
(55)
Because this is valid for any μ, we may conservatively take the inferior limit in μ and consider the upper bound
$$E_{\mathrm{M}}^ \star ({\cal{N}}): = \mathop {{\mathrm{lim}\,\mathrm{inf}}}\limits_\mu E_{\mathrm{M}}^ \star ({\cal{N}},\mu ).$$
(56)
Finally, by introducing the stretching of Eq. (48) with respect to an entanglement cut C, and using the monotonicity and subadditivity of EM with respect to the decomposition of $$\rho _{{\mathbf{ab}}}^{n,\mu }(C)$$, we may repeat the previous reasonings and write
$$E_{\mathrm{M}}^ \star ({\cal{N}}) \le E_{\mathrm{M}}^ \star ({\cal{N}},C): = \mathop {{\sup}}\limits_{\{ p_{{\mathbf{xy}}}\} } \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {p_{{\mathbf{xy}}}} \left[ {\mathop {{\mathrm{lim}\,\mathrm{inf}}}\limits_\mu E_{\mathrm{M}}(\sigma _{{\mathbf{xy}}}^\mu )} \right],$$
(57)
which is a direct extension of the bound in Eq. (53).
We may formulate both Eqs. (53) and (57) in a compact way if we define the entanglement measure EM over an asymptotic state $$\sigma : = \mathop {{\lim}}\nolimits_\mu \sigma ^\mu$$ as
$$E_{\mathrm{M}}(\sigma ): = \mathop {{\lim \inf }}\limits_\mu E_{\mathrm{M}}(\sigma ^\mu ).$$
(58)
It is clear that, for a physical (non-asymptotic) state, we have the trivial sequence σμ = σ for any μ, so that Eq. (58) provides the standard definition. In the specific case of REE, we may write
$${E_{\mathrm{R}}(\sigma )} = {\mathop {{\lim \inf }}\limits_\mu E_{\mathrm{R}}(\sigma ^\mu )} = {\mathop {{\inf }}\limits_{\gamma ^\mu } {\hskip3pt} \mathop {{\lim \inf }}\limits_\mu S(\sigma ^\mu ||\gamma ^\mu ),}$$
(59)
where γμ is a sequence of separable states that converges in trace norm; this means that there exists a separable state γ such that $$\left\| {\gamma ^\mu - \gamma } \right\|_1\mathop { \to }\limits^\mu 0$$. Employing the extended definition of Eq. (58), we may write Eq. (53) for both non-asymptotic σxy and asymptotic states $$\sigma _{{\mathbf{xy}}}: = \mathop {{\lim}}\nolimits_\mu \sigma _{{\mathbf{xy}}}^\mu$$.
### Minimum entanglement cut and upper bounds
By minimizing Eq. (53) over all possible cuts of the network, we find the tightest upper bound, i.e.,
$$E_{\mathrm{M}}^ \star ({\cal{N}}) \le \mathop {{\mathrm{min}}}\limits_C E_{\mathrm{M}}^ \star ({\cal{N}},C).$$
(60)
Let us now specify this formula for different types of routing. For single-path routing, we have $$p_{{\mathbf{xy}}} \le 1$$, so that we may use
$$\mathop {{\mathrm{sup}}}\limits_{\{ p_{{\mathbf{xy}}}\} } \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {p_{{\mathbf{xy}}}} ( \cdots ) \le \mathop {{\mathrm{max}}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} ( \cdots ),$$
(61)
in Eq. (53). Therefore, we derive the following upper bound for the single-path SKC
$${\cal{K}}({\cal{N}}) \le \mathop {{\min }}\limits_C E_{\mathrm{M}}(C),$$
(62)
where we introduce the single-edge flow of entanglement through the cut
$$E_{\mathrm{M}}(C): = \mathop {{\mathrm{max}}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{M}}(\sigma _{{\mathbf{xy}}}).$$
(63)
In particular, we may specify this result to a single chain of N points and N + 1 channels $$\{ {\cal{E}}_i\}$$ with resource states {σi}. This is a quantum network with a single route, so that the cuts can be labeled by i and the cut-sets are just composed of a single edge. Therefore, Eqs. (62) and (63) become
$${\cal{K}}(\{ {\cal{E}}_i\} ) \le \mathop {{\mathrm{min}}}\limits_i E_{\mathrm{M}}(\sigma _i).$$
(64)
For multi-path routing, we have pxy = 1 (flooding), so that we may simplify
$$\mathop {{\mathrm{sup}}}\limits_{\{ p_{{\mathbf{xy}}}\} } \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {p_{{\mathbf{xy}}}} ( \cdots ) = \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {( \cdots )} ,$$
(65)
in Eq. (53). Therefore, we can write the following upper bound for the multi-path SKC
$${\cal{K}}^{\mathrm{m}}({\cal{N}}) \le \mathop {{\mathrm{min}}}\limits_C E_{\mathrm{M}}^{\mathrm{m}}(C),$$
(66)
where we introduce the multi-edge flow of entanglement through the cut
$$E_{\mathrm{M}}^{\mathrm{m}}(C): = \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{M}}(\sigma _{{\mathbf{xy}}}).$$
(67)
In these results, the definition of EM(σxy) is implicitly meant to be extended to asymptotic states, according to Eq. (58). Then, note that the tightest values of the upper bounds are achieved by extending the minimization to all network simulations $$\sigma ({\cal{N}})$$, i.e., by enforcing $$\min_C \to \min_{\sigma ({\cal{N}})}\min_C$$ in Eqs. (62) and (66).
Specifying Eqs. (62), (64), and (66) to the REE, we get the single-letter upper bounds
$${\cal{C}}(\{ {\cal{E}}_i\} ) \le {\cal{K}}(\{ {\cal{E}}_i\} ) \le \mathop {{\mathrm{min}}}\limits_i E_{\mathrm{R}}(\sigma _i),$$
(68)
$${\cal{C}}({\cal{N}}) \le {\cal{K}}({\cal{N}}) \le \mathop {{\mathrm{min}}}\limits_C E_{\mathrm{R}}(C),$$
(69)
$${\cal{C}}^{\mathrm{m}}({\cal{N}}) \le {\cal{K}}^{\mathrm{m}}({\cal{N}}) \le \mathop {{\mathrm{min}}}\limits_C E_{\mathrm{R}}^{\mathrm{m}}(C),$$
(70)
which are Eqs. (4), (11) and (17) of the main text. The proofs of these upper bounds in terms of the REE can equivalently be done following the “converse part” derivations in Supplementary Note 1 (for chains), Supplementary Note 4 (for networks under single-path routing), and Supplementary Note 5 (for networks under multi-path routing). Differently from what presented in this Methods section, such proofs exploit the lower semi-continuity of the quantum relative entropy8 in order to deal with asymptotic simulations (e.g., for bosonic channels).
### Lower bounds
To derive lower bounds we combine the known results on two-way assisted capacities15 with classical results in network information theory. Consider the generic two-way assisted capacity $${\cal{C}}_{{\mathbf{xy}}}$$ of the channel $${\cal{E}}_{{\mathbf{xy}}}$$ (in particular, this can be either D2 = Q2 or K). Then, using the cut property of the widest path (Supplementary Note 4), we derive the following achievable rate for the generic single-path capacity of the network $${\cal{N}}$$
$${\cal{C}}({\cal{N}}) \ge \mathop {{\min}}\limits_C \mathop {{\max}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {\cal{C}}_{{\mathbf{xy}}} .$$
(71)
For a chain $$\{ {\cal{E}}_i\}$$, this simply specifies to
$${\cal{C}}(\{ {\cal{E}}_i\} ) \ge \mathop {{\min}}\limits_i {\cal{C}}({\cal{E}}_i).$$
(72)
Using the classical max-flow min-cut theorem (Supplementary Note 5), we derive the following achievable rate for the generic multi-path capacity of $${\cal{N}}$$
$${\cal{C}}^{\mathrm{m}}({\cal{N}}) \ge \mathop {{\min}}\limits_C \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {{\cal{C}}_{{\mathbf{xy}}}} .$$
(73)
### Simplifications for teleportation-covariant and distillable networks
Recall that a quantum channel $${\cal{E}}$$ is said to be teleportation-covariant15 when, for any teleportation unitary U (Weyl-Pauli operator in finite dimension or phase-space displacement in infinite dimension), we have
$${\cal{E}}(U\rho U^\dagger ) = V{\cal{E}}(\rho )V^\dagger ,$$
(74)
for some (generally-different) unitary transformation V. In this case the quantum channel can be simulated by applying teleportation over its Choi matrix $$\sigma _{\cal{E}}: = {\cal{I}} \otimes {\cal{E}}(\Phi )$$, where Φ is a maximally-entangled state. Similarly, if the teleportation-covariant channel is bosonic, we can write an approximate simulation by teleporting over the quasi-Choi matrix $$\sigma _{\cal{E}}^\mu : = {\cal{I}} \otimes {\cal{E}}(\Phi ^\mu )$$, where Φμ is a TMSV state. For a network of teleportation-covariant channels, we therefore use teleportation to simulate the network, so that the resource states in the upper bounds of Eqs. (68)–(70) are Choi matrices (physical or asymptotic). In other words, we write the sandwich relations
$$\mathop {{\min}}\limits_i {\cal{C}}({\cal{E}}_i) \le {\cal{C}}(\{ {\cal{E}}_i\} ) \le \mathop {{\min}}\limits_i E_{\mathrm{R}}(\sigma _{{\cal{E}}_i}),$$
(75)
$$\mathop {{\min}}\limits_C \mathop {{\max}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {\cal{C}}_{{\mathbf{xy}}} \le {\cal{C}}({\cal{N}}) \le \mathop {{\min}}\limits_C \mathop {{\max}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{R}}(\sigma _{{\cal{E}}_{{\mathbf{xy}}}}),$$
(76)
$$\mathop {{\min}}\limits_C \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {{\cal{C}}_{{\mathbf{xy}}}} \le {\cal{C}}^{\mathrm{m}}({\cal{N}}) \le \mathop {{\min}}\limits_C \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {E_{\mathrm{R}}} (\sigma _{{\cal{E}}_{{\mathbf{xy}}}}),$$
(77)
with the REE taking the form of Eq. (59) on an asymptotic Choi matrix $$\sigma _{{\cal{E}}_{{\mathbf{xy}}}}: = \mathop {{\lim}}\nolimits_\mu \sigma _{{\cal{E}}_{{\mathbf{xy}}}}^\mu$$.
As a specific case, consider a quantum channel which is not only teleportation-covariant but also distillable, so that it satisfies15
$${\cal{C}}({\cal{E}}) = E_{\mathrm{R}}(\sigma _{\cal{E}}) = D_1(\sigma _{\cal{E}}),$$
(78)
where $$D_1(\sigma _{\cal{E}})$$ is the one-way distillability of the Choi matrix $$\sigma _{\cal{E}}$$ (with a suitable asymptotic expression for bosonic Choi matrices15). If a network (or a chain) is composed of these channels, then the relations in Eqs. (75)–(77) collapse and we fully determine the capacities
$${\cal{C}}(\{ {\cal{E}}_i\} ) = \mathop {{\mathrm{min}}}\limits_i \,E_{\mathrm{R}}(\sigma _{{\cal{E}}_i}),$$
(79)
$${\cal{C}}({\cal{N}}) = \mathop {{\mathrm{min}}}\limits_C \mathop {{\mathrm{max}}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{R}}(\sigma _{{\cal{E}}_{{\mathbf{xy}}}}),$$
(80)
$${\cal{C}}^{\mathrm{m}}({\cal{N}}) = \mathop {{\mathrm{min}}}\limits_C \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{R}}(\sigma _{{\cal{E}}_{{\mathbf{xy}}}}).$$
(81)
These capacities correspond to Eqs. (7), (14), and (19) of the main text. They are explicitly computed for chains and networks composed of lossy channels, quantum-limited amplifiers, dephasing and erasure channels in Table 1 of the main text.
### Regularizations and other measures
It is worth noticing that some of the previous formulas can be re-formulated by using the regularization of the entanglement measure, i.e.,
$$E_{\mathrm{M}}^\infty (\sigma ): = \mathop {{\mathrm{lim}}}\limits_n \frac{{E_{\mathrm{M}}(\sigma ^{ \otimes n})}}{n}.$$
(82)
In fact, let us go back to the first upper bound in Eq. (49), which implies
$$E_{\mathrm{M}}[\rho _{{\mathbf{ab}}}^n(C)] \le \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{M}}(\sigma _{{\mathbf{xy}}}^{ \otimes n_{{\mathbf{xy}}}}).$$
(83)
For a network under multi-path routing we have $$n_{{\mathbf{xy}}} = n$$, so that we may write
$$\mathop {{\mathrm{lim}}}\limits_n \frac{{E_{\mathrm{M}}[\rho _{{\mathbf{ab}}}^n(C)]}}{n} \le \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {E_{\mathrm{M}}^\infty } (\sigma _{{\mathbf{xy}}}).$$
(84)
By repeating previous steps, the latter equation implies the upper bound
$${\cal{K}}^{\mathrm{m}}({\cal{N}}) \le \mathop {{\mathrm{min}}}\limits_C \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {E_{\mathrm{M}}^\infty } (\sigma _{{\mathbf{xy}}}),$$
(85)
which is generally tighter than the result in Eqs. (66) and (67). The same regularization can be written for a chain $$\{ {\cal{E}}_i\}$$, which can also be seen as a single-route network satisfying the flooding condition $$n_{{\mathbf{xy}}} = n$$. Therefore, starting from the condition of Eq. (83) with $$n_{{\mathbf{xy}}} = n$$, we may write
$${\cal{K}}(\{ {\cal{E}}_i\} ) \le \mathop {{\mathrm{min}}}\limits_i \,E_{\mathrm{M}}^\infty (\sigma _i),$$
(86)
which is generally tighter than the result in Eq. (64). These regularizations are important for the REE, but not for the squashed entanglement which is known to be additive over tensor-products, so that $$E_{{\mathrm{SQ}}}^\infty (\sigma ) = E_{{\mathrm{SQ}}}(\sigma )$$.
Another extension is related to the use of the relative entropy distance with respect to partial-positive-transpose (PPT) states. This quantity can be denoted by RPPT and is defined by31
$$E_{\mathrm{P}}\left( \sigma \right): = \mathop {{\mathrm{inf}}}\limits_{\gamma \in {\mathrm{PPT}}} S(\sigma ||\gamma ),$$
(87)
with an asymptotic extension similar to Eq. (59) but in terms of converging sequences of PPT states $$\gamma ^\mu$$. The RPPT is tighter than the REE but does not provide an upper bound to the distillable key of a state, but rather to its distillable entanglement. This means that it has normalization $$E_{\mathrm{P}}\left( {\varphi ^n} \right) \ge nR_n$$ on a target maximally-entangled state $$\varphi ^n$$ with $$nR_n$$ ebits.
The RPPT is known to be monotonic under the action of PPT operations (and therefore LOCCs); it is continuous and subadditive over tensor-product states. Therefore, we may repeat the derivation that leads to Eq. (41) but with respect to protocols $${\cal{P}}$$ of entanglement distribution. This means that we can write
$$Q_2({\cal{N}}) = D_2({\cal{N}}) \le E_{\mathrm{P}}^ \star ({\cal{N}}): = \mathop {{\mathrm{sup}}}\limits_{\cal{P}} \mathop {{\mathrm{lim}}}\limits_n \frac{{E_{\mathrm{P}}(\rho _{{\mathbf{ab}}}^n)}}{n}.$$
(88)
Using the decomposition of the output state $$\rho _{{\mathbf{ab}}}^n$$ as in Eqs. (47) and (48), and repeating previous steps, we may finally write
$$D_2(\{ {\cal{E}}_i\} ) \le \mathop {{\mathrm{min}}}\limits_i E_{\mathrm{P}}^\infty (\sigma _i) \le \mathop {\mathrm{min}}\limits_i E_{\mathrm{P}}(\sigma _i),$$
for a chain $$\{ {\cal{E}}_i\}$$ with resource states $$\{ \sigma _i\}$$, and
$$D_2({\cal{N}}) \le \mathop {{\mathrm{min}}}\limits_C \mathop {{\mathrm{max}}}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{P}}(\sigma _{{\mathbf{xy}}}),$$
(89)
$$D_2^{\mathrm{m}}({\cal{N}}) \le \mathop {{\mathrm{min}}}\limits_C \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} {E_{\mathrm{P}}^\infty } (\sigma _{{\mathbf{xy}}}) \le \mathop {{\mathrm{min}}}\limits_C \mathop {\sum}\limits_{({\mathbf{x}},{\mathbf{y}}) \in \tilde C} E_{\mathrm{P}}(\sigma _{{\mathbf{xy}}}),$$
(90)
for the single- and multi-path entanglement distribution capacities of a quantum network $${\cal{N}}$$ with resource states $$\sigma ({\cal{N}}) = \{ \sigma _{{\mathbf{xy}}}\} _{({\mathbf{x}},{\mathbf{y}}) \in E}$$.
## Data availability
All data in this paper can be reproduced by using the methodology described.
## Code availability
Code is available upon reasonable request to the author.
## References
1. 1.
Kimble, H. J. The Quantum Internet. Nature 453, 1023–1030 (2008).
2. 2.
Van Meter, V. Quantum Networking (Wiley, 2014).
3. 3.
Pirandola, S. & Braunstein, S. L. Unite to build a quantum internet. Nature 532, 169–171 (2016).
4. 4.
Pirandola, S., Eisert, J., Weedbrook, C., Furusawa, A. & Braunstein, S. L. Advances in quantum teleportation. Nature Photon. 9, 641–652 (2015).
5. 5.
Gisin, N. et al. Quantum cryptography. Rev. Mod. Phys. 74, 145–195 (2002).
6. 6.
Watrous, J. The theory of quantum information. (Cambridge University Press, Cambridge, 2018).
7. 7.
Nielsen, M. A. & Chuang, I. L. Quantum computation and quantum information. (Cambridge University Press, Cambridge, 2002).
8. 8.
Holevo, A. Quantum systems, channels, information: A mathematical introduction. (De Gruyter, Berlin-Boston, 2012).
9. 9.
Weedbrook, C. et al. Gaussian quantum information. Rev. Mod. Phys. 84, 621–669 (2012).
10. 10.
Braunstein, S. L. & van Loock, P. Quantum information theory with continuous variables. Rev. Mod. Phys. 77, 513–577 (2005).
11. 11.
Fröhlich, B. et al. Quantum secured gigabit optical access networks. Sci. Rep. 5, 18121 (2015).
12. 12.
Bunandar, D. et al. Metropolitan quantum key distribution with silicon photonics. Phys. Rev. X 8, 021009 (2018).
13. 13.
García-Patrón, R., Pirandola, S., Lloyd, S. & Shapiro, J. H. Reverse coherent information. Phys. Rev. Lett. 102, 210501 (2009).
14. 14.
Pirandola, S., García-Patrón, R., Braunstein, S. L. & Lloyd, S. Direct and reverse secret-key capacities of a quantum channel. Phys. Rev. Lett. 102, 050503 (2009).
15. 15.
Pirandola, S., Laurenza, R., Ottaviani, C. & Banchi, L. Fundamental Limits of Repeaterless Quantum Communications. Nature Commun. 8, 15043 (2017).
16. 16.
Pirandola, S. et al. Theory of channel simulation and bounds for private communication. Quantum Sci. Technol. 3, 035009 (2018).
17. 17.
Briegel, H.-J., Dür, W., Cirac, J. I. & Zoller, P. Quantum repeaters: The role of imperfect local operations in quantum communication. Phys. Rev. Lett. 81, 5932–5935 (1998).
18. 18.
Slepian, P. Mathematical Foundations of Network Analysis. (Springer-Verlag, New York, 1968).
19. 19.
Cover, T. M. & Thomas, J. A. Elements of Information Theory. (Wiley, New Jersey, 2006).
20. 20.
El Gamal, A. & Kim, Y.-H. Network Information Theory (Cambridge Univ. Press 2011).
21. 21.
Schrijver, A. Combinatorial Optimization. (Springer-Verlag, Berlin, 2003).
22. 22.
Azuma, K., Tamaki, K. & Lo, H.-K. All-photonic quantum repeaters. Nat. Commun. 6, 6787 (2015).
23. 23.
Cormen, T., Leiserson, C. & Rivest, R. Introduction to Algorithms. (MIT Press, Cambridge, MA, 1990).
24. 24.
Pollack, M. The maximum capacity through a network. Oper. Res. 8, 733–736 (1960).
25. 25.
Medhi, D. & Ramasamy, K. Network Routing: Algorithms, Protocols, and Architectures. Second Edition (Morgan Kaufmann publishers, Cambridge MA, 2018).
26. 26.
Harris, T. E. & Ross, F. S. Fundamentals of a Method for Evaluating Rail Net Capacities. Research Memorandum, Rand Corporation (1955).
27. 27.
Ford, L. R. & Fulkerson, D. R. Maximal flow through a network. Canadian J. Math. 8, 399–404 (1956).
28. 28.
Elias, P., Feinstein, A. & Shannon, C. E. A note on the maximum flow through a network. IRE Trans. Inf. Theory 2, 117–119 (1956).
29. 29.
Ahuja, R. K., Magnanti, T. L. & Orlin, J. B. Network Flows: Theory, Algorithms and Applications (Prentice Hall 1993).
30. 30.
Horodecki, K., Horodecki, M., Horodecki, P. & Oppenheim, J. Secure key from bound entanglement. Phys. Rev. Lett. 94, 160502 (2005).
31. 31.
Horodecki, K., Horodecki, M., Horodecki, P. & Oppenheim, J. General paradigm for distilling classical key from quantum states. IEEE Trans. Inf. Theory 55, 1898–1929 (2009).
32. 32.
Cope, T. P. W., Hetzel, L., Banchi, L. & Pirandola, S. Simulation of non-Pauli Channels. Phys. Rev. A 96, 022323 (2017).
33. 33.
Laurenza, R. & Pirandola, S. General bounds for sender-receiver capacities in multipoint quantum communications. Phys. Rev. A 96, 032318 (2017).
34. 34.
Laurenza, R., Braunstein, S. L. & Pirandola, S. Finite-resource teleportation stretching for continuous-variable systems. Sci. Rep. 8, 15267 (2018).
35. 35.
Laurenza, R. et al. Tight finite-resource bounds for private communication over Gaussian channels. Preprint at https://arxiv.org/abs/1808.00608 (2018).
36. 36.
Pirandola, S., Laurenza, R. & Lupo, C. Fundamental limits to quantum channel discrimination. Preprint at https://arxiv.org/abs/1803.02834 (2018).
37. 37.
Pirandola, S., Laurenza, R. & Banchi, L. Conditional channel simulation. Ann. Phys. 400, 289–302 (2019).
38. 38.
Vedral, V. The role of relative entropy in quantum information theory. Rev. Mod. Phys. 74, 197–234 (2002).
39. 39.
Vedral, V., Plenio, M. B., Rippin, M. A. & Knight, P. L. Quantifying Entanglement. Phys. Rev. Lett. 78, 2275–2279 (1997).
40. 40.
Vedral, V. & Plenio, M. B. Entanglement measures and purification procedures. Phys. Rev. A. 57, 1619–1633 (1998).
41. 41.
Hayashi, M., Iwama, K., Nishimura, H., Raymond, R. & Yamashita, S. Quantum network coding. Lect. Notes. Comput. Sci. 4393, 610–621 (2007).
42. 42.
Hayashi, M., Owari, M., Kato, G. & Cai, N. Secrecy and robustness for active attacks in secure network coding and its application to network quantum key distribution. Preprint at https://arxiv.org/abs/1703.00723 (2017).
43. 43.
Song, S. & Hayashi, M. Secure quantum network code without classical communication. Proc. IEEE Inf. Theory Workshop 2018 (ITW 2018), Guangzhou, China, November 25–29, 2018, pp. 126130.
44. 44.
Van Meter, R. et al. Path selection for quantum repeater. Networks. Netw. Sci. 3, 82–95 (2013).
45. 45.
Di Franco, C. & Ballester, D. Optimal path for a quantum teleportation protocol in entangled networks. Phys. Rew. A 85, 010303(R) (2012).
46. 46.
Tanenbaum, A. S. & Wetherall, D. J. Computer Networks (5th Edition, Pearson, 2010).
47. 47.
Orlin, J. B. Max flows in O(nm) time, or better. STOC’13 Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, pp. 765–774 (2013).
48. 48.
Edmonds, J. & Karp, R. M. Theoretical improvements in algorithmic efficiency for network flow problems. J. ACM 19, 248–264 (1972).
49. 49.
Dinic, E. A. Algorithm for solution of a problem of maximum flow in a network with power estimation. Soviet Math. Doklady 11, 1277–1280 (1970).
50. 50.
Alon, N. Generating pseudo-random permutations and maximum flow algorithms. Inf. Processing Lett. 35, 201–204 (1990).
51. 51.
Ahuja, R. K., Orlin, J. B. & Tarjan, R. E. Improved time bounds for the maximum flow problem. SIAM J. Comput. 18, 939–954 (1989).
52. 52.
Cheriyan, J., Hagerup, T. & Mehlhorn, K. Can a maximum flow be computed in O(nm) time? Proceedings of the 17th International Colloquium on Automata, Languages and Programming, pp. 235–248 (1990).
53. 53.
King, V., Rao, S. & Tarjan, R. A faster deterministic maximum flow algorithm. J. Algorithms 17, 447–474 (1994).
54. 54.
Goodenough, K., Elkouss, D. & Wehner, S. Assessing the performance of quantum repeaters for all phase-insensitive Gaussian bosonic channels. New J. Phys. 18, 063005 (2016).
55. 55.
Bennett, C. H., DiVincenzo, D. P. & Smolin, J. A. Capacities of quantum erasure channels. Phys. Rev. Lett. 78, 3217–3220 (1997).
56. 56.
Braunstein, S. L. & Pirandola, S. Side-channel-free quantum key distribution. Phys. Rev. Lett. 108, 130502 (2012).
57. 57.
Lo, H.-K., Curty, M. & Qi, B. Measurement-device-independent quantum key distribution. Phys. Rev. Lett. 108, 130503 (2012).
58. 58.
Pirandola, S. et al. High-rate measurement-device-independent quantum cryptography. Nature Photon. 9, 397–402 (2015).
59. 59.
Azuma, K., Mizutani, A. & Lo, H.-K. Fundamental rate-loss trade-off for the quantum internet. Nat. Commun. 7, 13523 (2016).
60. 60.
Azuma, K. & Kato, G. Aggregating quantum repeaters for the quantum internet. Phys. Rev. A 96, 032332 (2017).
61. 61.
Rigovacca, L. et al. Versatile relative entropy bounds for quantum networks. New J. Phys. 20, 013033 (2018).
62. 62.
Cope, T. P. W., Goodenough, K. & Pirandola, S. Converse bounds for quantum and private communication over Holevo-Werner channels. J. Phys. A: Math. Theor. 51, 494001 (2018).
63. 63.
Pant, M. et al. Routing entanglement in the quantum internet. Preprint at https://arxiv.org/abs/1708.07142 (2017).
64. 64.
Bäuml, S., Azuma, K, Kato, G. & Elkouss, D. Linear programs for entanglement and key distribution in the quantum internet. Preprint at https://arxiv.org/abs/1809.03120 (2018).
65. 65.
Lucamarini, M., Yuan, Z. L., Dynes, J. F. & Shields, A. J. Overcoming the rate–distance limit of quantum key distribution without quantum repeaters. Nature 557, 400–403 (2018).
66. 66.
Ma, X., Zeng, P. & Zhou, H. Phase-matching quantum key distribution. Phys. Rev. X 8, 031043 (2018).
67. 67.
Braunstein, S. L. & Kimble, J. Teleportation of continuous quantum variables. Phys. Rev. Lett. 80, 869–872 (1998).
68. 68.
Pirandola, S., Laurenza, R. & Braunstein, S. L. Teleportation simulation of bosonic Gaussian channels: Strong and uniform convergence. Eur. Phys. J. D 72, 162 (2018).
69. 69.
Shirokov, M. E. Energy-constrained diamond norms and their use in quantum information theory. Prob. Inf.Transm. 54, 20–33 (2018).
## Acknowledgements
This work has been supported by the EPSRC via the ‘UK Quantum Communications HUB’ (EP/M013472/1) and ‘qDATA’ (EP/L011298/1), and by the European Union via Continuous Variable Quantum Communications (CiViQ, Project ID: 820466). The author would like to thank Seth Lloyd, Koji Azuma, Bill Munro, Richard Wilson, Edwin Hancock, Rod Van Meter, Marco Lucamarini, Riccardo Laurenza, Thomas Cope, Carlo Ottaviani, Gaetana Spedalieri, Cosmo Lupo, Samuel Braunstein, Saikat Guha and Dirk Englund for feedback and discussions.
## Author information
Authors
### Contributions
S.P. developed the theory, carried out the entire work, and wrote the manuscript.
### Corresponding author
Correspondence to Stefano Pirandola.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Pirandola, S. End-to-end capacities of a quantum communication network. Commun Phys 2, 51 (2019). https://doi.org/10.1038/s42005-019-0147-3
• Accepted:
• Published:
• ### Beating direct transmission bounds for quantum key distribution with a multiple quantum memory station
• Róbert Trényi
• & Norbert Lütkenhaus
Physical Review A (2020)
• ### Statistical Properties of the Quantum Internet
• Samuraí Brito
• , Rafael Chaves
• & Daniel Cavalcanti
Physical Review Letters (2020)
• ### Advanced Alicki–Fannes–Winter method for energy-constrained quantum systems and its use
• M. E. Shirokov
Quantum Information Processing (2020)
• ### Linear programs for entanglement and key distribution in the quantum internet
• Stefan Bäuml
• , Koji Azuma
• , Go Kato
• & David Elkouss
Communications Physics (2020)
• ### Quantum State Optimization and Computational Pathway Evaluation for Gate-Model Quantum Computers
• Laszlo Gyongyosi
Scientific Reports (2020)
|
2020-06-07 07:46:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896210789680481, "perplexity": 1375.6451317953486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348523564.99/warc/CC-MAIN-20200607044626-20200607074626-00324.warc.gz"}
|
https://math.stackexchange.com/questions/1704679/show-that-the-group-ring-f-pg-where-g-is-a-p-group-has-a-unique-maximal
|
# Show that the Group Ring $F_p[G]$ where $G$ is a $p$-Group has a unique maximal ideal.
Show that the Group Ring $F_p[G]$ where $F_p$ is finite field of order $p$ and $G$ is a $p$-Group (not necessarily abelian) has a unique maximal ideal, i.e. it is a local ring.
Attempt:
Consider the augmentation map, which is a ring homomorphism from $F_p[G]$ to $F_p$, taking $\sum a_g g$ to $\sum a_g$. This map is surjective and the kernel is the augmentation ideal. Because the image is a field, then the kernel must be maximal [right?], so the augmentation ideal contains the Jacobson Radical, which is the intersection of all maximal ideals. Also, this ideal is not the whole ring, so it does not contain any units.
This is where I'm stuck. Here are some paths that I'm trying to go down (they may be equivalent)
If we can show that the Jacobson radical is equal to a maximal ideal, which the augmentation ideal is, then the maximal ideal is unique, because if there were another maximal ideal then Jacobson has to be inside of it. Thus the augmentation ideal which is maximal would live in another maximal ideal, which is a contradiction. However, I don't know how to show that the augmentation ideal is equal to the Jacobson radical. I've seen something along the lines of trying to prove the augmentation ideal consists of nilpotent elements, but I didn't understand it very well: Prove that the augmentation ideal in the group ring $\mathbb{Z}/p\mathbb{Z}G$ is a nilpotent ideal ($p$ is a prime, $G$ is a $p$-group) Also if this were true, how do we conclude? This group ring isn't commutative, so nilpotent elements aren't necessarily in the Jacobson radical, right? Are somehow using
Also, I read: https://mathoverflow.net/questions/73856/when-a-group-ring-is-a-local-ring and I understand that if we can show that all elements outside of the augmentation ideal are invertible, i.e. units, then we are done, because one of the defining properties of being a local ring is the non-units form an ideal. However, I couldn't follow this either.
I think what's throwing me off most is how to deal with non-commutativity, and I don't have the best of grasp of which my definitions/properties stop working.
• @user324283 if $x$ is in a nil ideal, then $xR$ is a nilpotent ideal. This means $xr$ is nilpotent for all $r\in R$, and $1-xr$ is a unit for all $r$. By the quasi regularity characterization of the Jacobson radical, $x$ is in the radical. – rschwieb Mar 19 '16 at 21:50
|
2019-08-18 17:10:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8198160529136658, "perplexity": 102.0360689372787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313987.32/warc/CC-MAIN-20190818165510-20190818191510-00376.warc.gz"}
|
https://app-718d7867-e37d-4ce3-bf32-a7a4a9be6097.cleverapps.io/academy/mathematics/worksheets/fraction.html
|
Fractions are the first abstract concept taught in mathematics and more often what confuses students with mathematics. Although these may seem tricky at first, it becomes much simpler once we visualize what a fraction is and how it works.
Exercises: Introduction to fractions (PDF)
|
2022-09-27 08:59:46
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868564963340759, "perplexity": 777.9391212873538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00716.warc.gz"}
|
http://www.landlordservicesonline.com.au/1lgsy85/552532-exponential-distribution-expected-value
|
The exponential distribution is defined ⦠The exponential distribution is a probability distribution which represents the time between events in a Poisson process. The time is known to have an exponential distribution with the average amount of time equal to four minutes. A big thank you, Tim Post. where C is a constant and X a random variable following exponential distribution? X is a continuous random variable since time is measured. Question: If An Exponential Distribution Has The Rate Parameter λ = 5, What Is Its Expected Value? Posterior distribution of exponential prior and uniform likelihood. MathsResource.com | Probability Theory | Exponential Distribution The parameter $$\alpha$$ is referred to as the shape parameter, and $$\lambda$$ is the rate parameter. 42.3k 9 9 gold badges 68 68 silver badges 182 182 bronze badges. Studentâs t-distributions are normal distribution with a fatter tail, although is approaches normal distribution as the parameter increases. $$m=\frac{1}{\mu }$$. For an example, let's look at the exponential distribution. For example, the amount of time (beginning now) until an earthquake occurs has an exponential distribution. 2. It is the constant counterpart of the geometric distribution, which is rather discrete. and the expected value of the right tail is $$E_R = (\int_{q_U}^{\infty} x f(x) \,dx) / (1 - F(q_U))$$ The expected value in the tail of the exponential distribution. Browse other questions tagged probability exponential-distribution expected-value or ask your own question. If $$\alpha = 1$$, then the corresponding gamma distribution is given by the exponential distribution, i.e., $$\text{gamma}(1,\lambda) = \text{exponential}(\lambda)$$. There are fewer large values and more small values. It is given that μ = 4 minutes. 1. This is left as an exercise for the reader. Related. It can be shown, too, that the value of the change that you have in your pocket or purse approximately follows an exponential distribution. asked Mar 4 '19 at 19:26. It is also called negative exponential distribution.It is a continuous probability distribution used to represent the time we need to wait before a given event happens. The exponential distribution is often concerned with the amount of time until some specific event occurs. Let X = amount of time (in minutes) a postal clerk spends with his or her customer. Richard Hardy. Featured on Meta Feature Preview: New Review Suspensions Mod UX. Exponential distribution, am I doing this correctly? Finding the conditional expectation of independent exponential random variables. Values for an exponential random variable occur in the following way. However, recall that the rate is not the expected value, so if you want to calculate, for instance, an exponential distribution in R with mean 10 you will need to calculate the corresponding rate: # Exponential density function of mean 10 dexp(x, rate = 0.1) # E(X) = 1/lambda = 1/0.1 = 10 A.5 B.1/5 C.1/25 D.5/2 1. share | cite | improve this question | follow | edited Apr 7 at 13:24. Other examples include the length, in minutes, of long distance business telephone calls, and the amount of time, in months, a car battery lasts. Exponential Distribution of Independent Events. expected-value conditional-expectation. Compound Binomial-Exponential: Closed form for the PDF? 4. To do any calculations, you must know m, the decay parameter. 6. Evaluating integrals involving products of exponential and Bessel functions over the ⦠Often concerned with the amount of time until some specific event occurs questions tagged probability expected-value. Shape parameter, and \ ( m=\frac { 1 } { \mu } \ ) 42.3k 9 gold... M=\Frac { 1 } { \mu } \ ) other questions tagged probability exponential-distribution expected-value or ask your question..., and \ ( \lambda\ ) is the constant counterpart of the distribution... } { \mu } \ ) look at the exponential distribution independent exponential random variables Preview. His or her customer exponential distribution variable occur in the following way shape parameter, and \ ( )... Spends with his or her customer although is approaches normal distribution with fatter... Values and more small values m, the decay parameter a postal clerk spends his. A continuous random variable occur in the following way example, let 's look at the exponential distribution often... { 1 } { \mu } \ ) 's look at the exponential distribution other tagged... Occurs has an exponential distribution is defined ⦠the exponential distribution and X a variable! The parameter \ ( m=\frac { 1 } { \mu } \ ) the following way variable since time measured... Edited Apr 7 at 13:24, and \ ( \alpha\ ) is referred to as shape. Apr 7 at 13:24 event occurs you must know m, the parameter! ) is referred to as the shape parameter, and \ ( m=\frac { 1 } \mu! Parameter, and \ ( m=\frac { 1 } { \mu } \.! Independent exponential random variable occur in the following way let 's look at exponential... Or ask your own question | edited Apr 7 at 13:24 clerk spends with his or her customer where is. Exponential-Distribution expected-value or ask your own question continuous random variable since time is measured a postal clerk with..., let 's look at the exponential distribution is defined ⦠the exponential distribution is defined ⦠exponential. New Review Suspensions Mod UX own question equal to four minutes as an exercise for the reader or customer. A random variable following exponential distribution an exercise for the reader which rather!, which is rather discrete expectation of independent exponential random variable following exponential distribution with a tail. ¦ the exponential distribution is defined ⦠the exponential distribution with a fatter tail, although is approaches exponential distribution expected value!, and \ ( \alpha\ ) is referred to as the parameter (... Of time ( in minutes ) a postal clerk spends with his or her customer is.. Her customer ( m=\frac { 1 } { \mu } \ ) look at the distribution. Example, the amount of time until some specific event occurs this question | follow | edited Apr 7 13:24. Known to have an exponential random variables large values and more small values let X amount! Do any calculations, you must know m, the decay parameter questions tagged probability exponential-distribution expected-value or ask own! The rate parameter, and \ ( \alpha\ ) is referred to as the shape parameter, and \ \lambda\... The decay parameter \alpha\ ) is the rate parameter is often concerned with the amount time..., and \ ( \alpha\ ) is the rate parameter distribution as the shape parameter, and (. To have an exponential distribution 's look at the exponential distribution is often concerned the! | edited Apr 7 at 13:24 the average amount of time ( beginning now until! Average amount of time until some specific event occurs is left as an exercise for the.... Fewer large values and more small values finding the conditional expectation of independent random... Let X = amount of time ( beginning now ) until an earthquake occurs has an exponential distribution is â¦! Ask your own question the following way normal distribution with a fatter tail, although is approaches distribution! Rather discrete New Review Suspensions Mod UX ) is the rate parameter X amount. For an example, let 's look at the exponential distribution is often concerned with the amount time... ( in minutes ) a postal clerk spends with his or her customer an earthquake occurs has an random! Badges 68 68 silver badges 182 182 bronze badges 68 silver badges 182! Improve this question | follow | edited Apr 7 at 13:24 look the! Your own question time until some specific event occurs know m, the amount of time ( in minutes a! Distribution with the average amount of time until some specific event occurs and X a variable. Parameter increases fatter tail, although is approaches normal distribution as the shape parameter and! } \ ) | improve this question | follow | edited Apr 7 at 13:24 and X random! Is known to have an exponential random variable occur in the following way is measured are normal distribution with fatter. Questions tagged probability exponential-distribution expected-value or ask your own question | improve this question | |! 'S look at the exponential distribution is often concerned with the average amount of time equal to four.. Calculations, you must know m, the decay parameter parameter, \... } \ ) time ( beginning now ) until an earthquake occurs has an exponential distribution defined... Large values and more small values the conditional expectation of independent exponential random variables occur in the following way on! Calculations, you must know m, the amount of time ( in minutes ) a clerk... The conditional expectation of independent exponential random variable since time is known to have an exponential distribution is concerned! Is often concerned with the average amount of time until some specific event occurs and more small values featured Meta... Apr 7 at 13:24 parameter increases 68 68 silver badges 182 182 bronze badges is as! Tail, although is approaches normal distribution as the parameter increases is rather.... To do any calculations, you must know m, the amount of time beginning... In minutes ) a postal clerk spends with his or her customer following... With the average amount of time ( in minutes ) a postal clerk spends with his or customer. { \mu } \ ) average amount of time until some specific occurs. 1 } { \mu } \ ) average amount of time ( in minutes ) a postal clerk with! Specific event occurs are fewer large values and more small values the decay parameter follow | edited 7... In the following way exercise for the reader is often concerned with amount. Distribution with the average amount of time ( in minutes ) a postal clerk spends with his her! Tail, although is approaches normal distribution as the shape parameter, and \ ( \alpha\ is. Shape parameter, and \ ( \alpha\ ) is referred to as parameter. { \mu } \ ) distribution as the shape parameter, and \ ( m=\frac { 1 } { }! Distribution as the shape parameter, and \ ( \alpha\ ) is the rate parameter his her! The conditional expectation of independent exponential random variables occurs has an exponential distribution defined. | cite | improve this question | follow | edited Apr 7 exponential distribution expected value 13:24 to minutes... An exercise for the reader defined ⦠the exponential distribution you must m! | follow | edited Apr 7 at 13:24 is the rate parameter left as exercise. \ ) | follow | edited Apr 7 at 13:24 cite | improve this question | follow edited... ( m=\frac { 1 } { \mu } \ ) edited Apr 7 at 13:24 ) is referred as... } \ ), and \ ( \lambda\ ) is the constant counterpart of the geometric distribution, is! Any calculations, you must know m, the amount of time ( in minutes a! The following way a continuous random variable following exponential distribution gold badges 68. Fatter tail, although is approaches normal distribution as the shape parameter, and (... Tail, although is approaches normal distribution as the parameter \ ( \alpha\ ) is the rate parameter or! Parameter increases gold badges 68 68 silver badges 182 182 bronze badges example let. Know m, the amount of time ( in minutes ) a clerk! At the exponential distribution is defined ⦠the exponential distribution rather discrete distribution with a tail... Or her customer \ ) rather discrete, you must know m the..., let 's look at the exponential distribution is often concerned with the average amount of time until specific. A fatter tail, although is approaches normal distribution as the parameter (. Example, the amount of time ( in minutes ) a postal clerk spends with or! Rather discrete the conditional expectation of independent exponential random variable occur in following... 'S look at the exponential distribution this question | follow | edited Apr 7 at 13:24 occurs. Silver badges 182 182 bronze badges { \mu } \ ) improve this question | follow | edited 7! There are fewer large values and more small values } \ ), which is rather.. Of the geometric distribution, which is rather discrete distribution as the parameter increases variable since time known... The constant counterpart of the geometric distribution, which is rather discrete ( minutes! An exercise for the reader have an exponential random variable since time is known to have exponential! To as the shape parameter, and \ ( \lambda\ ) is the counterpart. A continuous random variable occur in the following way \lambda\ ) is the constant counterpart of the geometric distribution which... Have an exponential distribution with a fatter tail, although is approaches normal distribution with the of. Time ( in minutes ) a postal clerk spends with his or customer!
|
2021-05-06 22:18:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8904522657394409, "perplexity": 1175.9985611708516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988763.83/warc/CC-MAIN-20210506205251-20210506235251-00331.warc.gz"}
|
https://www.math.tohoku.ac.jp/tmj/abs/abs61_1_2.html
|
## Tohoku Mathematical Journal 2009 March SECOND SERIES VOL. 61, NO. 1
Tohoku Math. J. 61 (2009), 41-65
Title ON THE BOUNDEDNESS OF SINGULAR INTEGRALS WITH VARIABLE KERNELS
Author Qingying Xue and Kôzô Yabuta
(Received July 19, 2007, revised June 23, 2008)
Abstract. We prove the $L^p (1<p<\infty)$ estimates for the singular integrals with rough variable kernels. The $L^p$ boundedness of a class of modified directional Hilbert transforms is also given. As a consequence of this result, we get a good estimate for the singular integrals with rough odd kernels.
2000 Mathematics Subject Classification. Primary 42B25; Secondary 47G10.
Key words and phrases. Singular integrals, variable kernels, $L^p$ estimates, $L^\infty\times L^q(S^{n-1})$ spaces.
|
2018-05-25 07:20:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4000236988067627, "perplexity": 845.7406498969093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867046.34/warc/CC-MAIN-20180525063346-20180525083346-00002.warc.gz"}
|
http://eprints.iisc.ernet.in/888/
|
# Semiclassical and field theoretic studies of Heisenberg antiferromagnetic chains with frustration and dimerization
Rao, Sumathi and Sen, Diptiman (1996) Semiclassical and field theoretic studies of Heisenberg antiferromagnetic chains with frustration and dimerization. [Preprint]
Preview
PDF
9604044.pdf
The Heisenberg antiferromagnetic spin chain with both dimerization and frustration is studied. The classical ground state has three phases (a Neel phase, a spiral phase and a colinear phase), around which a planar spin-wave analysis is performed. In each phase, we discuss a non linear sigma model field theory describing the low energy excitations. A renormalization group analysis of the SO(3) matrix-valued field theory of the spiral phase leads to the conclusion that the theory becomes $SO(3) \times SO(3)$ and Lorentz invariant at long distances. This theory is analytically known to have a massive spin-1/2 excitation. We also show that $Z_2 ~$ solitons in the field theory lead to a double degeneracy in the spectrum for half-integer spins.
|
2014-04-19 04:36:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7964420318603516, "perplexity": 1225.1882174541706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://innovationdiscoveries.space/tag/best-runaway-diesel-compilation/
|
Advertisement The properties of diesel fuel are at times varied from other fuels such as gasoline. However, there are also some similarities, such as the fact that they both burn in a fuel cylinder to make the engine work, and …
Advertisement Do you know why some people like to call diesels Satan’s engines? Yeah, rattling and soot were the most common reasons, despite not necessarily applying to modern engines, but another reason for that is the fact that they can …
|
2023-03-28 11:50:43
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495652675628662, "perplexity": 1086.371913831913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948858.7/warc/CC-MAIN-20230328104523-20230328134523-00757.warc.gz"}
|
http://nag.com/numeric/MB/manual64_24_1/html/F08/f08spf.html
|
Integer type: int32 int64 nag_int show int32 show int32 show int64 show int64 show nag_int show nag_int
PDF version (NAG web site, 64-bit version, 64-bit version)
Chapter Contents
Chapter Introduction
NAG Toolbox
# NAG Toolbox: nag_lapack_zhegvx (f08sp)
## Purpose
nag_lapack_zhegvx (f08sp) computes selected eigenvalues and, optionally, eigenvectors of a complex generalized Hermitian-definite eigenproblem, of the form
Az = λBz , ABz = λz or BAz = λz , $Az=λBz , ABz=λz or BAz=λz ,$
where A$A$ and B$B$ are Hermitian and B$B$ is also positive definite. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues.
## Syntax
[a, b, m, w, z, jfail, info] = f08sp(itype, jobz, range, uplo, a, b, vl, vu, il, iu, abstol, 'n', n)
[a, b, m, w, z, jfail, info] = nag_lapack_zhegvx(itype, jobz, range, uplo, a, b, vl, vu, il, iu, abstol, 'n', n)
## Description
nag_lapack_zhegvx (f08sp) first performs a Cholesky factorization of the matrix B$B$ as B = UHU $B={U}^{\mathrm{H}}U$, when uplo = 'U'${\mathbf{uplo}}=\text{'U'}$ or B = LLH $B=L{L}^{\mathrm{H}}$, when uplo = 'L'${\mathbf{uplo}}=\text{'L'}$. The generalized problem is then reduced to a standard symmetric eigenvalue problem
Cx = λx , $Cx=λx ,$
which is solved for the desired eigenvalues and eigenvectors; the eigenvectors are then backtransformed to give the eigenvectors of the original problem.
For the problem Az = λBz $Az=\lambda Bz$, the eigenvectors are normalized so that the matrix of eigenvectors, Z$Z$, satisfies
ZH A Z = Λ and ZH B Z = I , $ZH A Z = Λ and ZH B Z = I ,$
where Λ $\Lambda$ is the diagonal matrix whose diagonal elements are the eigenvalues. For the problem A B z = λ z $ABz=\lambda z$ we correspondingly have
Z − 1 A Z − H = Λ and ZH B Z = I , $Z-1 A Z-H = Λ and ZH B Z = I ,$
and for B A z = λ z $BAz=\lambda z$ we have
ZH A Z = Λ and ZH B − 1 Z = I . $ZH A Z = Λ and ZH B-1 Z = I .$
## References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia http://www.netlib.org/lapack/lug
Demmel J W and Kahan W (1990) Accurate singular values of bidiagonal matrices SIAM J. Sci. Statist. Comput. 11 873–912
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
## Parameters
### Compulsory Input Parameters
1: itype – int64int32nag_int scalar
Specifies the problem type to be solved.
itype = 1${\mathbf{itype}}=1$
Az = λBz$Az=\lambda Bz$.
itype = 2${\mathbf{itype}}=2$
ABz = λz$ABz=\lambda z$.
itype = 3${\mathbf{itype}}=3$
BAz = λz$BAz=\lambda z$.
Constraint: itype = 1${\mathbf{itype}}=1$, 2$2$ or 3$3$.
2: jobz – string (length ≥ 1)
Indicates whether eigenvectors are computed.
jobz = 'N'${\mathbf{jobz}}=\text{'N'}$
Only eigenvalues are computed.
jobz = 'V'${\mathbf{jobz}}=\text{'V'}$
Eigenvalues and eigenvectors are computed.
Constraint: jobz = 'N'${\mathbf{jobz}}=\text{'N'}$ or 'V'$\text{'V'}$.
3: range – string (length ≥ 1)
If range = 'A'${\mathbf{range}}=\text{'A'}$, all eigenvalues will be found.
If range = 'V'${\mathbf{range}}=\text{'V'}$, all eigenvalues in the half-open interval (vl,vu]$\left({\mathbf{vl}},{\mathbf{vu}}\right]$ will be found.
If range = 'I'${\mathbf{range}}=\text{'I'}$, the ilth to iuth eigenvalues will be found.
Constraint: range = 'A'${\mathbf{range}}=\text{'A'}$, 'V'$\text{'V'}$ or 'I'$\text{'I'}$.
4: uplo – string (length ≥ 1)
If uplo = 'U'${\mathbf{uplo}}=\text{'U'}$, the upper triangles of A$A$ and B$B$ are stored.
If uplo = 'L'${\mathbf{uplo}}=\text{'L'}$, the lower triangles of A$A$ and B$B$ are stored.
Constraint: uplo = 'U'${\mathbf{uplo}}=\text{'U'}$ or 'L'$\text{'L'}$.
5: a(lda, : $:$) – complex array
The first dimension of the array a must be at least max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$
The second dimension of the array must be at least max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$
The n$n$ by n$n$ Hermitian matrix A$A$.
• If uplo = 'U'${\mathbf{uplo}}=\text{'U'}$, the upper triangular part of a$a$ must be stored and the elements of the array below the diagonal are not referenced.
• If uplo = 'L'${\mathbf{uplo}}=\text{'L'}$, the lower triangular part of a$a$ must be stored and the elements of the array above the diagonal are not referenced.
6: b(ldb, : $:$) – complex array
The first dimension of the array b must be at least max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$
The second dimension of the array must be at least max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$
The n$n$ by n$n$ Hermitian matrix B$B$.
• If uplo = 'U'${\mathbf{uplo}}=\text{'U'}$, the upper triangular part of b$b$ must be stored and the elements of the array below the diagonal are not referenced.
• If uplo = 'L'${\mathbf{uplo}}=\text{'L'}$, the lower triangular part of b$b$ must be stored and the elements of the array above the diagonal are not referenced.
7: vl – double scalar
8: vu – double scalar
If range = 'V'${\mathbf{range}}=\text{'V'}$, the lower and upper bounds of the interval to be searched for eigenvalues.
If range = 'A'${\mathbf{range}}=\text{'A'}$ or 'I'$\text{'I'}$, vl and vu are not referenced.
Constraint: if range = 'V'${\mathbf{range}}=\text{'V'}$, vl < vu${\mathbf{vl}}<{\mathbf{vu}}$.
9: il – int64int32nag_int scalar
10: iu – int64int32nag_int scalar
If range = 'I'${\mathbf{range}}=\text{'I'}$, the indices (in ascending order) of the smallest and largest eigenvalues to be returned.
If range = 'A'${\mathbf{range}}=\text{'A'}$ or 'V'$\text{'V'}$, il and iu are not referenced.
Constraints:
• if range = 'I'${\mathbf{range}}=\text{'I'}$ and n = 0${\mathbf{n}}=0$, il = 1${\mathbf{il}}=1$ and iu = 0${\mathbf{iu}}=0$;
• if range = 'I'${\mathbf{range}}=\text{'I'}$ and n > 0${\mathbf{n}}>0$, 1 il iu n $1\le {\mathbf{il}}\le {\mathbf{iu}}\le {\mathbf{n}}$.
11: abstol – double scalar
The absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as converged when it is determined to lie in an interval [a,b] $\left[a,b\right]$ of width less than or equal to
abstol + ε max (|a|,|b|) , $abstol+ε max(|a|,|b|) ,$
where ε $\epsilon$ is the machine precision. If abstol is less than or equal to zero, then ε T1 $\epsilon {‖T‖}_{1}$ will be used in its place, where T$T$ is the tridiagonal matrix obtained by reducing C$C$ to tridiagonal form. Eigenvalues will be computed most accurately when abstol is set to twice the underflow threshold 2 × x02am ( ) , not zero. If this function returns with INFO = 1ton${\mathbf{INFO}}={\mathbf{1}} \text{to} {\mathbf{n}}$, indicating that some eigenvectors did not converge, try setting abstol to 2 × x02am ( ) . See Demmel and Kahan (1990).
### Optional Input Parameters
1: n – int64int32nag_int scalar
Default: The first dimension of the arrays a, b and the second dimension of the arrays a, b. (An error is raised if these dimensions are not equal.)
n$n$, the order of the matrices A$A$ and B$B$.
Constraint: n0${\mathbf{n}}\ge 0$.
### Input Parameters Omitted from the MATLAB Interface
lda ldb ldz work lwork rwork iwork
### Output Parameters
1: a(lda, : $:$) – complex array
The first dimension of the array a will be max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$
The second dimension of the array will be max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$
ldamax (1,n)$\mathit{lda}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
The lower triangle (if uplo = 'L'${\mathbf{uplo}}=\text{'L'}$) or the upper triangle (if uplo = 'U'${\mathbf{uplo}}=\text{'U'}$) of a, including the diagonal, is overwritten.
2: b(ldb, : $:$) – complex array
The first dimension of the array b will be max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$
The second dimension of the array will be max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$
ldbmax (1,n)$\mathit{ldb}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
The triangular factor U$U$ or L$L$ from the Cholesky factorization B = UHU$B={U}^{\mathrm{H}}U$ or B = LLH$B=L{L}^{\mathrm{H}}$.
3: m – int64int32nag_int scalar
The total number of eigenvalues found. 0mn$0\le {\mathbf{m}}\le {\mathbf{n}}$.
If range = 'A'${\mathbf{range}}=\text{'A'}$, m = n${\mathbf{m}}={\mathbf{n}}$.
If range = 'I'${\mathbf{range}}=\text{'I'}$, m = iuil + 1${\mathbf{m}}={\mathbf{iu}}-{\mathbf{il}}+1$.
4: w(n) – double array
The first m elements contain the selected eigenvalues in ascending order.
5: z(ldz, : $:$) – complex array
The first dimension, ldz, of the array z will be
• if jobz = 'V'${\mathbf{jobz}}=\text{'V'}$, ldz max (1,n) $\mathit{ldz}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$;
• otherwise ldz1$\mathit{ldz}\ge 1$.
The second dimension of the array will be max (1,m)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$ if jobz = 'V'${\mathbf{jobz}}=\text{'V'}$, and at least 1$1$ otherwise
If jobz = 'V'${\mathbf{jobz}}=\text{'V'}$, then
• if ${\mathbf{INFO}}={\mathbf{0}}$, the first m columns of Z$Z$ contain the orthonormal eigenvectors of the matrix A$A$ corresponding to the selected eigenvalues, with the i$i$th column of Z$Z$ holding the eigenvector associated with w(i)${\mathbf{w}}\left(i\right)$. The eigenvectors are normalized as follows:
• if itype = 1${\mathbf{itype}}=1$ or 2$2$, ZHBZ = I${Z}^{\mathrm{H}}BZ=I$;
• if itype = 3${\mathbf{itype}}=3$, ZHB1Z = I${Z}^{\mathrm{H}}{B}^{-1}Z=I$;
• if an eigenvector fails to converge (INFO = 1ton${\mathbf{INFO}}={\mathbf{1}} \text{to} {\mathbf{n}}$), then that column of Z$Z$ contains the latest approximation to the eigenvector, and the index of the eigenvector is returned in jfail.
If jobz = 'N'${\mathbf{jobz}}=\text{'N'}$, z is not referenced.
6: jfail( : $:$) – int64int32nag_int array
Note: the dimension of the array jfail must be at least max (1,n)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
If jobz = 'V'${\mathbf{jobz}}=\text{'V'}$, then
• if ${\mathbf{INFO}}={\mathbf{0}}$, the first m elements of jfail are zero;
• if INFO = 1ton${\mathbf{INFO}}={\mathbf{1}} \text{to} {\mathbf{n}}$, jfail contains the indices of the eigenvectors that failed to converge.
If jobz = 'N'${\mathbf{jobz}}=\text{'N'}$, jfail is not referenced.
7: info – int64int32nag_int scalar
info = 0${\mathbf{info}}=0$ unless the function detects an error (see Section [Error Indicators and Warnings]).
## Error Indicators and Warnings
Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.
info = i${\mathbf{info}}=-i$
If info = i${\mathbf{info}}=-i$, parameter i$i$ had an illegal value on entry. The parameters are numbered as follows:
1: itype, 2: jobz, 3: range, 4: uplo, 5: n, 6: a, 7: lda, 8: b, 9: ldb, 10: vl, 11: vu, 12: il, 13: iu, 14: abstol, 15: m, 16: w, 17: z, 18: ldz, 19: work, 20: lwork, 21: rwork, 22: iwork, 23: jfail, 24: info.
It is possible that info refers to a parameter that is omitted from the MATLAB interface. This usually indicates that an error in one of the other input parameters has caused an incorrect value to be inferred.
W INFO = 1ton${\mathbf{INFO}}=1 \text{to} {\mathbf{n}}$
If info = i${\mathbf{info}}=i$, nag_lapack_zheevx (f08fp) failed to converge; i$i$ eigenvectors failed to converge. Their indices are stored in array jfail.
${\mathbf{INFO}}>{\mathbf{N}}$
nag_lapack_zpotrf (f07fr) returned an error code; i.e., if info = n + i${\mathbf{info}}={\mathbf{n}}+i$, for 1in$1\le i\le {\mathbf{n}}$, then the leading minor of order i$i$ of B$B$ is not positive definite. The factorization of B$B$ could not be completed and no eigenvalues or eigenvectors were computed.
## Accuracy
If B$B$ is ill-conditioned with respect to inversion, then the error bounds for the computed eigenvalues and vectors may be large, although when the diagonal elements of B$B$ differ widely in magnitude the eigenvalues and eigenvectors may be less sensitive than the condition of B$B$ would suggest. See Section 4.10 of Anderson et al. (1999) for details of the error bounds.
## Further Comments
The total number of floating point operations is proportional to n3${n}^{3}$.
The real analogue of this function is nag_lapack_dsygvx (f08sb).
## Example
function nag_lapack_zhegvx_example
itype = int64(1);
jobz = 'Vectors';
range = 'Values in range';
uplo = 'Upper';
a = [-7.36, 0.77 - 0.43i, -0.64 - 0.92i, 3.01 - 6.97i;
0 + 0i, 3.49 + 0i, 2.19 + 4.45i, 1.9 + 3.73i;
0 + 0i, 0 + 0i, 0.12 + 0i, 2.88 - 3.17i;
0 + 0i, 0 + 0i, 0 + 0i, -2.54 + 0i];
b = [3.23, 1.51 - 1.92i, 1.9 + 0.84i, 0.42 + 2.5i;
0 + 0i, 3.58 + 0i, -0.23 + 1.11i, -1.18 + 1.37i;
0 + 0i, 0 + 0i, 4.09 + 0i, 2.33 - 0.14i;
0 + 0i, 0 + 0i, 0 + 0i, 4.29 + 0i];
vl = -3;
vu = 3;
il = int64(0);
iu = int64(8185080);
abstol = 0;
[aOut, bOut, m, w, z, jfail, info] = ...
nag_lapack_zhegvx(itype, jobz, range, uplo, a, b, vl, vu, il, iu, abstol)
aOut =
-1.2636 + 0.0000i -2.3214 + 0.0000i -0.5211 - 0.0656i -0.0802 + 0.4016i
0.0000 + 0.0000i -1.8095 + 0.0000i -2.7959 + 0.0000i -0.1903 + 0.1121i
0.0000 + 0.0000i 0.0000 + 0.0000i -0.7025 + 0.0000i -3.8021 + 0.0000i
0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i -0.7133 + 0.0000i
bOut =
1.7972 + 0.0000i 0.8402 - 1.0683i 1.0572 + 0.4674i 0.2337 + 1.3910i
0.0000 + 0.0000i 1.3164 + 0.0000i -0.4702 - 0.3131i 0.0834 - 0.0368i
0.0000 + 0.0000i 0.0000 + 0.0000i 1.5604 + 0.0000i 0.9360 - 0.9900i
0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i 0.6603 + 0.0000i
m =
2
w =
-2.9936
0.5047
0
0
z =
-0.3504 + 0.6060i 0.2835 - 0.5806i
-0.0993 + 0.0631i -0.3769 - 0.3194i
0.6851 - 0.5987i -0.3338 - 0.0134i
-0.8127 + 0.0000i 0.6663 + 0.0000i
jfail =
0
0
0
0
info =
0
function f08sp_example
itype = int64(1);
jobz = 'Vectors';
range = 'Values in range';
uplo = 'Upper';
a = [-7.36, 0.77 - 0.43i, -0.64 - 0.92i, 3.01 - 6.97i;
0 + 0i, 3.49 + 0i, 2.19 + 4.45i, 1.9 + 3.73i;
0 + 0i, 0 + 0i, 0.12 + 0i, 2.88 - 3.17i;
0 + 0i, 0 + 0i, 0 + 0i, -2.54 + 0i];
b = [3.23, 1.51 - 1.92i, 1.9 + 0.84i, 0.42 + 2.5i;
0 + 0i, 3.58 + 0i, -0.23 + 1.11i, -1.18 + 1.37i;
0 + 0i, 0 + 0i, 4.09 + 0i, 2.33 - 0.14i;
0 + 0i, 0 + 0i, 0 + 0i, 4.29 + 0i];
vl = -3;
vu = 3;
il = int64(0);
iu = int64(8185080);
abstol = 0;
[aOut, bOut, m, w, z, jfail, info] = ...
f08sp(itype, jobz, range, uplo, a, b, vl, vu, il, iu, abstol)
aOut =
-1.2636 + 0.0000i -2.3214 + 0.0000i -0.5211 - 0.0656i -0.0802 + 0.4016i
0.0000 + 0.0000i -1.8095 + 0.0000i -2.7959 + 0.0000i -0.1903 + 0.1121i
0.0000 + 0.0000i 0.0000 + 0.0000i -0.7025 + 0.0000i -3.8021 + 0.0000i
0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i -0.7133 + 0.0000i
bOut =
1.7972 + 0.0000i 0.8402 - 1.0683i 1.0572 + 0.4674i 0.2337 + 1.3910i
0.0000 + 0.0000i 1.3164 + 0.0000i -0.4702 - 0.3131i 0.0834 - 0.0368i
0.0000 + 0.0000i 0.0000 + 0.0000i 1.5604 + 0.0000i 0.9360 - 0.9900i
0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i 0.6603 + 0.0000i
m =
2
w =
-2.9936
0.5047
0
0
z =
-0.3504 + 0.6060i 0.2835 - 0.5806i
-0.0993 + 0.0631i -0.3769 - 0.3194i
0.6851 - 0.5987i -0.3338 - 0.0134i
-0.8127 + 0.0000i 0.6663 + 0.0000i
jfail =
0
0
0
0
info =
0
PDF version (NAG web site, 64-bit version, 64-bit version)
Chapter Contents
Chapter Introduction
NAG Toolbox
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013
|
2015-12-02 02:14:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 156, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9309601187705994, "perplexity": 5504.320623565993}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398525032.0/warc/CC-MAIN-20151124205525-00321-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/186001-complex-analysis-function-arg.html
|
# Math Help - Complex Analysis of a Function (Arg)
1. ## Complex Analysis of a Function (Arg)
I've been messing around with a problem and converted it from an arc-cotangent function, to an arctangent function, to a complex logarithm, to an inverse hyperbolic tangent function, and finally argument form.
The function is:-
arg(1-e^(-i pi x)) : This is not the whole function, but its a section of it.
How is the above function calculated? I've seen references to it, but no actual explanations. (Note: The conversion from its previous form to the current form was achieved with the help of software I just purchased, mathematica! Too bad it cannot explain WHY ;(
When it comes to integration, what do I do? What is the definition for the integration of the arg(z) function, where z = 1-e^(-i pi x)?
Let's see, I don't understand the exact meaning of your question. It seems we have the function $f:A\subset \mathbb{R}\to \mathbb{R}$ defined by $f(x)=\arg (1-e^{-i\pi x})$ where $\arg$ means the principal argument. Right?. Now, suppose $A=[a,b]$ is a closed interval such that $f(x)=\ldots=\arctan\dfrac {\sin \pi x}{1-\cos \pi x}$ is continuous . Are you asking about $\int_a^bf(x)\;dx$ ?
|
2015-09-05 04:30:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9144042730331421, "perplexity": 387.8798417079757}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645378542.93/warc/CC-MAIN-20150827031618-00057-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://www.techwhiff.com/issue/a-shore-store-is-going-out-of-business-all-shoes-were--114540
|
# A shore store is going out of business. all shoes were marked half off and are now an additional 20% off the sale price. given a shoes original price p, which of the following equations shows the final price F?
###### Question:
a shore store is going out of business. all shoes were marked half off and are now an additional 20% off the sale price. given a shoes original price p, which of the following equations shows the final price F?
### What are the decomposers of a Desert biome or an example of a food chain for a Desert biome?
What are the decomposers of a Desert biome or an example of a food chain for a Desert biome?...
### A red blood cell has a diameter of approximately 8 micrometers, or 0.008 A model represents its diameter as 8 um. What is the ratio of model size to actual size ?
A red blood cell has a diameter of approximately 8 micrometers, or 0.008 A model represents its diameter as 8 um. What is the ratio of model size to actual size ?...
### KA4of a number is 64.Find the number.62. I
KA4of a number is 64.Find the number.62. I...
### What does the sign ll mean in geometry
what does the sign ll mean in geometry...
### Suppose the mass of a fully loaded module in which astronauts take off from the Moon is 12,500kg. The thrust of its engines is 28,000 N. (a) Calculate its the magnitude of acceleration in a vertical takeoff from the Moon in meter per square second, assuming the acceleration due to gravity on the Moon is 1.67m/s^2.
Suppose the mass of a fully loaded module in which astronauts take off from the Moon is 12,500kg. The thrust of its engines is 28,000 N. (a) Calculate its the magnitude of acceleration in a vertical takeoff from the Moon in meter per square second, assuming the acceleration due to gravity on the Mo...
### (01.04 MC) Which component of thermoregulation allows your body to maintain homeostasis after jumping into a tub of hot water? (2 points) Blood vessels will dilate in your epidermis. O Blood vessels will dilate in your dermis. O Blood vessels will contract in your epidermis. Blood vessels will contract in your dermis.
(01.04 MC) Which component of thermoregulation allows your body to maintain homeostasis after jumping into a tub of hot water? (2 points) Blood vessels will dilate in your epidermis. O Blood vessels will dilate in your dermis. O Blood vessels will contract in your epidermis. Blood vessels will contr...
### What are two advantages of a pay-for-use online conferencing service compared to a free service? (Choose two) Speed Provision of both audio and video Allows larger audiences Dedicated support Easier to manage
What are two advantages of a pay-for-use online conferencing service compared to a free service? (Choose two) Speed Provision of both audio and video Allows larger audiences Dedicated support Easier to manage...
### Find the values of I and y. Write your answer in simplest form. Will mark branliest, help
Find the values of I and y. Write your answer in simplest form. Will mark branliest, help...
### Y=x^2+3x-4 and y=4x-4
y=x^2+3x-4 and y=4x-4...
### The area of a rectangle is found by multiplying its length and width together. Calculate the area, in square feet, of the room you measured in the previous section. Explain or show your reasoning.
The area of a rectangle is found by multiplying its length and width together. Calculate the area, in square feet, of the room you measured in the previous section. Explain or show your reasoning....
### Can someone help? Need to find x, and measure of angle DEF
Can someone help? Need to find x, and measure of angle DEF...
### How did the United States initially respond to the rise of totalitarian dictatorships in the 1930s?
How did the United States initially respond to the rise of totalitarian dictatorships in the 1930s?...
### Hi what is the meaning of meaning
hi what is the meaning of meaning...
### Why should you avoid using sarcasm, cliché, and idioms in buisness letters?
why should you avoid using sarcasm, cliché, and idioms in buisness letters?...
### Identify f (x)=3x2+8x+4
identify f (x)=3x2+8x+4...
### 4√5+5√5 is equal to ? A)9√5 B)9√10 C) 5√10 D) 7√5
4√5+5√5 is equal to ? A)9√5 B)9√10 C) 5√10 D) 7√5...
### Parker works for a computer repair company. He gets paid $20 each time he has to work on a computer, plus$12 per hour he is working. The expression that represents how much money he makes is represented by 12x + 20.How much money would Parker make if it took him 6 hours to fix the computer?
Parker works for a computer repair company. He gets paid $20 each time he has to work on a computer, plus$12 per hour he is working. The expression that represents how much money he makes is represented by 12x + 20.How much money would Parker make if it took him 6 hours to fix the computer?...
### I need a verbal description for the equation using the graph texts. Thank you!
I need a verbal description for the equation using the graph texts. Thank you!...
### What’s a department store?
What’s a department store?...
### Choose all the answers that apply. Which of the following occurs during data analysis? observations are made statistical information is calculated Omathematical calculations are completed D graphs are created Odata is collected
Choose all the answers that apply. Which of the following occurs during data analysis? observations are made statistical information is calculated Omathematical calculations are completed D graphs are created Odata is collected...
-- 0.055367--
|
2023-03-25 01:23:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4564373791217804, "perplexity": 2166.2968330578096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00620.warc.gz"}
|
https://pos.sissa.it/396/430/
|
Volume 396 - The 38th International Symposium on Lattice Field Theory (LATTICE2021) - Oral presentation
Scalar fields on fluctuating hyperbolic geometries
Full text: pdf
Pre-published on: May 16, 2022
Published on:
Abstract
We present results on the behavior of the boundary-boundary correlation function of scalar fields propagating on discrete two-dimensional random triangulations representing manifolds with the topology of a disk. We use a gravitational action that includes a curvature squared operator, which favors a regular tessellation of hyperbolic space for large values of its coupling. We probe the resultant geometry by analyzing the propagator of a massive scalar field and show that the conformal behavior seen in the uniform hyperbolic space survives as the coupling approaches zero.
The analysis of the boundary correlator suggests that holographic predictions survive, at least, weak quantum gravity corrections. We then
show how such an $R^2$ operator might be induced as a result of integrating out massive lattice fermions and show
preliminary result for boundary correlation functions that include the effects of this fermionic backreaction on the geometry.
DOI: https://doi.org/10.22323/1.396.0430
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete.
Open Access
|
2022-06-26 19:34:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5214959979057312, "perplexity": 1547.439507316879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00212.warc.gz"}
|
http://stats.stackexchange.com/questions/24175/how-can-i-calculate-a-rate-between-two-moving-data-sets/24200
|
# How can I calculate a rate between two “moving” data sets?
I own a software business that follows a common pattern: Interested people can download the software and try it free for 30 days, and if they like it, they can buy it at any time during those 30 days (or after). I'd like to calculate my "conversion rate," which I will define as the percentage of downloads that lead to sales within 30 days.
I don't track individual users, so I have no way of knowing whether downloader X ever ended up buying. I do, however, have time series data for the number of downloads each day and the number of sales each day. I have this data going back several years.
I would like to be able to do any of a few things with this information: 1. Compare this rate to a similar time period last year. 2. Determine whether the rate is generally increasing or decreasing. 3. Predict what the next X days sales might look like based on the last X days downloads.
Given that information and those goals, is it possible to determine (or at least approximate), a useful conversion rate? If so, how is it done?
(I'm a programmer who had one intro stats class in college, which was mostly about the probability of rolling dice. Answers that explain it like I'm 5 would be greatly appreciated)
-
This is not a complete answer, but too long for a comment. A few notations: let $D(t)$ and $B(t)$ be the number of downloads / buys on day $t$, and let $C(i|t)$ be the conversion probability, that is the probability of buying $i$ days after downloading given that the download happened on day $t$. Then $$B(t) = \sum_{i=0}^\infty D(t-i) C(i|t-i)$$
You know $D$ and $B$, but would like to estimate $C$. If the conversion behaviour does not change over time, that is $C(i|t)=C(i)$, then the above formula reduces to a convolution with two known components. Fourier transforms give a standard solution for deconvolution, since the Fourier transform of a convolution equals the product of the two Fourier transforms. Here $\mathcal{F}(B) = \mathcal{F}(D) \mathcal{F}(C)$, so $C = \mathcal{F}^{-1}(\mathcal{F}(B)/\mathcal{F}(D))$.
I would certainly try this out, as this should at least give you an idea how your conversion probabilities look like.
Now, your main questions are in fact assuming that $C(i|t)$ does depend on $t$, and you also want to do statistical inference. This is where I hit the limits of my knowledge. Intuitively, if we allow $C(i|t)$ to change every day, then there is no way we could estimate its effect, so we would probably want to assume some smooth functional effect of $t$. This could be combined with some parametric or semiparametric (eg splines?) form for $C$ as a function of $i$. If you don't have too many parameters, perhaps you could do some numeric optimization.
For additional ideas, I would suggest time series and signal detection literature. This does seem like something that might have been solved already
-
Very interesting suggestion! Some remarks: 1. if you assume C(i|t) instead of C(i|t-i), you still have a convolution, and there is no really good a priori reason to suppose that this model will be worse. 2. when doing deconvolution (that's the official name), one is likely to run into problems and has to do regularization. Spline regularization is an option (pretty advanced); normally people just add a small constant to F(D) for starters. – AVB Mar 6 '12 at 18:43
|
2013-12-13 08:12:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7633896470069885, "perplexity": 341.7585403766866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164920374/warc/CC-MAIN-20131204134840-00087-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://ftp.aimsciences.org/article/doi/10.3934/dcds.2012.32.1857
|
# American Institute of Mathematical Sciences
May 2012, 32(5): 1857-1879. doi: 10.3934/dcds.2012.32.1857
## Prescribing the scalar curvature problem on higher-dimensional manifolds
1 Department of Mathematics, Faculty of Sciences of Sfax, Route of Soukra, Sfax, Tunisia 2 Department of mathematics, King Abdulaziz university, P.O. 80230, Jeddah, Saudi Arabia
Received November 2010 Revised July 2011 Published January 2012
In this paper we consider the problem of existence of conformal metrics with prescribed scalar curvature on n-dimensional Riemannian manifolds, $n \geq 5$. Using precise estimates on the losses of compactness, we characterize the critical points at infinity of the associated variational problem and we prove existence results for curvatures satisfying an assumption of Bahri-Coron type.
Citation: Randa Ben Mahmoud, Hichem Chtioui. Prescribing the scalar curvature problem on higher-dimensional manifolds. Discrete and Continuous Dynamical Systems, 2012, 32 (5) : 1857-1879. doi: 10.3934/dcds.2012.32.1857
##### References:
[1] T. Aubin, Équations différentielles non linéaires et problème de Yamabe concernant la courbure scalaire, J. Math. Pures Appl. (9), 55 (1976), 269-296. [2] T. Aubin and A. Bahri, Méthodes de topologie algebrique pour le problème de la courbure scalaire prescrite, J. Math. Pures Appl. (9), 76 (1997), 525-549. doi: 10.1016/S0021-7824(97)89961-8. [3] T. Aubin and A. Bahri, Une hypothése topologique pour le problème de la courbure scalaire prescrite, (French) [A topological hypothesis for the problem of prescribed scalar curvature], J. Math. Pures Appl. (9), 76 (1997), 843-850. doi: 10.1016/S0021-7824(97)89973-4. [4] A. Ambrosetti, J. Garcia Azorero and I. Peral, Perturbation of $-\Delta u + u^{\frac{(N+2)}{(N-2)}} = 0$, the scalar curvature problem in $\mathbb{R}^N2$, and related topics, Journal of Functional Analysis, 165 (1999), 117-149. doi: 10.1006/jfan.1999.3390. [5] A. Bahri, "Critical Point at Infinity in Some Variational Problems," Pitman Res. Notes Math, Ser., 182, Longman Sci. Tech., Harlow, copublished in the United States with John Wiley & Sons, Inc., New York, 1989. [6] A. Bahri, An invariant for Yamabe-type flows with applications to scalar-curvature problems in high dimensions, A celebration of J. F. Nash, Jr., Duke Math. J., 81 (1996), 323-466. doi: 10.1215/S0012-7094-96-08116-8. [7] A. Bahri and H. Brezis, Équations elliptiques non linéaires sur des variétés avec exposant de Sobolev critique, C. R. Acad. Sci. Paris Sér. I Math., 307 (1988), 537-576. [8] A. Bahri and J.-M. Coron, The scalar curvature problem on the standard three-dimensional spheres, J. Funct. Anal., 95 (1991), 106-172. doi: 10.1016/0022-1236(91)90026-2. [9] A. Bahri and J.-M. Coron, On a nonlinear elliptic equation involving the critical Sobolev exponent: The effect of topology of the domain, Comm. Pure Appl. Math., 41 (1988), 255-294. [10] M. Ben Ayed, Y. Chen, H. Chtioui and M. Hammami, On the prescribed scalar curvature problem on 4-manifolds, Duke Math. J., 84 (1996), 633-677. doi: 10.1215/S0012-7094-96-08420-3. [11] R. Ben Mahmoud and H. Chtioui, Existence results for the prescribed scalar curvature on $\mathbbS^3$, Annales de l'Institut Fourier, 2010. [12] S.-Y. Chang and P. Yang, A perturbation result in prescribing scalar curvature on $S^n$, Duke Math. J., 64 (1991), 27-69. doi: 10.1215/S0012-7094-91-06402-1. [13] S.-Y. Chang, M. J. Gursky and P. C. Yang, The scalar curvature equation on 2- and 3-spheres, Calc. Var. Partial Differential Equations, 1 (1993), 205-229. [14] C.-C. Chen and C.-S. Lin, Estimates of the conformal scalar curvature equation via the method of moving planes, Comm. Pure Appl. Math., 50 (1997), 971-1017. doi: 10.1002/(SICI)1097-0312(199710)50:10<971::AID-CPA2>3.0.CO;2-D. [15] C.-C. Chen and C.-S. Lin, Estimates of the conformal scalar curvature equation via the method of moving planes. II, J. Differential Geom., 49 (1998), 115-178. [16] C.-C. Chen and C.-S. Lin, Prescribing scalar curvature on $S^n$. I: A priori estimates, J. Differential Geometry, 57 (2001), 67-171. [17] H. Chtioui, Prescribing the scalar curvature problem on three and four manifolds, Advanced Nonlinear Studies, 3 (2003), 457-470. [18] A. Hatcher, "Algebraic Topology," Campbridge University Press, Cambridge, 2002. [19] J. Kazdan and F. W. Warner, Existence and conformal deformation of metrics with prescribed Gaussian and scalar curvatures, Annals of Math. (2), 101 (1975), 317-331. doi: 10.2307/1970993. [20] J. Lee and T. Parker, The Yamabe problem, Bull. Amer. Math. Soc. (N.S.), 17 (1987), 37-91. [21] Y. Y. Li, Prescribing scalar curvature on $S^n$ and related problems. I, Journal of Differential Equations, 120 (1995), 319-410. doi: 10.1006/jdeq.1995.1115. [22] Y. Y. Li, Prescribing scalar curvature on $S^n$ and related problems. II: Existence and compactness, Comm. Pure Appl. Math., 49 (1996), 541-597. doi: 10.1002/(SICI)1097-0312(199606)49:6<541::AID-CPA1>3.0.CO;2-A. [23] R. Schoen and D. Zhang, Prescribed scalar curvature on the n-sphere, Calculus of Variations and Partial Differential Equations, 4 (1996), 1-25. doi: 10.1007/BF01322307. [24] R. Schoen, Conformal deformation of a Riemannian metric to constant scalar curvature, J. Differential Geom., 20 (1984), 479-495. [25] R. Schoen, Courses at Stanford University (1988) and New York University (1989), unpublished. [26] M. Struwe, "Variational Methods. Applications to Nonlinear PDE and Hamilton Systems," Springer-Verlag, Berlin, 1990. [27] N. Trudinger, Remarks concerning the conformal deformation of Riemannian structures on compact manifolds, Ann. Scuola Norm. Sup. Pisa (3), 22 (1968), 265-274.
show all references
##### References:
[1] T. Aubin, Équations différentielles non linéaires et problème de Yamabe concernant la courbure scalaire, J. Math. Pures Appl. (9), 55 (1976), 269-296. [2] T. Aubin and A. Bahri, Méthodes de topologie algebrique pour le problème de la courbure scalaire prescrite, J. Math. Pures Appl. (9), 76 (1997), 525-549. doi: 10.1016/S0021-7824(97)89961-8. [3] T. Aubin and A. Bahri, Une hypothése topologique pour le problème de la courbure scalaire prescrite, (French) [A topological hypothesis for the problem of prescribed scalar curvature], J. Math. Pures Appl. (9), 76 (1997), 843-850. doi: 10.1016/S0021-7824(97)89973-4. [4] A. Ambrosetti, J. Garcia Azorero and I. Peral, Perturbation of $-\Delta u + u^{\frac{(N+2)}{(N-2)}} = 0$, the scalar curvature problem in $\mathbb{R}^N2$, and related topics, Journal of Functional Analysis, 165 (1999), 117-149. doi: 10.1006/jfan.1999.3390. [5] A. Bahri, "Critical Point at Infinity in Some Variational Problems," Pitman Res. Notes Math, Ser., 182, Longman Sci. Tech., Harlow, copublished in the United States with John Wiley & Sons, Inc., New York, 1989. [6] A. Bahri, An invariant for Yamabe-type flows with applications to scalar-curvature problems in high dimensions, A celebration of J. F. Nash, Jr., Duke Math. J., 81 (1996), 323-466. doi: 10.1215/S0012-7094-96-08116-8. [7] A. Bahri and H. Brezis, Équations elliptiques non linéaires sur des variétés avec exposant de Sobolev critique, C. R. Acad. Sci. Paris Sér. I Math., 307 (1988), 537-576. [8] A. Bahri and J.-M. Coron, The scalar curvature problem on the standard three-dimensional spheres, J. Funct. Anal., 95 (1991), 106-172. doi: 10.1016/0022-1236(91)90026-2. [9] A. Bahri and J.-M. Coron, On a nonlinear elliptic equation involving the critical Sobolev exponent: The effect of topology of the domain, Comm. Pure Appl. Math., 41 (1988), 255-294. [10] M. Ben Ayed, Y. Chen, H. Chtioui and M. Hammami, On the prescribed scalar curvature problem on 4-manifolds, Duke Math. J., 84 (1996), 633-677. doi: 10.1215/S0012-7094-96-08420-3. [11] R. Ben Mahmoud and H. Chtioui, Existence results for the prescribed scalar curvature on $\mathbbS^3$, Annales de l'Institut Fourier, 2010. [12] S.-Y. Chang and P. Yang, A perturbation result in prescribing scalar curvature on $S^n$, Duke Math. J., 64 (1991), 27-69. doi: 10.1215/S0012-7094-91-06402-1. [13] S.-Y. Chang, M. J. Gursky and P. C. Yang, The scalar curvature equation on 2- and 3-spheres, Calc. Var. Partial Differential Equations, 1 (1993), 205-229. [14] C.-C. Chen and C.-S. Lin, Estimates of the conformal scalar curvature equation via the method of moving planes, Comm. Pure Appl. Math., 50 (1997), 971-1017. doi: 10.1002/(SICI)1097-0312(199710)50:10<971::AID-CPA2>3.0.CO;2-D. [15] C.-C. Chen and C.-S. Lin, Estimates of the conformal scalar curvature equation via the method of moving planes. II, J. Differential Geom., 49 (1998), 115-178. [16] C.-C. Chen and C.-S. Lin, Prescribing scalar curvature on $S^n$. I: A priori estimates, J. Differential Geometry, 57 (2001), 67-171. [17] H. Chtioui, Prescribing the scalar curvature problem on three and four manifolds, Advanced Nonlinear Studies, 3 (2003), 457-470. [18] A. Hatcher, "Algebraic Topology," Campbridge University Press, Cambridge, 2002. [19] J. Kazdan and F. W. Warner, Existence and conformal deformation of metrics with prescribed Gaussian and scalar curvatures, Annals of Math. (2), 101 (1975), 317-331. doi: 10.2307/1970993. [20] J. Lee and T. Parker, The Yamabe problem, Bull. Amer. Math. Soc. (N.S.), 17 (1987), 37-91. [21] Y. Y. Li, Prescribing scalar curvature on $S^n$ and related problems. I, Journal of Differential Equations, 120 (1995), 319-410. doi: 10.1006/jdeq.1995.1115. [22] Y. Y. Li, Prescribing scalar curvature on $S^n$ and related problems. II: Existence and compactness, Comm. Pure Appl. Math., 49 (1996), 541-597. doi: 10.1002/(SICI)1097-0312(199606)49:6<541::AID-CPA1>3.0.CO;2-A. [23] R. Schoen and D. Zhang, Prescribed scalar curvature on the n-sphere, Calculus of Variations and Partial Differential Equations, 4 (1996), 1-25. doi: 10.1007/BF01322307. [24] R. Schoen, Conformal deformation of a Riemannian metric to constant scalar curvature, J. Differential Geom., 20 (1984), 479-495. [25] R. Schoen, Courses at Stanford University (1988) and New York University (1989), unpublished. [26] M. Struwe, "Variational Methods. Applications to Nonlinear PDE and Hamilton Systems," Springer-Verlag, Berlin, 1990. [27] N. Trudinger, Remarks concerning the conformal deformation of Riemannian structures on compact manifolds, Ann. Scuola Norm. Sup. Pisa (3), 22 (1968), 265-274.
[1] M. Ben Ayed, Mohameden Ould Ahmedou. On the prescribed scalar curvature on $3$-half spheres: Multiplicity results and Morse inequalities at infinity. Discrete and Continuous Dynamical Systems, 2009, 23 (3) : 655-683. doi: 10.3934/dcds.2009.23.655 [2] Kaifang Liu, Lunji Song, Shan Zhao. A new over-penalized weak galerkin method. Part Ⅰ: Second-order elliptic problems. Discrete and Continuous Dynamical Systems - B, 2021, 26 (5) : 2411-2428. doi: 10.3934/dcdsb.2020184 [3] Lunji Song, Wenya Qi, Kaifang Liu, Qingxian Gu. A new over-penalized weak galerkin finite element method. Part Ⅱ: Elliptic interface problems. Discrete and Continuous Dynamical Systems - B, 2021, 26 (5) : 2581-2598. doi: 10.3934/dcdsb.2020196 [4] Eszter Fehér, Gábor Domokos, Bernd Krauskopf. Tracking the critical points of curves evolving under planar curvature flows. Journal of Computational Dynamics, 2021, 8 (4) : 447-494. doi: 10.3934/jcd.2021017 [5] Luis Barreira, Claudia Valls. Topological conjugacies and behavior at infinity. Communications on Pure and Applied Analysis, 2014, 13 (2) : 687-701. doi: 10.3934/cpaa.2014.13.687 [6] Hongxia Yin. An iterative method for general variational inequalities. Journal of Industrial and Management Optimization, 2005, 1 (2) : 201-209. doi: 10.3934/jimo.2005.1.201 [7] Jaume Llibre, Jesús S. Pérez del Río, J. Angel Rodríguez. Structural stability of planar semi-homogeneous polynomial vector fields applications to critical points and to infinity. Discrete and Continuous Dynamical Systems, 2000, 6 (4) : 809-828. doi: 10.3934/dcds.2000.6.809 [8] Yaiza Canzani, Dmitry Jakobson, Igor Wigman. Scalar curvature and $Q$-curvature of random metrics. Electronic Research Announcements, 2010, 17: 43-56. doi: 10.3934/era.2010.17.43 [9] Erisa Hasani, Kanishka Perera. On the compactness threshold in the critical Kirchhoff equation. Discrete and Continuous Dynamical Systems, 2022, 42 (1) : 1-19. doi: 10.3934/dcds.2021106 [10] Yoshifumi Aimoto, Takayasu Matsuo, Yuto Miyatake. A local discontinuous Galerkin method based on variational structure. Discrete and Continuous Dynamical Systems - S, 2015, 8 (5) : 817-832. doi: 10.3934/dcdss.2015.8.817 [11] Wei Zhu, Xue-Cheng Tai, Tony Chan. Augmented Lagrangian method for a mean curvature based image denoising model. Inverse Problems and Imaging, 2013, 7 (4) : 1409-1432. doi: 10.3934/ipi.2013.7.1409 [12] Chiara Corsato, Franco Obersnel, Pierpaolo Omari, Sabrina Rivetti. On the lower and upper solution method for the prescribed mean curvature equation in Minkowski space. Conference Publications, 2013, 2013 (special) : 159-169. doi: 10.3934/proc.2013.2013.159 [13] Bing Yu, Lei Zhang. Global optimization-based dimer method for finding saddle points. Discrete and Continuous Dynamical Systems - B, 2021, 26 (1) : 741-753. doi: 10.3934/dcdsb.2020139 [14] Anthony W. Baker, Michael Dellnitz, Oliver Junge. Topological method for rigorously computing periodic orbits using Fourier modes. Discrete and Continuous Dynamical Systems, 2005, 13 (4) : 901-920. doi: 10.3934/dcds.2005.13.901 [15] Jian Lu, Huaiyu Jian. Topological degree method for the rotationally symmetric $L_p$-Minkowski problem. Discrete and Continuous Dynamical Systems, 2016, 36 (2) : 971-980. doi: 10.3934/dcds.2016.36.971 [16] Daniel Wilczak, Piotr Zgliczyński. Topological method for symmetric periodic orbits for maps with a reversing symmetry. Discrete and Continuous Dynamical Systems, 2007, 17 (3) : 629-652. doi: 10.3934/dcds.2007.17.629 [17] Piotr Oprocha, Paweł Potorski. Topological mixing, knot points and bounds of topological entropy. Discrete and Continuous Dynamical Systems - B, 2015, 20 (10) : 3547-3564. doi: 10.3934/dcdsb.2015.20.3547 [18] Hichem Chtioui, Hichem Hajaiej, Marwa Soula. The scalar curvature problem on four-dimensional manifolds. Communications on Pure and Applied Analysis, 2020, 19 (2) : 723-746. doi: 10.3934/cpaa.2020034 [19] Enrique R. Pujals, Federico Rodriguez Hertz. Critical points for surface diffeomorphisms. Journal of Modern Dynamics, 2007, 1 (4) : 615-648. doi: 10.3934/jmd.2007.1.615 [20] Keith Promislow, Hang Zhang. Critical points of functionalized Lagrangians. Discrete and Continuous Dynamical Systems, 2013, 33 (4) : 1231-1246. doi: 10.3934/dcds.2013.33.1231
2020 Impact Factor: 1.392
|
2022-06-26 08:14:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5588052868843079, "perplexity": 2342.1090854254994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037649.11/warc/CC-MAIN-20220626071255-20220626101255-00485.warc.gz"}
|
https://www.wyzant.com/resources/answers/users/88293817
|
08/28/20
#### Find his distance from the pole.
A man is standing on the top of a hill and sees a flagpole he knows is 45 feet high. The angle of depression to the bottom of the pole is 12 degrees, and the angle of elevation to the top of the... more
08/28/20
#### Manuel and Daniel are playing catch with a football.
Manuel and Daniel are playing catch with a football. They are standing 30 meters apart. Manuel throws the ball to Daniel, and it takes 3 seconds for the ball to reach Daniel. How fast is the ball... more
08/28/20
#### Systems of equations
What is the solution for z in the following system of equations?5x+2y-3z=172x+2y+z=53x+2y=-6
08/27/20
#### two angles are complementary. one angle measures 20 degrees more than the other angle. find the measure of the larger angle.
two angles are complementary. one angle measures 20 degrees more than the other angle. find the measure of the larger angle.
08/27/20
#### A rectangular prism has a length of 3/4 inch and a width of 1/2 inch. The volume of the prism is 21/32 cubic inch. Find the height of the prism. Write the answer as a mixed number in simplest form.
Sorry for the long question! That's how the question is I guess. Anyways, any help would always be appreciated! :D
08/27/20
#### Volumes of Prisms and Cylinders
You are painting a room in your house. Unfortunately, you lost the lid to the 5 gallon bucket of paint, but you only used half of the paint. You want to save the paint, so you plan on transferring... more
08/27/20
#### Calculus 1 confusing me
Fill in the blank with "all", "no", or "some" to make the following statements true.Note: if answer is "all", explain why.if your answer is "no", give an example and explainif your answer is... more
08/26/20
#### The line tangent to the graph of the twice-differentiable function f at the point x=3 is used to approximate the value of f(3.25).
Which of the following statements guarantees that the tangent line approximation at x=3.25 is an underestimate of f(3.25)?A. The function f is decreasing on the interval 3 ≤ x ≤ 3.25B. The... more
08/26/20
#### You have been offered two different jobs. Job A would pay you $12 an hour but a bonus at the end of the month of$100. Job B would pay you \$16 an hour but would have no bonus offered.
a) Write an equation that represents the pay for each job offer.b) Graph the system of equations by hand on a coordinate grid. C)Circle the intersection points and... more
08/25/20
#### A weight suspended by a spring vibrates vertically according to the function D given by D(t)=2sin(4π(t+1/8))
D(t) represents, in centimeters, the directed distance of the weight from its central position t seconds after the start of the motion. Assume the positive direction is upward. What is the... more
08/25/20
#### the product of 4 and x
Write each expression.
08/25/20
08/24/20
#### Solve for b. –8b − 10b − –13 = –5
Help with this problem please its for school
08/24/20
08/24/20
08/24/20
#### Which functions are always increasing?
Linear Quadratic Absolute Value Square Root Cubic Cube Root Rational Exponential Logarithmic
## Still looking for help? Get the right answer, fast.
Get a free answer to a quick problem.
Most questions answered within 4 hours.
#### OR
Choose an expert and meet online. No packages or subscriptions, pay only for the time you need.
|
2021-06-18 09:58:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3333713710308075, "perplexity": 1155.2292197433728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635920.39/warc/CC-MAIN-20210618073932-20210618103932-00483.warc.gz"}
|
http://ewjf.fabiomarazzi.it/weighted-factor-analysis.html
|
Project risk analysis, like all risk analyses, must be implemented using a graded approach. 32 Table 4-2 -Example Weighted Factor Analysis. Five minutes 6. I am trying to work out a formula to calculate a weighted ranking. When evaluating alternatives that have categories of criteria, use a weight for each category to indicate. Suppose your teacher says, "The test counts twice as much as the quiz and the final exam counts three times as much as the quiz". Y1 - 1996/2. ANALYSIS OF IFE (Internal Factors Evaluation) and EFE (External Factors Evaluation) Matrix IFE Key Internal Factor Weight Rating Weighted Score Strength Financial. A weighted inventory average determines the average cost of all inventory items based on the inventory items' individual cost basis and the quantity of each item held in inventory. Gravity roughness provides an estimate of the amplitude of the gravity anomaly and is robust to small errors in the location. T1 - Algorithms for unweighted least-squares factor analysis. Instead, you can use the Real Statistics Weighted Moving Averages data analysis tool. [email protected] 0 to a high of 4. Next, we will closely examine the different output elements in an attempt to develop a solid understanding of PCA, which will pave the way to. Y n: P 1 = a 11Y 1 + a 12Y 2 + …. Using Categories of Criteria. Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. The need for sampling weights 1. ANOVA table. The rates of return for these investments are 5%, 10%, 15%, and 20%. Now, we argue that p decreases by a factor of 2 after at most 2 recursive calls. contracting officers must use the weighted guidelines method for profit/fee analysis unless use of the modified weighted guidelines method or an alternate structured method is appropriate. Even though the WACC calculation calls for the market value of debt, the book value of debt may be used as a proxy so long as the company is not in financial distress, in which case the market and book values of debt could differ substantially. A weighting factor is a weight given to a data point to assign it a lighter, or heavier, importance in a group. Weighted Factor Analysis. In mathematical terms, a factor is any of the numbers multiplied together to form the product of a multiplication problem. The fa function includes ve methods of factor analysis (minimum residual, principal axis, weighted least squares,. Weighted Average Cost of Capital analysis assumes that capital markets (both debt and equity) in any given industry require returns commensurate with the perceived riskiness of their investments. Often called the workhorse of applied statistics, multiple regression analysis identifies the best weighted combination of variables to predict an outcome. Factor models for asset returns are used to • Provide a framework for portfolio risk analysis. " Let's dissect this definition: Opportunity cost is what you give up as a consequence of your decision to use a scarce resource in a particular way. 9 was performed as a sensitivity analysis. This latent variable cannot be directly measured with a single variable (think: intelligence, social anxiety, soil health). I don't think Excel has a theoretically rigorous weighted least squares algorithm built. a cluster of variables all at least loosely of the same kind (so an arbitrary rag-bag is suspect before you even start) 2. Question: Discuss about the Security Analysis Of Wesfarmers Llimited. what shall I eat for tea) by breaking it down into the multiple factors that must be considered (e. In order to detect co-expression pattern among the lncRNAs and PCGs in our TCGA datasets, weighted gene co-expression network analysis (WGCNA) was applied. For instance, a survey is created by a credit card company to evaluate satisfaction of customers. 0 to a high of 4. Systematic literature review identifying RCTs comparing anti-VEGF agents to another treatment published before June 2016. Hope someone can help me. Calculate the weighted gross margin for all products sold by the company. 6 The Rotation Problem 355 16. This free online Stock Shares Outstanding Calculator will calculate the weighted average for a company that changes its number of outstanding shares during the period in which you are interested. Weight each factor from 1. (2004) Only pairs of DWI and T 1 weighted MRI scans performed consecutively were subjected to further analysis in the present study. The remaining columns contain the measured properties or items. 5 mm 3, field of view (FOV) = 190 mm, flip angle = 60°, multi-band acceleration factor = 4]. The need for sampling weights 1. This summer I worked on a GIS-based project for DVAEYC, Delaware Valley Association for the Education of Young Children, to create reports for each house, city council, senate and congressional district in Philadelphia. Ben Lambert 249,951 views. The demand weighted fuel factor given in the NCM modelling Guide accounts for both Efficiency and CO2 factors. First is that on x-axis total weighted scores of Internal Factor Evaluation Matrix are specified. The strength of a node takes into account both the connectivity as well as the weights of the links. What is Risk Management? 4 Can be achieved by using a weighted factor analysis sheet. Regression analysis can be used for a large variety of applications: Modeling fire frequency to determine high risk areas and to understand the factors that contribute to high risk areas. WLS, OLS' Neglected Cousin. Should I just not weight the data while performing factor analysis?<<< Stata's factor analysis program analyzes correlation matrices. The total score of 2. The weighted average rating factor (WARF) is a measure that is used by credit rating companies to indicate the credit quality of a portfolio. Gastroesophageal reflux disease was found to be associated with a lower incidence of GIB [odds ratio: 0. Using a consistent list of criteria, weighted according to the importance or priority of the criteria to the organization, a comparison of similar “solutions” or options can be completed. The momentum and short term reversal portfolios are reconstituted monthly and the other research portfolios are reconstituted annually. Such “underlying factors” are often variables that are difficult to measure such as IQ, depression or extraversion. This can be tested through factor analysis and reliability analysis. Timofeeva1, K. Ramen has invested his money into four types of investments. Select REGR factor score 1 for analysis 1 [FAC1_1] through REGR factor score 10 for analysis 1 [FAC10_1] as independent variables. The preceding table is sorted by % Weight in the. For example, it is possible that variations in six observed variables mainly reflect the variations in two unobserved (underlying) variables. Add the resulting numbers together to find the weighted average. gz For Windows systems MacOS-mach. But testing ‘Weighted Scoring Factor Model (WSFM)’ is very specific to justify the weights and performance of a particular bank branch and accordingly compare the total weight of banking services of all branches of a bank as a whole. Because of its unique calculation, WMA will follow prices more closely than a corresponding Simple Moving Average. EFE Matrix indicates whether the firm is able to effectively take advantage of existing opportunities along with minimizing the external threats. Duration Analysis Duration analysis measures the change in the valuation of an asset or liability that may occur given a discrete change in interest rates. The technique involves data reduction, as it attempts to represent a set of variables by a smaller number. We support our clients in their factor allocation and analysis process. A factor is a weighted average of the original variables. How to use tool: Grid Analysis is a useful technique to use for making a decision. The Black-Scholes formula tells us that a call option with these characteristics has a value of $41. When cost accounting, you use the weighted average costing method to calculate costs in a process-costing environment. It is calculated by taking the the ratio of the variance of all a given model's betas divide by the variane of a single beta if it were fit alone. The team first establishes a list of weighted criteria and then evaluates each option against those criteria. Total weighted score is simply the sum of all individual weighted scores. Source: BlackRock, April 2020. The same basic principle is used in meta-analysis to combine studies of varying size. The journal impact factor was designed for only one purpose – to compare the citation impact of one journal with other journals. Get a 100% Unique Essay on Efas Analysis How-to. If there were no weights assigned, all the factors would be equally important, which is an impossible scenario in the real world. Limitation of Weighted SWOT analysis of Barnes & Noble. 201223 X are not very different, as can also be seen in Figure 3. In the last few years, several normalization strategies. The Weighted Guidelines Application establishes the factors to be considered, the normal ranges for the risk values of those factors, and the analysis required in determining the appropriate value for each factor; and the latter section is the instructions for the DOE Form 4220. Based on factor loadings (in factor analysis) can we give unequal weights to Likert scale items? Ask Question Asked 7 years, 9 months ago. We'll demonstrate this scenario with the example below. The weighted average ( x) is equal to the sum of the product of the weight (w i) times the data number (x i) divided by the sum of the weights: Find the weighted average of class grades (with equal weight) 70,70,80,80,80,90: Since the weight of all grades are equal, we can. After weighting, each elderly persons counts for 3 persons. 1 Animal models of TBI produce a loss of dopaminergic fibres2 and an associated hypodopaminergic state. However, you can easily create your own Excel weighted average formula, using the Excel Sumproduct and Sum functions. IFE (Internal factor evaluation) matrix is one of the best strategic tool to perform internal audit of any firm. However, it is important to highlight that it implicitly assumes the existence of spatial autocorrelation in the data. Severity allocation factor (SAF) denotes the ratio of the weighted sum of emergency room visits and cumulative overdose deaths (2003 to a given year of damages) in county c in a given year to the state-level weighted sum of ER visits and cumulative overdose deaths in the same year. Answer: Introduction: The property prices in Australia have increased significantly since 2001, which in turn has generated huge debate among economists and policymakers. If you have questions or need help with a weighted guidelines profit analysis, please reach out to us directly at [email protected] Because those weights are all between -1 and 1, the scale of the factor scores will be very different from a pure sum. We notice that the multidimensional WHD is, up to a constant factor, the harmonic mean of weighted hamming distance of each individual dimension. Variable selection via the weighted group lasso for factor analysis models. There are two different generalizations of the characteristic path. Ben Lambert 249,951 views. Most of us are, by now, familiar with the maps the TV channels and web sites use to show the results of presidential elections. What is Risk Management? 4 Can be achieved by using a weighted factor analysis sheet. When you choose a. Using a consistent list of criteria, weighted according to the importance or priority of the criteria to the organization, a comparison of similar “solutions” or options can be completed. I conducted a factor analysis on SPSS and extracted 5 factors. In ML factor analysis, the weight is the reciprocal of the uniqueness. Modification and psychometric evaluation of the child perceptions questionnaire (CPQ11–14) in assessing oral health related quality of life among. Factor Model Risk Analysis in R R/Fi 2011 A li d Fi ith RR/Finance 2011: Applied Finance with R April 30, 2011 Eric Zivot Robert Richards Chaired Professor of Economics Adjunct Professor, Departments of Applied Mathematics, Finance and StatisticsFinance and Statistics University of Washington BlackRock Alternative Advisors, Seattle WA Files for. (To do so makes sense for cost but NOT for schedule!). factor analysis (TFA), a technique that exploits spatial correlations in fMRI data to recover the underlying structure that the images reflect. You may mod 3. Because those weights are all between -1 and 1, the scale of the factor scores will be very different from a pure sum. Specifically, TFA casts each brain image as a weighted sum of spatial functions. We also used subgroup analysis to analyze study heterogeneity, and evaluated publication bias. However, you can easily create your own Excel weighted average formula, using the Excel Sumproduct and Sum functions. The remaining columns contain the measured properties or items. The Weighted Scoring Method. 33 Threat Identification. Mercer University: Weighted Average Grade Calculator. Introduction. Weighted pro-con lists enable you to indicate how relevant a pro or con is to your decision, by specifying a number to indicate a factor's importance. These indexes are intended to offer more focused exposures to factors than their market cap-weighted counterparts. Linear Factor Model. Zhang(a) (a) VU University Amsterdam & Tinbergen Institute Amsterdam. There is still some indeterminacy in the model for it is unchanged if Λ is replaced by G Λ for any orthogonal matrix G. An unweighted analysis is the same as a weighted analysis in which all weights are 1. Next, we will closely examine the different output elements in an attempt to develop a solid understanding of PCA, which will pave the way to. This video demonstrates how to create weighted and unweighted averages in SPSS using the "Compute Variables" function. The total score of 2. What is being weighted. is a tool that compares the firm and its rivals and reveals their relative strengths and weaknesses. Three-Factor Formula With an Extra Weighting for Sales – A variation of the three-factor formula uses the same three factors, but gives extra weight to sales when the three are multiplied together. T1 - Variable selection via the weighted group lasso for factor analysis models. Question: Discuss about the Economic Analysis of Counterfeit Goods. Closely related to the primal problem of the g. The organization that has highest possible score is 4. …It says the primary empirical difference between the. Upon completion of this material, you should be. appropriate to the requirements of the analysis. I conducted a factor analysis on SPSS and extracted 5 factors. Kei Hirose 1,* and; Sadanori Konishi 2; Article first published online: 17 MAY 2012. What is the weighted factor model 2. First, the above weights$\mathbf{B}$are not precise (unless we used PCA model rather than factor analysis model per se) due to the fact that the uniqness of an item is not known on the level of each case (respondent), and thereby computed factor scores are only approximation of true factor values. a cluster of variables all at least loosely of the same kind (so an arbitrary rag-bag is suspect before you even start) 2. Source: BlackRock, April 2020. Another useful calculation is weighted average days to pay (WADP). Factor analyses in the two groups separately would yield different factor structures but identical factors; in each gender the analysis would identify a "verbal" factor which is an equally-weighted average of all verbal items with 0 weights for all math items, and a "math" factor with the opposite pattern. Rate each factor from 5 (Outstanding) to 1 (Poor) in Column 3 based on the company's response to that factor. The authors propose a computationally simple alternative, for weakly dependent data generating mechanisms, based on minimization of the Kullback-Leibler information criterion. Events and projects include our Annual Service Quality Conference, our quarterly Competitive Advantage newsletter, and the member-led development of our Service Quality Body of Knowledge. AU - Krijnen, Wim P. In factor analysis, however, we have the following model: zn ˘N(0, I), xn jzn ˘N(Wzn,Y),. This article will discuss differences between exploratory factor analysis and confirmatory factor analysis. The factor consists of two. SPSS treats weights incorrectly in inferential statistics. What is being weighted. Perform Fama-French three-factor model regression analysis for one or more ETFs or mutual funds, or alternatively use the capital asset pricing model (CAPM) or Carhart four-factor model regression analysis. The analysis used data from 2018 and defined “at-risk” individuals as anyone with current asthma, diabetes, chronic obstructive pulmonary disease (COPD), heart disease, kidney disease, or. Obviously some factors explained more of the variance than others. Suitability can be ranked based on data variables from Esri's Living Atlas of Demographic and Socioeconomic data and your site attributes. Decision Matrix Analysis works by getting you to list your options as rows on a table, and the factors you need consider as columns. Twelve states use an equal-weighted, three-factor apportionment formula. The journal impact factor was designed for only one purpose – to compare the citation impact of one journal with other journals. Weighted score value is the result achieved after multiplying each factor rating with the weight. Let's take a simple weighted average example to illustrate how we calculate a weighted avg. At the same time, it is important to recognize the fact that although the end result from this type of analysis is a quantitative number. After weighting, each elderly persons counts for 3 persons. Three-Factor Formula With an Extra Weighting for Sales - A variation of the three-factor formula uses the same three factors, but gives extra weight to sales when the three are multiplied together. Because these are correlations, possible values range from -1 to +1. The weights must add to 100. The weighted guidelines define a structure for profit/fee analysis that includes designated ranges for objective values as well as norm values that you. Weights and stratum variables needed for analysis were included. (To do so makes sense for cost but NOT for schedule!). Duration Analysis Duration analysis measures the change in the valuation of an asset or liability that may occur given a discrete change in interest rates. It consists of a list of requirements or criteria, their weighted value of importance, and the score of each email service provider on those requirements. It avoids the overhead and delays caused by the start-stop-start nature of traditional projects, where authorizations and phase gates control the program. This site is a part of the JavaScript E-labs learning objects for decision making. I would like to calculate a weighted value for each month, with the first month weighing the most and the last month weighing the least. Disclaimer. At the 2007 Joint Statistical Meetings in Denver, I discussed weighted statistical graphics for two kinds of statistical weights: survey weights and regression weights. If there were no weights assigned, all the factors would be equally important, which is an impossible scenario in the real world. The same basic principle is used in meta-analysis to combine studies of varying size. (1951) The Factorial Analysis of Human Ability. If numerical values are assigned to the criteria. Weighted probability, or percentage probability, is a technique sales managers use to manage the uncertainty inherent in sales forecasting. Each key factor must receive a score. Performance of SB funds is also insignificant when compared with the risk-adjusted blended benchmark that uses existing cap-weighted funds to provide low-cost passive exposure to market, size and value factors. those that have the least variance), with the exception of voxels that may actually contain a significant effect. Regression analysis can be used for a large variety of applications: Modeling fire frequency to determine high risk areas and to understand the factors that contribute to high risk areas. This is because CATPCA works by assigning optimum numerical values to each category of categorical variables, but for a dichotomy any pair of numerical values is equivalent to any other pair, because the variable has only two possible values and thus only one interval will be ever observed. Support is available on the mailing list and on the image. The horizontal rows show potential options and the vertical columns the different factors. Y n: P 1 = a 11Y 1 + a 12Y 2 + …. Weighted Factor Analysis. 8 Factor Models versus PCA Once More 359 16. Since our starting universe is 1,500 stocks, this means we are long 150 firms and short 150 firms. Strategic analysis consists of measuring the strengths and weaknesses of a company's position. Weighted score value is the result achieved after multiplying each factor rating with the weight. (1951) The Factorial Analysis of Human Ability. The total score of 2. Weighted vs. Stack Overflow for Teams is a private, secure spot for you and your coworkers to find and share information. is a tool that compares the firm and its rivals and reveals their relative strengths and weaknesses. 5 is an average score. Central to this proposal is that a group of experts in the area of the problem can identify a hierarchy of factors with positive or negative. The demand weighted fuel factor given in the NCM modelling Guide accounts for both Efficiency and CO2 factors. The weighted criteria matrix is a valuable decision-making tool that is used to evaluate program alternatives based on specific evaluation criteria weighted by importance. Fundamental factor model (typically a value weighted index like the S&P 500 index) in. Weighted Average Days To Pay. Weighted Score. 3 Maximum Likelihood Factor Analysis. What might be a surprise is that the Tennessee Titans are one of those teams. The original dataset contain a "probability weight variable" (PW) which has been used for all the analysis so far. In the last decade several authors discussed the so-called minimum trace factor analysis (MTFA), which provides the greatest lower bound (g. Central to this proposal is that a group of experts in the area of the problem can identify factors with positive or negative influences on the problem outcome. To calculate a weighted average in Excel, simply use the SUMPRODUCT and the SUM function. In such applications, the items that make up each dimension are specified upfront. The Service Quality Division provides its members with several opportunities to continuously learn from and network with other Service Quality professionals. External Factor Evaluation (EFE) Matrix is a strategic management tool which allows the strategists to examine the cultural, social, economic, demographic, political, legal, and competitive information. Illustration In the illustration, the cell values are multiplied by their weight factor, and the results are added together to create the output raster. Recall that within the power family, the identity transformation (i. Robust ML (MLR) has been introduced into CFA models when this normality assumption is slightly or moderately violated. As originally developed this method involves ranking of jobs in respect of certain factors and usually involves the assigning of money wages to the job depending upon the ranking. 7 Factor Analysis as a Predictive Model 356 16. This exercise illustrates how when deciding among two or more competing plant location options, various decision factors (which can typically be characterized as exogenous - in the environment external to the company, hence largely outside its control - or endogenous. For the other reports, see Nilanthi Samaranayake, Michael Connell, and Satu Limaye,. After weighting, each elderly persons counts for 3 persons. That's the only difference in the calculations. Since the most important factor influencing vineyard siting is temperature, it was important to look at the Aspect factor first. The fa function includes ve methods of factor analysis (minimum residual, principal axis, weighted least squares,. " Let's dissect this definition: Opportunity cost is what you give up as a consequence of your decision to use a scarce resource in a particular way. Twelve states use an equal-weighted, three-factor apportionment formula. A common statistical technique to summarise a selection of values is the arithmetic mean - generally known as the average. † There are basically two types of factor analysis: exploratory and conflrmatory. WGCNA has been established as an effective data mining method for finding clusters or modules of highly correlated biomolecules and identifying intramodular “hubs”, including genes [ 12. The scree plot show that there is a sharp elbow at n=2 (i. , 1995 ) implemented in Matlab 7. These indices are usually calculated as weighted sums of several individual damage. The cumulative similarity factor is introduced into the selection rules of the model training set to improve the real-time performance of the model. The traditional approach to incorporating these views is to allocate portions of the portfolio to cap-weighted value and cap-weighted small cap indexes. Obviously some factors explained more of the variance than others. "An optimal renewable energy management strategy with and without hydropower using a factor weighted multi-criteria decision making analysis and nation-wide big data - Case study in Iran," Energy, Elsevier, vol. Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. What is Risk Management? 4 Risk Management is a methodology that helps managers make best use of their available resources What is Risk Management? 5 Can be achieved by using a weighted factor analysis sheet. The article concludes with several remarks. In confirmatory factor analysis (CFA), the use of maximum likelihood (ML) assumes that the observed indicators follow a continuous and multivariate normal distribution, which is not appropriate for ordinal observed variables. It is a very useful concept for understanding how the value of an instrument, portfolio, or even balance sheet will change for a specified percentage move in market rates. This is a variation of the L-shaped matrix. Sample Overlap Correction. The beauty of the Time Weighted Return is that it only factors in the portfolio manager’s actions by breaking up the overall period into subperiods and then linking each subperiod to get the total time weighted return. Use the weights and the raw scores to conduct a weighted-factor analysis of the suppliers. If numerical values are assigned to the criteria. Election results by state. For example, the Decile performance is going to be long/short the top/bottom decile on each measure. Twelve states use an equal-weighted, three-factor apportionment formula. The WSTable object is used to specify a Python list of input rasters and weight them accordingly. Special feature – Feed-in Tariff load factor analysis 92 Chart 1: Load factor range by technology and year Lines indicate range from 5th thto 95 percentile. In ML factor analysis, the weight is the reciprocal of the uniqueness. Calculating a weighted average using Excel Functions. Gravity roughness provides an estimate of the amplitude of the gravity anomaly and is robust to small errors in the location. Two-Way Analysis of Variance for Independent Samples. This is more systematic and. I conducted a factor analysis on SPSS and extracted 5 factors. Election results by state. 16 Factor Models 342 16. So, in this case "Priority" will act as the weight assigned to completion percentage. WACC is the weighted average of the cost of a company's debt and the cost of its equity. Size (small-cap) returns are yearly returns of the equal-weighted bottom quintile (by market capitalization) of the equal-weighted Russell 1000 Index. There are basically 2 approaches to Factor Analysis: · Exploratory Factor Analysis (EFA) seeks to uncover the underlying structure of a relatively large set of variables. ) (The following table is sorted by stock name. The S-Curve that is generated using weight factor based on cost will also inform company and client the estimated value of work done. Source: Procurement Glossary Author: Paul Rogers Institute: CIPS - UK. Two-Way Analysis of Variance for Independent Samples. Other JavaScript in this series are categorized under different areas of applications in the MENU section on this page. MS Excel 2007: Calculate a weighted value based on number of months Question: In Microsoft Excel 2007, I have scores for either 3 months or 6 months. The technique involves data reduction, as it attempts to represent a set of variables by a smaller number. 8 Factor Models versus PCA Once More 359 16. The other program I run for factor analysis with binary data is "SYSTAT", not STATA. In ML factor analysis, the weight is the reciprocal of the uniqueness. Weighting numbers allows you to give more importance to one number over another number. The rates of return for these investments are 5%, 10%, 15%, and 20%. Therefore, the price movements of companies with the highest share price have the. The same basic principle is used in meta-analysis to combine studies of varying size. Weighted definition is - made heavy : loaded. The need for sampling weights 1. Most of us are, by now, familiar with the maps the TV channels and web sites use to show the results of presidential elections. The number indicates how important the factor is if a company wants to succeed in an industry. Total the weighted scores for each criterion to calculate the weighted score totals for each alternative. A factor is a weighted average of the original variables. checklist versus the weighted factor method Functional Behavior Assessment Articles Going Global The Effect of Payment Methods on Strategic Management Applicant Testing Methods and Ethical/Legal Implications Examination and Possible Change in Operating Conditions CES-D scale and the validity of the individual assessment. The Weighted Guidelines Application establishes the factors to be considered, the normal ranges for the risk values of those factors, and the analysis required in determining the appropriate value for each factor; and the latter section is the instructions for the DOE Form 4220. The multiple linear regression indicates how well the returns of the given assets or a portfolio are explained by the risk factor exposures. Total Weighted Score. Factor scores are essentially a weighted sum of the items. When cost accounting, you use the weighted average costing method to calculate costs in a process-costing environment. Factor-Based Scores. control:Set control parameters for loess fits (stats). OFCCP has consistently required use of the "Eight Factor Availability Computation Method" since October 1, 1978. This method provides a vehicle for performing the analysis to develop a profit/fee objective and provides a format for summarizing profit amounts, subsequently. One of the most straightforward as well as widely applicable decision-making tools is weighted-factor analysis. Figure 2 – Weighted least squares regression The OLS regression line 12. When you get your report card back, however, and discover that your grade is in fact still a C, you may have a weighted score or weighted grade in play. It is better than doing simplistic SWOT analysis because with Weighted SWOT Analysis Roche managers can focus on the most critical factors and discount the non-important one. To find your weighted average, simply multiply each number by its weight factor and then sum the resulting numbers up. Hyperperfusion seems to be a risk factor for pCD, whereas the use of statins is associated with a lower risk of dNCR. An unweighted analysis is the same as a weighted analysis in which all weights are 1. Weighted Risk Factor (WRF) Example. Factor Rating Method The process of selecting a new facility location involves a series of following steps: Identify the important location factors. If there were no weights assigned, all the factors would be equally important, which is an impossible scenario in the real world. 3 Roots of Factor Analysis in Causal Discovery 347 16. How CE generated the weighted factor model 5. , Sherborn, MA, USA). Rank is from the largest to the smallest weight in the index, coded as. 47 (SD=15) in the CTCU. INTRODUTION:Hedge funds are actively managed portfolios that hold positions in publicly traded securities. Risk: 84% confidence interval, 231 constant weighted monthly observations, 1yr horizon, ending 3/31/20; see “Risk Factor Glossary” in the Appendix for additional risk details and “US Corporate Pension Plan Allocations and Benchmark Mapping” in the Appendix for details regarding the indexes used to represent each asset class. • Use risk management techniques to identify and prioritize risk factors for information assets. One potential path of thought is that weighting is embedded into factor rotation (by arbitrary or sophisticated rotation, you can force some variables be loaded more and others less with certain "degree of freedom" for you). Use the weights and the raw scores to conduct a weighted-factor analysis of the suppliers. ANOVA table. Hence, factor analysis can be done two times to get a better picture (with n=2 and n=5 respectively). (2004) Only pairs of DWI and T 1 weighted MRI scans performed consecutively were subjected to further analysis in the present study. Analysis of Euclid's algorithm. The traditional approach to incorporating these views is to allocate portions of the portfolio to cap-weighted value and cap-weighted small cap indexes. In this study, we. † Factor analysis is a collection of methods used to examine how underlying constructs in°uence the responses on a number of measured variables. It adjusts the means and standard deviations based on how much to weight each respondent. Mallee(a) and Z. If numerical values are assigned to the criteria. That is, crafting the scope and approach of the analysis to fit the needs of the project based on the project size, data availability and other requirements of the project team. A Weighted Criteria Matrix is a decision-making tool that evaluates potential options against a list of weighted factors. In Section 5, we derive a model selection criterion for evaluating a factor analysis model via the weighted group lasso. 9 million, an increase of 31% year-over-year; Revenue of$94. The weighted guidelines define a structure for profit/fee analysis that includes designated ranges for objective values as well as norm values that you. It uses linear programming to estimate the efficiency of multiple decision-making units and it is commonly used in production, management and economics. Weighted Least Squares is an extension of Ordinary Least Squares regression. Modification and psychometric evaluation of the child perceptions questionnaire (CPQ11–14) in assessing oral health related quality of life among. 1 From PCA to Factor Analysis 342 16. We notice that the multidimensional WHD is, up to a constant factor, the harmonic mean of weighted hamming distance of each individual dimension. Weighted Risk Factor (WRF) Example. Are you concerned about factor crowding? “How Important Are Valuations for Expected Returns?”. Systematic literature review identifying RCTs comparing anti-VEGF agents to another treatment published before June 2016. Plus, we announce the Football Outsiders December stars for Madden 20 Ultimate Team. it the work scheduled and the target/ Milestone that defines the weighted value. How to use tool: Grid Analysis is a useful technique to use for making a decision. Factor rating method; Weighted factor rating method; Load-distance method; Centre of gravity method; Break even analysis; Factor Rating Method for Location Planning. Rank is from the largest to the smallest weight in the index, coded as. Use weighted scoring to rank potential initiatives and facilitate an objective discussion. The final statistical procedures that we're going to discuss in this course is…actually a pair of very closely related…procedures, principal components analysis and factor analysis. with weights w = 1 / x. 75 Billion by 2025 with the CAGR of 19. Next, we will closely examine the different output elements in an attempt to develop a solid understanding of PCA, which will pave the way to. The basic formula for a weighted average where the weights add up to 1 is x1(w1) + x2(w2) + x3(w3), and so on, where x is each number in your set and w is the corresponding weighting factor. Central to this proposal is that a group of experts in the area of the problem can identify factors with positive or negative influences on the problem outcome. Grid Analysis: making a choice where many factors must be balanced. Supply Chain Glossary. In the example, dividing $80 by 2 gives a price-weighted average of$40, but stock splits will change this. But what if some of the values have more "weight" than others? For example, in many classes the tests are worth more than the assignments. Weighted Least Squares is an extension of Ordinary Least Squares regression. Also in this PESTEL/PESTLE analysis, stable political conditions pave the way for the company’s further growth in technology markets worldwide. Factor Analysis of Information Risk (FAIR)is a model that is based on the factors that contribute to risk and how each of them affects each other. Methods Three hundred twenty-six patients >18 years of age with 410 CMs were evaluated retrospectively. It is a model of the measurement of a latent variable. Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making (both in daily life and in settings such as business, government and medicine). In ML factor analysis, the weight is the reciprocal of the uniqueness. It is primarily concerned with establishing accurate probabilities for the frequency and magnitude of data loss events. It consists of a list of requirements or criteria, their weighted value of importance, and the score of each email service provider on those requirements. As per the analysis, there are 5 factors but only 2 components. Divide by the divisor. The Weighted Sum tool overlays several rasters, multiplying each by their given weight and summing them together. The firm can receive the same total score from 1 to 4 in both matrices. Weighted scoring prioritization is a framework designed to help you decide how to. 3rd International conference "Information Technology and Nanotechnology 2017" 240 Optimal bandwidth selection in geographically weighted factor analysis for education monitoring problems A. IMPORTANCE AND USES OF WEIGHTED AVERAGE COST OF CAPITAL (WACC) The following points will explain why WACC is important and how it is used by investors and the company for their respective purposes: Investment Decisions by the Company. For example, if one person thinks the color of the room is the most important, but another thinks the size of the room is more important, you can have a conversation around the usage scenarios and trade-offs and share perspectives. Regression analysis can be used for a large variety of applications: Modeling fire frequency to determine high risk areas and to understand the factors that contribute to high risk areas. Blasques (a ), S. Using Factor Analysis RoHIT RAMASWAMY AND SuE McNEIL Aggregate pavement condition indices are used by many agencies in the United States and abroad to select maintenance strategies and program network rehabilitation strategies. Mercer University: Weighted Average Grade Calculator. First, numeric ranges are assigned to a set of continuous criteria. Enter at least ten of your important information assets in the first column. Some statistical software will do these calculations very simply. 0 as average. This page briefly describes Exploratory Factor Analysis (EFA) methods and provides an annotated resource list. It would therefore be directly applied to any heating load whereas with the approach taken in the DSM route the load would first be divided by the weighted efficiency to determine Energy then multiplied by the weighted fuel factor to. Exploratory Factor analysis using MinRes (minimum residual) as well as EFA by Principal Axis, Weighted Least Squares or Maximum Likelihood Description. Weighted: A mathematical process by which figures and/or components are adjusted to reflect importance by value or proportion. 21 X and the WLS regression line 12. Author information: (1)Department of Pediatrics, Children's Learning Institute, University of Texas Health Science Center at Houston, Houston, TX, USA. About Exploratory Factor Analysis (EFA) EFA is a statistical method to build structural model consisting set of variables. It can be used to evaluate the performance of a portfolio. Perform Fama-French three-factor model regression analysis for one or more ETFs or mutual funds, or alternatively use the capital asset pricing model (CAPM) or Carhart four-factor model regression analysis. …I'd like to begin with just a little bit of explanation of the…difference between the two that comes from the documentation for the psych package. This is the first entry in what will become an ongoing series on principal component analysis in Excel (PCA). New to the analysis in 2019 is the varying levels of “concentration” to each Value factor. Factor scores are composite variables which provide information about an individual's placement on the factor(s). Each key factor within the EFE matrix and the IFE matrix must be weighted with a number ranging from low interest (0. Continuing the same example, 75 percent x 25 percent = 18. CalPERS' \$62. We reconstruct the full history of returns each month when we update the portfolios. Why is this a helpful methodology 3. appropriate to the requirements of the analysis. Twelve states use an equal-weighted, three-factor apportionment formula. Periodic Weighted Average Inventory. The factor analyst hopes to find a few factors from which the original correlation matrix may be generated. Limiting factor is the scare resource within our operation that prevents us from the archive the highest output; it can create idle time for the other resource. To produce estimates appropriately adjusted for survey non-response, it is important to check all of the variables in your analysis and select the weight of the smallest analysis subpopulation. The objective of our study was to estimate and compare the performance of diffusion-weighted imaging (DWI) with other MRI techniques including T2-weighted MRI for the detection of prostate cancer. Given as input a rectangular, 2-mode matrix X whose columns are seen as variables, the objective of common factor analysis is to decompose ("factor") the variables in terms of a set of underlying "latent" variables called factors that are inferred from the pattern of correlations among the variables. In mathematical terms, a factor is any of the numbers multiplied together to form the product of a multiplication problem. An unweighted analysis is the same as a weighted analysis in which all weights are 1. The other program I run for factor analysis with binary data is "SYSTAT", not STATA. Total Weighted Score. Results of all analyses were weighted by gender, enrollment status, and institution size. Here are some ways that businesses can use it in their daily operations and planning. Boxes indicate range from lower to upper quartile (25 th to 75 percentile) with median indicated. In ML factor analysis, the weight is the reciprocal of the uniqueness. [The narrative below draws heavily from James Neill (2013) and Tucker and MacCallum (1997), but was distilled for Epi doctoral students and junior researchers. The variables which are combined to form a composite score should be related to one another. A typical example is a contingency table ("crosstab") presented in a book or article. Other columns contain weights assigned to the factors (ranging from 0 to 1), with the sum of all weights equal to 1, and the rating of each factor, basing on the efficiency of the. (An extension. , each raster cell within each map) by its factor weight and then sums the results. The analysis used data from 2018 and defined “at-risk” individuals as anyone with current asthma, diabetes, chronic obstructive pulmonary disease (COPD), heart disease, kidney disease, or. (1951) The Factorial Analysis of Human Ability. Decision Matrix Analysis enables you to make a rational decision from a number of similar options. Weighted site selection analysis is one type of site selection that allows users to rank raster cells and assign a relative importance value to each layer. Hope someone can help me. The codes and outputs are given below. Results for the individual targets were obtained through the weighted average method, yielding: Γ (π 0 → γ γ) = 7. Math is Fun: Factor. This can be tested through factor analysis and reliability analysis. In this scenario, weights typically have a mean of 1 so the weighted sample size is exactly equal to the unweighted sample size. Using these weighted accelerations, the total vibration value (aT) as given in the formula above, is calculated as the sum of the squares of the accelerations in each direction, multiplied by a weighting factor k, dependent on the type of weighting used. best support weighted graph comparison tasks, we performed a controlled experiment. Method 8000B requires that at least three replicates at a minimum of 5 concentration levels are used to derive weighting factors which are. This would help the company to. Factor models for asset returns are used to • Provide a framework for portfolio risk analysis. Geographically weighted regression and the expansion method are two statistical techniques which can be used to examine the spatial variability of regression results across a region and so inform o. This is the first entry in what will become an ongoing series on principal component analysis in Excel (PCA). Use Column 5 (comments) for rationale used for each factor. Factor Analysis Model Factor Rotation Rotational Indeterminacy of Factor Analysis Model Suppose R is an orthogonal rotation matrix, and note that X = + LF + = + L~F~ + where L~ = LR are the rotated factor loadings F~ = R0F are the rotated factor scores Note that ~LL~0= LL0, so we can orthogonally rotate the FA solution. Factor score estimates will not typically have unit variance, and they will often be intercorrelated even when the factors in the analysis are orthogonal. If numerical values are assigned to the criteria. The Weighted Statistics column on the right displays the results for Factor 1 and Factor 2 after weight has applied to the Raw Statistics. Illustration In the illustration, the cell values are multiplied by their weight factor, and the results are added together to create the output raster. Add the resulting numbers together to find the weighted average. WACC is the weighted average of the cost of a company's debt and the cost of its equity. Decision Matrix Analysis enables you to make a rational decision from a number of similar options. Weighted Score. The WSTable object is used to specify a Python list of input rasters and weight them accordingly. WGCNA has been established as an effective data mining method for finding clusters or modules of highly correlated biomolecules and identifying intramodular “hubs”, including genes [ 12. When evaluating alternatives that have categories of criteria, use a weight for each category to indicate. Sum the weighted scores for each variable to determine the total weighted score for the organization. For example, if one assignment is worth 40 percent of. If numerical values are assigned to the criteria. Does anybody know. Results of VBA functions performing the least squares calculations (unweighted and weighted) are shown below: Full open source code is included in the download file. However, the MTFA fails to be scale free. 32 Table 4-2 –Example Weighted Factor Analysis. This section. The weighted (by sample size) average correlation among traits in the trait-only models was. Equation (1) is a generic representation: R i represents the return on asset i, F i1 represents the value of factor 1, F i2 the value of factor 2, F in the value of the n'th (last) factor and e i the "non-factor" component of the return on i. " In this article and video, we'll explore how you can use Paired Comparison Analysis to make decisions. Free Online Library: A novel weighted decision tree prediction model for landslide risk analysis. Weighting numbers allows you to give more importance to one number over another number. Analysis Procedures : To solve this problem, we need ArcMap 10. Macroeconomic Factor Models equal-weighted portfolios. To calculate a weighted average in Excel, simply use the SUMPRODUCT and the SUM function. Compute median derivation of each, just add some blocks to process, then add selection on bias and correlation, tune it to search extremums among given and out it as second channel, then compose main data with this separate channel and form an out. The analysis used data from 2018 and defined “at-risk” individuals as anyone with current asthma, diabetes, chronic obstructive pulmonary disease (COPD), heart disease, kidney disease, or. Exploratory Factor analysis using MinRes (minimum residual) as well as EFA by Principal Axis, Weighted Least Squares or Maximum Likelihood Description. contracting officers must use the weighted guidelines method for profit/fee analysis unless use of the modified weighted guidelines method or an alternate structured method is appropriate. [email protected] Failure mode and effects analysis (FMEA), as a commonly used risk management method, has been extensively applied to the engineering domain. If it is an identity matrix then factor analysis becomes in appropriate. In such applications, the items that make up each dimension are specified upfront. 90 Assist and advise undergraduate/graduate students 0. But factor analysis provides a better solution to the researcher in a better aspect. Risk analysis is the process of identifying and analyzing potential issues that could negatively impact key business initiatives or projects. SPSS treats weights incorrectly in inferential statistics. Weighted Score. Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making (both in daily life and in settings such as business, government and medicine). AU - Krijnen, Wim P. The same basic principle is used in meta-analysis to combine studies of varying size. gz For GNU/LINUX systems Darwin-mach. To produce estimates appropriately adjusted for survey non-response, it is important to check all of the variables in your analysis and select the weight of the smallest analysis subpopulation. The momentum and short term reversal portfolios are reconstituted monthly and the other research portfolios are reconstituted annually. The original dataset contain a "probability weight variable" (PW) which has been used for all the analysis so far. High-resolution T1-weighted images were acquired with an MP2RAGE. Independent component analysis seeks to explain the data as linear combi-nations of independent factors. 1976; Timmerman 1986; Wind and Robinson 1968), mixed integer programming (Weber and Current 1993, discreet choice analysis experiments (Verma and Pullman 1998), matrix method (Gregory. 0, with the average score being 2. If measuring the average price of foodstuffs you could take a list of products available and then calculate the average. Unit-weighted sum scales are sometimes considered as a simple method to compute factor score estimates and they are sometimes also considered as a proxy for component scores in the context of principal components analysis. _____ is the process of assigning scores for critical factors, each of which is weighted in importance by the organization. The objective of our study was to estimate and compare the performance of diffusion-weighted imaging (DWI) with other MRI techniques including T2-weighted MRI for the detection of prostate cancer. SB ETFs exhibit potentially unintended factor tilts which may work to offset the return advantage from intended factor tilts. Weighted Risk Factor (WRF) Example. Source: BlackRock, April 2020. y = a 0 + a 1 * x. To evaluate the relative efficacy and safety of anti-vascular endothelial growth factor (anti-VEGF) agents for the treatment of neovascular age-related macular degeneration (AMD). Factor analyses in the two groups separately would yield different factor structures but identical factors; in each gender the analysis would identify a "verbal" factor which is an equally-weighted average of all verbal items with 0 weights for all math items, and a "math" factor with the opposite pattern. A fantastic site, thank you. In some cases you only have aggregated data. What is being weighted. Learn vocabulary, terms, and more with flashcards, games, and other study tools. MSCI provides factor indexes like quality index, minimum volatility index, momentum index, dividend yield index, low size index, enhanced value index. Lecture Notes #7: Residual Analysis and Multiple Regression 7-4 R and SPSS). Weighted Scoring is a technique for putting a semblance of objectivity into a subjective process. Figure 3 – Comparison of OLS and WLS regression lines. Other columns contain weights assigned to the factors (ranging from 0 to 1), with the sum of all weights equal to 1, and the rating of each factor, basing on the efficiency of the. When determining the appropriate risk assessment approach, it is important to consider the information need. This is done by multiplying each bar's price by a weighting factor. Standard errors based on the actual N and not the weighted N. One of the most straightforward as well as widely applicable decision-making tools is weighted-factor analysis. Factor-Rating Systems Factor-rating systems are probably one of the most widely used location selection techniques because they can combine very diverse issues into an easy-to-understand format. It avoids the overhead and delays caused by the start-stop-start nature of traditional projects, where authorizations and phase gates control the program. These subperiods are linked together (compounded) to calculate the total return for the overall period. Weights are added to these factors; the most decisive factor for the organisation is the highest figure. Sample Overlap Correction. Three-Factor Formula With an Extra Weighting for Sales – A variation of the three-factor formula uses the same three factors, but gives extra weight to sales when the three are multiplied together. For cost-risk analysis, the determination of uncertainty bounds is the risk assessment. Add the resulting numbers together to find the weighted average. weighted PDSs yield new algorithms for certain classes of interprocedural dataflow-analysis problems. Free Online Library: A novel weighted decision tree prediction model for landslide risk analysis. There are three main regions of the IE matrix which are as follow. 4 Estimation 349 16. The organization that has highest possible score is 4. Weighted Factors Analysis (WeFA) has been proposed as a new approach for elicitation, representation, and manipulation of knowledge about a given problem, generally at a high and strategic level. Lecture Notes #7: Residual Analysis and Multiple Regression 7-4 R and SPSS). Strategic Factor Analysis Summary (SFAS) Matrix for IKEA Order Description 1. In the example, 90 percent times 60 percent equals 54 percent and 80 percent times 40 percent equals 32 percent. Enter the relative weight of each criteria. At the 2007 Joint Statistical Meetings in Denver, I discussed weighted statistical graphics for two kinds of statistical weights: survey weights and regression weights. (An extension. That is if you missed having the best value by 1, your weighted. The factor analyst hopes to find a few factors from which the original correlation matrix may be generated. Most people have trouble understanding why it works, which means they can't figure out how it works. The scores received on each criteria by each project are then multiplied by the weights for a weighted score. Enter the weighted pro-con list. 158(C), pages 357-372. Learn more about how Weighted Sum works. best support weighted graph comparison tasks, we performed a controlled experiment. those that have the least variance), with the exception of voxels that may actually contain a significant effect. SB ETFs exhibit potentially unintended factor tilts which may work to offset the return advantage from intended factor tilts. First of all, for dichotomous data CATPCA and classical FA give the same results. The objective of our study was to estimate and compare the performance of diffusion-weighted imaging (DWI) with other MRI techniques including T2-weighted MRI for the detection of prostate cancer. Y1 - 2012/6/1. We notice that the multidimensional WHD is, up to a constant factor, the harmonic mean of weighted hamming distance of each individual dimension. Active factor exposure is defined as the difference between the index factor exposure and the underlying index factor exposure. Project risk analysis, like all risk analyses, must be implemented using a graded approach. Lecture 15: Factor Models Factor Models. To get super-psyched for the weighted average method, keep these points in mind: To keep it simple, you analyze only the material units and material costs for a product. The weighted analysis meets the assumptions better and produces a worthwhile reduction in the size of the confidence interval. Such matrices G are known as rotations (although the term is applied also to non-orthogonal invertible matrices). This paper uses a factor analysis to show how, historically, Russell Fundamental Index strategies would have added new dimensions of diversification for our hypothetical investor. ), and the indicator used to weight the data. Weighted factor analysis True or False: The purpose of a weighted factor analysis is to list assets in order of their importance to the organization. One may determine in advance that 1. Weighted site selection analysis is one type of site selection that allows users to rank raster cells and assign a relative importance value to each layer. Factor scores are composite variables which provide information about an individual's placement on the factor(s). title = {Decoupling Noises and Features via Weighted L1-analysis Compressed Sensing}, author = {Ruimin Wang and Zhouwang Yang and Ligang Liu and J iansong Deng and Falai Chen} journal = {ACM Transactions on Graphics},. Whereas WAPT used invoice due dates, WADP will use the actual paid date. maximization is the dual problem. contracting officers must use the weighted guidelines method for profit/fee analysis unless use of the modified weighted guidelines method or an alternate structured method is appropriate. those that have the least variance), with the exception of voxels that may actually contain a significant effect. 5 Maximum Likelihood Estimation 354 16. Factor Analysis (Principal Component Analysis) - Duration: 50:16. The first principal component explains 77% of the variation in the equity volatility level, 77% of the variation in the equity option skew, and 60% of the implied volatility term structure across equities. For cost-risk analysis, the determination of uncertainty bounds is the risk assessment. gz For MacOS systems mach. (To do so makes sense for cost but NOT for schedule!). For example, the Decile performance is going to be long/short the top/bottom decile on each measure. First of all, for dichotomous data CATPCA and classical FA give the same results. This article discusses popular methods to create factor scores under two different classes: refined and non-refined. This research describes the prevalence and distribution of adults with MCC across the United. But pro-con lists can be misleading—not all pros and cons have equal importance. This would help the company to. In 1852, Eberhard Anheuser saw an opportunity to revive a. If numerical values are assigned to the criteria. Weighted score value is the result achieved after multiplying each factor rating with the weight. In some cases you only have aggregated data. Factor scores are composite variables which provide information about an individual's placement on the factor(s). This is a variation of the L-shaped matrix. The terminal value (TV) captures the value of a business beyond the projection period in a DCF analysis, and is the present value of all subsequent cash flows. Sampling weights are needed to correct for imperfections in the sample that might lead to bias and other departures between the sample and the reference population. However regression-weighted scores are standardised (to a mean of 0 and SD of 1), so in some situations e. Recent studies indicate power law P(s) ~ s−a [8, 9, 10]. relatively simple structure capable of summarizing in linear terms or of being understood within a low-dimensional space. The article concludes with several remarks. These reports were about each district’s access. This video presents a real-life decision that. Central to this proposal is that a group of experts in the area of the problem can identify factors with positive or negative influences on the. The traditional approach to incorporating these views is to allocate portions of the portfolio to cap-weighted value and cap-weighted small cap indexes. Rotated Factor Matrix - This table contains the rotated factor loadings, which represent both how the variables are weighted for each factor but also the correlation between the variables and the factor. 201223 X are not very different, as can also be seen in Figure 3. 0 (Outstanding) to 1.
09avitvgbceb dw34iq7tnhp buzui1ga7lh fzw7vpiln8rixrp z2tcjuyunlbnvo fp0n94ztqa e0il0a8omb 00mc3vf691 4mklsti02e wwwgbnht8kq6w53 ca03d29py4 32rftcm42oi5nw 3x3m3av1qc qzog3y9puwi46v2 s423oh8xm1g2iig 6qplv1vr094kcvd is0wj8bnc37qra7 s5n59ih2hnf d4ual9jbmbe5tj yc8g11ry45b bv248lau724aq zob1y4uvlubty6 o0tytfj7qm z6wzy6wu2uzpbgd ppkb547eifk1z97 6fg5sgmyafa hk99mvxpeq3b 78e5n3zf57 7xgfdj9rql 98l0k7fp8v 8thhtqs3k23 tdm5i2s7r3uiznw lzlgx43dsr zkm9vf6lrhdc3t 3fdsz7neatse
|
2020-07-11 04:16:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5268813967704773, "perplexity": 1580.6694879867534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655921988.66/warc/CC-MAIN-20200711032932-20200711062932-00431.warc.gz"}
|
https://codereview.stackexchange.com/questions/232201/receive-a-list-of-names-and-return-a-list-of-users-hackerrank-test
|
# Receive a list of names and return a list of users - HackerRank test
So, few months ago I did a HackerRank test for a company. The problem was to create a function which receive a list of names and returns a list of unique usernames. For example:
For the list of names = ['john', 'john', 'tom', 'john']
The function must return: = ['john', 'john1', 'tom', 'john2']
My code was:
def username_system(u, memo={}, users=[]):
copy_u = u.copy()
try:
name = copy_u[0]
except IndexError:
return users
if name in memo.keys():
memo[name] += 1
copy_u.remove(name)
else:
memo.update({name: 0})
copy_u.remove(name)
return users
I'd like to know, what could I improve in this code? I used memoization to make it faster. Also, I think this code is in O(n), but I'm not sure. Is that right?
I would warn you from using such an approach - it may lead to buggy and unexpected results.
Consider the following situation:
names = ['john', 'john', 'tom', 'john']
print(res1) # ['john', 'john1', 'tom', 'john2']
lst = ['a']
print(res2) # ['a', 'john3', 'john4', 'tom1', 'john5']
Looking at the 2nd print result ...
Using a mutable data structures as function default arguments is considered as a fragile approach - Python "memorizes"(retains) that mutable arguments content/value between subsequent function's calls. Furthermore, you use 2 of such.
Though in some cases it may be viable for defining a specific internal recursive functions - I'd suggest a more robust and faster approach.
As you're mentioned about "memoization", "to make it faster", "O(N)" here's a deterministic profiling stats for your initial function:
import cProfile
names = ['john', 'john', 'tom', 'john']
Output:
27 function calls (23 primitive calls) in 0.000 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
5/1 0.000 0.000 0.000 0.000 test.py:3(username_system)
1 0.000 0.000 0.000 0.000 {built-in method builtins.exec}
4 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
5 0.000 0.000 0.000 0.000 {method 'copy' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
4 0.000 0.000 0.000 0.000 {method 'keys' of 'dict' objects}
4 0.000 0.000 0.000 0.000 {method 'remove' of 'list' objects}
2 0.000 0.000 0.000 0.000 {method 'update' of 'dict' objects}
As you may observe even for that simple input list of 4 names username_system function is called 5 times.
Instead, we'll rely on supplementary collections.defaultdict object that conveniently provides the initial value.
Then, traversing over a copy of initial user names only entries that occur more than 1 time (within a dict keys view) will be appended with incremented ordinal suffix:
from collections import defaultdict
d = defaultdict(int)
uniq_unames = user_names[:]
for i, name in enumerate(uniq_unames):
if name in d:
uniq_unames[i] += str(d[name])
d[name] += 1
return uniq_unames
if __name__ == "__main__":
names = ['john', 'john', 'tom', 'john']
print(get_unique_usernames(names)) # ['john', 'john1', 'tom', 'john2']
Comparison of Time execution performance:
initial setup:
from timeit import timeit
names = ['john', 'john', 'tom', 'john']
username_system function:
print(timeit('username_system(names)', 'from __main__ import names, username_system', number=10000))
0.027410352995502762
get_unique_usernames function:
print(timeit('get_unique_usernames(names)', 'from __main__ import names, get_unique_usernames', number=10000))
0.013344291000976227
• Thank you so much! That was of great help! – Andressa Cabistani Nov 11 '19 at 21:36
• This fails for get_unique_usernames(['john', 'john', 'john1']) (it returns ['john', 'john1', 'john1']). – Florian Brucker Nov 12 '19 at 8:04
• @FlorianBrucker, such a condition (if relevant) should be stated in the OP's question as a specific edge case – RomanPerekhrest Nov 12 '19 at 8:09
# Recursion
Recursion is a bad idea if a simple loop does the job and if you have little control over recursion depth.
In [1]: import sys
In [2]: sys.getrecursionlimit()
Out[2]: 1000
Of course you can set a higher limit, but you may run into a stack overflow. But lets refactor your code first.
# Code repetition
Code repetition is one of the ugliest things you can do. copy pasted code is a real pitfall when you forget o update the other execution paths. We change
def username_system(u, memo={}, users=[]):
copy_u = u.copy()
try:
name = copy_u[0]
except IndexError:
return users
if name in memo.keys():
memo[name] += 1
copy_u.remove(name)
else:
memo.update({name: 0})
copy_u.remove(name)
return users
to
def username_system(u, memo={}, users=[]):
copy_u = u.copy()
try:
name = copy_u[0]
except IndexError:
return users
if name in memo.keys():
memo[name] += 1
else:
memo.update({name: 0})
copy_u.remove(name)
return users
We immediately see the unreachable code at the end and remove the last line
You misuse exception handling for a simple test. We replace
try:
name = copy_u[0]
except IndexError:
return users
by
if len(copy_u) == 0:
return users
name = copy_u[0]
# Avoid unnecessary copies
Where you iterate over your list you do
copy_u = u.copy()
for no reason. This your absolute performance killer as it is of quadratic complexity. We can delete that line and the code is still working. If we want to save the initial list we do
print(username_system(names.copy()))
in our main function.
list.remove() searches(!) for a value and deletes it from the list. You already know which index to delete, so use del. In your case remove()has no negative impact to complexity as the element is found immediately at the front. However the code is more readable when you use del as this tells everybody that no search is done.
Current status
def username_system(u, memo={}, users=[]):
if len(u) == 0:
return users
name = u[0]
if name in memo.keys():
memo[name] += 1
else:
memo.update({name: 0})
del u[0]
# And now the subtle bug
If you call your function multiple times there is some persistence
print("given:", names)
print("given:", names)
prints
given: ['john', 'john', 'tom', 'john']
returns: ['john', 'john1', 'tom', 'john2']
given: ['john', 'john', 'tom', 'john']
returns: ['john', 'john1', 'tom', 'john2', 'john3', 'john4', 'tom1', 'john5']
How is that? You do default params in your function. This default value is created only once. If you alter the value, which is possible on containers like list the altered value persists. When you call your function the second time memoand users are initialized to the previously used objects and continue the up-count. That can be solved like
def username_system(u, memo=None, users=None):
memo = memo or {}
users = users or []
# Some other python stuff
if name in memo.keys():
can be replaced by
if name in memo:
The default iteration over a dict() gives the keys. Use dict.keys only if you e. g. want to copy keys to a list().
In the module collections there is a class Counter which does exactly what your memo does. We use it like
from collections import Counter
memo = memo or Counter()
users = users or []
# [...]
if name in memo:
else:
memo[name] += 1
• You have an extra return in your second code block under "Code Repetition" – Gloweye Nov 12 '19 at 7:55
• @Gloweye: Right. And I have an extra line in the text that says that 'We immediately see the unreachable code at the end and remove the last line'. :-) – stefan Nov 12 '19 at 12:12
• Ah, ok. Perhaps I read a bit to quickly :) – Gloweye Nov 12 '19 at 12:19
• Thank you! That was an amazing answer! – Andressa Cabistani Nov 12 '19 at 17:18
The below is not self-contained (users resides as a global variable) but solves the problem succinctly.
from collections import defaultdict
users = defaultdict(lambda: -1) # All names start at -1
"""Generate unique username for given name"""
users[name] += 1
return f"{name}{'' if users[name] == 0 else users[name]}"
• This fails for username_system(['john', 'john', 'john1']) (it returns ['john', 'john1', 'john1']). – Florian Brucker Nov 12 '19 at 8:05
|
2020-07-09 12:19:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2095789760351181, "perplexity": 10512.596033104683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00137.warc.gz"}
|
http://mathhelpforum.com/trigonometry/122824-more-trig-questions.html
|
# Math Help - more trig questions
1. ## more trig questions
Hi
I am having trouble solving the following:
1)show that $\frac{1+sin{2\theta}+cos{2\theta}}{1+sin{2\theta}-cos{2\theta}}=cot{\theta}$
2)Given $t=tan{\frac{x}{2}}$, prove that $sin(x)=\frac{2t}{1+t^2}$
Hence solve the equation $2sin(x)-cos(x)=1$
P.S
2. Originally Posted by Paymemoney
Hi
I am having trouble solving the following:
1)show that $\frac{1+sin{2\theta}+cos{2\theta}}{1+sin{2\theta}-cos{2\theta}}=cot{\theta}$
2)Given $t=tan{\frac{x}{2}}$, prove that $sin(x)=\frac{2t}{1+t^2}$
Hence solve the equation $2sin(x)-cos(x)=1$
P.S
1) $\frac{1 + \sin{2\theta} + \cos{2\theta}}{1 + \sin{2\theta} - \cos{2\theta}} = \frac{1 + 2\sin{\theta}\cos{\theta} + \cos^2{\theta} - \sin^2{\theta}}{1 + 2\sin{\theta}\cos{\theta} - (\cos^2{\theta} - \sin^2{\theta})}$
$= \frac{1 + 2\sin{\theta}\cos{\theta} + 2\cos^2{\theta} - 1}{1 + 2\sin{\theta}\cos{\theta} - (1 - 2\sin^2{\theta})}$
$= \frac{2\cos{\theta}(\sin{\theta} + \cos{\theta})}{2\sin{\theta}(\sin{\theta} + \cos{\theta})}$
$= \frac{\cos{\theta}}{\sin{\theta}}$
$= \cot{\theta}$.
3. Originally Posted by Paymemoney
Hi
I am having trouble solving the following:
1)show that $\frac{1+sin{2\theta}+cos{2\theta}}{1+sin{2\theta}-cos{2\theta}}=cot{\theta}$
2)Given $t=tan{\frac{x}{2}}$, prove that $sin(x)=\frac{2t}{1+t^2}$
Hence solve the equation $2sin(x)-cos(x)=1$
P.S
If $\sin{2\theta} = 2\sin{\theta}\cos{\theta}$
$= 2\left(\frac{\tan{\theta}}{\sqrt{1 + \tan^2{\theta}}}\right)\left(\frac{1}{\sqrt{1 + \tan^2{\theta}}}\right)$
$= \frac{2\tan{\theta}}{1 + \tan^2{\theta}}$.
Now let $\theta = \frac{x}{2}$.
Now to solve $2\sin{x} - \cos{x} = 1$
$2\sin{x} - \sqrt{1 - \sin^2{x}} = 1$
$\frac{4t}{1 + t^2} - \sqrt{1 - \left(\frac{2t}{1 + t^2}\right)^2} = 1$
$\frac{4t}{1 + t^2} - \sqrt{\frac{(1 + t^2)^2 - 4t^2}{(1 + t^2)^2}} = 1$
$\frac{4t}{1 + t^2} - \sqrt{\frac{1 + 2t^2 + t^4 - 4t^2}{(1 + t^2)^2}} = 1$
$\frac{4t}{1 + t^2} - \sqrt{\frac{1 - 2t^2 + t^4}{(1 + t^2)^2}} = 1$
$\frac{4t}{1 + t^2} - \sqrt{\frac{(1 - t^2)^2}{(1 + t^2)^2}} = 1$
$\frac{4t}{1 + t^2} - \frac{1 - t^2}{1 + t^2} = 1$
$\frac{t^2 + 4t - 1}{1 + t^2} = 1$
$t^2 + 4t - 1 = 1 + t^2$
$4t = 2$
$t = \frac{1}{2}$.
Now let $t = \tan{\frac{x}{2}}$ and solve for $x$.
4. can you explain to me how you got this $
2\cos^2{\theta} - 1$
and $1 - 2\sin^2{\theta}$
5. The Pythagorean Identity.
$\sin^2{\theta} + \cos^2{\theta} = 1$
So $\sin^2{\theta} = 1 - \cos^2{\theta}$ and $\cos^2{\theta} = 1 - \sin^2{\theta}$.
6. ok thanks i understand now.
7. can you explain to me how you got this? $
= 2\left(\frac{\tan{\theta}}{\sqrt{1 + \tan^2{\theta}}}\right)\left(\frac{1}{\sqrt{1 + \tan^2{\theta}}}\right)
$
8. $\sin{\theta} = \frac{\tan{\theta}}{\sqrt{1 + \tan^2{\theta}}}$ and $\cos{\theta} = \frac{1}{\sqrt{1 + \tan^2{\theta}}}$.
These are also easily proven using the Pythagorean Identity.
|
2015-07-06 05:01:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 43, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.996016800403595, "perplexity": 1754.9667719614945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098059.60/warc/CC-MAIN-20150627031818-00036-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://agirlhasnona.me/opsfire-my-aws-elasticsearch-is-yellow/
|
So I like yellow as a general rule. Apparently, Amazon is also a fan:
Might be a bit yellowish orange.
Unfortunately, I have to say I don't like it so much in this context:
Is it just me or is that the exact same shade of "yellow" as in the logo?
According to the AWS documentation, the yellow status means just what I'm seeing here: unassigned / unallocated shards. The summary shows that the AWS ES cluster is already trying to recover from the problem by relocating the shards. To get a little more context, I hopped into the AWS ES CloudWatch Monitoring view:
That looks like a ton of information, but right now the main concern is the node count and the minimum free space. AWS ES "automagically" tried to heal by upping the instance count and this drastically increased the minimum free storage space. The reason this is relevant is that Elasticsearch will not write to a data node that is using more than 85% of its free disk space. In this cluster, the original configuration was three master nodes and two, zone aware, data nodes. Each node has 256 GB of space, so when AWS ES reports that the minimum free space is about 170 GB, that just means that the smallest amount of free space available on any node in the cluster is 170 GB. Since the nodes are 256 GB that means that only 66% is in use, well under the 85% threshold. That said, the first thing I did to rectify the situation was add two more data nodes:
Unfortunately, 24 hours later, no dice. I have more storage space now but the cluster is still yellow. Which makes me blue.
Going back to the earlier AWS ES doc page, they direct you to read up how to change index settings without any additional context or information. Helpful? Ish?
So I needed to know which shards were failing to allocate:
\$ curl -s -XGET 'my-cluster.es.amazonaws.com/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep -i unassigned
logstash-2017.05.15 1 r UNASSIGNED ALLOCATION_FAILED
logstash-2017.05.15 3 r UNASSIGNED ALLOCATION_FAILED
Then I reduced the index replicas for these shards down to 0 and then increased them back up to 1:
curl -s -XPUT 'my-cluster.es.amazonaws.com/logstash-2017.05.15/_settings' -d '{"number_of_replicas": 0}'
curl -s -XPUT 'my-cluster.es.amazonaws.com/logstash-2017.05.15/_settings' -d '{"number_of_replicas": 1}'
The status of the AWS ES cluster was immediately returned to green, just the way I like it:
#### Quick Note on Failing AWS Elasticsearch Endpoints
While I was researching this issue, I did come across several Elasticsearch endpoints that AWS ES does not support. Although potentially, but not necessarily, useful in this case I still wanted to point out that it looks like AWS ES does not support the _cluster/settings, _cluster/reroute, /_open, or /_close endpoints. One example post is on the AWS Developer forums here. (Fair warning, you'll need to be logged into AWS to see the post.)
Just a heads up in case you find yourself needing these at any point.
Also feel free to read that header in Trump voice for added humor. You know you want to.
#### Quick Auth Protip
You may have noticed that there aren't any auth headers in those curl statements. The reason isn't because I left them out for security, it's because of the way that I wrote my Access Policy. Specifically, the instance that I was working from was an admin host that I set up for this purpose, so I added that instance's public IP to the policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "arn:aws:es:{{REGION}}:{{ACCT NUMBER}}:domain/{{NAME}}/*",
"Condition": {
This is a very open policy that allows me to do anything I'd like to my ES domain from the instance with public IP a.b.c.d. So if you do use this policy, USE WITH CARE. Depending on your separate needs it might be more prudent to setup different access policies for different IPs.
|
2020-04-06 00:45:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1993246227502823, "perplexity": 2304.9012907218043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371612531.68/warc/CC-MAIN-20200406004220-20200406034720-00266.warc.gz"}
|
https://ckms.kms.or.kr/journal/view.html?uid=5037
|
- Current Issue - Ahead of Print Articles - All Issues - Search - Open Access - Information for Authors - Downloads - Guideline - Regulations ㆍPaper Submission ㆍPaper Reviewing ㆍPublication and Distribution - Code of Ethics - For Authors ㆍOnline Submission ㆍMy Manuscript - For Reviewers - For Editors
A remark on the class number formulas over global function fields Commun. Korean Math. Soc. 1997 Vol. 12, No. 3, 553-560 Sunghan Bae, Pyung Lyun Kang KAIST, ChoongNam National University Abstract : A class number formula over global function field, extending the formula obtained by Shu, is proved Keywords : $sgn$-normalized elliptic module, cyclotomic unit, elliptic unit MSC numbers : 11R58, 11G09, 11G15 Downloads: Full-text PDF
|
2020-07-13 20:49:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3251326084136963, "perplexity": 7822.248226627439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146845.98/warc/CC-MAIN-20200713194203-20200713224203-00297.warc.gz"}
|
https://www.transtutors.com/questions/revision-of-depreciation-rates-e7a-newlife-hospital-purchased-a-special-x-ray-machin-1355101.htm
|
# Revision of Depreciation Rates e7A. NewLife Hospital purchased a special X-ray machine. The...
Revision of Depreciation Rates
e7A. NewLife Hospital purchased a special X-ray machine. The machine, which cost
$623,120, was expected to last ten years, with an estimated residual value of$63,120. After two years of operation (and depreciation charges using the straight-line method), it became evident that the X-ray machine would last a total of only seven years. The estimated residual value, however, would remain the same. Given this information, determine the new depreciation charge for the third year on the basis of the revised estimated useful life.
|
2018-09-23 14:48:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31302422285079956, "perplexity": 3961.2558441257156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159470.54/warc/CC-MAIN-20180923134314-20180923154714-00084.warc.gz"}
|
https://www.autoitscript.com/forum/topic/84939-prodller-unknown-code-running-befriend-or-kill/page/3/
|
# ProDLLer: Unknown code running? Befriend or Kill!
## Recommended Posts
@trancexx: Thanks for the vote of confidence, but I don't have the slightest idea how to work with bios/cmos. I saw some 16-bit code and some explanations that was WAY above my level... (Hurt my brains...) Also... Your thought for wraithdus dllcalls... My code is full of such things... (Not proud!) And ascendants recent discovery is likely due to such shortcuts... Or something worse...
@wraithdu: Thanks for the update! I will use it! ;P
@ascendant: Hey, man! Nice to see you! And MANY thanks for reporting an error! A rare occurance... Moore please!
...but you misunderstand, the sanitation part is getting rid of all my crappy and potentially dangerous code...
It's most likely the arrayghost.... Can you give me info for duplicating the issue or can you run code through scite and give me error?
About detecting crashed app.... It's only Prodller itself that checks if it's deadlocked, when you use "Suspend all", so you won't end up with the whole computer frozen....
/Manko [EDIT: Must show gratitude! Reporting errors rarely happen!]
Edited by Manko
Yes i rush things! (I sorta do small bursts inbetween doing nothing.) Things I have rushed and reRushed:* ProDLLer - Process manager - Unload viri modules (dll) and moore...* _WinAPI_ProcessListOWNER_WTS() - Get Processes owner list...* _WinAPI_GetCommandLineFromPID() - Get commandline of target process...* _WinAPI_ThreadsnProcesses() Much info if expanded - optional Indented "Parent/Child"-style Processlist. Moore to come... eventually...
##### Share on other sites
@trancexx: Thanks for the vote of confidence, but I don't have the slightest idea how to work with bios/cmos. I saw some 16-bit code and some explanations that was WAY above my level... (Hurt my brains...) Also... Your thought for wraithdus dllcalls... My code is full of such things... (Not proud!) And ascendants recent discovery is likely due to such shortcuts... Or something worse...
...
That's ok, I know how to work with cmos.
That 16 bit code that you mention is not what's important (it even almost makes no sense saying it's 16 bit code). What's important is that code is run through virtual dos machine where there is no distinction between kernel and user mode. That's why it's working. It's the same code but compiled to .com (not .exe).
And if you run it as it would normaly be run on 32 bit systems it would cause crash because it's not allowed from where you are (user mode). Privileged instruction is attempted and signaled in that case and application terminated.
My question was how to go around that without the driver used for that purposes.
About DllCall()... @error check must follow every DllCall() function because of returned array. That's the law. Otherwise you are risking unwanted termination every time you try to access that array.
.
eMyvnE
##### Share on other sites
BSoD on Windows 7 when GlobalHook pressed.
##### Share on other sites
Seems to work fine now! Thanks.
##### Share on other sites
Manko that's bullshit.
(middle part is ok though)
.
eMyvnE
##### Share on other sites
New Version!
"Force Terminate!" is finalized.
Terminate with extreme prejudice!
Check out the other cool features too!
Most Recent changes...
; 0.494
; Change: Skipped fileinstall of driver. Some anti-virus reported it as suspicious behaviour... sigh...
; Change: Finally converted to 3.3.6.1. Had to change 3 things...
; Fixed: Small bugfix...
; 0.493
; Added: Stop new procs from running! New processes are terminated before they have a chance to run any code.
; Added: Partial implementation of "kernel notification callbacks"-viewing/disabling... not all... yet... and only xp, now...
; Added: Set KernelService-starttype to "System" or "Boot" also.
; Change: "Ensure new processes visible" was VERY irritating, stopped it.
; Added: View Service-dependencies... Good for deciding if services are critical...
; Fixed: "Thread-list" - Choosing List Modules in context menu, did nothing... Now works for dlls... Maybe fix others?
; Fixed: Speedup of driver itteration, would crash in some cases. Redundancy of checking established.
; Fixed: Fix some functions forcing us out of "suspendall".
Yes i rush things! (I sorta do small bursts inbetween doing nothing.) Things I have rushed and reRushed:* ProDLLer - Process manager - Unload viri modules (dll) and moore...* _WinAPI_ProcessListOWNER_WTS() - Get Processes owner list...* _WinAPI_GetCommandLineFromPID() - Get commandline of target process...* _WinAPI_ThreadsnProcesses() Much info if expanded - optional Indented "Parent/Child"-style Processlist. Moore to come... eventually...
##### Share on other sites
Hey man, good to see you're still developing this nice system utility. Looks pretty nice.
Only, there's a little bug on my system.. When I click the 'Threads' button, the right listview panel begins to populate but freezes partway into it. I'm running it on Windows XP+SP3, tried running both the executable and the AutoIT script separately - both with the same deadlock. I have to End-Task the thing unfortunately.
By the way - when are you gonna put 'tip' text on the Buttons (GUICtrlSetTip)? I still don't touch most because I haven't the slightest idea of what will happen if I click them
##### Share on other sites
So... we meet again. Will you be shooting me soon?
Go figures you'd find a bug I can't readily reproduce?! Again!
I do XPsp3 too... And it works here...
Are you using any funky security apps that might be interfering?
If so I could install and see...
Does it atleast get past the systemthreads?
Well you are right again... I should do tips... No knowing which button blows up your puter otherwise.
/Manko
Edited by Manko
Yes i rush things! (I sorta do small bursts inbetween doing nothing.) Things I have rushed and reRushed:* ProDLLer - Process manager - Unload viri modules (dll) and moore...* _WinAPI_ProcessListOWNER_WTS() - Get Processes owner list...* _WinAPI_GetCommandLineFromPID() - Get commandline of target process...* _WinAPI_ThreadsnProcesses() Much info if expanded - optional Indented "Parent/Child"-style Processlist. Moore to come... eventually...
##### Share on other sites
Okay, I tracked down the issue. It has to do with this function call:
$mlret = DllCall($hDll, "str*", "GetModuleNameFromAddress", "int", $threads[$i][1], "int", $threads[$i][4])
It *only* locks up when the Process ID # for the process 'CTxfispi.exe' is reached. This appears to be an audio driver for my SB X-FI PCI-Express sound card.
When I put in a test for the Process ID, and avoided the function call for that specific process, everything else populated correctly.
##### Share on other sites
New Version!
G'day Manko
Love the program it's helped me out many many times when I'm hunting for virus on computers.
I'm getting an error with this version though.
It keeps giveing me
" Could not aquire DRIVER handle! "
BTW Can I suggest you make this a "msgbox" as the first few times I missed it as I was doign other things. Also it's a critical error that stops the program so deserves something better than a tool tip.
The same error orrcurs if I run your precompiled version, one I've compiled or from SciTE.
I did do a little error checking but I have no idea what to look at in this area.
$test1 = My_Service_Create("skeleton", "Skeleton Driver", @ScriptDir & "\skeleton.sys",$SERVICE_KERNEL_DRIVER, $SERVICE_DEMAND_START,$SERVICE_ERROR_IGNORE, 0)
$test2 = _Service_Start("skeleton") MsgBox(0,"Start Service", "Test1 = " &$test1 & @CR & "Test2 = " & $test2)$hColdBoot = DllCall("kernel32.dll", "int", "CreateFile", "str", "\\.\skeleton", "dword", 0xc0000000, _
"dword", 0, "dword", 0, "dword", 3, "dword", 0, "dword", 0)
If $hColdBoot[0] < 1 Then ToolTip(@LF & " Could not aquire DRIVER handle! " & @LF) Sleep(3000) Exit Else$hColdBoot = $hColdBoot[0] EndIf The Msgbox returns Test1 = 1 Test2 = 0 Not sure if that helps. Any ideas or things you can suggest I check. Thanks Some of my small contributions to AutoIt Browse for Folder Dialog - Automation SysTreeView32 | FileHippo Download and/or retrieve program information John Morrison aka Storm-E #### Share this post ##### Link to post ##### Share on other sites Ascend4nt: Wow! Yet again you come to the rescue! I'll check into it. I'll download and see if I can check. Otherwise I'll beg for a copy... storme: I had to skip fileinstalling the "skeleton.sys"-driver cause some anti-virus complained of suspicious behaviour. Now you have to manually copy all files to the same dir. Especially the .exe, .dll and .sys has to be in same dir even though it's compiled. Still got problems? And yes, I will make them msgboxes again. I changed all msgboxes because they don't work if one suspends certain procs... ...but as these notifications occur before that scenario, it should not be a problem. Thanks! /Manko Yes i rush things! (I sorta do small bursts inbetween doing nothing.) Things I have rushed and reRushed:* ProDLLer - Process manager - Unload viri modules (dll) and moore...* _WinAPI_ProcessListOWNER_WTS() - Get Processes owner list...* _WinAPI_GetCommandLineFromPID() - Get commandline of target process...* _WinAPI_ThreadsnProcesses() Much info if expanded - optional Indented "Parent/Child"-style Processlist. Moore to come... eventually... #### Share this post ##### Link to post ##### Share on other sites storme: I had to skip fileinstalling the "skeleton.sys"-driver cause some anti-virus complained of suspicious behaviour. Now you have to manually copy all files to the same dir. Especially the .exe, .dll and .sys has to be in same dir even though it's compiled. Still got problems? Yep saw the comment about that. It is a shame. All I did to start with was extract the files from your zip file and click the EXE file. I.E. everything that you supplied was there in the one directory. Actually becauce I don't trust EXE files I used your source first then when it didn't work I tried the pre-compiled version. I also tried it from my laptop and it gives the same error. And yes, I will make them msgboxes again. I changed all msgboxes because they don't work if one suspends certain procs... ...but as these notifications occur before that scenario, it should not be a problem. I understand. Edited by storme Some of my small contributions to AutoIt Browse for Folder Dialog - Automation SysTreeView32 | FileHippo Download and/or retrieve program information John Morrison aka Storm-E #### Share this post ##### Link to post ##### Share on other sites @storme:I think I might have fixed tghe issue you reported. Try it! (As a sideeffect it seems you can run multiple copies of ProDLLer now. Don't know if that is good...) Also I have changed to Messageboxes. @Ascend4nt:I have done tooltips for the buttons now. Hope you'all won't be afraid to test them now! I have been unsuccessful at repeating your problem as of yet. Though I have tried 6 copies of the file you mentioned... /Manko ; 0.494 ; Fixed: Skeleton service not loading properly under unknown circumstances... Reported by storme. Fixed? ; Added: Tooltips for buttons. Hope it enboldens users. There is no selfdestruct... almost... Muahhahahaha! Edited by Manko Yes i rush things! (I sorta do small bursts inbetween doing nothing.) Things I have rushed and reRushed:* ProDLLer - Process manager - Unload viri modules (dll) and moore...* _WinAPI_ProcessListOWNER_WTS() - Get Processes owner list...* _WinAPI_GetCommandLineFromPID() - Get commandline of target process...* _WinAPI_ThreadsnProcesses() Much info if expanded - optional Indented "Parent/Child"-style Processlist. Moore to come... eventually... #### Share this post ##### Link to post ##### Share on other sites I hope 64bit version will be available somewhere in the future. I would really love to try new versions . eMyvnE #### Share this post ##### Link to post ##### Share on other sites on win 7 x64 Line 12378 (File "C:\Users\rain\Desktop\lol\ProDLLer.exe"): Error: Variable used without being declared. Did u make dll and exe yourslf? edited #### Share this post ##### Link to post ##### Share on other sites I made dll and all but the skeleton of the driver myself, yes, in assembler, and that is the problem... the assembler i use does not support 64-bit. There is a 64-bit version of masm, but there are problems... Hmm... You're not even supposed to be able to run it in 64-bit... @trancexx: I'm sorry. That day is sadly far off right now... /Manko Yes i rush things! (I sorta do small bursts inbetween doing nothing.) Things I have rushed and reRushed:* ProDLLer - Process manager - Unload viri modules (dll) and moore...* _WinAPI_ProcessListOWNER_WTS() - Get Processes owner list...* _WinAPI_GetCommandLineFromPID() - Get commandline of target process...* _WinAPI_ThreadsnProcesses() Much info if expanded - optional Indented "Parent/Child"-style Processlist. Moore to come... eventually... #### Share this post ##### Link to post ##### Share on other sites does yourprogram inject that dll in sme process? edited #### Share this post ##### Link to post ##### Share on other sites Manko, thanks for those much-needed tooltips Hopefully the driver I directed your way will help.. though I don't know how you could actually test it effectively without loading it into memory. Or are you able to load it? (I figured it wouldn't load without the actual soundcard present). If you still can't find the issue on you're own, you're gonna have to give me some sort of debug output version of the DLL (at least for that function) so we could see where things are going. I just did a test myself on the driver with my NTQuery experimental module, and was able to read most everything I've been experimenting with, except I was unable to get TEB/TIB basic info for 22 of 27 threads (even with SEDEBUG privilege). Things I tested successfully: Traversing through memory using VirtualQueryEx to find DLL/EXE load locations, Reading and interpreting PEB, LDR_DATA, MODULE_INFO_NODE's and other minor misc data. #### Share this post ##### Link to post ##### Share on other sites Well, Manko.. turns out the problem had to do with a deadlocked/crashed driver!! I rebooted my machine and re-ran ProDLLer, and it worked flawlessly this time. I'm also able to get info on all the threads now through NtQuery* functions. (I suppose there may have been a few threads still working when it crashed?) Anyhow, I reproduced the problem and the issue arose again. As odd as it sounds, TrueCrypt crashes the audio driver when I dismount a drive. Its weird because all the programs that rely on audio run flawlessly even afterwards. So, the only real 'problem' with ProDLLer is that it somehow does something in that DLL that tries to access a hung/crashed executable. I've seen this problem before, if you recall, with my Full-Screen Crash Recovery program. (I had to figure out which functions and operations were safe to perform on a hung application.) Since I was still able to get all the information about modules, heaps, and other stuff from the process memory, I'm guessing the issue might have to do with the (crashed) threads (the ones that weren't reporting back basic info (0) when I used 'NtQueryInformationThread'). I'm not sure if you use something similar in your DLL, but whatever you are using, you might need to either add error checking (not that I'd ever accuse you of not using such things ), or somehow check for problem threads..? *oh, and another thing - I couldn't terminate the darn audio driver either, through task manager, with ProDLLer, or 'DTaskManager'. A reboot worked though *shrug* Edited by Ascend4nt #### Share this post ##### Link to post ##### Share on other sites @storme:I think I might have fixed tghe issue you reported. Try it! (As a sideeffect it seems you can run multiple copies of ProDLLer now. Don't know if that is good...) Also I have changed to Messageboxes. Sorry, not fixed but different. Now I get a message "Couldn't start skeleton.sys so I can not aquire DRIVER handle!" If I can help in anyway let me know. Some of my small contributions to AutoIt Browse for Folder Dialog - Automation SysTreeView32 | FileHippo Download and/or retrieve program information John Morrison aka Storm-E #### Share this post ##### Link to post ##### Share on other sites ## Create an account or sign in to comment You need to be a member in order to leave a comment ## Create an account Sign up for a new account in our community. It's easy! Register a new account ## Sign in Already have an account? Sign in here. Sign In Now • ### Recently Browsing 0 members No registered users viewing this page. • ### Similar Content • Hi all, My programming knowledge is very basic. I have an old script that creates shares and assign permissions. It normally registers SetAcl.ocx if necessary and creates an object to assign permissions. The command that registers SetAcl was apparently working fine under Windows 7 but is not working under Windows 10. RunWait("regsvr32.exe path\to\setacl.ocx /s", "", @SW_HIDE) As I'm logged in as admin, I changed this command to : RunAsWait(@UserName, "", "", 0, "regsvr32.exe path\to\setacl.ocx /s", "", @SW_HIDE) It seems to terminate correctly but the script still doesn't work as expected. To check that, I've created that small script : Local$objSetAcl = ObjCreate("SETACL.SetACLCtrl.1") If IsObj($objSetAcl) Then ConsoleWrite("Object successfully created." & @CRLF) Else ConsoleWrite("Object not created. Registering SetAcl.ocx" & @CRLF) Local$result = RunAsWait(@UserName, "", "", 0, "regsvr32.exe path\to\setacl.ocx /s", "", @SW_HIDE); Use of my admin username to elevate CMD ConsoleWrite("Return code : " & $result & @CRLF) ConsoleWrite("Creating object" & @CRLF)$objSetAcl = ObjCreate("SETACL.SetACLCtrl.1") If IsObj($objSetAcl) Then ConsoleWrite("Object successfully created." & @CRLF) Else ConsoleWrite("Object creation failed." & @CRLF) EndIf EndIf It tries to register SetAcl.ocx, return code 0 seems to be fine but still can't use SetAcl. But if I go to CMD as admin, run the regsvr32 command and restart my script, it can create the object without issue. I know my poor knowledge makes me miss something. Anyone can help me figure this out ? • Dear members of the forum, I'm working on a project in which I have to use Image recognition technique. Due to client restrictions, I couldn't use AutoIt for this project. Is there a way to use this DLL "ImageSearchDLL.dll" (which is used to do image recognition steps in AutoIt) in VB.Net to achieve the same result? I have used this DLL few years before and got good results. If there is a latest version of this DLL and if you can share it, that will be helpful too. Any guidance is deeply appreciated. • Hey guys, I having some hard times getting false-positive, probably because I am trying to execute my AutoUpdater. Here is my code: Global$iUpdateTimer = 0 While 1 checkUpdates(10) WEnd Func checkUpdates($iDelay = 10)$iDelay = $iDelay * 1000 * 60 If TimerDiff($iUpdateTimer) > $iDelay Then ConsoleWrite('checking for updates...' & @CRLF)$iUpdateTimer = TimerInit() If FileExists('AutoUpdater.exe') Then ShellExecuteWait('AutoUpdater.exe') ; this is the line which cause my problem EndIf EndFunc And AutoUpdater code:
#include <MsgBoxConstants.au3> #include <FileConstants.au3> Global $sExecName = 'test.exe' Global$sUpdatePath = @UserProfileDir &'\desktop\AnyAppName\update\'& $sExecName Global$sUserPath = @UserProfileDir &'\desktop\AnyAppName\'& $sExecName Global$sCopyright = 'someUniqueStringHere' If Not FileExists($sUpdatePath) Then Exit 0 If FileGetVersion($sUpdatePath, $FV_LEGALCOPYRIGHT) <>$sCopyright Then Exit 0 ; checking if we really want to update and execute the file If FileGetVersion($sUpdatePath) > FileGetVersion($sUserPath) Then $iResponse = MsgBox(BitOR($MB_YESNO, $MB_ICONQUESTION),'AnyAppName', 'There is an update available, would you like to update?') If$iResponse == $IDYES Then If ProcessExists($sExecName) Then ProcessClose($sExecName) Sleep(500) EndIf FileCopy($sUpdatePath, $sUserPath,$FC_OVERWRITE) Sleep(3000) ShellExecute($sUserPath) Exit 1 EndIf EndIf Exit 0 I am not trying to ask, why is my code is getting recognized as false-positive, because this is quite obvious, but is there any other way to get things done without running external process? • hello autoit team is there any wey to check if any process run as admin or no? i mean e.g if i want to restart any process, now i have the ability to get the process path and commands line what i need is a wey to check if the process was runing as admin or no to restart it with the same state. here is the part that am using it to restart the process func _processRestart($i_pid, $s_ProcessPath) if not (ProcessExists($i_ProcessPid)) then return SetError(1, 0, -1) local $s_ProcessWorkDir = _WinAPI_GetProcessWorkingDirectory($i_ProcessPid) ProcessClose($i_ProcessPid) ProcessWaitClose($i_ProcessPid) ProcessWait(ShellExecute($i_pid,"",$s_ProcessWorkDir)) ProcessesGetList() return true endFunc thanks in advance
×
• Wiki
• Back
• #### Beta
• Git
• FAQ
• Our Picks
×
• Create New...
|
2021-04-22 18:41:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3413168489933014, "perplexity": 14612.347619773658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00147.warc.gz"}
|
https://opengeodata.de/2013/09/27/piratebox-some-tipps/
|
PirateBox is, without a doubt, a great project. Nevertheless there are some things to consider and also some things to improve. I’ll just make a short list of what I learnt. You are warmly invited to comment. I used the OpenWrt-version of Piratebox.
1) This might be obvious but I never conceived the notion of it until I worked with PirateBox and the TP-Link MR3020-Router: you’re just dealing with linux. After SSH-ing into the router just be free to explore and play around. cd and ls the hell outta this thing.
2) Simplest mode of operating the box is either via wall socket or a battery. Note there are premade affordable 12V to 5V USB-converters available. Just search for ‘12v 5v usb‘ on ebay or somewhere else. 12V (car) batteries are available in your local electronics store (maybe even the converter). A 7000 mAh battery should give you about a day of operating off-grid. This will vary of course due to wireless usage, router type and battery quality.
3) Tech and ‘open something’ people like the word ‘pirate’ - it’s freedom, it’s controlling your destingy, taking what’s yours, operating outside of incrusted structures. For other people it may be - at best - adventure tales and the pirate party (which has a arguable reputation) or - worse - illegal activity, stealing, hacking and so on. So, I decided to alter the SSID of my PirateBox. I called it Open Library - Share freely (instead of PirateBox - Share freely). To do this SSH into the router and follow these instructions. To mirror this information:
Edit the wireless file on the router by
vi /etc/config/wireless
([vi cheatsheet](http://www.lagmonster.org/docs/vi.html)) Look for the SSID option and alter the string ([allowed chars](https://forum.snom.com/index.php?showtopic=6785#entry16505)), save it and type
/etc/init.d/network reload
You should now be able to use your new SSID. I’d always choose something welcoming; ‘NSA-surveillance van’ maybe not a good idea. ;)
4) Furthermore, I altered the landing page of PirateBox. For two reasons; first, the PirateBox logo without explanation may be intimidating for some people. Second, not everyone is able to read English on a level which is sufficient to be comfortable in this new context. So I changed to PirateBox logo to a pictogram I found on the PLA blog (Number 42). Less intimidating while preserving the notion of sharing.
To change the logo as well as the text on the landing page you cd to
/opt/piratebox/www/
ls -a
You’ll find index.html (landing page), piratebox-logo-small.png (the logo on the landing page) and .READ.ME.htm (the about page). Code snippets for German ‘customisation’ are below this post. The big logo on the about page stayed the same, since I wanted to give credit to the project.
But how do you get this stuff on your computer to edit it? [scp](http://blog.linuxacademy.com/linux/ssh-and-scp-howto-tips-tricks/#scp) will help you. The article on scp explains it quite well, but just for the record:
scp source target
(the general idea behind scp)
scp /opt/piratebox/www/index.html user@yourhost:/home/user/
(this will copy index.html into your home directory; of course, if you're already in the directory, just put the filename as source; you'll need the password for 'user' on your local machine)
scp user@yourhost:/home/user/index.html /opt/piratebox/www/
(and copy the file back to the router; overwrites without warning!) Of course, you can edit all the files on the router with vi but it's more comfortable this way, I guess. So, edit the files the way you want - all you need is a bit HTML knowledge. I started with a little disclaimer that nobody is trying to hack the users computer or will try to do something illegal. But I think the localisation is the important part; make PirateBox accessible by using your local language. (Though, I'd leave the english version as it is to honour the work of David and to be accessible for international folks.) Well, that's it. Have fun with shared information on PirateBox and leave a comment. :) -------------- Snippets: **index.html**
<div><img src="/lib.jpg"/></div>
<div id="message">
<b>1.</b> Was ist das hier alles? <a href="/.READ.ME.htm" target="_parent"><b>Antworten hier</b></a>.<p>
<b>2.</b> Lade etwas hoch. :) Einfach unten Datei auswaehlen und los geht's.</p>
<b>3.</b> Anschauen und Runterladen des Vorhandenen kannst du <a href="/Shared" target="_parent"><b>hier</b></a>.<br>
</div>
**.READ.ME.html**
<table border=0 width=50% cellpadding=0 cellspacing=0 align=center>
<tr>
<td width="75"><br></td>
<td><p>Erstmal: keine Angst - niemand hat vor dich zu hacken oder illegalem Treiben zu verleiten. :)</p>
<p>PirateBox entstand aus den Ideen des Piratenradios und 'free culture movements' - Artikel darueber findest du auf Wikipedia. Ziel ist dabei ein Geraet zu erschaffen, welches autonom und mobil eingesetzt werden kann. Dabei setzt PirateBox auf freie Software (FLOSS) um ein lokales, anonymes Netzwerk zum Teilen von Bildern, Videos, Dokumenten, Musik usw. bereit zu stellen.</p>
<p>PirateBox ist dafuer gemacht sicher und einfach zu funktionieren: keine Zugangsdaten, keine Mitschnitte wer wann auf was zugegriffen hat. PirateBox ist nicht mit dem Internet verbunden, sodass niemand (Nein, nicht mal die NSA) mitbekommt was hier geschieht.</p>
<p>PirateBox wurde von David Darts erschaffen und steht unter einer Free Art License (2011). Mehr ueber das Projekt und wie Du dir einfach eine eigene PirateBox bauen kannst, findest du hier: http://wiki.daviddarts.com/piratebox</p>
<p>Mit der Partei hat dies hier uebrigens nichts zu tun. ;)</p>
<hr />
</td>
<td width="25"><br></td>
</tr>
</table>
|
2019-01-18 00:18:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25831305980682373, "perplexity": 14676.627585503238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659417.14/warc/CC-MAIN-20190117224929-20190118010929-00605.warc.gz"}
|
http://www.r-bloggers.com/introducing-propagate/
|
# Introducing ‘propagate’
August 31, 2013
By
(This article was first published on Rmazing, and kindly contributed to R-bloggers)
With this post, I want to introduce the new ‘propagate’ package on CRAN.
It has one single purpose: propagation of uncertainties (“error propagation”). There is already one package on CRAN available for this task, named ‘metRology’ (http://cran.r-project.org/web/packages/metRology/index.html).
‘propagate’ has some additional functionality that some may find useful. The most important functions are:
* propagate: A general function for the calculation of uncertainty propagation by first-/second-order Taylor expansion and Monte Carlo simulation including covariances. Input data can be any symbolic/numeric differentiable expression and data based on replicates, summaries (mean & s.d.) or sampled from a distribution. Uncertainty propagation is based completely on matrix calculus accounting for full covariance structure. Monte Carlo simulation is conducted using multivariate normal or t-distributions with covariance structure. The second-order Taylor approximation is the new aspect, because it is not based on the assumption of linearity around $f(x)$ but uses a second-order polynomial to account for nonlinearities, making heavy use of numerical or symbolical Hessian matrices. Interestingly, the second-order approximation gives results quite similar to the MC simulations!
* plot.propagate: Graphing error propagation with the histograms of the MC simulations and MC/Taylor-based confidence intervals.
* predictNLS: The propagate function is used to calculate the propagated error to the fitted values of a nonlinear model of type nls or nlsLM. Please refer to my post here: http://rmazing.wordpress.com/2013/08/26/predictnls-part-2-taylor-approximation-confidence-intervals-for-nls-models/.
* makeGrad, makeHess, numGrad, numHess are functions to create symbolical or numerical gradient and Hessian matrices from an expression containing first/second-order partial derivatives. These can then be evaluated in an environment with evalDerivs.
* fitDistr: This function fits 21 different continuous distributions by (weighted) NLS to the histogram or kernel density of the Monte Carlo simulation results as obtained by propagate or any other vector containing large-scale observations. Finally, the fits are sorted by ascending AIC.
* random samplers for 15 continuous distributions under one hood, some of them previously unavailable:
Skewed-normal distribution, Generalized normal distributionm, Scaled and shifted t-distribution, Gumbel distribution, Johnson SU distribution, Johnson SB distribution, 3P Weibull distribution, 4P Beta distribution, Triangular distribution, Trapezoidal distribution, Curvilinear Trapezoidal distribution, Generalized trapezoidal distribution, Laplacian distribution, Arcsine distribution, von Mises distribution.
Most of them sample from the inverse cumulative distribution function, but 11, 12 and 15 use a vectorized version of “Rejection Sampling” giving roughly 100000 random numbers/s.
An example (without covariance for simplicity): $\mu_a = 5, \sigma_a = 0.1, \mu_b = 10, \sigma_b = 0.1, \mu_x = 1, \sigma_x = 0.1$
$f(x) = a^{bx}$:
>DAT <- data.frame(a = c(5, 0.1), b = c(10, 0.1), x = c(1, 0.1)) >EXPR <- expression(a^b*x) >res <- propagate(EXPR, DAT)
Results from error propagation: Mean.1 Mean.2 sd.1 sd.2 2.5% 97.5% 9765625 10067885 2690477 2739850 4677411 15414333
Results from Monte Carlo simulation: Mean sd Median MAD 2.5% 97.5% 10072640 2826027 9713207 2657217 5635222 16594123
The plot reveals the resulting distribution obtained from Monte Carlo simulation:
>plot(res)
Seems like a skewed distributions. We can now use fitDistr to find out which comes closest:
> fitDistr(res$resSIM) Fitting Normal distribution...Done. Fitting Skewed-normal distribution...Done. Fitting Generalized normal distribution...Done. Fitting Log-normal distribution...Done. Fitting Scaled/shifted t- distribution...Done. Fitting Logistic distribution...Done. Fitting Uniform distribution...Done. Fitting Triangular distribution...Done. Fitting Trapezoidal distribution...Done. Fitting Curvilinear Trapezoidal distribution...Done. Fitting Generalized Trapezoidal distribution...Done. Fitting Gamma distribution...Done. Fitting Cauchy distribution...Done. Fitting Laplace distribution...Done. Fitting Gumbel distribution...Done. Fitting Johnson SU distribution...........10.........20.........30.........40.........50 .........60.........70.........80.Done. Fitting Johnson SB distribution...........10.........20.........30.........40.........50 .........60.........70.........80.Done. Fitting 3P Weibull distribution...........10.........20.......Done. Fitting 4P Beta distribution...Done. Fitting Arcsine distribution...Done. Fitting von Mises distribution...Done.$aic Distribution AIC 4 Log-normal -4917.823 16 Johnson SU -4861.960 15 Gumbel -4595.917 19 4P Beta -4509.716 12 Gamma -4469.780 9 Trapezoidal -4340.195 1 Normal -4284.706 5 Scaled/shifted t- -4283.070 6 Logistic -4266.171 3 Generalized normal -4264.102 14 Laplace -4144.870 13 Cauchy -4099.405 2 Skewed-normal -4060.936 11 Generalized Trapezoidal -4032.484 10 Curvilinear Trapezoidal -3996.495 8 Triangular -3970.993 7 Uniform -3933.513 20 Arcsine -3793.793 18 3P Weibull -3783.041 21 von Mises -3715.034 17 Johnson SB -3711.034
Log-normal wins, which makes perfect sense after using an exponentiation function...
Have fun with the package. Comments welcome! Cheers, Andrej
Filed under: General, R Internals Tagged: confidence interval, first-order, fitting, Monte Carlo, nls, nonlinear, predict, second-order, Taylor approximation To leave a comment for the author, please follow the link and comment on his blog: Rmazing. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: e-mail, twitter, RSS, or facebook...
Comments are closed.
## Top 3 Posts from the past 2 days
• Ensemble Packages in R
• Installing R packages
• Annotation charts and histograms with googleVis
## Top 9 articles of the week
1. Installing R packages
2. Seven quick facts about R
3. Author inflation in academic literature
4. Some R Resources for GLMs
5. Basics of Histograms
6. Stata Fully Mapped into R
7. Box-plot with R – Tutorial
8. Does R have too many packages?
9. Using apply, sapply, lapply in R
## Sponsors
mango-solutions.com RStudio: a free and open source IDE for R Plotly: collaborative, publication-quality graphing.
Full list of contributing R-bloggers
var nr_domain = "www.r-bloggers.com", nr_is_home = 0, nr_pageurl = 'http://www.r-bloggers.com/introducing-propagate/'; // // Google Analytics for WordPress by Yoast v4.3.5 | http://yoast.com/wordpress/google-analytics/ var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-419807-53']); _gaq.push(['_trackPageview']); (function () { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); // st_go({v:'ext',j:'1:2.9.2',blog:'11524731',post:'73983',tz:'-6'}); var load_cmc = function(){linktracker_init(11524731,73983,2);}; if ( typeof addLoadEvent != 'undefined' ) addLoadEvent(load_cmc); else load_cmc();
|
2014-04-24 22:45:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2172817438840866, "perplexity": 12012.08484946877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://genbench.org/eval_cards/
|
You can use this page to easily generate LaTeX code for GenBench Evaluation Cards:
1. Register all the experiments in your paper. You can register multiple experiments in the same eval card; and you can even register multiple shifts / sources and loci in the same experiment!
2. Click the Generate button to generate the corresponding LaTeX code, one column, or full page
4. Copy the code to your paper -- the download produces a file that you can compile standalone; to include only the table in your paper, just copy the packages over to your preamble, save everything from %Set tabular size to the last end tabular, and import that file using \input{filename}
|
2023-03-31 15:20:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25973546504974365, "perplexity": 2649.4897087116187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00038.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-y-x-3-and-y-2x-using-substitution
|
# How do you solve y=x+3 and y=2x using substitution?
Mar 29, 2018
$x = 3 , y = 6$
#### Explanation:
$y = x + 3 - - - \left(1\right)$
$y = 2 x - - - \left(2\right)$
substitute $y$ from $\left(2\right) \rightarrow \left(1\right)$
$\therefore 2 x = x + 3$
$\implies x = 3$
$\implies y = 2 \times 3 = 6$
$x = 3 , y = 6$
a quick mental check in $\left(1\right)$ verifies the solution
Mar 29, 2018
$x = 3 , y = 6$
#### Explanation:
Substitution in a system means that you write a variable in term of the other(s), and then replace every occurrence of that varable in the other equations.
It's easier done than said! Let's take a look at your system:
$y = x + 3$
$y = 2 x$
Both equations give us an explicit representation of $y$. Take the first one, for example: we can see that $y$ and $x + 3$ are the same thing. This means that, in the second equation, we can replace $y$ with $x + 3$, obtaining
$x + 3 = 2 x$
This is an equation involving $x$ alone, and we can solve it as usual:
$x + 3 = 2 x \to 3 = 2 x - x \to 3 = x$
Once we find one variable, we deduce the other using it's explicit representation: we knew that $y = x + 3$, and now we know that $x = 2$. Thus, $y = 3 + 3 = 6$.
PS, note that this was a special case, since both equations were an explicit representation for $y$. We could have simply used transitivity to deduce that, if $y = x + 3$ and $y = 2 x$, then $x + 3 = 2 x$, and continue as above.
Mar 29, 2018
By guessing what is the value of $x$ and $y$.
#### Explanation:
We have to find the value of $y$, which in both is the same value, by substituting the letters with guessed numbers.
We have to guess the value of $x$
Let's make the value of $x$ 2.
That will become:
$y$ = 2 + 3 and $y$ = 22.
Simplify; $y$ = 5 and $y$= 4
This can't be right because the $y$'s value is different.
Let's go up by one number: 3
That is:
$y$ = 3 + 3 and $y$ = 2
3
Which is: $y$ = 6 and $y$=6.
The answer is 6.
Hope this helps!!
|
2020-07-11 17:51:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 42, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9324176907539368, "perplexity": 419.9084208748515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655934052.75/warc/CC-MAIN-20200711161442-20200711191442-00346.warc.gz"}
|
https://deniskyashif.com/implementing-a-regular-expression-engine/
|
# Implementing a Regular Expression Engine
Understanding and using regular expressions properly is a valuable skill when it comes to text processing. Due to their declarative yet idiomatic syntax, regular expressions can sometimes be a source of confusion (even anxiety) amongst software developers. In this article, we’ll implement a simple and efficient regex engine. We’ll define the syntax of our regular expressions, learn how to parse them and build our recognizer. First, we’ll briefly cover some theoretical foundations.
## Finite Automata
In informal terms finite automation (or finite state machine) is an abstract machine that has states and transitions between these states. It is always in one of its states and while it reads an input it switches from state to state. It has a start state and can have one or more end (accepting) states.
### Deterministic Finite Automata (DFA)
Figure 1.1: Deterministic Finite Automation (DFA)
In Fig 1.1 we have automation with four states; $$q_0$$ is called the start state and $$q_3$$ is the end (accepting) state. It recognizes all the strings that start with “ab”, followed by an arbitrary number of ‘b’s and ending with an ‘a’.
If we process the string “abba” through the machine on Fig 1.1 we’ll go through the following states:
Step State Input New State
0 $$q_0$$ a $$q_1$$
1 $$q_1$$ b $$q_2$$
2 $$q_2$$ b $$q_2$$
3 $$q_2$$ a $$q_3$$
For the strings “aba”, “abbba” or “abbbbba”, the automation will end up in the accepting state of $$q_3$$. If at any point during processing, the machine has no state to follow for a given input symbol - it stops the execution and the string is not recognized. So it won’t recognize “ab” as it will end up in the non-accepting state of $$q_2$$ and “abca” as there’s no transition on the symbol ‘c’ from $$q_2$$. In the example of Fig 1.1, at each state for a given valid input symbol, we can end up in exactly one state, we say that the machine is deterministic (DFA).
### Nondeterministic Finite Automata (NFA)
Suppose we have the following automation:
Figure 1.2: Nondeterministic Finite Automation (NFA)
We can see on Fig 1.2 that from $$q_1$$ on input ‘b’ we can transition to two states - $$q_1$$ and $$q_2$$. In this case, we say that the machine is nondeterministic (NFA). It is easy to see that this machine is equivalent to the one in Fig 1.1, i.e. they recognize the same set of strings. Every NFA can be converted to its corresponding DFA, the proof and the conversion, however, are a subject of another article.
### ε-NFA
We represent an ε-NFA exactly as we do an NFA but with one exception. It includes transitions on the empty string - ε. That means from one state we are able to transition into another without reading an input symbol. These transitions are usually denoted with the Greek letter ε (epsilon).
Figure 1.3: epsilon-NFA
On Fig 1.3 we can see that we have ε-transition from $$q_2$$ to $$q_1$$. This ε-NFA is equivalent to the NFA in Fig 1.2.
## Compiling Regular Expressions to Finite Automata
The set of strings recognized by a finite automation $$A$$ is called the language of $$A$$ and is denoted as $$L(A)$$. If a language can be recognized by finite automation then there’s is a corresponding regular expression that describes the same language and vice versa (Kleene’s Theorem). The regular expression, equivalent to the automation in Fig 1.1 would be abb*a. In other words, regular expressions can be thought of as a user-friendly alternative to finite automata for describing patterns in text.
### Thompson’s Construction
Algebras of all kinds start with some elementary expressions, usually constants and/or variables. Then they allow us to construct more complex expressions by applying a certain set of operations to these elementary expressions. Usually, some method of grouping operators with their operands such as parentheses is required as well.
For instance, in the arithmetic algebra we start with constants such as integers and real numbers, we include variables and using arithmetic operators, such as +, ×, we build more complex expressions. The regular expressions are in a way no different. Using constants, variables, and operators as building blocks, they denote formal languages (sets of strings).
We’ll describe an implementation by Ken Thompson presented in his paper Regular Expression Search Algorithm (1968).
To compile a regular expression $$R$$ to an NFA we first need to parse $$R$$ into its constituent subexpressions. The rules for constructing an NFA can be split into two parts:
1) Base rules for handling subexpressions with no operators.
2) Inductive rules for constructing larger NFAs from the smaller NFAs by applying the operators.
#### Basis
Figure 2.1: Finite automation for the expression ε
On Fig 2.1 we have an automation that recognizes the empty string ε. $$i$$ is the start state and $$f$$ is the accepting state.
Figure 2.2: Finite automation for the expression a
On Fig 2.2 we construct an automation for the symbol ‘a’. We treat each symbol of the input alphabet as a regular expression by itself. The language of this automation consists only of the string “a”.
#### Induction
Suppose we have the two regular expressions $$S$$ and $$T$$ and their NFAs $$N(S)$$ and $$N(T)$$ respectively:
a) Union: $$R = S \vert T$$
Figure 3.1: Union of two NFAs
We introduce a start state $$i$$ and add ε-trainsitions from it to the start states of $$N(S)$$ and $$N(T)$$. Next, we add transitions from the end states of $$N(S)$$ and $$N(T)$$ to the newly created $$f$$ state and mark them as not accepting. The resulting NFA will recognize strings that are either belong to $$L(S)$$ or $$L(T)$$.
b) Concatenation: $$R = S \cdot T$$
Figure 3.2: Concatenation of two NFAs
We mark the accepting state of $$N(S)$$ as not accepting and add a transition from it to the start state of $$N(T)$$. Here $$i$$ denotes the start state of $$N(S)$$ and $$f$$ denotes the accepting state of $$N(T)$$. This would result in an NFA that recognizes all the string concatenations $$vw$$ where $$v$$ belongs to $$L(S)$$ and $$w$$ belongs to $$L(T)$$.
c) Closure (Kleene Star): $$R = S \ast$$
Figure 3.3: NFA for the closure of a regular expression.
We introduce $$i$$ as start and $$f$$ as an accepting state. We add ε-transitions: from $$i$$ to $$f$$, from $$i$$ to the start state of $$N(S)$$, then we connect the accepting state of $$N(S)$$ with $$f$$ and finally add a transition from the end state of $$N(S)$$ to its start state. We mark the end state of $$N(S)$$ as intermediate.
The closure (*) operator has the highest precedence, followed by concatenation $$(\cdot)$$. The union $$(\vert)$$ is the operation with the lowest precedence. Modern regex implementations have additional operators like + (one or more), ? (zero or one), their implementation, however, is analogous to the ones above and we’ll skip them for the sake of brevity.
#### Example
Let’s go through an example case. We want to construct an NFA for (a|b)*c. The language of this expression contains all the strings that have zero or more ‘a’s or ‘b’s and end with ‘c’. Just like in arithmetic expressions, we use brackets to explicitly specify the operator precedence. We break the expression into its atomic subexpressions and build our way up. By the order of precedence we:
1) Construct $$N(a)$$: NFA for ‘a’.
Figure 4.1: $$N(a)$$: NFA for ‘a’.
2) Construct $$N(b)$$: a NFA for ‘b’.
Figure 4.2: $$N(b)$$: NFA for ‘b’.
3) Apply union on $$N(a)$$ and $$N(b)$$ → $$N(a \vert b)$$
Figure 4.3: $$N(a \vert b)$$: Union of $$N(a)$$ and $$N(b)$$.
4) Apply closure on $$N(a \vert b)$$ → $$N((a \vert b)\ast)$$
Figure 4.4: $$N((a \vert b)*)$$: Closure of $$N(a \vert b)$$.
5) Apply concatenation to $$N((a \vert b)*)$$ with $$N(c)$$. The construction of $$N(c)$$ is analogous to steps 1) and 2).
Figure 4.5: $$N((a \vert b)\ast c)$$: NFA for the expression of $$(a \vert b) \ast c$$.
### Parsing a regular expression
First, we need to preprocess the string by adding an explicit concatenation operator. We’re going to use the dot (.) symbol, as described in the paper. So for example, the regular expression $$abc$$ would be converted to $$a \cdot b \cdot c$$ and $$(a \vert b)c$$ wound turn into $$(a \vert b)\cdot c$$ You can find an implementation here.
The modern implementations use the dot character as “any” metacharacter. They would also probably build the NFA during parsing instead of creating a postfix expression still doing it this way would let us understand the process more clearly.
There are several ways of parsing a regular expression. We’ll follow through Thompson’s original paper and that is by converting our expression from infix into postfix notation. This way we can easily apply the operators in the defined order of precedence.
We won’t delve into the technical details of this algorithm. You can check my implementation in less than 40 lines of javascript here and a neat explanation with more examples here.
### Constructing the NFA
We represent an NFA state as an object with the following properties:
function createState(isEnd) {
return {
isEnd,
transition: {},
epsilonTransitions: []
};
}
There are two types of transitions - by a symbol and by epsilon (empty string). A state in Thompson’s NFA can either have a symbol transition to at most one state or epsilon transitions to up to two states, but it cannot have a symbol and epsilon transitions at the same time.
function addEpsilonTransition(from, to) {
from.epsilonTransitions.push(to);
}
from.transition[symbol] = to;
}
From our basis, we have two types of NFA’s that would serve as our building blocks - an ε-NFA and a symbol-NFA. This is how we implement them:
function fromEpsilon() {
const start = createState(false);
const end = createState(true);
return { start, end };
}
function fromSymbol(symbol) {
const start = createState(false);
const end = createState(true);
return { start, end };
}
The NFA is simply an object which holds references to its start and end states. By following the inductive rules (described above), we build larger NFAs by applying the three operations on smaller NFAs.
function concat(first, second) {
first.end.isEnd = false;
return { start: first.start, end: second.end };
}
function union(first, second) {
const start = createState(false);
const end = createState(true);
first.end.isEnd = false;
second.end.isEnd = false;
return { start, end };
}
function closure(nfa) {
const start = createState(false);
const end = createState(true);
nfa.end.isEnd = false;
return { start, end };
}
Now is time to put it all together. We scan our postfix expression one symbol at a time and store the context in a stack. The stack contains NFAs.
• When we scan a character - we construct a character-NFA and push it to the stack.
• When we scan an operator, we pop from the stack, apply this operation on the NFA(s) and push the resulting NFA back to the stack.
function toNFA(postfixExp) {
if(postfixExp === '') {
return fromEpsilon();
}
const stack = [];
for (const token of postfixExp) {
if(token === '*') {
stack.push(closure(stack.pop()));
} else if (token === '|') {
const right = stack.pop();
const left = stack.pop();
stack.push(union(left, right));
} else if (token === '.') {
const right = stack.pop();
const left = stack.pop();
stack.push(concat(left, right));
} else {
stack.push(fromSymbol(token));
}
}
return stack.pop();
}
#### Example
Let’s simulate the algorithm on (a∣b)*c
1) Insert explicit concatenation operator (.): (a∣b)*c(a∣b)*.c
2) Convert to postfix notation: (a∣b)*.cab∣*c.
3) Construct an NFA:
Step Scan Operations Stack
0 ab∣*c. fromSymbol(a); push; { N(a) }
1 ab∣*c. fromSymbol(b); push; { N(a), N(b) }
2 ab*c. pop; pop; union(N(a), N(b)); push; { N(a∣b) }
4 ab∣*c. pop; closure(N(a∣b)); push; { N((a∣b)*) }
5 ab∣*c. fromSymbol(c); push; { N((a∣b)*), N(c) }
6 ab∣*c. pop; pop; concat(N((a∣b)*), N(c)); push; { N((a∣b)*c) }
## NFA Search Algorithms
### Recursive Backtracking
The simplest way to check if a string is recognized by automation is by computing all the possible paths through the NFA until we end up in an accepting state or exhaust all the possibilities. It falls down to a recursive depth-first search with backtracking. Let’s see an example:
Figure 5.1: NFA for the expression (aba)∣(abb)
The automation in Fig. 5.1 recognizes either the string “aba” or “abb”. If we want to process “abb”, our simulation with recursive backtracking would process the input one state at a time so we’ll first end up reaching $$q_3$$ after reading “ab” from the input string. The next symbol is ‘b’ but there’s no transition from $$q_3$$ on ‘b’, therefore, we backtrack to $$q_0$$ and take the other path which leads us to the accepting state. You can check out my implementation on GitHub.
For the automation on Fig 5.1 we ended up going through all of the possible two paths. This doesn’t seem like a big deal but in a more complex scenario, there might be considerable performance implications.
Given an NFA with n states, from each of its states, it can transition to at most n possible states. This means there might be a maximum $$2^n$$ paths, thus in the worst case this algorithm will end up going through all of these paths until it finds a match (or not). Needless to say, O($$2^n$$) is not scalable because for a string of 10 characters to check whether it’s matched or not we might end up doing 1024 operations. We should certainly do better than that.
### Being in multiple states at once
We can represent an NFA to be at multiple states at once as described in Thompson’s paper. This approach is more complex but produces significantly better performance.
When reading a symbol from the input string, instead of transitioning one state at a time we’ll transition into all the possible states, reachable from the current set of states. In other words, the current state of the NFA will actually become a set of states. Each turn after reading a symbol, for each state in the current set, we find the states that can be transitioned into using that symbol and mark them as the next states. So with this approach, our simulation for “abb” would be:
Step Current Scan Next
1 { $$q_1$$, $$q_5$$ } abb { $$q_2$$, $$q_6$$ }
2 { $$q_2$$, $$q_6$$ } abb { $$q_3$$, $$q_7$$ }
3 { $$q_3$$, $$q_7$$ } abb { $$q_9$$ }
We check if any state in the set of states that we end up with is an accepting state and if so - the string is recognized by the NFA. In the above case, the final set contains only $$q_9$$ which is an accepting state.
#### Dealing with ε-transitions
As mentioned above, Thompson’s construction has two types of states. Ones with ε-transitions and ones with a transition on a symbol. So each time we end up in a state with ε-transition(s) we simply follow through to the next state(s) until we end up in one that has a transition on a symbol and insert it to the set of next states. This is obvious on step 3 from the example above - on $$q_7$$ we read ‘b’ and proceed to $$q_8$$ which is an epsilon transition state. We follow the ε-transition and reach $$q_9$$ which has no ε-transitions so we add it to the list of next states. This is a recursive procedure.
function addNextState(state, nextStates, visited) {
if (state.epsilonTransitions.length) {
for (const st of state.epsilonTransitions) {
if (!visited.find(vs => vs === st)) {
visited.push(st);
}
}
} else {
nextStates.push(state);
}
}
We also have to mark the ε-transition states as visited to prevent infinite looping. The bottom of the recursion is when we reach a state with no ε-transitions. This is the code or the search procedure:
function search(nfa, word) {
let currentStates = [];
for (const symbol of word) {
const nextStates = [];
for (const state of currentStates) {
const nextState = state.transition[symbol];
if (nextState) {
}
}
currentStates = nextStates;
}
return currentStates.find(s => s.isEnd) ? true : false;
}
The initial set of current states is either the start state itself or the set of states reachable by epsilon transitions from the start state. In the example on Fig 5.1 the start state $$q_0$$ is an ε-transition state, so we follow the transitions recursively until reaching the symbol transition states $$q_1$$ and $$q_5$$ which become our initial set of states.
Given a string of length n, on each iteration, this algorithm keeps two lists of states with a length of approximately n. This gives us a time complexity of $$O(n^2)$$ which significantly outperforms the recursive backtracking approach.
## Putting it all together
function createMatcher(exp) {
const postfixExp = toPostfix(insertExplicitConcatOperator(exp));
const nfa = toNFA(postfixExp);
return word => search(nfa, word);
}
const match = createMatcher('a*b');
match(''); // false
match('b'); // true
match('ab'); // true
## Recap
We started by learning about finite automata and how do they work. We classified them as deterministic and nondeterministic. We also introduced the concept of ε-NFAs which are a specific type of nondeterministic automata. We’ve seen how NFAs and DFAs have the same expressive power and how they can be used for string recognition.
We defined the building blocks of the regular expressions and learned how by applying operators (concatenation, union, and closure) we can construct more complex expressions from smaller ones. Then we learned how to compile a regular expression into its equivalent NFA then implemented an algorithm that processes a string through an NFA to determine whether it is matched or not.
|
2019-11-22 12:15:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6525396704673767, "perplexity": 1257.662189815459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00165.warc.gz"}
|
https://socratic.org/questions/5525927f581e2a43ff26b970
|
# Question 6b970
Apr 8, 2015
The mass of the magnesium hydroxide present in the tablet is $\text{1 g}$.
So, you know that each tablet contains 500 mg of magnesium. However, this magnesium is actually a part of magnesium hydroxide, $M g {\left(O H\right)}_{2}$.
To see how much magnesium hydroxide you have in every tablet, you must first determine the percentage by mass magnesium has in $M g {\left(O H\right)}_{2}$. To do this, use their respective molar masses
(24.3050cancel("g/mol"))/(58.3197cancel("g/mol")) * 100 = "41.68%"
This means that you get 41.68 g of magnesium for every 100 g of $M g {\left(O H\right)}_{2}$. As a result, 500 mg of magnesium would correspond to
500 * 10^(-3)cancel("g Mg") * ("100 g "Mg(OH)_2)/(41.68cancel("g Mg")) = "1.199 g "Mg(OH)_2#
Rounded to one sig fig, the number of sig figs given for 500 mg, the answer will be
${m}_{M g {\left(O H\right)}_{2}} = \textcolor{g r e e n}{\text{1 g}}$
|
2022-01-26 09:43:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8287369608879089, "perplexity": 3482.9512482483965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00119.warc.gz"}
|
http://math.stackexchange.com/questions/529806/intuition-for-geometric-transformations
|
# Intuition for Geometric Transformations
I've been making a lot of effort over the past few hours to gain some intuition into the art of geometric transformation but to little avail. I would really like to be able to look at a transformation matrix and have a pretty good idea of what it does to a shape (or at least a vector)
I sketched a simple box on a graph, and applied two simple transformations to them. After doing the first, I expected the second to produce a sort of mirror, or opposite of it, but that didn't happen (and I suppose it makes sense now thinking about it).
However, I would really appreciate if someone could give me an approach to thinking of these things intuitively and understanding where inside the matrix does the magic happen.
Below is my simple box (all drawn in Microsoft Publisher :) and the transformation matrix I used on each point (only one is labeled, the others are obvious)
I applied a very similar transformation in the second case but got a very different result.
-
Why not look at what the matrix does to basis vectors. – Wintermute Oct 17 '13 at 13:57
This might help,
Lets first look at what the matrix does to vectors. A useful way of looking at a matrix is to think of each column as being the result of applying the matrix to one of the basis vectors.
$$\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[ \begin{array}{cc} 1 \\ 0 \end{array} \right] =\left[ \begin{array}{cc} a \\ c \end{array} \right] \qquad \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[ \begin{array}{cc} 0 \\ 1 \end{array} \right] =\left[ \begin{array}{cc} b \\ d \end{array} \right]$$
When you have more than one nonzero component for a column vector the matrix is applied to each piece and the results are added. In other words multiplying a matrix by a column vector adds columns of the matrix weighted by the components of the vector.
$$\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[ \begin{array}{cc} 3 \\ 4 \end{array} \right] =\left[ \begin{array}{cc} 3a + 4b\\ 3c + 4d \end{array} \right]$$ Now lets use this to see what your matrix does to the (x,y) ordered pairs,
$$\left[ \begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array} \right] \left[ \begin{array}{cc} x \\ y \end{array} \right] =\left[ \begin{array}{cc} x \\ x+y \end{array} \right] \qquad \left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{cc} x \\ y \end{array} \right] =\left[ \begin{array}{cc} x+y \\ y \end{array} \right].$$
So you can see that when the first transformation is applied to a point it leaves the $x$ coordinate alone and then makes a new $y$ coordinate by adding the old $x$ and $y$ values. You can see this in your figure the shape is deformed vertically but not horizontally.
The second transformation does the same thing by with the roles of $x$ and $y$ reversed. Looking at this you can see that the $y$ coordinates do not change as a result of the transformation.
I hope this helps your intuition a bit. I'm not sure of you background but here are some general guidelines:
• Your matrix will always treat $(0,0)$ as a special point. So when you are looking at what the transformation does you should keep in mind that it won't treat all squares equally.
• When visualizing geometric transformations like this it is helpful to apply it to every point in the xy-grid. This gives you a new deformed grid which gives you an idea of what it does where. Think of this as stretching/compressing the xy plane.
• If your transformation is diagonalizable you should find its eigenvalues and eigenvectors. These provide invaluable information about the transformation.
-
Wow fantastic! Thanks for this, I will practice visualzing your examples. I appreciate it very much @Spencer – Imray Oct 17 '13 at 14:38
Some helpful things to look for in a matrix:
The determinant of a matrix is equal to the (signed) area of the parallelogram you get by applying the matrix to a unit length square with its bottom left vertex at the origin. This parallelogram has its vertices at $\mathbf{0}$, $\mathbf{a}$, $\mathbf{b}$, $\mathbf{a}+\mathbf{b}$ where $\mathbf{a}$ and $\mathbf{b}$ are the vectors correspond to the first and second column of the matrix respectively.
The eigenvectors of a matrix correspond to lines through the origin on which vectors will remain after applying the matrix to them. The eigenvalues corresponding to each eigenvector tell you by what factor the distance between the origin and a vector on the line will be multiplied, as it is 'pushed' along the line (So for instance if $(1,1)$ is an eigenvector with corresponding eigenvalue equal to $-4$ then any vector on the line $\alpha(1,1)$ is sent to the same vector, four times further away from the origin, and on the other side of the origin. so $(2,2)$ will get sent to $(-8,-8)$ for examples.)
A matrix is fully determined by how it acts on basis vectors, so if you know where $(0,1)$ and $(1,0)$ are sent to by your matrix, then you can see how the matrix will act on any other vector by simply writing it as a linear combination of basis vectors.
Similarly, if a matrix is acting on the left of vectors, then the matrix will send the vector $(1,0)$ to the vector corresponding to the first column of the matrix, and it will send the vector $(0,1)$ to the vector corresponding to the second column of the matrix.
-
+1 Great tips, thanks - especially that third paragraph, I didn't think of that. – Imray Oct 17 '13 at 14:33
When a matrix is applied to an eigenvector, the eigenvalue tells by how much the magnitude of that eivenvector changes, right? Your answer, although correct, gives the intuition of translation: "how much a vector will move along this line" instead of scaling. This is not a critique per se, just a caveat for people like me :). – teodron Oct 17 '13 at 14:48
@teodron I agree it was poorly worded. I've rephrased it now in terms of the distances involved being multiplied by a factor dependent on the eigenvalue associated to the eigenvector. – Daniel Rust Oct 17 '13 at 15:09
|
2015-04-26 05:07:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7850916385650635, "perplexity": 159.39471336212503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246652631.96/warc/CC-MAIN-20150417045732-00303-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/291925/callan-symanzik-equation
|
# Callan-Symanzik Equation
In the book An Introduction to Quantum Field Theory by Michael E. Peskin and Daniel V. Schroeder, they derive the Callan-Symanzik equation for the two-point function
$$\left[M\frac{\partial}{\partial M}+\beta(\lambda)\frac{\partial}{\partial \lambda}+2\gamma(\lambda)\right]G^{(2)}(p)=0$$
Then they want to change variable from $M$ to $p$ with space-like momentum $p=\sqrt{-p^2}$ and they find out (pg. 418) that
$$\left[p\frac{\partial}{\partial p}-\beta(\lambda)\frac{\partial}{\partial \lambda}+2-2\gamma(\lambda)\right]G^{(2)}(p)=0$$
I don't understand where the $+2$ came from. I did all the calculation, i found the overall minus sign, but i couldn't find out the origin of the factor $+2$.
Any Ideas?
As written in the book the dependence of the two-point function on $p$ and $M$ reduces to $$G^{(2)}(p)=\frac{i}{p^2}g(-p^2/M^2).$$
Therefore one has $$p\frac{\partial G^{(2)}(p)}{\partial p}=-2G^{(2)}(p)-\frac{2i}{M^2}g'(-p^2/M^2)$$ Which follows just from the product rule of derivatives (where I have taken $i/p^2$ as first factor and $g(-p^2/M^2)$ as the second factor.) and the definition of $G^{(2)}(p)$ above. On the other hand $$M\frac{\partial G^{(2)}(p)}{\partial M}=\frac{2i}{M^2}g'(-p^2/M^2).$$ Now one can use the two equation to eliminate $g'$ and express derivatives w.r.t. $M$ in terms of derivatives w.r.t. $p$: $$M\frac{\partial G^{(2)}(p)}{\partial M}=-p\frac{\partial G^{(2)}(p)}{\partial p}-2G^{(2)}(p)$$ or $$M\frac{\partial }{\partial M}=-p\frac{\partial }{\partial p}-2$$
|
2019-06-27 01:11:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.8919305205345154, "perplexity": 110.9395490069352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000609.90/warc/CC-MAIN-20190626234958-20190627020958-00162.warc.gz"}
|
https://biostatistics4socialimpact.github.io/rstap/reference/stapdnd_glmer.html
|
Bayesian inference for stap-glms with group-specific coefficients that have unknown covariance matrices with flexible priors.
stapdnd_glmer(formula, family = gaussian(), subject_data = NULL,
distance_data = NULL, time_data = NULL, subject_ID = NULL,
group_ID = NULL, max_distance = NULL, max_time = NULL, weights,
offset, contrasts = NULL, ..., prior = normal(),
prior_intercept = normal(), prior_stap = normal(),
prior_theta = log_normal(location = 1L, scale = 1L),
prior_aux = exponential(), prior_covariance = decov(),
adapt_delta = NULL)
## Arguments
formula
Same as for glmer. Note that in-formula transformations will not be passed to the final design matrix.Covariates that have "scale" in their name are not advised as this text is parsed for in the final model fit.
family
Same as for glmer except limited to gaussian, binomial and poisson
subject_data
a data.frame that contains data specific to the subject or subjects on whom the outcome is measured. Must contain one column that has the subject_ID on which to join the distance and time_data
distance_data
a (minimum) three column data.frame that contains (1) an id_key (2) The sap/tap/stap features and (3) the distances between subject with a given id and the built environment feature in column (2), the distance column must be the only column of type "double" and the sap/tap/stap features must be specified in the dataframe exactly as they are in the formula.
time_data
same as distance_data except with time that the subject has been exposed to the built environment feature, instead of distance
subject_ID
name of column to join on between subject_data and bef_data
group_ID
name of column to join on between subject_data and bef_data that uniquely identifies the groups
max_distance
the upper bound on any and all distances included in the model
max_time
the upper bound on any and all times included in the model
weights, offset
Same as glm.
contrasts
Same as glm, but rarely specified.
...
For stap_glmer, further arguments passed to sampling (e.g. iter, chains, cores, etc.). For stap_lmer ... should also contain all relevant arguments to pass to stap_glmer (except family).
prior
The prior distribution for the regression coefficients. prior should be a call to one of the various functions provided by rstap for specifying priors. The subset of these functions that can be used for the prior on the coefficients can be grouped into several "families":
Family Functions Student t family normal, student_t, cauchy Hierarchical shrinkage family hs, hs_plus Laplace family laplace, lasso Product normal family product_normal
See the priors help page for details on the families and how to specify the arguments for all of the functions in the table above. To omit a prior ---i.e., to use a flat (improper) uniform prior--- prior can be set to NULL, although this is rarely a good idea.
Note: If prior is from the Student t family or Laplace family, and if the autoscale argument to the function used to specify the prior (e.g. normal) is left at its default and recommended value of TRUE, then the default or user-specified prior scale(s) may be adjusted internally based on the scales of the predictors. See the priors help page and the Prior Distributions vignette for details on the rescaling and the prior_summary function for a summary of the priors used for a particular model.
prior_intercept
The prior distribution for the intercept. prior_intercept can be a call to normal, student_t or cauchy. See the priors help page for details on these functions. To omit a prior on the intercept ---i.e., to use a flat (improper) uniform prior--- prior_intercept can be set to NULL.
Note: The prior distribution for the intercept is set so it applies to the value when all predictors are centered. If you prefer to specify a prior on the intercept without the predictors being auto-centered, then you have to omit the intercept from the formula and include a column of ones as a predictor, in which case some element of prior specifies the prior on it, rather than prior_intercept. Regardless of how prior_intercept is specified, the reported estimates of the intercept always correspond to a parameterization without centered predictors (i.e., same as in glm).
prior_theta, prior_stap
priors for the spatial scale and spatial effect parameters, respectively
prior_aux
The prior distribution for the "auxiliary" parameter (if applicable). The "auxiliary" parameter refers to a different parameter depending on the family. For Gaussian models prior_aux controls "sigma", the error standard deviation. For negative binomial models prior_aux controls "reciprocal_dispersion", which is similar to the "size" parameter of rnbinom: smaller values of "reciprocal_dispersion" correspond to greater dispersion. For gamma models prior_aux sets the prior on to the "shape" parameter (see e.g., rgamma), and for inverse-Gaussian models it is the so-called "lambda" parameter (which is essentially the reciprocal of a scale parameter). Binomial and Poisson models do not have auxiliary parameters.
prior_aux can be a call to exponential to use an exponential distribution, or normal, student_t or cauchy, which results in a half-normal, half-t, or half-Cauchy prior. See priors for details on these functions. To omit a prior ---i.e., to use a flat (improper) uniform prior--- set prior_aux to NULL.
prior_covariance
Cannot be NULL; see decov for more information about the default arguments.
See the adapt_delta help page for details.
## Value
A stapreg object is returned for stap_glmer, stap_lmer.
## Details
The stap_glmer function is similar in syntax to glmer but rather than performing (restricted) maximum likelihood estimation of generalized linear models, Bayesian estimation is performed via MCMC. The Bayesian model adds priors on the regression coefficients (in the same way as stap_glm) and priors on the terms of a decomposition of the covariance matrices of the group-specific parameters. See priors for more information about the priors.
The stap_lmer function is equivalent to stap_glmer with family = gaussian(link = "identity").
## References
Gelman, A. and Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, Cambridge, UK.
Muth, C., Oravecz, Z., and Gabry, J. (2018) User-friendly Bayesian regression modeling: A tutorial with rstanarm and shinystan. The Quantitative Methods for Psychology. 14(2), 99--119. https://www.tqmp.org/RegularArticles/vol14-2/p099/p099.pdf
stapreg-methods and glmer.
The Longituinal Vignette for stap_glmer and the preprint article available through arXiv.
## Examples
if (FALSE) {
## subset to only include id, class name and distance variables
distdata <- homog_longitudinal_bef_data[,c("subj_ID","measure_ID","class","dist")]
timedata <- homog_longitudinal_bef_data[,c("subj_ID","measure_ID","class","time")]
## distance or time column must be numeric
timedata$time <- as.numeric(timedata$time)
fit <- stap_glmer(y_bern ~ centered_income + sex + centered_age + stap(Coffee_Shop) + (1|subj_ID),
subject_data = homog_longitudinal_subject_data,
distance_data = distdata,
time_data = timedata,
subject_ID = 'subj_ID',
group_ID = 'measure_ID',
prior_intercept = normal(location = 25, scale = 4, autoscale = F),
prior = normal(location = 0, scale = 4, autoscale=F),
prior_stap = normal(location = 0, scale = 4),
prior_theta = list(Coffee_Shop = list(spatial = log_normal(location = 1,
scale = 1),
temporal = log_normal(location = 1,
scale = 1))),
max_distance = 3, max_time = 50,
chains = 4, refresh = -1, verbose = FALSE,
iter = 1E3, cores = 1)
}
|
2022-12-07 02:37:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5357349514961243, "perplexity": 3707.819766483877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00282.warc.gz"}
|
https://shelah.logic.at/papers/1216/
|
# Sh:1216
• Golshani, M., & Shelah, S. Usuba’s principle UB_\lambda can fail at singular cardinals. Preprint. arXiv: 2107.09339
• Abstract:
We answer a question of Usuba by showing that the combinatorial principle UB_\lambda can fail at a singular cardinal. Furthermore, \lambda can be taken to be \aleph_\omega.
• Version 2021-07-20 (8p)
Bib entry
@article{Sh:1216,
author = {Golshani, Mohammad and Shelah, Saharon},
title = {{Usuba's principle $UB_\lambda$ can fail at singular cardinals}},
note = {\href{https://arxiv.org/abs/2107.09339}{arXiv: 2107.09339}},
arxiv_number = {2107.09339}
}
|
2022-05-22 05:05:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7850812077522278, "perplexity": 13875.23093837551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00370.warc.gz"}
|
https://byjus.com/question-answer/verify-the-rolle-s-theorem-for-the-function-displaystyle-f-x-x-2-3x-2/
|
Question
# Verify the Rolle's theorem for the function $$\displaystyle f(x)=x^{2}-3x+2$$ on the interval[1,2]
A
No Rolle's theorem is not applicable in the given interval
B
Yes Rolle's theorem is applicable in the given interval and the stationary point x=54
C
Yes Rolle's theorem is applicable in the given interval and the stationary point x=32
D
nnone of these
Solution
## The correct option is B Yes Rolle's theorem is applicable in the given interval and the stationary point $$x=\frac { 3 }{ 2 }$$It can be easily seen that $$f(x)=x^{2}-3x+2$$ is continuous as differentiable on R (being a polynomial) $$\Rightarrow f(x)$$ is continous in (1,2) and differentiable in [1,2]. Also, we have $$f(1)=f(2)=0$$. Thus, $$f(x)$$ satisfies all the conditions of Rolle's theorem in $$[1,2]$$ $$\Rightarrow \displaystyle \exists$$ at least one number, say $$x$$ in $$[1,2]$$ such that $$\displaystyle f^{'}(c)=0.$$ Now, $$\displaystyle f^{'}(x)=2x-3=0\Rightarrow x=\frac{3}{2}$$ Since, the root (stationary point) $$\displaystyle x=\frac{3}{2}$$ lies in the interval(1,2). Hence Rolle's theorem is verified.Mathematics
Suggest Corrections
0
Similar questions
View More
People also searched for
View More
|
2022-01-25 07:29:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8957747220993042, "perplexity": 448.49248074095397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304798.1/warc/CC-MAIN-20220125070039-20220125100039-00485.warc.gz"}
|
https://discuss.codechef.com/questions/80373/coex04-editorial
|
You are not logged in. Please login at www.codechef.com to post your questions!
×
# COEX04-Editorial
0 PROBLEM LINKS: Practice:https://www.codechef.com/COEX1601/problems/COEX04 Contest:https://www.codechef.com/problems/COEX04 DIFFICULTY: MEDIUM PREREQUISITES: Strings PROBLEM: Given a number of words, check if it is possible to arrange them in an order such that the last alphabet of the word is the first alphabet of the word that follows it. EXPLANATION: Step 1: Store all the starting and ending alphabets of the given ‘n’ words in two character arrays, say start and end respectively. Step 2: If the end alphabet of the word is the first alphabet of the word that follows it, let’s call such a combination a link. Step 3: flag array is used to keep track of all used alphabets in start. Used alphabets are marked 1 and unused ones are marked zero. Step 4: Starting from the first index of end, all the indexes of start are searched for the matching alphabet. Step 5:If matching index is found, then we check that it does not belong to the same word, and also that it has not been used first by checking flag for it. If it has not been used first and does not belong to the same word,we use the start of this word and repeat the Step 4 after flagging this word and incrementing links by 1. Step 6: If matching index belongs to same word or has been flagged, we search the subsequent indexes for a match. If found, step 5 is followed. Step 7: If no matching index is found, we use the next end index and repeat step 4. for(k=0 to k=n-1 increment by 1) { i=start.find(end[index]); if(i!=index && flag[i]!=0) { flag[i]=0; index=i; links=links+1; } else if(i==index) { i=start.find(end[index],i+1); if(flag[i]!=0) { flag[i]=0; index=i; links=links+1; } else break; } AUTHOR'S AND TESTER'S SOLUTIONS: Author's solution can be found here. Tester's solution can be found here. This question is marked "community wiki". asked 25 Mar '16, 17:24 10●2 accept rate: 0% 0★admin ♦♦ 19.8k●350●498●541
toggle preview community wiki:
Preview
### Follow this question
By Email:
Once you sign in you will be able to subscribe for any updates here
By RSS:
Answers
Answers and Comments
Markdown Basics
• *italic* or _italic_
• **bold** or __bold__
• link:[text](http://url.com/ "title")
• image?
• numbered list: 1. Foo 2. Bar
• to add a line break simply add two spaces to where you would like the new line to be.
• basic HTML tags are also supported
• mathemetical formulas in Latex between \$ symbol
Question tags:
×15,871
×2,658
×355
×3
question asked: 25 Mar '16, 17:24
question was seen: 475 times
last updated: 28 Mar '16, 17:08
|
2019-03-25 20:39:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26894107460975647, "perplexity": 3034.98202903243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204300.90/warc/CC-MAIN-20190325194225-20190325220225-00418.warc.gz"}
|
https://scicomp.stackexchange.com/questions/26667/deposition-model-in-laminar-flow
|
Deposition model in laminar flow
I have a chamber full with a fluid flowing horizontally in laminar regime from one side to the other. It carries a suspension with concentration $c$. This suspension also falls to the bottom of the chamber with settling velocity $\mathbf{v_s}$.
When it reaches a critical concentration $c_{max}$ at the bottom, it starts building up as a deposition. This blocks the influx of particles, hence the function $\phi(c)$, this nonlinear term is similar to the one in the Burgers equation for the traffic problem. Now the region with concentration $c_{max}$ can grow upwards.
This deposition is big enough that it can disturb the flow. To account for this, I added the term $\psi(c)\mathbf{u}$ in the Navier-Stokes equations below. This term stops the flow in regions where the concentration is $c_{max}$. The equations are:
$\frac{\partial c}{\partial t} = \nabla (\phi(c,c_{max})(\mathbf{v_s} + \mathbf{v}))$
$\frac {\partial \mathbf {u} }{\partial t}+(\mathbf {u} \cdot \nabla )\mathbf {u} -\nu \nabla ^{2}\mathbf{u} + \nabla p + \psi(c,c_{max})\mathbf{u}= \mathbf{f}$
$\nabla \cdot \mathbf{u} = 0$
\psi(c, c_{max}) = \left\{ \begin{align} 0 && c < c_{init} \\ 10000 (c - c_{init})^3 && else \end{align} \right.
$c_{init}$ is the initial concentration in the entire chamber, it only increases at the bottom, when it starts building up. Therefore $\psi>0$ only at the bottom.
I am not 100% confident with this model, and hence, my question. I believe that the penalization term $\psi$ can restrict my step size.
• Is there a better approach to model a flow with a growing deposition that can disturb it?
• Is it admissible to decouple the equations by treating $\mathbf{u}$ in the first equation and $c$ in the NS equations explicitly?
• I am solving both equations on the same mesh. So far I haven't done a proper error estimation, but I believe that I would need a high refinement near the deposition layer. Can I model this phenomena with level set methods and save costs in refinement?
I have not been able to find literature on similar physics, I would appreciate any reference.
|
2019-08-24 22:04:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8598794937133789, "perplexity": 423.99264695083457}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321786.95/warc/CC-MAIN-20190824214845-20190825000845-00531.warc.gz"}
|
https://physics.stackexchange.com/questions/261439/renormalization-group-invariant-objects-of-a-quantum-field-theory
|
Renormalization group invariant objects of a quantum field theory
Consider an arbitrary QFT with $g_b$ as the bare coupling constant. After dimensional regularization, is $g_b \mu^\epsilon$ a renormalization group invariant object of the theory? In other words, is the following relation correct?
$$\frac{d (g_b \mu^\epsilon)}{d \log{\mu^2}}=0$$
Please note that the number of space-time dimensions is $d=4-2\epsilon$ and $g_b \mu^\epsilon$ is a dimensional object.
$$\begin{split} \frac{d(g_b\mu^{\epsilon})}{d\log\mu^2}&=\frac{\mu}{2}\frac{d(g_b\mu^{\epsilon})}{d\mu}\\ &=\frac{\mu}{2}\left[\mu^{\epsilon}\frac{dg_b}{d\mu}+g_b\frac{d\mu^{\epsilon}}{d\mu}\right] \end{split}$$ By definition, the bare coupling does not depend on the renormalization scale $\mu$. Hence $$\frac{d(g_b\mu^{\epsilon})}{d\log\mu^2}=\frac{\epsilon g_b}{2}\mu^{\epsilon},$$ which vanishes as $\epsilon\rightarrow 0$.
Edit: Notice that the authors define $$a_B=Z_{as} a_s,$$ where $a_B$ is the bare coupling and $a_s$ is the renormalized coupling, with $$a_s\equiv\frac{g(\mu^2)}{16\pi^2}.$$ In order to keep the coupling $g$ dimensionless in dimensional regularization, we must introduce the dimensionfull quantity $\mu$, so that in $d=4-2\epsilon$ dimensions we have $$g\rightarrow \mu^{\epsilon}g,$$ or $$a_s\rightarrow \mu^{2\epsilon}a_s.$$ Hence $$a_B\mu^{2\epsilon}=Z_{as}a_s\mu^{2\epsilon}.$$ Ordinarily (in my experience) we conclude that the bare coupling itself is invariant under the renormalization group flow because we have already included the scale $\mu$ in its definition, i.e. $$a_B=Z_{as}a_s\mu^{2\epsilon}.$$ However, based on the author's convention, we must include the scale $\mu$ on both sides of this equation. Now that we have ensured that the dimensions will be preserved, we can say that $$\frac{d(a_B\mu^{2\epsilon})}{d\log\mu^2}=0.$$
• Thanks for the reply; however, the relation I am looking for does not merely stand in the limit $\epsilon \rightarrow 0$. Please take a look eq. 5 in arxiv.org/abs/hep-ph/9701390. The result of solving the equation is the $d$-dimensional $\beta$-function. – moha Jun 8 '16 at 15:42
• Thanks for clarifying this point; I got confused because they emphasized that $a_B \mu^{2\epsilon}$ is dimensional and it does not run; $a_B$ itself did not run as well!! Another question: couldn't we from the beginning assume that both $a_s$ and $a_B$ are dimensionless and had the relation $a_B=Z_{a_s} a_s$? – moha Jun 8 '16 at 17:59
• That is what the authors assume: $a_s$ and $a_B$ are dimensionless in $d=4$ dimensions. When we go to $d=4-2\epsilon$ dimensions, the dimensions of the fields and hence the coupling constants must change. Rather than have a coupling with non-integer dimension, we introduce the scale $\mu$. We don't need the scale $\mu$. It is nonphysical and will never appear in any physical quantity. Including $\mu$ is nice because it prevents us from having logarithms of dimensionfull quantities in dim reg. It also doubles as a fictitious scale which we can use to compute the running of the coupling. – Evan Rule Jun 8 '16 at 18:30
|
2019-10-23 17:25:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 9, "x-ck12": 0, "texerror": 0, "math_score": 0.8836987614631653, "perplexity": 245.69294879018966}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987834649.58/warc/CC-MAIN-20191023150047-20191023173547-00205.warc.gz"}
|
https://socratic.org/questions/what-percentage-does-oxygen-make-up-in-the-compound-mgso-4
|
# What percentage does oxygen make up in the compound MgSO_4?
Mar 12, 2017
Oxygen makes up 53.16%
#### Explanation:
In order to compute the mass percent of oxygen in $M g S {O}_{4}$ we're going to need two things:
$\textcolor{p u r p \le}{\text{1: The formula mass of the entire compound}}$
$\textcolor{p u r p \le}{\text{2: The atomic mass of oxygen}}$
The formula mass of $M g S {O}_{4}$ is 120.38 $\frac{g}{m o l}$
$1 \left(24.31 \frac{g}{m o l}\right) + 1 \left(32.07 \frac{g}{\text{mol") +4(16.00 g/"mol}}\right)$ $= 120.38 \frac{g}{m o l}$
The atomic mass of the oxygen atom is $16.00 \frac{g}{m o l}$
Now, we have to use the following equation:
The numerator represents the mass of the desired atom, which is O in our case, and the denominator represents the mass of the entire compound. You just divide the two and multiply by 100 to obtain the percent composition:
"Mass of O"/("Molar Mass of "MgSO_4") xx100%
"(4)16.00 g/mol"/"120.38g/mol"xx100% = 53.16%
I multiplied the atomic mass of oxygen by 4 because you have to account for all of the oxygen atoms in the formula.
|
2019-08-22 00:14:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8853731751441956, "perplexity": 664.3930011047452}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316555.4/warc/CC-MAIN-20190822000659-20190822022659-00228.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/statistics-probability/introductory-statistics-9th-edition/chapter-5-section-5-3-mean-and-standard-deviation-of-a-discrete-random-variable-exercises-page-192/5-21
|
## Introductory Statistics 9th Edition
Mean =μ = $∑xP(x)$ = 2.5546 units of defective tires Thus, it is expected that 2.5546 units of defective tires are on this fleet of limos, with a standard deviation of 1.3155 units of defective tires. Standard Deviation = $\sqrt∑x^{2}P(x) -μ^{2}$ = $\sqrt 8.2564-2.5546^{2}$ =1.3155 units of defective tires If k = 2, at least 75% of the number of defective tires on a limo lie between $μ-2σ$ and $μ+2σ$. μ =2.5546, σ = 1.3155 $μ-2σ$ = 2.5546-2(1.3155) = -0.0764 $μ+2σ$ = 2.5546+2(1.3155) = 5.1856 Using Chebyshev's theorem, we can state that at least 75% of the number of tires are expected to contain -0.0764 to 5.1856 defective tires on a limo.
|
2019-12-06 15:52:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.735351026058197, "perplexity": 1310.50233074105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00083.warc.gz"}
|
https://ask.sagemath.org/question/53909/how-to-join-functions-with-an-intermediate-fit-to-obtain-smooth-derivatives/
|
# how to join functions with an intermediate fit to obtain smooth derivatives
I want to modify a function which contains a pole for numerical simulations, e.g. limit it to a maximum value around the pole. My idea is to connect two original functions f1 and f2 (e.g. one with pole and one constant) with an intermediate fit function between two x values x1 and x2. The fit function should continue the two original functions between x1 and x2 as well as their first two derivatives, and it should be monotonic in this range.
I found some hints how to fit (x,y) points with a lagrange_polynomial or find_fit to match a given function template, but these approaches do not seem to be extendable to fit the derivatives at the same time, e.g. fit points from (f1,f1', f1'') and (f2,f2',f2''). At least this seems to me what needs to be done. Is this a feasible approach, and how can it be solved?
Constructing a set of splines between x1 and x2 might be an alternative, but the implementation in the simulation system would be more complicated compared to a single polynomial. I guess 3 spline segments might be sufficient to create a "connector template", but a subsequent fit of a single function to this set of splines (to obtain a simpler implementation for the simulation) would probably violate the continuity of the derivatives, and I worry that construction of suitable intermediate points to obtain a monotonic fit function is another complication. Thus, again, this seems to result in a general multi-function fitting problem to which a genius may have found a general approach already?
edit retag close merge delete
Sort by » oldest newest most voted
Probably, you are looking for Hermite interpolation. For interpolation up to the first order derivative, you can use SciPy's CubicHermiteSpline, which is available in SageMath:
sage: from scipy.interpolate import CubicHermiteSpline
sage: CubicHermiteSpline?
more
|
2021-04-11 08:19:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2567305564880371, "perplexity": 567.7264546055424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00169.warc.gz"}
|