| url | text | date | metadata |
|---|---|---|---|
http://en.wikipedia.org/wiki/Schulze_method
|
# Schulze method
The Schulze method is a voting system developed in 1997 by Markus Schulze that selects a single winner using votes that express preferences. The method can also be used to create a sorted list of winners. The Schulze method is also known as Schwartz Sequential Dropping (SSD), Cloneproof Schwartz Sequential Dropping (CSSD), the Beatpath Method, Beatpath Winner, Path Voting, and Path Winner.
The Schulze method is a Condorcet method, which means the following: if there is a candidate who is preferred over every other candidate in pairwise comparisons, then this candidate will be the winner when the Schulze method is applied.
The output of the Schulze method (defined below) gives an ordering of candidates. Therefore, if several positions are available, the method can be used for this purpose without modification, by letting the k top-ranked candidates win the k available seats. Furthermore, for proportional representation elections, a single transferable vote variant has been proposed.
The Schulze method is used by several organizations including Debian, Ubuntu, Gentoo, Software in the Public Interest, Free Software Foundation Europe, Pirate Party associations and many others.
## Description of the method
### Ballot
The input to the Schulze method is the same as for other ranked single-winner election methods: each voter must furnish an ordered preference list on candidates where ties are allowed (a strict weak order).[1]
One typical way for voters to specify their preferences on a ballot is as follows. Each ballot lists all the candidates, and each voter ranks this list in order of preference using numbers: the voter places a '1' beside the most preferred candidate(s), a '2' beside the second-most preferred, and so forth. Each voter may optionally:
• give the same preference to more than one candidate. This indicates that this voter is indifferent between these candidates.
• use non-consecutive numbers to express preferences. This has no impact on the result of the election, since only the order in which the candidates are ranked by the voter matters, not the absolute numbers of the preferences.
• keep candidates unranked. When a voter doesn't rank all candidates, then this is interpreted as if this voter (i) strictly prefers all ranked to all unranked candidates, and (ii) is indifferent among all unranked candidates.
### Computation
Let $d[V,W]$ denote the number of voters who prefer candidate $V$ to candidate $W$.
A path from candidate $X$ to candidate $Y$ of strength $p$ is a sequence of candidates $C(1),...,C(n)$ with the following properties:
1. $C(1) = X$ and $C(n) = Y$.
2. For all $i = 1,\dots,n-1$: $d[C(i),C(i+1)] > d[C(i+1),C(i)]$.
3. For all $i = 1,\dots,n-1$: $d[C(i),C(i+1)] \geq p$.
$p[A,B]$, the strength of the strongest path from candidate $A$ to candidate $B$, is the maximum value such that there is a path from candidate $A$ to candidate $B$ of that strength. If there is no path from candidate $A$ to candidate $B$ at all, then $p[A,B] = 0$.
Candidate $D$ is better than candidate $E$ if and only if $p[D,E] > p[E,D]$.
Candidate $D$ is a potential winner if and only if $p[D,E] \geq p[E,D]$ for every other candidate $E$.
It can be proven that $p[X,Y] > p[Y,X]$ and $p[Y,Z] > p[Z,Y]$ together imply $p[X,Z] > p[Z,X]$.[1]:§4.1 Therefore, it is guaranteed (1) that the above definition of "better" really defines a transitive relation and (2) that there is always at least one candidate $D$ with $p[D,E] \geq p[E,D]$ for every other candidate $E$.
## Example
In the following example 45 voters rank 5 candidates.
• 5 $ACBED$ (meaning, 5 voters have order of preference: $A > C > B > E > D$)
• 5 $ADECB$
• 8 $BEDAC$
• 3 $CABED$
• 7 $CAEBD$
• 2 $CBADE$
• 7 $DCEBA$
• 8 $EBADC$
The pairwise preferences have to be computed first. For example, when comparing $A$ and $B$ pairwise, there are $5+5+3+7=20$ voters who prefer $A$ to $B$, and $8+2+7+8=25$ voters who prefer $B$ to $A$. So $d[A, B] = 20$ and $d[B, A] = 25$. The full set of pairwise preferences is:
Figure: directed graph labeled with the pairwise preferences d[*, *].
Matrix of pairwise preferences:

|  | d[*,A] | d[*,B] | d[*,C] | d[*,D] | d[*,E] |
|---|---|---|---|---|---|
| d[A,*] | — | 20 | 26 | 30 | 22 |
| d[B,*] | 25 | — | 16 | 33 | 18 |
| d[C,*] | 19 | 29 | — | 17 | 24 |
| d[D,*] | 15 | 12 | 28 | — | 14 |
| d[E,*] | 23 | 27 | 21 | 31 | — |
An entry d[X, Y] with d[X, Y] > d[Y, X] represents a pairwise win for X over Y. Looking only at the pairwise preferences, there is no undisputed winner here.
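The pairwise matrix can be reproduced mechanically from the ballot profile. Below is a minimal Python sketch (not part of the original article; the variable names are chosen here for illustration):

```python
# Compute the pairwise-preference matrix d[*,*] from the ballot profile above.
# A ballot "ACBED" means A > C > B > E > D.
from itertools import permutations

ballots = [(5, "ACBED"), (5, "ADECB"), (8, "BEDAC"), (3, "CABED"),
           (7, "CAEBD"), (2, "CBADE"), (7, "DCEBA"), (8, "EBADC")]
candidates = "ABCDE"

d = {(x, y): 0 for x, y in permutations(candidates, 2)}
for count, order in ballots:
    for i, x in enumerate(order):
        for y in order[i + 1:]:        # x is ranked above y on this ballot
            d[(x, y)] += count

assert d[("A", "B")] == 20 and d[("B", "A")] == 25   # matches the text
```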
Now the strongest paths have to be identified. To help visualize them, the set of pairwise preferences can be depicted as a directed graph: an arrow from the node representing candidate X to the one representing candidate Y is labelled with d[X, Y]. To avoid cluttering the diagram, an arrow is only drawn from X to Y when d[X, Y] > d[Y, X], omitting the arrow in the opposite direction.
One example of computing a strongest path strength is p[B, D] = 33: the strongest path from B to D is the direct path (B, D), which has strength 33. But when computing p[A, C], the strongest path from A to C is not the direct path (A, C) of strength 26; rather, the strongest path is the indirect path (A, D, C), which has strength min(30, 28) = 28. The strength of a path is the strength of its weakest link.
For each pair of candidates X and Y, the following table shows the strongest path from candidate X to candidate Y; the strength of each path is the value of its weakest link.
Strongest paths:

|  | ... to A | ... to B | ... to C | ... to D | ... to E |
|---|---|---|---|---|---|
| from A ... | — | A-(30)-D-(28)-C-(29)-B | A-(30)-D-(28)-C | A-(30)-D | A-(30)-D-(28)-C-(24)-E |
| from B ... | B-(25)-A | — | B-(33)-D-(28)-C | B-(33)-D | B-(33)-D-(28)-C-(24)-E |
| from C ... | C-(29)-B-(25)-A | C-(29)-B | — | C-(29)-B-(33)-D | C-(24)-E |
| from D ... | D-(28)-C-(29)-B-(25)-A | D-(28)-C-(29)-B | D-(28)-C | — | D-(28)-C-(24)-E |
| from E ... | E-(31)-D-(28)-C-(29)-B-(25)-A | E-(31)-D-(28)-C-(29)-B | E-(31)-D-(28)-C | E-(31)-D | — |
Strengths of the strongest paths:

|  | p[*,A] | p[*,B] | p[*,C] | p[*,D] | p[*,E] |
|---|---|---|---|---|---|
| p[A,*] | — | 28 | 28 | 30 | 24 |
| p[B,*] | 25 | — | 28 | 33 | 24 |
| p[C,*] | 25 | 29 | — | 29 | 24 |
| p[D,*] | 25 | 28 | 28 | — | 24 |
| p[E,*] | 25 | 28 | 28 | 31 | — |
Now the output of the Schulze method can be determined. For example, when comparing A and B, since 28 = p[A,B] > p[B,A] = 25, for the Schulze method candidate A is better than candidate B. Another example is that 31 = p[E,D] > p[D,E] = 24, so candidate E is better than candidate D. Continuing in this way, the result is that the Schulze ranking is E > A > C > B > D, and E wins. In other words, E wins since p[E,X] ≥ p[X,E] for every other candidate X.
## Implementation
The only difficult step in implementing the Schulze method is computing the strongest path strengths. However, this is a well-known problem in graph theory sometimes called the widest path problem. One simple way to compute the strengths therefore is a variant of the Floyd–Warshall algorithm. The following pseudocode illustrates the algorithm.
```
# Input: d[i,j], the number of voters who prefer candidate i to candidate j.
# Output: p[i,j], the strength of the strongest path from candidate i to candidate j.

for i from 1 to C
    for j from 1 to C
        if (i ≠ j) then
            if (d[i,j] > d[j,i]) then
                p[i,j] := d[i,j]
            else
                p[i,j] := 0

for i from 1 to C
    for j from 1 to C
        if (i ≠ j) then
            for k from 1 to C
                if (i ≠ k and j ≠ k) then
                    p[j,k] := max ( p[j,k], min ( p[j,i], p[i,k] ) )
```
This algorithm is efficient, and has running time proportional to $C^3$, where $C$ is the number of candidates. (This does not account for the running time of computing the d[*,*] values, which, if implemented in the most straightforward way, takes time proportional to $C^2$ times the number of voters.)
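For concreteness, the following minimal Python sketch (continuing the ballot example above; the helper name `strongest_paths` and the ranking construction are chosen here for illustration, not taken from any library) translates the pseudocode directly and reproduces the example's result:

```python
# Widest-path variant of Floyd-Warshall, as in the pseudocode above.
# `d` and `candidates` are the pairwise matrix and candidate string from
# the earlier sketch in the example section.

def strongest_paths(d, candidates):
    # Initialization: a direct pairwise win is a path of length one.
    p = {(x, y): 0 for x in candidates for y in candidates if x != y}
    for x, y in p:
        if d[(x, y)] > d[(y, x)]:
            p[(x, y)] = d[(x, y)]
    # Relaxation: allow paths through each intermediate candidate i.
    for i in candidates:
        for j in candidates:
            if j == i:
                continue
            for k in candidates:
                if k != i and k != j:
                    p[(j, k)] = max(p[(j, k)], min(p[(j, i)], p[(i, k)]))
    return p

p = strongest_paths(d, candidates)
# Order candidates by how many opponents they beat under the p-comparison.
ranking = sorted(candidates,
                 key=lambda x: sum(p[(x, y)] > p[(y, x)]
                                   for y in candidates if y != x),
                 reverse=True)
print("".join(ranking))   # "EACBD": E wins, matching the example
```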
## Ties and alternative implementations
When allowing users to have ties in their preferences, the outcome of the Schulze method naturally depends on how these ties are interpreted in defining d[*,*]. Two natural choices are that d[A, B] represents either the number of voters who strictly prefer A to B (A>B), or the margin of (voters with A>B) minus (voters with B>A). But no matter how the ds are defined, the Schulze ranking has no cycles, and assuming the ds are unique it has no ties.[1]
Although ties in the Schulze ranking are unlikely,[2] they are possible. Schulze's original paper[1] proposed breaking ties in accordance with a voter selected at random, and iterating as needed.
An alternative, slower, way to describe the winner of the Schulze method is the following procedure:
1. draw a complete directed graph with all candidates, and all possible edges between candidates
2. iteratively [a] delete all candidates not in the Schwartz set (i.e. any candidate which cannot reach all others) and [b] delete the weakest link
3. the winner is the last non-deleted candidate.
## Satisfied and failed criteria
### Satisfied criteria
The Schulze method satisfies the following criteria (see the comparison table below): the Condorcet criterion, the majority criterion, the majority loser criterion, the Condorcet loser criterion, the mutual majority criterion, the Smith criterion, independence of Smith-dominated alternatives (ISDA), independence of clones, monotonicity, reversal symmetry, resolvability, and polynomial runtime.
### Failed criteria
Since the Schulze method satisfies the Condorcet criterion, it automatically fails criteria that are incompatible with it, such as the participation criterion, the consistency criterion, and the later-no-harm criterion.
Likewise, since the Schulze method is not a dictatorship and agrees with unanimous votes, Arrow's Theorem implies that it fails the criterion of independence of irrelevant alternatives (IIA).
The Schulze method also fails local independence of irrelevant alternatives (LIIA).
### Comparison table
The following table compares the Schulze method with other preferential single-winner election methods:
|  | Monotonic | Condorcet | Majority | Condorcet loser | Majority loser | Mutual majority | Smith | ISDA | LIIA | Clone independence | Reversal symmetry | Polynomial time | Participation, Consistency | Resolvability |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Schulze | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | No | Yes |
| Ranked pairs | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes |
| Copeland | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | No |
| Kemeny-Young | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | No | No | Yes |
| Nanson | No | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | No | Yes |
| Baldwin | No | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | No | Yes | No | Yes |
| Instant-runoff voting | No | No | Yes | Yes | Yes | Yes | No | No | No | Yes | No | Yes | No | Yes |
| Borda | Yes | No | No | Yes | Yes | No | No | No | No | No | Yes | Yes | Yes | Yes |
| Bucklin | Yes | No | Yes | No | Yes | Yes | No | No | No | No | No | Yes | No | Yes |
| Coombs | No | No | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | No | Yes |
| MiniMax | Yes | Yes | Yes | No | No | No | No | No | No | No | No | Yes | No | Yes |
| Plurality | Yes | No | Yes | No | No | No | No | No | No | No | No | Yes | Yes | Yes |
| Anti-plurality | Yes | No | No | No | Yes | No | No | No | No | No | No | Yes | Yes | Yes |
| Contingent voting | No | No | Yes | Yes | Yes | No | No | No | No | No | No | Yes | No | Yes |
| Sri Lankan contingent voting | No | No | Yes | No | No | No | No | No | No | No | No | Yes | No | Yes |
The main difference between the Schulze method and the ranked pairs method can be seen in this example:
Suppose the MinMax score of a set X of candidates is the strength of the strongest pairwise win of a candidate A ∉ X against a candidate B ∈ X. Then the Schulze method, but not Ranked Pairs, guarantees that the winner is always a candidate of the set with minimum MinMax score.[1]:§4.8 So, in some sense, the Schulze method minimizes the largest majority that has to be reversed when determining the winner.
On the other hand, Ranked Pairs minimizes the largest majority that has to be reversed to determine the order of finish, in the minlexmax sense.[4] In other words, when Ranked Pairs and the Schulze method produce different orders of finish, for the majorities on which the two orders of finish disagree, the Schulze order reverses a larger majority than the Ranked Pairs order.
## History
The Schulze method was developed by Markus Schulze in 1997. It was first discussed in public mailing lists in 1997–1998[5] and in 2000.[6] Subsequently, Schulze method users included Software in the Public Interest (2003),[7] Debian (2003),[8] Gentoo (2005),[9] TopCoder (2005),[10] Wikimedia (2008),[11] KDE (2008),[12] the Free Software Foundation Europe (2008),[13] the Pirate Party of Sweden (2009),[14] and the Pirate Party of Germany (2010).[15] In the French Wikipedia, the Schulze method was one of two multi-candidate methods approved by a majority in 2005,[16] and it has been used several times.[17]
In 2011, Schulze published the method in the academic journal Social Choice and Welfare.[1]
## Users
Figure: sample ballot for Wikimedia's Board of Trustees elections.
The Schulze method is not currently used in parliamentary elections. However, it has been used for parliamentary primaries in the Swedish Pirate Party. It is also starting to receive support in other public organizations. Organizations which currently use the Schulze method are:
## Notes
1. Markus Schulze, "A new monotonic, clone-independent, reversal symmetric, and Condorcet-consistent single-winner election method", Social Choice and Welfare, volume 36, number 2, pages 267–303, 2011. Preliminary version in Voting Matters, 17:9–19, 2003.
2. ^ Under reasonable probabilistic assumptions when the number of voters is much larger than the number of candidates
3. ^ a b c Douglas R. Woodall, Properties of Preferential Election Rules, Voting Matters, issue 3, pages 8-15, December 1994
4. ^ Tideman, T. Nicolaus, "Independence of clones as a criterion for voting rules", Social Choice and Welfare, volume 4, number 3, pages 185–206, 1987.
5. ^ See:
6. ^ See:
7. ^ a b Process for adding new board members, January 2003
8. ^ a b See:
9. ^ a b See:
10. ^ a b See:
11. ^ See:
12. ^ a b section 3.4.1 of the Rules of Procedures for Online Voting
13. ^ a b See:
14. ^ a b See:
15. ^ a b 11 of the 16 regional sections and the federal section of the Pirate Party of Germany are using LiquidFeedback for unbinding internal opinion polls. In 2010/2011, the Pirate Parties of Neukölln (link), Mitte (link), Steglitz-Zehlendorf (link), Lichtenberg (link), and Tempelhof-Schöneberg (link) adopted the Schulze method for its primaries. Furthermore, the Pirate Party of Berlin (in 2011) (link) and the Pirate Party of Regensburg (in 2012) (link) adopted this method for their primaries.
16. ^ a b Choix dans les votes
17. ^ fr:Spécial:Pages liées/Méthode Schulze
18. ^ §12(4), §12(15), and §14(3) of the bylaws, April 2013
19. ^ Election of the Annodex Association committee for 2007, February 2007
20. ^ Ajith, Van Atta win ASG election, April 2013
21. ^ §6 and §7 of its bylaws, May 2014
22. ^ §9a of the bylaws, October 2013
23. ^ Condorcet method for admin voting, January 2005
24. ^ See:
25. ^ Project Logo, October 2009
26. ^ "Codex Alpe Adria Competitions". 0xaa.org. 2010-01-24. Retrieved 2010-05-08.
27. ^ Civics Meeting Minutes, March 2012
28. ^ "Fellowship Guidelines" (PDF). Retrieved 2011-06-01.
29. ^ Report on HackSoc Elections, December 2008
30. ^ Adam Helman, Family Affair Voting Scheme - Schulze Method
31. ^ appendix 1 of the constitution
32. ^ See:
33. ^ "Guidance Document". Eudec.org. 2009-11-15. Retrieved 2010-05-08.
34. ^ article XI section 2 of the bylaws
35. ^ Democratic election of the server admins, July 2010
36. ^ Campobasso. Comunali, scattano le primarie a 5 Stelle, February 2014
37. ^ article 25(5) of the bylaws, October 2013
38. ^ 2° Step Comunarie di Montemurlo, November 2013
39. ^ article 12 of the bylaws, February 2014
40. ^ article 51 of the statutory rules
41. ^ Voters Guide, September 2011
42. ^ See:
43. ^ GnuPG Logo Vote, November 2006
44. ^ §14 of the bylaws
45. ^ "User Voting Instructions". Gso.cs.binghamton.edu. Retrieved 2010-05-08.
46. ^ Haskell Logo Competition, March 2009
47. ^ "Hillegass-Parker House Bylaws § 5. Elections". Hillegass-Parker House Wiki. Retrieved 25 April 2014.
48. ^ article VI section 10 of the bylaws, November 2012
49. ^ A club by any other name ..., April 2009
50. ^ See:
51. ^
52. ^ Kubuntu Council 2013, May 2013
53. ^ See:
54. ^ article 8.3 of the bylaws
55. ^ See:
56. ^ The Principles of LiquidFeedback. Berlin: Interaktive Demokratie e. V. 2014. ISBN 978-3-00-044795-2.
57. ^ Lumiera Logo Contest, January 2009
58. ^ bylaws
59. ^ The MKM-IG uses Condorcet with dual dropping. That means: The Schulze ranking and the ranked pairs ranking are calculated and the winner is the top-ranked candidate of that of these two rankings that has the better Kemeny score. See:
60. ^ "Wahlmodus" (in German). Metalab.at. Retrieved 2010-05-08.
61. ^
62. ^ See:
63. ^ 2009 Director Elections
64. ^ NSC Jersey election, NSC Jersey vote, September 2007
65. ^ Online Voting Policy
66. ^ See:
67. ^ "Voting Procedures". Parkscholars.org. Retrieved 2010-05-08.
68. ^ National Congress 2011 Results, November 2011
69. ^ §6(10) of the bylaws
70. ^
71. ^ §11.2.E of the statutory rules
72. ^ article 7.5 of the bylaws
73. ^ Rules adopted on 18 December 2011
74. ^ Vote Result for Name Definition
75. ^ Help mee met het nieuwe Piratenpartij-logo!, August 2013
76. ^ 23 January 2011 meeting minutes
77. ^ Piratenversammlung der Piratenpartei Schweiz, September 2010
78. ^ Article IV Section 4 of the constitution
79. ^ 2006 Community for Pittsburgh Ultimate Board Election, September 2006
80. ^ Committee Elections, April 2012
81. ^ LogoVoting, December 2007
82. ^ See:
83. ^ Squeak Oversight Board Election 2010, March 2010
84. ^ See:
85. ^ Election status update, September 2009
86. ^ §10 III of its bylaws, June 2013
87. ^ Minutes of the 2010 Annual Sverok Meeting, November 2010
88. ^ constitution, December 2010
89. ^ article VI section 6 of the bylaws
90. ^ Ubuntu IRC Council Position, May 2012
91. ^ See this mail.
92. ^ Pairwise Voting Results
93. ^ See e.g. here (May 2009), here (August 2009), and here (December 2009).
94. ^ See here and here.
95. ^ See:
|
2014-09-20 00:08:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 50, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7165148258209229, "perplexity": 3939.9372681187065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132372.31/warc/CC-MAIN-20140914011212-00232-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
http://clay6.com/qa/1076/find-the-area-of-the-region-in-the-first-quadrant-enclosed-by-axis-line-and
|
# Find the area of the region in the first quadrant enclosed by $x$ - axis, line $x = \sqrt {3}\: y$ and the circle $x^2 + y^2 = 4.$
$\begin{array}{1 1} \large \frac{\pi}{3} sq. units. \\ \large \frac{\pi}{6} sq. units. \\\large \frac{\pi}{9} sq. units. \\ \large \frac{\pi}{4} sq. units \end{array}$
Toolbox:
• The area of the region bounded between a curve $y=f(x)$ and a line is given by $A=\int_a^b y\;dx=\int_a^b f(x)\;dx$,
• where $a$ and $b$ are the points of intersection of the line and the curve.
• To find the points of intersection, we can solve the two equations simultaneously.
We need to find the area of the region bounded by the circle $x^2+y^2=4$, the line $x=\sqrt 3\,y$, and the x-axis.
The required region is the circular sector between the line and the x-axis (shown shaded in the original figure).
To find the point of intersection, let us solve the equations:
$x^2+y^2=4$-----(1) and $x=\sqrt 3y\Rightarrow x^2=3y^2$
$\Rightarrow \frac{x^2}{3}=y^2$--------(2)
Substituting for $y^2$ in equation (1), we get
$x^2+\frac{x^2}{3}=4 \Rightarrow 3x^2+x^2=12$
$\Rightarrow 4x^2=12$
$\Rightarrow x^2=3$
$\Rightarrow x=\pm \sqrt 3$
If x=$\sqrt 3$ then y=1.
Hence the limits of integration for the circular part run from $\sqrt 3$ to $2$, since the radius of the circle is 2.
The required area is $A=A_1+A_2$, where $A_1=\int_{\sqrt 3}^2\sqrt {4-x^2}\;dx$ is the area under the circle $y_1=\sqrt {4-x^2}$ from $x=\sqrt 3$ to $x=2$, and $A_2$ is the area of the triangle bounded by the line $x=\sqrt 3\,y$, the x-axis, and the ordinate $x=\sqrt 3$.
On integrating $A_1$ we get,
$A_1=\begin{bmatrix}\frac{x}{2}\sqrt{4-x^2}+\frac{4}{2}\sin^{-1}(\frac{x}{2})\end{bmatrix}_{\sqrt 3}^2$
On applying the limits we get,
$A_1=\begin{bmatrix}\frac{2}{2}\sqrt{4-4}+\frac{4}{2}\sin^{-1}(\frac{2}{2})\end{bmatrix}-\begin{bmatrix}\frac{\sqrt 3}{2}\sqrt{4-3}+\frac{4}{2}\sin^{-1}(\frac{\sqrt 3}{2})\end{bmatrix}$
$\;\;\;\;=\begin{bmatrix}0+2(\frac{\pi}{2})\end{bmatrix}-\begin{bmatrix}\frac{\sqrt 3}{2}+2(\frac{\pi}{3})\end{bmatrix}=\frac{\pi}{3}-\frac{\sqrt 3}{2}$------(3)
$A_2$ is the triangle with base $\sqrt 3$ and height $1$ (since the line meets the circle at $(\sqrt 3,1)$):
$A_2=\frac{1}{2}\times \sqrt 3\times 1=\frac{\sqrt 3}{2}$-------(4)
Combining (3) and (4), the required area is
$A=\frac{\pi}{3}-\frac{\sqrt 3}{2}+\frac{\sqrt 3}{2}=\frac{\pi}{3}$ sq. units.
Hence the required area=$\Large \frac{\pi}{3}$ sq. units.
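As a quick numerical check (not part of the original solution), one can verify the result with sympy, splitting the region into the same two pieces used above:

```python
# Verify: triangle under y = x/sqrt(3) on [0, sqrt(3)], plus the strip
# under the circle y = sqrt(4 - x^2) on [sqrt(3), 2].
import sympy as sp

x = sp.symbols('x')
triangle = sp.integrate(x / sp.sqrt(3), (x, 0, sp.sqrt(3)))
circle = sp.integrate(sp.sqrt(4 - x**2), (x, sp.sqrt(3), 2))
print(sp.simplify(triangle + circle))   # pi/3
```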
|
2017-06-26 12:20:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9242509007453918, "perplexity": 471.5088305195442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320736.82/warc/CC-MAIN-20170626115614-20170626135614-00683.warc.gz"}
|
https://www.nature.com/articles/s41598-017-15826-3?error=cookies_not_supported&code=84a510d0-bcd4-4d18-868a-08826236f564
|
Introduction
Since their initial development, lasers have been implemented as irreplaceable components of various applications, e.g., mass spectrometers1, laser machining devices2,3,4,5, car ignition plugs6,7, satellite propulsion devices8,9,10, and medical equipment11,12. In particular, the Q-switching technique allows solid-state lasers to generate a short, high-powered pulse output13,14,15, which is crucial for the above applications.
Previously, we proposed a magnetooptical (MO) Q-switch composed of a ferrimagnetic rare-earth substituted iron garnet (RIG) film and small coils16,17. Magnetic garnets are well known for their large MO effects and high transmittance in the near-infrared region18,19. However, although RIGs have potential use in optical applications such as two-dimensional integrated arrays20 and storage media21, there are few reports of Q-switches employing these materials. In one of our previous studies, we demonstrated the MO Q-switch in a diode-pumped Nd:GdVO4 laser system and showed the potential of this Q-switch, for which a notably compact system size was obtained (cavity length L: 10 mm). Note that such a small L is impossible using other active Q-switches, for example, electro-optic (EO)22 and acousto-optic (AO)23 Q-switches. EO Q-switches require a cubic polariser in the cavity and a high-voltage power supply for operation, whereas the AO module in an AO Q-switch cannot be made appropriately small to yield sufficient interaction and radio-frequency (RF) power supply for operation. Therefore, it is difficult to miniaturise L or the entire laser system for these Q-switches. As higher output photon densities can be achieved for Q-switched lasers with shorter length L 14,24, diode-pumped solid-state micro lasers having L values of millimetre order are attracting interest.
Although the compactness of the MO Q-switched laser incorporating RIG film was demonstrated previously, the output peak power remained small. Therefore, in this paper, a quasi-continuous-wave (QCW)25,26 pumping technique using pulsed pumping light is employed to provide higher pump energy to the lasing material. In addition, and for the first time, Nd:YAG emitting randomly polarised light is used as a laser material to demonstrate the MO Q-switch. The combination of Nd:YAG and an MO Q-switch is of particular interest, unlike other Q-switches, because of a common misunderstanding that Q-switching using MO materials is based solely on the Faraday rotation, which would mean that the input light must always be in a linearly polarised state while using the MO Q-switch. Regarding the achievement of integrated actively Q-switched micro lasers, Nd:YAG is a promising lasing material. The crystal structure of Nd:YAG is similar to RIG, and these materials have similar thermal expansion coefficients27,28,29,30; thus, RIG film can grow on the Nd:YAG via epitaxial growth techniques31 or bond to the Nd:YAG surface as Cr4+:YAG32,33 does. The latter material also possesses a garnet structure and a similar thermal expansion coefficient, and has been reported as an appropriate material for a passive Q-switch with high miniaturisation34,35. Therefore, the performance of MO Q-switching using RIG in an Nd:YAG laser is important for the implementation of actively Q-switched micro lasers. Finally, the use of a different lasing material also facilitates discussion of the polarisation state of the MO Q-switched laser. The RIG-based Q-switch does not require the presence of a polariser in the cavity; therefore, the isotropic structure of the Nd:YAG must affect the output polarisation state.
Experimental setup
A schematic of the prepared Q-switched laser cavity is shown in Fig. 1a, with parts of the cavity components being cut away for improved visibility. A diode at 808-nm wavelength end-pumped the 0.5 at.% Nd:YAG crystal, which was 3 mm × 3 mm × 4 mm in size. The Nd:YAG crystal was wrapped in foil and fixed in the water-cooled Cu heat sink, and its temperature was stabilised at 20 °C by a proportional-integral-derivative-controlled Peltier cooler. Dielectric multilayer coating was present on the input and output surfaces of the Nd:YAG, having high reflectance (HR) of 99.8% at the 1,064-nm wavelength and high transmittance (HT) of 98% at 808-nm wavelength on the input side, and 98% HT and 99.8% HR at 1,064- and 808-nm wavelengths, respectively.
A concave mirror with 300-mm curvature radius and 90% reflectance at the 1,064-nm wavelength was placed 10 mm from the Nd:YAG input surface as an output coupler. The MO Q-switch was inserted in the cavity. The RIG used in this study was the same 190-μm-thick film employed in our previous studies16,17, which was formed via a liquid-phase epitaxy method on a 560-μm-thick single-crystalline Gd3Ga5O12 substrate. The composition of this film measured by energy-dispersive x-ray spectroscopy (JEOL, JED-2201F) was Tb2.0Bi1.0Fe4.8Ga0.2O12–ξ, where ξ indicates the number of oxygen vacancies. The optical loss characterized by a spectrophotometer (UV-3150, Shimadzu) was 108 dB/cm at the wavelength of 1,064 nm. The Faraday rotation angle at 1,064-nm wavelength measured via the rotating analyser method was 2.4 × 10³ °/cm. Further, the figure of merit (FOM), defined as the Faraday rotation angle divided by the absorption, was 222 °/dB. Maze-shaped magnetic domains appeared in the RIG with an average width of ~50 μm. A pair of small coils with 5.3-mm diameter sandwiched the RIG and were connected to a pulse current generator, which supplied a peak current of 56 A with 3-μs duration to produce the pulsed field. The field applied to the coil was estimated to be more than 200 Oe, the saturation field of the RIG film.
Figure 1b shows the beam diameter in the cavity estimated using the ABCD matrix method24, along with the refractive indexes of the cavity components. The beam diameter in the RIG was approximately 245 μm, which is five times larger than the average width of the magnetic domains in the RIG film. The laser output was simultaneously monitored by an energy meter (Ophir, VEGA) and an InGaAs-based fast-response optical detector (Thorlabs, DET10C/M).
Results
Peak power and beam quality
The pulse width $\tau_p$ and peak power of a Q-switched laser are proportional and inversely proportional to $L$, respectively36. The relation between $\tau_p$ and $L$ is expressed as14
$$\tau_p \approx \frac{r\,\eta(r)}{r-1-\ln r}\,\tau_c = \frac{r\,\eta(r)}{r-1-\ln r}\left(\frac{2L}{c\,\delta}\right), \tag{1}$$
where $r$ is the inversion ratio, $\eta$ is the energy extraction efficiency, $\tau_c$ is the cavity decay time, $c$ is the velocity of light, and $\delta$ is the cavity loss before Q-switching. The values of $c$, $r$, $\eta$, and $\delta$ were $3.0\times10^8$ m/s, 1.063, 0.908, and 1.127, respectively. These were derived from the relationships $r = N_i/N_{th}$, $\eta = (N_i - N_f)/N_i$, and $\delta = -\ln(R) - 2\ln(T_{on})$, where $N_i$ is the initial population inversion density, $N_{th}$ is the threshold population inversion density, $N_f$ is the final population inversion density, $R$ is the reflectance of the output coupler (= 0.9), and $T_{on}$ is the transmittance of the RIG film when the field was applied (= 0.582). Moreover, the values of $N_i$, $N_{th}$, and $N_f$ are determined by the equations $N_i = (\delta + \delta_Q)/(2\sigma L)$, $N_{th} = \delta/(2\sigma L)$, and $N_f = N_i - N_{th}\ln(N_i/N_f)$, where $\sigma$ is the stimulated emission cross-section of Nd:YAG ($= 2.8\times10^{-23}$ m²)14 and $\delta_Q$ is the additional loss caused by the Q-switch. The peak power is equivalent to $E_o/\tau_p$, where $E_o$ is the output energy.
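As a rough plausibility check (not from the paper), plugging the quoted values into equation (1) for the shortest cavity gives a pulse width of the same order as the measured one:

```python
# Estimate tau_p from equation (1) using the values quoted above.
import math

r, eta, delta = 1.063, 0.908, 1.127  # inversion ratio, extraction efficiency, cavity loss
c = 3.0e8                            # velocity of light, m/s
L = 10e-3                            # cavity length, m

tau_c = 2 * L / (c * delta)          # cavity decay time
tau_p = r * eta / (r - 1 - math.log(r)) * tau_c
print(f"tau_p = {tau_p * 1e9:.0f} ns")  # ~30 ns, comparable to the measured 25 ns
```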
To examine the influence of $L$ on the peak power and beam quality, the output coupler was gradually moved along the beam propagation axis to vary $L$ from 10 to 130 mm. The pumping repetition ratio and pump duration were fixed to 1 kHz and 200 μs, respectively, corresponding to a pumping energy of 6.4 mJ/pulse, because the obtained peak power was maximum in our setup. Figure 2a shows the peak power and $\tau_p$ results for varying $L$. The obtained values (circles and squares) agreed well with the calculated values (solid lines) determined using equation (1). The dashed line indicates the minimum $L$ obtainable using the 4-mm-long Nd:YAG and RIG film (thickness: 4.75 mm) employed in this study. The pulse shape produced by the cavity with $L$ = 10 mm is shown in Fig. 2b. The obtained pulse energy and duration were 27 μJ and 25 ns, respectively, corresponding to a 1.1-kW peak power. The pulse power fluctuated within 7% deviation during more than 10 repeated measurements. To the best of our knowledge, this is the highest peak power value produced to date using MO Q-switches. The output spectrum obtained for $L$ = 10 mm was measured using a spectrum analyser (Anritsu, MS9740A) and is shown in the inset of Fig. 2b. There were three peaks in the measured spectrum (black circles). The full widths at half maximum at the three centre wavelengths are 47 pm at 1,064.54 nm (red line), 39 pm at 1,064.59 nm (blue line), and 35 pm at 1,064.66 nm (green line). This spectral splitting might indicate that there were mainly three propagating modes in the cavity, and the splitting may disappear in an even shorter cavity. The beam quality $M^2$ of the output pulse was also measured using a lens with 25-mm focal length via the knife-edge method, according to ISO Standard 1114637. $M^2$ was estimated to be 3.7.
Pumping energy
To determine the minimum input energy for Q-switching, the pumping energy was varied by modulating the diode output. The duration between the fall of the pumping pulse and the rise of the electric pulse applied to the coils was fixed to 15 μs. Figure 3 shows the output energy obtained as a function of the input energy for these conditions. The threshold was 2.9 mJ and the output energy became saturated for a pumping energy of more than 3.1 mJ. Such a saturation characteristic shows good agreement with previous experimental reports using passive Q-switch lasers7. The output energy is constant until additional pulses are generated; hence, a single pulse was obtained in this setup.
Output polarisation
The output-pulse polarisation state was analysed using a quarter-wave (λ/4) plate and an analyser. Initially, the output power was measured with a rotating analyser, and the results indicated that the state exhibited circular or random polarisation. Then, a λ/4 plate was inserted between the output coupler and the analyser. The power change was monitored using an InGaAs-based fast-response optical detector (Thorlabs, DET10C/M). As shown in Fig. 4, the λ/4 plate exerted no influence on the polarisation; therefore, random polarisation of the output was confirmed.
Discussion
In the polarisation state measured using the λ/4 plate shown in Fig. 4, small dent-like features are apparent. However, these features were caused by the λ/4 plate insertion and are unrelated to the output state produced by the MO Q-switch laser. Note that Nd:YAG has an isotropic crystal structure and emits unpolarised light. Further, although an MO Q-switch using an isotropic lasing material has been reported38, this is the first report of a randomly polarised output for an MO Q-switch laser, because the RIG-based MO Q-switch does not contain polarisers. If one needs to control the polarisation state, changing the lasing material from an isotropic crystal to an anisotropic one would be the easiest way. For the laser cavity using Nd:GdVO4, which emits linearly polarised light, a circularly polarised output was obtained. Thus, the results obtained in this study show that the MO Q-switch using RIG film can be used with various lasing materials. These results are contrary to the mechanism of an MO Q-switch explained by Faraday rotation alone. While we do not have a clear model explaining the entire mechanism of the MO Q-switch, these results should help in constructing one.
Overall, this study demonstrates integrable MO Q-switching using RIG film in an Nd:YAG laser system for the first time. The 10-mm-long cavity produced 1.1-kW peak power and a 27-ns-long output at a centre wavelength of 1064.58 nm via QCW diode pumping, generating randomly polarised pulses. The repetition ratio was 1 kHz. In addition, the output polarisation state was confirmed to be random and an M 2 value of 3.7 was obtained. In this laser system, the Q-switch and the lasing material have a similar crystal structure; therefore, the MO Q-switch and the Nd:YAG can be combined into actively Q-switched micro lasers, similar to epitaxial growth31 or bonding of passive Q-switches on lasing materials (e.g., Cr4+:YAG on Nd:YAG)32,33. The experimental evidence provided in this study advances this research field toward the realisation of actively controllable integrated micro lasers.
|
2022-12-09 19:24:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5959747433662415, "perplexity": 2515.82835245329}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711475.44/warc/CC-MAIN-20221209181231-20221209211231-00242.warc.gz"}
|
http://lxxb.cstam.org.cn/EN/figure/showFigure.do?fid=359
|
EVOLUTION CHARACTERISTIC AND WORKING MECHANISM ANALYSIS OF ROTATING THIN-WALLED STRUCTURES IN POST-CRITICAL TURBULENT INTERVAL BASED ON FIELD MEASUREMENT
Wang Hao, Ke Shitang
Chinese Journal of Theoretical and Applied Mechanics 2019, 51 (1): 111-123. DOI: 10.6052/0459-1879-18-125
Recent studies have found that the time-varying characteristics of the load may have a significant effect on the vibrational strength and energy mechanism. The most important structures in thermal/nuclear power plants (such as cooling towers, chimneys, etc.) are all typical rotating thin-walled structures. To reveal the vibration evolution characteristics and working mechanism of thin-walled structures in the post-critical turbulent interval, the vibration responses of eight typical rotating thin-walled structures in high Reynolds number flow ($Re \ge 3.5\times 10^6$) are measured. Firstly, non-stationary identification of signals with different time intervals is performed after de-noising and filtering. The time-varying mean and extreme-value estimation of the response are studied based on a non-stationary analysis model, and the frequency-domain evolution characteristics are studied based on the evolutionary spectrum method. On this basis, the proportion of the resonance component in the wind-induced response and its effect are discussed. Then, the self-resonant frequencies and damping ratios of the structures are identified, and the damping mechanism of different rotating thin-walled structures is studied. The evolution characteristics and working mechanism are revealed as follows. (1) The wind-induced vibration response of a rotating thin-shell structure in the post-critical turbulent interval shows stable frequency evolution characteristics but non-stationary evolution in intensity. (2) The wind-induced vibration of rotating thin-walled structures in the post-critical turbulent interval should be studied separately at quasi-static and resonance excitation points. The vibration energy distributions of resonance excitation points at different regions of the cooling tower were similar, but the PSD functions of quasi-static points differed dramatically from each other. (3) The vibration energy distribution of the resonant excitation points showed a phased trend, and the proposed resonance spectral expression takes the three variation stages of the responses into account and achieves high prediction accuracy. (4) With the concept of equivalent damping ratio proposed in this paper, a damping-ratio prediction formula is derived. More importantly, these results show that resonance effects and non-stationary effects on the wind-induced response of rotating thin-walled structures in the post-critical turbulent interval are generally notable, and that the 5% damping ratio commonly assumed in current engineering practice is unreasonable for this type of rotating thin-walled structure.
Fig. 2: Time-history and frequency spectra of the measured response at a typical measuring point.
|
2020-10-29 07:49:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6758502721786499, "perplexity": 3741.068027476642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107903419.77/warc/CC-MAIN-20201029065424-20201029095424-00410.warc.gz"}
|
https://bitbucket.org/gnotaras/django-postgresql-manager/src/0d6d2566bbbf?at=0.1.6
|
# django-postgresql-manager /
Repository layout:
• contrib/example-project
• src/PostgreSQL_manager
django-postgresql-manager is a Django application which can be used to manage PostgreSQL users and databases.
Project Development Web Site:
Public Source Code Repository:
## How this distribution package is organized
The following list outlines the kind of information each of the files of this distribution package contains:
• AUTHORS : List of all authors and contributors.
• BUGS : Information about how to file bug reports or ask for new features.
• HELP : Instructions on how to use this software.
• INSTALL : Information on how to install this application.
• SUPPORT : Information on how to get support for this software.
## How to read the HELP document
The HELP document is written in reStructuredText (rst). Although reStructuredText is an easy-to-read plain text markup format, it is recommended to convert it to HTML for a better reading experience.
In order to convert this document to HTML you need docutils, which you can install in your system using pip:
pip install docutils
Or easy_install:
easy_install docutils
Once docutils is installed, you can use the rst2html.py utility to perform the conversion:
rst2html.py HELP help.html
Then use any web browser to view the exported help.html file.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
|
2015-04-27 11:35:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27212318778038025, "perplexity": 3431.8816072457166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658061.59/warc/CC-MAIN-20150417045738-00051-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://competitive-exam.in/questions/discuss/fledgling-is-a-term-often-used-to-denote-the
|
# Fledgling is a term often used to denote the young one of a
• bird
• fox
• dog
• man
|
2020-04-03 03:14:52
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.893488347530365, "perplexity": 6211.159610032544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510287.30/warc/CC-MAIN-20200403030659-20200403060659-00161.warc.gz"}
|
https://openmx.ssri.psu.edu/thread/267?q=thread/267
|
# Mixture distribution, zygosity example
Joined: 07/31/2009 - 15:14
I seem to have this almost working but for a pesky error:
```
Error: The job for model 'twinACE' exited abnormally with the error message: Objective returned invalid value.
Execution halted
```
Script and data files attached (albeit with an extra _ in the name of the latter, ffs).
Joined: 07/31/2009 - 15:24
I checked in a patch so that the error message will now tell you if the objective function is returning NaN or it is returning an infinite value.
Joined: 07/31/2009 - 15:14
A great patch that. For some reason, the script now works when all I was expecting was better debugging information!
Joined: 07/31/2009 - 14:25
:-)
"Your error was interesting: code 2 with little bit of 4 and 7 - Anyhow, I patched myself to cope with this in future, ran the script and emailed you the results in Molecular Psychiatry format.. hope that's OK,
Yours,
O. Supermodeler"
Joined: 07/31/2009 - 15:14
I now have another problem. Suppose I want to use a definition variable in the mxAlgebra that is operating on the vector of likelihoods coming back from an mxModel's mxFIMLObjective. For example, with code snippet:
mxModel("MZlike",
mxData(DataMZ, type="raw"),
mxFIMLObjective("twinACE.expCovMZ", "twinACE.expMean",selVars, vector=T)),
mxModel("DZlike",
mxData(DataMZ, type="raw"),
mxFIMLObjective("twinACE.expCovDZ", "twinACE.expMean",selVars, vector=T)),
mxAlgebra(-2*sum(log(MZlike@DataMZ.pMZ%x%MZlike.objective + (1-MZlike@DataMZ.pMZ)%x%DZlike.objective)), name="twin"),
mxAlgebraObjective("twin"))
The mxAlgebra complains that it can't find DataMZ.pMZ. Is there some way I can get a hold of it? I've tried going down the path of
twinACEFit@submodels$MZcorrect@data but I don't seem to be able to reference the dataframe or matrix's third column. MZlike@DataMZ[,3] doesn't work, nor does twinACEFit@submodels$MZcorrect@data$pMZ) nor twinACEFit@submodels$MZcorrect@data@pMZ)
and I'm running out of ideas.
Joined: 07/30/2009 - 14:03
As far as I know, mxAlgebras do not have access to mxData columns. I am not exactly sure why that is. Perhaps Michael Spiegel can let us know about that.
In the mean time, you can always create an mxMatrix for use in your algebra:
```
> # ----------------------------------
> # Create the data frame.
>
> tFrame <- data.frame(x=c(1:10))
>
> # ----------------------------------
> # Create a test model, matrix, and algebra.
>
> tModel <- mxModel("testModel",
+     mxData(tFrame, type="raw"),
+     mxMatrix(type="Full", nrow=length(tFrame$x),
+         ncol=1, values=tFrame$x, name="x"),
+     mxAlgebra(-2*log(x), name="lumber")
+ )
>
> # ----------------------------------
> # Evaluate the elements of the model.
>
> mxEval(lumber, tModel, compute=TRUE)
           [,1]
 [1,]  0.000000
 [2,] -1.386294
 [3,] -2.197225
 [4,] -2.772589
 [5,] -3.218876
 [6,] -3.583519
 [7,] -3.891820
 [8,] -4.158883
 [9,] -4.394449
[10,] -4.605170
```
Joined: 07/31/2009 - 15:24
Umm, don't we assume that algebras are calculated once per optimization iteration? This restriction would prohibit us from using definition variables, as we don't know which value of the definition value should be selected. A follow up question: should we allow the use of data sets in matrix algebra computations? Something like:
```
alg <- mxAlgebra(model.data %*% B, "alg")
```
Joined: 07/31/2009 - 15:14
Ok I think I got it going. However, it was quite a bit of a fiddle. I've put it into trunk/models/passing/Acemix2.R
There's a slight disconnect in that a definition variable routinely appears as a 1x1 matrix as far as mxAlgebras are concerned, so to get a vector, as in the vector of likelihoods, we have to work with a separate dataframe, not a definition variable extracted from the mxData() command.
It almost seems to me that we should be able to refer to definition variables as either the whole vector of them (as needed for this weighted likelihood example) or to the individual level elements, which change with each evaluation.
Some distinction between def as a vector and def(i) as a scalar for the ith record in the sample would be required.
Joined: 07/31/2009 - 15:24
Right. One strategy for allowing the whole vector of definition variables is to allow the data to appear in the matrix algebras. Well, and we need a mechanism for pulling a column out of a matrix. Plus we need something sane for the case where the data is a data.frame.
Mike N: It almost seems to me that we should be able to refer to definition variables as either the whole vector of them
|
2018-01-19 11:32:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7144568562507629, "perplexity": 2827.62010843306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887973.50/warc/CC-MAIN-20180119105358-20180119125358-00232.warc.gz"}
|
https://eprint.iacr.org/2019/1116
|
### Computational Extractors with Negligible Error in the CRS Model
Ankit Garg, Yael Tauman Kalai, and Dakshita Khurana
##### Abstract
In recent years, there has been exciting progress on building two-source extractors for sources with low min-entropy. Unfortunately, all known explicit constructions of two-source extractors in the low entropy regime suffer from non-negligible error, and building such extractors with negligible error remains an open problem. We investigate this problem in the computational setting, and obtain the following results.

We construct an explicit 2-source extractor, and even an explicit non-malleable extractor, with negligible error, for sources with low min-entropy, under computational assumptions in the Common Random String (CRS) model. More specifically, we assume that a CRS is generated once and for all, and allow the min-entropy sources to depend on the CRS. We obtain our constructions by using the following transformations.

1. Building on the technique of [BHK11], we show a general transformation for converting any computational 2-source extractor (in the CRS model) into a computational non-malleable extractor (in the CRS model), for sources with similar min-entropy. We emphasize that the resulting computational non-malleable extractor is resilient to arbitrarily many tampering attacks (a property that is impossible to achieve information theoretically). This may be of independent interest. This transformation uses cryptography, and relies on the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) assumption.

2. Next, using the blueprint of [BACDLT17], we give a transformation converting our computational non-malleable seeded extractor (in the CRS model) into a computational 2-source extractor for sources with low min-entropy (in the CRS model). Our 2-source extractor works for unbalanced sources: specifically, we require one of the sources to be larger than a specific polynomial in the other. This transformation does not incur any additional assumptions. Our analysis makes a novel use of the leakage lemma of Gentry and Wichs [GW11].
Note: Full version of Eurocrypt 2020 paper titled "Low Error Efficient Computational Extractors in the CRS Model".
Available format(s)
Category
Foundations
Publication info
A major revision of an IACR publication in EUROCRYPT 2020
Keywords
computational extractors, two source, non-malleable
Contact author(s)
dakshita @ illinois edu
History
2020-05-13: last of 2 revisions
See all versions
Short URL
https://ia.cr/2019/1116
CC BY
BibTeX
@misc{cryptoeprint:2019/1116,
author = {Ankit Garg and Yael Tauman Kalai and Dakshita Khurana},
title = {Computational Extractors with Negligible Error in the CRS Model},
howpublished = {Cryptology ePrint Archive, Paper 2019/1116},
year = {2019},
note = {\url{https://eprint.iacr.org/2019/1116}},
url = {https://eprint.iacr.org/2019/1116}
}
|
2022-09-27 17:56:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5656899213790894, "perplexity": 3435.3376637855704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00626.warc.gz"}
|
https://www.mathworks.com/help/deeplearning/ug/improve-neural-network-generalization-and-avoid-overfitting.html
|
Improve Shallow Neural Network Generalization and Avoid Overfitting
Tip
To learn how to set up parameters for a deep learning network, see Set Up Parameters and Train Convolutional Neural Network.
One of the problems that occur during neural network training is called overfitting. The error on the training set is driven to a very small value, but when new data is presented to the network the error is large. The network has memorized the training examples, but it has not learned to generalize to new situations.
The following figure shows the response of a 1-20-1 neural network that has been trained to approximate a noisy sine function. The underlying sine function is shown by the dotted line, the noisy measurements are given by the + symbols, and the neural network response is given by the solid line. Clearly this network has overfitted the data and will not generalize well.
One method for improving network generalization is to use a network that is just large enough to provide an adequate fit. The larger network you use, the more complex the functions the network can create. If you use a small enough network, it will not have enough power to overfit the data. Run the Neural Network Design example `nnd11gn` [HDB96] to investigate how reducing the size of a network can prevent overfitting.
Unfortunately, it is difficult to know beforehand how large a network should be for a specific application. There are two other methods for improving generalization that are implemented in Deep Learning Toolbox™ software: regularization and early stopping. The next sections describe these two techniques and the routines to implement them.
Note that if the number of parameters in the network is much smaller than the total number of points in the training set, then there is little or no chance of overfitting. If you can easily collect more data and increase the size of the training set, then there is no need to worry about the following techniques to prevent overfitting. The rest of this section only applies to those situations in which you want to make the most of a limited supply of data.
Retraining Neural Networks
Typically each backpropagation training session starts with different initial weights and biases, and different divisions of data into training, validation, and test sets. These different conditions can lead to very different solutions for the same problem.
It is a good idea to train several networks to ensure that a network with good generalization is found.
Here a dataset is loaded and divided into two parts: 90% for designing networks and 10% for testing them all.
```
[x, t] = bodyfat_dataset;
Q = size(x, 2);
Q1 = floor(Q * 0.90);
Q2 = Q - Q1;
ind = randperm(Q);
ind1 = ind(1:Q1);
ind2 = ind(Q1 + (1:Q2));
x1 = x(:, ind1);
t1 = t(:, ind1);
x2 = x(:, ind2);
t2 = t(:, ind2);
```
Next, a network architecture is chosen and trained ten times on the first part of the dataset, and each network's mean squared error is computed on the second part of the dataset.
```
net = feedforwardnet(10);
numNN = 10;
NN = cell(1, numNN);
perfs = zeros(1, numNN);
for i = 1:numNN
    fprintf('Training %d/%d\n', i, numNN);
    NN{i} = train(net, x1, t1);
    y2 = NN{i}(x2);
    perfs(i) = mse(net, t2, y2);
end
```
Each network will be trained starting from different initial weights and biases, and with a different division of the first dataset into training, validation, and test sets. Note that the test sets are a good measure of generalization for each respective network, but not for all the networks, because data that is a test set for one network will likely be used for training or validation by other neural networks. This is why the original dataset was divided into two parts, to ensure that a completely independent test set is preserved.
The neural network with the lowest performance value (the smallest mean squared error) is the one that generalized best to the second part of the dataset.
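A minimal sketch of that selection step (reusing the `perfs` and `NN` variables from the loop above):

```
% Keep the network whose error on the held-out second part is smallest.
[bestPerf, bestIdx] = min(perfs);
bestNet = NN{bestIdx};
```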
Multiple Neural Networks
Another simple way to improve generalization, especially when caused by noisy data or a small dataset, is to train multiple neural networks and average their outputs.
For instance, here 10 neural networks are trained on a small problem and their mean squared errors are compared to the mean squared error of their average output.
First, the dataset is loaded and divided into a design and test set.
```
[x, t] = bodyfat_dataset;
Q = size(x, 2);
Q1 = floor(Q * 0.90);
Q2 = Q - Q1;
ind = randperm(Q);
ind1 = ind(1:Q1);
ind2 = ind(Q1 + (1:Q2));
x1 = x(:, ind1);
t1 = t(:, ind1);
x2 = x(:, ind2);
t2 = t(:, ind2);
```
Then, ten neural networks are trained.
```
net = feedforwardnet(10);
numNN = 10;
nets = cell(1, numNN);
for i = 1:numNN
    fprintf('Training %d/%d\n', i, numNN)
    nets{i} = train(net, x1, t1);
end
```
Next, each network is tested on the second dataset, and both the individual performances and the performance of the average output are calculated.
```
perfs = zeros(1, numNN);
y2Total = 0;
for i = 1:numNN
    neti = nets{i};
    y2 = neti(x2);
    perfs(i) = mse(neti, t2, y2);
    y2Total = y2Total + y2;
end
perfs
y2AverageOutput = y2Total / numNN;
perfAveragedOutputs = mse(nets{1}, t2, y2AverageOutput)
```
The mean squared error for the average output is likely to be lower than most (though perhaps not all) of the individual performances. The averaged output is likely to generalize better to additional new data.
For some very difficult problems, a hundred networks can be trained and the average of their outputs taken for any input. This is especially helpful for a small, noisy dataset in conjunction with the Bayesian Regularization training function `trainbr`, described below.
Early Stopping
The default method for improving generalization is called early stopping. This technique is automatically provided for all of the supervised network creation functions, including the backpropagation network creation functions such as `feedforwardnet`.
In this technique the available data is divided into three subsets. The first subset is the training set, which is used for computing the gradient and updating the network weights and biases. The second subset is the validation set. The error on the validation set is monitored during the training process. The validation error normally decreases during the initial phase of training, as does the training set error. However, when the network begins to overfit the data, the error on the validation set typically begins to rise. When the validation error increases for a specified number of iterations (`net.trainParam.max_fail`), the training is stopped, and the weights and biases at the minimum of the validation error are returned.
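As a minimal sketch of this mechanism (the dataset and the patience value of 6 are illustrative assumptions, not part of the text above):

```
[x, t] = bodyfat_dataset;
net = feedforwardnet(10);
net.divideFcn = 'dividerand';   % a validation set is required for early stopping
net.trainParam.max_fail = 6;    % stop after 6 consecutive validation-error increases
[net, tr] = train(net, x, t);
tr.best_epoch                   % epoch at which the validation error was minimal
```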
The test set error is not used during training, but it is used to compare different models. It is also useful to plot the test set error during the training process. If the error in the test set reaches a minimum at a significantly different iteration number than the validation set error, this might indicate a poor division of the data set.
There are four functions provided for dividing data into training, validation and test sets. They are `dividerand` (the default), `divideblock`, `divideint`, and `divideind`. You can access or change the division function for your network with this property:
```net.divideFcn ```
Each of these functions takes parameters that customize its behavior. These values are stored and can be changed with the following network property:
```net.divideParam ```
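For example, a short sketch of customizing `dividerand` (the 70/15/15 split is an assumed illustration, not a recommendation from this guide):

```
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 0.70;   % fraction of samples for training
net.divideParam.valRatio   = 0.15;   % fraction for validation
net.divideParam.testRatio  = 0.15;   % fraction for testing
```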
Index Data Division (divideind)
Create a simple test problem. For the full data set, generate a noisy sine wave with 201 input points ranging from −1 to 1 at steps of 0.01:
```
p = [-1:0.01:1];
t = sin(2*pi*p) + 0.1*randn(size(p));
```
Divide the data by index so that successive samples are assigned to the training set, validation set, and test set successively:
```
trainInd = 1:3:201;
valInd = 2:3:201;
testInd = 3:3:201;
[trainP,valP,testP] = divideind(p,trainInd,valInd,testInd);
[trainT,valT,testT] = divideind(t,trainInd,valInd,testInd);
```
Random Data Division (dividerand)
You can divide the input data randomly so that 60% of the samples are assigned to the training set, 20% to the validation set, and 20% to the test set, as follows:
```[trainP,valP,testP,trainInd,valInd,testInd] = dividerand(p); ```
This function not only divides the input data, but also returns indices so that you can divide the target data accordingly using `divideind`:
```[trainT,valT,testT] = divideind(t,trainInd,valInd,testInd); ```
Block Data Division (divideblock)
You can also divide the input data into contiguous blocks, so that the first 60% of the samples are assigned to the training set, the next 20% to the validation set, and the last 20% to the test set, as follows:
```[trainP,valP,testP,trainInd,valInd,testInd] = divideblock(p); ```
Divide the target data accordingly using `divideind`:
```[trainT,valT,testT] = divideind(t,trainInd,valInd,testInd); ```
Interleaved Data Division (divideint)
Another way to divide the input data is to cycle samples between the training set, validation set, and test set according to percentages. You can interleave 60% of the samples to the training set, 20% to the validation set and 20% to the test set as follows:
```[trainP,valP,testP,trainInd,valInd,testInd] = divideint(p); ```
Divide the target data accordingly using `divideind`.
```[trainT,valT,testT] = divideind(t,trainInd,valInd,testInd); ```
Regularization
Another method for improving generalization is called regularization. This involves modifying the performance function, which is normally chosen to be the sum of squares of the network errors on the training set. The next section explains how the performance function can be modified, and the following section describes a routine that automatically sets the optimal performance function to achieve the best generalization.
Modified Performance Function
The typical performance function used for training feedforward neural networks is the mean sum of squares of the network errors.
$$F=\mathrm{mse}=\frac{1}{N}\sum_{i=1}^{N} e_{i}^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(t_{i}-a_{i}\right)^{2}$$
It is possible to improve generalization if you modify the performance function by adding a term that consists of the mean of the sum of squares of the network weights and biases, $\mathrm{msereg}=\gamma\,\mathrm{msw}+\left(1-\gamma\right)\mathrm{mse}$, where $\gamma$ is the performance ratio, and
$$\mathrm{msw}=\frac{1}{n}\sum_{j=1}^{n} w_{j}^{2}$$
Using this performance function causes the network to have smaller weights and biases, and this forces the network response to be smoother and less likely to overfit.
The following code reinitializes the previous network and retrains it using the BFGS algorithm with the regularized performance function. Here the performance ratio is set to 0.5, which gives equal weight to the mean square errors and the mean square weights.
```
[x,t] = simplefit_dataset;
net = feedforwardnet(10,'trainbfg');
net.divideFcn = '';
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
net.performParam.regularization = 0.5;
net = train(net,x,t);
```
The problem with regularization is that it is difficult to determine the optimum value for the performance ratio parameter. If you make this parameter too large, you might get overfitting. If the ratio is too small, the network does not adequately fit the training data. The next section describes a routine that automatically sets the regularization parameters.
Automated Regularization (trainbr)
It is desirable to determine the optimal regularization parameters in an automated fashion. One approach to this process is the Bayesian framework of David MacKay [MacK92]. In this framework, the weights and biases of the network are assumed to be random variables with specified distributions. The regularization parameters are related to the unknown variances associated with these distributions. You can then estimate these parameters using statistical techniques.
A detailed discussion of Bayesian regularization is beyond the scope of this user guide. A detailed discussion of the use of Bayesian regularization, in combination with Levenberg-Marquardt training, can be found in [FoHa97].
Bayesian regularization has been implemented in the function `trainbr`. The following code shows how you can train a 1-20-1 network using this function to approximate the noisy sine wave shown in the figure in Improve Shallow Neural Network Generalization and Avoid Overfitting. (Data division is cancelled by setting `net.divideFcn` to an empty string, so that the effects of `trainbr` are isolated from early stopping.)
```
x = -1:0.05:1;
t = sin(2*pi*x) + 0.1*randn(size(x));
net = feedforwardnet(20,'trainbr');
net.divideFcn = '';   % cancel data division, as noted above
net = train(net,x,t);
```
One feature of this algorithm is that it provides a measure of how many network parameters (weights and biases) are being effectively used by the network. In this case, the final trained network uses approximately 12 parameters (indicated by `#Par` in the printout) out of the 61 total weights and biases in the 1-20-1 network. This effective number of parameters should remain approximately the same, no matter how large the number of parameters in the network becomes. (This assumes that the network has been trained for a sufficient number of iterations to ensure convergence.)
The `trainbr` algorithm generally works best when the network inputs and targets are scaled so that they fall approximately in the range [−1,1]. That is the case for the test problem here. If your inputs and targets do not fall in this range, you can use the function `mapminmax` or `mapstd` to perform the scaling, as described in Choose Neural Network Input-Output Processing Functions. Networks created with `feedforwardnet` include `mapminmax` as an input and output processing function by default.
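A brief sketch of doing that scaling explicitly with `mapminmax` (assuming the `x` and `t` variables from the `trainbr` example above):

```
[xn, xSettings] = mapminmax(x);            % scale inputs to approximately [-1, 1]
[tn, tSettings] = mapminmax(t);            % scale targets likewise
net = feedforwardnet(20, 'trainbr');
net = train(net, xn, tn);
yn = net(xn);
y = mapminmax('reverse', yn, tSettings);   % map outputs back to the original units
```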
The following figure shows the response of the trained network. In contrast to the previous figure, in which a 1-20-1 network overfits the data, here you see that the network response is very close to the underlying sine function (dotted line), and, therefore, the network will generalize well to new inputs. You could have tried an even larger network, but the network response would never overfit the data. This eliminates the guesswork required in determining the optimum network size.
When using `trainbr`, it is important to let the algorithm run until the effective number of parameters has converged. The training might stop with the message "Maximum MU reached." This is typical, and is a good indication that the algorithm has truly converged. You can also tell that the algorithm has converged if the sum squared error (SSE) and sum squared weights (SSW) are relatively constant over several iterations. When this occurs you might want to click the Stop Training button in the training window.
Summary and Discussion of Early Stopping and Regularization
Early stopping and regularization can ensure network generalization when you apply them properly.
For early stopping, you must be careful not to use an algorithm that converges too rapidly. If you are using a fast algorithm (like `trainlm`), set the training parameters so that the convergence is relatively slow. For example, set `mu` to a relatively large value, such as 1, and set `mu_dec` and `mu_inc` to values close to 1, such as 0.8 and 1.5, respectively. The training functions `trainscg` and `trainbr` usually work well with early stopping.
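A minimal sketch of those `trainlm` settings:

```
net = feedforwardnet(10, 'trainlm');
net.trainParam.mu     = 1;     % relatively large initial mu
net.trainParam.mu_dec = 0.8;   % decrease factor close to 1
net.trainParam.mu_inc = 1.5;   % increase factor close to 1
```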
With early stopping, the choice of the validation set is also important. The validation set should be representative of all points in the training set.
When you use Bayesian regularization, it is important to train the network until it reaches convergence. The sum-squared error, the sum-squared weights, and the effective number of parameters should reach constant values when the network has converged.
With both early stopping and regularization, it is a good idea to train the network starting from several different initial conditions. It is possible for either method to fail in certain circumstances. By testing several different initial conditions, you can verify robust network performance.
When the data set is small and you are training function approximation networks, Bayesian regularization provides better generalization performance than early stopping. This is because Bayesian regularization does not require that a validation data set be separate from the training data set; it uses all the data.
To provide some insight into the performance of the algorithms, both early stopping and Bayesian regularization were tested on several benchmark data sets, which are listed in the following table.
| Data Set Title | Number of Points | Network | Description |
| --- | --- | --- | --- |
| BALL | 67 | 2-10-1 | Dual-sensor calibration for a ball position measurement |
| SINE (5% N) | 41 | 1-15-1 | Single-cycle sine wave with Gaussian noise at 5% level |
| SINE (2% N) | 41 | 1-15-1 | Single-cycle sine wave with Gaussian noise at 2% level |
| ENGINE (ALL) | 1199 | 2-30-2 | Engine sensor—full data set |
| ENGINE (1/4) | 300 | 2-30-2 | Engine sensor—1/4 of data set |
| CHOLEST (ALL) | 264 | 5-15-3 | Cholesterol measurement—full data set |
| CHOLEST (1/2) | 132 | 5-15-3 | Cholesterol measurement—1/2 data set |
These data sets are of various sizes, with different numbers of inputs and targets. With two of the data sets the networks were trained once using all the data and then retrained using only a fraction of the data. This illustrates how the advantage of Bayesian regularization becomes more noticeable when the data sets are smaller. All the data sets are obtained from physical systems except for the SINE data sets. These two were artificially created by adding various levels of noise to a single cycle of a sine wave. The performance of the algorithms on these two data sets illustrates the effect of noise.
The following table summarizes the performance of early stopping (ES) and Bayesian regularization (BR) on the seven test sets. (The `trainscg` algorithm was used for the early stopping tests. Other algorithms provide similar performance.)
Mean Squared Test Set Error
| Method | Ball | Engine (All) | Engine (1/4) | Choles (All) | Choles (1/2) | Sine (5% N) | Sine (2% N) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ES | 1.2e-1 | 1.3e-2 | 1.9e-2 | 1.2e-1 | 1.4e-1 | 1.7e-1 | 1.3e-1 |
| BR | 1.3e-3 | 2.6e-3 | 4.7e-3 | 1.2e-1 | 9.3e-2 | 3.0e-2 | 6.3e-3 |
| ES/BR | 92 | 5 | 4 | 1 | 1.5 | 5.7 | 21 |
You can see that Bayesian regularization performs better than early stopping in most cases. The performance improvement is most noticeable when the data set is small, or if there is little noise in the data set. The BALL data set, for example, was obtained from sensors that had very little noise.
Although the generalization performance of Bayesian regularization is often better than early stopping, this is not always the case. In addition, the form of Bayesian regularization implemented in the toolbox does not perform as well on pattern recognition problems as it does on function approximation problems. This is because the approximation to the Hessian that is used in the Levenberg-Marquardt algorithm is not as accurate when the network output is saturated, as would be the case in pattern recognition problems. Another disadvantage of the Bayesian regularization method is that it generally takes longer to converge than early stopping.
Posttraining Analysis (regression)
The performance of a trained network can be measured to some extent by the errors on the training, validation, and test sets, but it is often useful to investigate the network response in more detail. One option is to perform a regression analysis between the network response and the corresponding targets. The routine `regression` is designed to perform this analysis.
The following commands illustrate how to perform a regression analysis on a trained network.
```
x = [-1:.05:1];
t = sin(2*pi*x) + 0.1*randn(size(x));
net = feedforwardnet(10);
net = train(net,x,t);
y = net(x);
[r,m,b] = regression(t,y)
```
```
r = 0.9935
m = 0.9874
b = -0.0067
```
The network output and the corresponding targets are passed to `regression`. It returns three parameters. The parameters `m` and `b` correspond to the slope and the y-intercept of the best linear regression relating targets to network outputs. If there were a perfect fit (outputs exactly equal to targets), the slope would be 1 and the y-intercept would be 0. In this example, you can see that the numbers are very close. The parameter `r` is the correlation coefficient (R-value) between the outputs and targets. It is a measure of how well the variation in the output is explained by the targets. If this number is equal to 1, then there is perfect correlation between targets and outputs. In the example, the number is very close to 1, which indicates a good fit.
The following figure illustrates the graphical output provided by `regression`. The network outputs are plotted versus the targets as open circles. The best linear fit is indicated by a dashed line. The perfect fit (output equal to targets) is indicated by the solid line. In this example, it is difficult to distinguish the best linear fit line from the perfect fit line because the fit is so good.
https://numericalshadow.org/numerical-range/generalizations/joint-numerical-range/
# Joint numerical range
### Definition
Consider $k$ Hermitian matrices $A_1,\ldots,A_k$ of dimension $d \times d$. Their joint numerical range (JNR) $L(A_1,\ldots,A_k)$ is defined as $L(A_1,\ldots,A_k) = \left\{ \left( \operatorname{Tr} \rho A_1,\ldots, \operatorname{Tr} \rho A_k \right): \rho \in \Omega_d \right\},$ where $\Omega_d$ denotes the set of density matrices of size $d$. Because all mixed states are included, the joint numerical range is a convex body in $k$-dimensional space. More information about the convexity of the joint numerical range can be found in [1].
## Kippenhahn's theorem for joint numerical range
An algebraic curve has been associated with the numerical range and has been studied from the 1930s on [2]. Let $\mathbb{1}$ denote the $d\times d$ identity matrix, let $p(x)=\det\left( x_0 \mathbb{1} + x_1 A_1 + \cdots + x_n A_n \right)$, and consider the complex projective hypersurface $\mathcal{V}(p)=\{x \in \mathbb{P}^n : p(x) = 0 \}$. It was shown that the eigenvalues of $A_1+ \mathrm{i} A_2$ are the foci of the affine curve of real points $T=\{ y_1+\mathrm{i}\,y_2 : y_1,y_2 \in \mathbb{R},\ (1:y_1:y_2) \in \mathcal{V}(p)^{*}\}$. Kippenhahn recognized the meaning of the convex hull of $T$.
### Theorem
The numerical range of $A_1+\mathrm{i} A_2$ is the convex hull of $T$; in other words, $W(A_1+\mathrm{i} A_2)=\operatorname{conv}(T)$.
Let us denote by $T_i$ the semi-algebraic set $T_i = \{ (y_1,\ldots,y_n) \in \mathbb{R}^n : (1:y_1:\cdots:y_n) \in (X_i^{*})_{\mathrm{reg}} \}$, $i=1,\ldots,r$, where $(X_i^{*})_{\mathrm{reg}}$ denotes the set of the regular points of the dual variety $X_i^{*}$.
### Theorem
The joint numerical range $W$ is the convex hull of the Euclidean closure of $T_1\cup\cdots\cup T_r$; in other words, $W = \operatorname{conv}\left(\overline{T_1\cup\cdots\cup T_r}\right)$.
### Classification of JNRs
##### Two qutrit matrices
This classification comes from [3]. Consider two Hermitian matrices $A_1$ and $A_2$ of size $3 \times 3$. Then there are four possible shapes of the JNR:
• An oval: an object without any flat parts, whose boundary is a sextic curve.
• An object with one flat part: the convex hull of a quartic curve.
• The convex hull of an ellipse and an outside point, which has two flat parts on the boundary.
• A triangle (when $A_1$ and $A_2$ commute). This can be further degenerated.
##### Three qutrit matrices
This classification is taken from [4] (see there for details). Such JNRs must obey the following rules:
1. In this case we may restrict ourselves to pure states only.
2. Any flat part in the boundary is the image of the Bloch sphere of a two-dimensional subspace of the space of $3 \times 3$ Hermitian matrices.
3. Two two-dimensional subspaces must share a common point; hence all flat parts are mutually connected.
4. The convex geometry of a three-dimensional Euclidean space supports up to four mutually intersecting ellipses.
5. If three ellipses are present in the boundary, the geometry does not allow for the existence of any additional segment.
6. If two segments are present in the boundary, there exists an infinite number of other segments.
All configurations permitted by these rules are realized. Let us denote by $e$ the number of ellipses in the boundary and by $s$ the number of segments. There exist objects with [5]:
• no flat parts in boundary at all $e=0$, $s=0$,
• one segment in the boundary $e=0$, $s=1$,
• one ellipse in the boundary $e=1$, $s=0$,
• one ellipse and a segment $e=1$, $s=1$,
• two ellipses in the boundary $e=2$, $s=0$,
• two ellipses and a segment $e=2$, $s=1$,
• three ellipses $e=3$, $s=0$,
• four ellipses $e=4$, $s=0$.
Additionally, in the qutrit case, if there exist corner points of the JNR, the following configurations are possible:
• the JNR is the convex hull of an ellipsoid and a point outside the ellipsoid, $e=0$, $s=\infty$;
• the JNR is the convex hull of an ellipse and a point outside the affine hull of the ellipse, $e=1$, $s=\infty$.
### Application
Example applications of the numerical range can be found in [6], [7], [2].
# References
1. [1] C.-K. Li, Y.-T. Poon, and Y.-S. Wang, “Joint numerical ranges and commutativity of matrices,” arXiv preprint arXiv:2002.02768, 2020, [Online]. Available at: https://arxiv.org/abs/2002.02768.
2. [2]D. Plaumann, R. Sinn, and S. Weis, “Kippenhahn’s Theorem for joint numerical ranges and quantum states,” arXiv preprint arXiv:1907.04768, vol. 0, 2019, [Online]. Available at: https://arxiv.org/abs/1907.04768.
3. [3]D. S. Keeler, L. Rodman, and I. M. Spitkovsky, “The numerical range of 3x3 matrices,” Linear Algebra and its Applications, vol. 252, no. 1-3, pp. 115–139, 1997, [Online]. Available at: https://dx.doi.org/10.1016/0024-3795(95)00674-5.
4. [4]K. Szymański, S. Weis, and K. Życzkowski, “Classification of joint numerical ranges of three hermitian matrices of size three,” Linear Algebra and its Applications, vol. 0, 2017, [Online]. Available at: https://www.sciencedirect.com/science/article/pii/S0024379517306456.
5. [5]J. Xie et al., “Observing geometry of quantum states in a three-level system,” arXiv preprint arXiv:1909.05463, vol. 0, 2019, [Online]. Available at: https://arxiv.org/abs/1909.05463.
6. [6]K. J. Szymański and K. Życzkowski, “Geometric and algebraic origins of additive uncertainty relations,” Journal of Physics A: Mathematical and Theoretical, vol. 0, 2019, [Online]. Available at: https://iopscience.iop.org/article/10.1088/1751-8121/ab4543/meta.
7. [7]J. Czartowski, K. Szymański, B. Gardas, Y. V. Fyodorov, and K. Życzkowski, “Separability gap and large-deviation entanglement criterion,” Physical Review A, vol. 100, no. 4, p. 042326, 2019, [Online]. Available at: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.042326.
http://www.kjm-math.org/article_40640.html
# On Some Fractional Integral Inequalities of Hermite-Hadamard Type for $r$-Preinvex Functions
Document Type: Original Article
Authors
Department of Mathematics, Faculty of Science and Arts, University of Kahramanmaraş Sütçü İmam, 46100, Kahramanmaraş, Turkey.
Abstract
In this paper, we prove Hermite-Hadamard type inequalities for $r$-preinvex
functions via fractional integrals. The results presented here would provide
extensions of those given in earlier works.
https://codereview.stackexchange.com/questions/221029/babel-code-in-react-component-to-display-tasks
# Babel code in React component to display tasks
I'm trying to think of ways to clean up this component in my app so it looks more readable to other devs. This is the best I came up with so far. It's not that bad, but I think it could be a lot better. What do you guys think? How would you clean this code up to make it more readable?
const DisplayTasks = ({ tasksArray, removeTask, crossOutTask }) => {
  return (
    <div id="orderedList">
      <ol>
        {tasksArray.map((task, index) => (
          <li onClick={ () => crossOutTask(index) } key={index}>
            {task}
            <button id="removeButton" onClick={ event => removeTask(event, index) }>
              Remove
            </button>
          </li>
        ))}
      </ol>
    </div>
  );
};
• you could remove the wrapping div with the id orderedList... The ol tag serves the same purpose, plus it's not a very specific name. I think it probably makes more sense to add a class to the ol with a value like `tasks`.
https://physics.stackexchange.com/questions/432071/spinning-basketball-on-water-surface-preferential-axis
# Spinning basketball on water surface--preferential axis
This question came to me while I was in the pool last month. I took a basketball and I was making it spin on the surface of the water in a few different ways. When the ball rested on the surface of the water, a majority of the ball was above the surface of the water, indicating that the density of the ball is less than half of the density of water.
First I would spin the basketball so that the angular momentum vector was vertical, perpendicular to the water surface. Then I spun it so that the angular momentum vector was horizontal, parallel to the surface of the water. In each case, the spinning would slow down, although in the first case where the angular momentum vector was vertical the ball took longer to slow down.
Finally, I spun the basketball at an angle, so that the angular momentum vector was neither parallel nor perpendicular to the water surface. What I saw was consistent with what I saw earlier. The horizontal component of the angular momentum vector decayed more quickly than the vertical component, so that after maybe 10 seconds the rotation was essentially with a vertical axis.
So the question is why? Why does the horizontal component decay faster than the vertical component? I tried to think of it in terms of friction and normal forces, but there didn't seem to be a difference between the 2 cases.
http://nonconditional.com/2013/03/partially-fixing-numerical-underflow-for-mixture-models/
Partially fixing numerical underflow for mixture models
In a mixture model the probability of an event $$x$$ is written
$$P(x=X) =\sum_{i}\pi_{i}P_{i}(x=X)$$,
where $$\pi_{i}$$ is the probability of the point belonging to mixture $$i$$ and $$P_{i}(x=X)$$ is the probability of the event $$x=X$$ under the $$i$$-th mixture. The problem is usually that the $$P_i$$ are small, which makes underflow happen when you multiply them by the weights $$\pi_i$$ or sum up the values. Underflow is simply your computer not having enough precision to handle all those tiny numbers, so rounding errors happen. For example, $$1+\epsilon = 1$$ if $$\epsilon$$ is really small.
To fix underflow one usually operates in the log domain i.e.
$$\log P(x=X) = \log [ \sum_{i}\pi_{i}P_{i}(x=X) ]$$.
The problem with this is that the log cannot decompose sums, so we still get underflow. To fix this (somewhat) we can write:
$$\log P(x) = \log [ \sum_{i}\pi_{i}P_{i}(x=X) ] = \log [ \sum_{i} \exp\big\{ \log\pi_{i} + \log P_{i}(x=X) \big\} ]$$
Now, by finding the maximum of the terms $$\log\pi_{i} + \log P_{i}(x=X)$$ and subtracting it, we can move most of the probability mass out of the sum. This is easiest to see in the following calculation for a mixture model with $$2$$ mixture components.
$$\log p = \log [p_1 + p_2] = \log [ \exp(\log p_1) + \exp(\log p_2) ]$$
$$pMax = \max [\log p_1 ,\log p_2 ]$$
$$\log p = \log[ \exp(pMax) \cdot ( \exp\big\{\log p_1 - pMax\big\} + \exp\big\{\log p_2 - pMax\big\}) ]$$
$$\log p = pMax + \log [ \exp(\log p_1 - pMax) + \exp( \log p_2 - pMax) ]$$
Now if we for example assume $$\log p_1 > \log p_2$$ then
$$\log p = pMax + \log [ \exp(0) + \exp\big\{\log p_2 - pMax\big\} ] = pMax + \log [ 1 + \exp\big\{\log p_2 - pMax\big\} ]$$
This means that we have moved most of the probability mass out of the sum, and we have made the exponent in the exponential closer to zero and thus less extreme. This of course doesn't mean numerical issues are no longer a concern, but I believe we are more in the clear than before.
Below is an implementation in Matlab. Instructions in the file.
logSumExp
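The linked file is not reproduced on this page, so here is a minimal MATLAB sketch of the trick derived above (a reconstruction, not necessarily the author's original code):

```
function s = logSumExp(logP)
% logSumExp  Compute log(sum(exp(logP))) while avoiding underflow.
%   logP is a vector of log-terms, e.g. log(pi_i) + log(P_i(x = X)).
pMax = max(logP);                       % pull the largest term out of the sum
s = pMax + log(sum(exp(logP - pMax))); % remaining exponents are all <= 0
end
```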
https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.lapacklin.zsytri.html
# naginterfaces.library.lapacklin.zsytri¶
naginterfaces.library.lapacklin.zsytri(uplo, a, ipiv)
zsytri computes the inverse of a complex symmetric matrix $A$, where $A$ has been factorized by zsytrf().
For full information please refer to the NAG Library document for f07nw
https://www.nag.com/numeric/nl/nagdoc_27.3/flhtml/f07/f07nwf.html
Parameters
uplo : str, length 1
Specifies how $A$ has been factorized.
'U': $A = PUDU^{\mathrm{T}}P^{\mathrm{T}}$, where $U$ is upper triangular.
'L': $A = PLDL^{\mathrm{T}}P^{\mathrm{T}}$, where $L$ is lower triangular.
a : complex, array-like, shape $(n, n)$
Details of the factorization of $A$, as returned by zsytrf().
ipiv : int, array-like, shape $(n)$
Details of the interchanges and the block structure of $D$, as returned by zsytrf().
Returns
a : complex, ndarray, shape $(n, n)$
The factorization is overwritten by the symmetric matrix $A^{-1}$.
If uplo = 'U', the upper triangle of $A^{-1}$ is stored in the upper triangular part of the array.
If uplo = 'L', the lower triangle of $A^{-1}$ is stored in the lower triangular part of the array.
Raises
NagValueError
(errno) On entry, error in parameter uplo. Constraint: uplo = 'U' or 'L'.
(errno) On entry, error in parameter n. Constraint: $n \geq 0$.
Warns
NagAlgorithmicWarning
(errno) Element $\langle value \rangle$ of the diagonal is exactly zero. $D$ is singular and the inverse of $A$ cannot be computed.
Notes
zsytri is used to compute the inverse of a complex symmetric matrix $A$. The function must be preceded by a call to zsytrf(), which computes the Bunch–Kaufman factorization of $A$.
If uplo = 'U', $A = PUDU^{\mathrm{T}}P^{\mathrm{T}}$ and $A^{-1}$ is computed by solving $U D U^{\mathrm{T}} P^{\mathrm{T}} X P = I$ for $X$.
If uplo = 'L', $A = PLDL^{\mathrm{T}}P^{\mathrm{T}}$ and $A^{-1}$ is computed by solving $L D L^{\mathrm{T}} P^{\mathrm{T}} X P = I$ for $X$.
References
Du Croz, J J and Higham, N J, 1992, Stability of methods for matrix inversion, IMA J. Numer. Anal. (12), 1–19
http://mathoverflow.net/questions/29978/do-there-exist-nonconstant-functions-such-that/29985
Do there exist nonconstant functions such that…
Do there exist nonconstant real valued functions $f$ and $g$ such that the expression: $$f(x) -v/g(x)$$ is maximized at $x = v$ for all positive real $v$?
Functions on $\mathbb{R}$ or on the positive reals? In the latter case, $- \log x$ and $x$. – Homology Jun 29 '10 at 23:26
Function on $\mathbb R$. Positivity is for $v$ only. – Wadim Zudilin Jun 29 '10 at 23:28
Motivation for the question? – Yemon Choi Jun 29 '10 at 23:35
@Yemon: might be a homework, might be curiosity. I don't see any deep context here but it's tricky. – Wadim Zudilin Jun 29 '10 at 23:39
Do we need $g$ to be non-vanishing? And what is this for? – Homology Jun 29 '10 at 23:47
Take $f(x)=(x+1)e^{-x}$ and $g(x)=e^x$, then $f(x)-v/g(x)=(x+1-v)e^{-x}$ and the derivative with respect to $x$ is $(v-x)e^{-x}$.
Doesn't this work for any real $v$? (So, you do much more!) – Wadim Zudilin Jun 29 '10 at 23:58
May I suggest you to take a more realistic name for MO? (I am very sorry for making a joke with your present one in a comment above but you don't give me an option to cite your answer correctly.) – Wadim Zudilin Jun 30 '10 at 0:03
By the arithmetic-geometric mean inequality, when $v$ is positive $$-|x|^{\frac{1}{2}}-\frac{v}{|x|^{\frac{1}{2}}}$$ is maximized at $x=v$ and $x=-v$.
Nice! 123456789 (I needed some extra characters) – Wadim Zudilin Jun 29 '10 at 23:57
Nice! – Steven Gubkin Jun 30 '10 at 1:47
It is not defined at $x=0$. I think the question was about functions on the whole real line. – T.. Jun 30 '10 at 23:55
Let $f$ be arbitrary (but non-constant, real-valued, and differentiable), let $h$ be any antiderivative of $f'(x)/x$, and let $g=1/h$; then $f(x)-v/g(x) = f(x)-v\,h(x)$ has derivative $f'(x)\,(1 - v/x)$, so $f-v/g$ has a critical point at $x=v$. Now you can look for conditions under which that critical point is a maximum.
The following calculation suggests that a nice probability interpretation may exist for any solution one can construct.
$u(x) = f(x) - v/g(x)$ and all its $x$ derivatives are linear functions of $v$ with coefficients that are functions of $x$.
Thus, to have an extremum at $x=v$ the first derivative $u'$ must be of the form $(v-x)M(x)$. Integrating the $v$-degree 0 and 1 parts of this equation produces $f$ and (the reciprocal of) $g$. Algebraically this will be equivalent to Gerry's solution.
The interesting points are that:
1. To have a maximum we need $M(x) \geq 0$, so $M$ can be interpreted as a density.
2. The total mass $\int M$ has to be finite in order for $g(x)$ to exist on the whole real line. This is so that we can choose a constant of integration larger (in absolute value) than the total mass, when computing $1/g = C + \int M$. Thus, $M$ is a sort of probability measure, and literally is one when $\int M = 1$.
3. $f$ is calculated as integral of $xM$, ie., an expected value of $x$.
4. $1/g$ is calculated using the integral of $M$, ie., a probability.
So there might be a simple probability inequality lurking behind most of the solutions.
https://ask.sagemath.org/questions/8097/revisions/
Revision history

Parse and Evaluate MathML

Is there a possibility to parse content-MathML in Sage?
I would like to evaluate a formula that is given in a content-mathml document. How would I do this?
https://ask.sagemath.org/question/66925/solving-2-non-linear-functions-max-degree-of-3-fails/
# solving 2 non-linear functions (max degree of 3) fails
I'm having trouble solving the following:
It's a simple system that uses the functions and their respective derivatives to find values of x and k. Mathematica has no problem with this. My machine and the cocalc machine become compute bound. My machine runs out of memory. How do I redo this to get Sage to solve it? Should I try a numeric solve?
A big THANK YOU, Emmanuel! I got the solution I needed for the 3rd part of this piecewise function (it agrees with Mathematica). I need the k and x values that are real:

k = 50.4900997166127
x = 0.881173769148990
Here is the code. It took a little bit of digging to get the substitutions. It would also be nice to know if there is an option in solve to just return the real values :)
fp2 = sqrt(r^2-(x-c)^2)
dfp2 = derivative(fp2, x)
fp3 = -k*(x-1)^3
dfp3 = derivative(fp3, x)
sys = [fp2 == fp3,dfp2 == dfp3]
SS = solve(sys, (x, k), algorithm="sympy")
print(SS[0][k].subs(c=0.7,r=0.2))
print(SS[0][x].subs(c=0.7,r=0.2))
print(SS[1][k].subs(c=0.7,r=0.2))
print(SS[1][x].subs(c=0.7,r=0.2))
*****
50.4900997166127
0.881173769148990
*****
1.04977199683675*I
0.368826230851010
Best regards! Pat Browne, Rimouski, Québec
Edited for legibility.
You can select the all-real solutions directly from the symbolic solution :
sage: [s for s in [dict(zip(v.keys(), [u.subs({r:1/5, c:7/10}) for u in v.values()])) for v in SS] if all([t in AA for t in s.values()])]
[{k: -128*sqrt(-1/100*(5*sqrt(29/5) - 7)^2 + 16/25)/(sqrt(29/5) - 7)^3,
x: -1/8*sqrt(29/5) + 7/8}]
or, with "float" approximations :
sage: [s for s in [dict(zip(v.keys(), [CDF(u.subs({r:1/5, c:7/10})) for u in v.values()])) for v in SS]
if all([t in RDF for t in s.values()])]
[{k: 0.8212757749910893, x: 0.5739601355301926}]
Note : I tend to use exact values (symbolic or rational) rather than inexact values ("float" or RR/RDF) in explicit solutions (thus avoiding the problems of comparing floats values).
( 2023-03-15 07:36:44 +0200 )
Note that these two special solutions make it easy to check that fp3 != dfp3:
sage: [(fp3-dfp3).subs(s).subs({r:1/5, c:7/10}).is_zero() for s in SS]
[False, False]
sage: [(fp3-dfp3).subs(s).subs({r:1/5, c:7/10}).n() for s in SS]
[0.510718865288295, 1.42320478260899e-18 + 0.0232426979533997*I]
( 2023-03-15 08:39:20 +0200 )
Thanks again, there is always a way... one has to get used to objects like AA, RR, RDF :)
( 2023-03-15 23:24:03 +0200 )
Sage's default solver (i.e. Maxima's) indeed fails to solve your system. But Sympy's does. After running:
var('k, c, r')
# Keep these constant symbolic, for clarity's and generality's sake...
# r = 0.2
# c = 0.7
# r = 1/5
# c = 7/10
fp1 = k*x^3
dfp1 = fp1.derivative(x)
fp2 = sqrt(r^2-(x-c)^2)
dfp2 = derivative(fp2, x)
fp3 = -k*(x-1)^3
# Unused for now...
dfp3 = derivative(fp3, x)
sys = [fp1 == fp2, dfp1 == dfp2]
we get two solutions :
sage: SS = solve(sys, (x, k), algorithm="sympy") ; SS
[{k: 16*sqrt(-(c - sqrt(c^2 + 24*r^2))^2 + 16*r^2)/(5*c - sqrt(c^2 + 24*r^2))^3,
x: 5/4*c - 1/4*sqrt(c^2 + 24*r^2)},
{k: 16*sqrt(-(c + sqrt(c^2 + 24*r^2))^2 + 16*r^2)/(5*c + sqrt(c^2 + 24*r^2))^3,
x: 5/4*c + 1/4*sqrt(c^2 + 24*r^2)}]
Checking these solutions isn't absolutely trivial, Sage having simplification problems with radical expressions :
sage: [all([bool(u.subs(s)) for u in sys]) for s in SS]
[False, False]
But we can check that the difference of left- and right-hand sides of each equation is indeed null for each solution :
sage: [all([(u.lhs()-u.rhs()).subs(s).canonicalize_radical().is_zero() for u in sys]) for s in SS]
[True, True]
Solving and checking in Mathematica is left to the reader as an exercise... Also left to the reader is to check that none of these solutions satisfies fp3 == dfp3...
HTH,
https://brainiak.in/986/dimensions-1-u0e0-where-symbols-have-their-usual-meaning
# Dimensions of $1/(\mu_0 \varepsilon_0)$, where symbols have their usual meaning, are
Dimensions of $${1 \over {{\mu _0}{\varepsilon _0}}}$$ where symbols have their usual meaning, are
1. $[\mathrm{L}^{-1}\mathrm{T}]$
2. $[\mathrm{L}^{-2}\mathrm{T}^{2}]$
3. $[\mathrm{L}^{2}\mathrm{T}^{-2}]$
4. $[\mathrm{L}\,\mathrm{T}^{-1}]$
Correct option is C: $[\mathrm{L}^{2}\mathrm{T}^{-2}]$

Explanation:
The velocity of light in vacuum is
c = $${1 \over {\sqrt {{\mu _0}{\varepsilon _0}} }}$$
$$\therefore \left[{1 \over {{\mu _0}{\varepsilon _0}}}\right] = [c^2] = [\mathrm{L}^{2}\mathrm{T}^{-2}]$$
$$\therefore$$ The dimension of $${1 \over {{\mu _0}{\varepsilon _0}}}$$ is $[\mathrm{L}^{2}\mathrm{T}^{-2}]$.
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-7-exponential-functions-7-1-derivative-of-f-x-bx-and-the-number-e-exercises-page-328/90
## Calculus (3rd Edition)
$$\frac{\pi}{2}\left(e^{2}-1\right)$$
The volume of revolution about the $x-$ axis is given by \begin{aligned} V&=\pi \int_{a}^{b}f^2(x) d x \\ &=\pi \int_{0}^{1} e^{2 x} d x \\ &=\left.\pi \frac{e^{2 x}}{2}\right|_{0} ^{1} \\ &=\frac{\pi}{2}\left(e^{2}-1\right) \end{aligned}
https://quant.stackexchange.com/questions/38996/bachelier-model-vs-black-scholes-in-call-option-pricing-why-are-they-so-differe
# Bachelier model VS Black Scholes in call option pricing. Why are they so different?
I have been working with Bachelier model for some days but when I experimented with the model I saw some unwanted result with huge differences from the Black Scholes model. Bachelier model is described in detail here: Bachelier model call option pricing formula
Here is an numerical experiment: No interest rate; $\sigma=0.15$ for both models.
At time 0 I want to price a ATM European Call with $T=1$ and strike $K=55$ when $S_0=55$
The BS result: $C=3.29$
The Bachelier result: $C=0.06$
Why is there such a huge gap? I have tried to make sense of it by simply looking at the models, but it is complicated with the CDFs and PDFs. The Bachelier model is normally distributed and the BS model is log-normally distributed. Can we use that to explain the big difference, by claiming that the two processes are very different even with the same volatility constant $\sigma$?
The no-arbitrage pricing function for the Bachelier model with zero rate can be looked up in many places: $$C_t = (S_t-K)\,\Phi\!\left(\frac{S_t-K}{u}\right)+u\,\phi\!\left(\frac{S_t-K}{u}\right)$$ where $u = \sigma \sqrt{T-t}$.
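For the at-the-money case in the question, $S_t = K$, so both arguments vanish and the price reduces to $$C_t = u\,\phi(0) = \frac{\sigma\sqrt{T-t}}{\sqrt{2\pi}},$$ which with $\sigma = 0.15$ and $T-t = 1$ gives $C \approx 0.06$, the Bachelier number quoted above.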
• You used "σ=0.15 for both models". I am not sure that is right, the two sigmas are defined differently. $\sigma$ for BS is in percentage terms, $\sigma$ for Bachelier is in dollar (or french franc? ;) ) terms. Mar 26, 2018 at 2:13
• To elaborate on @Alexc comment, if $\sigma=0.15$ in the BS model, then the equivalent vol in the Bachelier is 0.15*55 = 8.25.
– dm63
Mar 26, 2018 at 3:54
• Why mulitply with 55?
– Lisa
Mar 26, 2018 at 10:49
• The random variable which is added or subtracted to the stock price under Bachelier, is equal to 15% of the original stock price $S_0$, i.e. 0.15*55 Mar 26, 2018 at 13:32
You used $\sigma=0.15$ for both models. That is not right, the two sigma's are defined differently. $\sigma$ for BS is in percentage terms, $\sigma$ for Bachelier is in dollar terms.
So the equivalent volatility in Bachelier is $0.15 \times 55 = 8.25$ where 55 is the original stock price.
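A short MATLAB check (my own sketch, using `normcdf`/`normpdf` from the Statistics and Machine Learning Toolbox, and the zero-rate ATM simplifications of both models) reproduces the numbers in the question:

```
S0 = 55; T = 1; sigBS = 0.15;
bsATM   = S0 * (2*normcdf(sigBS*sqrt(T)/2) - 1);   % Black-Scholes ATM call
bachATM = @(sigN) sigN * sqrt(T) * normpdf(0);     % Bachelier ATM call
[bsATM, bachATM(sigBS), bachATM(sigBS*S0)]
% ~ [3.29, 0.06, 3.29]: the vols only match when sigma_N = sigma_BS * S0.
```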
https://ncatlab.org/nlab/show/Meer+Ashwinkumar
# nLab Meer Ashwinkumar
## Selected writings
On D-brane-realizations of semi-holomorphic 4d Chern-Simons theory:
and specifically with relation to the quantum geometric Langlands correspondence:
On regularized quantization of Green-Schwarz sigma-models for super p-branes, specifically of the super 4-brane in 9d, in generalization of the matrix model of the M2-brane:
category: people
Last revised on March 11, 2021 at 23:59:40. See the history of this page for a list of all contributions to it.
https://mathhelpforum.com/threads/absolute-value-integration.144936/
# Absolute value integration.
#### integral
Such as:
$$\displaystyle \int^{2}_{0} \left | 2x \right |dx$$
How do you solve this?
Thank you in advance
#### Prove It
MHF Helper
Remember that
$$\displaystyle |X| = \begin{cases}\phantom{-}X\textrm{ if }X \geq 0\\-X\textrm{ if }X < 0\end{cases}$$.
So if $$\displaystyle X = 2x$$
$$\displaystyle |2x| = \begin{cases}\phantom{-}2x\textrm{ if }X \geq 0\\-2x\textrm{ if }X < 0\end{cases}$$.
In your case, since $$\displaystyle 0 \leq x \leq 2$$ on the interval of integration, we have $$\displaystyle 2x \geq 0$$, and that means $$\displaystyle |2x| = 2x$$.
#### integral
So the indefinite would be the integral of positive 2x OR the integral of -2x?
#### Prove It
MHF Helper
So the indefinite would be the integral of positive 2x OR the integral of -2x?
No, it's $$\displaystyle +2x$$ only.
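For completeness, the evaluation then goes through without any case split (a one-line check, not in the original thread):

$$\displaystyle \int^{2}_{0} \left | 2x \right |dx = \int^{2}_{0} 2x\,dx = \Big[x^2\Big]^{2}_{0} = 4$$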
https://www.zentralblatt-math.org/matheduc/en/?q=au:Nachtergaele%2C%20B%2A
Linear algebra as an introduction to abstract mathematics. (English)
Hackensack, NJ: World Scientific (ISBN 978-981-4730-35-8/hbk; 978-981-4723-77-0/pbk). x, 198 p. (2016).
The object of this book is “to introduce abstract mathematics and proofs in the setting of linear algebra to students for whom this may be the first step toward advanced mathematics”. Assuming some calculus and only a very basic background in linear algebra (solution of linear equations and manipulation of matrices), it provides a one-semester course in finite-dimensional vector spaces and linear transformations over the real and complex numbers up to the spectral decomposition theorem for normal operators.
The main part of the book consists of eleven chapters, each roughly 10‒12 pages. The chapter headings are as follows: What is linear algebra? Introduction to complex numbers (operations, polar form and geometric interpretation); Fundamental theorem of algebra (proof of the theorem, factoring polynomials over $\mathbb{C}$); Vector spaces (spaces over $\mathbb{R}$ and $\mathbb{C}$, subspaces and direct sums); Span and bases (linear independence, dimension); Linear maps (null space and range, dimension formula, matrices of linear maps with respect to bases); Eigenvalues and eigenvectors (invariant subspaces, existence of eigenvalues over $\mathbb{C}$, diagonal matrices, every complex linear operator has a triangular matrix over a suitable basis); Permutations and determinants (sign of permutation, definition of determinant, cofactor expansion); Inner product spaces (norms, orthogonality, Gram-Schmidt orthogonalization, orthogonal projections and minimization); Change of bases (change of basis matrix for orthogonal bases); Spectral theorem (Hermitian and unitary operators, normal operators, spectral theorem and diagonalization, positive operators, singular value decomposition).
Each of the chapters is followed by “calculational exercises” to test the reader’s understanding of the material and by “proof-writing exercises” to develop the reader’s ability to construct mathematical arguments of increasing difficulty.
Perhaps unusual in a book at this level is a proof of the fundamental theorem of algebra based on the extreme value theorem for real-valued functions of two real variables, and a proof of the existence of eigenvalues for complex linear transformations without the use of determinants, following {\it S. Axler} [Linear algebra done right. 3rd ed. Cham: Springer (2015; Zbl 1304.15001)]. The use of Axler’s approach means that an instructor could omit the chapter on permutations and determinants without affecting the remainder of the book, and indeed, the chapter on determinants is the least well written and may well have been added as an afterthought.
The reviewer has always been challenged to know how to deal with the topic of determinants in a course such as this. On one hand, determinants are ubiquitous in the mathematical literature and the formula for a determinant in terms of permutations is theoretically important, if not always essential, in many arguments. On the other hand, this unintuitive formula is clumsy and does not lead to conceptual proofs of fundamental properties of determinants such as the multiplicative property.
The last third of the book consists of four appendices, of which Appendix A, Supplementary notes on matrices and linear systems, is by far the longest. Appendix A deals at length with the relationship between linear transformations and matrices, Gaussian elimination, elementary matrices, LU-factorization and solution of linear equations. Appendices B and C summarize facts about set theory and algebraic structures.
Appendix D explains some history of mathematical notation, use of logical symbols and who first introduced the symbol $i=\sqrt{-1}$, for example. If you plan to teach a course on linear algebra which emphasizes the theoretical side of the subject, you might well consider this book.
Reviewer: John D. Dixon (Ottawa)
Classification: H65
https://cococubed.com/research_pages/sn1a_collisions.shtml
Cococubed.com Colliding White Dwarfs
Zero Impact Parameter White Dwarf Collisions in FLASH (collisions III, 2012)
In this article we systematically explore zero impact parameter collisions of white dwarfs (WDs) with the Eulerian adaptive grid code FLASH for 0.64 + 0.64 M$_{\odot}$ and 0.81 + 0.81 M$_{\odot}$ mass pairings. Our models span a range of effective linear spatial resolutions from 5.2 $\times$ 10$^7$ to 1.2 $\times$ 10$^7$ cm. However, even the highest resolution models do not quite achieve strict numerical convergence, due to the challenge of properly resolving small-scale burning and energy transport. The lack of strict numerical convergence from these idealized configurations suggests that quantitative predictions of the ejected elemental abundances that are generated by binary WD collision and merger simulations should be viewed with caution.
Nevertheless, the convergence trends do allow some patterns to be discerned. We find that the 0.64 + 0.64 M$_{\odot}$ head-on collision model produces 0.32 M$_{\odot}$ of $^{56}$Ni and 0.38 M$_{\odot}$ of $^{28}$Si, while the 0.81 + 0.81 M$_{\odot}$ head-on collision model produces 0.39 M$_{\odot}$ of $^{56}$Ni and 0.55 M$_{\odot}$ of $^{28}$Si at the highest spatial resolutions. Both mass pairings produce $\simeq$ 0.2 M$_{\odot}$ of unburned $^{12}$C+$^{16}$O. We also find the 0.64 + 0.64 M$_{\odot}$ head-on collision begins carbon burning in the central region of the stalled shock between the two WDs, while the more energetic 0.81 + 0.81 M$_{\odot}$ head-on collision raises the initial post-shock temperature enough to burn the entire stalled shock region to nuclear statistical equilibrium.
56Ni Production in Double-degenerate White Dwarf Collisions (collisions II, 2010)
In this article we present a comprehensive study of white dwarf collisions as an avenue for creating type Ia supernovae. Using a smooth particle hydrodynamics code with a 13-isotope, $\alpha$-chain nuclear network, we examine the resulting $^{56}$Ni yield as a function of total mass, mass ratio, and impact parameter. We show that several combinations of white dwarf masses and impact parameters are able to produce sufficient quantities of $^{56}$Ni to be observable at cosmological distances.
We find that the $^{56}$Ni production in double-degenerate white dwarf collisions ranges from sub-luminous to the super-luminous, depending on the parameters of the collision. For all mass pairs, collisions with small impact parameters have the highest likelihood of detonating, but $^{56}$Ni production is insensitive to this parameter in high-mass combinations, which significantly increases their likelihood of detection. We also find that the $^{56}$Ni dependence on total mass and mass ratio is not linear, with larger-mass primaries producing disproportionately more $^{56}$Ni than their lower-mass secondary counterparts, and symmetric pairs of masses producing more $^{56}$Ni than asymmetric pairs.
On Type Ia supernovae from the collisions of two white dwarfs (collisions I, 2009)
In this letter we explore collisions between two white dwarfs as a pathway for making Type Ia supernovae (SNIa). White dwarf number densities in globular clusters allow 10-100 collisions per year out to redshift z $\lesssim$ 1, and observations by Chomiuk et al. of globular clusters in the nearby S0 galaxy NGC 7457 have detected what is likely to be a SNIa remnant. We carry out simulations of the collision between two 0.6 M$_{\odot}$ white dwarfs at various impact parameters and mass resolutions. For impact parameters less than half the radius of the white dwarf, we find such collisions produce $\simeq$ 0.4 M$_{\odot}$ of $^{56}$Ni, making such events potential candidates for underluminous SNIa or a new class of transients between Novae and SNIa.
http://en.wikipedia.org/wiki/Context-free_language
# Context-free language
In formal language theory, a context-free language is a language generated by some context-free grammar. The set of all context-free languages is identical to the set of languages accepted by pushdown automata.
## Examples
An archetypical context-free language is $L = \{a^nb^n:n\geq1\}$, the language of all non-empty even-length strings, the entire first halves of which are $a$'s, and the entire second halves of which are $b$'s. $L$ is generated by the grammar $S\to aSb ~|~ ab$, and is accepted by the pushdown automaton $M=(\{q_0,q_1,q_f\}, \{a,b\}, \{a,z\}, \delta, q_0, z, \{q_f\})$ where $\delta$ is defined as follows:
$\delta(q_0, a, z) = (q_0, az)$
$\delta(q_0, a, a) = (q_0, aa)$
$\delta(q_0, b, a) = (q_1, \varepsilon)$
$\delta(q_1, b, a) = (q_1, \varepsilon)$
where each rule is of the form $\delta(\mathrm{state}_1, \mathrm{read}, \mathrm{pop}) = (\mathrm{state}_2, \mathrm{push})$.
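As an illustration (my own sketch, not part of the article), this PDA can be simulated directly; acceptance is approximated here by requiring that all $a$'s have been matched and only the initial stack symbol $z$ remains:

```python
def accepts(s: str) -> bool:
    """Simulate the pushdown automaton above on the input string s."""
    stack = ['z']   # z is the initial stack symbol
    state = 'q0'
    for ch in s:
        top = stack[-1]
        if state == 'q0' and ch == 'a':
            stack.append('a')              # push one 'a' per 'a' read
        elif state == 'q0' and ch == 'b' and top == 'a':
            stack.pop(); state = 'q1'      # switch to matching b's
        elif state == 'q1' and ch == 'b' and top == 'a':
            stack.pop()
        else:
            return False                   # no applicable transition
    return state == 'q1' and stack == ['z']

assert accepts('ab') and accepts('aaabbb')
assert not accepts('aab') and not accepts('abab')
```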
Context-free languages have many applications in programming languages; for example, the language of all properly matched parentheses is generated by the grammar $S\to SS ~|~ (S) ~|~ \varepsilon$. Also, most arithmetic expressions are generated by context-free grammars.
## Closure properties
Context-free languages are closed under the following operations. That is, if L and P are context-free languages, the following languages are context-free as well:
• the union $L \cup P$ of L and P
• the reversal of L
• the concatenation $L \cdot P$ of L and P
• the Kleene star $L^*$ of L
• the image $\varphi(L)$ of L under a homomorphism $\varphi$
• the image $\varphi^{-1}(L)$ of L under an inverse homomorphism $\varphi^{-1}$
• the cyclic shift of L (the language $\{vu : uv \in L \}$)
Context-free languages are not closed under complement, intersection, or difference. However, if L is a context-free language and D is a regular language then both their intersection $L\cap D$ and their difference $L\setminus D$ are context-free languages.
### Nonclosure under intersection and complement
The context-free languages are not closed under intersection. This can be seen by taking the languages $A = \{a^n b^n c^m \mid m, n \geq 0 \}$ and $B = \{a^m b^n c^n \mid m,n \geq 0\}$, which are both context-free.[citation needed] Their intersection is $A \cap B = \{ a^n b^n c^n \mid n \geq 0\}$, which can be shown to be non-context-free by the pumping lemma for context-free languages.
Context-free languages are also not closed under complementation, as for any languages A and B: $A \cap B = \overline{\overline{A} \cup \overline{B}}$.
## Decidability properties
The following problems are undecidable for arbitrary context-free grammars A and B:
• Equivalence: is $L(A)=L(B)$?
• is $L(A) \cap L(B) = \emptyset$ ? (However, the intersection of a context-free language and a regular language is context-free, so if $B$ were a regular language, this problem becomes decidable.)
• is $L(A)=\Sigma^*$ ?
• is $L(A) \subseteq L(B)$ ?
The following problems are decidable for arbitrary context-free languages:
• is $L(A)=\emptyset$ ?
• is $L(A)$ finite?
• Membership: given any word $w$, does $w \in L(A)$ ? (membership problem is even polynomially decidable - see CYK algorithm and Earley's Algorithm)
## Parsing
Determining an instance of the membership problem, i.e. deciding, given a string $w$, whether $w \in L(G)$, where $L(G)$ is the language generated by some grammar $G$, is also known as parsing.
Formally, the set of all context-free languages is identical to the set of languages accepted by pushdown automata (PDA). Parser algorithms for context-free languages include the CYK algorithm and Earley's algorithm.
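As a sketch (my own code and grammar encoding, not from the article), CYK decides membership in $O(n^3)$ time for a grammar in Chomsky normal form:

```python
def cyk(word: str, start: str, rules) -> bool:
    """CYK membership test; rules are (head, body) pairs where body is
    a 1-tuple (a terminal) or a 2-tuple (two nonterminals)."""
    n = len(word)
    # table[(i, l)] = set of nonterminals deriving word[i:i+l]
    table = {(i, 1): {h for h, body in rules if body == (word[i],)}
             for i in range(n)}
    for l in range(2, n + 1):             # span length
        for i in range(n - l + 1):        # span start
            cell = set()
            for k in range(1, l):         # split point
                left, right = table[(i, k)], table[(i + k, l - k)]
                cell |= {h for h, body in rules
                         if len(body) == 2 and body[0] in left and body[1] in right}
            table[(i, l)] = cell
    return start in table[(0, n)]

# CNF grammar for {a^n b^n : n >= 1}: S -> AB | AT, T -> SB, A -> a, B -> b
rules = [('S', ('A', 'B')), ('S', ('A', 'T')), ('T', ('S', 'B')),
         ('A', ('a',)), ('B', ('b',))]
assert cyk('aabb', 'S', rules) and not cyk('abab', 'S', rules)
```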
A special subclass of context-free languages are the deterministic context-free languages which are defined as the set of languages accepted by a deterministic pushdown automaton and can be parsed by a LR(k) parser.[3]
See also parsing expression grammar as an alternative approach to grammar and parser.
https://pypi.org/project/pytest-track/
# Pytest plugin for additional test reporting
• Skipped tests percentage
• (For Selenium tests) HTML Coverage report
These two functionalities are not related and can be used separately. At the time, it's just easier to have them in the same codebase. Both of them have the option to output the report in terminal or Confluence.
## Skipped test percentage
To execute it:
```
$ pytest demo --track terminal
============== test session starts ===================
plugins: track-0.1.0
collected 7 items
demo/test_models.py .s.s. [ 71%]
demo/test_views.py .s [100%]
======= 4 passed, 3 skipped in 0.02 seconds ==========
Total: 4 from 7 tests not skipped (57.14%)
test_models, 3 from 5 tests not skipped (60.00%)
test_views, 1 from 2 tests not skipped (50.00%)
```
Current functionality requires only test collection, so this can be used with pytest's `--collect-only`. To store the result in Confluence, see the Confluence Configuration section and use:
```
$ pytest --track confluence
```
## HTML Coverage
To have this you need both to have selenium tests and access to the HTML source for the project you want to compute coverage.
This plugin works by recording all the identifiable source code elements in the HTML source and comparing how many of them the selenium tests view and inspect.
In greater details this is doing:
1. In the plugin configure phase, read HTML files and create a simple tree with identifiable elements
• This step can be cached (See options)
• An identifiable element is a tag with an id, a class, or a special HTML tag (app- for Angular)
2. While tests run, on each Selenium driver URL get, record the received HTML as viewed
3. While tests run, on each Selenium driver find_element, record the element as inspected
4. In the plugin unconfigure phase, calculate a naive % HTML seen and % HTML inspected
```
# To also report missing elements, use
$ pytest --html-cov --html-cov-show-elements
```
To store the result in Confluence, see the Confluence Configuration section and use:
```
$ pytest --html-cov confluence
```
## Confluence reporting
To configure Confluence settings, add a section to pytest.ini:
```
[pytest_track]
track_confluence_url=<confluence_root_url>
# For skipped test percentage
confluence_report_parent_page_id=<id_of_the_parent_page>
confluence_report_page_title=<title_for_the_results_page>
# For HTML coverage
confluence_coverage_parent_page_id=<id_of_the_parent_page>
confluence_coverage_page_title=<title_for_the_results_page>
```
## Contrib
Before PRs, only:
pre-commit install
tox
## Acknowledgements
Based on the initial work of Vasilica Dumbrava.
http://www.kaiyin.co.vu/2014/05/jacobian-method-for-solving-linear.html
## Jacobian method for solving linear systems
The Jacobian method (more commonly called the Jacobi method) is an iterative method for solving linear systems $Ax = b$.
Split $A = D + E$, where $D$ is the diagonal part of $A$ and $E$ contains the off-diagonal entries. I have no idea why this works so well, but since $Dx = b - Ex$ puts $x$ on both sides, an iterative approach is possible: $$x^{(i+1)} = Bx^{(i)} + z, \qquad B = -D^{-1}E, \qquad z = D^{-1}b$$
And the error for the $i$th iteration $e_{(i)}$ can converge to zero, i.e. the $i$th solution $x_{(i)}$ can converge to the true solution, under the condition that $\max(|\lambda_i|) < 1$, where $\lambda_i$ is the $i$th eigenvalue of $B$. $\max(|\lambda_i|)$ is called the spectral radius of $B$.
The following demonstration in matlab shows how the Jacobian method works by solving the following system:
\begin{align*} \begin{pmatrix} 3 & 2 \\ 2 & 6 \\ \end{pmatrix} x &= \begin{pmatrix} 2 \\ -8 \end{pmatrix} \end{align*}
Solving Ax = b with Jacobian method (matlab code)
a = [3 2; 2 6];          % coefficient matrix A
b = [2; -8];             % right-hand side
d = diag(diag(a));       % D: diagonal part of A
e = a - d;               % E: off-diagonal part, so A = D + E
invd = inv(d);
B = - invd * e;          % iteration matrix B = -D^(-1)*E
eig(B)                   % convergence check: max(abs(eig(B))) must be < 1
z = invd * b;            % z = D^(-1)*b
x = rand(2, 1);          % random starting guess
xtrack = x;              % record the iterates for plotting
while 1
    x1 = B*x + z;        % Jacobi update
    xtrack = horzcat(xtrack, x1);
    xdiff = abs(x1 - x);
    if max(xdiff(:)) < 0.00001   % stop once successive iterates agree
        x = x1;
        break
    end
    x = x1;
end
plot(xtrack(1,:), xtrack(2,:))   % path of the iterates toward the solution
hold on
scatter(xtrack(1,:), xtrack(2,:))
https://learn.careers360.com/ncert/question-name-the-functional-groups-present-in-the-following-compounds-a-ch-3-co-ch-2-ch-2-ch-2-ch-3/
# Get Answers to all your Questions
#### Name the functional groups present in the following compounds(a) $CH_3 CO CH_2 CH_2 CH_2 CH_3$(b) $CH_3 CH_2 CH_2 COOH$(c) $CH_3 CH_2 CH_2 CH_2 CHO$ (d) $CH_3 CH_2 OH$
(a) Ketone, >C=O
(b) Carboxylic acid, —COOH
(c) Aldehyde, —CHO
(d) Alcohol, —OH
https://www.sanfoundry.com/electrical-measurements-exam-questions-answers/
Electrical Measurements Questions and Answers – Advanced Problems on Error Analysis in Electrical Instruments
This set of Electrical Measurements Questions & Answers for Exams focuses on “Advanced Problems on Error Analysis in Electrical Instruments”.
1. A resistor of 10 kΩ with a tolerance of 5% is connected in parallel with a 5 kΩ resistor of 10% tolerance. What is the tolerance limit for the parallel network?
a) 9%
b) 12.4%
c) 8.33%
d) 7.87%
Explanation: Here, R1 and R2 are in parallel.
Then, $$\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2}$$
Or, R = $$\frac{10 \times 5}{15} = \frac{10}{3}$$ kΩ
Differentiating the parallel formula gives $$\frac{△R}{R} = R\left(\frac{△R_1}{R_1^2} + \frac{△R_2}{R_2^2}\right)$$
And △R1 = 5% of 10 kΩ = 0.5×10³ Ω, △R2 = 10% of 5 kΩ = 0.5×10³ Ω
∴$$\frac{△R}{R} = \frac{10 × 10^3}{3} × \frac{0.5 × 10^3}{(10 × 10^3)^2} + \frac{10 × 10^3}{3} × \frac{0.5 × 10^3}{(5 × 10^3)^2}$$
= $$\frac{0.5}{30} + \frac{1}{15} = \frac{2.5}{30}$$ = 8.33%.
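The same worst-case arithmetic in a short Python sketch (my own check, not part of the quiz):

```python
R1, tol1 = 10e3, 0.05   # 10 kOhm, 5% tolerance
R2, tol2 = 5e3, 0.10    # 5 kOhm, 10% tolerance

R = R1 * R2 / (R1 + R2)             # nominal parallel resistance
dR1, dR2 = tol1 * R1, tol2 * R2     # absolute errors in each resistor
rel_err = R * (dR1 / R1**2 + dR2 / R2**2)
print(f"{rel_err:.2%}")             # -> 8.33%
```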
2. A 0-400V voltmeter has a guaranteed accuracy of 1% of full scale reading. The voltage measured by this instrument is 250 V. Calculate the limiting error in percentage.
a) 4%
b) 2%
c) 2.5%
d) 1%
Explanation: The magnitude of the limiting error of the instrument:
δA = 0.01 × 400 = 4 V
The magnitude of the voltage being measured = 250 V
The relative error at this voltage Er = $$\frac{4}{250}$$ = 0.016
∴ Voltage measured is between the limits
Aa = As(1± Er)
= 250(1 ± 0.016)
= 250 ± 4 V.
3. The current flowing in a resistor of 1Ω is measured to be 25 A. But it was discovered that the ammeter reading was low by 1% and the resistance was marked high by 0.5%. Find the true power as a percentage of the measured power.
a) 95%
b) 101.5%
c) 100.1%
d) 102.4%
Explanation: True current = 25(1 + 0.01) = 25.25 A
True resistance R = 1(1 – 0.005) = 0.995Ω
∴ True power = I2R = 634.37 W
Measured power = (25)2 × 1 = 625 W
∴ $$\frac{True \,power}{Measured \,power}$$ × 100 = $$\frac{634.37}{625}$$ × 100 = 101.5%.
4. A resistor of 10 kΩ with a tolerance of 5% is connected in series with a 5 kΩ resistor of 10% tolerance. What is the tolerance limit for the series network?
a) 9%
b) 12.04%
c) 8.67%
d) 6.67%
Explanation: Error in 10 kΩ resistance = 10 × $$\frac{5}{100}$$ = 0.5 kΩ
Error in 5 kΩ resistance = 5 × $$\frac{10}{100}$$ = 0.5 kΩ
Total measurement resistance = 10 + 0.5 + 5 + 0.5 = 16 kΩ
Original resistance = 10 + 5 = 15 kΩ
Error = $$\frac{16-15}{15}$$ × 100 = $$\frac{1}{15}$$ × 100 = 6.67%.
5. Two resistances 100 ± 5Ω and 150 ± 15Ω are connected in series. If the error is specified as standard deviations, the resultant error will be _________
a) ±10 Ω
b) ±10.6 Ω
c) ±15.8 Ω
d) ±20 Ω
Explanation: Given, R1 = 100 ± 5 Ω
R2 = 150 ± 15 Ω
Now, R = R1 + R2
Since the errors are standard deviations, they combine in quadrature: R = $$± (5^2 + 15^2)^{0.5}$$ = ± 15.8 Ω.
6. Resistances R1 and R2 have respectively, nominal values of 10Ω and 5Ω and limiting error of ± 5% and ± 10%. The percentage limiting error for the series combination of R1 and R2 is?
a) 6.67%
b) 5.5%
c) 7.77%
d) 2.8%
Explanation: R1 = 10 ± 5%
R2 = 5 ± 10%
R1 = 10 ± $$\frac{5}{100}$$ × 10 = 10 ± 0.5Ω
R2 = 5 ± $$\frac{10}{100}$$ × 5 = 5 ± 0.5Ω
The limiting value of resultant resistance = 15 ± 1
Percentage limiting error of series combination of resistance = $$\frac{1}{15}$$ × 100 = 6.67%.
7. A voltmeter with a sensitivity of 1000 Ω/V reads 200 V on its 300 V scale when connected across an unknown resistor in series with a milliammeter. The milliammeter reads 10 mA. The apparent resistance of the unknown resistor will be?
a) 20 kΩ
b) 21.43 kΩ
c) 18.57 kΩ
d) 22.36 kΩ
Explanation: RT = $$\frac{V_T}{I_T}$$
VT = 200 V, IT = 10 mA
So, RT = $$\frac{200}{10 × 10^{-3}}$$ = 20 kΩ.
8. A voltmeter with a sensitivity of 1000 Ω/V reads 200 V on its 300 V scale when connected across an unknown resistor in series with a milliammeter. The milliammeter reads 10 mA. The actual resistance of the unknown resistor will be?
a) 20 kΩ
b) 18.57 kΩ
c) 21.43 kΩ
d) 22.76 kΩ
Explanation: Resistance of voltmeter,
RV = 1000 × 300 = 300 kΩ
The Voltmeter is in parallel with an unknown resistor,
RX = $$\frac{R_T R_V}{R_V - R_T} = \frac{20 × 300}{280}$$ = 21.43 kΩ.
9. A voltmeter with a sensitivity of 1000 Ω/V reads 200 V on its 300 V scale when connected across an unknown resistor in series with a milliammeter. The milliammeter reads 10 mA. The error due to the loading effect of the voltmeter is ________
a) 3.33%
b) 6.67%
c) 13.34%
d) 13.67%
Explanation: RT = $$\frac{V_T}{I_T}$$
VT = 200 V, IT = 10 mA
So, RT = 20 kΩ
Resistance of voltmeter,
RV = 1000 × 300 = 300 kΩ
Voltmeter is in parallel with unknown resistor,
RX = $$\frac{R_T R_V}{R_V - R_T} = \frac{20 × 300}{280}$$ = 21.43 kΩ
Percentage error = $$\frac{Actual-Apparent}{Actual}$$ × 100
= $$\frac{21.43-20}{21.43}$$ × 100 = 6.67%.
10. A 500 A, 50 Hz current transformer has a bar primary. The secondary burden is a pure resistance of 1 Ω and it draws a current of 5 A. If the magnetic core requires 250 Ampere-turn for magnetization, the percentage ratio error is __________
a) 10.56%
b) -10.56%
c) 11.80%
d) -11.80%
Explanation: The nominal ratio is Kn = 500/5 = 100, so with a bar primary (1 turn) the secondary has 100 turns, and the magnetising ampere-turns of 250 correspond to a magnetising current of Im = 250 A referred to the primary. With a purely resistive burden, the reflected secondary current (100 × 5 = 500 A) and Im are in quadrature, so Ip = $$\sqrt{500^2 + 250^2}$$ ≈ 559 A. The actual ratio R = $$\frac{559}{5}$$ = 111.8, and the percentage ratio error = $$\frac{K_n - R}{R}$$ × 100 = $$\frac{100 - 111.8}{111.8}$$ × 100 = −10.56%.
https://homework.zookal.com/questions-and-answers/one-of-the-most-impressive-experimental-confirmations-of-time-dilation-395668954
# Question: one of the most impressive experimental confirmations of time dilation...
###### Question details
One of the most impressive experimental confirmations of time dilation involves muons. Muons are unstable elementary particles with a lifetime of 2.2 μs. They are naturally produced in collisions between cosmic radiation and atoms in our upper atmosphere (~3000 m above the Earth's surface). A muon travels at 0.98c. In a reference frame that is attached to the moving muon (the muon's reference frame), how far can the muon travel before it decays (recall that v = d/t)?
For an observer at rest on the Earth, what does he measure for the lifetime of the muon (in seconds)?
For an observer at rest on the Earth, how far does the muon travel before it decays?
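For reference, a quick check of the numbers (my own working, not part of the original question), with $\gamma = 1/\sqrt{1 - v^2/c^2}$:

$$\gamma = \frac{1}{\sqrt{1 - 0.98^2}} \approx 5.0, \qquad d' = v\tau = 0.98c \times 2.2\,\mu\text{s} \approx 650\ \text{m}$$

$$t = \gamma\tau \approx 1.1 \times 10^{-5}\ \text{s}, \qquad d = vt \approx 3.3\ \text{km}$$

so in the Earth frame the dilated lifetime lets the muon cross the full ~3000 m of atmosphere, which is the point of the experiment.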
https://answers.ros.org/answers/60282/revisions/
There should be defines in the message class for _type, and the Python __slots__ attribute lists the fields.
https://proofwiki.org/wiki/Squeeze_Theorem/Functions/Proof_3
# Squeeze Theorem/Functions/Proof 3
## Theorem
Let $a$ be a point on an open real interval $I$.
Also let $f$, $g$ and $h$ be real functions defined at all points of $I$ except for possibly at point $a$.
Suppose that:
$\forall x \ne a \in I: g \left({x}\right) \le f \left({x}\right) \le h \left({x}\right)$
$\displaystyle \lim_{x \mathop \to a} \ g \left({x}\right) = \lim_{x \mathop \to a} \ h \left({x}\right) = L$
Then:
$\displaystyle \lim_{x \mathop \to a} \ f \left({x}\right) = L$
## Proof
By the definition of the limit of a real function, we have to prove that:
$\forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: \left({\left|{x - a}\right| < \delta \implies \left|{f \left({x}\right) - L}\right| < \epsilon}\right)$
Let $\epsilon \in \R_{>0}$ be given.
We have:
$\displaystyle \lim_{x \mathop \to a} \ g \left({x}\right) = \lim_{x \mathop \to a} \ h \left({x}\right)$
Hence by Sum Rule for Limits of Functions:
$\displaystyle \lim_{x \mathop \to a} \left({h \left({x}\right) - g \left({x}\right)}\right) = 0$
By the definition of the limit of a real function:
$(1): \quad \forall \epsilon' \in \R_{>0}: \exists \delta \in \R_{>0}: \left({\left|{x - a}\right| < \delta \implies \left|{h \left({x}\right) - L}\right| < \epsilon'}\right)$
$(2): \quad \forall \epsilon' \in \R_{>0}: \exists \delta \in \R_{>0}: \left({\left|{x - a}\right| < \delta \implies \left|{g \left({x}\right) - L}\right| < \epsilon'}\right)$
$(3): \quad \forall \epsilon' \in \R_{>0}: \exists \delta \in \R_{>0}: \left({\left|{x - a}\right| < \delta \implies \left|{h \left({x}\right) - g \left({x}\right)}\right| < \epsilon'}\right)$
Take $\epsilon' = \dfrac {\epsilon} 3$ in $(1)$, $(2)$, $(3)$.
Then there exist $\delta_1, \delta_2, \delta_3$ that satisfy $(1)$, $(2)$, $(3)$ with $\epsilon' = \dfrac \epsilon 3$.
Take $\delta = \min\{\delta_1, \delta_2, \delta_3\}$.
Then:
$\displaystyle \left\vert{x - a}\right\vert < \delta \implies \left\vert{h \left({x}\right) - L}\right\vert < \frac \epsilon 3 \;\land\; \left\vert{g \left({x}\right) - L}\right\vert < \frac \epsilon 3 \;\land\; \left\vert{h \left({x}\right) - g \left({x}\right)}\right\vert < \frac \epsilon 3$
Since $g \left({x}\right) \le f \left({x}\right) \le h \left({x}\right)$, we have $\left\vert{f \left({x}\right) - g \left({x}\right)}\right\vert \le \left\vert{h \left({x}\right) - g \left({x}\right)}\right\vert$.
So, if $\left\vert{x - a}\right\vert<\delta$:
$\displaystyle \begin{aligned} \left\vert{f \left({x}\right) - L}\right\vert &= \left\vert{f \left({x}\right) - g \left({x}\right) + h \left({x}\right) - L + g \left({x}\right) - h \left({x}\right)}\right\vert \\ &\le \left\vert{f \left({x}\right) - g \left({x}\right)}\right\vert + \left\vert{h \left({x}\right) - L}\right\vert + \left\vert{h \left({x}\right) - g \left({x}\right)}\right\vert \\ &\le \left\vert{h \left({x}\right) - g \left({x}\right)}\right\vert + \left\vert{h \left({x}\right) - L}\right\vert + \left\vert{h \left({x}\right) - g \left({x}\right)}\right\vert \\ &\le \frac \epsilon 3 + \frac \epsilon 3 + \frac \epsilon 3 = \epsilon \end{aligned}$
$\blacksquare$
http://1h20.services/Lcm%20Di%205%2010
LCM of 5, 10 » 1h20.services
# calculate the LCM least common multiple of.
Here you can find answers to questions like: LCM of 5, 10 and 15, or What is the LCM of 5, 10 and 15? Use this calculator to find the Least Common Multiple (LCM) for up to 3 numbers. Online Calculators > Math Calculators > LCM of 5 and 10. What is the LCM of 5 and 10? The least common multiple of 5 and 10 is 10. Use LCM(5,10) to find out the least common multiple of 5 and 10, i.e. the smallest number that is divisible by both 5 and 10. Tiger calculates the least common multiple LCM(5,10,15), showing steps. LCM(5,7,10): the Least Common Multiple is 70. Calculate the Least Common Multiple for 5, 7 and 10: factorize each of the above; LCM = 2 • 5 • 7, so the Least Common Multiple is 70. Processing ends successfully. Subscribe to our mailing list. Email Address. Terms and Topics. LCM Calculator.
Download our mobile app and learn how to find the LCM of up to four numbers in your own time: Android and iPhone/iPad. Find the least common multiple (lcm) of: 20 & 10, 30 & 15, 50 & 25, 2 & 1, 70 & 35, 20 & 5, 10 & 10, 30 & 5, 10 & 15, 50 & 5, 10 & 25, 70 & 5, 10 & 35. Tiger calculates the least common multiple LCM(3,5,10), showing steps. Simply, LCM(5,10,15) = 30. For elaboration and to clear your concept, understand that LCM stands for Lowest Common Multiple. So, if you are quite smart, you'll.
To construct the $\text{LCM}$ of a list of non-negative integers, $S=\{s_1,\ldots,s_n\}$, which is the smallest positive integer that is divisible by each element $s_i \in S$, one must factor each $s_i \in S$. Algebra Examples. Popular Problems. Algebra. Find the LCM of 3, 5, 10. The LCM is the smallest number that all of the numbers divide into evenly. 1. List the prime factors of each number. 2. Multiply each factor the greatest number of times it occurs in any one number. 31/12/2014 · Kickstart your Holiday shopping. Rishaveon asked in Science & Mathematics, Mathematics · 5 years ago. LCM means the lowest common multiple. In other words, you need to find the first common multiple of 4, 5 and 10, i.e. the first number you find that is divisible by 4, 5 and 10 with no remainders. So all you have to do is write out the multiples of 4, 5 and 10: multiples of 4: 4, 8, 12, 16, 20, 24, 28, 32, 36, 40; multiples of 5. The rule is simple, if you have already studied prime numbers. According to the fundamental theorem of arithmetic, [1] each number greater than 1 can be represented as a product of prime numbers to their respective powers. 1. Divide each number into fa.
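In code, the LCM of a list is usually built from the GCD instead of explicit factoring; a minimal Python sketch:

```python
from functools import reduce
from math import gcd

def lcm(a: int, b: int) -> int:
    # lcm(a, b) * gcd(a, b) == a * b for positive integers
    return a * b // gcd(a, b)

def lcm_list(numbers):
    return reduce(lcm, numbers)

print(lcm_list([5, 10]))      # 10
print(lcm_list([5, 10, 15]))  # 30
print(lcm_list([4, 5, 10]))   # 20
```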
2 September 2019 at 10:00, Room L, Polo Didattico V. delle Fontane: for all students of three-year degree programmes with a foreign qualification, for non-EU students resident abroad, and for Italian students with a secondary school diploma obtained abroad. The least common multiple (LCM) of 5 and 11 is 55: LCM(5,11) = 55. The least common multiple (or lowest common denominator, LCD) can be calculated in two ways: with the LCM formula via the greatest common factor (GCF), or by multiplying the prime factors with the highest exponents. LCM(5, 10): least common multiple calculator. LCM calculator to calculate the least common multiple of several numbers. 09/11/2010 · What is the LCM of 5, 10 and 25? PLEASE HELP. Answer Save. 8 Answers. Relevance. Dean. 9 years ago. Best Answer. First we look at the prime factors of each number: 5 = 5, 10 = 2 · 5, 25 = 5². Then we multiply the different primes with the biggest powers: 2 · 5² = 50. The lowest common multiple is 50.
Least Common Multiple (LCM) of 10 and 5. Below you can find the full step-by-step solution for your problem. We hope it will be very helpful for you and that it will help you to understand the solving process. If it's not what you are looking for, type your own values into the calculator fields. 15/10/2019 · In this video, I find the LCM or lowest common multiple of the numbers 5, 15, and 20. The LCM is sometimes also called the least common multiple. The LCM of five, fifteen and twenty equals sixty. I use a factor tree to find the LCM. In addition, I find the GCF or greatest common factor of 5, 15, and 20. The greatest common factor of 5. LCM(2, 5): least common multiple calculator. LCM calculator to calculate the least common multiple of several numbers.
http://math.stackexchange.com/questions/57544/confusion-on-change-of-basis-matrix
# Confusion on Change of basis matrix
I am reading section 46 of Halmos' Finite Dimensional Vector Spaces. In this section Halmos poses two questions, the first of which is:
Question 1: If $x$ is in a finite dimensional vector space $V$, write $x = \sum_i \xi_i x_i = \sum_i \eta_i y_i$, what is the relation between its coordinates $(\xi_1,\xi_2, \ldots \xi_n)$ with respect to the basis $X = (x_1, \ldots, x_n)$ and its coordinates $(\eta_1, \ldots ,\eta_n)$ in the basis $Y = (y_1 \ldots y_n)$?
After a few lines, Halmos defines the linear transformation $A$ by $A(x_i) = y_i$, $i=1,2, \ldots n$. From what I understand, suppose we have a basis vector $x_i$. Then this should correpond to the column vector
$$\left[\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{array}\right]$$
in the basis $X$, where the $1$ is in the $i$-th row of the vector.
Then if the matrix $A = (a_{ij})$ is applied to this vector, we should get the $i$-th column of the matrix. Let the $i$-th column of the matrix be
$$\left[\begin{array}{c} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ni} \end{array} \right].$$
However, note that the $i$-th column of the matrix is expressed in terms of the basis $X$, because Halmos states that $y_j = Ax_j = \sum_i a_{ij}x_i$. In other words, he has chosen to express each vector of the basis $Y$ as a linear combination of the basis vectors in $X$. Is there a particular reason for doing this? From my limited experience in linear algebra, if we have a vector in a basis $X$ and we wish to express it in a basis $Y$, the thing to do would be to express $x_1$ as a linear combination of the $y_i$'s in $Y$, then $x_2$, and so on. This is the opposite of what Halmos has done.
Is this difference anything significant to be aware of?
$\textbf{Edit : }$ I will write out the relevant bit that I am referring to in Halmos' book:
"Let $V$ be an $n$- dimensional vector space and let $X = (x_1, \ldots x_n)$ and $y=(y_1,\ldots y_n)$ be two bases in $V$. We may ask the following two questions:
Question 1. If $x$ is in $V$, $x = \sum_i \xi_ix_i = \sum_i\eta_i y_i$, what is the reation between its coordinates $xi_i$ with respect to $X$ and its coordinates $\eta_i$ with respect to $y$?
Question 2. If $(\xi_1, \ldots \xi_n)$ is an ordered set of $n$ scalars, what is the relation between the vectors $x = \sum_i \xi_ix_i$ and $y = \sum_i \xi_i y_i$?
Both these questions are easily answered in the language of linear transformations. We consider, namely the linear transformation $A$ defined by $Ax_i = y_i$. More explicitly:
$A(\sum_i \xi_i x_i) = \sum_i \xi_iy_i.$ "
View your two bases as two isomorphisms $x:K^n\to V$ and $y:K^n\to V$. Then you have $x(\xi)=y(\eta)$ iff $\xi=x^{-1}(y(\eta))$. – Pierre-Yves Gaillard Aug 15 '11 at 10:53
@Pierre-Yves Gaillard Ok thanks. How does my matrix $A$ relate to this? – user38268 Aug 15 '11 at 11:28
To tell the truth, I focused on “Question 1”, and didn’t read carefully enough what you wrote next. Sorry. But do you agree that this answers “Question 1”? I’ll read more seriously the whole of your post, and if I have anything sensible to say about it (which I doubt), I’ll say it. – Pierre-Yves Gaillard Aug 15 '11 at 11:37
I agree with you. I am curious whether the matrix $A$ that I wrote has a connection with the isomorphisms you mentioned. For example, to write the matrix of the linear transformation given by $x$, one forms the matrix whose columns are the images of the vectors $e_i$ in $V$. But to my eyes the matrix $A$ is the same matrix as the one I described before. I think I could be wrong.... – user38268 Aug 15 '11 at 12:25
Congratulations on your excellent French! Where did you learn it? - The first passage I do not understand in your question is: "the matrix $A = a_{ij}$". You had, after all, defined $A$ as a linear transformation. I will try to get access to Halmos's book and look at the passage in question. – Pierre-Yves Gaillard Aug 15 '11 at 12:45
The matrix which expresses basis vectors of one basis $X$ in terms of another basis $Y$ and the matrix which expresses vectors of $Y$ in terms of $X$ are just inverses to each other and which one you call $A$ and which one becomes $A^{-1}$ is a matter of notation or convention - you just have to pick the right one when doing an actual change of coordinates.
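As a concrete illustration of this inverse relationship (my own example, not from the thread):

```python
import numpy as np

# Two bases of R^2, written as the columns of X and Y
X = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Y = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# A expresses the Y basis vectors in terms of X: Y = X @ A
A = np.linalg.solve(X, Y)

# A vector with coordinates eta in basis Y has coordinates A @ eta in basis X
eta = np.array([3.0, -1.0])
v = Y @ eta                        # the vector itself (standard coordinates)
xi = A @ eta
assert np.allclose(X @ xi, v)      # same vector, expressed in basis X

# Going back from X-coordinates to Y-coordinates uses the inverse matrix
assert np.allclose(np.linalg.inv(A) @ xi, eta)
```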
https://latex.org/forum/viewtopic.php?f=59&t=29178&p=98591
## LaTeX forum ⇒ Theses, Books, Title pages ⇒ Master-doctoral-thesis template not compiling after updating Miktex Topic is solved
Classicthesis, Bachelor and Master thesis, PhD, Doctoral degree
rj2585
Posts: 2
Joined: Thu Feb 16, 2017 12:39 am
### Master-doctoral-thesis template not compiling after updating Miktex
I am trying to use the template for a thesis found at http://www.latextemplates.com/template/masters-doctoral-thesis . The package worked fine for me on Overleaf, but the project got too big to manage there. I downloaded the zip file to open it in TeXstudio, and it compiled fine until I updated the TeX packages in TeX Live (I am using MacTeX, but the same happened with MiKTeX after updating on Windows). After the update, when I compile the project, I get the following error:
Missing \endcsname inserted.
<to be read again>
\tex_let:D
l.164 \cleardoublepage
The control sequence marked <to be read again> should not appear between \csname and \endcsname.
This error appears every time commands like \cleardoublepage, \chapter, \begin{abstract}, \tableofcontents and others are called.
The log file is included as an attachment as well as the template from latextemplates.
Attachments
thesis_1.zip
main.log
Tags:
BlackForestrian
Posts: 34
Joined: Thu Sep 04, 2014 2:43 pm
Location: Black Forest, Germany
The issue is related to the recent (2017/10/02) update of the xparse and l3kernel packages, leading to an error with commands defined with \NewDocumentCommand. The precise reason is yet unclear.
As a workaround, add:
\makeatletter
\AtBeginDocument{\renewcommand{\blank@p@gestyle}{empty}}
\makeatother
Oh my god, you are confusing me with someone who is really, really, really interested in that!
rj2585
Posts: 2
Joined: Thu Feb 16, 2017 12:39 am
That solved the problem.
Thank you very much.
Johannes_B
Site Moderator
Posts: 3671
Joined: Thu Nov 01, 2012 4:08 pm
With the next update, everything should be working like before without the hack.
The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary.
https://wiki.math.uwaterloo.ca/statwiki/index.php?title=neighbourhood_Components_Analysis&oldid=1940
Neighbourhood Components Analysis
Introduction
Neighbourhood Components Analysis (NCA) is a method for learning a Mahalanobis distance measure for k-nearest neighbours (KNN). In particular, it finds a distance metric that maximises the leave-one-out (LOO) classification performance on the training set for a stochastic variant of KNN. NCA can also learn a low-dimensional linear embedding of labelled data for data visualisation and for improved KNN classification speed.
k-Nearest Neighbours
k-Nearest neighbours is a simple classification technique that determines a test point's label by looking at the labels of the $k$ training points that are nearest the test point. This is a surprisingly effective method that has a non-linear decision surface that is non-parametric, except for the parameter $k$.
However, KNN suffers from two problems. First, it can be computationally expensive to classify points, as each test point must be compared against the entire training set. Second, there is the problem of deciding which distance metric should define the "nearest" points.
NCA attacks the above two problems of KNN. It finds a distance metric that defines which points are nearest. It can restrict this distance metric to be low rank, reducing the dimensionality of the data and thus reducing storage and search times.
NCA finds the matrix $A$ where $Q=A^TA$ and distance between two points is defined as: $d(x,y) = (x - y)^TQ(x-y) = (Ax - Ay)^T(Ax - Ay)$
The basic idea of NCA is to find the distance metric $A$ that maximises KNN classification performance on test data. As test data are not available during training, the method uses leave-one-out (LOO) performance. However, KNN has a discontinuous error function, as points can suddenly jump class as they cross class boundaries. Instead, NCA optimises using stochastic neighbour assignment: a test point is assigned a neighbour (and the neighbour's class) with probability $p_{ij}$, which decreases as the distance between the points increases.
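The text above describes $p_{ij}$ only qualitatively; for reference, the original NCA paper (Goldberger et al., 2004) defines it as a softmax over squared distances in the transformed space:

$p_{ij} = \frac{\exp\left(-\|Ax_i - Ax_j\|^2\right)}{\sum_{k \neq i} \exp\left(-\|Ax_i - Ax_k\|^2\right)}, \qquad p_{ii} = 0.$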
NCA maximises the expected number of correctly classified points according to this objective function:
$f(A) = \sum_i \sum_{j \in C_i} p_{ij} = \sum_i p_i$
Its gradient is easily found:
$\frac{\partial f}{\partial A} = 2A \sum_i \left( p_i \sum_k p_{ik} x_{ik} x_{ik}^T - \sum_{j \in C_i} p_{ij} x_{ij} x_{ij}^T \right)$ where $x_{ij} = x_i - x_j$.
NCA then finds $A$ using a gradient-based optimiser. Note, however, that the cost function is not convex, so the optimisation can get stuck in local optima. The authors comment that they never observed any effects of overfitting with this method, and that performance never decreased as the training set size increased.
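To make the construction concrete, here is a minimal NumPy sketch of the stochastic-neighbour probabilities and the objective $f(A)$ above (the naming is ours, not code from the NCA paper; a full implementation would feed the analytic gradient to an optimiser):

```python
import numpy as np

def nca_objective(A, X, y):
    """f(A) = sum_i sum_{j in C_i} p_ij: the expected number of points
    correctly classified under stochastic neighbour assignment."""
    Z = X @ A.T                                          # z_i = A x_i
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # ||z_i - z_j||^2
    np.fill_diagonal(d2, np.inf)                         # enforce p_ii = 0
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)                    # softmax over j != i
    same_class = y[:, None] == y[None, :]                # j in C_i
    return (P * same_class).sum()

# toy sanity check: two well-separated classes in 5 dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 5)), rng.normal(3.0, 1.0, (20, 5))])
y = np.repeat([0, 1], 20)
A = rng.normal(size=(2, 5))     # rank-2 A gives a 2-D embedding
print(nca_objective(A, X, y))   # larger is better; approaches 40 for a good A
```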
It is possible to use a cost function based on KL-divergence, and the authors note that performance is very similar to that achieved through the above cost function.
The scale of the $A$ found by NCA, together with the relative directions of its rows, provides an estimate of the optimal number of neighbours to use for KNN: in stochastic NN, a larger-scale $A$ tends to average over fewer neighbours, and vice versa.
https://www.nature.com/articles/ncomms10903?error=cookies_not_supported&code=42fbe7b5-d292-4822-a82a-db7436618ece
# Interplanar coupling-dependent magnetoresistivity in high-purity layered metals
## Abstract
The magnetic field-induced changes in the conductivity of metals are the subject of intense interest, both for revealing new phenomena and as a valuable tool for determining their Fermi surface. Here we report a hitherto unobserved magnetoresistive effect in ultra-clean layered metals, namely a negative longitudinal magnetoresistance that is capable of overcoming their very pronounced orbital one. This effect is correlated with the interlayer coupling, disappearing for fields applied along the so-called Yamaji angles where the interlayer coupling vanishes. Therefore, it is intrinsically associated with the Fermi points in the field-induced quasi-one-dimensional electronic dispersion, implying that it results from the axial anomaly among these Fermi points. In its original formulation, the anomaly is predicted to violate separate number conservation laws for left- and right-handed chiral (for example, Weyl) fermions. Its observation in PdCoO2, PtCoO2 and Sr2RuO4 suggests that the anomaly affects the transport of clean conductors, in particular near the quantum limit.
## Introduction
The magnetoconductivity or -resistivity of metals under a uniform magnetic field μ0H (μ0 is the permeability of free space) is highly dependent on the precise shape of their Fermi surface (FS) and on the orientation of the current flow relative to the external applied field H1,2. This is particularly true for high-purity metals at low temperatures, whose carriers may execute many cyclotronic orbits in between scattering events. However, the description of the magnetoconductivity of real systems in terms of the Boltzmann equation including the Lorentz force, the electronic dispersion and realistic scattering potentials is an incredibly daunting task, whose approximate solutions can only be obtained through oversimplifications. Despite the inherent difficulty in describing the magnetoresistivity of metallic or semi-metallic systems, it continues to be a subject of intense interest. Indeed, in recent years, a number of new magnetoresistance phenomena have been uncovered. For example, although semi-classical transport theory predicts a magnetoresistivity ρ(μ0H) ∝ (μ0H)2, certain compounds such as β-Ag2Te display a linear, non-saturating magnetoresistivity3, which is ascribed to the quantum magnetoresistive scenario4, associated with linearly dispersing Dirac-like bands5. However, in semi-metals characterized by a bulk Dirac dispersion and extremely high electron mobilities such as Cd3As2, the linear magnetoresistivity develops a weak (μ0H)2 term as the quality of the sample increases6. Its enormous magnetoresistivity is claimed to result from the suppression of a certain protection against backscattering channels6. The semi-metal WTe2 was also found to display a very large and non-saturating magnetoresistivity, which is ∝ (μ0H)2 under fields up to 60 T. This behaviour was ascribed to a nearly perfect compensation between the densities of electrons and holes7. In recent times, a series of compounds were proposed to be candidate Weyl semi-metals characterized by a linear touching between the valence and the conduction bands at several points (Weyl points) of their Brillouin zone8. These Weyl points are predicted to lead to a pronounced negative magnetoresistivity for electric fields aligned along a magnetic field due to the so-called axial anomaly9,10.
Here we unveil the observation of yet another magnetoresistive effect, namely a pronounced negative magnetoresistivity in extremely clean and non-magnetic layered metals. We study the delafossite-type PtCoO2 and PdCoO2 compounds, which are characterized by a single FS sheet and, as with Cd3As2, can display residual resistivities of the order of just a few tens of nΩ cm. Given its extremely low level of disorder, for specific field orientations along which the interlayer coupling vanishes, PdCoO2 can display a very pronounced positive magnetoresistivity that exceeds 550,000% for μ0H ≈ 35 T and for currents along the interlayer axis. Nevertheless, as soon as the field is rotated away from these specific orientations and as the field increases, this large orbital effect is overwhelmed by the emergence of a pronounced negative magnetoresistivity. For fields along the interlayer direction, a strong longitudinal negative magnetoresistivity is observed from μ0H=0 T all the way up to μ0H=35 T. Very similar behaviour is observed in the PtCoO2 compound. For the correlated Sr2RuO4, the longitudinal negative magnetoresistivity effect is also observable, but only in the cleanest samples, that is, those displaying the highest superconducting transition temperatures. We suggest that this effect might result from the axial anomaly between Fermi points in a field-induced, quasi-one-dimensional electronic dispersion.
## Results
### Observation of an anomalous longitudinal magnetoresistivity
As shown in Fig. 1a, PdCoO2 crystallizes in the delafossite structure (space group R-3m), which results from the stacking of monatomic triangular layers11. The synthesis of PdCoO2 single crystals is described in the Methods section. According to band structure calculations12,13,14, the Fermi level EF is placed between the filled t2g and the empty eg levels, with the Pd triangular planes dominating the conductivity and leading to its highly anisotropic transport properties. The reported room-temperature in-plane resistivity is just 2.6 μΩ cm, making PdCoO2 perhaps the most conductive oxide known to date15. Figure 1b,c show the configuration of contacts used for measuring the longitudinal magnetoresistivity of all compounds. de Haas–van Alphen measurements15 reveal a single, corrugated and nearly two-dimensional FS with a rounded hexagonal cross-section, in broad agreement with both band structure calculations12,13,14 and angle-resolved photoemission measurements16, and yield an average Fermi wave vector of ≈1 Å−1 together with the corresponding average Fermi velocity (where μ ≈ 1.5 is the carrier effective mass15 in units of the free electron mass). Recent measurements of the interplanar magnetoresistivity ρc(μ0H) reveal an enormous enhancement for fields along the planar direction, that is, an increase of 35,000% at 2 K under μ0H=14 T, which does not follow the characteristic ρ(μ0H) ∝ (μ0H)2 dependence at higher fields17. This behaviour can be reproduced qualitatively by semi-classical calculations, assuming a very small scattering rate17. Most single crystals display in-plane residual resistivities ρab0 ranging from only 10 up to 40 nΩ cm, which correspond to transport lifetimes ranging from ≳20 down to 5.5 ps (e is the electron charge and n ≈ 2.4 × 1028 m−3 (ref. 11)), or to mean free paths ranging from 4 up to 20 μm (ref. 15). However, according to ref. 15, the quasiparticle mean free path extracted from the Dingle temperature is only of the order of one micrometre. Hence, the transport lifetime is larger than the quasiparticle lifetime by at least one order of magnitude, which is the hallmark of a predominant forward-scattering mechanism (see ref. 18). For a magnetic field along the c axis, ωcτ exceeds unity when μ0H ≳ 1 T; in contrast, Landau quantization becomes important when μ0H>10 T. These estimations suggest the importance of the Landau quantization for understanding our observations over a wide range of fields up to μ0H ≈ 30 T.
As shown in Fig. 2a, the low-T magnetoresistivity Δρc=(ρc − ρ0)/ρ0, where ρ0 is the zero-field interplanar resistivity, decreases (by up to 70%) in a magnetic field of 30 T oriented parallel to the applied current. Given that PdCoO2 is non-magnetic and extremely clean (see Methods), this effect cannot be attributed to magnetic impurities. In addition, the magnitude of the observed magnetoresistivity cannot be explained in terms of weak localization effects19,20. To support both statements, in Fig. 2b we show Δρc for a PdCoO2 single crystal as a function of H applied along the planar direction and for several temperatures T. In sharp contrast to the results shown in Fig. 2a, as T decreases, Δρc(μ0H) increases considerably, by more than three orders of magnitude when T<10 K, thus confirming the absence of scattering by magnetic impurities or of any role for weak localization. In addition, it is noteworthy that Δρc ∝ (μ0H)2 at low fields, which indicates that the interlayer transport is coherent at low fields21. Figure 2c depicts a simple Kohler plot of the magnetoresistivity shown in Fig. 2b, where the field has also been normalized by ρ0(T), which indicates unambiguously that the transverse magnetoresistive effect in PdCoO2 is exclusively orbital in character and is dominated by the scattering from impurities/imperfections and phonons1.
The evolution of the longitudinal magnetoresistance with temperature is depicted in Fig. 3a. ρc is seen to decrease by more than 60% for fields approaching 9 T and for all temperatures below 30 K. Figure 3b displays ρc(μ0H)/ρ0 as a function of the angle θ between μ0H and the c axis at a temperature T=1.8 K, for a third single crystal. For θ>10°, the pronounced positive magnetoresistance observed at low fields, due to an orbital magnetoresistive effect, is overpowered at higher fields by the mechanism responsible for the negative magnetoresistivity. This behaviour is no longer observed within this field range when θ is increased beyond 20°. Figure 3c shows a Kohler plot, that is, Δρc/ρ0 as a function of μ0H normalized by ρ0. As seen in Fig. 3c, all curves collapse on a single curve, indicating that a particular transport mechanism dominates even at high temperatures where phonon scattering is expected to be strong. The red line is a fit to a (μ0H)−1 dependence, indicating that ρc ∝ (μ0H)−1 at lower fields.
### Angular dependence of the anomalous magnetoresistive response
Fig. 4 shows the longitudinal magnetoresistance for fields and currents along an in-plane axis. For this orientation, the charge carriers follow open orbits along the axis of the cylindrical FS instead of quantized cyclotronic orbits. In contrast to Δρc/ρ0, but similar to the longitudinal magnetoresistivity of ultra-clean elemental metals1,2, the in-plane longitudinal magnetoresistivity is observed to increase and saturate as a function of μ0H. This further confirms that conventional mechanisms, for example, impurities, magnetism and so on, are not responsible for the negative longitudinal magnetoresistivity observed in Δρc/ρ0.
Figure 5a shows ρc as a function of the angle θ between the field and the c axis, for three different field values: 8, 25 and 30 T. ρc(θ) displays the characteristic structure of quasi-two-dimensional metals, namely a series of sharp peaks at specific angles, the so-called Yamaji angles θn, given by ck∥ tan θn = π(n − 1/4) (where n is an integer, c is the interplanar distance and k∥ is the projection of the Fermi wave number on the conduction plane), for which all cyclotronic orbits on the FS have an identical orbital area22. In other words, the corrugation of the FS no longer leads to a distribution of cross-sectional areas, as if the corrugation had been effectively suppressed. As discussed below, in terms of the energy spectrum, this means that the Landau levels become non-dispersive at the Yamaji angles18,23; hence, one no longer has Fermi points. The sharp peak at θ=90° is attributed to coherent electron transport along small closed orbits on the sides of a corrugated cylindrical FS24,25. The width Δθ of this peak, shown in Fig. 5b for several temperatures, allows us to estimate the interlayer transfer integral tc (ref. 26),
assuming a simple sinusoidal FS corrugation along the kz direction. Here, the interplanar separation is d=c/3, as there are three conducting Pd planes per unit cell, each providing one conducting hole and therefore leading to three carriers per unit cell. This value is consistent with our Hall-effect measurements (not included here). The full width at half maximum of the peak at 90° is Δθ ≈ 0.78° and, with EF given by ħ2kF2/(2μme), one obtains tc=2.79 meV or 32.4 K. Figure 5c displays ρc as a function of μ0H for two angles: the Yamaji angle θn=1=23.0° and θ=22.7°, respectively. As seen, ρc(μ0H) for fields along θn=1 displays a very pronounced positive magnetoresistance, that is, ρc/ρ0 increases by 550,000% when μ0H is swept from 0 to 35 T. However, at μ0H=35 T, ρc/ρ0 decreases by one order of magnitude as μ0H is rotated by just 0.3° with respect to θn=1. Furthermore, as seen in Fig. 5d, at higher fields ρc displays a cross-over from a very pronounced positive to a negative magnetoresistance, resulting from a small increment in θ relative to θn=1. This is a very clear indication of two competing mechanisms, with the negative magnetoresistivity overcoming the orbital effect when the orbitally averaged interlayer group velocity (or the transfer integral tc) becomes finite at θ ≠ θn. We emphasize that for a conventional and very clean metal composed of a single FS sheet, the magnetoresistivity should either be ∝ (μ0H)2 (ref. 21) or saturate, as seen in quasi-two-dimensional metals close to the Yamaji angle27, or in Fig. 2a,b for fields below 15 T. This is illustrated by the Supplementary Fig. 1 (see also Supplementary Note 1), which contrasts our experimental observations with predictions based on semi-classical transport models, which correctly describe the magnetoresistance of layered organic metals in the vicinity of the Yamaji angle. In contrast, as illustrated by the dotted red line in Fig. 5d, ρc(μ0H) can be well described by an expression of the form ρc = a(μ0H)−1 + b(μ0H). Here, the (μ0H)−1 term describes the negative magnetoresistivity previously seen in Fig. 3, whereas the μ0H term describes the non-saturating linear magnetoresistance predicted and observed for systems close to the quantum limit3,4,5,28. This expression describes ρc(μ0H, θ) satisfactorily, except at the Yamaji angle, where both terms vanish. In the neighbourhood of θn, the addition of a small (μ0H)2 term improves the fit, with its pre-factor increasing as θn is approached. ρc also displays Shubnikov–de Haas oscillations at small (and strongly θ-dependent) frequencies, which were not previously detected in ref. 15. As discussed in ref. 29, these slow oscillations, observed only in the interlayer magnetoresistance of layered metals, originate from the warping of the FS. In Supplementary Fig. 2 (see also Supplementary Note 2), we show how these frequencies disappear when the group velocity vanishes at θn.
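As a quick numerical sanity check on the Yamaji condition quoted above, the sketch below reproduces the θn=1 ≈ 23° peak. The input values are assumed literature-level estimates for PdCoO2 (kF ≈ 0.95 Å−1 and interplanar separation d = c/3 ≈ 5.9 Å), not numbers taken from the extracted text:

```python
import numpy as np

# Yamaji condition (Yamaji 1989): k_par * d * tan(theta_n) = pi * (n - 1/4)
k_par = 0.95e10   # assumed average in-plane Fermi wave vector, m^-1
d = 5.9e-10       # assumed interplanar separation d = c/3, m

for n in (1, 2, 3):
    theta_n = np.degrees(np.arctan(np.pi * (n - 0.25) / (k_par * d)))
    print(f"theta_{n} = {theta_n:.1f} deg")
# prints theta_1 = 22.8 deg, close to the theta_{n=1} = 23.0 deg quoted above
```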
Significantly, this effect does not appear to be confined to PdCoO2. Figure 6 presents an overall evaluation of the longitudinal magnetoresistance of isostructural PtCoO2, whereas Supplementary Fig. 3 displays the observation of an impurity-dependent negative magnetoresistivity in the correlated perovskite Sr2RuO4 (see also Supplementary Note 3). As shown in Fig. 6, PtCoO2 presents a pronounced negative longitudinal magnetoresistivity either for j ∥ μ0H ∥ c axis or for μ0H close to a Yamaji angle (j is the current density). It also presents a very pronounced and non-saturating magnetoresistivity for fields applied along the Yamaji angle. For both systems, the magnetoresistivity does not follow a single power law as a function of μ0H. In fact, as shown in Supplementary Fig. 4, at θn the magnetoresistivity of the (Pt,Pd)CoO2 system follows a (μ0H)2 dependence for μ0H ≲ 15 T. At intermediate fields, ρ(μ0H) deviates from the quadratic dependence, recovering it again at subsequently higher fields. As Kohler's rule implies that Δρ/ρ0 ∝ (μ0H/ρ0)2, we argue that the observed increase in slope would imply a field-dependent reduction in scattering by impurities (see Supplementary Fig. 4 and Supplementary Note 4). The precise origin of this suppression in scattering remains to be identified. Nevertheless, the enormous and positive magnetoresistivity observed for fields along θn seems consistent with a simple scenario, that is, an extremely clean system whose impurity scattering weakens with increasing magnetic field. In Sr2RuO4, the negative longitudinal magnetoresistivity is observed only in the cleanest samples and for angles within 10° of the c axis. This compound is characterized by three corrugated cylindrical FS sheets, each leading to a distinct set of Yamaji angles, making it impossible to completely suppress the interplanar coupling at specific Yamaji angle(s).
## Discussion
Negative magnetoresistivity is a common feature of ferromagnetic metals near their Curie temperature, or of samples having dimensions comparable to their electronic mean free path, where the winding of the electronic orbits under a magnetic field reduces the scattering from the surface. It can also result from the field-induced suppression of weak localization, or from the field-induced suppression of spin scattering/quantum fluctuations as seen in f-electron compounds30. None of the compounds described in this study is near a magnetic instability, nor do they contain significant amounts of magnetic impurities or disorder to make them prone to weak localization. The magnitude of this anomalous magnetoresistivity, coupled with its peculiar angular dependence, is in fact enough evidence against any of these conventional mechanisms. Below, we discuss an alternative scenario based on the axial anomaly, which in our opinion explains most of our observations.
The axial anomaly is a fundamental concept of relativistic quantum field theory, which describes the violation of the separate number conservation laws of left- and right-handed massless chiral fermions in odd spatial dimensions due to quantum mechanical effects31,32. When three-dimensional massless Dirac or Weyl fermions are placed under parallel electric and magnetic fields, the number difference between the left- and the right-handed fermions is expected to vary with time according to the Adler–Bell–Jackiw formula9,33

d(nR − nL)/dt = e²E·B/(2π²ħ²). (2)
Here, nR/L are the number operators for the right- and the left-handed Weyl fermions, with the electric and the magnetic field strengths respectively given by E and B. The Dirac fermion describes the linear touching of twofold Kramers-degenerate conduction and valence bands at isolated momentum points in the Brillouin zone. By contrast, the Weyl fermions arise due to the linear touching between nondegenerate conduction and valence bands. The axial anomaly was initially proposed to produce a large, negative longitudinal magnetoresistance for a class of gapless semiconductors whose low-energy band structure is described by massless Weyl fermions10. The reason for the negative magnetoresistance is relatively straightforward. The number imbalance due to the axial anomaly can only be equilibrated through backscattering between two Weyl points, which involves a large momentum transfer QW. Quite generally, the impurity scattering in a material can be modeled by a momentum-dependent impurity potential V(Q), where Q is the momentum transfer between the initial and the final electronic states. If V(Q) is a smoothly decreasing function of |Q| (such as a Gaussian or a Lorentzian), the backscattering amplitude can be considerably smaller than its forward-scattering counterparts (occurring with small Q around each Weyl point). Therefore, in the presence of the axial anomaly the transport lifetime can be considerably larger than the one in the absence of a magnetic field. Consequently, the axial anomaly in the presence of parallel E and B fields can give rise to a larger conductivity or a smaller resistivity, that is, a negative magnetoresistance. Recent theoretical proposals for Weyl semi-metals34,35,36,37, followed by experimental confirmation38,39, have revived the interest in the experimental confirmation of the axial anomaly through efforts to detect a negative longitudinal magnetoresistivity40,41,42,43,44,45,46. There are examples of three-dimensional Dirac semi-metals47,48,49 which may be converted, through Zeeman splitting, into a Weyl semi-metal. Examples include Bi1−xSbx at the band-inversion transition point between topologically trivial and nontrivial insulators42, and Cd3As2 (ref. 6).
In analogy with the predictions for the axial anomaly between Weyl points, here we suggest that our observations might be consistent with the emergence of the axial anomaly among the Fermi points of a field-induced, one-dimensional electronic dispersion18. In effect, in the presence of a strong magnetic field, the quantization of the cyclotron motion leads to discrete Landau levels with a one-dimensional dispersion and a degeneracy factor eB/h; see Fig. 7a–c. Consider the low-energy description of a one-dimensional electron gas in terms of the right- and left-handed fermions obtained in the vicinity of the two Fermi points. In the presence of an external electric field E, the separate number conservation of these chiral fermions is violated according to

d(nR − nL)/dt = eE/(πħ), (3)
where nR/L corresponds to the number operators of the right- and left-handed fermions, respectively31,32. Each partially occupied Landau level leads to a set of Fermi points, and the axial anomaly for such a level can be obtained from equation (3), after multiplying by eB/h. Therefore, each level has an axial anomaly determined by equation (2). When only one Landau level is partially filled, we have the remarkable universal result for the axial anomaly described by the Adler–Bell–Jackiw formula of equation (2). For a non-relativistic electron gas, this would occur at the quantum limit. In contrast, this situation would naturally occur for Dirac/Weyl semi-metals when the Fermi level lies at zero energy, that is, when the material has a zero carrier density. Figure 7b describes the situation for a quasi-two-dimensional electronic system on approaching the quantum limit, or when the interplanar coupling becomes considerably smaller than the inter-Landau-level separation (for example, in the vicinity of the Yamaji angle). We emphasize that the observation of a pronounced, linear-in-field magnetoresistive component, as indicated by the fit in Fig. 5d, is strong experimental evidence for the proximity of PdCoO2 to the quantum limit on approaching the Yamaji angle. Therefore, we conclude that the axial anomaly should be present in every three-dimensional conducting system on approaching the quantum limit. Explicit calculations indicate that the axial anomaly would only cause negative magnetoresistance for predominant forward scattering produced by ionic impurities18,50. ρ(μ0H) ∝ (μ0H)−1, as observed here (Figs 3 and 5), would result from Gaussian impurities18. As our experimental results show, PdCoO2 is a metal of extremely high conductivity, thus necessarily dominated by small-angle scattering processes and therefore satisfying the forward-scattering criterion. In this metal the Landau levels disperse periodically, as shown in Fig. 7b,c, depending on the relative strength of the cyclotron energy ħωc=ħeB/(μme) with respect to the interlayer transfer integral tc. The condition 4tc>ħωc is satisfied unless μ0H roughly exceeds 100 T. For this reason, Fig. 7c, with multiple partially occupied Landau levels, describes PdCoO2 for fields along the c axis or for arbitrary angles away from the Yamaji ones. Nevertheless, one can suppress the Fermi points by aligning the field along a Yamaji angle, and this should suppress the associated axial anomaly. As experimentally seen, the suppression of the Fermi points suppresses the negative magnetoresistivity, indicating that the axial anomaly is responsible for it.
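The two display equations above were lost in extraction and are reconstructed here; as a consistency check on the reconstruction, multiplying the one-dimensional rate of equation (3) by the Landau-level degeneracy per unit area, eB/h, recovers the three-dimensional Adler–Bell–Jackiw rate of equation (2), since h = 2πħ:

$$\frac{eE}{\pi\hbar}\times\frac{eB}{h}=\frac{eE}{\pi\hbar}\times\frac{eB}{2\pi\hbar}=\frac{e^{2}}{2\pi^{2}\hbar^{2}}\,EB.$$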
In summary, in very clean layered metals we have uncovered a very clear correlation between the existence of Fermi points in a one-dimensional dispersion and the observation of an anomalous negative magnetoresistivity. The suppression of these points leads to the disappearance of this effect. This indicates that the axial anomaly and the related negative magnetoresistivity are not contingent on the existence of an underlying three-dimensional Dirac/Weyl dispersion. Instead, our study of PdCoO2, PtCoO2 and Sr2RuO4, which are clean metals with no Dirac/Weyl dispersion at zero magnetic field, indicates that the axial anomaly and its effects could be a generic feature of metals near the quantum limit. Nevertheless, the detection of negative magnetoresistivity would depend on the underlying scattering mechanisms, that is, it would be observable only in those compounds that are clean enough to be dominated by elastic forward scattering18,50. In a generic metal with a high carrier density, it is currently impossible to reach the quantum limit; for the available field strengths, many Landau levels would be populated, thus producing a myriad of Fermi points. In this regard, extremely pure layered metals such as (Pd,Pt)CoO2 are unique, as by just tilting the magnetic field in the vicinity of the Yamaji angle one can achieve the condition of a single, partially filled Landau level, as would happen at the quantum limit. An explicit analytical calculation of the transport lifetime in the presence of the axial anomaly due to multiple partially filled Landau levels is a technically challenging task. Therefore, at present we do not have a simple analytical formula describing the observed (μ0H)−1 behaviour of the negative magnetoresistance along the c axis (for magnetic field strengths much smaller than the one required to reach the quantum limit). Nevertheless, the suppression of this negative magnetoresistivity for fields precisely aligned along the Yamaji angles indicates unambiguously that the electronic structure at the Fermi level is at the basis of its underlying mechanism. The observation of (μ0H)−1 behaviour in the magnetoresistance around the Yamaji angle (when only one partially filled Landau level contributes) gives us the valuable insight that the anomaly-induced negative magnetoresistance is quite robust, irrespective of the number of partially filled Landau levels. However, the determination of a precise functional form for the magnetoresistance in the presence of multiple partially filled Landau levels remains a technical challenge for theorists. The situation is somewhat analogous to that of the Weyl semi-metals, which are characterized by a number of Weyl points in the first Brillouin zone37, apparently with all Weyl points contributing to the negative longitudinal magnetoresistivity46. Hence, our results suggest that the axial anomaly among pairs of chiral Fermi points may play a role in ultra-clean systems even when they are located far from the quantum limit.
Finally, it is noteworthy that a negative longitudinal magnetoresistivity is also seen in kish graphite, which is characterized by ellipsoidal electron- and hole-like FSs, at high fields on approaching the quantum limit and before the onset of a many-body instability towards a field-induced insulating density-wave ground state51. As discussed in ref. 18, the axial anomaly on approaching the quantum limit may also play a role in the negative magnetoresistivities observed in ZrTe5 (ref. 52) and in α−(ET)2I3 (ref. 53), indicating that this concept, which is the basis of our work, is likely to be relevant to a number of physical systems, in particular semi-metals.
## Methods
### Crystal synthesis
Single crystals of PdCoO2 were grown by the self-flux method through the reaction PdCl2+2CoO→PdCoO2+CoCl2, with starting powders of PdCl2 (99.999%) and CoO (99.99+%). These powders were ground for up to 60 min and placed in a quartz tube. The tube was sealed in vacuum, heated up to 930 °C in a horizontal furnace within 2 h and subsequently up to 1,000 °C within 6 h, and then cooled down quickly to 580 °C in 1 or 2 h. The tube was then heated again to 700 °C within 2 h, kept at 700 °C for 40 h and finally cooled down to room temperature at 40 °C h−1. Single crystals with sizes of approximately 2.8 × 1.3 × 0.3 mm3 were extracted by dissolving out the CoCl2 with hot ethanol.
### Single-crystal characterization
These were characterized by powder X-ray diffraction, energy dispersive X-ray analysis and electron probe microanalysis. The powder X-ray diffraction pattern indicated no impurity phases. In the crystals measured for this study, electron probe microanalysis indicated that the ratio of Pd to Co is 0.98:1, and that the amount of Cl impurities is <200 p.p.m.
### Experimental setup
Transport measurements were performed by using conventional four-terminal techniques in conjunction with a Physical Property Measurement System, an 18-T superconducting solenoid and a 35-T resistive magnet, coupled to cryogenic facilities such as 3He systems and variable-temperature inserts.
How to cite this article: Kikugawa, N. et al. Interplanar coupling-dependent magnetoresistivity in high-purity layered metals. Nat. Commun. 7:10903 doi: 10.1038/ncomms10903 (2016).
## References
1. Pippard, A. B. Magnetoresistance in Metals: Cambridge Studies in Low Temperature Physics 2 Cambridge Univ. Press (1989).
2. Pippard, A. B. Longitudinal magnetoresistance. Proc. R. Soc. A 282, 464–484 (1964).
3. Lee, M., Rosenbaum, T. F., Saboungi, M. L. & Schnyders, H. S. Band-gap tuning and linear magnetoresistance in the silver chalcogenides. Phys. Rev. Lett. 88, 066602 (2002).
4. Abrikosov, A. A. Quantum linear magnetoresistance. Europhys. Lett. 49, 789–793 (2000).
5. Zhang, W. et al. Topological aspect and quantum magnetoresistance of β-Ag2Te. Phys. Rev. Lett. 106, 156808 (2011).
6. Liang, T. et al. Ultrahigh mobility and giant magnetoresistance in the Dirac semi-metal Cd3As2 . Nat. Mater. 14, 280–284 (2015).
7. Ali, M. N. et al. Large, non-saturating magnetoresistance in WTe2 . Nature 514, 205–208 (2014).
8. Huang, S.-M. et al. A Weyl fermion semimetal with surface Fermi arcs in the transition metal monopnictide TaAs class. Nat. Commun. 6, 7373 (2015).
9. Bell, J. S. & Jackiw, R. A PCAC puzzle: π0→γγ in the σ-model. Nuovo Cimento A 60, 47–61 (1969).
10. Nielsen, H. B. & Ninomiya, M. The Adler–Bell–Jackiw anomaly and Weyl fermions in a crystal. Phys. Lett. B 130, 389–396 (1983).
11. Takatsu, H. et al. Roles of high-frequency optical phonons in the physical properties of the conductive delafossite PdCoO2 . J. Phys. Soc. Jpn 76, 104701 (2007).
12. Eyert, V., Frésard, R. & Maignan, A. On the metallic conductivity of the delafossites PdCoO2 and PtCoO2 . Chem. Mater. 20, 2370–2373 (2008).
13. Seshadri, R., Felser, C., Thieme, K. & Tremel, W. Metal-metal bonding and metallic behavior in some ABO2 delafossites. Chem. Mater. 10, 2189–2196 (1998).
14. Kim, K., Choi, H. C. & Min, B. I. Fermi surface and surface electronic structure of delafossite PdCoO2 . Phys. Rev. B 80, 035116 (2009).
15. Hicks, C. W. et al. Quantum oscillations and high carrier mobility in the delafossite PdCoO2. Phys. Rev. Lett. 109, 116401 (2012).
16. Noh, H. J. et al. Anisotropic electric conductivity of delafossite PdCoO2 studied by angle-resolved photoemission spectroscopy. Phys. Rev. Lett. 102, 256404 (2009).
17. Takatsu, H. et al. Extremely large magnetoresistance in the nonmagnetic metal PdCoO2 . Phys. Rev. Lett. 111, 056601 (2013).
18. Goswami, P., Pixley, J. & Das Sarma, S. Axial anomaly and longitudinal magnetoresistance of a generic three dimensional metal. Phys. Rev. B 92, 075205 (2015).
19. Hikami, S., Larkin, A. I. & Nagaoka, Y. Spin-orbit interaction and magnetoresistance in the two-dimensional random system. Prog. Theor. Phys. 63, 707–710 (1980).
20. Bergmann, G. Weak localization in thin films a time-of-flight experiment with conduction electrons. Phys. Rep. 107, 1–58 (1984).
21. Moses, P. & McKenzie, R. H. Comparison of coherent and weakly incoherent transport models for the interlayer magnetoresistance of layered Fermi liquids. Phys. Rev. B 60, 7998 (1999).
22. Yamaji, K. On the angle dependence of the magnetoresistance in quasi-two-dimensional organic superconductors. J. Phys. Soc. Jpn 58, 1520–1523 (1989).
23. Kurihara, Y. A microscopic calculation of the angular-dependent oscillatory magnetoresistance in quasi-two-dimensional systems. J. Phys. Soc. Jpn 61, 975–982 (1992).
24. Singleton, J. et al. Test for interlayer coherence in a quasi-two-dimensional superconductor. Phys. Rev. Lett. 88, 037001 (2002).
25. Hanasaki, H., Kagoshima, S., Hasegawa, T., Osada, T. & Miura, N. Contribution of small closed orbits to magnetoresistance in quasi-two-dimensional conductors. Phys. Rev. B 57, 1336–1339 (1998).
26. Uji, S. et al. Fermi surface and angular-dependent magnetoresistance in the organic conductor (BEDT-TTF)2Br(DIA). Phys. Rev. B 68, 064420 (2003).
27. Yagi, R., Iye, Y., Osada, T. & Kagoshima, S. Semiclassical interpretation of the angular-dependent oscillatory magnetoresistance in quasi-two-dimensional systems. J. Phys. Soc. Jpn 59, 3069–3072 (1990).
28. Hu, J. & Rosenbaum, T. F. Classical and quantum routes to linear magnetoresistance. Nat. Mater. 7, 697–700 (2008).
29. Kartsovnik, M. V., Grigoriev, P. D., Biberacher, W., Kushch, N. D. & Wyder, P. Slow oscillations of magnetoresistance in quasi-two-dimensional metals. Phys. Rev. Lett. 89, 126802 (2002).
30. Zeng, B. et al. CeCu2Ge2: challenging our understanding of quantum criticality. Phys. Rev. B 90, 155101 (2014).
31. Peskin, M. E. & Schroeder, D. V. An Introduction to Quantum Field Theory Addison-Wesley (1995).
32. Fujikawa, K. & Suzuki, H. Path Integrals and Quantum Anomalies Clarendon Press (2004).
33. Adler, S. Axial-vector vertex in spinor electrodynamics. Phys. Rev. 177, 2426–2438 (1969).
34. Wan, X., Turner, A., Vishwanath, A. & Savrasov, S. Y. Topological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates. Phys. Rev. B 83, 205101 (2011).
35. Xu, G., Weng, H., Wang, Z., Dai, X. & Fang, Z. Chern semimetal and the quantized anomalous Hall effect in HgCr2Se4 . Phys. Rev. Lett. 107, 186806 (2011).
36. Burkov, A. A. & Balents, L. Weyl semimetal in a topological insulator multilayer. Phys. Rev. Lett. 107, 127205 (2011).
37. Weng, H. M. et al. Weyl semimetal phase in noncentrosymmetric transition-metal monophosphides. Phys. Rev. X 5, 011029 (2015).
38. Lv, B. Q. et al. Observation of Weyl nodes in TaAs. Nat. Phys. 11, 724–727 (2015).
39. Yang, L. X. et al. Weyl semimetal phase in the non-centrosymmetric compound TaAs. Nat. Phys. 11, 728–732 (2015).
40. Aji, V. Adler-Bell-Jackiw anomaly in Weyl semi-metals: application to pyrochlore iridates. Phys. Rev. B 85, 241101 (2012).
41. Son, D. T. & Spivak, B. Z. Chiral anomaly and classical negative magnetoresistance of Weyl metals. Phys. Rev. B 88, 104412 (2013).
42. Kim, H.-J. et al. Dirac versus Weyl fermions in topological insulators: Adler-Bell-Jackiw anomaly in transport phenomena. Phys. Rev. Lett. 111, 246603 (2013).
43. Parameswaran, S. A., Grover, T., Abanin, D. A., Pesin, D. A. & Vishwanath, A. Probing the chiral anomaly with nonlocal transport in three-dimensional topological semimetals. Phys. Rev. X 4, 031035 (2014).
44. Burkov, A. A. Chiral anomaly and diffusive magnetotransport in Weyl metals. Phys. Rev. Lett. 113, 247203 (2014).
45. Kim, K.-S., Kim, H.-J. & Sasaki, M. Boltzmann equation approach to anomalous transport in a Weyl metal. Phys. Rev. B 89, 195137 (2014).
46. Huang, X. C. et al. Observation of the chiral-anomaly-induced negative magnetoresistance in 3D Weyl semimetal TaAs. Phys. Rev. X 5, 031023 (2015).
47. Liu, Z. K. et al. Discovery of a three-dimensional topological Dirac semimetal Na3Bi. Science 343, 864–867 (2014).
48. Neupane, M. et al. Observation of a three-dimensional topological Dirac semimetal phase in high-mobility Cd3As2 . Nat. Commun. 5, 3786 (2014).
49. Borisenko, S. et al. Experimental realization of a three-dimensional Dirac semimetal. Phys. Rev. Lett. 113, 027603 (2014).
50. Argyres, P. N. & Adams, E. N. Longitudinal magnetoresistance in the quantum limit. Phys. Rev. 104, 900–908 (1956).
51. Fauqué, B. et al. Two phase transitions induced by a magnetic field in graphite. Phys. Rev. Lett. 110, 266601 (2013).
52. Li, Q. et al. Chiral magnetic effect in ZrTe5. Nat. Phys. (in the press).
53. Tajima, N., Sugawara, S., Kato, R., Nishio, Y. & Kajita, K. Effects of the zero-mode Landau level on interlayer magnetoresistance in multilayer massless Dirac fermion systems. Phys. Rev. Lett. 102, 176403 (2009).
## Acknowledgements
We thank S. Das Sarma, V. Yakovenko, L. Balents, E. Abrahams and J. Pixley for useful discussions. The NHMFL is supported by NSF through NSF-DMR-1157490 and the State of Florida. N.K. acknowledges the support from the overseas researcher dispatch program at NIMS. P.M.C.R. and N.E.H. acknowledge the support of the HFML-RU/FOM, member of the European Magnetic Field Laboratory (EMFL). Y.M. is supported by the MEXT KAKENHI 15H05852. L.B. is supported by DOE-BES through award DE-SC0002613.
## Author information
### Contributions
N.K. performed the measurements and analysed the data. A.K., E.S.C., D.G., R.B., J.S.B., S.U., K.S., T.T., P.M.C.R. and N.E.H. contributed to the collection of experimental data at high magnetic fields. L.B. provided scientific guidance and P.G. the theoretical interpretation. H.T., S.Y. and Y.M. synthesized and characterized the single crystals. Y.I. and M.N. performed electron probe microanalysis of the measured single crystals, to confirm their high degree of purity. P.G., N.H. and L.B. wrote the manuscript with the input of all co-authors.
### Corresponding author
Correspondence to L. Balicas.
## Ethics declarations
### Competing interests
The authors declare no competing financial interests.
## Supplementary information
### Supplementary Information
Supplementary Figures 1-4, Supplementary Notes 1-4 and Supplementary References (PDF 582 kb)
Kikugawa, N., Goswami, P., Kiswandhi, A. et al. Interplanar coupling-dependent magnetoresistivity in high-purity layered metals. Nat Commun 7, 10903 (2016). https://doi.org/10.1038/ncomms10903
http://wikiwaves.org/Dispersion_Relation_for_a_Free_Surface
# Dispersion Relation for a Free Surface
## Introduction
The dispersion equation for a free surface is one of the most important equations in linear water wave theory. It arises when separating variables subject to the boundary conditions for a free surface.
The same equation arises when separating variables in two or three dimensions and we present here the two-dimensional version. We denote the vertical coordinate by $z\,,$ which points vertically upwards, and the free surface is at $z=0\,.$
We also assume a Frequency Domain Problem with frequency $\omega$, so that all variables are proportional to $\exp(-\mathrm{i}\omega t)\,.$
The water motion is represented by a velocity potential which is denoted by $\phi\,$ so that
$\Phi(\mathbf{x},t) = \mathrm{Re} \left\{\phi(\mathbf{x},\omega)e^{-\mathrm{i} \omega t}\right\}.$
The equations therefore become
\begin{align} \Delta\phi &=0, &-h<z<0,\,\,\mathbf{x} \in \Omega \\ \partial_z\phi &= 0, &z=-h, \\ \partial_z \phi &= \alpha \phi, &z=0,\,\,\mathbf{x} \in \partial \Omega_{\mathrm{F}}, \end{align}
(note that the last expression can be obtained from combining the expressions:
\begin{align} \partial_z \phi &= -\mathrm{i} \omega \zeta, &z=0,\,\,\mathbf{x} \in \partial \Omega_{\mathrm{F}}, \\ \mathrm{i} \omega \phi &= g\zeta, &z=0,\,\,\mathbf{x} \in \partial \Omega_{\mathrm{F}}, \end{align}
where $\alpha = \omega^2/g \,$)
### Separation of variables for a free surface
We use separation of variables, expressing the potential as
$\phi(x,z) = X(x)Z(z)\,$
and then Laplace's equation becomes
$\frac{X^{\prime\prime}}{X} = - \frac{Z^{\prime\prime}}{Z} = k^2$
The separation of variables equation for deriving free surface eigenfunctions is as follows:
$Z^{\prime\prime} + k^2 Z =0.$
subject to the boundary conditions
$Z^{\prime}(-h) = 0$
and
$Z^{\prime}(0) = \alpha Z(0)$
We can then use the boundary condition at $z=-h \,$ to write
$Z = \frac{\cos k(z+h)}{\cos kh}$
where we have chosen the value of the coefficient so that we have unit value at $z=0$. The boundary condition at the free surface ($z=0 \,$) gives rise to:
$k\tan\left( kh\right) =-\alpha \,$
which is the Dispersion Relation for a Free Surface.
The above equation is a transcendental equation. If we solve for all roots in the complex plane, we find that the first roots are a pair of imaginary roots. We denote the imaginary solutions of this equation by $k_{0}=\pm ik \,$ and the positive real solutions by $k_{m} \,$, $m\geq1$. The $k \,$ of the imaginary solution is the wavenumber. We substitute the imaginary roots back into the equation above and use the hyperbolic relations
$\cos ix = \cosh x, \quad \sin ix = i\sinh x,$
to arrive at the dispersion relation
$\alpha = k\tanh kh.$
We note that for a specified frequency $\omega \,$ the equation determines the wavenumber $k \,$.
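As a quick sanity check on $\alpha = k\tanh kh$, note the two standard limits: in deep water ($kh \gg 1$), $\tanh kh \to 1$, so $\alpha \approx k$, that is $\omega^2 \approx g k$; in shallow water ($kh \ll 1$), $\tanh kh \approx kh$, so $\alpha \approx k^2 h$, that is $\omega \approx k\sqrt{gh}$.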
Finally we define the functions
$\chi_{m}\left( z\right) =\frac{\cos k_{m}(z+h)}{\cos k_{m}h},\quad m\geq0$
as the vertical eigenfunctions of the potential in the open water region. From Sturm–Liouville theory the vertical eigenfunctions are orthogonal. They can be normalised to be orthonormal, but this has no advantage for a numerical implementation. It can be shown that
$\int\nolimits_{-h}^{0}\chi_{m}(z)\chi_{n}(z) \mathrm{d} z=A_{n}\delta_{mn}$
where
$A_{n}=\frac{1}{2}\left( \frac{\cos k_{n}h\sin k_{n}h+k_{n}h}{k_{n}\cos ^{2}k_{n}h}\right).$
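A minimal numerical sketch (ours, not part of the original page) for computing the roots of the dispersion relation with SciPy; the bracketing intervals follow from the monotonicity of $k\tanh kh$ and from the branches of $\tan$:

```python
import numpy as np
from scipy.optimize import brentq

def propagating_wavenumber(alpha, h):
    """Solve alpha = k*tanh(k*h) for the single real root k > 0."""
    f = lambda k: k * np.tanh(k * h) - alpha
    upper = max(alpha, np.sqrt(alpha / h)) + 1.0
    while f(upper) < 0:          # safeguard the upper end of the bracket
        upper *= 2.0
    return brentq(f, 1e-12, upper)

def evanescent_wavenumbers(alpha, h, m_max):
    """Solve k*tan(k*h) = -alpha; the m-th positive root k_m lies in
    ((m - 1/2)*pi/h, m*pi/h), where tan runs from -infinity up to 0."""
    g = lambda k: k * np.tan(k * h) + alpha
    return [brentq(g, ((m - 0.5) * np.pi) / h + 1e-9, (m * np.pi) / h - 1e-9)
            for m in range(1, m_max + 1)]

omega, grav, h = 1.0, 9.81, 10.0     # illustrative frequency, gravity, depth
alpha = omega ** 2 / grav
k0 = propagating_wavenumber(alpha, h)
print(k0 * np.tanh(k0 * h), alpha)   # the two printed numbers agree
print(evanescent_wavenumbers(alpha, h, 3))
```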
https://nforum.ncatlab.org/discussion/10229/3group/?Focus=79557
• CommentRowNumber1.
• CommentAuthorJohn Baez
• CommentTimeAug 9th 2019
Fixed third definition of 3-group.
• CommentRowNumber2.
• CommentAuthorAli Caglayan
• CommentTimeAug 9th 2019
What does it mean for a 1-morphism to have a weak inverse? Is it some morphism that tensors to the identity?
• CommentRowNumber3.
• CommentAuthorRichard Williamson
• CommentTimeAug 9th 2019
• (edited Aug 9th 2019)
Should just mean that it is an isomorphism up to a 2-isomorphism (and maybe some coherence conditions).
http://popflock.com/learn?s=Spinor_field
Spinor Field
In differential geometry, given a spin structure on an n-dimensional orientable Riemannian manifold (M, g), a section of the spinor bundle S is called a spinor field. A spinor bundle is the complex vector bundle ${\displaystyle \pi _{\mathbf {S} }:{\mathbf {S} }\to M\,}$ associated to the corresponding principal bundle ${\displaystyle \pi _{\mathbf {P} }:{\mathbf {P} }\to M\,}$ of spin frames over M via the spin representation of its structure group Spin(n) on the space of spinors Δn.
In particle physics, particles with spin s are described by a 2s-dimensional spinor field, where s is an integer or a half-integer. Fermions are described by spinor fields, while bosons are described by tensor fields.
## Formal definition
Let (P, FP) be a spin structure on a Riemannian manifold (M, g), that is, an equivariant lift of the oriented orthonormal frame bundle ${\displaystyle \mathrm {F} _{SO}(M)\to M}$ with respect to the double covering ${\displaystyle \rho :{\mathrm {Spin} }(n)\to {\mathrm {SO} }(n)\,.}$
One usually defines the spinor bundle[1] ${\displaystyle \pi _{\mathbf {S} }:{\mathbf {S} }\to M\,}$ to be the complex vector bundle
${\displaystyle {\mathbf {S} }={\mathbf {P} }\times _{\kappa }\Delta _{n}\,}$
associated to the spin structure P via the spin representation ${\displaystyle \kappa :{\mathrm {Spin} }(n)\to {\mathrm {U} }(\Delta _{n}),\,}$ where U(W) denotes the group of unitary operators acting on a Hilbert space W.
A spinor field is defined to be a section of the spinor bundle S, i.e., a smooth mapping ${\displaystyle \psi :M\to {\mathbf {S} }\,}$ such that ${\displaystyle \pi _{\mathbf {S} }\circ \psi :M\to M\,}$ is the identity mapping idM of M.
## Notes
1. ^ Friedrich, Thomas (2000), Dirac Operators in Riemannian Geometry, p. 53
## References
This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.
https://netket.readthedocs.io/en/latest/docs/configurations.html
# Configuration Options#
NetKet exposes a few configuration options which can be set through environment variables by doing something like

```bash
# without exporting it
NETKET_DEBUG=1 python ...

# by exporting it
export NETKET_DEBUG=1
python ...
```

or by setting the variable from within Python before netket is imported:

```python
>>> import os
>>> os.environ["NETKET_DEBUG"] = "1"
>>> import netket as nk
>>> print(nk.config.netket_debug)
True
```
Some configuration options can also be changed at runtime by setting them as:

```python
>>> import netket as nk
>>> nk.config.netket_debug = True
```

You can always query the value of an option by accessing the nk.config module:

```python
>>> import netket as nk
>>> print(nk.config.netket_debug)
False
>>> nk.config.netket_debug = True
>>> print(nk.config.netket_debug)
True
```
Please note that not all configurations can be set at runtime, and some will raise an error.
Options are used to activate experimental or debug functionalities or to disable some parts of netket. Please keep in mind that all options related to experimental or internal functionalities might be removed in a future release.
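As a small sketch of the behaviour described above (the exact exception type raised when assigning to a non-changeable option is not documented on this page, so the broad except clause below is an assumption):

```python
import netket as nk

# Options listed below as "Changeable: yes" can be flipped at runtime:
nk.config.netket_debug = True

# Options listed as "Changeable: no" must be set through environment
# variables before netket is imported; assigning them at runtime raises:
try:
    nk.config.netket_mpi = False
except Exception as err:  # exact exception type not specified in these docs
    print(f"not changeable at runtime: {err}")
```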
# List of configuration options#
| Name | Values [default] | Changeable | Description |
| --- | --- | --- | --- |
| NETKET_DEBUG | True/[False] | yes | Enable debug logging in many netket functions. |
| NETKET_EXPERIMENTAL | True/[False] | yes | Enable experimental features such as gradients of non-hermitian operators. |
| NETKET_MPI_WARNING | [True]/False | no | Raise a warning when running python under MPI without mpi4py and the other MPI dependencies installed. |
| NETKET_MPI | [True]/False | no | When true, NetKet will always attempt to load (and initialize) MPI. If this flag is 0, mpi4py and mpi4jax will not be imported. This can be used to prevent crashes with some MPI variants, such as Cray MPI, which cannot be initialised when not running under mpirun. |
| NETKET_USE_PLAIN_RHAT | [True]/False | yes | Since version 3.4, NetKet uses by default the split-R̂ Gelman-Rubin diagnostic in netket.stats.statistics, which detects non-stationarity in the MCMC chains (in addition to the classes of chain-mixing failures detected by plain R̂). Enabling this flag restores the previous behavior of using plain (non-split) R̂. |
| NETKET_EXPERIMENTAL_FFT_AUTOCORRELATION | True/[False] | yes | The integrated autocorrelation time $\tau_c$ is computed separately for each chain $c$. To summarize it for the user, Stats.tau_corr is changed to contain the average over all chains, and a new field Stats.tau_corr_max is added containing the maximum autocorrelation among all chains (which helps to identify outliers). Using the average $\tau$ over all chains seems like a good choice, as it results in a low-variance estimate. |
| NETKET_EXPERIMENTAL_DISABLE_ODE_JIT | True/[False] | yes | Disables the jitting of the whole ODE solver, mainly used within TDVP solvers. The jitting is sometimes incompatible with GPU-based calculations, and on large calculations it gives negligible speedups, so it might be beneficial to disable it. |
| NETKET_SPHINX_BUILD | True/[False] | no | Set to True when building documentation with Sphinx. Disables some decorators. This is for internal use only. |
https://kb.osu.edu/dspace/handle/1811/9915
# Knowledge Bank
## University Libraries and the Office of the Chief Information Officer
# ${\Delta} v = 1$ AND ${\Delta} v = 2$ TRANSITIONS OF NO EMISSION IN THE INFRARED
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/9915
Files: 1976-WA-06.jpg (69.23Kb, JPEG image)
Title: ${\Delta} v = 1$ AND ${\Delta} v = 2$ TRANSITIONS OF NO EMISSION IN THE INFRARED
Creators: Mantz, A. W.; Rao, K. Narahari
Issue Date: 1976
Abstract: Continuing the investigation of NO emissions (${\Delta} v = 1$ transitions) in the 5-7 ${\mu} m$ region reported by Mantz, Shafer, and Rao,${^{1}}$ the experimental conditions required to study the ${\Delta} v = 2$ transitions were established and the emission bands observed, including the 6-4 band. A $0.1 cm^{-1}$ Fourier transform spectrometer was used in this research.
URI: http://hdl.handle.net/1811/9915
Other Identifiers: 1976-WA-6
http://headinside.blogspot.com/2013/03/
## Offbeat Memory Challenges
Published on Sunday, March 31, 2013
When you first learn memory techniques, you also tend to apply them to a small set of standard lists, such as presidents, monarchs, countries, states, capitals, and so on.
It's not uncommon to start wondering whether there's any other type of lists to memorize, beyond just school standards. In this post, you'll find enough resources to challenge your memory skills for the rest of your life!
mentalfloss.com, which has long billed itself as, “Where knowledge junkies get their fix,” is a natural starting point. They also have a free iPad app (iTunes Link) in which you can read full issues, and many of these issues are themselves free thanks to sponsorship from Boeing. Earlier this month, mental_floss also started their own YouTube channel, including fun subjects such as 45 Facts About U.S. Presidents:
Not surprisingly, reddit.com can also be a good source, but there is the problem of too much information there. How do you find good sets of information to memorize there? Fortunately, reddit has done much of the work for you, with this network of subreddits, which I'll refer to as SFWP (take a look at any individual subreddit in the network, and you'll see why). Through the SFWP Network, you can easily find lots of fascinating information to challenge your memory. For example, in the map part of the network, you can see things like a map of the most common surnames in European countries in 2011, or even the most common European male and female first names in 2012. Try exploring, and you'll be amazed at the endless ideas these sections inspire.
Another fun subreddit is r/wordplay, where you can run across all sorts of weird and bizarre uses for words. Here are some 4-by-9 word squares, in which every horizontal and vertical line makes a legitimate English word, and here you can find a rather nasty tongue twister.
Wordplay is a rich source of memory challenges. You might amuse yourself with this list of heteronyms and antagonyms, as well as a wide array of other word oddities and trivia!
When I originally created my free app Verbatim 2, I designed it for memorizing things like standard speeches, lyrics, and poems. However, when combined with some wordplay, memorizing AND repeating can be quite a feat. Matthew Goldman's Goonerisms Spalore site is a perfect example. The spoonerisms there range from the short and simple, such as Drain Bamage, to fully spoonerized stories, such as the many versions of Rindercella.
Spoonerisms aren't the only type of wordplay out there, though. Many variations of the classic “Who's On First” skit are fun to try and memorize and repeat.
As some closing inspiration, here's a collection of general wordplay videos, many of which have become classics in their own right:
## Quick Snippets
Published on Thursday, March 28, 2013
March's snippets may be a little late, but they are here!
This month, I present several classic geometry puzzles. Not all of them are solvable, but they are all interesting.
• Let's start this with one of the longest-running, and apparently most maddening, geometry puzzles in history! James Grime discusses “squaring the circle,” the challenge of constructing a square and a circle with the same area, using only a straight edge and a compass, in a finite number of steps:
Despite the impossibility, you can find many interesting approaches which have been tried over the years.
• One geometry puzzle that recently gained plenty of attention over at Gizmodo is the Winston Freer Tile Puzzle. You can purchase your own here, or a smaller version here, but ponder the seeming impossibility of it first:
• James Tanton posted an interesting geometric challenge which can be presented in stages. The first challenge is just to determine the size of an arc without a protractor. This is usually solved by finding the center first, but can you do it without finding the center?
• Sometimes geometry itself is the puzzle! Jeff Dekofsky, via TedEd, discusses Euclid's puzzling parallel postulate. This is another part of geometry in which the answer will be forever closed off to us, but will remain interesting to ponder:
• I'll wrap March's snippets with Emma Rounds' poem, “Plane Geometry,” a parody of Lewis Carroll's classic “Jabberwocky”:
‘Twas Euclid, and the theorem pi
Did plane and solid in the text,
All parallel were the radii,
And the ang-gulls convex’d.
“Beware the Wentworth-Smith, my son,
And the Loci that vacillate;
Beware the Axiom, and shun
The faithless Postulate.”
He took his Waterman in hand;
Long time the proper proof he sought;
Then rested he by the XYZ
And sat awhile in thought.
And as in inverse thought he sat
A brilliant proof, in lines of flame,
All neat and trim, it came to him,
Tangenting as it came.
“AB, CD,” reflected he–
The Waterman went snicker-snack–
He Q.E.D.-ed, and, proud indeed,
He trapezoided back.
“And hast thou proved the 29th?
Come to my arms, my radius boy!
O good for you! O one point two!”
He rhombused in his joy.
‘Twas Euclid, and the theorem pi
Did plane and solid in the text;
All parallel were the radii,
And the ang-gulls convex’d.
## Werner Miller's Sub Rosa 3 and 4
Published on Sunday, March 24, 2013
If you're familiar with Werner Miller's work, you're in for another treat from him!
If you're not familiar with Werner Miller's work, you're in for several treats!
For those not familiar with him, he's a creator of mathematical magic from Germany. What makes his work so special is his knack for ingeniously and effectively disguising the mathematical principles used in his routines. If you haven't already done so, go back through this site to explore some of his other effects. You'll quickly get an appreciation for his style of thinking.
Last October, he released the first two volumes in a new series called Sub Rosa (Latin for in strictest confidence). Earlier this month, Werner Miller released two new books in the series, Sub Rosa 3 and Sub Rosa 4.
Thanks to the math involved, none of the routines require any difficult sleight-of-hand. For example, one of the routines from Sub Rosa 4 is titled, “ESPecial Countdown,” and performed with ESP cards. After the ESP cards are mixed, an audience member chooses a card. This card is set aside face down, and no one, not even the audience member, knows which ESP symbol was chosen.
The performer then removes 8 cards, and asks for a number (8 or less) from which to count down. The performer then runs through a procedure in which the cards are counted down. When the countdown reaches 0, the card at that position is removed. Interestingly, it proves an exact match for the card previously chosen by the spectator.
Generously, Werner Miller has allowed me to reveal the method to “ESPecial Countdown” here on Grey Matters. The method is explained below, and the PDF can be downloaded via this link.
I'd like to thank Werner Miller for his generosity in letting me share this routine with you. If you'd like to show him your thanks as well, you can buy Sub Rosa 3, Sub Rosa 4, and his other works over at Lybrary.com.
## Cube Roots From Scam School
Published on Thursday, March 21, 2013
Scam School has just covered one of my favorite mathematical feats of all time!
Doing cube roots in your head is a skill that seems very impressive, but as you'll see in the video, you can learn the basics and start practicing in less than 10 minutes.
If you take a number x and multiply it by itself two more times, x × x × x, or x³ (x cubed), then x³ is referred to as the cube of x, and x is referred to as the cube root of x³. For example, since 3 × 3 × 3 = 27, we say that 27 is the cube of 3, and conversely that 3 is the cube root of 27.
Now that we're clear on the terminology, here's the Scam School episode about doing cube roots in your head:
Over at the Mental Gym, I've had a post explaining this feat for quite some time. Once you feel confident doing cube roots in your head in this manner, you can move on to 5th roots! The approach is similar, but strangely, doing 5th roots is actually easier than cube roots!
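If you'd like to tinker with the underlying arithmetic, here's a small Python sketch (my own illustration of the standard two-digit version, not code from the episode): the ones digit of the root comes from the cube's last digit, and the tens digit comes from comparing the thousands portion against the cubes of 1 through 9.

```python
# Last digits of cubes pair up: 2 and 8 swap, 3 and 7 swap, the rest map to themselves.
LAST_DIGIT = {0: 0, 1: 1, 2: 8, 3: 7, 4: 4, 5: 5, 6: 6, 7: 3, 8: 2, 9: 9}
CUBES = [n ** 3 for n in range(10)]

def cube_root(c):
    """Cube root of a perfect cube of a two-digit number (10-99)."""
    ones = LAST_DIGIT[c % 10]          # read the cube's last digit
    thousands = c // 1000              # ignore the last three digits
    tens = max(n for n in range(10) if CUBES[n] <= thousands)
    return 10 * tens + ones

print(cube_root(54872))  # 38, since 38 x 38 x 38 = 54,872
```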
Once you master squaring numbers ending in 5, you can even handle square roots in a similar manner, as well.
I've always found it somewhat amusing that, with this approach, the lower roots are more challenging than the higher ones.
How far have people taken this approach? There are several people who have practiced finding the 13th root of a 100-digit number, and beyond! For a feat like this, any time under 2 minutes is considered an excellent time. It is, of course, far more challenging to turn this one into an entertaining bar stunt.
Practice this one and have fun displaying your new-found skill for your friends!
## St. Patrick's Day Puzzles
Published on Sunday, March 17, 2013
Happy St. Patrick's Day!
In honor of the holiday, I thought I'd share some classic Irish-themed puzzles for you to ponder.
I'll start with a simple puzzle, which you can simply enjoy without choosing to solve it. Developed by Canadian puzzler and magician Mel Stover, this first one is called The Vanishing Leprechaun:
If you'd like to get a closer look at this page, there are several sites, such as this one, which feature the artwork in detail. On that linked page, you can click the top illustration to switch between the two modes yourself, or look at the 2 individual stages in the illustrations below that. The solution to the puzzle is also available on that page, but spend some time trying to figure it out for yourself first.
It seems the Irish have a knack for vanishing in tricky and amusing ways. Take the story of Casey, for example, who marched in many a St. Patrick's Day parade. Sam Loyd's puzzle How Many Men Were In The Parade? comes to us courtesy of Martin Gardner, and can be read online (Part 1, Part 2), or as follows:
During a recent St. Patrick's Day parade, an interesting and curious puzzle developed. The Grand Marshall issued the usual notice setting forth that “the members of the Ancient and Honorable Order of Hibernians will march in the afternoon if it rains in the morning, but will parade in the morning if it rains in the afternoon.” This gave rise to the popular impression that rain is to be counted as a sure thing on St. Patrick's Day. Casey boasted that he “had marched for a quarter of a century in every St. Patrick's day parade since he had become a boy.”
I will pass over the curious interpretations which may be made of the above remark, and say that old age and pneumonia having overtaken Casey at last, he had marched on with the immortal procession. When the boys met again to do honor to themselves and St. Patrick on the 17th of March, they found that there was a vacancy in their ranks which it was difficult to fill. In fact, it was such an embarrassing vacancy that it broke up the parade and converted it into a panic-stricken funeral procession.
The lads, according to custom, arranged themselves ten abreast, and did march a block or two in that order with but nine men in the last row where Casey used to walk on account of an impediment in his left foot. The music of the Hibernian band was so completely drowned out by spectators shouting to ask what had become of the “the little fellow with the limp,” that it was deemed best to reorganize on the basis of nine men to each row, as eleven would not do.
But again Casey was missed and the procession was halted when it was discovered that the last row came out with but eight men. There was a hurried attempt to form with eight men in each row; again with seven, and then with five, four, three, and even two, but it was found that each and every formation always came out with a vacant space for Casey in the last line. Then, although it strikes us as silly superstition, it became whispered through the lines that Casey's “dot and carry one” step could be heard. The boys were so firmly convinced that Casey's ghost was marching that no one was bold enough to bring up the rear.
The Grand Marshal, however, was a quick-witted fellow who speedily laid out that ghost by ordering the men to march in single file; so, if Casey did march in spirit, he brought up the rear of the longest procession that ever did honor to his patron saint.
Assuming the number of men in the parade did not exceed 7,000, can you determine just how many men marched in the procession?
I'll keep you guessing until Thursday, when I'll reveal the answer to this puzzle.
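(If you'd rather let a computer do the drudgery, the conditions translate directly into a brute-force search. This sketch is my own, and reading "eleven would not do" as "rows of eleven don't divide evenly" is an assumption on my part:)

```python
# Every row size from 2 through 10 leaves exactly one vacancy (Casey's spot),
# while rows of eleven "would not do" (no even division), and N <= 7000.
candidates = [n for n in range(1, 7001)
              if all(n % k == k - 1 for k in range(2, 11))
              and n % 11 != 0]
print(candidates)
```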
If your puzzle tastes run more towards the jigsaw variety, try one of Jigzone's St. Patrick's Day online jigsaw puzzles! The ones with the all-over clover patterns are especially challenging.
That's all for now. I simply wished to share some quick puzzles for the holiday. If you have any favorite St. Patrick's Day puzzles you'd like to share, let me know about it in the comments!
## Grey Matters' 8th Blogiversary!
Published on Thursday, March 14, 2013
Can you believe it? Grey Matters is 8 years old today, March 14th! How long is 8 years? When I started this blog, YouTube had been formed as a company, but it would be another month before they would publicly unveil their website.
Besides being this blog's 8th blogiversary, it's also Pi Day (3/14) and Albert Einstein's birthday, so let's have a little fun, shall we?
Mental_Floss.com helps get the party started by sharing 11 unserious photos of Einstein. Yes, of course the famous tongue picture is there, but there are more with which you may not be familiar.
For a Pi Day party, we need food, and what better food than pies? Matt Parker shows us how to calculate Pi using pies:
If you're concerned about food being used in this way, Matt Parker assures:
Your next concern might be about the accuracy of the measurement, which Wolfram|Alpha gives here to 10 places. At first glance, 3.138 doesn't seem as impressive as it should be.
However, if you remember last month's post on bringing pi digits to life, you'll recall that it only takes 38 digits to measure a universe-sized circle with an accuracy to the nearest hydrogen atom. Considering that, measuring a circle in terms of pies to 3.138 is less surprising, and is a considerably good result.
When I was doing research for my continued fractions post, I was thrilled to discover L. J. Lange's continued fraction of Pi, which he developed in his May 1999 paper An Elegant Continued Fraction for π:
$$\pi=3+\cfrac{1}{6+\cfrac{9}{6+\cfrac{25}{6+\cfrac{49}{6+\cfrac{81}{6+\cfrac{121}{6+\ddots}}}}}}$$
As much as you hear about the randomness and unpredictability of Pi, this continued fraction has an astonishingly simple and regular pattern. The denominators, of course, are all 6. The numerators are the squares of the odd numbers starting with one. In fact, the numerator at any level n can be calculated with the formula (2n − 1)². For example, the 6th numerator is calculated as (2 × 6 − 1)² = (12 − 1)² = 11² = 121. Using Gauss' Kettenbruch notation, we can then write this formula for Pi as:
$$\pi=3+\underset{n=1}{\overset{\infty}{\LARGE\mathrm K}}\;\frac{(2n-1)^{2}}{6}$$
How fast does this get us to the 38 digits for our universe-sized circle which measured to the nearest hydrogen atom?
We can use Wolfram|Alpha to get an idea. The first 10 levels of this fraction give us Pi to 4 digits (the integer part plus 3 decimal places). We get Pi accurate to 7 digits by the 100th level, and to 10 digits by the 1000th level.
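If you'd like to check those convergence figures yourself, here's a quick Python sketch (mine, not from Lange's paper) that evaluates the truncated fraction bottom-up with exact rational arithmetic:

```python
from fractions import Fraction

def lange_pi(levels):
    """Lange's continued fraction for pi, truncated after `levels` levels:
    numerators (2n - 1)^2 over denominators of 6, evaluated bottom-up."""
    x = Fraction(0)
    for n in range(levels, 0, -1):
        x = Fraction((2 * n - 1) ** 2) / (6 + x)
    return 3 + x

for levels in (10, 100, 1000):
    print(levels, float(lange_pi(levels)))  # accurate to ~4, ~7, ~10 digits
```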
Assuming this logarithmic rate of 3 digits for every order of magnitude continues, we would need to go trillions of levels deep to get universe-level accuracy!
Alas, it seems the beauty of this formula's pattern is at the cost of slow convergence to Pi. Since the original formula takes the process to infinite levels, however, at least it gives Pi in the long run. If you're wondering how someone like Archimedes worked out Pi 2200 years ago, without textbooks, calculators, or even calculus, it's actually due to this ingenious approach described at BetterExplained.com. Note that after taking his own approach 96 levels deep, Archimedes also calculated Pi accurate to only 4 digits.
Naturally, many others are celebrating Pi Day today. Check out Ben Vitale's Some Musings on Pi, both part 1 and part 2. The Math Dude podcast also took some time to celebrate the world's best known mathematical constant. One of the more amusing moments in Pi history was the time that Indiana almost legislated the value of Pi to be exactly 3.2, and James Grime tells the story well.
Thanks to all my readers for reading Grey Matters and keeping this blog going for 8 wonderful years! Now, it's time for you to keep an eye out for what I have in store for my 9th year.
## John Conway's Rational Tangles
Published on Sunday, March 10, 2013
John Conway has brought many new mathematical recreations to the general public. Martin Gardner wrote about him quite often in his Scientific American columns, and I've referred to his works many times here on Grey Matters.
Today's post focuses on a puzzle Conway created. It uses 4 people, some ropes, and is referred to as Rational Tangles, or just Tangles, for short.
Imagine 4 people standing as if they were at the 4 points of a square. In the diagram below, each colored dot represents a distinct person:
Next, 2 ropes are handed out. One of the ropes is held by Blue at one end, and Red at the other end. The other rope is held by Green at one end, and Black on the other end, like this:
In reality, both ropes would be the same color. 2 different color ropes are only used here to make things clearer. Later on, I'll be referring to the above illustration as the starting position. Now that we have the basic set-up, it's time to introduce the rules of the puzzle.
From this point, there are only two moves allowed. The first is a simple 90° clockwise rotation, as seen from above, of all players. This move is referred to as a rotation. From the starting position (above), a clockwise rotation would end with the players and ropes in this position:
The only other legal move, known as a tangle, is performed by having the players in the upper right and lower right (again, as seen from above), switch places. As this happens, the player in the upper right lifts up their rope, and the player in the lower right goes underneath. Not surprisingly, this results in a crossing of the ropes.
Going back to our original starting position, with Red in the upper right and Black in the lower right, a call for a tangle would start with Red holding its end up while heading to the lower right, and Black going underneath Red's rope while heading for the upper right. At completion, the result would appear like this (again, remember this is starting from the original starting position, NOT the previous illustration):
The idea is to have the 4 people involved, and/or anyone else who is watching, call out a long combination of tangles and rotations, in order to ensure that the 2 ropes are well intertwined. The challenge is then, with the tangled section of the ropes hidden from view, to untangle the ropes once again.
You might think the solution would simply be to use a memory system to memorize all the calls made, and then call them out in reverse order, making sure you untwist and unrotate at the appropriate steps. The problem here is that untwists and unrotations aren't legal moves.
The only moves you can use, remember, are a 90° clockwise rotation, and a switching of the two rightmost people, such that the rope of the upper right person goes over the rope held by the lower right person as they switch.
Using only those two moves, is it always possible to return to an untangled state? Surprisingly, the answer is yes. The question of course, is how do you do it?
The answer involves keeping track of the tangles mathematically. As there are only two types of moves, this is easier than it might sound. However, the mathematics does involve working with fractions. Should you need a refresher course on fractions, I recommend starting with Math Dude's series of podcasts on fractions (starting with the Nov. 16, 2012 episode). WhyU's video series on fractions, mainly from episode 12 to episode 17, are also very helpful. You'll also want to make sure you're comfortable adding and subtracting negative fractions with a common denominator.
Yes, I know the mentions of fractions, which many math students consider the real F-word, makes this sound scary, but once you get used to the process, it's not as bad as it may seem at first.
Getting back to Conway's Tangles, James Tanton's Rational Tangles PDF is the easiest introduction to the mathematics behind this puzzle.
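To give you a taste of the bookkeeping, here's a small Python sketch. It follows the convention used in Tanton's notes, which I believe matches Conway's: the untangled state is 0, a tangle sends the current value x to x + 1, and a rotation sends x to −1/x.

```python
from fractions import Fraction

def value_after(calls):
    """Track the tangle number for a string of 'T' (tangle) / 'R' (rotate)."""
    x = Fraction(0)
    for move in calls:
        if move == 'T':
            x += 1
        else:  # 'R'; rotating the untangled state (x = 0) is normally avoided
            x = Fraction(-1) / x
    return x

def untangle(x):
    """One workable strategy: tangle while the value is negative, rotate while
    it's positive. The denominators shrink Euclid-style, so 0 is reached."""
    moves = []
    while x != 0:
        if x < 0:
            x += 1
            moves.append('T')
        else:
            x = Fraction(-1) / x
            moves.append('R')
    return moves

knot = value_after('TTRT')   # a tangled state: 1/2
print(untangle(knot))        # ['R', 'T', 'T'] brings it back to 0
```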
If you're viewing this on a device that supports the Flash plug-in, NRich.maths.org has some wonderful tools to help you understand Conway's Rational Tangles, as well.
Not surprisingly, you're going to get the best understanding of Rational Tangles from the inventor himself. I'll wind this post up with John Conway's full, 74-minute Tangles, Bangles, and Knots lecture, courtesy of UCTV Prime:
## Making The Most Of What You Know
Published on Thursday, March 07, 2013
I once met a man who didn't get too far in school. He explained that he had to leave school to help support his family, so the most advanced math he ever learned was multiplying and dividing by 2. He said he never got to understand more advanced things like fractions.
Like me, you might expect that this guy wasn't too bright when it came to math, but he understood how to make the most of what he did know.
I chose two random numbers, not so high as to embarrass him, and asked, “So, if I asked you to multiply, say, 38 by 29, you couldn't do it?” He explained that he could, but he just had to simplify the problem in his own way.
He asked me to write down the problem on a piece of paper:
$$38\times29$$
He reminded me that he had no problem multiplying and dividing by 2, so he explained that he was going to simplify the problem. He calculated, “38 divided by 2 is 19, so put 19 below the 38. 29 times 2 is 58, so put 58 below the 29."
$$\begin{array}{rcr}38 & \times & 29\\19 & & 58\end{array}$$
Now, I've seen multiplication simplified this way before, but since he'd mentioned he didn't care for fractions, I was wondering how he could simplify the problem any further.
Confidently, he continued, “Next, we do the same thing again. 19 divided by 2 is 9, so...” I interrupted, “19 divided by 2 is 9½, not 9.” He repeated that he never could deal with fractions, so he just told me to forget the fraction, and just put down 9.
I had to snicker a little, as I couldn't see how this would work. I then did as I was told, put the 9 down, and then put down the double of 58 he'd calculated, 116.
$$\begin{array}{rcr}38 & \times & 29\\19 & & 58\\9 & & 116\end{array}$$
He continued in this manner, always getting the doubling right, but putting down things like 4 as half of 9.
$$\begin{array}{rcr}38 & \times & 29\\19 & & 58\\9 & & 116\\4 & & 232\\2 & & 464\\1 & & 928\end{array}$$
I was about to explain that, due to the lack of fractions, the problem didn't come down to 1 times 928, so 928 wasn't the correct answer, when he stopped me. He explained he wasn't finished yet, and said that he was now going to run through the left column and identify each number as odd or even.
At this point, I was intrigued, as I couldn't see how he was going to get any kind of answer to the problem.
He explained that he was going to mark a little 0 next to the even numbers, and a little 1 next to the odd numbers. He quickly went down the column, muttering, “19 odd, 9 odd, 4 even...” and marking them accordingly:
$$\begin{array}{rcr}38 & \times & 29\\{}^{1}19 & & 58\\{}^{1}9 & & 116\\{}^{0}4 & & 232\\{}^{0}2 & & 464\\{}^{1}1 & & 928\end{array}$$
"Now what?” I wondered out loud. He replied, “Now, I add up ONLY the numbers in the right column whose numbers in the left column are marked with a 1! According to what we have on the paper, that's 58 + 116 + 928."
Instead of writing the answer and carrying all the values, he did the addition problem in his head by working from left to right, starting in the hundreds column! He said, “100...plus 900 is 1000...plus 50 is 1050...plus 10 is 1060...plus 20 is 1080...plus 8 is 1088...plus 6 is 1094...plus 8 is 1102! That's it!"
"That's what?” I asked as he wrote the total proudly below the other numbers. He explained, “The answer to your multiplication problem, 38 times 29!"
$$\begin{array}{rcr}38 & \times & 29\\{}^{1}19 & & 58\\{}^{1}9 & & 116\\{}^{0}4 & & 232\\{}^{0}2 & & 464\\{}^{1}1 & & 928\\\hline & & 1102\end{array}$$
I was astounded. Double checking, I found that not only did 58 + 116 + 928 equal 1102, but that 38 times 29 was 1102 as well! His way of working through the problem was strange, but I had to admit it worked!
"There's more!” he announced, while I was still recovering from the shock. He pointed out that we hadn't identified one of the numbers on the left as odd or even, and quickly marked 38 with a 0, denoting it was even.
$$\begin{array}{rcr}{}^{0}38 & \times & 29\\{}^{1}19 & & 58\\{}^{1}9 & & 116\\{}^{0}4 & & 232\\{}^{0}2 & & 464\\{}^{1}1 & & 928\\\hline & & 1102\end{array}$$
He then asked if I would read off the 1s and 0s column in order, reading from the bottom up. I verbalized, “100110.” This man, with a gleam in his eye, inquired, “You do realize that's 38 in binary, don't you?” Sure enough, Wolfram|Alpha backed him up on this.
"How could you possibly know about binary? I wondered. He joked, “Originally, I figured computers were beyond me, but once it was explained that they dealt mainly in powers of 2, and they had difficulty with fractions, too, I could instantly relate."
"Since school, of course, I learned about a few more things about math here and there, such as factorials. You know, 4 factorial is 4 × 3 × 2 × 1, or 24. 5 factorial is 5 × 4 × 3 × 2 × 1, or 120, and so on,” he recalled.
After I stated that I was familiar with them, he pointed out that when you work with larger numbers like that, you tend to get more and more multiples of 5, and of course many more even numbers, so the factorial of any number from 5 upwards always ends in 0. The factorials of larger numbers, of course, end in many zeroes.
He then explained that calculations like this helped him work out exactly how many zeroes would be at the end of a number's factorial. I was confused, but I no longer doubted him. He added, “We've been ignoring the 29 over there, so let's work out how many zeroes are at the end of 29 factorial. I'll start with 58, and erase the rightmost digit from it, leaving 5 there. With the number below it, I'll erase the 2 rightmost digits. In the numbers below, I'll erase the 3 rightmost digits, and so on.” When he'd finished, the paper looked like this:
$$\begin{array}{rcr}{}^{0}38 & \times & 29\\{}^{1}19 & & 5\\{}^{1}9 & & 1\\{}^{0}4 & & \\{}^{0}2 & & \\{}^{1}1 & & \\\hline & & 1102\end{array}$$
Using the remaining 5 and 1, he simply added them up to get 6, and claimed that 29 factorial had exactly 6 zeroes at the end of it (known as trailing zeroes). I had Wolfram|Alpha work out 29 factorial, and sure enough, while there were 7 zeroes in the entire number, exactly 6 of them were trailing zeroes. Embarrassingly, he pointed out that I could have asked Wolfram|Alpha about the zeroes directly, and saved myself the trouble of counting them.
Like I wrote earlier, this guy may not have known much, but he could definitely make the most of what he did know.
---
The story above is fictional; the math, however, is 100% real. The process of doubling and halving numbers, and adding only particular numbers, is commonly referred to as “Russian Peasant Multiplication.” I discussed both this and the halving method to convert to binary in last October's Powers of 2 post, and I first discussed this approach to the trailing zeroes problem last December.
Amazingly, the process above, once understood, works for multiplying any two whole numbers. Unlike many mathematical feats taught on Grey Matters, the focus is not on speed or mental ability, but rather on the sheer variety, power, and unexpected simplicity that can be derived from just a single math problem.
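For the curious, here's a compact Python sketch of both tricks from the story (my own rendering of them):

```python
def peasant_multiply(a, b):
    """Russian Peasant Multiplication: halve a (dropping any fraction),
    double b, and sum the b-values sitting beside odd a-values."""
    total = 0
    while a >= 1:
        if a % 2 == 1:        # the rows he marked with a little 1
            total += b
        a, b = a // 2, b * 2
    return total

def trailing_zeros_of_factorial(n):
    """Trailing zeroes of n!: n//5 + n//25 + n//125 + ... This is what the
    digit-erasing trick computes, since floor(2n/10) = n//5,
    floor(4n/100) = n//25, and so on down the doubling column."""
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

print(peasant_multiply(38, 29))          # 1102
print(trailing_zeros_of_factorial(29))   # 6
```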
Try it out, have fun, and explore this unusual approach to math.
## The Weird World of Continued Fractions
Published on Sunday, March 03, 2013
In my previous Quick Feats post, I briefly made use of continued fractions.
As a concept, they aren't well known, yet they are well worth exploring. When you start learning about continued fractions, there are many seemingly-endless surprises!
Let's start with a simple division problem, such as 47 ÷ 17. If we run that problem through Wolfram|Alpha, it's pretty much what we expect: 47/17 simplifies to 2 + 13/17, the decimal goes on forever, and the first 16 digits after the decimal point repeat.
Let's learn something new by changing our point of view. Go through this desmos.com demo, which uses a 47 by 17 rectangle. At each stage, the rectangle is divided up into the largest squares possible. The same is then done with any remaining area, until the entire area is divided up into squares of various sizes.
Assuming you've gone through the process visually and geometrically, we're now going to repeat the process arithmetically. We're going to be dealing with fractional division and reciprocals, so here's a video refresher course, should you need it.
Instead of starting with a 47 by 17 rectangle, we'll just start with the problem 47/17. We've already seen that this simplifies to 2 + 13/17, and it isn't hard to see how this relates to dividing up our rectangle into 2 perfect squares, with a 13 by 17 rectangle left over.
The next step was dividing up the 17 by 13 rectangle into one 13 by 13 square, leaving a 4 by 13 rectangle. We need to keep our fraction the same, but somehow redefine it in terms of 13s at this point. Starting from the fact that a fraction multiplied by its reciprocal equals one, we can work out the following:
$$\frac{13}{17} \times \frac{17}{13} = 1$$

$$\left ( \frac{13}{17} \times \frac{17}{13} \right ) \div \frac{17}{13} = 1 \div \frac {17}{13}$$

$$\frac{13}{17} = 1 \div \frac{17}{13} = \frac {1}{\frac{17}{13}}$$
Yes, it's a rather strange-looking result, but at least we have the 13 on the bottom, where we need it, and the value of the fraction remains the same. Putting the 2 back into the equation, 2 + 13/17 becomes:
$$2+\frac{1}{\frac{17}{13}}$$
Remember how we took that 13 by 17 rectangle and divided it up into a single 13 by 13 square with a 4 by 13 rectangle left over? Simplifying 17/13 into 1 + 4/13 is the same thing. Not surprisingly, we can repeat this process of flipping and simplifying the fractions until we get down to our 1 by 1 squares:
$$2+\frac{1}{1+\frac{4}{13}}=2+\frac{1}{1+\frac{1}{\frac{13}{4}}}=2+\frac{1}{1+\frac{1}{3+\frac{1}{4}}}$$
Due to the way in which we flipped the fractions, it's not hard to understand why all the numerators (top numbers of the fractions) are 1. In fact, this is the standard way in which continued fractions are written, with all the numerators as 1 (there are exceptions, of course).
Ignoring the numerators for the moment, look at the sequence of the other numbers - 2, 1, 3, 4. If you walked through the desmos.com demo I linked above, you'll recognize this right away! The geometric process resulted in TWO 17 by 17 squares, ONE 13 by 13 square, THREE 4 by 4 squares, and FOUR 1 by 1 squares, just as our continued fraction resulted in 2, 1, 3, and 4!
That's basically what continued fractions do. They show you how to break up a number so as to better understand its structure, and can often help you discover useful patterns in the process.
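The squares-in-a-rectangle process is really just the Euclidean algorithm in disguise, which makes the terms easy to compute. A quick sketch:

```python
def continued_fraction(p, q):
    """Continued-fraction terms of p/q; each quotient counts the squares
    carved out of the p-by-q rectangle at that stage."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

print(continued_fraction(47, 17))  # [2, 1, 3, 4]
```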
To get a better understanding of continued fractions in a very clear manner, there's a wonderful father-and-son video series called MathForKids that explains them to any beginner very well. The following is their first continued fraction video:
Towards the end of that video, there's another surprise; the continued fractions help solve quadratic equations with far less difficulty than you probably remember from your days in school!
The second video in the series starts with the simplest continued fraction (all 1s), and yet another surprise develops from this simple pattern. The third video in the series shows you a wonderful shortcut for evaluating continued fractions that automatically generates approximate fractions for any number! The fourth video focuses on working out the square root of 2, and the final video focuses on generating Pi approximations.
For a detailed understanding of the amazing power of continued fractions, R. Knott's course, complete with homework assignments, is tough to beat. It even begins with a similar rectangular division explanation with which you're already familiar.
Plus magazine's Chaos In Numberland article goes on to show you some of the amazing uses to which continued fractions have contributed.
As I mentioned at the beginning, the surprises you get as you understand more and more about continued fractions are a consistent treat. Take the time to explore them, and the treasures you'll discover will be well worth the time.
https://forum.snap.berkeley.edu/t/accessing-the-snapcloud-in-a-snap-project-will-get-you-permanently-banned/10748
# Accessing the SnapCloud In a Snap! project will get you permanently banned
Folks, please do not attempt to access the SnapCloud in your Snap projects. We've taken some measures to prevent that, and some of you seem to regard this as some kind of technical challenge or contest to set up your own proxy server or to otherwise obfuscate your queries. The reason we don't want you to do that is to protect this community so that Snap! is allowed to be used in schools. By keeping us busy with looking after your more or less inventive ways to break our rules you are causing me work and worry, and you are compromising the basis of this community. Therefore let me repeat this once and for all:
Any attempt to access the SnapCloud via a Snap project will get you instantly permanently banned from this community, as will any discussion of this rule.
I can't believe I even have to say this over and over again.
Never knew you could access it
Surely it's not something I could do by accident, right? I want to take every precaution possible so that I can avoid this. I don't quite understand what's going on. Could you explain a little deeper?
Are my students and I able to just save our Snap! projects to the SnapCloud? I want to make sure we haven't been doing anything wrong. Thanks!
Welcome! No worries. Of course y'all can (and should) save your projects (as many as you wish) to the cloud often and all the time!
There is no way in which you or your students "accidentally" can even come close to breaking this rule. The Snap cloud currently has an API which we've made sure a Snap project can't access. This is sparking some creativity by a small group of - hopefully - kids who are hosting their own proxy servers to make Snap projects that can read or write data to their account. This isn't something you accidentally try.
same i never knew you could access it
Snap! has a cloud server? I did not even know. Is this saying any and all projects with cloud are a no-no…? Will I have to delete them, because it may take me some time to find them.
This is about attempts to access Snap! infrastructure that people have been told not to try and access.
It is not about accessing cloud servers that other people have created.
To be clear, it isn't ide.cloud (with ide being this.parentThatIsA(IDE_Morph)), is it?
it's exactly that. As well as other attempts to either obfuscate a blocked url or to bypass the block through a proxy. Just don't make us spend our time and resources on technical policing and let us add cool features to Snap instead.
No. I asked the same question. We can save our projects there. They just don't want any shenanigans such as people trying to use the server for nefarious purposes. Sounds like some students were using proxy servers to gain access.
What is Snap! Cloud?
Do you mean GETting stuff from https://snap.berkeley.edu/ like: [scratchblocks](url [https://snap.berkeley.edu/]::sensing[/scratchblocks]
Also [scratchblocks] is broken: [scratchblocks](url [https://snap.berkeley.edu/]::sensing[/scratchblocks] displays html code.
Snap cloud also includes this.parentThatIsA(IDE_Morph).cloud in javascript. I wish we had a native block for [scratchblocks]username::sensing reporter[/scratchblocks], but that isn't going to happen and is probably included in the ban.
Scratchblocks is broken because the forum first converts https://snap.berkeley.edu/ into <a href="https://snap.berkeley.edu/">https://snap.berkeley.edu/</a> so it will be a link, and then creates the scratchblocks after. If it didn't do that, typing the first would not create a link.
Does Snap! Cloud include all urls with domain snap.berkeley.edu?
same
That's why it exists
It also includes all subdomains, like forum.snap.berkeley.edu and cloud.snap.berkeley.edu.
it's the same snap?
You can wrap the scratchblocks in html p tags to make no formatting appear
[scratchblocks] (url [https://snap.berkeley.edu/] :: sensing) [/scratchblocks]
It may not be the best way (as it has to be on its own line), but it works.
You can also find more info here Snap [scratchblocks] Tutorial (Part 1)
I too am changing the background of my pfp to the ukrainian flag
edit: yessir regular trust level
https://www.stat.math.ethz.ch/pipermail/r-devel/2006-February/036650.html
# [Rd] Links to non-vignette documentation
Duncan Murdoch murdoch at stats.uwo.ca
Fri Feb 24 15:11:44 CET 2006
On 2/24/2006 7:27 AM, Romain Francois wrote:
>
> What about using the latex package pdfpages to copy the pages from your
> PDF file interface96.pdf to your Sweave file. (I don't know if it is
> compatible with Sweave).
>
> Not tested :
>
> \documentclass[a4paper]{article}
> %\VignetteIndexEntry{Interface '96 paper by Marron et al. (1997)}
> %\VignettePackage{clps}
>
> \usepackage{hyperref}
> \usepackage{natbib}
> \usepackage{pdfpages}
>
> \title{Interface '96 paper by \cite{mar:tur:wan:96}}
> \author{Berwin A Turlach}
> \date{September 25, 2004}
>
> \begin{document}
> \maketitle
>
> \newpage
>
> \includepdf{interface96.pdf}
>
>
> \bibliographystyle{dcunsp}
> \bibliography{clps}
>
> \end{document}
That's a nice hack. You probably want the "fitpaper" option on the
\includepdf command, so that you don't get an extra border around the
page. For example, this file test.Rnw
\documentclass{article}
%\VignetteIndexEntry{test include of pdf}
%\VignettePackage{ellipse}
\usepackage{pdfpages}
\begin{document}
\includepdf[fitpaper=true]{response.pdf}
\end{document}
produces an output that looks pretty much exactly like the
"response.pdf" file I used as test input in a viewer.
The only disadvantages I see are that both the test.pdf and response.pdf
files got built into the package (but only test.pdf shows up in the
index), and that test.pdf is a lot larger than response.pdf. (This may
be because response.pdf was small; I haven't checked if the increase is
For a non-hack solution:
A change to the R package build process would be to add support for a
command like
%\VignetteExists
to the test.Rnw file, telling R not to bother trying to build the pdf,
because it had already been built by other means. Then I'd just have
test.Rnw containing
%\VignetteIndexEntry{test include of pdf}
%\VignettePackage{ellipse}
%\VignetteExists
and solve both of the problems with Romain's workaround.
Duncan Murdoch
http://clay6.com/qa/11542/a-body-is-moved-along-a-straight-line-by-a-machine-delivering-constant-powe
# A body is moved along a straight line by a machine delivering constant power. The distance moved by the body in time 't' is proportional to
$(a)\;t^{\large\frac{1}{2}} \quad (b)\;t^{\large\frac{3}{2}} \quad (c)\;t^{\large\frac{3}{4}} \quad (d)\;t^2$
Answer: $t^{\large\frac{3}{2}}$
Power $P = F\;v = ma\; v$
Up to constant factors, acceleration $a \sim \large\frac{S}{t^2}$ and velocity $v \sim \large\frac{S}{t}$, where $S$ is the distance moved.
Since the Power $P$ and mass $m$ are constant, we get
$P \sim \large\frac{mS}{t^2}$$\times \large\frac{S}{t}$$\rightarrow S^2 \propto t^3 \rightarrow S \propto t^{\large\frac{3}{2}}$
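A more careful derivation by integration (a sketch; it lands on the same exponent):

$$P = Fv = mv\frac{dv}{dt} \;\Rightarrow\; \frac{1}{2}mv^2 = Pt \;\Rightarrow\; v = \sqrt{\frac{2P}{m}}\,t^{1/2} \;\Rightarrow\; S = \int_0^t v\,dt' = \frac{2}{3}\sqrt{\frac{2P}{m}}\,t^{3/2} \;\propto\; t^{3/2}$$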
http://mathhelpforum.com/calculus/51846-need-help-three-conceptual-calculus-problems.html
Math Help - need help with three conceptual calculus problems!
1. need help with three conceptual calculus problems!
ok so 1.
true or false (and provide reasoning, not a full-length proof) that
lim as x -> a of f(x)*g(x) cannot exist
if lim x->a f(x) and lim x->a g(x) both exist
so it's kind of asking for reasoning to show that the product limit law always works if both limits exist?
2. In the formal definition of limits: lim x->c f(x) = L, if for every E>0 there is a d>0 such that IF 0<|x-c|<d THEN |f(x)-L|<E
can you switch the inequalities after the IF and THEN, and does the definition still hold?
3. TRUE or FALSE
f(x) = 1 if x is rational
= 2 if x is irrational
the limit as X-->0 exists for f(x)
I'm not sure about this one, because I don't know whether the right- and left-hand limits meet, and if so, whether it's at 1 or 2?!
2. Originally Posted by stones44
true or false (and provide reasoning, not a full-length proof) that
lim as x -> a of f(x)*g(x) cannot exist
if lim x->a f(x) and lim x->a g(x) both exist
so it's kind of asking for reasoning to show that the product limit law always works if both limits exist?
If $\lim_{x\to a}f(x)$ and $\lim_{x\to a}g(x)$ exist then $\lim_{x\to a}f(x)g(x)$ exists. That is a theorem.
2. In the formal definition of limits: lim x->c f(x) = L, if for every E>0 there is a d>0 such that IF 0<|x-c|<d THEN |f(x)-L|<E
can you switch the inequalities after the IF and THEN, and does the definition still hold?
No
3. TRUE or FALSE
f(x) = 1 if x is rational
= 2 if x is irrational
the limit as X-->0 exists for f(x)
I'm not sure about this one, because I don't know whether the right- and left-hand limits meet, and if so, whether it's at 1 or 2?!
Assume that $\lim_{x\to 0}f(x) = L$ (that is, assume it exists).
If $L < 1$ then pick $\epsilon > 0$ so that $L + \epsilon < 1$.
This means there is $\delta > 0$ so that if $0 < |x| < \delta \implies |f(x) - L| < \epsilon$.
Therefore $L - \epsilon < f(x) < L + \epsilon < 1$ for $0 < |x| < \delta$.
But this is a contradiction since $f(x) \not < 1$.
If $L > 2$ then pick $\epsilon > 0$ so that $L - \epsilon > 2$.
Repeating a similar argument will show $f(x) > 2$ and this is a contradiction too.
Therefore $1\leq L \leq 2$ if it exists. Using a similar argument as above we can show $L\not= 1, L\not = 2$. This forces $1 < L < 2$. But then this means there is $\epsilon > 0$ so that $1< L - \epsilon$ and $L+\epsilon < 2$.
Thus, there is $\delta > 0$ so that $0 < |x| < \delta \implies |f(x) - L| < \epsilon$.
Thus, $1 < L - \epsilon < f(x) < L + \epsilon < 2$.
This is a contradiction because $f$ does not take values strictly between $1$ and $2$.
https://www.physicsforums.com/threads/net-force-on-a-charge.356767/
# Net force on a charge
1. Nov 21, 2009
### johnnyies
1. The problem statement, all variables and given/known data
Three charges lie along the x -axis. The positive charge q1 = 10.0 microC is at x = 1.00 m, and the negative charge q2 = -2.00 microC is at the origin. Where must a positive charge q3 be placed on the x-axis so that the resultant force on it is zero?
Answer: x = - 0.809 m
2. Relevant equations
Coulomb's Law:
F = k q1 q2 / r^2
k = 8.9875 x 10^9 N m^2 / C^2
3. The attempt at a solution
Force of 1 acting on 3 = - k q1 q3 / (1 - x)^2
Force of 2 acting on 3 = k q2 q3 / x^2
k q2 q3 / x^2 - k q1 q3 / (1 - x)^2 = 0
k's and q3's cancel out and I get
q2(1 - x)^2 = q1(x^2)
-2(1 - 2x + x^2) = 10x^2
-2 + 4x - 2x^2 = 10x^2
12x^2 - 4x +2 = 0
x = .167 m
I need a bit of help in setting this one up, perhaps. No solutions guide is available. Can someone give a more conceptual explanation of how to solve this one?
Last edited: Nov 21, 2009
2. Nov 21, 2009
### Staff: Mentor
Thanks for showing your work -- makes this much easier.
This line has a math error in it: q2(1 - x^2) = q1(x^2)
The term on the left should be the quantity squared (you pulled the square inside the parens). So re-write as:
$$q_2 (1-x)^2 = q_1 x^2$$
3. Nov 21, 2009
### johnnyies
oh, that's just a typing error, it doesn't change the answer from what I had originally.
Fixed and thanks.
4. Nov 21, 2009
### Staff: Mentor
Okay, then I think the issue that is left is that you double-did the negative sign for the negative charge:
Force of 1 acting on 3 = - k q1 q3 / (1 - x)^2
Force of 2 acting on 3 = k q2 q3 / x^2
You should let the sign on the charges themselves dictate whether the force is in the + or - x direction.
BTW, the book answer of -0.809m works in the equation you got to this point:
q2(1 - x)^2 = q1(x^2)
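(A quick numeric check of that, as my own sketch rather than anything from the thread:)

```python
# Verify the book answer x = -0.809 m: the net Coulomb force per unit
# charge on q3 (i.e., the field E_x) should vanish there.
k  = 8.9875e9   # N m^2 / C^2
q1 = 10.0e-6    # C, at x = 1.00 m
q2 = -2.0e-6    # C, at the origin

def field_x(x):
    """Net x-component of the field on the x-axis (x != 0, x != 1)."""
    return (k * q1 * (x - 1.0) / abs(x - 1.0) ** 3
            + k * q2 * x / abs(x) ** 3)

print(field_x(-0.809))  # ~0 compared to each term, up to rounding of -0.809
```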
5. Nov 21, 2009
### johnnyies
Double-did the negative? That doesn't make sense. I thought you had to arbitrarily add the negative sign because that's the direction the force will be in?
https://www.aimspress.com/article/doi/10.3934/era.2020004
### Electronic Research Archive
2020, Issue 1: 47-66. doi: 10.3934/era.2020004
# The existence of solutions for a shear thinning compressible non-Newtonian models
• Received: 01 September 2019 Revised: 01 November 2019
• Primary: 76A05, 76N10
• This paper is concerned with the initial boundary value problem for a shear thinning fluid-particle interaction non-Newtonian model with vacuum. The viscosity term of the fluid and the non-Newtonian gravitational force are fully nonlinear. Under Dirichlet boundary conditions for the velocity and the no-flux condition for the density of particles, the existence and uniqueness of strong solutions are investigated in one-dimensional bounded intervals.
Citation: Yukun Song, Yang Chen, Jun Yan, Shuai Chen. The existence of solutions for a shear thinning compressible non-Newtonian models[J]. Electronic Research Archive, 2020, 28(1): 47-66. doi: 10.3934/era.2020004
###### Corresponding author: Bin Chen, bchen63@163.com
• 1. School of Materials Science and Engineering, Shenyang University of Chemical Technology, Shenyang 110142, China
|
2022-05-16 09:40:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5547822713851929, "perplexity": 2034.6237844999794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00557.warc.gz"}
|
https://www.physicsforums.com/threads/couple-of-questions-about-renormalization-schemes-based-largely-on-srednicki-ch27.403581/
|
# Couple of questions about renormalization schemes (based largely on Srednicki CH27)
1. ### LAHLH
Hi,
I'm reading through CH27 of Srednicki at the moment, and struggling to understand a couple of concepts.
1) He states that in the MS-bar scheme the location of the pole in the exact propagator is no longer at $$k^2=-m^2$$, where m is the Lagrangian parameter usually thought of as mass. I think I understand this: we are no longer imposing the conditions $$\Pi(-m^2)=0$$ etc., as in the OS scheme, so of course there is no pole in the exact propagator at $$k^2=-m^2$$. However, he then goes on to say the physical mass, $$m_{ph}$$, is defined by the location of the pole: $$k^2=-m^{2}_{ph}$$. Why is the physical mass defined this way? What's so special about the place where there is a pole in the exact propagator? How do these things tie in with the Lehmann-Källén form of the exact propagator, which clearly shows there must be a pole at $$k^2=-m^2$$? Is this 'm' in the Lehmann-Källén formula $$m_{ph}$$?
2) He then states that the LSZ formula must be corrected by multiplying its RHS by a factor of $$\tfrac{1}{\sqrt{R}}$$, where R is the residue of the pole; the reason he gives is that it is the field $$\tfrac{\phi(x)}{\sqrt{R}}$$ that has unit amplitude to create a one-particle state. I have no idea why this is, and it would be really great if anyone could explain some more.
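For reference on both points (a generic sketch in Srednicki-style mostly-plus conventions, not a quotation from the book), the Lehmann-Källén spectral representation of the exact propagator reads:

```latex
% Lehmann-Kallen form of the exact propagator (mostly-plus metric).
% The isolated pole sits at k^2 = -m_ph^2 with residue R = |<k|phi(0)|0>|^2,
% and the spectral density rho(s) >= 0 starts at the multiparticle threshold.
\[
  \tilde{\Delta}(k^{2})
    = \frac{R}{k^{2} + m_{\mathrm{ph}}^{2} - i\epsilon}
    + \int_{4 m_{\mathrm{ph}}^{2}}^{\infty} ds\,
      \frac{\rho(s)}{k^{2} + s - i\epsilon},
  \qquad
  \langle k|\phi(0)|0\rangle = \sqrt{R}.
\]
```

In this form the $$m$$ in the Lehmann-Källén pole is $$m_{ph}$$ by definition: the pole location is a property of the spectrum and is scheme-independent, unlike the Lagrangian parameter. And since the one-particle matrix element of $$\phi$$ is $$\sqrt{R}$$, it is $$\phi/\sqrt{R}$$ that creates a one-particle state with unit amplitude, which is the normalization the LSZ derivation assumes.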
3) My final question is how he gets to 27.12 just by taking the log of 27.11:
He starts with, $$m^{2}_{ph}=m^2[1+\tfrac{5}{12}\alpha(ln(\mu^2/m^2)+c')+O(\alpha^2)]$$
Now taking logs:
$$2ln(m_{ph})=2ln(m)+ln[1+\tfrac{5}{12}\alpha(ln(\mu^2/m^2)+c')+O(\alpha^2)]$$
My only thought is that in the second term perhaps you could write $$1+\tfrac{5}{12}\alpha(ln(\mu^2/m^2)+c')+O(\alpha^2) =exp(\tfrac{5}{12}\alpha(ln(\mu^2/m^2)+c'))$$
since the second term is second order anyway, it kind of doesn't matter if it's the real $$O(\alpha^2)$$ term form or not. Then you would have $$ln[1+\tfrac{5}{12}\alpha(ln(\mu^2/m^2)+c')+O(\alpha^2)]=ln( exp(\tfrac{5}{12}\alpha(ln(\mu^2/m^2)+c')))=\tfrac{5}{12}\alpha(ln(\mu^2/m^2)+c')$$
and you recover 27.12?
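That is essentially right; since $$\tfrac{5}{12}\alpha(ln(\mu^2/m^2)+c')$$ is already first order in $$\alpha$$, one can use $$ln(1+x)=x+O(x^2)$$ directly, without the detour through the exponential. A quick symbolic check (a sketch with sympy; the symbol L is shorthand for $$ln(\mu^2/m^2)+c'$$, not Srednicki's notation):

```python
# Sketch: check that log(1 + (5/12)*alpha*L) = (5/12)*alpha*L + O(alpha**2),
# where L is shorthand for ln(mu^2/m^2) + c'.
import sympy as sp

alpha, L = sp.symbols("alpha L", positive=True)

expansion = sp.series(sp.log(1 + sp.Rational(5, 12) * alpha * L), alpha, 0, 2)
print(expansion)   # -> 5*L*alpha/12 + O(alpha**2)
```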
|
2015-03-06 12:43:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9133673310279846, "perplexity": 293.5613555894262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936468546.71/warc/CC-MAIN-20150226074108-00139-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://techwhiff.com/learn/robert-limited-has-just-paid-a-current-annual/123016
|
#### Similar Solved Questions
##### Show that any linear function f(x) = ax + b is a continuous function at any x₀ in ℝ.
##### Coronado Industries sells MP3 players for $50 each. Variable costs are $30 per unit, and fixed costs total $120,000. How many MP3 players must Coronado sell to earn net income of $300,000? 21,000; 10,000; 7,000; 6,000.
##### The UGA Journalism Department is interested in determining if there is a significant difference in the number of hours, on average per week, that UGA male and UGA female students spend reading any online or printed newspaper. Randomly selected stud...
##### 5. TRUSS. Determine the force in each member of the truss shown below. There is a pin at A and a rocker at E. State whether each member is in tension or compression. [Figure data: loads 150 N, 200 N, 125 N; dimensions 8 m, 8 m, 6 m, 6 m.]
##### Performance Task (continued), Part B, 4.NBT.1, 4.NBT.2. In May, between 300,000 and 400,000 people visited Yellowstone. In the total number of visitors for May, the digit 1 has 10 times the value that it has in the total number of visitors in April. In the total number of visitors for May, the tens digi...
##### Part A: Calculate the volume of the gas, in liters, if 1.60 mol has a pressure of 1.20 atm at a temperature of -5 °C. Express the volume in liters to three significant digits. Part B: Calculate the absolute temper...
##### Problem 3.52. Elina Siljander owns Elina's Stained Glass in Helsinki, Finland. The business produces and sells three different types of stained glass windows: small, medium, and large. Elina has two full-time employees who work regular schedules to cut glass and assemble the windows. She borrowed...
##### As a Thai entrepreneur who imports from a Chinese manufacturing company, you need to explain the main cultural differences between Thailand and China to your management staff, according to the GMP Group recommendations (separate file): "Understanding Asian and European Business Culture"...
##### 1. Find the general solution to the following ODEs: a) y'' + y = sec²(t); b) x²y'' + 3xy' + 3y = 0.
##### A student must prepare 500.0 mL of solution containing 3.550 grams of solid copper(II) sulfate. Which of the following statements are FALSE regarding the proper procedure to prepare this solution? I. The 3.550 grams of solid copper(II) sulfate is dissolved in a 500.0-mL volumetric flask containing 5...
##### Q6. The amplifier in Figure A is the amplifier in Q5. The amplifier is expected to drive the load in Figure B. After the load is connected, the new higher cut-off frequency is 50 kHz. If R_L = 5 kΩ, determine C_L. (10 pts) [Component values garbled in extraction: VDD = +10 V, RD = 35 kΩ, R = 500 kΩ, Cc = 10 µF, ...]
##### 2. Define the term "laissez faire" and explain how it is (was) applied in the making and carrying out of law in the United States.
##### What is the percentage of calcium in a sample if a 0.50 g sample of unknown contains 0.20 g Ca?
|
2022-09-26 16:07:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27985680103302, "perplexity": 4237.3362564952695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00215.warc.gz"}
|
https://forum.obsidian.md/t/block-referencing-with-a-equations/8762
|
Block referencing with a equations
Hello!
First of all, Obsidian is great, thank you for your work! I’ve been lurking around the forum for a while and finally decided to register.
I want to ask whether it is possible to use block referencing when we have equations in combinations with text? If so, how?
The trick is, that equations have to have a new line around them for formatting purposes and blocks seem to be looking for a line break, so I am having trouble defining an equation and its accompanying text as a block.
Here is an example. I would like the block to start right after “Companion form”
**Companion form**
$z_{t}=F_{1}(\Phi) z_{t-1} + F_{c} (\Phi)+\nu_{t}$
where the first n rows of $F_1(\Phi)$, $F_c(\Phi)$, and $v_t$ are defined to reproduce the DGP ^ce73ba
- $\nu_{t} \sim \operatorname{iid} N(0, \Omega(\Sigma))$
- another equation
^test
[[#^test]]
If I delete the line after the first equation, I get the sentence “where the first…” to be inline with the $z_t$ equation, which I do not want. Currently ^test points to the first item in the list, so the second equation.
Is such a thing even possible?
I’ve found two workarounds so far:
1. Use a list, i.e. start each line below the next equation with “-”, then it is not a problem.
2. Use atomic notes, embed them where you need them and reference them.
If this is currently not possible, a workaround could be to be able to tell where the block starts, not only where it ends, i.e. “wrap” blocks and not look only for the ^blockid syntax.
Ohhh, me stupid…
Simply use $$ to force a math linebreak, which, however, is not counted as a line break for the blockid.
This is one paragraph:
**Companion form**
$$z_{t}=F_{1}(\Phi) z_{t-1} + F_{c} (\Phi)+\nu_{t}$$
This is not:
**Companion form**
$z_{t}=F_{1}(\Phi) z_{t-1} + F_{c} (\Phi)+\nu_{t}$
Okay, me not so stupid after all. My question still stands.
The solution works but then you lose the list afterwards. So the question remains, how to wrap the whole example above as a block?
I am confused, the behaviour is really wonky.
Can somebody tell me why is this a block:
but this is not?
I am referring to having the text preceding the equation as part of the block in one instance and not as part in another.
I am willing to share the .md files.
No need to share the files—mind pasting the two examples in code-fenced block here, though?
Of course!
However… stupid question, how do I do code-fenced block? I see options for “pre-formatted text” (Ctrl-shift-C) and “Blockquote” (Ctrl+Shift+9).
Here is with pre-formatted text:
**Companion form**
$$z_{t}=F_{1}(\Phi) z_{t-1} + F_{c} (\Phi)+\nu_{t}$$
^tsaffsa
[[#^tsaffsa]]
and
**The DGP is a monthly VAR:**
$$x_{t}= \Phi x_{t-1}+\ldots+\Phi_{p} x_{t-p}+\Phi_{c}+u_{t}, \hspace{20pt} u_{t}=N(0, \Sigma)$$
^testsaf
[[#^testsaf|what?]]
1 Like
Finally getting back to this, sorry for the delay.
It looks like the issue is that you have a line break after the suffix $$ in the second example. This works: You may want to see the discussion here on blocks and $$ use:
Thank you and no worries about any “delay”.
It looks like the issue is that you have a line break after the suffix $$ in the second example.
Do you mean a line break before the suffix $$? I think that explains it. I have one between line 7 and 8 in the second example, and deleting that solves the problem:
I still think this behaviour is wonky. Why does a line break between lines 7 and 8, in the equation (which in LaTeX typically has no effect, so people would not expect it), act as if there is a line break between lines 6 and 7?
I don't understand your example. In your picture, if I see correctly, the equation name "The DGP is a monthly VAR" is rendered. In my case it is not. As far as I see in your example, the difference is that in the second case there is no break after $$ and ^testsaf. This does not reproduce the behaviour on my end.
And thanks for the link to the thread regarding my original question. I read through and as far as I understood:
• there is a suggested workaround with headings. Headings are a totally different thing for me, and I would not put a heading for every equation. Also I do not want equations in the table of contents. The atomic-notes approach (my current workaround) is better, but not optimal
• this would not be considered (at least not currently). I really think that introducing an opening and closing syntax is an elegant solution, as suggested in those topics.
Did I understand the discussion correctly?
I see that the discussion has been marked as resolved. I still have no idea how to create a complex block with multiple equations which is not an atomic note.
Yes, the issue is the line break, and yes, if you want to do more complicated block references, you’ll need to use headings.
In reproducing the example I showed, be very careful to use the exact same number of lines & line breaks. I can paste the raw text later if you’d like.
The issue is the markdown parser. I don’t understand it very deeply, but it is non-trivial to get the markdown parser to recognize certain sets of lines as a single block. That’s why one-line equations work, but multi-line will not.
You can certainly create a feature request to encourage the devs to address this issue. Feel free to copy-paste whatever’s helpful from this thread.
Also, for what it’s worth, a workaround is to use e.g. H6 headings and use css to hide them in preview, such that these are not “real” headings, just display math markers.
Ah, I understand, thank you!
For my use-case, using atomic notes is actually better. This also makes the Obsidian notes easily portable in the future; as far as I understand, block-referencing is something added because of user pressure, and it is non-standard in terms of Markdown syntax.
I will consider making a feature request, thanks!
1 Like
|
2022-06-28 02:20:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7263579368591309, "perplexity": 836.4309858910848}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103347800.25/warc/CC-MAIN-20220628020322-20220628050322-00020.warc.gz"}
|
http://www.ams.org/bookstore?fn=50&arg1=salegeneralint&ikey=HMATH-25
|
Hausdorff on Ordered Sets
Edited by: J. M. Plotkin, Michigan State University, East Lansing, MI
A co-publication of the AMS and the London Mathematical Society.
History of Mathematics
2005; 322 pp; softcover
Volume: 25
ISBN-10: 0-8218-3788-5
ISBN-13: 978-0-8218-3788-7
List Price: US$76; Member Price: US$60.80
Order Code: HMATH/25
Georg Cantor, the founder of set theory, published his last paper on sets in 1897. In 1900, David Hilbert made Cantor's Continuum Problem and the challenge of well-ordering the real numbers the first problem of his famous lecture at the International Congress in Paris. Thus, as the nineteenth century came to a close and the twentieth century began, Cantor's work was finally receiving its due and Hilbert had made one of Cantor's most important conjectures his number one problem. It was time for the second generation of Cantorians to emerge.
Foremost among this group were Ernst Zermelo and Felix Hausdorff. Zermelo isolated the Choice Principle, proved that every set could be well-ordered, and axiomatized the concept of set. He became the father of abstract set theory. Hausdorff eschewed foundations and developed set theory as a branch of mathematics worthy of study in its own right, capable of supporting both general topology and measure theory. He is recognized as the era's leading Cantorian.
Hausdorff published seven articles in set theory during the period 1901-1909, mostly about ordered sets. This volume contains translations of these papers with accompanying introductory essays. They are highly accessible, historically significant works, important not only for set theory, but also for model theory, analysis and algebra.
This book is suitable for graduate students and researchers interested in set theory and the history of mathematics.
This volume is one of an informal sequence of works within the History of Mathematics series. Volumes in this subset, "Sources", are classical mathematical works that served as cornerstones for modern mathematical thought.
Also available from the AMS by Felix Hausdorff are the classic works, Grundzüge der Mengenlehre (Volume 61) and Set Theory (Volume 119), in the AMS Chelsea Publishing series.
Reviews
"...this volume greatly facilitates the access of the international readership to Hausdorff's early contributions to set theory and gives detailed information on the history of set theory in general. It is a very welcome, probably even necessary, complement to the ongoing enterprise of editing Hausdorff's Gesammelta Werke. It will be of great help to anybody interested in the historical and mathematical development of 20th-century set theory and logic."
-- Historia Mathematica
• J. M. Plotkin -- Selected Hausdorff bibliography
• J. M. Plotkin -- Introduction to "About a certain kind of ordered sets"
• F. Hausdorff -- About a certain kind of ordered sets [H 1901b]
• J. M. Plotkin -- Introduction to "The concept of power in set theory"
• F. Hausdorff -- The concept of power in set theory [H 1904a]
• J. M. Plotkin -- Introduction to "Investigations into order types, I, II, III"
• F. Hausdorff -- Investigations into order types [H 1906b]
• J. M. Plotkin -- Introduction to "Investigations into order types IV, V"
• F. Hausdorff -- Investigations into order types [H 1907a]
• J. M. Plotkin -- Introduction to "About dense order types"
• F. Hausdorff -- About dense order types [H 1907b]
• J. M. Plotkin -- Introduction to "The fundamentals of a theory of ordered sets"
• F. Hausdorff -- The fundamentals of a theory of ordered sets [H 1908]
• J. M. Plotkin -- Introduction to "Graduation by final behavior"
• F. Hausdorff -- Graduation by final behavior [H 1909a]
• F. Hausdorff -- Appendix. Sums of $$\aleph_1$$ sets [H 1936b]
|
2015-07-06 02:19:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3091762661933899, "perplexity": 2984.799988606533}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097757.36/warc/CC-MAIN-20150627031817-00192-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://itprospt.com/num/19078866/which-of-the-following-characleristics-indicales-ihe-presence
|
# Which of the following characteristics indicates the presence of weak intermolecular forces in a liquid?
## Question
Which of the following characteristics indicates the presence of weak intermolecular forces in a liquid? Select one: low heat of vaporization; low vapour pressure; a high surface tension; a high boiling point.
#### Similar Solved Questions
##### A small company makes solar panels. One possible model for the manufacturing process is shown at the right. Using this model, predict the cost per unit in a batch of 300. [Regression output garbled in extraction: dependent variable Log10(Cost), R-squared 89.796%, s = 0.0846 with 15 - 2 = 13 degrees of freedom.] The cost per unit in a batch of 300 is ___ (round to three decimal places as needed).
##### 1. In a projectile-motion experiment, a spring gun is loaded by compressing its spring 25 cm and placed horizontally on a table. The horizontal range of the ball is 1.67 m, with the ball at a height of 1.0 m above the ground at the moment it leaves the gun after the trigger is released. Find (a) the speed of the ball the moment it left the spring gun and (b) the force constant of the spring, ignoring the effect of air and of friction inside the gun.
##### 2. Verify that the function f(x) = x - x² satisfies the hypotheses of the Mean Value Theorem on the interval [-1, 2]. Find all numbers c that satisfy the conclusion of the Mean Value Theorem.
##### A) Use Green's theorem to compute the area inside the ellipse x²/7² + y²/18² = 1, using the fact that the area can be written as ∬ dx dy = ½ ∮ (-y dx + x dy). Hint: x(t) = 7 cos(t). The area is 126π. B) Find a parametrization of the curve x^(2/3) + y^(2/3) = 12/8 and use it to compute the area of the interior. [Hint and answer garbled in extraction.]
##### Workbook 2.2, Lesson 2.2: Ratios and Equations. The sin and cos commands on a calculator are used to solve for unknown angle measures when the trigonometric ratios are known. For some ratios, the calculator outputs correspond to quadrant angles, while for other ratios the calculator outputs correspond to angles in other quadrants. Use a calculator to complete the table; try a few of your own values as well. [Table garbled in extraction: calculator output for sin at 30°, 45°, 210°, 330°, 276°, ...] Describe a rule that predicts wh...
##### How many subgraphs do the following graphs have? (a) The graph with degree sequence 3, 2, 2, 1. (b) A graph with degree sum of 20. (c) The complete graph on n vertices.
##### Determine the minimum height of a vertical flat mirror in which a person 5'10" in height can see his or her full image. Include a ray-path diagram. A kaleidoscope makes symmetric patterns with two plane mirrors having a 60° angle between them, as shown. Draw the locations of the images.
##### Calculate the test statistic. A. 0.07  B. 1.26  C. 10  D. 63
##### For each of the following: (a) (2 pts) find S₁₀; (b) (5 pts) find a bound on the remainder; and (c) (5 pts) find an N so that S_N is within 0.001 of the limit. Show all work so I can see how you got your bounds and N. Series: Σ_{n=1}^∞ 4(3ⁿ)/(5ⁿ + 2).
##### (2) Prove that S := {z ∈ ℂ : |z| = 1} is a subgroup of ℂ^×. Hint: you may use the following properties: if z, w ∈ ℂ, then |zw| = |z| |w|; and if z = a + ib with z ≠ 0, then z⁻¹ = (a - ib)/(a² + b²).
##### Determine whether the series is convergent or divergent: Σ_{n=1}^∞ n(2n + 5)/2ⁿ.
##### If you look at a fish through the corner of a rectangular aquarium you sometimes see two fish, one on each side of the corner, as shown in Figure $\mathrm{P}18.51$. Sketch some of the light rays that reach your eye from the fish to show how this can happen.
##### Find the absolute extremum, if any, for the function f(x) = -7x⁴. Select the correct choice below and, if necessary, fill in the answer boxes to complete your choice: (a) The absolute minimum is ___ at x = ___; (b) There is no absolute minimum. Then select the correct choice below and, if necessary, fill in the answer boxes to complete your choice: (a) The absolute maximum is ___ at x = ___; (b) There is no absolute maximum.
##### The number of TV channels that the average U.S. home receives has been soaring in recent years. The function $t(x)=0.16 x^{2}+0.46 x+21.36$ can be used to estimate this number, where $x$ is the number of years after 1985 (Source: Nielsen Media Research, National People Meter Sample). Use this function for Exercises 97 and 98. In what year did the average U.S. household receive 50 channels?
##### For the following precipitation reaction, complete and balance the molecular equation (10%), include labels, and write the net ionic equation: FeCl₂(aq) + Na₂S(aq) → ... 10. Balance the following redox reaction (10%); include labels; it is a net ionic reaction: S₄O₆²⁻(aq) + I⁻(aq) → I₂(s) + S₂O₃²⁻(aq).
|
2022-08-17 06:39:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7677440643310547, "perplexity": 5146.332798331224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572870.85/warc/CC-MAIN-20220817062258-20220817092258-00504.warc.gz"}
|
http://math.stackexchange.com/questions/91834/pi-system-and-monotone-class
|
# $\pi$-system and monotone class
Assume $P$ is a $\pi$-system (that is, $P$ is closed under finite intersections) and $M$ is monotone class (that is $M$ is a non-empty collection of subsets of $\Omega$ closed under monotone limits).
Show that $P\subset M$ does not imply $\sigma(P)\subset M$.
This looks like homework. Please read meta.math.stackexchange.com/questions/1803/… and edit accordingly. In the meantime, -1. – Nate Eldredge Dec 15 '11 at 21:20
Isn't $\{\emptyset\}$ both a $\pi$-class and a monotone class? $\{\emptyset\} \subseteq \{\emptyset\}$ but $\sigma(\{\emptyset\}) \not\subseteq \{\emptyset\}$. – guy Dec 15 '11 at 21:39
Put $\Omega=\mathbb N$, $P:=\{A_n,\ n\in\mathbb N\}$, where $A_n=\{k\in\mathbb N\mid k\geq n\}$, and $M:=P\cup\{\emptyset\}$.

$P$ is a $\pi$-system since $A_n\cap A_m=A_{\max(m,n)}\in P$. $M$ is a monotone class: if $\{B_n\}\subset M$ is an increasing sequence, either all the sets are empty, hence the union is empty, or for $j$ large enough $B_j=A_{\varphi(j)}$, and then $\bigcup_{n\in\mathbb N}B_n=A_{\min_{n\in\mathbb N}\varphi(n)}\in M$. (Indeed $B_n=A_{\varphi(n)}\subset A_{\min_{k\in\mathbb N}\varphi(k)}$, so $\bigcup_{n\in\mathbb N}B_n\subset A_{\min_{n\in\mathbb N}\varphi(n)}$; and since $\varphi$ takes integer values there is $k_0$ such that $\varphi(k_0)=\min_{n\in\mathbb N}\varphi(n)$, hence $A_{\varphi(k_0)}=B_{k_0}\subset \bigcup_{n\in\mathbb N}B_n$.) If $\{B_n\}\subset M$ is a decreasing sequence, then either $B_{n_0}=\emptyset$ for some $n_0$, in which case the intersection is empty, or $B_n=A_{\varphi(n)}$ for all $n$, and $\bigcap_{n\in\mathbb N}B_n=\begin{cases}\emptyset&\mbox{ if }\sup_n\varphi(n)=+\infty,\\ A_{\sup_{n\in\mathbb N}\varphi(n)}&\mbox{ otherwise}.\end{cases}$

Since $\sigma(P)=\mathcal P(\mathbb N)$ (each singleton $\{n\}=A_n\setminus A_{n+1}$ belongs to $\sigma(P)$, and every subset of $\mathbb N$ is a countable union of singletons) while $M\neq \mathcal P(\mathbb N)$, we cannot have the inclusion $\sigma(P)\subset M$.
However, if $M$ is a $\lambda$-system (that is: contains $\Omega$, is closed under complements and countable disjoint unions), then the inclusion $\sigma(P)\subset M$ does hold. This result is useful when showing that two probability measures which agree on a $\pi$-system agree on the $\sigma$-algebra generated by this $\pi$-system.
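As a concrete, finite sanity check of the $\sigma(P)=\mathcal P(\mathbb N)$ step (my own sketch, not part of the answer above): on the truncation $\Omega=\{0,\dots,4\}$ with the same tail sets $A_n$, closing $P$ under complements and unions already produces the full power set, while $M=P\cup\{\emptyset\}$ has only six elements.

```python
# Sketch: finite stand-in for the counterexample above.  On Omega = {0,...,4},
# P = {A_0,...,A_4} with A_n = {k : k >= n}.  Closing P under complement and
# union yields the whole power set (32 sets), while M = P plus the empty set
# has only 6 elements.
from itertools import combinations

OMEGA = frozenset(range(5))
P = {frozenset(k for k in OMEGA if k >= n) for n in range(5)}

sigma = set(P)
changed = True
while changed:                           # naive closure by fixed-point iteration
    changed = False
    for S in list(sigma):
        if OMEGA - S not in sigma:       # close under complement
            sigma.add(OMEGA - S)
            changed = True
    for S, T in combinations(list(sigma), 2):
        if S | T not in sigma:           # close under (finite) union
            sigma.add(S | T)
            changed = True

print(len(sigma))               # 32 == 2**5: sigma(P) is the full power set
print(len(P | {frozenset()}))   # 6: M is far smaller, so sigma(P) is not in M
```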
Where are you stuck when you try to prove that $M$ is a monotone class? – Davide Giraudo Dec 16 '11 at 15:22
Let $\Omega$ be any infinite set, and let $\preceq$ be a linear order on $\Omega$. Say that $\Delta\subseteq\Omega$ is downward closed if $\alpha\preceq \beta\in\Delta$ implies that $\alpha\in\Delta$, and let $P=\{\Delta\subseteq\Omega:\Delta\text{ is downward closed}\}$. (Note that $\varnothing$ is vacuously downward closed, so $\varnothing\in P$.) You should have no trouble showing that $P\,$ is itself a monotone class that is not closed under complementation. (In fact this is true as long as $\Omega$ has at least two elements.)
|
2015-05-29 04:48:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777621626853943, "perplexity": 117.04041177521253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929869.17/warc/CC-MAIN-20150521113209-00327-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://zenodo.org/record/4139775/export/schemaorg_jsonld
|
Conference paper Open Access
# Drawing Network Visualizations on a Continuous, Spherical Surface
Dario Rodighiero
### JSON-LD (schema.org) Export
{
"description": "<p>Despite the great literature regarding network visualizations, their graphic representation is hardly an object of investigation. Sometimes it is worth more attention, especially when individuals are represented. Visually translating communities in networks, for example, implies that some individuals are always situated at the borders of the representation. This assumption is clearly unfair, especially if each individual of the community is connected with everybody else. To solve this lack of design justice, the community is represented on a spherical network where the surface is continuous. In that space, individuals can be situated in a sparse area, but never on the edges. The spherical network is successively projected on the flat surface to improve the network readability making use of cartographic projections.</p>",
"creator": [
{
"affiliation": "Harvard University",
"@id": "https://orcid.org/0000-0002-1405-7062",
"@type": "Person",
"name": "Dario Rodighiero"
}
],
"headline": "Drawing Network Visualizations on a Continuous, Spherical Surface",
"datePublished": "2020-09-08",
"url": "https://zenodo.org/record/4139775",
"version": "Pre-print",
"@type": "ScholarlyArticle",
"keywords": [
"Cartographic projection; centrality; continuity; design justice; digital humanities; network visualization"
],
"@context": "https://schema.org/",
"identifier": "https://doi.org/10.5281/zenodo.4139775",
"@id": "https://doi.org/10.5281/zenodo.4139775",
"workFeatured": {
"alternateName": "IV2020",
"location": "Melbourne and Vienna",
"@type": "Event",
"name": "24th International Conference Information Visualisation"
},
"name": "Drawing Network Visualizations on a Continuous, Spherical Surface"
}
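As a usage sketch (my own illustration, not part of the Zenodo export): the record is plain schema.org JSON-LD, so the standard library suffices to read the fields back; the literal below is truncated to fields that actually appear above.

```python
import json

# Minimal sketch: parse the JSON-LD export shown above (truncated here to a
# few of its fields) and pull out the usual bibliographic pieces.
jsonld_text = '''
{
  "name": "Drawing Network Visualizations on a Continuous, Spherical Surface",
  "creator": [{"@type": "Person", "name": "Dario Rodighiero",
               "affiliation": "Harvard University"}],
  "datePublished": "2020-09-08",
  "identifier": "https://doi.org/10.5281/zenodo.4139775"
}
'''

record = json.loads(jsonld_text)
print(record["name"])                 # article title
print(record["identifier"])           # DOI URL
print(record["creator"][0]["name"])   # first author
```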
|
2021-06-15 06:15:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47644346952438354, "perplexity": 9594.565447995114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487617599.15/warc/CC-MAIN-20210615053457-20210615083457-00206.warc.gz"}
|
https://www.semanticscholar.org/paper/Holographic-Fermi-surfaces-and-entanglement-entropy-Ogawa-Takayanagi/a11a3dfe02bd09f532c0f3fbc51a8a449ef94a59
|
# Holographic Fermi surfaces and entanglement entropy
```
@article{Ogawa2012HolographicFS,
  title={Holographic Fermi surfaces and entanglement entropy},
  author={Noriaki Ogawa and Tadashi Takayanagi and Tomonori Ugajin},
  journal={Journal of High Energy Physics},
  year={2012},
  volume={2012},
  pages={1-21}
}
```
• Published 4 November 2011
• Physics
• Journal of High Energy Physics
Abstract: We argue that Landau-Fermi liquids do not have any gravity duals in the purely classical limit. We employ the logarithmic behavior of entanglement entropy to characterize the existence of Fermi surfaces. By imposing the null energy condition, we show that the specific heat always behaves anomalously. We also present a classical gravity dual which has the expected behavior of the entanglement entropy and specific heat for non-Fermi liquids.
245 Citations
Quantum corrections to holographic entanglement entropy
• Physics
• 2013
Abstract: We consider entanglement entropy in quantum field theories with a gravity dual. In the gravity description, the leading order contribution comes from the area of a minimal surface, as
Entanglement entropy with background gauge fields
Abstract: We study the entanglement entropy, the Rényi entropy, and the mutual (Rényi) information of Dirac fermions on a 2 dimensional torus in the presence of constant gauge fields. We derive their
• Physics
• 2013
Abstract: We study the field theory dual to a charged gravitational background in which the low temperature entropy scales linearly with the temperature. We exhibit the existence of a sound mode which
Entanglement Entropy from a Holographic Viewpoint
The entanglement entropy has been historically studied by many authors in order to obtain quantum mechanical interpretations of the gravitational entropy. The discovery of anti-de Sitter/conformal
Holographic superconductors with hidden Fermi surfaces
In this paper, we investigate a holographic model of superconductor with hidden Fermi surfaces, which was defined by the logarithmic violation of the area law of entanglement entropy. We work in fully
Entanglement temperature for black branes with hyperscaling violation
• Physics
• 2016
Entanglement temperature is an interesting quantity which relates the increased amount of entanglement entropy to that of energy for a weakly excited state in the first-law of entanglement entropy,
Entanglement entropy with background gauge fields
We study the entanglement entropy, the Rényi entropy, and the mutual (Rényi) information of Dirac fermions on a 2 dimensional torus in the presence of constant gauge fields. We derive their general
Thermodynamical property of entanglement entropy for excited states.
• Physics
Physical review letters
• 2013
It is argued that the entanglement entropy for a very small subsystem obeys a property which is analogous to the first law of thermodynamics when the authors excite the system, and this provides a universal relationship between the energy and the amount of quantum information.
## References
SHOWING 1-10 OF 56 REFERENCES
Entanglement entropy and the Fermi surface.
An intuitive account of this anomalous scaling based on a low energy description of the Fermi surface as a collection of one-dimensional gapless modes is given and a violation of the boundary law is predicted in a number of other strongly correlated systems.
Conformal field theory approach to Fermi liquids and other highly entangled states
The Fermi surface may be usefully viewed as a collection of ($1+1$)-dimensional chiral conformal field theories. This approach permits straightforward calculation of many anomalous ground-state
Non-Fermi liquids from holography
• Physics
• 2011
We report on a potentially new class of non-Fermi liquids in (2+1)-dimensions. They are identified via the response functions of composite fermionic operators in a class of strongly interacting
Violation of the entropic area law for fermions.
• M. Wolf
• Physics
Physical review letters
• 2006
It is proven that the presented scaling law holds whenever the Fermi surface is finite, and this is, in particular, true for all ground states of Hamiltonians with finite range interactions.
Holographic Derivation of Entanglement Entropy from AdS/CFT
• Physics
• 2006
A holographic derivation of the entanglement entropy in quantum (conformal) field theories is proposed from AdS/CFT correspondence. We argue that the entanglement entropy in d+1 dimensional conformal
Holographic Entanglement Entropy: An Overview
• Physics
• 2009
In this paper, we review recent progress on the holographic understanding of the entanglement entropy in the anti-de Sitter space/conformal field theory (AdS/CFT) correspondence. In general, the
Entanglement entropy of critical spin liquids.
• Physics
Physical review letters
• 2011
It is found that entanglement entropy of the projected Fermi sea state violates the boundary law, with S(2) enhanced by a logarithmic factor, an unusual result for a bosonic wave function reflecting the presence of emergent fermions.
Entanglement entropy and conformal field theory
• Physics
• 2009
We review the conformal field theory approach to entanglement entropy in 1+1 dimensions. We show how to apply these methods to the calculation of the entanglement entropy of a single interval, and
Conformal Field Theory on the Fermi Surface
The Fermi surface may be usefully viewed as a collection of 1 + 1 dimensional chiral conformal field theories. This approach permits straightforward calculation of many anomalous ground state
Entanglement Entropy of Fermi Liquids via Multidimensional Bosonization
• Physics
• 2011
The logarithmic violations of the area law, i.e. an "area law" with logarithmic correction of the form $S \sim L^{d-1} \log L$, for entanglement entropy are found in both 1D gapless system and for
|
2022-07-01 18:03:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291958570480347, "perplexity": 1160.4854752574267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103943339.53/warc/CC-MAIN-20220701155803-20220701185803-00635.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/33469-i-dont-get-thisss-question.html
|
# Math Help - i dont get thisss question!!
1. ## i dont get thisss question!!
find the center of the circle that circumscribes triangle ABC.
1. A (0,0),B (6,4),C (6,0)
2. A (-3,11),B (9,11),C (-3,21)
2. Originally Posted by whmandy
find the center of the circle that circumscribes triangle ABC.
1. A (0,0),B (6,4),C (6,0)
just set up and solve the three simultaneous equations.
you know that the equation of a circle is: $(x - a)^2 + (y - b)^2 = r^2$ where $(a,b)$ is the center.
take $(x,y)$ to be $(0,0),~(6,4)$ and $(6,0)$ respectively, you get:
$(0 - a)^2 + (0 - b)^2 = r^2$ .................................(1)
$(6 - a)^2 + (4 - b)^2 = r^2$ .................................(2)
$(6 - a)^2 + (0 - b)^2 = r^2$ .................................(3)
now solve this system for $a$ and $b$ only
the second problem is done similarly
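A quick way to carry out that elimination symbolically (a sketch with sympy, following the three equations above):

```python
# Sketch: solve the three circle equations for triangle A(0,0), B(6,4), C(6,0)
# to find the center (a, b) and radius r of the circumscribed circle.
import sympy as sp

a, b = sp.symbols("a b", real=True)
r = sp.symbols("r", positive=True)

points = [(0, 0), (6, 4), (6, 0)]
equations = [sp.Eq((x - a)**2 + (y - b)**2, r**2) for x, y in points]

print(sp.solve(equations, [a, b, r], dict=True))
# -> [{a: 3, b: 2, r: sqrt(13)}]
```

A geometric cross-check: this triangle has a right angle at C, so the circumcenter is the midpoint of the hypotenuse AB, i.e. (3, 2); the same shortcut works for the second problem, whose right angle is at A.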
|
2014-07-12 13:13:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8342556953430176, "perplexity": 2185.6143944700043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776432978.12/warc/CC-MAIN-20140707234032-00047-ip-10-180-212-248.ec2.internal.warc.gz"}
|
http://ncatlab.org/nlab/show/quantomorphism+group
|
Contents
Idea
Given a (pre)symplectic manifold $(X,\omega)$, its quantomorphism group is the Lie group that integrates the Lie bracket inside the Poisson algebra of $(X, \omega)$. This is a circle group-central extension of the group of Hamiltonian symplectomorphisms. It extends and generalizes the Heisenberg group of a symplectic vector space.
(Warning on terminology: A more evident name for the quantomorphism group might seem to be “Poisson group”. But this already means something different, see Poisson Lie group.)
Over a symplectic manifold $(X, \omega)$ an explicit construction of the corresponding quantomorphism group is obtained by choosing $(P \to X, \nabla)$ a prequantum circle bundle, regarded with an Ehresmann connection 1-form $A$ on $P$, and then defining
$QuantomorphismGroup \hookrightarrow Diff(P)$
to be the subgroup of the diffeomorphism group $P \stackrel{\simeq}{\to} P$ on those diffeomorphisms that preserve $A$. In other words, the quantomorphism group is the group of equivalences of bundles with connection that need not cover the identity diffeomorphism on the base manifold $X$.
Notice that the tuple $(P,A)$ is a regular contact manifold (see the discussion there), and so the quantomorphism group is equivalently that of contactomorphisms $(P,A) \to (P,A)$ of weight 0.
In higher geometry
This perspective lends itself to a more abstract description: we may regard the prequantum circle bundle as being modulated by a morphism
$\nabla : X \to \mathbf{B} U(1)_{conn}$
in the cohesive (∞,1)-topos $\mathbf{H} =$ Smooth∞Grpd, with domain the given symplectic manifold and codomain the smooth moduli stack for circle bundles with connection. This in turn may be regarded as an object $\nabla \in \mathbf{H}_{/\mathbf{B}U(1)_{conn}}$ in the slice (∞,1)-topos. Then the quantomorphism group is the automorphism group
$\mathbf{QuantMorph}(X,\nabla) \coloneqq \underset{\mathbf{B}U(1)_{conn}}{\prod} \mathbf{Aut}(\nabla)$
in $\mathbf{H}$ (Sch).
From this it is clear what the quantomorphism ∞-group of an n-plectic ∞-groupoid should be: for
$\nabla : X \to \mathbf{B}^n U(1)_{conn}$
the morphism modulating a prequantum circle n-bundle, the corresponding quantomorphism $n$-group is again $Aut(\nabla)$, now formed in $\mathbf{H}_{/\mathbf{B}^n U(1)_{conn}}$
Properties
Smooth structure
The quantomorphism group for a symplectic manifold may naturally be equipped with the structure of a group object in ILH manifolds (Omori, Ratiu-Schmid), as well as in convenient manifolds (Vizman, prop.).
Group extension
Proposition
For $(X,\omega)$ a connected symplectic manifold there is a central extension of groups
$1 \to U(1) \to QuantomorphismGroup(X,\omega) \to HamiltonianSymplectomorphisms(X,\omega) \to 1 \,.$
This is due to (Kostant). It appears also (Brylinski, prop. 2.4.5).
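At the Lie algebra level the corresponding statement (standard, recalled here for orientation; this sketch is not a quotation from Kostant or Brylinski) is the central extension of the Hamiltonian vector fields by the constant functions:

```latex
% Infinitesimal version of the Kostant central extension, for (X, omega) connected:
% the kernel of f |-> v_f (with iota_{v_f} omega = d f) consists of the constants.
\[
  0 \longrightarrow \mathbb{R}
    \longrightarrow \bigl( C^{\infty}(X), \{-,-\} \bigr)
    \longrightarrow \mathrm{HamVect}(X,\omega)
    \longrightarrow 0
\]
```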
higher and integrated Kostant-Souriau extensions:
(∞-group extension of ∞-group of bisections of higher Atiyah groupoid for $\mathbb{G}$-principal ∞-connection)
$(\Omega \mathbb{G})\mathbf{FlatConn}(X) \to \mathbf{QuantMorph}(X,\nabla) \to \mathbf{HamSympl}(X,\nabla)$
| $n$ | geometry | structure | unextended structure | extension by | quantum extension |
| --- | --- | --- | --- | --- | --- |
| $\infty$ | higher prequantum geometry | cohesive ∞-group | Hamiltonian symplectomorphism ∞-group | moduli ∞-stack of $(\Omega \mathbb{G})$-flat ∞-connections on $X$ | quantomorphism ∞-group |
| 1 | symplectic geometry | Lie algebra | Hamiltonian vector fields | real numbers | Hamiltonians under Poisson bracket |
| 1 | | Lie group | Hamiltonian symplectomorphism group | circle group | quantomorphism group |
| 2 | 2-plectic geometry | Lie 2-algebra | Hamiltonian vector fields | line Lie 2-algebra | Poisson Lie 2-algebra |
| 2 | | Lie 2-group | Hamiltonian 2-plectomorphisms | circle 2-group | quantomorphism 2-group |
| $n$ | $n$-plectic geometry | Lie $n$-algebra | Hamiltonian vector fields | line Lie $n$-algebra | Poisson Lie $n$-algebra |
| $n$ | | smooth $n$-group | Hamiltonian $n$-plectomorphisms | circle $n$-group | quantomorphism $n$-group |

(extensions are listed for sufficiently connected $X$)
slice-automorphism ∞-groups in higher prequantum geometry
cohesive ∞-groups: Heisenberg ∞-group $\hookrightarrow$ quantomorphism ∞-group $\hookrightarrow$ ∞-bisections of higher Courant groupoid $\hookrightarrow$ ∞-bisections of higher Atiyah groupoid
L-∞ algebras: Heisenberg L-∞ algebra $\hookrightarrow$ Poisson L-∞ algebra $\hookrightarrow$ Courant L-∞ algebra $\hookrightarrow$ twisted vector fields
higher Atiyah groupoid
| higher Atiyah groupoid: | standard higher Atiyah groupoid | higher Courant groupoid | groupoid version of quantomorphism $n$-group |
| --- | --- | --- | --- |
| coefficient for cohomology: | $\mathbf{B}\mathbb{G}$ | $\mathbf{B}(\mathbf{B}\mathbb{G}_{\mathrm{conn}})$ | $\mathbf{B} \mathbb{G}_{conn}$ |
| type of fiber ∞-bundle: | principal ∞-bundle | principal ∞-connection without top-degree connection form | principal ∞-connection |
References
General
Original accounts are
• Jean-Marie Souriau, Structure des systemes dynamiques Dunod, Paris (1970)
Translated and reprinted as (see section V.18 for the quantomorphism group):
Jean-Marie Souriau, Structure of dynamical systems - A symplectic view of physics, Brikhäuser (1997)
• Bertram Kostant, Quantization and unitary representations, in Lectures in modern analysis and applications III. Lecture Notes in Math. 170 (1970), Springer Verlag, 87—208
A textbook account is in section II.4 of
• Jean-Luc Brylinski, Loop spaces, characteristic classes and geometric quantization, Birkhäuser (1993)
and in
• Rudolf Schmid, Infinite-dimensional Lie groups with applications to mathematical physics
The description in terms of automorphism in the slice $\infty$-topos over the moduli stack of (higher) connections is in
and in section 4.4.17 of
Smooth manifold structure
The ILH group structure on the quantomorphism group is discussed in
• H. Omori, Infinite dimensional Lie transformation groups, Springer lecture notes in mathematics 427 (1974)
• T. Ratiu, R. Schmid, The differentiable structure of three remarkable diffeomorphism groups, Math. Z. 177 (1981)
The regular convenient Lie group structure is discussed in
• Cornelia Vizman, Some remarks on the quantomorphism group (pdf)
A metric-structure on quantomorphisms groups is discussed in
• Y. Eliashberg, L. Polterovich, Partially ordered groups and geometry of contact transformations, Geom. Funct. Anal. 10 (2000), no. 6, 1448-1476.
Revised on September 13, 2013 02:27:11 by Urs Schreiber (77.251.114.72)
|
2014-10-30 18:16:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 42, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.77449631690979, "perplexity": 5471.669567161488}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898644.9/warc/CC-MAIN-20141030025818-00208-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/energy-conservation-education-and-funding-for-the-public.231341/
|
# Energy conservation education and funding for the public
• News
## Main Question or Discussion Point
How much does the American public know about or enable energy conservation? It seems to me that by far most effort of this kind is by industry and government.
Has the informational drive to change public consumption through behavior been successful, or is the primary means the pocketbook? If only we had foreseen the current rise in prices and invested instead in conservation technology for, and responsibility by, everyone.
I think that educating individuals to be constantly aware of what energy is and how it manifests in practical ways would be as effective a means of saving as economic penalties. America's tradition of wasting resources can be corrected by close analysis and recognition of common activities.
What would be your initiatives for wise use of energy by the public?
vanesch
Staff Emeritus
Gold Member
How much does the American public know about or enable energy conservation?
The question is: why should one "conserve energy"? Isn't the issue rather related to "CO2 emissions" or "resources"?
If those problems can be met, should one still "conserve energy"?
CO2 is one of many pollutants that are byproducts of shortsighted energy cycles, and one of great current importance. But let's say we have an "inexhaustible" supply of energy, as from nuclear fusion.
Aside from the creation of radioactive isotopes, our greed for such energy may cause changes related to the production of heat - those eventually affecting climate, human health, the biosphere, and energy utilization redundancy (where systems attempt to fight the 2nd law of thermodynamics, like a refrigerator adding to air conditioning load).
Does thermal energy play a part in the near future of energy concerns?
Ivan Seeking
Staff Emeritus
Gold Member
The question is: why should one "conserve energy"? Isn't the issue rather related to "CO2 emissions" or "resources"?
If those problems can be met, should one still "conserve energy"?
But we don't have sufficient resources. And no matter what we do, we won't have sufficient resources for at least decades to come.
In answer to the op: It is my perception that very few people will conserve energy of their own accord. In fact it seems that generally speaking, there are only a couple of generations of people in the country who are energy conscious. Many from the generation that preceded mine - the people who, due to the false perception that environmentalism comes from "hippies", fought the environmental movement tooth and nail from the very beginning - spent their retirement years driving motor homes all over the country at ten miles per gallon. And people who are too young to remember the oil shortages of the seventies are too naive to be bothered, so they became the SUV generation. Others if not most are simply too ignorant to understand the stakes.
Many people do recycle their trash now, but unless they have hassle-free curbside recycling available, it's too much trouble.
We are now in a period of feel-good green [as Integral calls it], but this will grow old soon. From there people will continue to follow the latest fads and only think with their wallets. That is why it is imperative that the price of energy be kept high.
Astronuc
Staff Emeritus
Interesting perspective on energy demand and conservation by the CEO of Shell.
On the Record: Jeroen van der Veer
By Alex Markels, U.S. News Senior Writer
Posted 8/19/07
You might expect Royal Dutch Shell CEO Jeroen van der Veer to pooh-pooh the recent surge of interest in renewable energy. But despite his contention that the public is naively placing too much faith in solar and wind power, the 59-year-old Dutchman has raised eyebrows by claiming that the world can meet its energy demand and control greenhouse gases—while still depending on fossil fuels for 70 percent of energy supplies.
http://www-origin.usnews.com/usnews/biztech/articles/070819/27record.htm [Broken]
Is conservation enough?
It's extremely important, but no, conservation isn't enough. Even with conservation, energy demand will double by the year 2050, and more and more of the world's conventional oil fields are going into decline. So supplies of oil and gas that are easy to extract will struggle to keep up with demand, which means increasing use of unconventional fossil fuels, such as oil sands, including and especially coal. Coal is more than twice as CO2 intensive as natural gas, and abundantly available.
. . . .
drankin
How much does the American public know about or enable energy conservation? It seems to me that by far most effort of this kind is by industry and government.
Has the informational drive to change public consumption through behavior been successful, or is the primary means the pocketbook? If only we had foreseen the current rise in prices and invested instead in conservation technology for, and responsibility by, everyone.
I think that educating individuals to be constantly aware of what energy is and how it manifests in practical ways would be as effective a means of saving as economic penalties. America's tradition of wasting resources can be corrected by close analysis and recognition of common activities.
What would be your initiatives for wise use of energy by the public?
Energy is conserved more when it costs more. If the government wants to limit consumption all it has to do is increase taxes on it. Nothing educates a person more than his pocketbook. How is it we have a "tradition" of wasting resources, exactly? Leaving the bathroom light on? Going for a Sunday drive into the countryside?
wolram
Gold Member
drankin
LOL, now if we shut down Las Vegas, Disneyland/world, and all other amusement destinations, we could conserve lots of energy! We can all play frisbee and board games while proudly clamoring about how much energy we are saving by not having an entertainment industry anymore!
proudly clamoring about how much energy we are saving by not having an entertainment industry anymore!
I think we'd see many more benefits than just energy conservation if we shut down the entertainment industry, even if temporarily.
Just think, we might get to see some actual news in the media instead of reading about people who can't figure out how underwear works.
It's not just that the entertainment industry uses a lot of energy; it also takes a lot of energy just to get there.
Do you all think home energy audits are very effective in reducing domestic energy consumption?
wolram
Gold Member
It seems whatever we save someone else uses; local authorities seem to put up more street lights every day. Soon there will be no dark place on the planet.
I just came back from a Central States Combustion meeting and the prospects are not that great. The US uses about a quarter of the world's oil supply and imports most of it from countries that are not the US's best friends. The US is not especially wasteful, contrary to many beliefs: its quarter of the world's energy consumption is backed by a quarter of the world's wealth, so in that regard we are just 1/1.
Ethanol (especially from corn) has proven to produce more CO2 emission than it helps to reduce. Furthermore, it raises global food prices, which are already at an all-time high.
Biodiesel is a potential candidate; coal-derived gases (syngas) and Fischer-Tropsch liquid fuels are also likely candidates for the future.
We do not realize how much we have been spoiled with oil that can be sucked out of the ground and is pretty much ready to go. The processing costs for any alternative are much higher.
The last I checked, there were several coal-fired ethanol plants.
lisab
Staff Emeritus
Gold Member
Do you all think home energy audits are very effective in reducing domestic energy consumption?
I suppose an audit would give the homeowner useful information about energy use.
Also, it would be nice if I had a way to know exactly how much energy I'm using, real-time. Right now, if I want to know how much I'm consuming, I have to go out to the meter to see how fast the wheel is spinning.
Wouldn't it be nice to have a gauge that hangs on the wall showing how much energy is being used by your house, real-time?
russ_watters
Mentor
Do you all think home energy audits are very effective in reducing domestic energy consumption?
The EPA did a study a while back (I'll see if I can find it) that found a disturbingly high fraction of houses had problems with their HVAC systems, mostly due to poor installation. I think the fraction was something like 80%.
When I moved into my house, I found undersized return ductwork and a major return duct connection that had come apart in the attic, so the primary 2nd floor return air was pulled from the attic. I can only assume that the previous owner had been living with that since the house was built (it was two years old when I bought it).
I also have a gable exhaust fan where the installer rolled up the installation manual and shoved it between the blades of the fan, presumably so he wouldn't lose it (though sabotage is a real possibility). The motor, of course, burned out, and the previous owner never had the advantage of that fan.
I also installed a whole-house fan in the ceiling of the second floor. This makes a huge difference in the spring and fall.
I also added insulation to my basement (it was probably a code violation that it was missing). My energy costs this winter were much lower than last winter's, but it is tough to gauge the effect of the insulation - it was a warmer year.
I also hope to add air valves to my ductwork to better target the heating/cooling.
What is the effect of all these measures? I'm not really sure, but if I had to guess, 20-40% (of the hvac usage). I'm tracking my energy use, but unfortunately, I don't have info about the previous owner's use.
lisab
Staff Emeritus
Gold Member
The EPA did a study a while back (I'll see if I can find it) that found a disturbingly high fraction of houses had problems with their HVAC systems, mostly due to poor installation. I think the fraction was something like 80%.
We recently had someone come to our house to give us an estimate for a heat pump. He inspected our duct work and was surprised to see that it was done correctly.
Based on what he said, the 80% figure may be low (at least in our area).
Energy conservation needs to go beyond just conservation of electricity, and most people just don't "get" that. Oil has become a greater problem. The current high oil prices are not because of speculation, but rather because of increasing demand combined with supply that can't keep up. Peak oil is real, and is expected to happen between now and the next 10 years, and the US is dragging its feet when it comes to getting ready for this. We have invested so much in a lifestyle (namely the suburban lifestyle) that is very energy inefficient (it requires you to drive everywhere) and depends entirely on cheap oil to keep going. So when that cheap oil runs dry, what will happen to suburbia?
If you're interested in this go watch "the end of suburbia" and "a crude awakening: the coming oil crisis"
Energy conservation needs to go beyond just conservation of electricity, and most people just don't "get" that. Oil has become a greater problem. The current high oil prices are not because of speculation, but rather because of increasing demand combined with supply that can't keep up.
I am not so sure about the speculation part. It sure as heck drove up the price of housing. I saw a lot of "Sold Out" signs in new housing developments during the boom.
I have yet to pull into a gas station that had a sign up indicating that they are out of gasoline.
Is it coincidence that the realization of global warming coincides with runaway oil demand?
Try this one: "Cent-A-Meter Wireless Electricity Monitor"
RRP - $256.00. The ebay link: ...... (I haven't made 15 posts yet, so I couldn't post the link.) $200 for a monitor, compared to energy efficient light bulbs and turning your gadgets off.
Just go for the bulbs and your off button.
vanesch
Staff Emeritus
Gold Member
CO2 is one of many pollutants that are byproducts of shortsighted energy cycles, and one of great current importance. But let's say we have an "inexhaustible" supply of energy, as from nuclear fusion.
The point I was trying to make was that it isn't "energy consumption" as such that is at issue, but rather the ways we produce it.
Aside from the creation of radioactive isotopes
This is really a minor issue. Of all energy-related pollutants, nuclear waste is the easiest to handle and control, because of its finite lifetime and its extremely small quantities.
, our greed for such energy may cause changes related to the production of heat - those eventually affecting climate, human health, the biosphere, and energy utilization redundancy (where systems attempt to fight the 2nd law of thermodynamics, like a refrigerator adding to air conditioning load).
Does thermal energy play a part in the near future of energy concerns?
Eventually, yes. But we are still several orders of magnitude away from that. Solar power (see http://en.wikipedia.org/wiki/Solar_power) gives Earth 174 PW, while average energy consumption worldwide (all uses combined) is of the order of 20 TW (electricity is of the order of 1.6 TW).
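A rough back-of-the-envelope check of those orders of magnitude (a sketch using only the figures quoted above):

solar_in = 174e15   # watts of solar power reaching Earth (figure quoted above)
human_use = 20e12   # watts of average worldwide energy consumption
print(f"solar input / human use = {solar_in / human_use:,.0f}")  # ~8,700x

So anthropogenic waste heat is still nearly four orders of magnitude below the natural solar input.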
Let's ask it differently: suppose you have solar-heated warm water in large quantities. Imagine you have tons of purified rainwater. Is it "wasteful" to stay 45 minutes under a hot shower under these conditions?
Or, imagine you have a sail boat. Is it wasteful to "use a lot of wind energy" to go on several trips?
Or let's say that you have wind-generated electrical power with a windmill in your garden and a set of batteries in your basement. Is it wasteful to leave your TV set in stand-by mode? Are you going to "save the planet" by switching it off?
|
2019-12-06 16:33:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3627285361289978, "perplexity": 1932.8028930203889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00256.warc.gz"}
|
http://superuser.com/questions/105803/restore-previous-file-selection-in-total-commander
|
# Restore previous file selection in Total Commander
Right now I'm not sure whether I used this feature in Total Commander or in Volkov Commander. I cannot find anything like that in Total Commander. Does it exist there?
I very carefully selected a few files from a directory, and instead of moving them I only copied them. So I need to select them again and move...
## 1 Answer
/ on the numpad "restores" the last selection. Extremely useful in combination with numpad * (invert selection).
Just to elaborate, this can also be found via the menu: Mark > Restore Selection – Assad Ebrahim Dec 17 '12 at 7:09
|
2015-01-27 14:24:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42022886872291565, "perplexity": 2298.1900092224428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121981339.16/warc/CC-MAIN-20150124175301-00131-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://www.springerprofessional.de/basics-of-continuum-plasticity/15734858
|
## About this Book
This book describes the basic principles of plasticity for students and engineers who wish to perform plasticity analyses in their professional lives, and provides an introduction to the application of plasticity theories and basic continuum mechanics in metal forming processes.
This book consists of three parts. The first part deals with the characteristics of plasticity and instability under simple tension or compression and plasticity in beam bending and torsion. The second part is designed to provide the basic principles of continuum mechanics, and the last part presents an extension of one-dimensional plasticity to general three-dimensional laws based on the fundamentals of continuum mechanics. Though most parts of the book are written in the context of general plasticity, the last two chapters are specifically devoted to sheet metal forming applications. The homework problems included are designed to reinforce understanding of the concepts involved.
This book may be used as a textbook for a one semester course lasting fourteen weeks or longer. This book is intended to be self-sufficient such that readers can study it independently without taking another formal course. However, there are some prerequisites before starting this book, which include a course on engineering mathematics and an introductory course on solid mechanics.
## Table of Contents
### Chapter 1. Introduction
The following are foundational assumptions for continuum mechanics:
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 2. Plasticity Characteristics (in Simple Tension/Compression)
As discussed in Chapter 1, material properties, or more specifically mechanical properties, are required in addition to Newton's laws to solve the deformation of materials under external forces in continuum mechanics. However, mechanical properties that address all the relationships between stress and strain measures under various conditions are so diverse that measuring them, even only partially, remains one of the most challenging technical areas.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 3. Instability in Simple Tension Test
As discussed in Chap. 2, the UTS (ultimate tensile strength) point observed in the simple tension test for both sheet and bulk specimens is important as it is the limit of uniform deformation in the gauge length, which is analyzed here.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 4. Physical Plasticity
Physical plasticity deals with issues relevant to plastic deformation in the microstructural level, which is therefore beyond the scope of continuum plasticity. However, a few basic features are briefly reviewed here, since these provide some theoretical foundations of continuum plasticity, as will be discussed later.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 5. Deformation of Heterogeneous Structures
As previously discussed, plastic deformation occurs by dislocation sliding or twinning, driven by shear stress; however, dislocation sliding is predominant at room temperature for most metals, with a few exceptions. As for the shear stress required to induce plastic deformation, known as the critical shear stress, its true magnitude is much lower than the theoretical value, with sliding facilitated by dislocations on the single-crystal level.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 6. Pure Bending and Beam Theory
The deflection of a beam , defined as a uniform long straight slender bar under transverse loading, is discussed here.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 7. Torsion
Torsion of a cylindrical shaft is an important engineering problem especially since it has the exact analytical solution of the linear isotropic elasticity with infinitesimal deformation for a circular cylinder. The uniform circular cross-section may have an arbitrary size (with the radius of ‘a’ here). Note that the pure bending also has the exact analytical solution of the linear isotropic elasticity but its object may have an arbitrary cross-sectional shape unlike the case of torsion here, which is only for circular cross-sections. The infinitesimal elastic solution is extended here for plasticity with finite deformation considering the one-dimensional elasto-perfect plasticity as a first order approximate solution.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 8. Stress
In continuum mechanics, an element of a whole body has a mass (dm: the differential mass) and a volume (dV: the differential volume) as well as a shape. The shape is typically considered to be a hexahedron whose six surfaces are aligned with the coordinate system as shown in Fig. 8.1. The coordinate system in this whole book is the rectangular Cartesian coordinate system, which is denoted as x-y-z or 1-2-3 (for the indicial notation), with unit base vectors, $${\mathbf{e}}_{x} ( = {\mathbf{e}}_{1} )$$, $${\mathbf{e}}_{y} ( = {\mathbf{e}}_{2} )$$ and $${\mathbf{e}}_{z} ( = {\mathbf{e}}_{3} )$$.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 9. Tensors
The stress with nine components derived in Eq. (8.1) is a tensor. The main task of tensors is to transform one vector to another; i.e.,
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 10. Gradient, Divergence and Curl
The differential operator nabla, $$\nabla$$, is defined as $$\nabla = {\mathbf{e}}_{x} \frac{\partial}{\partial x} + {\mathbf{e}}_{y} \frac{\partial}{\partial y} + {\mathbf{e}}_{z} \frac{\partial}{\partial z}$$.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 11. Kinematics and Strain
Consider the changes in position and shape of a continuum body with time, t, as shown in Fig. 11.1. Then,
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 12. Yield Function
In the simple tension test, materials deform elastically until stress reaches the yield point, after which plastic deformation starts as schematically shown in Fig. 2.2. Since there are nine stress components (or six components, if its symmetry is considered), combined loading of some or all of those components forms a yield surface, which defines a boundary of elasticity.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 13. Normality Rule for Plastic Deformation
As for plastic deformation, in order to account for the deformation path (or history) dependence of mechanical properties in plasticity, the plastic (natural) strain increment, $$d{\varvec{\upvarepsilon}}^{p} ( {=} {\mathbf{D}}^{p} dt)$$, is extensively applied, as discussed in Remark #11.4.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 14. Plane Stress State for Sheets
When plasticity is applied to thin sheets such as membranes, plates and shells, the yield function and the plastic strain increment function as well as their applications to the dual normality rules become simpler.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 15. Hardening Law for Evolution of Yield Surface
In the past few decades, a few experiments have been conducted to better understand the evolution of the yield surface during plastic deformation.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 16. Stress Update Formulation
The constitutive law of plasticity consists of three elements: the yield surface defined by the yield function to describe the elasticity limit, the normality rule to define the directions of plastic deformation (for elasto-plasticity) or the stress (for rigid-plasticity) and hardening behavior to describe the yield surface evolution during plastic deformation.
Kwansoo Chung, Myoung-Gyu Lee
### Chapter 17. Formability and Springback of Sheets
The main applications of metal plasticity include the process analysis and design of metal forming, which broadly consists of sheet forming and bulk forming.
Kwansoo Chung, Myoung-Gyu Lee
### Backmatter
|
2018-10-23 06:03:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5591797828674316, "perplexity": 3117.275448331639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516071.83/warc/CC-MAIN-20181023044407-20181023065907-00107.warc.gz"}
|
https://physics.stackexchange.com/questions/625680/massless-particles-and-time-synchrony
|
# Massless particles and time synchrony
Someone once tried to explain to me why neutrino oscillation implies that neutrinos have mass, and I understood it as follows:
1. Change requires time.
2. Massless particles travel at the speed of light.
3. Objects travelling at the speed of light do not experience time.
4. Therefore, if neutrinos oscillate, then they must have mass.
That argument seems to correspond to the most highly rated answer to Why do neutrino oscillations imply nonzero neutrino masses?.
Following from that argument, I considered that a massless particle can then be described as a line connecting two points in 3-space: the point where the particle was emitted, and the point where it was absorbed. Then time synchronization can be defined as follows: two events E1 and E2 occur at the same time if a photon emitted at E1 is absorbed at E2. But then I considered the scenario of an observer that is stationary with respect to a mirror t light-seconds away. When the observer views its own reflection, it will see itself as it was 2t seconds in the past. This contradicts the definition of time synchrony above, which requires that the light emission and observation take place at the same time (unless the process of reflection takes time, but then the time to reflect a photon depends on how far the mirror is from the observer, which also doesn't make sense). So clearly this understanding is faulty.
What is my mistake here?
|
2021-07-28 01:02:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9354349970817566, "perplexity": 352.0071732432021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153515.0/warc/CC-MAIN-20210727233849-20210728023849-00363.warc.gz"}
|
http://tug.org/pipermail/texhax/2010-October/015884.html
|
Michael Barr barr at math.mcgill.ca
Fri Oct 29 21:19:24 CEST 2010
On Fri, 29 Oct 2010, Philip Taylor (Webmaster, Ret'd) wrote:
> Michael Barr wrote:
>
>> \documentstyle{article}
>> \def\bbrack#1{[\![#1]\!]}
>> \begin{document}
>> $\bbrack{\widehat c =\widehat d}$
>> \end{document}
>>
>> Yes, I could add extra space after the argument, but that is a real
>> kludge. My real question is why can't I get the \widehat to actually be
>> centered over its argument. As you can see, it starts halfway along the
>> "d" and ends well to the right. This is also true of the one on the "c"
>> so it is not just centering over an ascender.
>
> Is it not centered w.r.t. the italic slope?
> ** Phil.
I don't really care what it is centered with respect to. It looks awful, and I don't expect TeX to produce awful-looking output. In all cases it ought to be centered over the glyph, not over some extension of the slope.
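For readers who land on this thread: one workaround (not suggested in the thread itself) is plain TeX's \skew, which nudges a math accent sideways by a given number of math units; the amount below is illustrative and must be tuned to the font.

\documentclass{article}
\newcommand{\bbrack}[1]{[\![#1]\!]}
\begin{document}
% \skew{<mu>}{<accent>}{<symbol>}: a negative amount pulls the
% accent back to the left, over the glyph rather than its slope.
$\bbrack{\skew{-2}\widehat{c} = \skew{-2}\widehat{d}}$
\end{document}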
|
2017-10-22 23:03:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855656623840332, "perplexity": 3098.0593035495353}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825473.61/warc/CC-MAIN-20171022222745-20171023002745-00109.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=96121
|
## HW chapter 1 question 3
$c=\lambda v$
Nabeeha Khan 1D
Posts: 30
Joined: Fri Apr 06, 2018 11:03 am
### HW chapter 1 question 3
For question 3 in chapter 1's homework, why is A not the correct answer?
Jasmine Emtage-1J
Posts: 24
Joined: Fri Apr 06, 2018 11:02 am
### Re: HW chapter 1 question 3
The speed of light is constant, as shown by the equation c = λν. If the frequency decreases, the wavelength must increase to keep the speed at the same constant value.
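A quick numeric illustration of the tradeoff (a sketch; the wavelengths are approximate):

c = 2.998e8  # speed of light in m/s
for nu in (7.5e14, 6.0e14, 4.3e14):  # roughly violet, green, red
    lam = c / nu  # c = lambda * nu, so lambda = c / nu
    print(f"nu = {nu:.1e} Hz  ->  lambda = {lam * 1e9:.0f} nm")
# As the frequency decreases, the wavelength increases; their product stays c.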
Chem_Mod
Posts: 19138
Joined: Thu Aug 04, 2011 1:53 pm
Has upvoted: 820 times
### Re: HW chapter 1 question 3
Please post the full details of the question when asking about book problems, so everyone can understand without having to open the book.
|
2021-01-20 16:12:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32334190607070923, "perplexity": 3132.7732014947755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00045.warc.gz"}
|
http://prob140.org/textbook/chapters/Chapter_19/01_Convolution_Formula
|
# The Convolution Formula
Let $X$ and $Y$ be discrete random variables and let $S = X+Y$. We know that a good way to find the distribution of $S$ is to partition the event $\{ S = s \}$ according to values of $X$. That is, $P(S = s) = \sum_x P(X = x, Y = s-x)$.
If $X$ and $Y$ are independent, this becomes the discrete convolution formula: $P(S = s) = \sum_x P(X = x)P(Y = s-x)$.
This formula has a straightforward continuous analog. Let $X$ and $Y$ be continuous random variables with joint density $f$, and let $S = X+Y$. Then the density of $S$ is given by $f_S(s) = \int_{-\infty}^{\infty} f(x, s-x)\,dx$,
which becomes the convolution formula when $X$ and $Y$ are independent: $f_S(s) = \int_{-\infty}^{\infty} f_X(x)\,f_Y(s-x)\,dx$.
### Sum of Two IID Exponential Random Variables
Let $X$ and $Y$ be i.i.d. exponential $(\lambda)$ random variables and let $S = X+Y$. For the sum to be $s > 0$, neither $X$ nor $Y$ can exceed $s$. The convolution formula says that the density of $S$ is given by $f_S(s) = \int_0^s \lambda e^{-\lambda x}\,\lambda e^{-\lambda (s-x)}\,dx = \lambda^2 s e^{-\lambda s}$ for $s > 0$.
That’s the gamma $(2, \lambda)$ density, consistent with the claim made in the previous chapter about sums of independent gamma random variables.
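A quick symbolic check of that integral (a sketch using sympy; not part of the text):

import sympy as sp

x, s, lam = sp.symbols('x s lam', positive=True)
# Convolution integrand for two i.i.d. exponential(lam) variables;
# it vanishes unless 0 < x < s, hence the limits.
f_S = sp.integrate(lam*sp.exp(-lam*x) * lam*sp.exp(-lam*(s - x)), (x, 0, s))
print(sp.simplify(f_S))  # lam**2 * s * exp(-lam*s), the gamma(2, lam) density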
Sometimes, the density of a sum can be found without the convolution formula.
### Sum of Two IID Uniform $(0, 1)$ Random Variables
Let $S = U_1 + U_2$ where the $U_i$’s are i.i.d. uniform on $(0, 1)$. The gold stripes in the graph below show the events $\{ S \in ds \}$ for various values of $S$.
The joint density surface is flat. So the shape of the density of $S$ depends only on the lengths of the stripes, which increase linearly between $s = 0$ and $s = 1$ and then decrease linearly between $s = 1$ and $s = 2$. So the density of $S$ is triangular. The height of the triangle is 1 since the area of the triangle has to be 1.
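A small simulation confirms the triangular shape (a sketch; the binning is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0, 1, 200_000) + rng.uniform(0, 1, 200_000)
hist, edges = np.histogram(s, bins=20, range=(0, 2), density=True)
for left, h in zip(edges[:-1], hist):
    mid = left + 0.05
    exact = mid if mid <= 1 else 2 - mid  # triangular density on (0, 2)
    print(f"s = {mid:.2f}: simulated {h:.2f}, exact {exact:.2f}")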
At the other end of the difficulty scale, the integral in the convolution formula can sometimes be quite intractable. In the rest of the chapter we will develop a different way of identifying distributions of sums.
|
2019-04-20 20:13:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9410281181335449, "perplexity": 89.88924659675217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530040.33/warc/CC-MAIN-20190420200802-20190420222802-00532.warc.gz"}
|
https://blender.stackexchange.com/questions/43686/how-can-i-remove-a-linked-library-with-python?noredirect=1
|
# How can I remove a linked library with Python?
I'm working on an add-on. The purpose is to link and un-link groups from another blend file. Thanks to stack exchange and the Blender documentation this works pretty well.
This means I can load a library and import a group from it.
with bpy.data.libraries.load(path, link=True) as (sourceData, targetData):
    targetData.groups = sourceData.groups
Then I search the library and extract the group to create an instance of the group:
instance = bpy.data.objects.new(name, None)
instance.dupli_type = 'GROUP'
instance.dupli_group = group
The result is a dupli-group object that links to the other blend file (the link is persistent and will survive a reload ... that is what I want).
My idea is to do the opposite. If the user does not want this link anymore, I would like to get rid of
• the instance
• the group
This works pretty nicely too for this instance:
context.scene.objects.unlink(instance)
bpy.data.objects.remove(instance)
and the group:
group.user_clear()
bpy.data.groups.remove(group)
I saw (in the outliner/Blendfile) that there is still a user of the library. I guess it is the group object. So I removed that too:
for object in library.users_id:
    object.user_clear()
    bpy.data.objects.remove(object)
Now the outliner shows me the library without any user ... but I still cannot remove the library itself.
It seems there is no API call to do that. bpy.data.libraries.remove() does not exist. There is a load but no unload.
Saving/Loading the file removes the library. But this is far away from any comfort.
Reloading the blend file destroys the current context. E.g., undo would not be possible anymore. It also involves unnecessary file operations, which leads to another cycle in the backup files. Besides that, it simply feels incorrect to use such a workaround (imagine you needed to restart your PC each time you deleted a file).
I found this one: Q: Proper way to remove unused linked Group data-blocks but this does not answer my question as I want to get rid of the library not just the group.
Do I miss something?
Thank you Monster
• Great question! Would be awesome to have this as an add-on. The major issue is removing the group IMO; unfortunately I also could not find a proper way of doing this. Why not start a bounty to get more attention?
– p2or
Jan 15 '16 at 10:49
• I really think this is not implemented. I doubt a bounty will help there any further. Jan 15 '16 at 19:53
Quite an old thread, but it seems there is finally a solution available. Newer versions of Blender provide the batch_remove method in the bpy.types.BlendData type. So you can call:
bpy.data.batch_remove(ids=(my_library,))
my_library is a bpy.types.Library object in this case, not a string/name.
Keep in mind that, to date, the documentation states that this method is considered experimental, but as long as you don't deal with blend-file-critical data blocks like scenes, it seems you should be fine.
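A minimal usage sketch (it must run inside Blender; the library path here is hypothetical):

import bpy

lib_path = "//assets.blend"  # hypothetical path of the linked library
lib = next((l for l in bpy.data.libraries if l.filepath == lib_path), None)
if lib is not None:
    # Removes the library datablock itself, not just its users.
    bpy.data.batch_remove(ids=(lib,))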
Using a Python "with" statement, you don't need to unload/remove/delete the library, because when you leave the block of the with, the resources are cleanly released regardless of the outcome of the code in the block. I suggest this post to better understand the use of with.
• Unfortunately I do not see how this helps. The "with" clause is used along with load() - which is a (temporally) separate step. I do not know what exactly the context managers are doing besides closing the file? My experiments didn't show any success in getting rid of the library reference. Jul 6 '16 at 6:52
• You don't need to close the library, because when you go out of the with statement you have already closed it Jul 6 '16 at 6:58
• My aim is less to close it. It automatically does this as you correctly state in your answer. I want to remove the reference to the library after the last user gets removed. Jul 6 '16 at 7:10
|
2021-09-23 13:13:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21333318948745728, "perplexity": 1093.8482942284509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057421.82/warc/CC-MAIN-20210923104706-20210923134706-00409.warc.gz"}
|
https://gmatclub.com/forum/the-table-above-shows-the-quantities-and-prices-per-pound-of-three-218593.html
|
# The table above shows the quantities and prices per pound of three
Intern
Joined: 15 Mar 2015
Posts: 11
Concentration: Operations, Strategy
GMAT 1: 550 Q35 V30
WE: Operations (Manufacturing)
The table above shows the quantities and prices per pound of three [#permalink]
14 May 2016, 23:50
The table above shows the quantities and prices per pound of three types of nuts that are combined to make a nut mixture. The mixture contains twice as many pounds of cashews as pounds of almonds, and 3 times as many pounds of walnuts as pounds of almonds. What is the cost of the mixture, in dollars, expressed in terms of a?
A) 4a
B) 6a
C) 13.5a
D) 20.5a
E) 24.5a
[Attachment: PS Table.JPG — the table of quantities and prices per pound; from the solutions below, the prices are $3.50 per pound for cashews, $4.00 for almonds, and $4.50 for walnuts.]
Math Expert
Joined: 02 Aug 2009
Posts: 7958
Re: The table above shows the quantities and prices per pound of three [#permalink]
15 May 2016, 00:02
TurgCorp wrote:
The table above shows the quantities and prices per pound of three types of nuts that are combined to make a nut mixture. The mixture contains twice as many pounds of cashews as pounds of almonds, and 3 times as many pounds of walnuts as pounds of almonds. What is the cost of the mixture, in dollars, expressed in terms of a?
A) 4a
B) 6a
C) 13.5a
D) 20.5a
E) 24.5a
hi
1) cashew = c and almonds = a
Quote:
The mixture contains twice as many pounds of cashews as pounds of almonds
$$c=2a$$
2) walnuts = w and almonds = a
Quote:
3 times as many pounds of walnuts as pounds of almonds.
$$w=3a$$
Let's add the cost of all three from the given table:
$$3.5c + 4a + 4.5w = 3.5(2a) + 4(a) + 4.5(3a) = 7a + 4a + 13.5a = 24.5a$$
E
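A quick numeric check of the algebra (any positive value of a works):

a = 1.0
c, w = 2*a, 3*a               # given relations between the quantities
cost = 3.5*c + 4.0*a + 4.5*w  # prices per pound from the table
print(cost / a)               # 24.5, matching answer choice E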
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4762
Location: India
GPA: 3.5
Re: The table above shows the quantities and prices per pound of three [#permalink]
15 May 2016, 10:56
TurgCorp wrote:
The table above shows the quantities and prices per pound of three types of nuts that are combined to make a nut mixture. The mixture contains twice as many pounds of cashews as pounds of almonds, and 3 times as many pounds of walnuts as pounds of almonds. What is the cost of the mixture, in dollars, expressed in terms of a?
A) 4a
B) 6a
C) 13.5a
D) 20.5a
E) 24.5a
Solved in a tabular way (attachment: Capture.PNG).
##### General Discussion
Intern
Joined: 06 Apr 2015
Posts: 1
Re: The table above shows the quantities and prices per pound of three [#permalink]
01 Aug 2016, 17:46
Hi, can anyone explain how to decipher phrases such as "twice as many pounds of cashews as almonds"? I always get these statements wrong. Need help ASAP.
Intern
Joined: 06 Jul 2014
Posts: 2
The table above shows the quantities and prices per pound of three [#permalink]
08 Jan 2017, 07:00
Abe87 wrote:
Hi Can any one explain how to decipher phrases such as "Twice as many pounds of cashews as almonds" etc ? I always get these statements wrong. Need help asap.
Hi, it's easy; you can just remember it this way:
"the variable at the center (cashews)" = "the factor by which it is greater/lesser (twice)" × "the variable at the end (almonds)"
This works all the time; you can just try it out.
Don't try to decipher it; just write it this way.
Hope this helps...
Director
Status: Professional GMAT Tutor
Affiliations: AB, cum laude, Harvard University (Class of '02)
Joined: 10 Jul 2015
Posts: 706
Location: United States (CA)
Age: 39
GMAT 1: 770 Q47 V48
GMAT 2: 730 Q44 V47
GMAT 3: 750 Q50 V42
GRE 1: Q168 V169
WE: Education (Education)
The table above shows the quantities and prices per pound of three [#permalink]
06 Apr 2018, 20:17
Just "make it true" by setting a=1, c=2 and w=3. You now have 6 pounds of cashews that average out to slightly more than $4 per pound.$4/pound x 6 pounds = \$24, so the answer must be Choice E.
-Brian
|
2019-10-15 11:26:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7692592740058899, "perplexity": 7913.520839528194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986658566.9/warc/CC-MAIN-20191015104838-20191015132338-00097.warc.gz"}
|
http://www.turkmath.org/beta/seminer.php?id_seminer=2212
|
#### Sabancı University Algebra Seminars
Height growth rates in dynamical orbits
Abstract: Let $V$ be a projective variety defined over a number field. A height function $h: V(\overline{\mathbb{Q}})\rightarrow\mathbb{R}$ is a tool for measuring the "arithmetic complexity" of the points of $V$. In particular, one can often use height functions to detect geometric properties of the underlying variety. In this talk, we apply this philosophy to discrete dynamical systems. Namely, given a self-map $f:V\rightarrow V$ and a point $P\in V$, we are interested in the growth rate of $h(f^n(P))$ as we repeatedly iterate the map. What can this growth rate tell us about the variety $V$, the map $f$, and the point $P$? To investigate these questions, we survey basic properties of height functions, discuss several historically significant cases (e.g., elliptic curves and projective space), and motivate additional topics in the burgeoning field of arithmetic dynamics.
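As a concrete illustration (not part of the abstract): for the naive height $h(p/q) = \log\max(|p|,|q|)$ and the degree-2 map $f(x) = x^2 + 1$ on the rationals, $h(f^n(P))$ grows like $C \cdot 2^n$, and a few iterates already show the ratio of successive heights approaching $\deg f = 2$:

from fractions import Fraction
from math import log

def h(x: Fraction) -> float:
    # naive height of a rational point p/q in lowest terms
    return log(max(abs(x.numerator), abs(x.denominator)))

f = lambda x: x*x + 1
x, prev = Fraction(1, 2), None
for n in range(1, 7):
    x = f(x)
    if prev is not None:
        print(f"n = {n}: h = {h(x):.2f}, ratio to previous = {h(x)/prev:.2f}")
    prev = h(x)
# the ratios tend to 2, the degree of f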
|
2019-11-22 16:36:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3942539095878601, "perplexity": 255.91124972366086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671363.79/warc/CC-MAIN-20191122143547-20191122172547-00349.warc.gz"}
|
https://www.physicsforums.com/threads/cant-seem-to-figure-out-this-limit.922035/
|
# Homework Help: Can't seem to figure out this limit
1. Aug 6, 2017
### ckyborg4
1. The problem statement, all variables and given/known data
I'm trying to do this limit based on a previous thread ( https://www.physicsforums.com/threads/proving-n-x-n-e-x-integrated-from-0-to-infinity.641947/#_=_ )
I got up to the last part of thread where I need to find the limit of:
limit as $x$ approaches infinity of: $-x^{k+1}/e^x$
2. Relevant equations
3. The attempt at a solution
I know that this limit somehow must equal to zero in order to get the right answer, but I did l'Hopital's rule 4 times and it just keeps on going to infinity.
I attached the working out of the whole problem
Really appreciate it if someone could help
2. Aug 6, 2017
### LCKurtz
If you keep using L'Hospital's rule with a polynomial in the numerator and an exponential in the denominator, the numerator's degree will eventually become 0 while the exponential remains in the denominator.
3. Aug 6, 2017
### ckyborg4
I can't seem to get it to work here because the exponent of the polynomial has degree k. If the degree of the polynomial is a variable constant, I'm not sure how I can get it to zero.
4. Aug 6, 2017
### PetSounds
What's the derivative of any constant?
5. Aug 7, 2017
### pasmith
Keep going. You must apply l'Hopital $k + 1$ times in total before you get a constant in the numerator.
Alternatively, as every term in the series $e^x = \sum_{n=0}^\infty \frac{x^n}{n!}$ is strictly positive when $x > 0$, we have $e^x > \frac{x^{k+2}}{(k+2)!}$ and hence $$0 < \frac{x^{k+1}}{e^x} < \frac{(k+2)!x^{k+1}}{x^{k+2}} = \frac{(k+2)!}{x}.$$ Now use the squeeze theorem.
6. Aug 7, 2017
### StoneTemplePython
This is how I'd do it, as I tend to think l'Hopital is a rather unintuitive power-tool that should be used as a last resort.
At a minimum, I'd change it to
$0 \leq \frac{x^{k+1}}{e^x} \leq ...$
though, otherwise that strict inequality would seem to cause problems, as in the limit we have $0 \lt 0$
|
2018-05-23 09:29:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080334067344666, "perplexity": 529.5021458271033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865468.19/warc/CC-MAIN-20180523082914-20180523102914-00228.warc.gz"}
|
https://zbmath.org/?q=an:05166527&format=complete
|
## Holomorphic Morse inequalities and Bergman kernels. (English) Zbl 1135.32001
Progress in Mathematics 254. Basel: Birkhäuser (ISBN 978-3-7643-8096-0/hbk). xiii, 422 p. (2007).
This book presents in detail various results and techniques relative to the holomorphic Morse inequalities and the asymptotic expansion of the Bergman kernel. Although this book is intended for specialists, it is self-contained and will certainly interest graduate students. The heart of the book explains the analytic localization techniques developed by Bismut-Lebeau and their interaction with complex, Kähler and symplectic geometry. A large number of applications are given in detail: several proofs of the fundamental Kodaira embedding theorem, a solution of the Grauert-Riemenschneider and Shiffman conjectures, a compactification of complete Kähler manifolds of pinched negative curvature, asymptotics of the Ray-Singer analytic torsion, the Berezin quantization, weak Lefschetz theorems, etc.
Let us now describe the structure of the book precisely.
The first chapter is a general introduction to connections on the tangent bundle (Levi-Civita, Chern, Bismut connections), Dirac operators and the Lichnerowicz formula, and gives in detail the Bochner-Kodaira-Nakano formula with boundary term. The key point is the so-called spectral gap property for higher tensor powers of line bundles, which asserts that on a projective manifold $$M$$, with $$L$$ an ample line bundle on $$M$$ and $$p \in {\mathbb N}$$, the spectrum of the Kodaira Laplacian $$\square_p$$ on $$L^p$$ satisfies $\text{Spec}(\square_p) \subset \{0\} \cup ]2\pi p- C_L, +\infty[$
for some constant $$C_L >0$$. Note that a version of this result holds for Dirac operators. The end of this dense chapter is dedicated to explaining Demailly's holomorphic (strong and weak) Morse inequalities. The proof here is based on an asymptotic expansion of the heat kernel, i.e., the authors prove the local formula
$\exp\left(-\frac{u}{p}\square_p\right)(x,x)= \frac{1}{(2\pi)^n}\frac{\det([R_L]) e^{2u \omega_d}}{\det(1-e^{-2u[R_L]})}p^n + o(p^n)$
where $$\dim_{\mathbb{C}}M=n$$, $$u>0$$, $$x$$ is a point of $$M$$, and $$[R_L]\in \text{End}(T^{(1,0)}M)$$ is the hermitian matrix associated to the curvature $$R_L$$ of $$L$$, i.e. $$R_L(X,Y)=\langle [R_L]X,Y\rangle$$. Moreover, $$\omega_d\in \text{End}(\Lambda(T^{*(0,1)}M))$$ denotes the term $$\omega_d=-\sum_{l,m} R_L(w_l,w_m) \bar{w}^m \wedge i_{\bar{w}_l}$$, for $$w_j$$ a local orthonormal frame of $$T^{(1,0)}M$$.
After recalling some basic facts of complex geometry, the second chapter gives some fundamental characterizations of Moishezon manifolds. A compact connected complex manifold $$M$$ is Moishezon if it has $$\dim(M)$$ algebraically independent meromorphic functions. The authors give the proof of the Siu-Demailly criterion, which answers the Grauert-Riemenschneider conjecture (a manifold is Moishezon if and only if it has a quasi-positive sheaf). The rest of the chapter is about some recent results related to that question. For instance, Moishezon manifolds are characterized in terms of integral Kähler currents (Shiffman's conjecture) and a singular version of the holomorphic Morse inequalities is given.
Chapter 3 presents an $$L^2$$-Hodge theory on non-compact hermitian manifolds, which leads to holomorphic Morse inequalities for the $$L^2$$-cohomology in that context. This in turn yields an extension of the Siu-Demailly criterion to compact complex spaces with isolated singularities, in order to detect Moishezon spaces. Finally, the authors give a version of the Morse inequalities for $$q$$-convex manifolds and covering manifolds.
In Chapter 4, the asymptotic expansion of the Bergman kernel is given in detail. For $$x,x'\in M$$, the Bergman kernel $$P_p(x,x')$$ is defined as the kernel of the orthogonal $$L^2$$-projection onto the space of holomorphic sections $$H^0(L^p)$$, i.e.,
$P_p(x,x')=\sum_{i=1}^{\dim H^0(M,L^p)} S_i(x) \otimes S_i(x')^{*_{h^p}}$
where $$h$$ is a metric on $$L$$, $$(S_i)_{i=1,\dots,h^0(M,L^p)}$$ is an orthonormal basis of $$H^0(M,L^p)$$ with respect to $$\int_M h^p(.,.)\,dV_M$$. If the holomorphic line bundle $$(L,h)$$ polarizes the manifold $$M$$, the main result of this section shows that
$P_p(x,x) = p^n + \frac{1}{2} \text{scal}(c_1(h))(x) p^{n-1} + O(p^{n-2})$
i.e., one has a kind of pointwise Riemann-Roch formula (here $$\text{scal}(c_1(h))$$ denotes the scalar curvature of the Kähler metric associated with $$h$$). Thanks to the spectral gap property of the Laplacian, the authors use the finite propagation speed of solutions of hyperbolic equations to localize the problem on $${\mathbb R}^{2n}$$. Hence their proof of the existence of this asymptotic expansion, and the computation of its coefficients, is purely local and comes from functional-analytic resolvent techniques on $${\mathbb R}^{2n}$$. Another fact is that, away from the diagonal, $$P_p$$ converges to $$0$$ exponentially fast. This asymptotic result is very natural and has been studied by many authors; it plays a key role in many problems of Kähler geometry. The interested reader can find other, more geometric proofs in the literature [for instance by B. Berndtsson, Contemp. Math. 332, 1–17 (2003; Zbl 1038.32003), and R. Berman, B. Berndtsson and J. Sjoestrand, “Asymptotics of Bergman kernel”, arXiv:math/0506367]. On the other hand, the techniques developed in this chapter give uniform estimates under very weak assumptions for both heat kernels and Bergman kernels (the Bergman kernel is the limit of the heat kernel as $$u \rightarrow +\infty$$).
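A standard consequence, not spelled out here but worth making explicit: since $$P_p$$ is the kernel of an orthogonal projection, integrating it over the diagonal computes the rank of the projection, namely $$\dim H^0(M, L^p)$$. The on-diagonal expansion therefore integrates to the expected Riemann-Roch asymptotics (with the volume computed with respect to the polarized metric):
$\dim H^0(M, L^p) = \int_M P_p(x,x) \, dV_M = p^n \, \text{vol}(M) + \frac{p^{n-1}}{2} \int_M \text{scal}(c_1(h)) \, dV_M + O(p^{n-2})$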
Chapter 5 gives some applications of the preceding work. The authors study the metric aspect of the Kodaira map, following the work of T. Bouche, and obtain the famous Kodaira embedding theorem. They give a short but clear explanation of the use of the asymptotics of the Bergman kernel in Donaldson’s theory of extremal metrics; this is related to the quantization of Kähler metrics with constant scalar curvature by balanced metrics. They give the analogue in the case of vector bundles. Then they briefly study the distribution of random sections. One section will be of particular interest to specialists: the Bergman kernel is studied on complex orbifolds, and an asymptotic formula is given together with Baily’s extension of the Kodaira theorem. The last part of this long chapter is dedicated to the asymptotics of the Ray-Singer analytic torsion as the power of the line bundle tends to infinity. This makes it possible to understand the first-order variation of the Quillen metrics.
In Chapter 6, the authors show how to derive an asymptotic expansion of the Bergman kernel for certain non-compact manifolds, and they derive various consequences of this fact. In particular, they study the compactification of manifolds with pinched negative curvature. This rests on the relationship, via the Morse inequalities, between the growth of the space of holomorphic sections of the pluricanonical line bundle on the one hand and the volume of the manifold on the other.
Chapter 7 describes the properties of Toeplitz operators and the Berezin-Toeplitz quantization. The Toeplitz operators on $$H^0(L^p)$$ are defined, for smooth real-valued functions $$f$$ on the manifold, by
$T_p(f)= P_p f P_p$
The key result here is that the set of Toeplitz operators forms an algebra (it is closed under composition of operators).
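More precisely, in the standard formulation of the Berezin-Toeplitz expansion (notation ours, not quoted from the book), the composition admits an asymptotic expansion in $$1/p$$: there are bidifferential operators $$C_k$$, with $$C_0(f,g) = fg$$, such that for every $$l$$
$T_p(f) \, T_p(g) = \sum_{k=0}^{l} p^{-k} \, T_p \left( C_k(f,g) \right) + O(p^{-l-1})$
in operator norm, so that, to leading order, composing Toeplitz operators amounts to multiplying their symbols.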
Chapter 8 gives an asymptotic expansion of the Bergman kernel associated with the modified Dirac operator and the renormalized Bochner Laplacian.
Some appendices describe very useful tools and techniques (for instance, the relation between the heat kernel and the finite propagation speed of solutions of hyperbolic equations). Note that one can find a list of errata on the first author’s webpage (there are very few of them). This book covers a very wide range of techniques, from modern analysis to geometry, through the study of Bergman kernels and Morse inequalities. In particular, the authors have taken care to explain carefully how the study of heat and Bergman kernels enters various fundamental problems of complex and symplectic geometry. One might occasionally regret the highly technical nature of some sections, but this is the price to pay for very general results. We have no doubt that this book will soon become a reference on the subject.
### MSC:
32-02 Research exposition (monographs, survey articles) pertaining to several complex variables and analytic spaces
53-02 Research exposition (monographs, survey articles) pertaining to differential geometry
58-02 Research exposition (monographs, survey articles) pertaining to global analysis
32A25 Integral representations; canonical kernels (Szegő, Bergman, etc.)
58J35 Heat and other parabolic equation methods for PDEs on manifolds
58J50 Spectral problems; spectral geometry; scattering theory on manifolds
58J52 Determinants and determinant bundles, analytic torsion
53D50 Geometric quantization
Zbl 1038.32003
https://stats.stackexchange.com/questions/194956/why-calculating-standard-error-of-an-mle-and-confidence-intervals-from-hessian
Why calculate the standard error of an MLE (and confidence intervals) from the Hessian matrix?
I might not have fully understood these concepts, and I am confused about how the standard error is calculated. Here is my understanding and my confusion; let me know where I went wrong.
EDIT: I was talking about the Hessian matrix output from R's optim.
The standard error of a parameter $\theta$ is the standard deviation of its estimator, var$(\hat\theta)^{1/2}$. I've read that one should calculate it from the expected information matrix E$[I]^{-1/2}$, which is E$[-H]^{-1/2}$. I assume that to get the expected Hessian matrix I need to run my maximum likelihood program for multiple iterations, to get multiple Hessian matrices. But why can't we just calculate the SD simply by taking sd($\hat\theta$), given that we already have a handful of estimates $\hat\theta$? Are the results going to be different?
The same question applies to calculating the confidence interval of a parameter. For example, for a 95% CI, the standard way seems to be to calculate $1.96\cdot E[-H]^{-1/2}$. Is that different from just running a number of iterations to get many estimates $\hat\theta$ and finding where 95% of them fall? Is one more accurate, given the same number of realizations?
• How are you calculating that sd($\hat{\theta}$)? Feb 10 '16 at 22:12
• take all estimates $\hat\theta$, calculate their variance $E[(\hat\theta - E[\hat\theta])^2 ]$, and take the square root Feb 10 '16 at 23:56
• Okay, I think I see your misunderstanding. Feb 10 '16 at 23:58
I assume to get the Expected Hessian matrix I need to run my maximum likelihood program multiple iterations to get multiple hessian matrices
No, the expectation is based on the model. We're not taking some kind of ensemble average; we're literally finding an expectation:
$\mathcal{I}(\theta) = - \text{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\right|\theta \right]\,.$
(though we might be finding it from a different expression that yields the same quantity).
That is, we do some algebra before we implement it in computation.
We have a single ML estimate, and we're computing the standard error from the second derivative of the likelihood at the peak -- a "sharp" peak means a small standard error, while a broad peak means a large standard error.
You might like to check that when you do this for a normal likelihood (iid observations from $N(\mu,\sigma)$, with $\sigma$ known), the calculation yields a Fisher information of $n/\sigma^2$, and hence an asymptotic variance of $\sigma^2/n$ for the ML estimate of $\mu$, i.e., a standard error of $\sigma/\sqrt{n}$. (Of course, in this case that's also the small-sample variance and standard error.)
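To make the "curvature at the peak" idea concrete, here is a minimal sketch in Python with NumPy/SciPy (the question's R workflow with optim(..., hessian=TRUE) is analogous): one data set, one MLE, and a standard error read off the Hessian of the negative log-likelihood, compared with the analytic $\sigma/\sqrt{n}$.

```python
# Minimal sketch: SE of the MLE of the mean of N(mu, sigma), sigma known.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sigma, n = 2.0, 500
x = rng.normal(loc=1.5, scale=sigma, size=n)

def neg_log_lik(theta):
    # negative log-likelihood up to an additive constant
    return 0.5 * np.sum((x - theta[0]) ** 2) / sigma**2

res = minimize(neg_log_lik, x0=[0.0], method="BFGS")

# BFGS returns an *approximation* to the inverse Hessian at the optimum;
# its diagonal gives variance estimates for the parameters.
se_hess = np.sqrt(res.hess_inv[0, 0])
se_analytic = sigma / np.sqrt(n)        # Fisher-information answer
print(res.x[0], se_hess, se_analytic)   # mu-hat, ~0.089, ~0.089
```

Note that this uses a single sample; no re-running of the optimizer is involved.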
https://computergraphics.stackexchange.com/questions/8226/how-to-rotate-a-shape-in-another-3d-software-so-that-the-rotation-matches-with-t
# How to rotate a shape in another 3D software so that the rotation matches with the rotation in Blender?
I import a mesh in Blender, export it with the settings axis_forward='-Z', axis_up='Y', import it in NVIDIA's FleX, rotate it and store it on disk; I call this mesh X. I also import the mesh in Blender and rotate it by the same amount; I call this mesh Y. After I imported X in Blender (with axis_forward='-Z', axis_up='Y') I realized that X has a different rotation, which means that FleX is either using different global axes (I think it's -Y forward and Z up), or applies the rotations in a different order, or a combination of both. So I am pretty confused now. I do not want to change anything on the FleX side. However, I want to rotate Y in a way that matches what FleX exports (X) after rotation. I also tried Quaternions, but I'm still unable to figure out how FleX is transforming the object. The only thing that I've been able to figure out empirically is that the rx rotation is applied the same way in both FleX and Blender: for a pure rx rotation, objects X and Y overlap completely if imported into one scene in Blender.
My goal is to rotate the object in FleX so that object Y's rotation matches exactly with object X when I import X in Blender. For instance, I want FleX to rotate the object the same way as Blender rotates it with the Euler rotation vector [rx, ry, rz]. How can I achieve this?
Note that I am not changing the rotation order in Blender and I'm using the default XYZ rotation.
Here are some images of the visual differences between FleX and Blender for such transformations:
Euler rotation vector: [0, 89.9, 0] for object Y in Blender.
After applying the rotation in Blender (object Y):
After applying the same rotation ([0, 89.9, 0]) in FleX and importing the object X in Blender:
Euler rotation vector: [0, 0, 89.9] for object Y in Blender.
After applying the same rotation ([0, 0, 89.9]) in FleX and importing the object X in Blender:
It might look easy to guess from the images that FleX swaps ry and rz when applying the rotation. If I also swap ry and rz when applying the rotations in Blender, then X and Y overlap. However, this only works if all components of the rotation vector are zero except one. If the rotation vector is something like [-43.964176, 20.641195, -1.2689421], X and Y no longer overlap, and discrepancies start to show up, as shown below:
The discrepancies are more visible if I put a cloth on X in FleX, import the cloth and Y into Blender, and rotate Y by [-43.964176, -1.2689421, 20.641195] (notice that I've swapped ry and rz when applying the rotation in Blender, not in FleX):
Note that if I import X here, the cloth would be perfectly covering it while touching it on the surface:
After doing some hacking for the last example, I noticed that if I apply the rotation vector [-43.964176, 1.2689421, 20.641195], i.e., (rx, -rz, ry), the objects overlap almost perfectly:
For another example, I want to apply the rotation [87.68034, 79.94778, 82.697876] in Blender which should ideally give me something like this:
However, FleX is giving me the following (no swapping before passing the rotation vector to FleX):
I was thinking that applying the rotation in the order (rx, -rz, ry) in Blender would give me the perfectly overlapping result that I got for the previous example, but instead I got very weird results for another case, as shown below. Note that I wanted to rotate the object by [87.68034, 79.94778, 82.697876]:
By manually rotating the object in Blender I could eventually get it as close as possible to X (exported from FleX). Surprisingly, the rotation vector that makes X and Y overlap is very different from [87.68034, 79.94778, 82.697876] in (rx, ry, rz), or [87.68034, -82.697876, 79.94778] in (rx, -rz, ry). It is something around [5.38, -10.1, 88.6] in (rx, ry, rz), as shown below:
Although you might need more information to figure out exactly what's going on, below I post the code that FleX uses to compute its rotation matrices. The first function is used when Euler angles are given, and the second when quaternions are input:
// generate a rotation matrix around an axis, from PBRT p74
inline Mat44 RotationMatrix(float angle, const Vec3& axis)
{
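// note: 'angle' is in radians here (sinf/cosf are radian functions);
// rotation values given in degrees must be converted before calling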
Vec3 a = Normalize(axis);
float s = sinf(angle);
float c = cosf(angle);
float m[4][4];
m[0][0] = a.x * a.x + (1.0f - a.x * a.x) * c;
m[0][1] = a.x * a.y * (1.0f - c) + a.z * s;
m[0][2] = a.x * a.z * (1.0f - c) - a.y * s;
m[0][3] = 0.0f;
m[1][0] = a.x * a.y * (1.0f - c) - a.z * s;
m[1][1] = a.y * a.y + (1.0f - a.y * a.y) * c;
m[1][2] = a.y * a.z * (1.0f - c) + a.x * s;
m[1][3] = 0.0f;
m[2][0] = a.x * a.z * (1.0f - c) + a.y * s;
m[2][1] = a.y * a.z * (1.0f - c) - a.x * s;
m[2][2] = a.z * a.z + (1.0f - a.z * a.z) * c;
m[2][3] = 0.0f;
m[3][0] = 0.0f;
m[3][1] = 0.0f;
m[3][2] = 0.0f;
m[3][3] = 1.0f;
return Mat44(&m[0][0]);
}
inline Mat44 RotationMatrix(Quat q)
{
Matrix33 rotation(q);
Matrix44 m;
m.SetAxis(0, rotation.cols[0]);
m.SetAxis(1, rotation.cols[1]);
m.SetAxis(2, rotation.cols[2]);
m.SetTranslation(Point3(0.0f));
return m;
}
And here's how I apply the Euler rotation vector:
obj->Transform(RotationMatrix(op.rotate.x, Vec3(1.0f, 0.0f, 0.0f)));
obj->Transform(RotationMatrix(op.rotate.y, Vec3(0.0f, 1.0f, 0.0f)));
obj->Transform(RotationMatrix(op.rotate.z, Vec3(0.0f, 0.0f, 1.0f)));
And I think this should be how Transform() is defined:
void Mesh::Transform(const Matrix44& m)
{
for (uint32_t i=0; i < m_positions.size(); ++i)
{
m_positions[i] = m*m_positions[i];
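// NB: multiplying normals by m itself is only correct for rotations and
// other orthogonal maps; a general transform would need the inverse transpose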
m_normals[i] = m*m_normals[i];
}
}
The way I figured this out was: I generated three rotation vectors [90, 0, 0], [0, 90, 0] and [0, 0, 90] and rotated the shapes in FleX. Then I loaded the shapes in Blender and empirically determined which axis rotation in Blender matches the shape imported from FleX. I learned that when I rotate the shape by 90 degrees about the Y axis in Blender, it matches the 90-degree rotation about the Z axis in FleX. I also learned that rotating the object by -90 degrees about the Z axis in Blender matches the 90-degree rotation about the Y axis in FleX.
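A way to see why the naive component swap cannot work in general: a change of coordinate frame acts on a rotation matrix by conjugation, not by permuting Euler angles. Below is a minimal sketch (assuming SciPy's Rotation class; the axis-swap matrix S is an illustrative assumption, not FleX's documented convention). For single-axis inputs it reduces to the ry/rz swap with a sign, as observed above; for general inputs all three output angles change:

```python
# Conjugating a rotation by a change of basis: S @ R @ S^T.
import numpy as np
from scipy.spatial.transform import Rotation as R

euler_blender = [-43.964176, 20.641195, -1.2689421]   # (rx, ry, rz), degrees

# Blender's default "XYZ Euler" applies X, then Y, then Z about the fixed
# global axes, which is scipy's extrinsic "xyz" sequence.
r_blender = R.from_euler("xyz", euler_blender, degrees=True)

# Hypothetical change of basis from Blender's Z-up frame to a Y-up frame:
# +Y_blender -> -Z, +Z_blender -> +Y (a proper rotation, det = +1).
S = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0,  1.0],
              [0.0, -1.0, 0.0]])

# The same physical rotation expressed in the other frame:
r_other = R.from_matrix(S @ r_blender.as_matrix() @ S.T)
print(r_other.as_euler("xyz", degrees=True))
# For [0, 0, 89.9] this prints [0, 89.9, 0], and for [0, 89.9, 0] it prints
# [0, 0, -89.9], matching the single-axis observations; for general inputs
# the result is not a permutation of the original angles.
```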
http://zimbabweelection.com/forum/exz0b3k.php?590088=heat-absorbed-by-system-is-positive-or-negative
The sign of $$q$$ (heat) follows the system's point of view. When heat is absorbed by the system, the sign of $$q$$ is taken to be positive: the sign of $$q$$ for an endothermic process is positive because the system is gaining heat. An endothermic process is a process that absorbs energy, usually as heat; once heat is absorbed, the molecules of the system oscillate at a higher speed than before, since energy has been added to the system. Conversely, the sign of $$q$$ for an exothermic process is negative because the system is losing heat: if heat is released to the surroundings, the sign is negative.
The heat of reaction is the enthalpy change for a chemical reaction. It is positive for an endothermic reaction and negative for an exothermic one, and the direction of the reaction affects the enthalpy value; many reactions are reversible, meaning that the product(s) of the reaction are capable of combining and re-forming the reactant(s). Enthalpy $$\left( H \right)$$ is the heat content of a system at constant pressure. Heat changes in chemical reactions are often measured in the laboratory under conditions in which the reacting system is open to the atmosphere; in that case, the system is at a constant pressure. Unless otherwise specified, all reactions in this material are assumed to take place at constant pressure. Several factors influence the enthalpy of a system.
A thermochemical equation is a chemical equation that includes the enthalpy change of the reaction. The combustion of methane is an example; the reaction is exothermic, and thus the sign of the enthalpy change is negative. This information can be shown as part of the balanced equation:
$\ce{CH_4} \left( g \right) + 2 \ce{O_2} \left( g \right) \rightarrow \ce{CO_2} \left( g \right) + 2 \ce{H_2O} \left( l \right) + 890.4 \: \text{kJ}$
The equation tells us that $$1 \: \text{mol}$$ of methane combines with $$2 \: \text{mol}$$ of oxygen to produce $$1 \: \text{mol}$$ of carbon dioxide and $$2 \: \text{mol}$$ of water. In the case above, the heat of reaction is $$-890.4 \: \text{kJ}$$; the sign of $$\Delta H$$ is negative because the reaction is exothermic. Since the reaction of $$1 \: \text{mol}$$ of methane released $$890.4 \: \text{kJ}$$, the reaction of $$2 \: \text{mol}$$ of methane would release $$2 \times 890.4 \: \text{kJ} = 1781 \: \text{kJ}$$, and the reaction of $$0.5 \: \text{mol}$$ of methane would release $$\frac{890.4 \: \text{kJ}}{2} = 445.2 \: \text{kJ}$$.
Another example:
$2 \ce{SO_2} \left( g \right) + \ce{O_2} \left( g \right) \rightarrow 2 \ce{SO_3} \left( g \right) + 198 \: \text{kJ}$
Step 1: List the known quantities and plan the problem. The calculation requires two steps: the mass of $$\ce{SO_2}$$ is converted to moles, and then the moles of $$\ce{SO_2}$$ are multiplied by the conversion factor of $$\left( \frac{-198 \: \text{kJ}}{2 \: \text{mol} \: \ce{SO_2}} \right)$$. Since $$198 \: \text{kJ}$$ is released for every $$2 \: \text{mol}$$ of $$\ce{SO_2}$$ that reacts, the heat released when about $$1 \: \text{mol}$$ reacts is one half of 198; the $$89.6 \: \text{kJ}$$ obtained this way is slightly less than half of 198, which serves as a sanity check.
The way in which a reaction is written influences the value of the enthalpy change for the reaction. For example:
$\ce{CaO} \left( s \right) + \ce{CO_2} \left( g \right) \rightarrow \ce{CaCO_3} \left( s \right) \: \: \: \: \: \Delta H = -177.8 \: \text{kJ}$
In the reverse, endothermic direction, the heat is absorbed by the system, so the 177.8 kJ is written as a reactant:
$\ce{CaCO_3} \left( s \right) + 177.8 \: \text{kJ} \rightarrow \ce{CaO} \left( s \right) + \ce{CO_2} \left( g \right)$
The process in the above thermochemical equation can be shown visually in a figure (not reproduced here): (A) as reactants are converted to products in an exothermic reaction, enthalpy is released into the surroundings; (B) as reactants are converted to products in an endothermic reaction, enthalpy is absorbed from the surroundings.
In physics, the first law of thermodynamics deals with energy conservation: the law of conservation of energy states that in any physical or chemical process, energy is neither created nor destroyed. The initial internal energy in a system, $$U_i$$, changes to a final internal energy, $$U_f$$, when heat, $$Q$$, is absorbed or released by the system and the system does work, $$W$$, on its surroundings (or the surroundings do work on the system), such that
$U_f = U_i + Q - W$
The quantity $$Q$$ (heat transfer) is positive when the system absorbs heat and negative when the system releases heat. The quantity $$W$$ here is work done by the system, so think of values of work and heat flowing out of the system as negative: you can also see negative work when the surroundings do work on the system (for example, say you push a book and you are the system: you are doing work on the surroundings). Note that chemistry texts often use the opposite convention, in which $$W$$ is work done on the system, so that work done BY the system is counted as negative; to avoid confusion, don't try to memorize the positive or negative value of every quantity in the first law of thermodynamics, but work from the idea of energy conservation instead.
Two quick examples. Say that the system absorbs 1,600 joules of heat from the surroundings and performs 2,300 joules of work on the surroundings; what is the change in the system's internal energy? Or say that a motor does 2,000 joules of work on its surroundings while releasing 3,000 joules of heat; thinking this way, the internal energy of the system decreases by 5,000 joules, which makes sense.
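For the record, here is the arithmetic for those two examples under the convention just stated ($$Q$$ positive for heat absorbed, $$W$$ positive for work done by the system):
$\Delta U = Q - W = 1600 \: \text{J} - 2300 \: \text{J} = -700 \: \text{J}$
$\Delta U = Q - W = \left( -3000 \: \text{J} \right) - 2000 \: \text{J} = -5000 \: \text{J}$
so the first system loses 700 joules of internal energy, and the motor loses the stated 5,000 joules.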
Chemists routinely measure changes in enthalpy of chemical systems as reactants are converted into products. The enthalpy of a system is determined by the energies needed to break chemical bonds and the energies needed to form chemical bonds. Energy needs to be put into the system in order to break chemical bonds, since they do not come apart spontaneously in most cases. If more energy is produced in bond formation than is needed for bond breaking, the reaction is exothermic and the enthalpy is negative; if more energy is needed to break bonds than is released in forming them, the reaction is endothermic and the enthalpy change is positive.
https://web2.0calc.com/questions/help_42550
# help
If the rectangular faces of a brick have their diagonals in the ratio $$3 : 2\sqrt{3} : \sqrt{15}$$, what is the ratio of the length of the shortest edge of the brick to that of its longest edge?
Jun 17, 2020
#1
Let a, b, c be the edges. The three face diagonals are $$\sqrt{a^2+b^2}$$, $$\sqrt{b^2+c^2}$$ and $$\sqrt{c^2+a^2}$$, so squaring the given ratio $$3 : 2\sqrt 3 : \sqrt{15}$$ gives, after scaling,
$$\begin{cases}a^2+b^2 = 9\\b^2+c^2 = 12\\c^2+a^2 = 15\end{cases}$$
Adding the three equations: $$2(a^2+b^2+c^2) = 36$$, so $$a^2+b^2+c^2 = 18$$.
Subtracting each equation from this sum gives $$c^2 = 9$$, $$a^2 = 6$$ and $$b^2 = 3$$, so the edges are $$\sqrt 3, \sqrt 6, 3$$.
Check: $$\sqrt{6+3} = 3$$, $$\sqrt{3+9} = 2\sqrt 3$$, $$\sqrt{9+6} = \sqrt{15}$$.
Required ratio = shortest : longest = $$\sqrt 3 : 3 = 1 : \sqrt 3$$
Jun 17, 2020
http://strata.opengamma.io/apidocs/com/opengamma/strata/collect/tuple/package-summary.html
# Package com.opengamma.strata.collect.tuple
Tuple data structures.
Implementation of the common tuple concept, primarily based on a "pair" of two values. Variations are provided for some combinations of primitive types.
## Interface Summary

| Interface | Description |
| --- | --- |
| `Tuple` | Base interface for all tuple types. |

## Class Summary

| Class | Description |
| --- | --- |
| `DoublesPair` | An immutable pair consisting of two double elements. |
| `DoublesPair.Meta` | The meta-bean for `DoublesPair`. |
| `IntDoublePair` | An immutable pair consisting of an int and double. |
| `IntDoublePair.Meta` | The meta-bean for `IntDoublePair`. |
| `LongDoublePair` | An immutable pair consisting of a long and double. |
| `LongDoublePair.Meta` | The meta-bean for `LongDoublePair`. |
| `ObjDoublePair<A>` | An immutable pair consisting of an Object and a double. |
| `ObjDoublePair.Meta<A>` | The meta-bean for `ObjDoublePair`. |
| `ObjIntPair<A>` | An immutable pair consisting of an Object and an int. |
| `ObjIntPair.Meta<A>` | The meta-bean for `ObjIntPair`. |
| `Pair<A,B>` | An immutable pair consisting of two elements. |
| `Pair.Meta<A,B>` | The meta-bean for `Pair`. |
| `Triple<A,B,C>` | An immutable triple consisting of three elements. |
| `Triple.Meta<A,B,C>` | The meta-bean for `Triple`. |
https://qualitysafety.bmj.com/content/27/7/557.full
Symptom-Disease Pair Analysis of Diagnostic Error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data
1. Ava L Liberman1,
2. David E Newman-Toker2,3
1. 1 Department of Neurology, Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, New York, USA
2. 2 Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
3. 3 Departments of Epidemiology and Health Policy and Management, The Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
1. Correspondence to Dr David E Newman-Toker, Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; toker{at}jhu.edu
## Abstract
Background The public health burden associated with diagnostic errors is likely enormous, with some estimates suggesting millions of individuals are harmed each year in the USA, and presumably many more worldwide. According to the US National Academy of Medicine, improving diagnosis in healthcare is now considered ‘a moral, professional, and public health imperative.’ Unfortunately, well-established, valid and readily available operational measures of diagnostic performance and misdiagnosis-related harms are lacking, hampering progress. Existing methods often rely on judging errors through labour-intensive human reviews of medical records that are constrained by poor clinical documentation, low reliability and hindsight bias.
Methods Key gaps in operational measurement might be filled via thoughtful statistical analysis of existing large clinical, billing, administrative claims or similar data sets. In this manuscript, we describe a method to quantify and monitor diagnostic errors using an approach we call ‘Symptom-Disease Pair Analysis of Diagnostic Error’ (SPADE).
Results We first offer a conceptual framework for establishing valid symptom-disease pairs illustrated using the well-known diagnostic error dyad of dizziness-stroke. We then describe analytical methods for both look-back (case–control) and look-forward (cohort) measures of diagnostic error and misdiagnosis-related harms using ‘big data’. After discussing the strengths and limitations of the SPADE approach by comparing it to other strategies for detecting diagnostic errors, we identify the sources of validity and reliability that undergird our approach.
Conclusion SPADE-derived metrics could eventually be used for operational diagnostic performance dashboards and national benchmarking. This approach has the potential to transform diagnostic quality and safety across a broad range of clinical problems and settings.
• diagnostic errors
• patient harm
• outcome measures/methods
• process measures/methods
• public health informatics/methods
• epidemiology/diagnosis
## Introduction
According to the US National Academy of Medicine (NAM), diagnostic errors represent a major public health problem likely to affect each of us in our lifetime.1 The 2015 NAM report, Improving Diagnosis in Healthcare, goes on to state that, ‘improving the diagnostic process is not only possible, but it also represents a moral, professional, and public health imperative.’1 Annually in the USA, there may be more than 12 million diagnostic errors2 with one in three such errors causing serious patient harm.3 The aggregate annual costs to the US healthcare system could be as high as US$100–US$500 billion.4 The global problem is likely even bigger.5–8
Diagnostic errors represent the ‘bottom of the iceberg’ of patient safety—a hidden, yet large, source of morbidity and mortality. Valid operational measures are badly needed to surface this problem so that it can be quantified, monitored and tracked.9 Existing measures of diagnostic error that rely on manual chart review to confirm diagnostic errors suffer from problems of poor chart documentation,10 11 low inter-rater reliability,12 13 hindsight bias14 and the high costs of human labour needed for chart abstraction. Additionally, reliance on chart review alone will likely lead to an underestimation of diagnostic error since key clinical features necessary to identify errors are preferentially missing from charts where errors occur.15 16 We believe that key gaps in operational measures of diagnostic error can be filled via thoughtful statistical analysis of large clinical (electronic health record (EHR)) and administrative (billing, insurance claims) data sets.
In this manuscript, we describe a novel conceptual framework and methodological approach to measuring diagnostic quality and safety using ‘big data’: Symptom-Disease Pair Analysis of Diagnostic Error (SPADE). We illustrate our approach predominantly using a single well-studied example (dizziness-stroke), but provide evidence that SPADE could be used to develop a scientifically valid set of diagnostic performance metrics across a broad range of conditions.
## Diagnostic error and misdiagnosis-related harm definitions
The NAM defines diagnostic error as failure to (A) establish an accurate and timely explanation of the patient’s health problem(s) or (B) communicate that explanation to the patient.1 Harms resulting from the delay or failure to treat a condition actually present (false-negative diagnosis) or from treatment provided for a condition not actually present (false-positive diagnosis) are known as misdiagnosis-related harms.17 18 A key feature of the NAM definition is that it does not require the presence of a diagnostic process failure (eg, failure to perform a specific diagnostic test)17 nor that the error could have been prevented. This patient-centred definition is agnostic as to the correctness of the diagnostic processes; it relies only on the outcome of a patient receiving an inaccurate or delayed diagnosis as opposed to an accurate and timely diagnosis.1
The SPADE approach, described in detail below, uses unexpected adverse health events (eg, stroke, myocardial infarction (MI), death) to measure misdiagnosis-related harms.19–25 SPADE methods maintain core consistency with the NAM definition of diagnostic error by identifying inaccurate or delayed diagnoses, regardless of cause or preventability. Although SPADE does not specifically address communication with patients (part ‘B’ of the NAM definition), if failure to communicate a diagnosis to a patient results in a clinically relevant and harmful health event (ie, misdiagnosis-related harm), the SPADE approach will detect it. A key advantage of this approach is that using ‘hard’ clinical outcomes avoids much of the subjectivity12–14 inherent in other methods that rely on detailed, human medical record reviews to assess for errors.
## The symptom-disease pair framework for measurement
The SPADE approach is premised on three principles: (1) patients with symptoms seek medical attention; (2) the object of the medical diagnostic process is to identify the underlying cause (ie, the condition(s) responsible for the patient’s symptom(s)); and (3) failure to correctly diagnose the underlying disease(s) in a timely manner (NAM-defined diagnostic error) may be followed by illness progression that might have been avoided through prompt diagnosis and treatment (preventable misdiagnosis-related harm). In this approach, we combine what is known about disease natural history and pathophysiology to develop an inferential model for identifying misdiagnosis-related harms based on time-linked markers of diagnostic delay that are clinically sensible, biologically plausible and specific to symptom-disease pairs (figure 1).
Figure 1
Conceptual model for Symptom-Disease Pair Analysis of Diagnostic Error (SPADE). The SPADE conceptual framework for measuring diagnostic errors is based on the notion of change in diagnosis over time. Envisioned is a scenario in which an initial misdiagnosis is identified through a biologically plausible and clinically sensible temporal association between an initial symptomatic visit (that ended with a benign diagnosis rendered) and a subsequent revisit (that ended with a dangerous diagnosis confirmed); note that these ‘visits’ could also be non-encounter-type events (eg, a particular diagnostic test, treatment with a specific medication, or even death). The framework shown here illustrates differences in structure and goals of the ‘look back’ (disease to symptoms) and ‘look forward’ (symptoms to disease) analytical pathways. These pathways can be thought of as a deliberate sequence that begins with a target disease known to cause poor patient outcomes when a diagnostic error occurs: (1) the ‘look back’ approach defines the spectrum of high-risk presenting symptoms for which the target disease is likely to be missed or misdiagnosed; (2) the ‘look forward’ approach defines the frequency of diseases missed or misdiagnosed for a given high-risk symptom presentation. Dx, diagnosis.
Symptom-disease pairs that may be ‘diagnostic error dyads’ can be analysed using either a ‘look-back’ or a ‘look-forward’ approach (figure 2). The look-back approach takes an important disease and identifies which clinical presentations of that disease are most likely to be missed. The look-forward approach takes a common symptom and identifies which important diseases are likely to be missed among patients who present with this symptom. When little is known about misdiagnosis of a particular disease, a look-back analysis helps identify promising targets to establish one or more diagnostic error dyads. Once one or more diagnostic error dyads are established, a look-forward analysis can be performed to measure real-world performance.
Figure 2
Method for establishing a symptom-disease pair using dizziness-stroke as the exemplar. Envisioned is a ‘symptom’ and ‘disease’ visit occurring as clinical events unfold in the natural history of a disease, as illustrated in figure 1. (A) The ‘look-back’ approach is used to take a single disease known to cause harm (eg, stroke) and identify a number of high-risk symptoms that may be missed (eg, dizziness/vertigo). In this sense, the ‘look-back’ approach (case–control design) can be thought of as hypothesis generating. In the exemplar, stroke is chosen as the disease outcome. Various symptomatic clinical presentations at earlier visits are examined as exposure risk factors, some of which are found to occur with higher-than-expected odds in the period leading up to the stroke admission. (B) The ‘look-forward’ approach is used to take a single symptom known to be misdiagnosed (eg, dizziness/vertigo) and identify a number of dangerous diseases that may be missed (eg, stroke). In this sense, the ‘look-forward’ approach (cohort design) can be thought of as hypothesis testing. In the exemplar, dizziness is chosen as the exposure risk factor, and various diseases are examined as potential outcomes, some of which are found to occur with higher-than-expected risk in the period following the dizziness discharge.
The SPADE approach relies on having information from at least two discrete points in time. The first time point is an ‘index’ diagnosis and the second time point is an ‘outcome’ diagnosis (figure 1). The outcome diagnosis must plausibly link back to symptoms or signs from the index visit (and diagnosis) yet be unexpected or improbable if the index diagnosis had been correct. The most common and straightforward diagnostic error scenario is one in which an ambulatory index visit (eg, primary care or emergency department (ED)) results in a discharge for a supposedly benign disorder (treat-and-release visit) and a subsequent outcome visit or admission discloses otherwise. For example, the occurrence of an adverse outcome (eg, hospitalisation for a newly diagnosed stroke, MI or sepsis) shortly after a treat-and-release ED visit with a benign diagnosis rendered is a strong indicator of diagnostic error with misdiagnosis-related harm (assuming similar symptoms or signs are associated with both the benign and dangerous diseases).
For illustrative purposes, we will use the case of a patient seen in the ED with a chief complaint of dizziness diagnosed as a benign inner ear condition, but who has dangerous cerebral ischaemia as the true cause of her symptoms.26 27 Imagine we are unsure of whether this symptom-disease pair (dizziness-stroke) is a real dyad26 28 or merely coincidental. We would note that, biologically speaking, dizziness/vertigo can be a manifestation of minor stroke or transient ischaemic attack (TIA).29 With untreated TIA and minor stroke, there is a marked increased short-term risk of major stroke in the subsequent 30 days that tapers off by 90 days.29–31 A clinically relevant and statistically significant temporal association between ED discharge for supposedly ‘benign’ vertigo followed by a stroke diagnosis within 30 days is therefore a biologically plausible marker of diagnostic error.21 If this missed diagnosis of cerebral ischaemia resulted in a clinically meaningful adverse health outcome (eg, stroke hospitalisation), this would suggest misdiagnosis-related harm.
The association of treat-and-release visits for ‘benign’ vertigo and subsequent hospitalisations for stroke can readily be measured using information collected in administrative claims or large EHR data sets.21 22 25 We can employ a bidirectional analysis (figure 3). Using the look-back method, we start with a disease cohort of hospitalised patients with stroke and look back in time to prior treat-and-release ED visits for vertigo.25 We analyse the observed to expected treat-and-release visit frequency and temporal distribution of such visits during a reasonable time window. We employ positive (headache) and negative (abdominal/back pain) symptom controls, finding that vertigo is the most over-represented prestroke admission treat-and-release ED visit (figure 3A).25 Using the look-forward method, we start with a vertigo symptom cohort of discharged ED patients and look forward in time to subsequent stroke admissions. We can employ positive (intracerebral haemorrhage) and negative (MI) disease controls, finding that only short-term cerebrovascular event rates are elevated above the base rate, suggesting that a ‘benign’ vertigo discharge is a meaningful risk factor for missed stroke but not missed MI (figure 3B).21
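To make the look-forward computation concrete, the following is a minimal sketch (ours, not the authors'; it assumes pandas and a hypothetical visit-level table with columns patient_id, visit_date, dx_group and visit_type):

```python
# Look-forward (cohort) pass: treat-and-release ED visits for dizziness,
# followed within 30 days by a hospital admission for stroke.
import pandas as pd

visits = pd.read_csv("visits.csv", parse_dates=["visit_date"])

index_visits = visits[(visits.visit_type == "ED_treat_and_release") &
                      (visits.dx_group == "dizziness")]
admissions = visits[(visits.visit_type == "inpatient_admission") &
                    (visits.dx_group == "stroke")]

pairs = index_visits.merge(admissions, on="patient_id",
                           suffixes=("_index", "_outcome"))
delta = pairs.visit_date_outcome - pairs.visit_date_index
harms = pairs[(delta > pd.Timedelta(0)) & (delta <= pd.Timedelta(days=30))]

# Rate of short-term stroke admission after a "benign" dizziness discharge.
# Compare against a control window far from the index visit (base rate) and
# against negative disease controls such as MI, as described in the text.
rate = harms.patient_id.nunique() / index_visits.patient_id.nunique()
print(f"30-day stroke admission rate after dizziness discharge: {rate:.3%}")
```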
Figure 3
Together, these analyses statistically support the symptom-disease pair of dizziness-stroke and create strong inferential evidence of an index visit diagnostic error (incorrect diagnosis of benign vertigo rendered) with subsequent misdiagnosis-related harms (worsening or recurrent cerebral ischaemia necessitating hospitalisation). Specific analyses that can be used to establish major aspects of validity and reliability for SPADE are shown in table 1.32–34 Key among these are: (1) the bidirectional relationship in an overlapping temporal profile, which establishes convergent construct validity of the association and a link to biological plausibility34; and (2) the use of negative control comparisons which establishes discriminant construct validity and makes it highly improbable that patients discharged from the ED merely have an elevated short-term risk of all adverse medical events (ie, are non-specifically ‘sick’). These statistical methods highlight the fact that valid measures of diagnostic error need not be exclusively derived from traditional approaches such as chart review, survey data or prospective studies.
Table 1
Key concepts and methods to establish the reliability32 and validity33 34 of SPADE
## Optimal measurement context for SPADE
### Disease types and analytical approach
The SPADE method should apply to any condition where the short-term risk of worsening or recurrence is high. SPADE has been used for other symptoms and signs tied to missed stroke (headache-aneurysmal subarachnoid haemorrhage19; facial weakness-ischaemic stroke35); to missed cardiovascular events (eg, chest pain-MI)20 24; and to missed infections (eg, fever-meningitis/sepsis36; Bell’s palsy-acute otitis35). Since missed vascular events and infections together account for at least one-third of all misdiagnosis-related harms,37–40 using SPADE to monitor and track such errors would represent a major advance for the field.
SPADE can be used to assess a single symptom tightly linked to a single disease (headache-aneurysm,19 syncope-pulmonary embolus41), but can also be used to measure multiple related symptoms or diseases. For example, if multiple symptoms are associated with a target disease (eg, chest pain, shortness of breath, abdominal pain and syncope for MI), the symptoms may be bundled together in the analysis.20 Likewise, if a single symptom is associated with multiple target diseases (eg, fever for meningitis, toxic shock and sepsis), the diseases may be bundled together in the analysis.36 As proof of concept, a recent SPADE-style analysis of over 10 million ED discharges used multiple symptoms-to-disease mappings to identify misdiagnosis (figure 4).42
Figure 4
Linking multiple symptoms to multiple diseases using a Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) framework. Sankey diagram (adapted from Obermeyer et al 42) demonstrating discharge diagnoses from index ED visit (left) and their association with documented causes of death (right) within 7 days of discharge in a subset of Medicare fee-for-service beneficiaries. These results were obtained using a SPADE-style analysis of over 10 million ED discharges and used multiple symptom-disease pairs to identify likely diagnostic errors. Each index and outcome diagnosis category represents an aggregation of related codes (coding details found in ref 42), and line thickness is proportional to the number of beneficiaries. Statistical analyses found excess, potentially preventable deaths based on hospital admission fraction from the ED. These results highlight the viability of using symptom and disease bundling and statistical analysis of visit patterns to track misdiagnosis-related harms—specifically, in this example, mortality associated with diagnostic errors. ED, emergency department.
Some diseases are less well suited to SPADE. For example, chronic diseases for which the risk of misdiagnosis-related harms is either constant or very slowly increasing over time (eg, diabetes, hypertension) will make patterns of diagnostic error difficult to discern via SPADE. For diseases with a subacute time course presenting non-specific symptoms (eg, tuberculosis8 and cancer43), a more complex analytical approach is required. For example, it might be necessary to bundle symptoms and combine with visit/test ordering patterns over time (eg, increased odds of general practitioner visits for new complaints/tests in the 6 months before a cancer diagnosis43).
### Ideal data sets
Large enough data sets are needed to draw statistically valid inferences. Most prior studies using aspects of the SPADE approach have examined data sets containing 20 000–190 000 visits to identify misdiagnosis-related harm rates of ~0.2%–2%.21 25 35 From a statistical standpoint, the total number of diagnostic error-related outcome events (eg, admissions) should ideally not be fewer than 50–100, so this implies minimal sample sizes of 5000–50 000 visits for event rates in the 0.2%–2% range. Thus, even for common symptoms or diseases, data must generally be drawn from a large health system or region over a short period (eg, 6 months) or a small health system or hospital over a longer period (eg, 5 years). Constraints on the spatial and temporal resolution of SPADE make it unlikely that this approach could be used for provider-level feedback. This constraint, however, relates to the frequency of harm, not the SPADE method—in other words, any method that assesses infrequent harms will have to draw from a large sample.
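The arithmetic behind these minimum sample sizes is simply the required number of outcome events divided by the event rate: 100 events at a 2% rate requires 100/0.02 = 5000 index visits, while 100 events at a 0.2% rate requires 100/0.002 = 50 000.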
Data sets that include ‘out-of-network’ follow-up provide the most robust estimates of diagnostic error, avoiding the problem of hospital crossover (ie, patient goes to one centre at the index visit but returns to an unaffiliated centre at the outcome visit). In a 1-year study of crossover in ED populations across five health systems, 25% of patients who revisited crossed over.44 In a large study of missed subarachnoid haemorrhage in the ED that used regional health data, hospital crossover occurred in 37% of misdiagnosed patients.19 Taken together, these data suggest that patients who are misdiagnosed may be disproportionately likely to cross over. Thus, SPADE will likely provide the strongest inferences when used with data sets that include crossovers (eg, regional health information exchanges like the Chesapeake Regional Information System for our Patients) or from health systems with integrated insurance plans where patients are tracked when they use outside healthcare facilities (eg, Kaiser Permanente22). Nevertheless, even without data on crossovers, health systems can still track error rates over time—measured rates may be lower than the true rates, but rate changes should still reflect temporal trends.
The best data sets for SPADE will have information on visits and admissions, and on other events, such as intrahospital care escalations (eg, ward to ICU transfers) and deaths. Recently, pairing of non-life-threatening ED discharge diagnoses to subsequent death among Medicare beneficiaries was used to identify misdiagnoses (figure 4).42 However, even without death (or other outcome) data, tracking to monitor diagnostic quality and safety trends and intervene to improve them remains possible. This is because root causes (eg, cognitive biases, knowledge deficits) and process failures (eg, exam findings not elicited, tests not ordered) leading to misdiagnosis of specific dangerous diseases probably do not differ based on the severity of subsequent harms (eg, hospital readmission vs out-of-hospital death). Even for conditions with very high mortality (eg, aortic dissection), many patients would still be captured by a delayed admission-only approach.45 Thus, a diagnostic intervention to improve diagnosis of aortic dissection that reduced misdiagnosis-related readmissions would presumably also reduce misdiagnosis-related deaths.
Having systematically coded EHR data on presenting symptoms (as opposed to inferring these from index visit discharge diagnoses) can enrich a SPADE analysis. However, it is not essential, since it is the benign or non-specific nature of the index visit discharge diagnosis (rather than the presenting symptom, per se) that reflects the diagnostic error. Furthermore, many of the index visit diagnoses are coded as non-specific symptoms (eg, dizziness, not otherwise specified25).
## Using SPADE to assess preventable harms from diagnostic process failures
SPADE measures the frequency of diagnostic errors causing misdiagnosis-related harms, rather than all diagnostic errors. This concept is most intuitive using the look-forward approach. Isolated vertigo of vascular aetiology is the most common early manifestation of brainstem or cerebellar ischaemia and is often missed initially as a stroke sign.29 Since it is unlikely that a patient sent home with an index diagnosis of ‘benign’ vertigo also had other obvious neurological signs (eg, hemiparesis or aphasia), their subsequent hospitalisation for stroke suggests clinical worsening or recurrent ischaemia (eg, major stroke after minor stroke or TIA).31 Thus, graphically, the ‘hump’ (hatched area) shown in figure 3B more accurately reflects misdiagnosis-related harms rather than diagnostic error, per se. Fewer than 20% of patients with TIA or minor stroke go on to suffer a major stroke within 90 days,46 47 so there are likely to be at least fivefold more diagnostic errors (misidentifications of TIA or minor stroke at the index visit) than misdiagnosis-related harms (subsequent, delayed major stroke admissions).
When diagnostic process data (eg, use of imaging, lab tests or consults) are also available, it is possible to identify process failures and test their association with misdiagnosis-related harms. For example, guidelines indicate that benign paroxysmal positional vertigo (BPPV), an inner ear disease, should be diagnosed and treated at the bedside without neuroimaging.48 49 Frequent use of neuroimaging in patients discharged with BPPV suggests knowledge or skill gaps in bedside diagnosis of vertigo.26 50 Such process failures may correlate to misdiagnosis-related harms (eg, use of neuroimaging in ‘benign’ dizziness/vertigo is linked to increased odds of stroke readmission after discharge51). For cancers, process failures can be identified by measuring diagnostic intervals (eg, time from index visit to advanced testing or specialty consultation to treatment)43 52; diagnostic delays can be correlated to outcomes and targeted for disease-specific process improvement.53
The SPADE approach can also facilitate identification of symptom-independent system factors that contribute to misdiagnosis. For example, in the study described above looking at short-term mortality after ED discharge, low hospital admission fraction at the index ED visit was associated with death postdischarge.42 Other studies have found triage to low acuity care is linked to misdiagnosis.19 Healthcare settings can be compared for risk of misdiagnosis and harms—for example, the risk for missed stroke is greater in ED than primary care, but the magnitude of harms is similar because of greater patient volumes in primary care.22 Important demographic and racial disparities in care can also be measured using SPADE.24 25
## Using SPADE to measure diagnostic performance and impact of interventions
The operational quality and safety goal is ongoing measurement of diagnostic performance in actual clinical practice.9 A major advantage of SPADE is that the core, essential administrative data are already being collected and could be easily used to track diagnostic performance without significant financial burdens. Because these data are also available from past years, internal performance trend lines could be readily constructed. For relatively common diagnostic problems such as chest pain-MI or vertigo-stroke, health systems could probably monitor their performance semiannually or quarterly using a rolling window of 6–12 months of data. Such monitoring would facilitate assessment of interventions to improve diagnostic performance.
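As a concrete illustration of such monitoring, a minimal look-forward sketch is shown below (all column names, code lists and toy rows are illustrative assumptions, not taken from any specific EHR or the studies cited here; real use would add the statistical controls discussed in table 1 and deduplicate repeat visits):

```r
library(dplyr)

# Hypothetical visit-level data (illustrative only): one row per encounter
visits <- data.frame(
  patient_id = c(1, 1, 2, 3, 3),
  visit_date = as.Date(c("2020-01-05", "2020-01-20",   # patient 1: ED visit, then stroke admission
                         "2020-02-10",                  # patient 2: ED visit only
                         "2020-03-01", "2020-06-15")),  # patient 3: outcome far outside the window
  dx_code    = c("R42", "I63", "R42", "R42", "I63"),
  setting    = c("ED", "inpatient", "ED", "ED", "inpatient")
)
benign_dizziness_codes <- "R42"   # placeholder ICD-10 code list
stroke_codes           <- "I63"   # placeholder ICD-10 code list

index <- visits %>%
  filter(setting == "ED", dx_code %in% benign_dizziness_codes) %>%
  select(patient_id, index_date = visit_date)

outcome <- visits %>%
  filter(setting == "inpatient", dx_code %in% stroke_codes) %>%
  select(patient_id, outcome_date = visit_date)

# Look-forward pairing: stroke admission within 30 days of a 'benign' ED discharge
harms <- inner_join(index, outcome, by = "patient_id") %>%
  filter(outcome_date > index_date, outcome_date <= index_date + 30)

# Quarterly harm rate per 1,000 index visits, for a performance trend line
trend <- index %>%
  mutate(quarter = as.Date(cut(index_date, "quarter"))) %>%
  count(quarter, name = "n_index") %>%
  left_join(harms %>%
              mutate(quarter = as.Date(cut(index_date, "quarter"))) %>%
              count(quarter, name = "n_harms"),
            by = "quarter") %>%
  mutate(n_harms = coalesce(n_harms, 0L),
         rate_per_1000 = 1000 * n_harms / n_index)
```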
In 2017, a National Quality Forum expert panel highlighted SPADE methods as a key measure concept to assess ‘harms from diagnostic error based on unexpected change in health status’ that holds promise for operational use because of the ready availability of administrative data.54 Relevant data for applying SPADE are already gathered in standard, structured formats (eg, International Classification of Diseases diagnostic codes); thus, cross-institutional benchmarking is a realistic possibility if data are curated through an ‘honest broker’.55 Geographic or institutional variation in diagnostic accuracy could also be detected.25 56 Eventually, SPADE-derived metrics could be incorporated into operational diagnostic performance dashboards.22
## Differences between SPADE and electronic trigger tools
Electronic trigger tools seek to identify missed diagnostic opportunities or failed diagnostic processes.57–59 Trigger tools use specific predetermined EHR events (eg, unplanned revisits to primary care) to ‘trigger’ medical record review by trained personnel.60 These ‘trigger’ events can be similar to outcome events used in SPADE, but trigger tools rely on human chart review for adjudication of diagnostic errors, while SPADE combines biological plausibility with statistical analysis of large data sets to verify errors. Also, trigger tools are typically used to find individual patient errors for process analysis and remediation, while SPADE would be used to understand the overall landscape of misdiagnosis-related harms to prioritise problems for solution-making and to operationally track performance over time, including to assess impact of interventions.
SPADE will not solve all problems in measuring diagnostic errors.17 61–64 The method probably substantially understates the frequency of NAM-defined diagnostic errors, since it focuses on misdiagnosis-related harms. It is also not readily applied to all disease states, including chronic conditions where adverse outcomes are evenly distributed over time. The spatial and temporal resolutions are too low to provide individual provider feedback. Correlating SPADE outcome measurements directly to bedside process failures (eg, flawed history or examination) will still require free-text analysis of records or other granular data. When using coded diagnoses for index and outcome visits, SPADE is potentially susceptible to various types of coding error and bias, including intentional gaming such as mis-specification, unbundling and upcoding.65 Because SPADE uses large data sets to identify diagnostic error patterns, it risks apophenia,66 so appropriate statistical validation checks and controls are critical when using SPADE (table 1).
Finally, SPADE has not been directly validated against an independent ‘gold standard’. The method is strongly supported by the fact that the dizziness-stroke dyad has an extensive body of remarkably coherent and consistent scientific literature26 28 that includes chart reviews,15 16 67 surveys,68 69 cross-sectional health services research studies,50 51 56 70 prospective cohort studies71 72 and SPADE-type studies using look-back25 and look-forward21–23 methods. Problems inherent in human chart reviews, particularly hindsight and observer biases,12–14 and flawed underlying documentation15 suggest that this is probably not an ideal reference standard for SPADE. A better validation strategy might be to vet coding and classification accuracy against review of videotaped encounters or gold-standard randomised trial data, as from AVERT (Acute Video-oculography for Vertigo in Emergency Rooms for Rapid Triage; ClinicalTrials.gov NCT02483429). The most compelling validation of the SPADE method would probably be to ‘flatten the hump’ (figure 3B) through diagnostic quality and safety interventions—this would demonstrate predictive validity of SPADE-based metrics.
## Conclusions
We have elaborated a conceptual framework, SPADE, that could be used to measure and monitor a key subset of misdiagnosis-related harms using pre-existing, administrative ‘big data’. This directly addresses a major patient safety and public health need1 9 which we believe could be transformational for improving diagnosis in healthcare by surfacing otherwise hidden diagnostic errors. The SPADE approach leverages symptom-disease pairs and uses statistically controlled inferential analyses of large data sets to construct operational outcome metrics that could be incorporated into diagnostic performance dashboards.22 When tested, these metrics have demonstrated multiple aspects of validity and reliability. Broad application of the SPADE approach could facilitate local operational improvements, and large-scale, epidemiological research to assess the breadth and distribution of misdiagnosis-related harms, and international/national benchmarking efforts that establish standards for diagnostic quality and safety. Future research should seek to validate SPADE across a wide range of clinical problems.
## Footnotes
• Funding National Institute on Deafness and Communication Disorders (grant #U01 DC013778) and the Armstrong Institute Center for Diagnostic Excellence.
• Competing interests None declared.
• Provenance and peer review Not commissioned; externally peer reviewed.
https://www.concepts-of-physics.com/mechanics/a-uniform-rod-ab-is.php
# A uniform rod AB is suspended from a point X
Problem: A uniform rod AB is suspended from a point X, at a variable distance x from A, as shown. To make the rod horizontal, a mass m is suspended from its end A. A set of (m,x) values is recorded. The appropriate variables that give a straight line, when plotted, are (JEE Mains 2018)
1. m, x
2. m, 1/x
3. m, 1/x²
4. m, x²
Solution: Let $M$ be the mass and $L$ be the length of the rod AB. The centre of mass C of uniform rod lies at its geometrical centre i.e., at a distance $L/2$ from A.
The forces on the rod are $mg$ at A, $Mg$ at C and the string tension $T$ at X. To keep the rod horizontal, the torque about any point of the rod should be zero. Take torque about the suspension point X to get \begin{align} \tau=mgx-Mg(L/2-x)=0,\nonumber \end{align} which gives \begin{align} m=\frac{ML}{2}\cdot \frac{1}{x}-M.\nonumber \end{align} Thus, $(m,1/x)$ graph is a straight line with slope $ML/2$ and intercept $-M$.
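The correct choice is therefore option 2. A quick numerical check (a sketch with illustrative values of $M$ and $L$; no measured data are implied) confirms the linear $(m, 1/x)$ relationship:

```r
M <- 2.0; L <- 1.0                  # illustrative rod mass (kg) and length (m)
x <- seq(0.05, 0.45, by = 0.05)     # suspension distances from A, with x < L/2 (m)
m <- M * L / (2 * x) - M            # balancing mass from the torque condition

fit <- lm(m ~ I(1 / x))             # regress m on 1/x
coef(fit)                           # intercept ~ -M = -2, slope ~ M*L/2 = 1
```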
https://socratic.org/questions/how-do-you-solve-frac-1-3-9-6x-x
# How do you solve \frac{1}{3}(9-6x) = x?
Mar 13, 2018
The solution is $x = 1$.
#### Explanation:
First, multiply both sides by $3$. Then, add $6x$ to both sides. Lastly, divide both sides by $9$. Here's how it looks:
$\frac{1}{3}(9-6x) = x$
$3 \cdot \frac{1}{3}(9-6x) = 3 \cdot x$
$9 - 6x = 3x$
$9 - 6x + 6x = 3x + 6x$
$9 = 9x$
$9 \div 9 = 9x \div 9$
$1 = x$
That's the solution. Hope this helped!
Mar 13, 2018
$x = 1$
#### Explanation:
There are a few ways; the simplest would be to first move the $\frac{1}{3}$ to the other side, where it becomes $\times 3$. So now the equation is
$9 - 6 x = 3 x$
Then move the $- 6 x$ to the other side of the equals sign to make
$9 = 3 x + 6 x$
$9 = 9 x$
Then divide both sides by $9$ (since $9x$ is $9$ multiplied by $x$) to make
$\frac{9 x}{9} = \frac{9}{9}$
$x = 1$
Another way to do it is to actually divide the $9$ and $6$ by $3$ since they are divisible making
$3 - 2 x = x$
Using the same method above this would make
$3 = 3 x$
Making $x = 1$ again.
https://www.nrich.maths.org/875/solution
Oh for the Mathematics of Yesteryear
Age 11 to 14, Challenge Level
Correct solutions were received from Mary (Birchwood Community High School) and Chor Kiang. Well done.
Part 1
First, find the total ounces of bread.
$600 \times35 \times24$ ounces $= 504000$ ounces
Now, find the new number of men-shares needed.
$4800 \times45 = 216000$
Divide $504000$ ounces by $216000$ to get $2\frac{1}{3}$ ounces a day.
Hence, each man must only eat $2\frac{1}{3}$ ounces of bread a day.
Part 2
$£14 10\text{s} = 290\text{s}$
$£5 \; 8\text{s } 9 \text{d} = 108.75$ shillings
| Weight (cwt) | Distance (miles) | Amount (shillings) |
|---|---|---|
| $60$ | $20$ | $290$ |
| $1$ | $20$ | $\frac{290}{60} = \frac{29}{6}$ |
| $1$ | $30$ | $\frac{29}{6}\times\frac{30}{20} = 7.25$ |
| $\frac{1}{7.25}$ | $30$ | $1$ |
| $\frac{1}{7.25}\times 108.75$ | $30$ | $108.75$ |
Since $\frac{108.75}{7.25} = 15$
$15$ cwt can be carried for $30$ miles at a cost of $£5 \; 8\text{s } 9 \text{d}$
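Both parts can be checked with a couple of lines of arithmetic (a sketch; the quantities are exactly those used above):

```r
# Part 1: total ounces of bread, total man-days, and the daily ration
total_oz <- 600 * 35 * 24        # 504000 ounces
man_days <- 4800 * 45            # 216000 man-days
total_oz / man_days              # 2.333... = 2 1/3 ounces per man per day

# Part 2: cost is proportional to weight times distance
rate <- 290 / (60 * 20)          # shillings per cwt per mile
cost <- 5 * 20 + 8 + 9 / 12      # 5 pounds 8s 9d expressed in shillings (108.75)
cost / (rate * 30)               # weight carried 30 miles for that cost: 15 cwt
```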
https://cran.itam.mx/web/packages/overviewR/vignettes/getting-started.html
# Getting started
## What is overviewR?
overviewR is a small yet powerful package that helps you to get an overview – hence, the name – of your data with particular emphasis on the extent that your distinct units of observation are covered for the entire time frame of your data set.
## How can you install it?
A stable version of overviewR can be directly accessed on CRAN:
install.packages("overviewR", force = TRUE)
To install the latest development version of overviewR directly from GitHub use:
library(devtools)
devtools::install_github("cosimameyer/overviewR")
## Why did we build it?
If you have a (large) data set that has many different observations over a long period, it becomes increasingly difficult to identify for each unique observation its exact coverage in the data. In particular, if some observations are not included for the entire time span of the data – either because they entered later, dropped out earlier or have gaps in between – it can become difficult to spot potential problems in your data’s time and scope.
overviewR allows you to quickly get a glimpse of your data and the distribution of your observations over time. With its ability to produce both data.frame objects and LaTeX/.tex outputs, it can be used by practitioners and academics alike.
## How can it be used?
overviewR can be used by everyone who works with data that have time-and-scope characteristics. That is, all data that contains different units of observation over a specific period will benefit from overviewR. To get a quick overview of which units – think of countries, companies, test persons, etc. – are present or missing during a given time span – think of years, months, days, minutes, etc. – overviewR provides an easy and intuitive insight into the set-up of your data.
Consider a data set that covers countries over the past 50 years. Not all countries existed throughout the entire period – some dissolved, others were newly founded and yet for others, data might not be available for the entire period. Before starting any analysis, it is helpful to get an overview not only of which countries are included and what the entire time span is but also to see which countries are present at which points in time. In other words, are there missing data for certain countries at different points in time?
To get a quick and intuitive overview of your data, overviewR provides currently the following basic functions:
• overview_tab generates a basic table that lists all unique units of observation (e.g., countries) and aggregates the time frame at which each unit is present in the data set. This means it also takes into account gaps in the data set, e.g., when there is – for whatever reason – no data available for a country for a few years within the time frame
• overview_crosstab generates a similar table but allows you to separate the overview table using two conditions. For instance, you want to know not only at what time points countries are present in your data but also when these countries can be considered to have high or low GDP and can be categorized as having a small or large population size. For this, overview_crosstab is used.
• overview_print takes a table – either generated by overview_tab or overview_crosstab – and turns it into a LaTeX output. It even allows you to save the LaTeX output in a ready-to-use .tex file.
• overview_plot visualizes the time and scope conditions of your data in a ggplot plot. For each scope object in your data (e.g., countries) on the y-axis, it plots the time coverage (x-axis) as a horizontal line for all time points in your data. This helps to spot gaps in your data for specific scope objects or simply creates a graphical display of your time and scope conditions and can be a good companion for presentation slides or appendices.
• overview_crossplot is an alternative way to visualize a cross table (a way to present results from overview_crosstab).
• overview_heat visualizes the coverage of each time and scope combination of your dataset in a heat map style ggplot. Each cell in the heat map is colored based on the coverage of a scope object at a given time point. Additionally, it displays either the total number of cases covered or a relative percentage as plain text. This helps to spot missing information even more nuanced. For instance, in a monthly data set with countries as the scope object, it illustrates the percentage of covered months in the data for each country-year combination.
• overview_na graphically illustrates the total number of NAs across all variables in your data set as a horizontal bar plot. Similar to the other plot objects, overview_na returns a ggplot object and can be modified and adjusted accordingly.
• overview_overlap provides an overview of the overlap of two data sets
| | Works with data.frame objects | Works with data.table | Can take multiple time arguments (year, month, day) |
|---|---|---|---|
| overview_tab | yes | yes | yes |
| overview_na | yes | yes | |
| overview_plot | yes | | |
| overview_crossplot | yes | | |
| overview_crosstab | yes | | |
| overview_heat | yes | | |
| overview_overlap | yes | | |
There is also a CheatSheet available here that showcases the functions of overviewR.
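A minimal usage sketch (with a made-up toy country–year data frame; overview_tab and overview_plot take unquoted id and time column names):

```r
library(overviewR)

# Toy data: three units with different coverage and a gap for unit "B" in 1992/1993
df <- data.frame(
  ccode = c(rep("A", 5), "B", "B", "B", rep("C", 3)),
  year  = c(1990:1994, 1990, 1991, 1994, 1992:1994)
)

overview_tab(dat = df, id = ccode, time = year)   # coverage table; gaps become separate ranges
overview_plot(dat = df, id = ccode, time = year)  # horizontal time-coverage lines per unit
```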
## Get involved!
Have you used overviewR in your project? Let us know! You can either e-mail us or use our pull-request template and populate/fill the following with your information:
**PROJECT TITLE**:
--SHORT DESCRIPTION HOW YOU USED OVERVIEWR--
[*LINK TO YOUR WEBSITE*](https://LINK-TO-YOUR-WEBSITE)
We will feature your project below!
https://www.ramiro.pro/en/publications/2022-02-25-2022-R-LWE-based-distributed-key-generation-and-threshold-decryption/
R-LWE-Based Distributed Key Generation and Threshold Decryption
Published in Special Issue "Recent Advances in Security, Privacy, and Applied Cryptography", Mathematics, MDPI, 2022
Recommended citation: Alborch, F.; Martínez, R.; Morillo, P. R-LWE-Based Distributed Key Generation and Threshold Decryption. Mathematics 2022, 10, 728.
Ever since the appearance of quantum computers, prime factoring and discrete logarithm-based cryptography have been questioned, giving birth to the so-called post-quantum cryptography. The most prominent field in post-quantum cryptography is lattice-based cryptography, with protocols that are proven to be as difficult to break as certain hard lattice problems like Learning with Errors (LWE) or Ring Learning with Errors ($R$-LWE). Furthermore, the application of cryptographic techniques to different areas, like electronic voting, has also fostered great interest in distributed cryptography.
In this work, we will give two original threshold protocols based on the lattice problem $R$-LWE: one for key generation and one for decryption. We will prove them both correct and secure under the assumption of hardness of some well-known lattice problems. Finally, we will give a rough implementation of the protocols in C to give some tentative results about their viability; in particular, our model generates keys on the order of $10^3$ ms and encrypts and decrypts on the order of $10^2$ ms.
https://www.conicet.gov.ar/new_scp/detalle.php?keywords=&id=32676&congresos=yes&detalles=yes&congr_id=7497669
In this talk we present some recent results, obtained in joint work with K. Li and J. M. Martell, about a multivariable Rubio de Francia extrapolation theorem for multilinear Muckenhoupt classes $A_{\vec{p}}$, and also some extensions to more general classes of weights. To illustrate the power of extrapolation methods, we will present some applications of the aforementioned results and some mixed weak-type weighted estimates obtained in joint work with K. Li and C. Pérez.
https://solvedlib.com/power-drive-corporation-designs-and-produces-a,447288
# Power Drive Corporation designs and produces a line of golf equipment and golf apparel. Power Drive has 100,000 shares...
##### Olympus has integrated many components into its SLR camera. Recently, Olympus has introduced a smartphone app, Olympus Image Share (OL.Share), to pair with its camera. It says: "Paired with a compatible Olympus camera, the Olympus Image Share (OL.Share) smartphone app makes photography more enjoyable..."
##### DEFINITION 0.1. A positive integer $p$ is called prime if it has only two positive divisors, namely $1$ and itself. If $n \in \mathbb{N}$ is not prime, we say that it is composite. DEFINITION 0.2. A polynomial with integral coefficients is a function of the form $f(x) = a_n x^n + \dots + a_1 x + a_0$, where $a_i \in \mathbb{Z}$ for all $0 \le i \le n$. Let $f$ be a polynomial with integral coefficients. Prove that there exists $n \in \mathbb{N}$ such that $f(n)$ is composite.
##### A 3-m-diameter vertical cylindrical tank is filled with water to a depth of $11\ \mathrm{m}$. The rest of the tank is filled with air at atmospheric pressure. What is the absolute pressure at the bottom of the tank?
##### [Figure: acid–base extraction flow chart] A solution of RCO2H (acid), RNH2 (base), and RH (neutral compound) in diethyl ether is first extracted with NaOH solution: the organic layer retains RNH2 and RH, while the carboxylate goes into the aqueous layer and is recovered as RCO2H by adding HCl and vacuum filtration. The organic layer is then extracted with HCl solution: the amine goes into the aqueous layer and is recovered as RNH2 by making it basic and extracting with ether, while removing solvent from the remaining organic layer gives RH.
##### Evaluate the triple integral by changing to spherical coordinates.
##### Describe how the PRECEDE-PROCEED model was applied in the article.
##### Why would the prices of bond issues increase when the interest rate is rising? You are purchasing bond issues at the prevailing interest rate, which would be the coupon rate? Why would it be necessary to reduce the bond price volatility when you can purchase new bonds at a higher coupon rate?
##### Question 7. (5 pts) Draw the products of each reaction and indicate the stereochemistry where appropriate. [The original shows reaction schemes with reagents including KOC(CH3)3, OTs, HBr, CH3CN, KCN, PBr3, and H2O.]
##### (3 points) Which conditions would convert this alkyl chloride starting material into iodocyclohexane in the greatest yield? (A) NaI in acetone; (B) NaI in water; (C) NaI in aqueous Na2CO3; (D) NaI in tosyl chloride.
##### Question 1. a) Find the exact value of the following: i) cos(600°); ii) sin(540°). b) Which of the following is true for all real $x$ for the following graph $y = f(x)$: (A) $f'(x) > 0$ and $f''(x) > 0$; (B) $f'(x) > 0$ and $f''(x) < 0$; (C) $f'(x) < 0$ and $f''(x) > 0$; (D) $f'(x) < 0$ and $f''(x) < 0$.
##### 12. Use the alternate form of the derivative, $f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$, where $a = 4$, for the given function $f$.
##### Construct a 3-set Venn diagram using the following information. Title your universe and label the three sets properly. Each of the eight regions should be labeled with its proper quantity. An order for breakfast consists of coffee, bagel, and salad. Out of the total, which you will determine, there ...
##### A baseball is thrown straight upward at 25 m/s from the top of a tall bridge. How far above the bridge is the baseball, when its speed is half the initial value?
##### A direct proof attempt to show 2 = 1 is given below. Find the mistaken step, if any. Let $a = b$. (I: multiply by $a$) $a^2 = ab$. (II: subtract $b^2$ from both sides) $a^2 - b^2 = ab - b^2$. (III: factor the difference of two squares and the common factor) $(a-b)(a+b) = b(a-b)$. (IV: cancel $(a-b)$) $a + b = b$, so $2b = b$ and $2 = 1$. Options: I; II; III; IV; None.
##### When 0.1 mL of the 10⁻⁵ and 10⁻⁶ dilutions of a bacterial suspension are plated out in duplicate, the following plate counts were generated: 250 and 252 from the 10⁻⁵ dilution and 15 from the 10⁻⁶ dilution. What is the concentration of the original suspension? Please show your working out. (2 Marks)
##### A dataset contains information about a sample of patients admitted to a hospital Intensive Care Unit (ICU). For the research question: Is there evidence that mean systolic blood pressure is different in male ICU patients than in female ICU patients? Define the relevant parameters and give the appropriate null and alternative hypotheses using proper notation. Your answer should be an expression composed of symbols such as $\mu_1$, $\mu_2$, $\mu_1 - \mu_2$, $p_1$, $p_2$, $p_1 - p_2$, $\bar{x}$, $\hat{p}$, $r$.
##### 1. Suppose in an economy there is passed a new regulatory law that reduces the productivity of newly produced machines. As a result of the change: A. there will be an initial shortage of savings and equilibrium investment will fall; B. there will be an initial surplus of savings and the equil...
##### Let $A = \{1, 2, \dots, n\}$. (a) Prove that $\mathrm{id}_A$ is the only isomorphism from $(A, \le)$ to $(A, \le)$. (b) Give an example of a finite poset $(A, \preceq)$ that has more than one isomorphism from $(A, \preceq)$ to $(A, \preceq)$. (c) Give an example of an infinite set $S$ such that there is more than one isomorphism from $(S, \le)$ to $(S, \le)$.
##### Sphere vs. Point Charge
Charge Q = 8.00×10⁻⁶ C is distributed uniformly over the volume of an insulating sphere that has radius R = 8.00 cm. A small sphere with charge 1.00×10⁻⁶ C and mass 6×10⁻⁵ kg is projected toward the center of the large sphere from an initially large distance. The large sphere is held at a fixed posi...
##### [1] Consider the reaction: 4NH3(g) + 5O2(g) → 4NO(g) + 6H2O(g). If NH3 is consumed at an average rate of 0.45 M/s between 15.0 seconds and 23.2 seconds, what is the rate of reaction during this time interval? What is the rate of H2O production during this time interval? Would the average rate of NH3 consumption between 31 seconds and 34 seconds be less than, equal to, or greater than 0.45 M/s? Explain your response.
##### Suppose your firm is considering investing in a project with the cash flows shown below, that the required rate of return on projects of this risk class is 8 percent, and that the maximum allowable payback and discounted payback statistics for the project are 3.5 and 4.5 years, respectively. ...
##### An object immersed in water displaces a volume of 5.52 m³ of water. Find the magnitude of the buoyant force on the object (remember Archimedes' principle). The density of water = 1000 kg/m³. Give your result to three significant figures.
##### Using kinematics equations to solve for this please! 4. A car and a truck are heading directly toward one another on a straight narrow street, but they avoid a head-on collision by simultaneously applying brakes at t = 0. Using the graph below, calculate when they have come to a stop, given that ...
##### How do you simplify (16^(5/9) * 5 ^ (7/9)) ^-3?
##### You have a binary search tree. Consider a leaf l. B is the set of keys in the path p of l, including the leaf l and the root of the tree. A is the set of keys to the left of the path p. C is the set of keys to the right of the path p. Is the following statement true or false? Given any element a in...
https://www.nature.com/articles/s41467-021-20983-1?error=cookies_not_supported&code=bfa735eb-caed-4d7f-83e1-8042969a2781
## Introduction
The Hall effect, the generation of voltage transverse to an electric current and a magnetic field, and the anomalous Hall effect (AHE) in magnetic materials1 require time-reversal symmetry breaking. These effects refer to a transverse electric response in the linear region, where the Hall voltage Vy scales linearly with the longitudinal current Ix. The second-order (nonlinear) Hall effect, in which Vy depends quadratically on Ix, has attracted attention in condensed matter physics2,3,4. A quantum origin of the nonlinear Hall effect in time-reversal-invariant materials is the Berry curvature dipole (BCD)3. The nonlinear Hall effect due to the BCD was observed recently in bilayer and few-layer WTe25,6. The BCD generates an effective magnetic field in a stationary state, thus leading to the nonlinear Hall effect3. Electrical second-harmonic generation (SHG), including the nonlinear Hall effect, can exist only when a system lacks inversion symmetry7,8,9. Despite growing interest in the BCD10,11,12,13,14, it is subject to strict crystal symmetry restrictions and vanishes in certain crystals even without inversion symmetry3, even though a second-order response is still allowed. Therefore, a search for electrical SHG independent of the BCD is desirable.
Inversion symmetry is absent in low-symmetry crystals (such as WTe25,6,10), and on a surface or an interface. However, electrical SHG has not been explored in surface/interface systems with time-reversal symmetry. Three-dimensional (3D) topological insulators (TIs) have attracted great interest due to the topological surface state (TSS) with spin-momentum locking15,16,17 for applications in spintronics and quantum computing18,19,20. With an inversion-symmetric bulk, 3D TIs such as Bi2Se3, Bi2Te3, and Sb2Te3 host electrical SHG only on the surfaces. Furthermore, threefold rotational symmetry of the TI surface in Fig. 1a forces a BCD to vanish (Fig. 1b)3; thus, the BCD-induced nonlinear Hall effect is not allowed. In addition to the intrinsic contribution by a BCD, extrinsic effects arising from impurity or phonon scatterings, as intensively studied in AHE1, are yet to be well sorted out for nonlinear effects. 3D TIs are ideal platforms in searching for extrinsic electrical SHG in the absence of a BCD. While recent theoretical studies addressed extrinsic mechanisms21,22,23,24, an experimental observation of extrinsic contributions to the electrical SHG has not been reported.
In this work, we show the observation of electrical SHG in the 3D TI Bi2Se3 with time-reversal symmetry. The transverse voltage response depends quadratically on the applied current in the nonmagnetic Bi2Se3 films under zero magnetic field. The observed second-order response follows a threefold rotational symmetry on the surface of Bi2Se3. Notably, the symmetry excludes a BCD, which distinguishes the mechanism for electrical SHG from the previous studies5,6. We consider our observation to arise dominantly from skew scattering in the TSS with its inherently chiral wave function. Instead of a BCD, we introduce the Berry curvature triple, which quantifies the moment of the Berry curvature under the threefold rotational symmetry. Unlike the BCD, the skew scattering mechanism applies to a much wider class of noncentrosymmetric materials, since broken inversion is its only symmetry constraint.
## Results
### Observation of electric SHG
High-quality Bi2Se3 films were grown on Al2O3 (0001) substrates in a molecular beam epitaxy system. The first quintuple layer (QL) of Bi2Se3 is completely relaxed by van der Waals bonds25. In addition, the lattice constant of the Bi2Se3 film relaxes to its bulk value, implying the absence of strain from the substrate25. Thus, the induction of a BCD via breaking the threefold rotational symmetry26 does not occur in Bi2Se3 films, as confirmed by our angle-dependent transport measurements below. Multiple Hall bar devices with current channels along different crystalline directions (Fig. 1c) were fabricated. Figure 1d, e show the basic electrical characterization. The longitudinal resistivity ρ (Fig. 1d) shows a typical metallic behavior and saturates below ~30 K27,28. Figure 1e displays the longitudinal Rxx and Hall Ryx resistances as a function of an out-of-plane magnetic field at 2 K. Rxx at the low field region exhibits the effect of weak anti-localization, indicative of 2D surface transport29. Ryx depends linearly on the magnetic field, from which the n-type carrier density n2D is extracted to be ~6.26 × 10¹³ cm⁻². n2D changes by <2.3% over the temperature range 2 K < T < 300 K.
To explore the nonlinear electric transport, we perform harmonic measurements using low-frequency lock-in techniques schematically shown in Fig. 2a. We apply the ac current $I_x(t) = I\sin\omega t$ along the x direction and measure the voltage Vy perpendicular to the current. Under time-reversal and threefold rotational symmetries, the transverse voltage response does not contain the linear contribution, leading to the expression
$$V_y = R_{yxx}^{(2)} I_x^2,$$
(1)
which contains the SHG signal $V_y^{2\omega} = \frac{1}{2} R_{yxx}^{(2)} I^2 \sin\left(2\omega t - \pi/2\right)$. Note that the coefficient $R_{yxx}^{(2)}$ is proportional to the second-order conductivity $\sigma_{yxx}^{(2)}$ (see Supplementary Note 1), which can be finite in noncentrosymmetric materials3.
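To see explicitly how Eq. (1) produces a second harmonic (a one-line trigonometric step, using only the applied current defined above):

$$V_y = R_{yxx}^{(2)} I^2 \sin^2\omega t = \frac{1}{2} R_{yxx}^{(2)} I^2 \left(1 - \cos 2\omega t\right) = \frac{1}{2} R_{yxx}^{(2)} I^2 + \frac{1}{2} R_{yxx}^{(2)} I^2 \sin\left(2\omega t - \frac{\pi}{2}\right),$$

so the quadratic response splits into a rectified (dc) component and the out-of-phase second harmonic picked up by the lock-in.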
Figure 2b shows the second harmonic transverse voltage under zero magnetic field in 20 QL Bi2Se3. Its quadratic dependence on the ac current ($V_y^{2\omega} \propto I^2$) reveals the electrical SHG from a time-reversal-invariant 3D TI. Equivalently, the second harmonic transverse resistance defined as $R_{yx}^{2\omega} \equiv V_y^{2\omega}/I$ scales linearly with $I$ (Fig. 2c). Moreover, it changes the sign when we invert the current direction and the corresponding Hall probes (schematic in the inset of Fig. 2c). This is consistent with the second-order nature of nonlinear transport in Eq. (1). The electric SHG has little dependence on the input frequencies ranging from 9 to 263 Hz (see Supplementary Fig. 1).
Figure 2d displays the $R_{yx}^{2\omega}(I)$ data at different temperatures. The slope of $R_{yx}^{2\omega}(I)$ (i.e. $R_{yxx}^{(2)}$) quantifies the magnitude of the electrical SHG. $R_{yxx}^{(2)}$ decreases gradually as temperature increases in Fig. 2e. In general, finite temperature affects the nonlinear electric transport through thermal smearing of the electron distribution function f and the change of the electron scattering time τ. Thermal smearing has little effect on the result as the Fermi energy is much higher than the thermal energy kBT in our Bi2Se3 (kB: the Boltzmann constant). To reveal the effect of τ, we depict the measured carrier mobility µ in Fig. 2f. Both the SHG signal and mobility tend to decrease as temperature rises.
### Angular dependence and scaling of nonlinear transport
To characterize the angular dependence of nonlinear electric transport, we measure various devices with the current applied along different crystal directions on 20 QL Bi2Se3 (Fig. 1c). The current direction is denoted by angle Θ with respect to the $\overline{\Gamma}\overline{K}$ direction (i.e., the [−1, 1, 0] direction on the Bi2Se3 (111) surface of the primitive lattice in real space) in Fig. 3. $R_{yx}^{2\omega}$ shows the maximum value when the current direction is along $\overline{\Gamma}\overline{K}$ (Fig. 2b, c), and decreases when the current is rotated 15° away from $\overline{\Gamma}\overline{K}$ in Fig. 3a. For Θ = 30°, i.e., with the ac current along the $\overline{\Gamma}\overline{M}$ direction, $R_{yx}^{2\omega}$ becomes vanishingly small (Fig. 3b). $R_{yx}^{2\omega}$ switches sign with a similar magnitude when the current direction is rotated by 60° from the $\overline{\Gamma}\overline{K}$ to $\overline{\Gamma}\overline{K}'$ direction in Fig. 3c. The small asymmetry of $R_{yx}^{2\omega}(I)$ at positive and negative current in Fig. 3a–c can be due to misalignments of the Hall bar. The electric SHG measured at 24 different directions is summarized in Fig. 3d, which shows the threefold angular dependence of $R_{yxx}^{(2)}$. A similar angular dependence is also observed in 10 QL Bi2Se3 (Supplementary Fig. 2). We emphasize that the threefold rotationally symmetric signal with sign change excludes the Joule heating effect as an origin, which is isotropic and generally leads to third harmonic generation. The threefold symmetry also excludes a BCD, while the helical spin texture30 and the Berry curvature31 (Fig. 1b) on the hexagonally warped Fermi surface (FS) of the TSS32,33 share the same angular dependence. We note that the Berry curvature has the opposite sign along $\overline{\Gamma}\overline{K}$ and $\overline{\Gamma}\overline{K}'$ due to time-reversal symmetry.
The nontrivial wavefunction on the TSS with scattering by impurities or phonons can give rise to finite electrical SHG24. To investigate the microscopic mechanism, we examine the scaling properties of the second-order transport with respect to the linear conductivity σ of the film using the data in Figs. 1d and 2e. Figure 4a shows that the experimental data fit well with $\frac{E_y^{(2)}}{E_x^2} = a\sigma^2 + b$, where $E_y^{(2)} = \frac{V_y^{2\omega}}{W}$ and $E_x = \frac{V_x^\omega}{L}$ (W and L are the width and length of the sample, respectively). The linear and second-order conductivities σ and $\sigma_{yxx}^{(2)}$ are related by $J_y^{(2)} = \sigma_{yxx}^{(2)} E_x^2 = \sigma E_y^{(2)}$, so the coefficients a and b represent contributions to $\sigma_{yxx}^{(2)}$ that scale as $\sigma^3$ and $\sigma$, respectively. Furthermore, σ is proportional to τ for frequencies low compared to τ⁻¹. Therefore, the intercept b amounts to the τ-linear contributions to the second-order conductivity, which are generally attributed to BCD3 and/or side jump6. Note that the former is absent in our case for the symmetry reason, so we attribute the τ-linear contribution to side jump. On the other hand, the slope a quantifies the contribution $\sigma_{yxx}^{(2)} \propto \tau^3$, which originates from skew scattering as we discuss below. We obtain similar fitting results for Θ = 15° in Fig. 4b and also in 10 QL Bi2Se3 (see Supplementary Fig. 2). Notably, the cubic contribution plays a dominant role over the linear one as σ increases in Bi2Se3, and these two contributions are of opposite signs, as shown in Fig. 4a, b and separated in Supplementary Fig. 3. The scaling of electrical SHG with respect to the surface linear conductivity σs is also analyzed in Supplementary Fig. 4.
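Restating the chain of relations behind this fit makes the exponents explicit:

$$J_y^{(2)} = \sigma_{yxx}^{(2)} E_x^2 = \sigma E_y^{(2)} \quad\Rightarrow\quad \frac{E_y^{(2)}}{E_x^2} = \frac{\sigma_{yxx}^{(2)}}{\sigma} = a\sigma^2 + b \quad\Rightarrow\quad \sigma_{yxx}^{(2)} = a\sigma^3 + b\sigma,$$

so, with σ ∝ τ, the slope a isolates the τ³ (skew scattering) part and the intercept b the τ-linear (side jump) part.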
### Physical origin of nonlinear transport
The TI Bi2Se3 possesses time-reversal and inversion symmetries in the bulk. However, inversion is broken on the surface and hence the metallic TSS with C3v crystalline symmetry can host electrical SHG. It takes the form24,34
$$J = \sigma^{(2)} \left| \mathbf{E} \right|^2 \cos 3\Theta,$$
(2)
where Θ is the angle of the applied electric field E with respect to the $\overline{\Gamma}\overline{K}$ direction and the current density J is measured perpendicular to E. There is only one independent element σ(2) in the second-order conductivity tensor $\sigma_{abc}^{(2)}$ for a two-dimensional system with C3v symmetry (see Methods).
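The cos 3Θ form follows directly from this single independent element. Writing $\mathbf{E} = E(\cos\Theta, \sin\Theta)$ with the x axis along $\overline{\Gamma}\overline{K}$ (as in Methods), the nonzero tensor components give

$$J_x^{(2)} = 2\sigma^{(2)} E^2 \sin\Theta\cos\Theta, \qquad J_y^{(2)} = \sigma^{(2)} E^2 \left(\cos^2\Theta - \sin^2\Theta\right),$$

and the component transverse to the field is

$$J_\perp^{(2)} = J_y^{(2)}\cos\Theta - J_x^{(2)}\sin\Theta = \sigma^{(2)} E^2 \left(\cos 2\Theta \cos\Theta - \sin 2\Theta \sin\Theta\right) = \sigma^{(2)} E^2 \cos 3\Theta.$$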
Skew scattering is one of the microscopic mechanisms that contributes to σ(2). It arises even classically when there are nontrivial impurity potentials lacking inversion on the atomic scale8,34,35 or by local correlation of spins36. Alternatively, without relying on details of impurities, quantum Bloch functions can imprint inversion breaking and trigger skew scattering, which is the case for the TSS24,34. There is a semiclassical picture for skew scattering in a second-order process, schematically depicted in Fig. 4c. The hexagonally warped Fermi surface consists of positive and negative Berry curvature segments. Since both segments are anisotropic, they acquire finite but opposite velocities in the second-order response. When we construct a wave packet from states on the Fermi surface, it self-rotates due to the finite Berry curvature, and the rotation direction depends on the sign of the Berry curvature. Like the Magnus effect, even an isotropic scatterer deflects the motion of wave packets in a preferred direction due to the self-rotation, thus leading to finite SHG.
The semiclassical Boltzmann transport calculation24 based on the model Hamiltonian of the TSS32,37 leads to the linear conductivity from the TSS $\sigma_{\mathrm{TSS}} = \frac{e^2 \tau \epsilon_F}{4\pi\hbar^2}$, and the second-order conductivity from skew scattering is given by $\sigma^{(2)} = \frac{e^3 v \tau^3}{\hbar^2 \widetilde{\tau}}$, where τ is the transport scattering time, $\widetilde{\tau}$ is the skew scattering time, e is the electric charge, $\epsilon_F$ is the Fermi energy, and v is the Dirac velocity. Importantly, skew scattering yields $\sigma^{(2)} \propto \tau^3$ (assuming that $\widetilde{\tau}$ is constant), while other contributions including side jump have weaker powers in τ, which distinguishes the skew scattering contribution. The experimentally observed $\sigma_{yxx}^{(2)} \propto \sigma^3$ behavior is supported by the skew scattering mechanism, whose contribution is the largest in our observations.
The second-order conductivity obeys the surface crystalline symmetry to have the form $\sigma_{yxx}^{(2)} = \sigma^{(2)} \cos 3\Theta$, according to Eq. (2) (Fig. 4d), which is in agreement with our experiment. Instead of a BCD, the threefold rotational symmetry inspires us to define the Berry curvature triple T, a higher-order moment of the Berry curvature distribution in momentum space. It quantifies the strength of the Berry curvature on the Fermi surface, respecting threefold rotation: $T(\epsilon_F) = 2\pi\hbar \int \frac{d^2k}{(2\pi)^2}\, \delta(\epsilon_F - \epsilon_{\mathbf{k}})\, \Omega_z(\mathbf{k}) \cos 3\theta_{\mathbf{k}}$ ($\theta_{\mathbf{k}}$: the angle measured from the $\overline{\Gamma}\overline{K}$ line). For the TSS, we obtain $T(\epsilon_F) = \frac{\lambda \epsilon_F}{2\hbar^2 v^3}$. The Berry curvature triple is related to the skew scattering time $\widetilde{\tau}$. When we consider unscreened Coulomb impurities with the strength characterized by the dimensionless parameter $\alpha = \frac{e^2 Q}{4\pi \varepsilon_0 \varepsilon \hbar v}$, where Q is the impurity charge, ε0 is the vacuum permittivity, and ε is the dielectric constant, we find $\widetilde{\tau}^{-1} \approx 4\pi^2 n_i \alpha^3 v^2 T(\epsilon_F)$ (see Supplementary Note 2), consistent with the expression for $\widetilde{\tau}^{-1}$ in the Methods section.
We now provide the theoretical estimate of the second-order response from skew scattering. Though the second-order response arises only on the surface, both 2D surface and bulk states contribute to σ. As the contribution from the TSS is ~40% from the top and bottom surfaces38, we estimate τ ≈ 0.1 ps and $\widetilde{\tau} \approx 10$ ps (see Methods section). The ratio $\tau/\widetilde{\tau}$ of ~1% quantifies the relative strength of skew scattering. The estimated τ and $\widetilde{\tau}$ result in the theoretical value σ(2) ≈ 1.0 × 10⁻¹¹ A·V⁻²·m. This is about three times larger than the experimentally observed value σ(2) = 2.9 × 10⁻¹² A·V⁻²·m. We can attribute this difference to the partial cancellation of the second-order response; the contribution of the top surface is dominant over that of the bottom surface. In addition, screening of the Coulomb interaction reduces the response (see Supplementary Fig. 5).
## Discussion
We have demonstrated the electric SHG in a nonmagnetic 3D TI under zero magnetic field. It provides an example of BCD-independent nonlinear transverse transport, which is further revealed to arise from skew scattering. This skew scattering mechanism can be applicable to a broader class of noncentrosymmetric quantum materials, utilizing the chirality of electron wavefunction in Weyl and Dirac fermions39. Though our work reveals the nonlinear transport under low frequencies, it can be extended to higher frequency regimes such as GHz and THz. Thus, the electric SHG is complementary to previous optoelectronic approaches34,40 to reveal the underlying physics of nonlinear effects.
Berry curvature is allowed to exist in the TSS31,41, and concentrates in regions around the $\overline{K}$ ($\overline{K}'$) points in Fig. 1b, leading to a finite Berry curvature triple. Finite Berry curvature also affects the electron distribution function through the collision integral and the anomalous and side jump velocities24. The intrinsic contribution due to the anomalous velocity, and hence the BCD, is absent in Bi2Se3 for the symmetry reason3; however, the extrinsic contributions such as skew scattering and side jump persist21. The skew scattering contribution dominates in the weak impurity limit (τ → ∞)23,24 because of its high-order τ dependence. Though a full quantitative understanding of the various contributions to nonlinear electric transport (which may include phonons, domain boundaries, impurities, and Berry curvature42) remains elusive21, identifying major mechanisms is an important step not only for the fundamental understanding of the underlying principle, but for the development of rectification or second-harmonic devices for energy harvesting and high-frequency communication. The extrinsic nonlinear effect observed in Bi2Se3 is comparable in magnitude to the intrinsic one in few-layer WTe26, which has a 2D nonlinear conductivity of ~10⁻¹² A·V⁻²·m. Moreover, the extrinsic mechanism exemplified here applies to a wider class of materials with inversion-symmetry breaking, such as graphene/hexagonal-boron-nitride heterostructures43, the Dirac semimetal ZrTe544,45 and the two-dimensional electron gas at the LaAlO3/SrTiO3 interface46. Engineering scattering processes in the above materials is a promising way to achieve a prominent SHG by utilizing their much higher carrier mobilities. A higher mobility and long scattering time improve the efficiency in device applications since skew scattering has a higher order dependence on τ1,24,47.
## Methods
### Sample preparation and electric measurements
Bi2Se3 films were grown on Al2O3 (0001) substrates in a molecular beam epitaxy system with a base pressure < 2 × 10⁻⁹ mbar, as detailed in Tian et al.47. Van der Waals epitaxy of the Bi2Se3 films was achieved by adopting the two-step growth method25,27,48,49. For transport measurements, a capping layer of MgO (2 nm)/Al2O3 (3 nm) was deposited on top of the films prior to device fabrication. Hall bar devices were fabricated using standard photolithography and argon plasma etching. They were wire-bonded to the sample holder and installed in a physical property measurement system (PPMS, Quantum Design) for transport measurements. We performed low-frequency ac harmonic electric measurements using Keithley 6221 current sources and Stanford Research SR830 lock-in amplifiers. During the measurements, a sinusoidal current with constant amplitude and fixed frequency was applied to the devices, and the in-phase first harmonic (Vω) and out-of-phase second harmonic (V2ω) longitudinal and transverse voltages were measured simultaneously by four lock-in amplifiers.
### Theoretical modeling and estimate
The Hamiltonian for the TSS is32,37
$$H = \hbar v\left( k_x\sigma _y - k_y\sigma _x \right) + \frac{\lambda }{2}\left( k_ + ^3 + k_ - ^3 \right)\sigma _z, \qquad (3)$$
where k± = kx ± iky, σa (a = x, y, z) denotes the Pauli matrices, and λ quantifies the hexagonal warping32. In this section, the x axis is set perpendicular to the reflection plane, i.e., along the $$\overline {{\Gamma }} \overline {\mathrm{K}}$$ line. For the surface state of Bi2Se3, we take v = 5 × 10⁵ m/s and λ = 80 eV·Å³, and the Fermi surface (FS) is located above the Dirac point, where a hexagonally warped FS was observed30,33.
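For concreteness, Eq. (3) can be diagonalized numerically; the following sketch (not from the paper, using a representative |k|) shows the cos 3θ modulation of the upper band that produces the hexagonally warped FS:

```python
# A small sketch (not from the paper) diagonalizing Eq. (3) to exhibit the
# hexagonal warping: at fixed |k| the band energy oscillates with cos(3 theta).
import numpy as np

hbar = 1.054571817e-34
eV = 1.602176634e-19
v, lam = 5.0e5, 80.0 * eV * 1e-30
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky):
    kp, km = kx + 1j * ky, kx - 1j * ky
    return hbar * v * (kx * sy - ky * sx) + 0.5 * lam * (kp**3 + km**3) * sz

k = 1.0e9  # 0.1 A^-1, a representative Fermi wavevector (assumed)
for theta in (0.0, np.pi / 6, np.pi / 3):
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    e_up = max(np.linalg.eigvalsh(H(kx, ky))) / eV
    print(f"theta = {theta:.3f} rad: E+ = {e_up:.4f} eV")
# E+ = sqrt((hbar v k)^2 + (lam k^3 cos 3 theta)^2): the warping term is
# extremal at theta = 0 mod pi/3 and vanishes at theta = pi/6 mod pi/3.
```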
In general, the current response quadratic in the electric field E takes the form $$J_a^{\left( 2 \right)} = \sigma _{abc}^{\left( 2 \right)}E_bE_c$$, where $$\sigma _{abc}^{\left( 2 \right)}$$ is the second-order conductivity. For a two-dimensional system with C3v symmetry like the TSS, it has only one independent element, $$\sigma ^{\left( 2 \right)} \equiv \sigma _{xxy}^{\left( 2 \right)} = \sigma _{xyx}^{\left( 2 \right)} = \sigma _{yxx}^{\left( 2 \right)} = - \sigma _{yyy}^{\left( 2 \right)}$$. To estimate the transport properties, we assume Coulomb impurities randomly distributed in the sample. Taking into account Thomas-Fermi screening, we write the Fourier transform of the Coulomb interaction as $$V\left( q \right) = \frac{2\pi \alpha \hbar v}{q + q_{{\mathrm{TF}}}}$$, where qTF is the Thomas-Fermi wavevector. Here, we consider unscreened Coulomb impurities (qTF = 0); screening is discussed below.
In estimating τ and $$\widetilde \tau$$, we use the dielectric constant ε ≈ 100 (ref. 50), leading to $$\alpha \approx \frac{1}{23}$$. We use the previous observation that the contribution of the TSS from the top and bottom surfaces to the total conduction is ~40%38 and assume that the impurity density ni is approximately the same as the carrier density n2D. Thus, the observed linear conductivity σ = 2.5 × 10⁻³ Ω⁻¹ at 10 K leads to the carrier density of the TSS nTSS = 2.43 × 10¹² cm⁻², the corresponding Fermi wavelength $$\lambda _F = \sqrt {\frac{\pi }{n_{{\mathrm{TSS}}}}} = 11.4\,{\mathrm{nm}}$$, the scattering time τ ≈ 0.1 ps, and the skew scattering time $$\widetilde \tau \approx 10\,{\mathrm{ps}}$$, where we use the expressions24 $$\sigma _{{\mathrm{TSS}}} = \frac{e^2\tau \epsilon _F}{4\pi \hbar ^2}$$, $$\tau ^{ - 1} = \frac{\pi }{2}n_i\alpha ^2v\lambda _F$$, and $$\widetilde \tau ^{ - 1} = \frac{4\pi ^3}{\hbar }\frac{n_i\alpha ^3\lambda }{\lambda _F}$$. The small ratio $$\tau /\widetilde \tau \ll 1$$ satisfies the condition for the perturbative treatment of impurities in the semiclassical Boltzmann theory.
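This estimate chain can be reproduced numerically. In the sketch below (not the authors' code), the impurity density is set to n_i ≈ n2D ≈ 1 × 10¹⁴ cm⁻², an assumed value chosen because the text quotes only the resulting τ and τ̃:

```python
# A sketch of the estimate chain above (not the authors' code).
# Inputs from the text: v, lambda, alpha ~ 1/23, n_TSS = 2.43e12 cm^-2.
# Assumption: the impurity density n_i ~ n_2D ~ 1e14 cm^-2 (total sheet
# carrier density; only the resulting tau and tau-tilde are quoted).
import numpy as np

hbar = 1.054571817e-34           # J s
eV = 1.602176634e-19             # J
v = 5.0e5                        # m/s
lam = 80.0 * eV * 1e-30          # J m^3
alpha = 1.0 / 23.0               # dimensionless impurity strength
n_tss = 2.43e12 * 1e4            # m^-2, TSS carrier density
n_i = 1.0e14 * 1e4               # m^-2, assumed impurity density

lam_F = np.sqrt(np.pi / n_tss)                               # Fermi wavelength
tau = 1.0 / ((np.pi / 2.0) * n_i * alpha**2 * v * lam_F)     # scattering time
tau_sk = 1.0 / ((4.0 * np.pi**3 / hbar) * n_i * alpha**3 * lam / lam_F)

print(f"lambda_F   = {lam_F * 1e9:.1f} nm")   # ~11.4 nm
print(f"tau        = {tau * 1e12:.2f} ps")    # ~0.1 ps
print(f"tau_skew   = {tau_sk * 1e12:.1f} ps") # ~10 ps
print(f"tau/tau_sk = {tau / tau_sk:.4f}")     # ~1e-2, i.e. the ~1% ratio
```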
The Thomas-Fermi wavelength $$\lambda _{{\mathrm{TF}}} = \frac{2\pi }{q_{{\mathrm{TF}}}}$$ typically ranges from 26 to 90 nm51,52, resulting in the ratio $$\lambda _F/\lambda _{{\mathrm{TF}}} \lesssim 0.4$$. We describe the detailed calculations and a discussion of the effect of screening in Supplementary Note 2 and Supplementary Fig. 5. We note that for short-range impurities or in the strong screening limit, i.e., λTF → 0, skew scattering vanishes in a gapless Dirac system24,34.
https://groups.google.com/g/sci.econ/c/WCyo5FBHOSQ/m/KLQq8d2SHCEJ
# Show Me Long Period Labor Demand Curves For This Technology
### Robert Vienneau
Feb 5, 2000
1.0 INTRODUCTION
This long post presents an example in which higher wages for
unskilled labor are associated with firms choosing to employ more
unskilled workers per unit output produced. The exact numeric values
used are obviously unreasonable. The example, however, is used to
raise questions about the logical implications of maximizing
behavior.
Some further points might help clarify the questions. The example
illustrates behavior that is possible under some maximizing frameworks.
Those who accept one of these frameworks, but reject the possibility
of this behavior occurring in existing economies, should clearly state
their assumptions, ad hoc as they may be.
They might also try to give some rationale for why one should be
interested in this special case. If one does not accept any maximizing
model that could produce the illustrated behavior in the general case,
but does accept the use of mathematical models of maximization in
economics, one should outline an alternative model. The models in
which I am especially interested, although not exclusively so, are
those of steady state or long run prices. In a long run position,
the need for specific quantities of capital goods will have been
foreseen and the structure of capital goods will have been adapted to
production. I question whether equilibria of this sort can be explained
by the intersections of long run supply and demand curves. Some
economists have raised this question.
2.0 DATA ON TECHNOLOGY
Consider a very simple economy that produces a single consumption
good, corn, from inputs of skilled and unskilled labor, steel, and
(seed) corn. All production processes in this example require a year to
complete. Two production processes are known for producing steel. These
processes require the inputs shown in Table 1 to be available at the
start of the year for each ton of steel produced, with the output
available at the end of the year.
TABLE 1: INPUTS REQUIRED PER TON STEEL PRODUCED
Process A Process B
Steel 0 Tons (3/10) Ton
Corn (71/100) Bushel (1/50) Bushel
Skilled Labor (1/100) Person-Year (33/50) Person-Year
Unskilled Labor 1 Person-Year (6/5) Person-Year
Two processes, as shown in Table 2, are also known for producing
corn.
TABLE 2: INPUTS REQUIRED PER BUSHEL CORN PRODUCED
Process C Process D
Steel (1/10) Ton (1/5) Ton
Corn 0 Bushels (1/10) Bushel
Skilled Labor 1 Person-Year (7/10) Person-Year
Unskilled Labor 1 Person-Year (4/5) Person-Year
A technique consists of a process for producing the consumption good,
corn, and a process for producing each non-consumption reproducible good
used as an input in the process for producing the consumption good. In
other words, a technique is a combination of one steel-producing
process and one corn-producing process. The number of techniques is the
product of the number of corn-producing processes and the number of
steel-producing processes. Thus, there are four techniques in this
example. They are defined in Table 3.
TABLE 3: TECHNIQUES AND PROCESSES
Technique Processes
Alpha A, D
Beta A, C
Gamma B, C
Delta B, D
3.0 QUANTITY FLOWS
The example is constructed by comparing constant prices associated
with stationary states for producing the net output of a bushel corn.
Four stationary states are possible, or linear combinations of these.
Each of the pure stationary states corresponds to a choice of one
of the techniques. Table 4 shows the quantity flows for a stationary
state in which the alpha technique is used. The quantity flows for
the stationary states in which the other techniques are used are
shown in the appendix.
TABLE 4: QUANTITY FLOWS FOR THE ALPHA TECHNIQUE
INPUTS STEEL INDUSTRY CORN INDUSTRY
Skilled Labor (1/379) Person-Year (350/379) Person-Year
Unskilled Labor (100/379) Person-Year (400/379) Person-Year
Steel 0 Tons (100/379) Ton
Corn (71/379) Bushel (50/379) Bushel
OUTPUTS (100/379) Ton Steel (500/379) Bushel Corn
The amount of any input required per bushel corn produced net is
merely the sum across the columns in the tables showing stationary
state quantity flows. Table 5 shows the unskilled labor intensity
for each technique.
TABLE 5: UNSKILLED LABOR INTENSITY
TECHNIQUE INTENSITY
Alpha (500/379) Unskilled Person Years per Bushel
Beta (1100/929) Unskilled Person Years per Bushel
Gamma (410/349) Unskilled Person Years per Bushel
Delta (400/313) Unskilled Person Years per Bushel
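These intensities can be checked mechanically. The following sketch (not part of the original post) encodes Tables 1 and 2 and reproduces the stationary-state quantity flows and Table 5:

```python
# A sketch (not part of the original post) reproducing Tables 4, 5, and
# A-1 through A-3: encode Tables 1-2, solve for the gross outputs of a
# stationary state producing one bushel corn net, and sum unskilled labor
# over both industries. Exact rational arithmetic checks every fraction.
from fractions import Fraction as F

# (steel input, corn input, skilled labor, unskilled labor) per unit output
steel_processes = {"A": (F(0), F(71, 100), F(1, 100), F(1)),
                   "B": (F(3, 10), F(1, 50), F(33, 50), F(6, 5))}
corn_processes = {"C": (F(1, 10), F(0), F(1), F(1)),
                  "D": (F(1, 5), F(1, 10), F(7, 10), F(4, 5))}
techniques = {"Alpha": ("A", "D"), "Beta": ("A", "C"),
              "Gamma": ("B", "C"), "Delta": ("B", "D")}

for name, (s, c) in techniques.items():
    a_ss, a_cs, l1s, l2s = steel_processes[s]   # steel industry
    a_sc, a_cc, l1c, l2c = corn_processes[c]    # corn industry
    # Stationary state with one bushel corn left over:
    #   x_s = a_ss*x_s + a_sc*x_c      (steel just replaces itself)
    #   x_c - a_cs*x_s - a_cc*x_c = 1  (net output of corn)
    x_c = 1 / (1 - a_cc - a_cs * a_sc / (1 - a_ss))
    x_s = a_sc * x_c / (1 - a_ss)
    print(name, "steel:", x_s, "corn:", x_c,
          "unskilled per bushel:", l2s * x_s + l2c * x_c)
```

Running it prints, for example, Alpha's gross outputs 100/379 ton steel and 500/379 bushel corn and the 500/379 person-years of unskilled labor per bushel shown in Table 5.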
4.0 PRICES
The argument proceeds by determining which technique is
cost-minimizing at equilibrium prices. In this context, equilibria
have the following properties:
o At least one corn-producing process is operated, and at least one
of the steel-producing processes is operated.
o The cost of inputs for each process in operation does not exceed
revenues.
o No process can be used to obtain pure economic profits.
I assume that steel and corn inputs are paid for at the beginning
of the year. Labor, although hired at the beginning of the year,
is paid out of the product at the end of the year.
Given these conditions, Equations 1 and 2 must be satisfied if
the Alpha technique is chosen:
(71/100)( 1 + r ) + (1/100) w1 + w2 = p (1)
[ (1/5) p + (1/10) ]( 1 + r ) + (7/10) w1 + (4/5) w2 = 1 (2)
where corn is the numeraire, p is the price of steel, w1 is the wage of
skilled labor, w2 is the wage of unskilled labor, and r is the rate of
profits (sometimes called the interest rate). This is a system of two
equations with four unknowns. Thus, there are two degrees of freedom.
This system can be solved for the wage of skilled labor and the price
of steel in terms of a given rate of profits and the wage of unskilled
labor. Equations 3 and 4 show this solution:
w1 = [ 379 - 192 r - 71 r r - (500 + 100 r) w2 ]/(351 + r) (3)
p = ( 253 + 248 r + 346 w2 )/(351 + r) (4)
Equation 3 is the factor price surface for the Alpha technique. This is
a two-dimensional surface in a three-dimensional space.
Corresponding price systems for the Beta, Gamma, and Delta techniques
are relegated to the appendix.
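For anybody who would rather not grind through the algebra by hand, the four price systems can be solved symbolically; the sketch below (not part of the original post, and it assumes the sympy library is available) reproduces Equations 3 and 4 and the appendix solutions:

```python
# A sketch (not part of the original post; assumes sympy) solving each
# technique's two-equation price system symbolically for w1 and p.
import sympy as sp

r, w1, w2, p = sp.symbols("r w1 w2 p")
R = 1 + r

# (steel input, corn input, skilled labor, unskilled labor) per unit output
A = (sp.Integer(0), sp.Rational(71, 100), sp.Rational(1, 100), sp.Integer(1))
B = (sp.Rational(3, 10), sp.Rational(1, 50), sp.Rational(33, 50), sp.Rational(6, 5))
C = (sp.Rational(1, 10), sp.Integer(0), sp.Integer(1), sp.Integer(1))
D = (sp.Rational(1, 5), sp.Rational(1, 10), sp.Rational(7, 10), sp.Rational(4, 5))

def solve_prices(steel, corn):
    # Produced inputs are advanced at the start of the year and earn r;
    # labor is paid out of the product at the end of the year.
    eqs = [sp.Eq((steel[0]*p + steel[1])*R + steel[2]*w1 + steel[3]*w2, p),
           sp.Eq((corn[0]*p + corn[1])*R + corn[2]*w1 + corn[3]*w2, 1)]
    return sp.solve(eqs, [w1, p], dict=True)[0]

for name, (s, c) in {"Alpha": (A, D), "Beta": (A, C),
                     "Gamma": (B, C), "Delta": (B, D)}.items():
    sol = solve_prices(s, c)
    print(name, "w1 =", sp.cancel(sol[w1]), "; p =", sp.cancel(sol[p]))
```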
5.0 CHOICE OF TECHNIQUE
It remains to be shown which technique is cost minimizing. I consider
two values of the exogenously specified rate of profits, 0% and 150%.
5.1 CASE 1: r = 0%.
Figure 1 shows the factor price curves for three techniques when
the rate of profits is zero. (The curve for delta is never on
the frontier and is not shown.) The cost-minimizing technique, at
a given rate of profits and a given wage of unskilled labor, will
maximize the wage of skilled labor. Thus, the cost-minimizing
technique at a zero rate of profits is found from the frontier constructed
as the outer envelope of the three lines shown. Notice that the curves
for alpha, beta, and gamma are straight lines. This is an implication
of the mathematics in general. There cannot be reswitching at a given
rate of profits between skilled and unskilled labor.
S |
k 379/351+
i | . alpha
l 929/1001+ ./
Wl | . .
ae | . .
gd 349/383+ . .
e | . . .
L 119/286+ . x
oa | . ..
fb | . .. beta
o | .. ./
r 301/1089+ . x
| . . . gamma
( | . . ./
w | . . .
1 +--------------------+--------------+--------x------x--------x
) 41/88 3229/5445 929/1100
379/500 349/410
Wage of Unskilled Labor (w2)
FIGURE 1: FACTOR PRICE FRONTIER (NOT TO SCALE)
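The frontier can also be traced numerically. The sketch below (not part of the original post) evaluates the four factor price surfaces at a given rate of profits and picks the technique that maximizes the skilled wage:

```python
# A sketch (not part of the original post) of the frontier construction:
# at a given r and w2 the cost-minimizing technique maximizes w1. The
# surfaces are Equation 3 and Equations A-3, A-7, and A-11.
from fractions import Fraction as F

def w1(tech, r, w2):
    if tech == "Alpha":
        return (379 - 192*r - 71*r*r - (500 + 100*r)*w2) / (351 + r)
    if tech == "Beta":
        return (929 - 142*r - 71*r*r - (1100 + 100*r)*w2) / (1001 + r)
    if tech == "Gamma":
        return (349 - 152*r - r*r - (410 - 90*r)*w2) / (383 - 117*r)
    return (313 - 174*r + 13*r*r - 400*w2) / (311 - 39*r)   # Delta

def chosen(r, w2):
    return max(["Alpha", "Beta", "Gamma", "Delta"],
               key=lambda t: w1(t, r, w2))

r = F(0)
for w2 in [F(2, 5), F(1, 2), F(13, 20)]:
    print(f"w2 = {w2}: {chosen(r, w2)} chosen")
# Alpha is cost minimizing below w2 = 41/88, Beta between 41/88 and
# 3229/5445, Gamma beyond: the switch points marked in Figure 1.
# Delta is never on the frontier.
```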
Since the technique and prices have been determined at a zero rate of
profits, for any given wage of unskilled labor, one can graph the labor
intensity for unskilled labor against either the wage of unskilled
labor or relative wages along the factor price frontier. Figure 2
shows the latter. Notice that higher wages of unskilled labor are
associated with a step-decrease in the unskilled labor intensity of the
cost-minimizing technique. This case conforms to the usual intuition.
|
500/379+-------+
Labor | |
Intensity | |
| |
(Unskilled 1100/929+ +-----------------+
Person- | |
Years | |
per 410/349+ +------->
Bushel) |
|
|
|
|
+-------+-----------------+---------
5863/5236 3229/1505
Ratio of Wages of Unskilled and Skilled Labor
FIGURE 2: UNSKILLED LABOR INTENSITY VS. RELATIVE WAGES (NOT TO SCALE)
5.2 CASE 2: r = 150%
Figure 3 shows the factor price curves when the rate of profits
is 150%. Only the Gamma and Beta techniques appear on the frontier in
this case; the Alpha and Delta techniques are dominated.
S |
k |
i |
l 95/166+ Gamma
Wl | ./
ae | .
gd 445/802+ .
e | . .
L | . .
oa | . .
fb | . .
o 5/18+ ..
r | . . Beta
| . ./
( | . .
w | . .
1 +------------------------------+-------------------x----------x
) 2/9 19/44 89/200
Wage of Unskilled Labor (w2)
FIGURE 3: ANOTHER FACTOR PRICE FRONTIER (NOT TO SCALE)
Figure 4 shows the intensity of unskilled labor for the chosen
technique graphed against relative wages. In this case, a higher
wage for unskilled labor can be associated with a choice of technique
in which vertically integrated firms want to hire more unskilled
workers per unit output.
|
|
Labor |
Intensity |
|
(Unskilled 1100/929+ +---------------->
Person- | |
Years | |
per 410/349+----------------+
Bushel) |
|
|
|
|
+----------------+------------------
4/5
Ratio of Wages of Unskilled and Skilled Labor
FIGURE 4: UNSKILLED LABOR INTENSITY VS. RELATIVE WAGES (NOT TO SCALE)
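The step up in Figure 4 can be verified directly; the sketch below (not part of the original post) compares the Beta and Gamma surfaces at r = 150% on either side of the switch point w2 = 2/9:

```python
# A sketch (not part of the original post) checking the claim behind
# Figure 4: at r = 150% the cost-minimizing technique switches from Gamma
# to Beta as w2 rises past 2/9, and the unskilled labor intensity rises
# from 410/349 to 1100/929 person-years per bushel.
from fractions import Fraction as F

r = F(3, 2)

def w1_beta(w2):
    return (929 - 142*r - 71*r*r - (1100 + 100*r)*w2) / (1001 + r)

def w1_gamma(w2):
    return (349 - 152*r - r*r - (410 - 90*r)*w2) / (383 - 117*r)

intensity = {"Beta": F(1100, 929), "Gamma": F(410, 349)}

for w2 in [F(1, 5), F(1, 4)]:
    tech = "Beta" if w1_beta(w2) > w1_gamma(w2) else "Gamma"
    print(f"w2 = {w2}: {tech} chosen, unskilled intensity "
          f"{intensity[tech]} = {float(intensity[tech]):.4f}")
# Output: Gamma at w2 = 1/5 (~1.1748), Beta at w2 = 1/4 (~1.1841): a
# higher unskilled wage brings in the technique that uses *more*
# unskilled labor per bushel of net corn output.
```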
6.0 CONCLUSIONS
This example clearly shows that it is possible for a technique that
uses an input more intensively to be adopted by cost-minimizing firms
when the price of that input is higher. In the case illustrated by
Figure 4, vertically integrated firms desire to hire more unskilled
workers, per bushel corn produced net, at a higher wage for unskilled
labor. This is a matter of logic.
Those who do not think that this possibility ever occurs in
the real world have failed to face a challenge for decades now.
What are the special case assumptions adopted so as to rule out the
possibility illustrated in the example? Furthermore, why should
a special-case model be preferred to the more general model? The
general model for analyzing the choice of technique does not imply
a less-labor intensive technique will be adopted at a higher wage.
From long experience, I know that some are likely to make logical
mistakes at this point. So I'll conclude with a few observations. The
effect illustrated in the example can arise when there are many more
processes to choose from. It can arise in models with more than two
goods being produced. I don't think it depends on the existence of a
produced good that is used either directly or indirectly in the
production of all goods. (Both steel and corn have this property in
the example.) It can arise if there are non-produced commodities used
in production ("land") and capital-goods that last more than one
production cycle ("fixed capital" or "machinery"). I gather that
numeric examples with reasonable values are easier to construct, in
some sense, if there are more produced goods. At least, more degrees
of freedom arise.
Consequently, incorrect answers to my question are assumptions
that more goods are produced, more techniques are available, etc.
These assumptions are simply insufficient to imply the conclusion
that higher wages of a specific type are associated with a choice of
a technique using that type of labor less intensively.
The final questions posed by this example are a matter of the
sociology of knowledge. Similar examples have been available
in the literature for over three decades. Many economists,
including specialists in labor economics, seem to be unaware of
this possibility. Why do so many economists have logically
mistaken beliefs about their subject? Why do they continue to
teach irrelevant dogma?
REFERENCES
Heinz D. Kurz and Neri Salvadori, _Theory of Production: A
Long-Period Analysis_, Cambridge University Press, 1995
J. S. Metcalfe and Ian Steedman, "Reswitching and Primary Input Use,"
_Economic Journal_, 1972
APPENDIX
This appendix contains some supplementary tables and calculations for
anybody who wants to check my work.
A.1 QUANTITY FLOWS
TABLE A-1: QUANTITY FLOWS FOR THE BETA TECHNIQUE
INPUTS STEEL INDUSTRY CORN INDUSTRY
Skilled Labor (1/929) Person-Year (1000/929) Person-Year
Unskilled Labor (100/929) Person-Year (1000/929) Person-Year
Steel 0 Tons (100/929) Ton
Corn (71/929) Bushel 0 Bushels
OUTPUTS (100/929) Ton Steel (1000/929) Bushel Corn
TABLE A-2: QUANTITY FLOWS FOR THE GAMMA TECHNIQUE
INPUTS STEEL INDUSTRY CORN INDUSTRY
Skilled Labor (33/349) Person-Year (350/349) Person-Years
Unskilled Labor (60/349) Person-Year (350/349) Person-Years
Steel (15/349) Ton (35/349) Ton
Corn (1/349) Bushel 0 Bushels
OUTPUTS (50/349) Ton Steel (350/349) Bushel Corn
TABLE A-3: QUANTITY FLOWS FOR THE DELTA TECHNIQUE
INPUTS STEEL INDUSTRY CORN INDUSTRY
Skilled Labor (66/313) Person-Year (245/313) Person-Years
Unskilled Labor (120/313) Person-Year (280/313) Person-Years
Steel (30/313) Ton (70/313) Ton
Corn (2/313) Bushel (35/313) Bushel
OUTPUTS (100/313) Ton Steel (350/313) Bushel Corn
A.2 PRICE EQUATIONS
A.2.1 BETA PRICES
The Beta price system is:
(71/100)( 1 + r ) + (1/100) w1 + w2 = p (A-1)
(1/10) p ( 1 + r ) + w1 + w2 = 1 (A-2)
The solution is:
w1 = [ 929 - 142 r - 71 r r - (1100 + 100 r) w2 ]/( 1001 + r ) (A-3)
p = ( 720 + 710 r + 990 w2 )/( 1001 + r ) (A-4)
A.2.2 GAMMA PRICES
The Gamma price system is:
[ (3/10) p + (1/50) ]( 1 + r ) + (33/50) w1 + (6/5) w2 = p (A-5)
(1/10) p ( 1 + r ) + w1 + w2 = 1 (A-6)
The solution is:
w1 = [ 349 - 152 r - r r - (410 - 90 r) w2 ]/( 383 - 117 r ) (A-7)
p = ( 340 + 10 r + 270 w2 )/( 383 - 117 r ) (A-8)
A.2.3 DELTA PRICES
The Delta price system is:
[ (3/10) p + (1/50) ]( 1 + r ) + (33/50) w1 + (6/5) w2 = p (A-9)
[ (1/5) p + (1/10) ]( 1 + r ) + (7/10) w1 + (4/5) w2 = 1 (A-10)
The solution is:
w1 = ( 313 - 174 r + 13 r r - 400 w2 )/( 311 - 39 r ) (A-11)
p = ( 304 - 26 r + 156 w2 )/( 311 - 39 r ) (A-12)
--
Whether strength of body or of mind, or wisdom, or virtue, are found
in proportion to the power or wealth of a man is a question fit
perhaps to be discussed by slaves in the hearing of their masters,
but highly unbecoming to reasonable and free men in search of the
truth. -- Rousseau
### SUSUPPLY
Feb 5, 2000
It's re-run season again, is it, Robert?
>1.0 INTRODUCTION
>
> This long post presents an example in which higher wages for
>unskilled labor are associated with firms choosing to employ more
>unskilled workers per unit output produced.
How about you answering some questions for a change, say Chris Auld's two of a
couple of months ago to you?
Then you might want to comment on the following:
<< Mellowing Out? Not This Union Boss
<< Business Week; New York; January 31, 2000; Jack Ewing in Frankfurt;
<< In his three-piece suits, chauffeured Audi limo, and wood-paneled office
outside Frankfurt, Klaus Zwickel could be just another rich investment banker.
But when he jabs his thick index finger to make a point, his rough toolmaker's
hands reveal someone far more fearsome: the head of IG Metall, Germany's
largest labor union. With a wave of one of Zwickel's fat cigars, his union's
2.8 million members could shut down most of German heavy industry, including
companies such as DaimlerChrysler and Mannesmann.
<< Now, as Chancellor Gerhard Schroder finally shows signs of delivering the
reforms long sought by business, Zwickel is the guy standing in the way.
Schroder just found that out. On Jan. 9, he got union leaders to say they would
seek moderate pay increases if business agreed to make it easier for workers to
retire at 60 rather than 65. Zwickel signed--then days later demanded a 5.5%
raise for his people, far above what anyone expected. Business leaders were
enraged. ``The demand violates the letter and spirit of the agreement,'' says
Dieter Hundt, president of the German Employers' Assn. Schroder remained
silent, not wanting to risk alienating IG Metall before some crucial state
elections.
<< Although the union is likely to settle for 3% to 3.5%, business leaders and
economists say any raise above the estimated 2.5% growth in productivity will
kill jobs and add to Germany's 10.2% unemployment rate. ``There is no union
that has more radical, more militant, more old-fashioned views than IG
Metall,'' rails Hans-Olaf Henkel, president of the Federal Association of
German Industry.
<< Don't whine to Zwickel, 60, about holding down wages to make Germany more
competitive. The beefy, crew-cut labor boss, whose salary runs into six
figures, personifies the resentment many Germans feel about giving up their
social perks in the name of globalization. His membership is among the best
treated in the world, earning an average of $30,000 a year, with 35-hour
work weeks plus six weeks of vacation. Zwickel wants to keep it that way.
<< He has the clout to do it. His union is so feared that companies such
as IBM's German unit have redrawn their corporate structure to minimize
the number of workers who would qualify for IG Metall membership. [Their
demand curves slope how, Robert?]
<< And Zwickel's power radiates beyond the union. He sits on the
supervisory boards of two of Germany's most important companies,
Volkswagen and Mannesmann. IG Metall typically sets the tone, prompting
other unions to seek similar pay hikes.
<< But critics say the union is digging its own grave. By winning annual
raises of up to 6.8% in the last decade, the union drove German business
to invest in machines rather than people. [Well, how silly of them.]
<< And as industry moved production to cheaper locations overseas, IG
Metall lost members--600,000 since 1992. As chairman of the union for the
last seven years, Zwickel, who declined requests for an interview,
shoulders much of the blame. >>
The first question I'd ask him is, does he accept advice from Robert.
Patrick
### Chris Auld
Feb 5, 2000
Robert Vienneau <rv...@see.sig.com> wrote:
> This long post presents an example in which higher wages for
>unskilled labor are associated with firms choosing to employ more
>unskilled workers per unit output produced.
Rob has been working on Sraffa3.ps some more. Is there a new version
number now, Rob?
In answer to the challenge in the subject: do it yourself. Since you
have two types of labor, however, the question is ill-posed. Choose
one type and deduce the profit-maximizing amount of that type hired as
its own price changes in one period, all else equal (the way Rob phrases
the question, "long period demand," implies the ol' ceteris paribus
explicit in the definition of "input demand function" is still a point
Rob is confused about.)
>The exact numeric values
>used are obviously unreasonable. The example, however, is used to
>raise questions about the logical implications of maximizing
>behavior.
For the n-th time, Rob, all us stupid mainstream economists are well
aware that the locus of equilibria traced out as multiple prices
endogenously change will not necessarily exhibit the same properties as
its partial equilibrium analog. Why do you write this essay as if you're
saying something that's surprising or at odds with the "brainwashing"
we teach to even undergraduate micro theory students?
>They might also try to give some rationale for why one should be
>interested in this special case.
Empirical evidence? For instance, Rob has repeatedly suggested that the
mechanism in this paper is responsible for Card and Krueger's
controversial empirical results on minimum wages. But he refused to say
whether he really believed that endogenous responses in the world
interest rate feeding back to the labor market in one State were really
strong enough to generate the result. Recall that if we leave the
pointlessly cumbersome world of fixed coefficient production functions
and use differentiable technologies, Rob's result (with one type of
labor) can be generally written:

    dL   \partial L   \partial L \partial r
    -- = ---------- + ---------- ---------- .
    dw   \partial w   \partial r \partial w

The first term on the right-hand side is negative; Rob's result stems
from the fact that it's possible that the second term is positive and
larger in magnitude than the first.
But I've argued that, based on both theory and evidence, that term is
likely to be the product of two small values, and is therefore likely to
be swamped by the first term. So that is one reason why the partial
equilibrium concept 'labor demand schedule' is still useful even when we
all know the ceteris paribus assumption is false (oh, and by the way,
Rob, I don't agree with the "F-twist," so it's odd you think I've been
"lecturing" you on it.) I've also explained how standard econometric
techniques can recover both the labor demand curve and the total
derivative of labor response to a wage change, and why the former concept
is still a useful building block even if these feedback effects are
relatively large in magnitude. To no avail, apparently.
Why don't you tweak the model in a way that hasn't been done and both
post the results here and send them to a journal (sans the overblown
rhetoric, of course), Rob? That would be a rare example of a genuine
Pareto improvement.
--
Chris Auld (403)220-4098
Economics, University of Calgary <mailto:au...@ucalgary.ca>
Calgary, Alberta, Canada <URL:http://jerry.ss.ucalgary.ca/>
### Robert Vienneau
Feb 6, 2000
au...@jerry.ss.ucalgary.ca (Chris Auld) wrote:
> Robert Vienneau <rv...@see.sig.com> wrote:
> > This long post presents an example in which higher wages for
> >unskilled labor are associated with firms choosing to employ more
> >unskilled workers per unit output produced.
> [ Irrelevancy deleted. ]
> In answer to the challenge in the subject: do it yourself.
Which of these possible responses to my challenge is Chris making:
a) Admitting that he doesn't know how to draw factor demand
curves in this situation
b) Admitting supply and demand is not applicable to factor
markets in the long period.
c) Deciding to call some such locus like I draw a type of demand
curve
Or has Chris found some valid fourth response? Note that I do not choose
(c) in the post to which Chris is responding.
> Since you
> have two types of labor, however, the question is ill-posed.
Here Chris seems to have actually read part of the post to which he is
responding, but not its title. Why the challenge to draw labor demand
curveS is ill-posed when there is more than one type of labor is a
mystery that we members of the laity will never know.
> Choose
> one type and deduce the profit-maximizing amount of that type hired as
> its own price changes in one period, all else equal (the way Rob phrases
> the question, "long period demand," implies the ol' ceteris paribus
> explicit in the definition of "input demand function" is still a point
> Rob is confused about.)
Chris is being silly. Just because I know how to correctly analyze the
choice of technique in a long period position does not mean I am now
confused about anything. I showed this analysis to highlight a property
of the example I thought some might find interesting. The correct
analysis also serves as a contrast.
The only place the word "demand" appears in the post to which Chris is
pretending to respond is in the introductory paragraph concluding with
these sentences. "...In a long run position, the need for specific
quantities of capital goods will have been foreseen and the structure of
capital goods will have been adapted to production. I question whether
equilibria of this sort can be explained by the intersections of long
run supply and demand curves. Some economists have raised this question."
Given Chris' understanding, he should demonstrate that the wage of
a type of labor *can* vary in the example, leaving all other prices
unchanged. The profit-maximizing firm should be producing at a positive
level, then, that can be an equilibrium (when supply curves are at the
appropriate location). Chris, of course, substitutes insults for
addressing the challenge.
> >The exact numeric values
> >used are obviously unreasonable. The example, however, is used to
> >raise questions about the logical implications of maximizing
> >behavior.
> For the n-th time, Rob, all us stupid mainstream economists are well
> aware that the locus of equilibria traced out as multiple prices
> endogenously change will not necessarily exhibit the same properties as
> its partial equilibrium analog. Why do you write this essay as if you're
> saying something that's surprising or at odds with the "brainwashing"
> we teach to even undergraduate micro theory students?
If Chris isn't brainwashed, why cannot he respond with something
that exhibits a moment's command of reason? For the n-th time, I'd like
to see an explicit derivation in one of my examples - in this case, this
example - of one element, a demand curve for one or another sort of
labor, of that "partial equilibrium analog." The claim I am making is
that such a long period curve cannot be constructed.
> >They might also try to give some rationale for why one should be
> >interested in this special case.
> Empirical evidence?
If one had some sort of formal model, that would answer the suggested
question. But it would be nice to see a response to the question,
"What are your assumptions?" other than the non-sequitur, "Assumptions
do not need to be realistic." In other words, Chris does not have
a model to compare with empirical evidence.
Also notice that if Chris had a model appropriate for the suggested
question, in context, it would not be a model of supply and demand
curves, as he understands them.
By the way, Barkley Rosser has a book chapter from his new
edition of _From Catastrophe to Chaos: A General Theory of Economic
Discontinuities_ posted on his Web site <http://cob.jmu.edu/rosserjb/>.
In this chapter, he references Albin's (?) paper on a logging example
and one of his own co-authored papers as empirical evidence of
reswitching. Rosser also predates me in realizing Cambridge
Capital Controversy models point towards the possibility of
interesting dynamics arising in economic models. I have not read his
_Journal of Economic Theory_ (?) paper on the topic, but I was
influenced by his book.
[ Irrelevancy deleted. ]
> Recall that if we leave
> the pointlessly cumbersome world of fixed coefficient production
> functions
The points at issue have nothing to do with fixed coefficients, as I
understand them. Furthermore, as has been repeatedly pointed out to
Chris, my examples often do not use fixed coefficient production
functions. The example in the post to which Chris is pretending to
respond does not use fixed coefficient production functions. (If they
were fixed, he should be able to specify a unique value for each
coefficient.) I'd be quite happy with a non-trivial step function as an
answer to my challenge, if it could be answered.
> and use differentiable technologies, Rob's result (with one
> type of labor) can be generally written:
>
>     dL   \partial L   \partial L \partial r
>     -- = ---------- + ---------- ---------- .
>     dw   \partial w   \partial r \partial w
Here Chris is addressing another post.
The interesting aspect of *this* example arises when (del r/del w) is
zero. Of course, I change another wage when varying the first wage.
That's all part of my question whether one can draw meaningful factor
demand curves for this example.
Notice that Chris has not derived expressions for the terms in the
above equation from maximizing conditions.
> The first term on the right-hand side is negative; Rob's result stems
> from the fact that it's possible that the second term is positive and
> larger in magnitude than the first. But I've argued that, based on both
> theory and evidence, that term is likely to be the product of two small
> values, and is therefore likely to be swamped by the first term.
Uh, this heterogeneous labor example suggests that that's not all there
is to it. Furthermore, I disagree with Chris' understanding
of the theory. The question is can one coherently vary the wage
of just one type of labor, while leaving all other prices fixed?
> So that
> is one reason why the partial equilibrium concept 'labor demand
> schedule'
> is still useful even when we all know the ceteris paribus assumption is
> false
The above seems to be a non-sequitur until Chris shows how to draw a
labor demand schedule.
> (oh, and by the way, Rob, I don't agree with the "F-twist," so it's
> odd you think I've been "lecturing" you on it.)
I suppose Chris hasn't said that unrealistic assumptions are a positive
virtue. He has merely said discussions about the realism of assumptions
are of no worth and has frequently illustrated his supposed position
about methodology with examples of assumptions meant to be unrealistic.
I guess it's "odd", given some knowledge about economists' discussions
on these matters, to infer that Chris is supporting the F-twist. Of
course, Chris has also said that he more-or-less agrees with McCloskey's
take on Methodology - or rather, methodology. But don't let this side
discussion distract Chris from admitting the challenge in the thread
title cannot be met.
> I've also explained how standard econometric techniques can recover both
> the labor demand curve and the total derivative of labor response to a
> wage change, and why the former concept is still a useful building block
> even if these feedback effects are relatively large in magnitude. To no
> avail, apparently.
In a rare burst of candor, Chris once acknowledged he did not know how
to use econometric techniques to distinguish between short and long
period results.
> Why don't you tweak the model in a way that hasn't been done and both
> post the results here and send them to a journal (sans the overblown
> rhetoric, of course), Rob? That would be a rare example of a genuine
> Pareto improvement.
Actually, I am using this example to make a point other than the ones
Steedman and Metcalfe develop from it. They go on to show that the
Heckscher-Ohlin-Samuelson analysis is not correct if production is taken
seriously. But my point could not be published because it is obvious and
well-known.
Once one acknowledges that neoclassical factor supply and demand curves
are not applicable to the theory of long period distribution, one could
go on to make other points:
1) An important element in the 1870s development of neoclassical
economics was the claim that prices and quantities could be explained in
all runs by the intersection of stable and regular supply and demand
functions. Perhaps the extension of supply and demand curves to the long
run was a mistake.
Perhaps Ricardo and Marx had a coherent theory of long period positions
that should be revived.
2) The Cambridge Capital Controversy was about more than aggregation
issues. In particular, it developed, at least, a clarification of the
long run theory of prices and distribution.
3) It is often claimed that unemployment results from sticky wages and
prices. It is claimed if institutional rigidities in wages and prices
(e.g. labor unions, minimum wage laws) were removed, markets would move
quicker toward long run positions in which labor markets clear. This
perspective seems not to be a logical implication of rigorous economic
theory. In other words, Sraffa can add to Keynes' critique of
"classical" dogmas.
Of course, none of the above points are novel. In fact, I consider them
too well known to be publishable. The last point might have policy
consequences.
### Chris Auld
Feb 6, 2000
Robert Vienneau <rv...@see.sig.com> wrote:
>Which of these possible responses to my challenge is Chris making:
>
> a) Admitting that he doesn't know how to draw factor demand
> curves in this situation
As I've explained numerous times before, Rob's ludicrous insistence on
doing everything with non-differentiable production functions and
without an algebraic presentation makes actually working with any of his
models very tedious. I had my fill of deriving labor demand curves in
such situations in intermediate micro many years ago.
> b) Admitting supply and demand is not applicable to factor
> markets in the long period.
I don't even know what this means, Rob. Do you mean that, if we *must*
change many prices at once, the resulting locus is not a demand curve?
Do you mean that supply and demand are not useful tools in some contexts
for predicting gross quantity and price changes? Why have you never
answered my two simple questions which turn critically on that point?
>Here Chris seems to have actually read part of the post to which he is
>responding, but not its title. Why the challenge to draw labor demand
>curveS is ill-posed when there is more than one type of labor is a
>mystery that we members of the laity will never know.
Well, what do you mean by "labor," Rob? The sum of unskilled and skilled
workers? In which case, what is the own price of that sum?
>> Choose
>> one type and deduce the profit-maximizing amount of that type hired as
>> its own price changes in one period, all else equal (the way Rob
>> phrases the question, "long period demand," implies the ol' ceteris
>> paribus explicit in the definition of "input demand function" is still
>> a point Rob is confused about.)
>
>Chris is being silly. Just because I know how to correctly analyze the
>choice of technique in a long period position does not mean I am now
>confused about anything. I showed this analysis to highlight a property
>of the example I thought some might find interesting. The correct
>analysis also serves as a contrast.
>
>The only place the word "demand" appears in the post to which Chris is
>pretending to respond is in the introductory paragraph concluding with
>these sentences.
Rob might want to take a gander at the subject he chose (which brings to
mind fond memories of the thread he titled "labor demand curves can
slope up," and then insisted, when it was pointed out that he had shown
no such thing, that he had never meant to talk about labor demand curves
at all).
[ quoting out of order]
> Furthermore, I disagree with Chris' understanding
>of the theory.
>The question is can one coherently vary the wage
>of just one type of labor, while leaving all other prices fixed?
[and]
>Here Chris is addressing another post. The interesting aspect
>of *this* example arises when (del r/del w) is zero. Of course,
>I change another wage when varying the first wage. That's all part
>of my question whether one can draw meaningful factor demand curves
>for this example.
[and]
>Given Chris' understanding, he should demonstrate that the wage of
>a type of labor *can* vary in the example, leaving all other prices
>unchanged.
No, I should not. This is so frustrating. Rob, a "labor demand curve"
is, by definition, derived when only one price changes. It does not
matter if, in the model, changes in that price will alter other prices:
if we allow those other prices to alter,

THE RESULTING OBJECT IS NO LONGER A LABOR DEMAND CURVE.

Damnit, what is so hard to understand about this? I've spent years
trying to get this point across. Rob indignantly, as in this post,
claims to be well aware of it, and then of course it turns out he just
doesn't get it, as amply demonstrated above.
Look, Rob, generally a labor demand curve can be written L(w, \theta),
where \theta is a vector of other prices and any other relevant
parameters in the model. Suppose there is a relationship in the model,
such as yours, where we can write a functional w=w(\theta). A labor
demand curve is STILL (\partial L \over \partial w) even though we would
never observe that relationship in the modelled world. Get it? It's a
tool, a concept, a building block -- your objection that other prices
change with the wage rate does not invalidate the _definition_ of a
labor demand curve. If you want to claim that the _concept_ is not
"meaningful," you're both wrong (it's very useful) and making an
entirely different argument than all your charged rhetoric, and the
subject of this thread, suggest.
>If Chris isn't brainwashed, why cannot he respond with something
>that exhibits a moment's command of reason?
Yup. I find it fascinating that Rob thinks statements along the
lines "you haven't interpreted this result correctly" are "insults"
and gets all huffy, but doesn't think twice about tossing off little
jewels of rational discourse such as that. Rob, if it makes you
happy to believe that spending a decade learning these concepts
formally and actually applying them is "brainwashing," be my guest.
>> >They might also try to give some rationale for why one should be
>> >interested in this special case.
>> Empirical evidence?
>If one had some sort of formal model, that would answer the suggested
>question. But it would be nice to see a response to the question,
>"What are your assumptions?" other than the non-sequitur, "Assumptions
>do not need to be realistic." In other words, Chris does not have
>a model to compare with empirical evidence.
OK, my assumption is that feedbacks from changes in the labor market in
a given country or part thereof to the interest rate are small relative
to the own-price effect. I draw upon a large body of empirical evidence
to support that assumption.
>Also notice that if Chris had a model appropriate for the suggested
>question, in context, it would not be a model of supply and demand
>curves, as he understands them.
Rob is so cute when he's condescending!
>By the way, Barkley Rosser has a book chapter from his new
>edition of _From Catastrophe to Chaos: A General Theory of Economic
>Discontinuities_ posted on his Web site <http://cob.jmu.edu/rosserjb/>.
>In this chapter, he references Albin's (?) paper on a logging example
>and one of his own co-authored papers as empirical evidence of
>reswitching.
Rob talking about empirical evidence! Wow, isn't that the fourth sign
of the Apocalypse? Wouldn't the empirical methods used to find such an
oddity be the same ones I'm not allowed to use because "I don't have a
model?"
> Rosser also predates me in realizing Cambridge
>Capital Controversy models point towards the possibility of
>interesting dynamics arising in economic models.
Damn, there goes your Nobel.
>Notice that Chris has not derived expressions for the terms in the
>above equation from maximizing conditions.
No, I haven't "derived" them, it's a general statement. The function
L() itself is derived from a maximization problem. Rob's pointlessly
lengthy posts are completely summarized (and clarified) by that one
equation I have, which I suppose ticks Rob off because all that tedious
math no doubt took a lot of effort.
>> (oh, and by the way, Rob, I don't agree with the "F-twist," so it's
>> odd you think I've been "lecturing" you on it.)
>
>I suppose Chris hasn't said that unrealistic assumptions are a positive
>virtue. He has merely said discussions about the realism of assumptions
>are of no worth and has frequently illustrated his supposed position
>about methodology with examples of assumptions meant to be unrealistic.
The "F-twist" says that the realism of assumptions is completely
irrelevant. I hold, rather, that unrealistic assumptions are necessary
evils ("virtuous" in the tortured sense inherent in the oddly construed
Georgia O'Keeffe quote at the bottom of my web page) which do not
necessarily invalidate the results of a modelling exercise. I therefore
find "you are wrong because your objections are unrealistic" a naive and
wrongheaded line of argument, although I might find an objection like
"you are wrong because your result hinges on additively separable
utility, and does not hold in the more general case" compelling.
Unfortunately, Rob's objections are usually more like the former than
the latter ("you are wrong because you assume utility maximization, yet
the Slutsky matrix is not found empirically to be symmetric" -- ugh).
>In a rare burst of candor, Chris once acknowledged he did not know how
>to use econometric techniques to distinguish between short and long
>period results.
No, I don't, which is why I would use a much better and more explicitly
dynamic model to deal with such a situation econometrically. Now, how
did the empirical piece Rob cites above deal with the problem?
>But my point could not be published because it is obvious and well-known.
Then why do you insist on posting this stuff ad nauseam here, Rob?
>Once one acknowledges that neoclassical factor supply and demand curves
>are not applicable to the theory of long period distribution, one could
>go on to make other points:
Rob, can you even vaguely understand how pointless your presentation is
to professional economists? How ludicrous it is to claim that the
results you present here comprise "the death of neoclassical economics?"
Do you think you could come up with a new issue to harp on (now that
it's a new Millennium and all) that isn't quite so archaic? How about
finding fault with some theory that's not archaic, perhaps one developed
since, say, 1970, and lecturing us on how dumb economists are for _that_
theory for the next five years? It would be a nice change of pace.
### SUSUPPLY
Feb 7, 2000
Chris Auld laments another shortened career:
>> Rosser also predates me in realizing Cambridge
>>Capital Controversy models point towards the possibility of
>>interesting dynamics arising in economic models.
>
>Damn, there goes your Nobel.
Maybe a compensating entry in the Palgrave; Vienneauian Petulance?
Patrick
### Robert Vienneau
Feb 7, 2000
au...@jerry.ss.ucalgary.ca (Chris Auld) wrote:
> Robert Vienneau <rv...@see.sig.com> wrote:
> >Which of these possible responses to my challenge is Chris making:
> > a) Admitting that he doesn't know how to draw factor demand
> > curves in this situation
> As I've explained numerous times before, Rob's ludicrous insistence on
> doing everything with non-differentiable production functions and
> without an algebraic presentation makes actually working with any of
> his models very tedious. I had my fill of deriving labor demand curves
> in such situations in intermediate micro many years ago.
Once again, Chris does not rise to the challenge. As I've told him
repeatedly, respondents are free to use letters and append any
additional assumptions they want (e.g. on the product market).
And he hasn't explained that numerous times before. He objected
to long decimal expansions which seemed to be approximations to
fractions with ungainly numerators or denominators. (I believe that was
the case with my previous example, but it's been a while since I
constructed it.) So here I present an example with fairly "nice"
fractions. Yet he still cannot draw the curves.
> > b) Admitting supply and demand is not applicable to factor
> > markets in the long period.
> I don't even know what this means, Rob. Do you mean that, if we *must*
> change many prices at once, the resulting locus is not a demand curve?
Yes, as Chris has been repeatedly saying. Here's an example of a long
period model. If labor demand curves are applicable, why doesn't he
construct them?
[>>> Since you ]
[>>> have two types of labor, however, the question is ill-posed. ]
> >Here Chris seems to have actually read part of the post to which he is
> >responding, but not its title. Why the challenge to draw labor demand
> >curveS is ill-posed when there is more than one type of labor is a
> >mystery that we members of the laity will never know.
> Well, what do you mean by "labor," Rob? The sum of unskilled and skilled
> workers? In which case, what is the own price of that sum?
"Curves" is plural. So one would have a labor demand curve for unskilled
labor and another curve for skilled labor. Chris continually responds
with these non sequiturs, but then wails and gnashes his teeth if I
exhibit any frustration with his reading comprehension problems. If I
were to aggregate labor, why would I be asking for him to draw more than
one curve?
Chris complains I complain about aggregation of capital without
worrying about aggregation of labor. Well, here's perhaps the simplest
possible example of non-aggregated laborS and non-aggregated
commodities. And the usual counter-intuitive result arises.
> >> Choose
> >> one type and deduce the profit-maximizing amount of that type hired
> >> as its own price changes in one period, all else equal (the way Rob
> >> phrases the question, "long period demand," implies the ol' ceteris
> >> paribus explicit in the definition of "input demand function" is
> >> still a point Rob is confused about.)
Notice that the above non-sentence is obviously meant to be insulting.
> >Chris is being silly. Just because I know how to correctly analyze the
> >choice of technique in a long period position does not mean I am now
> >confused about anything. I showed this analysis to highlight a property
> >of the example I thought some might find interesting. The correct
> >analysis also serves as a contrast.
> >The only place the word "demand" appears in the post to which Chris is
> >pretending to respond is in the introductory paragraph concluding with
> >these sentences.
> Rob might want to take a gander at the subject he chose... [insulting
> silliness meant to be insulting deleted ]
I chose the subject title because I'd like to see either a construction
of labor demand curves for my example or an acknowledgement that demand
and supply functions are not applicable to long period models in
general. Chris has not provided either. Instead, he spouts nonsensical
non-sequiturs.
> [ quoting out of order]
> > Furthermore, I disagree with Chris' understanding
> >of the theory. The question is can one coherently vary the wage
> >of just one type of labor, while leaving all other prices fixed?
> [and]
> >Here Chris is addressing another post. The interesting aspect
> >of *this* example arises when (del r/del w) is zero. Of course,
> >I change another wage when varying the first wage. That's all part
> >of my question whether one can draw meaningful factor demand curves
> >for this example.
> [and]
> >Given Chris' understanding, he should demonstrate that the wage of
> >a type of labor *can* vary in the example, leaving all other prices
> >unchanged.
> No, I should not.
OK, Chris doesn't need to address the challenge in the subject header.
> This is so frustrating. Rob, a "labor demand curve"
> is, by definition, derived when only one price changes. It does not
> matter
> if, in the model, changes in that price will alter other prices: if we
> allow those other prices to alter,
>
> THE RESULTING OBJECT IS NO LONGER A LABOR DEMAND CURVE.
That is to say factor input curves are not applicable to my
example. If Chris disagrees, there's a simple way to show me wrong.
Construct them. The text that Chris quotes out of order shows that I
understand the point of his definition.
> [ Silliness exhibiting Chris' usual reading comprehension - deleted. ]
> Look, Rob, generally a labor demand curve can be written L(w, \theta),
> where
> \theta is a vector of other prices and any other relevant parameters in
> the
> model. Suppose there is a relationship in the model, such as yours,
> where
> we can write a functional w=w(\theta). A labor demand curve is STILL
> (\partial L \over \partial w) even though we would never observe that
> relationship in the modelled world.
Chris should say that the own-price derivative of a labor demand curve
is the above partial derivative.
> Get it? It's a tool, a concept, a
> building block -- your objection that other prices change with the wage
> rate does not invalidate the _definition_ of a labor demand curve.
These lectures on methodology are beside the point. Can factor demand
curves be legitimately constructed for long period models? Can they be
applied to my example? There's a simple way to answer the latter
question in the affirmative - construct them.
> If
> you want to claim that the _concept_ is not "meaningful," you're both
> wrong (it's very useful) and making an entirely different argument than
> all your charged rhetoric, and the subject of this thread, suggest.
Following Paul Feyerabend, one might accept that scientists can advance
with logical contradictions, sometimes.
> >If Chris isn't brainwashed, why cannot he respond with something
> >that exhibits a moment's command of reason?
> Yup. I find it fascinating that Rob thinks statements along the
> lines "you haven't interpreted this result correctly" are "insults"
> and gets all huffy, but doesn't think twice about tossing off little
> jewels of rational discourse such as that. Rob, if it makes you
> happy to believe that spending a decade learning these concepts
> formally and actually applying them is "brainwashing," be my guest.
It's weird that Chris doesn't know he's being deliberately insulting.
I also find it weird that he's not aware that many think ill of
mainstream economists and of the process by which they are
"educated." If Chris isn't brainwashed, why does he not respond with the
obvious substantial reply to my challenge - construct labor demand
curves for the example?
> >> >They might also try to give some rationale for why one should be
> >> >interested in this special case.
> >> Empirical evidence?
> >If one had some sort of formal model, that would answer the suggested
> >question. But it would be nice to see a response to the question,
> >"What are your assumptions?" other than the non-sequitur, "Assumptions
> >do not need to be realistic." In other words, Chris does not have
> >a model to compare with empirical evidence.
> OK, my assumption is that feedbacks from changes in the labor market in
> a given country or part thereof to the interest rate are small relative
> to the own-price effect. I draw upon a large body of empirical evidence
> to support that assumption.
That's irrelevant to *this* example anyway and not an assumption
consistent with the methodological individualist approach of
neoclassical economics. The interesting effect shows up here when the
interest rate is constant.
> >Also notice that if Chris had a model appropriate for the suggested
> >question, in context, it would not be a model of supply and demand
> >curves, as he understands them.
> Rob is so cute when he's condescending!
Of course, Chris was indeed responding to my suggestion that there
might, perhaps, maybe, be some unknown special case assumptions that
would imply the standard approach to the analysis of the choice of
technique in long period models would necessarily show that firms adopt
a less factor-intensive method of production when the price of that
factor is higher. This analysis would not be of supply and demand, as
Chris has repeatedly stated. I suppose Chris' silliness is in lieu of
acknowledging that.
> >By the way, Barkley Rosser has a book chapter from his new
> >edition of _From Catastrophe to Chaos: A General Theory of Economic
> >Discontinuities_ posted on his Web site <http://cob.jmu.edu/rosserjb/>.
> >In this chapter, he references Albin's (?) paper on a logging example
> >and one of his own co-authored papers as empirical evidence of
> >reswitching.
> Rob talking about empirical evidence! Wow, isn't that the fourth sign
> of the Apocalypse?
So how many times will Chris repeat his astonishment at my talking
about empirical evidence?
> Wouldn't the empirical methods used to find such an oddity be the
> same ones I'm not allowed to use because "I don't have a model?"
Don't know, don't care. My understanding of the point at issue is the
standard assumptions of long-period neoclassical economics do not imply
the results economists thought intuitive.
Since this is a point of logic, it is not clear how empirical evidence
can be relevant, much less decisive.

> [ Silliness deleted. ]

> >Notice that Chris has not derived expressions for the terms in the
> >above equation from maximizing conditions.

> No, I haven't "derived" them, it's a general statement. The function
> L() itself is derived from a maximization problem.

Notice that Chris has not derived the function L() itself from
maximizing conditions.

> Rob's pointlessly
> lengthy posts are completely summarized (and clarified) by that one
> equation I have, which I suppose ticks Rob off because all that tedious
> math no doubt took a lot of effort.

Chris is projecting the emotional heat with which he replies, I guess.

[ Comments about methodology demonstrating Chris' usual reading ]
[ difficulties and containing his usual silly personalities - deleted. ]

> >In a rare burst of candor, Chris once acknowledged he did not know how
> >to use econometric techniques to distinguish between short and long
> >period results.

> No, I don't, which is why I would use a much better and more explicitly
> dynamic model to deal with such a situation econometrically. Now, how
> did the empirical piece Rob cites above deal with the problem?

So the empirical work to which Chris vaguely refers deals with another
issue. How is it relevant to this thread?

> >But my point could not be published because it is obvious and
> >well-known.

> Then why do you insist on posting this stuff ad nauseam here, Rob?

It's well-known among well-educated economists. Why does Chris keep
disputing established results?

> >Once one acknowledges that neoclassical factor supply and demand curves
> >are not applicable to the theory of long period distribution, one could
> >go on to make other points:

> Rob, can you even vaguely understand how pointless your presentation
> is to professional economists? How ludicrous it is to claim that the
> results you present here comprise "the death of neoclassical
> economics?"

Why does Chris keep whining about thread titles from a long time ago?
Anyway, the demonstration, on one of several possible grounds, that
neoliberal advocates do not have a theory to support their remaking of
the world seems to be of contemporary interest.

> [ More silliness deleted. ]

### Chris Auld (Feb 8, 2000)

Robert Vienneau <rv...@see.sig.com> wrote:

>Once again, Chris does not rise to the challenge. As I've told him
>repeatedly, respondents are free to use letters and append any
>additional assumptions they want (e.g. on the product market).

Well, fine: consider the firm's problem as a special case of:

    profit = pf(x) - wx,

where f(x) is the (in this case, discontinuous) production function and
x is the vector of inputs (here including two types of labor). Let
x(w) be the (again, in this case, discontinuous) demand functions
resulting from solving the maximization problem. The factor demand
curve for any of those inputs is the schedule x_i (w_i | w_{-i} ). Sum
these schedules across firms to get the aggregate labor demand
schedule.

Rob will object that in the "long period" there is a functional
relationship between the elements of w. Rob will fail, again, to
understand that that fact does not alter anything in the preceding
paragraph. If he wants to keep declaring his "challenge has not been
met," he must show why the problem he presents is not a special case
of the above.
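A small sketch may make this construction concrete for readers. The
numbers below are hypothetical placeholders, not the example disputed
in this thread: with a finite set of fixed-coefficient techniques, hold
the output price and the other wage fixed, vary one wage, and record
the profit-maximizing input choice. For this construction the schedule
is a step function, weakly downward sloping.

    # Hypothetical technology: (output, unskilled labor, skilled labor)
    # per unit of activity.  Placeholder numbers, for illustration only.
    TECHNIQUES = [
        (1.0, 2.0, 1.0),   # relatively unskilled-labor intensive
        (1.0, 1.0, 2.0),   # relatively skilled-labor intensive
    ]

    def unskilled_labor_demanded(p, w_unskilled, w_skilled):
        """Unskilled labor hired by a profit-maximizing firm at the
        given prices; zero if no technique earns positive profit."""
        best, best_profit = 0.0, 0.0
        for output, unskilled, skilled in TECHNIQUES:
            profit = p * output - w_unskilled * unskilled - w_skilled * skilled
            if profit > best_profit:
                best, best_profit = unskilled, profit
        return best

    # Trace the ceteris paribus schedule: only w_unskilled varies.
    for w in (0.2, 0.4, 0.6, 0.8, 1.0):
        print(w, unskilled_labor_demanded(p=2.0, w_unskilled=w, w_skilled=0.5))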
>And he hasn't explained that numerous times before. He objected
>to long decimal expansions which seemed to be approximations to

Actually, Rob, if you check dejanews you will indeed find I have taken
you to task numerous times for your atypical presentation; both the
bizarre numerical values and the non-algebraic presentation generally.

>> Well, what do you mean by "labor," Rob? The sum of unskilled and skilled
>> workers? In which case, what is the own price of that sum?
>
>"Curves" is plural. So one would have a labor demand curve for unskilled
>labor and another curve for skilled labor. Chris continually responds
>with these non sequiturs, but then wails and gnashes his teeth if I

<yawn>

Rob, your phrasing was ambiguous. I thought you were asking for a
demand curve for "labor" when there are multiple types of labor.

>Chris complains I complain about aggregation of capital without
>worrying about aggregation of labor. Well, here's perhaps the simplest
>possible example of non-aggregated laborS and non-aggregated
>commodities. And the usual counter-intuitive result arises.

No kidding! Is that why you spent the time rewriting your essay? What
a waste. Recall, however, that my question was not that you didn't
consider aggregation of labor, but rather that you drew the conclusion
from the original model that payments to capital are not in exchange
for production. Do you wish to draw that conclusion for labor under
the recognition that labor is heterogeneous (not that you needed a
model), or do you wish to retract the assertion for capital?

>> Rob might want to take a gander at the subject he chose... [insulting
>> silliness meant to be insulting deleted ]

"Insulting silliness meant to be insulting!" Oh my! Rob, it _was_
rather funny when you insisted the thread you titled "labor demand
curves can slope up" wasn't really about labor demand curves. I'd be
embarrassed too. But it isn't an insult to bring it up.

>> if, in the model, changes in that price will alter other prices: if we
>> allow those other prices to alter,
>>
>> THE RESULTING OBJECT IS NO LONGER A LABOR DEMAND CURVE.
>
>That is to say factor input curves are not applicable to my
>example.

Yet again, Rob, simply because multiple prices in your model change as
any factor price changes does not mean that a factor demand curve does
not exist, nor does it mean that the factor demand curve is not a
useful concept. If all you want to show is that other prices change as
the wage rate changes, why not simply replace your lengthy essay with
one paragraph:

"Standard, mainstream economic theory says that partial equilibrium
concepts may be misleading. Everyone knows this. I'm just pointing
it out again, albeit in an obnoxious fashion and with muddled
exposition meant to serve my political purposes and make it appear
to laymen that I'm saying something economists don't know."

>> jewels of rational discourse such as that. Rob, if it makes you
>> happy to believe that spending a decade learning these concepts
>> formally and actually applying them is "brainwashing," be my guest.
>
>It's weird that Chris doesn't know he's being deliberately insulting.
>I also find it weird that he's not aware that many think ill of
>mainstream economists and of the process by which they are
>"educated."

Rob, if you want to push your credibility from slim to none, why not
keep up this line of argument?

>> >But my point could not be published because it is obvious and
>> >well-known.
>
>> Then why do you insist on posting this stuff ad nauseam here, Rob?
>It's well-known among well-educated economists. Why does Chris
>keep disputing established results?

Exactly what result that's well-known do you imagine I'm disputing,
Rob? I do dispute your insistence, over the span of years, that your
result shows that factor demand curves can slope up -- that
interpretation is not "well known," however, because it is dead wrong.

Moreover, Rob has skirted my question with a clumsy insult: Rob, you
keep telling us that what you're saying is at odds with mainstream
economics, indeed, is of sufficient force to topple all existing
results and methodology, "the death of neoclassical economics." The
response he got was that this was rubbish: what is true about your
results is well-known, and the rhetoric wildly overblown at best. And
now he seems to be admitting that everything he's saying is in fact
well-known. So where does this leave the vitriolic charges? "You
know, that stuff you teach even to undergraduates is... well, ok, it's
correct." Wow. You got us there, Rob.

>If Chris isn't brainwashed, why does he not respond with the
>obvious substantial reply to my challenge - construct labor
>demand curves for the example?

OK, you got me Rob. I remember the time in grad micro that I brazenly
stuck up my hand and asked if partial equilibrium concepts can
sometimes be misleading. The instructor, clad in black leather and
mirrored sunglasses, merely crinkled one corner of his mouth and almost
whispered the dreaded phrase, oh mercy, the dreaded phrase, "Room 303."
And off I was dragged to receive punishment for my crime. I learned my
lesson that day and refuse to pull out a calculator to work through
Rob's presentation not because it would bore me and serve no purpose,
but because I am still in the sway of my Dark Masters. Who, of course,
realize that using partial equilibrium concepts is the key to keeping
the ruling elite in power and stomping on the little guy. Next time
you see a black helicopter fly by, look real close and you'll see on
the side a little graph with two axes and a curve labelled "Dl"....

>Anyway, the demonstration, on one of several possible grounds, that
>neoliberal advocates do not have a theory to support their remaking of
>the world seems to be of contemporary interest.

And here Rob's political motivations are laid bare. Rob's posts would
be more honest and more interesting if he would simply argue about his
political beliefs rather than pretending to be talking about
microeconomic theory.

### SUSUPPLY (Feb 10, 2000)

Chris Auld writes some more amusing questions for Robert to not answer:

>>"Curves" is plural. So one would have a labor demand curve for unskilled
>>labor and another curve for skilled labor. Chris continually responds
>>with these non sequiturs, but then wails and gnashes his teeth if I
>
><yawn>
>
>Rob, your phrasing was ambiguous. I thought you were asking for a demand
>curve for "labor" when there are multiple types of labor.

Something every manager knows quite well. I guess Rob doesn't get
around much.

>>Chris complains I complain about aggregation of capital without
>>worrying about aggregation of labor. Well, here's perhaps the simplest
>>possible example of non-aggregated laborS and non-aggregated
>>commodities. And the usual counter-intuitive result arises.
>
>No kidding! Is that why you spent the time rewriting your essay? What
>a waste.
>Recall, however, that my question was not that you didn't consider
>aggregation of labor, but rather that you drew the conclusion from the
>original model that payments to capital are not in exchange for production.
>Do you wish to draw that conclusion for labor under the recognition that
>labor is heterogeneous (not that you needed a model), or do you wish
>to retract the assertion for capital?

Yes, a question that has been put to him how many dozens of times,
without Rob even attempting an answer. Some day I'll have to make a
list of "Pending Questions for Vienneau". It would begin, "This long
post....".

>>> Rob might want to take a gander at the subject he chose... [insulting
>>> silliness meant to be insulting deleted ]
>
>"Insulting silliness meant to be insulting!" Oh my! Rob, it _was_ rather
>funny when you insisted the thread you titled "labor demand curves can
>slope up" wasn't really about labor demand curves. I'd be embarrassed
>too. But it isn't an insult to bring it up.

One can always offer to stop telling the truth about Rob if he would
stop telling lies about thee. Though what fun would that be.

[snip]

>Yet again, Rob, simply because multiple prices in your model change as
>any factor price changes does not mean that a factor demand curve does
>not exist, nor does it mean that the factor demand curve is not a useful
>concept. If all you want to show is that other prices change as the
>wage rate changes, why not simply replace your lengthy essay with one
>paragraph:
>
>"Standard, mainstream economic theory says that partial equilibrium
>concepts may be misleading. Everyone knows this. I'm just pointing
>it out again, albeit in an obnoxious fashion and with muddled
>exposition meant to serve my political purposes and make it appear
>to laymen that I'm saying something economists don't know."

Well, numerous people have asked a version of that too. With no answer
I can ever remember.

[snip]

>OK, you got me Rob. I remember the time in grad micro that I
>brazenly stuck up my hand and asked if partial equilibrium concepts
>can sometimes be misleading. The instructor, clad in black leather
>and mirrored sunglasses, merely crinkled one corner of his mouth and
>almost whispered the dreaded phrase, oh mercy, the dreaded phrase,
>"Room 303." And off I was dragged to receive punishment for my crime.

Hmm. I think you might actually be encouraging him here.

>I learned my lesson that day and refuse to pull out a calculator to
>work through Rob's presentation not because it would bore me
>and serve no purpose, but because I am still in the sway of my Dark
>Masters. Who, of course, realize that using partial equilibrium
>concepts is the key to keeping the ruling elite in power and stomping
>on the little guy. Next time you see a black helicopter fly by, look
>real close and you'll see on the side a little graph with two axes
>and a curve labelled "Dl"....

I see Robert Vaughn as the head of the Bank of Sweden, Julia Roberts
as the love interest....

Patrick

### Robert Vienneau (Feb 10, 2000)

au...@jerry.ss.ucalgary.ca (Chris Auld) wrote:

> Robert Vienneau <rv...@see.sig.com> wrote:

> >Once again, Chris does not rise to the challenge. As I've told him
> >repeatedly, respondents are free to use letters and append any
> >additional assumptions they want (e.g. on the product market).
> Well, fine: consider the firm's problem as a special case of:
> profit = pf(x) - wx,
> where f(x) is the (in this case, discontinuous) production function
> and x is the vector of inputs (here including two types of labor).
> Let x(w) be the (again, in this case, discontinuous) demand functions
> resulting from solving the maximization problem. The factor demand
> curve for any of those inputs is the schedule x_i (w_i | w_{-i} ).
> Sum these schedules across firms to get the aggregate labor demand
> schedule.

The above is handwaving that assumes what is to be proved - that the
(vertically integrated) firm can be in long-run equilibrium for some
interesting range of p and w.

> Rob will object that in the "long period" there is a functional
> relationship between the elements of w.

Chris doesn't seem to understand the relationship between the factor
price frontier and profit-maximizing.

> Rob will fail, again, to
> understand that that fact does not alter anything in the preceding
> paragraph. If he wants to keep declaring his "challenge has not
> been met," he must show why the problem he presents is not a
> special case of the above.

This is to disguise that Chris has not met my challenge. He is relying
on popular non-formal understandings of "prices" and "inputs" to
disguise that he has not shown how to formulate my example in his
terms. In particular, I don't know how many inputs he sees in my
example, whether he sees more than one variable denoting unskilled
labor in his formulation (perhaps at different dates), whether he
considers the rate of interest a price, and, if so, the price of what
input. Strangely enough, I have brought up these questions before. So
he should have been able to foresee these objections to his post.

In short, Chris has not met any reasonable standard of the burden of
proof. He has not shown how to construct long period labor demand
curves for *this* example. His claim that I "must show why the problem
he presents is not a special case of the above" is an implicit
admission, as I read it, that he has not met the challenge.

> If all you want to show is that other prices change as the
> wage rate changes, why not simply replace your lengthy essay with one
> paragraph:
>
> "Standard, mainstream economic theory says that partial equilibrium
> concepts may be misleading. Everyone knows this. I'm just pointing
> it out again, albeit in an obnoxious fashion and with muddled
> exposition meant to serve my political purposes and make it appear
> to laymen that I'm saying something economists don't know."

Chris is being silly. It would help his claim that economists such as
himself understood my point if he would show some grasp of the
analytical tools that I think useful for the analysis of the choice of
technique in long period positions. I don't think he understands them.
It should be clear that one part of my motivation is finding certain
analytical tools elegant.

> >> >But my point could not be published because it is obvious and
> >> >well-known.

> >> Then why do you insist on posting this stuff ad nauseam here, Rob?

> >It's well-known among well-educated economists. Why does Chris
> >keep disputing established results?

> Exactly what result that's well-known do you imagine I'm disputing,
> Rob?

I suppose Chris cannot be said to be disputing the use of the factor
price frontier to analyze the choice of technique. He is merely not
demonstrating any command of this tool. In fact, I honestly think he
is still ignorant of this tool.
> I do dispute your insistence, over the span of years, that
> your result shows that factor demand curves can slope up -- that
> interpretation is not "well known," however, because it is dead
> wrong.

  "However, as was argued in Section 3 with regard to 'perversely'
  shaped, that is, upward sloping, factor-demand functions, this
  possibility would question the validity of the entire economic
  analysis in terms of demand and supply."
  -- H. D. Kurz and N. Salvadori, _Theory of Production: A Long
  Period Analysis_, Cambridge University Press, 1995.

I find Kurz and Salvadori quite credible. The above is near the end of
a long textbook. The analysis in that textbook, as should be true of
most textbooks, was built up over decades. Important economists in
developing this analysis include such obscure nonentities as Paul
Samuelson, Robert Solow, Franco Modigliani - not that they would
necessarily agree with Kurz and Salvadori's conclusions.

Despite Chris' insistence that this interpretation is dead wrong, I
have yet to see him grapple much with the arguments I report. His
demonstrated resistance to such serious intellectual work I find to be
of most interest for psychology or the sociology of knowledge.

I think he has a point that talking about upward sloping, factor-demand
functions in this context is metaphoric. That is why I have moved to
thinking it better to say supply and demand is not applicable to long
period analysis. But I am still willing to use thread titles like
"Literature Review Of Upward Sloping Factor Demand Curves" when posting
a literature review of claims that factor demand curves can slope up.
Furthermore, anyone who has bothered reading this stuff for a while can
look at my first post on this thread and see that I deliberately
avoided tendentious phrasing.

> [ Further silliness deleted. ]

> >If Chris isn't brainwashed, why does he not respond with the
> >obvious substantial reply to my challenge - construct labor
> >demand curves for the example?

> OK, you got me Rob. I remember the time in grad micro that I
> brazenly stuck up my hand and asked if partial equilibrium concepts
> can sometimes be misleading. The instructor, clad in black leather
> and mirrored sunglasses, merely crinkled one corner of his mouth and
> almost whispered the dreaded phrase, oh mercy, the dreaded phrase,
> "Room 303." And off I was dragged to receive punishment for my crime.
> I learned my lesson that day and refuse to pull out a calculator to
> work through Rob's presentation not because it would bore me
> and serve no purpose, but because I am still in the sway of my Dark
> Masters. Who, of course, realize that using partial equilibrium
> concepts is the key to keeping the ruling elite in power and stomping
> on the little guy. Next time you see a black helicopter fly by, look
> real close and you'll see on the side a little graph with two axes
> and a curve labelled "Dl"....

Of course, the above is a confession, perhaps witty if one finds it so,
that Chris has still not met the challenge in the subject header. Of
course, it being winter in Canada, Chris really is currently mostly
under the sway of darkness.

> >Anyway, the demonstration, on one of several possible grounds, that
> >neoliberal advocates do not have a theory to support their remaking of
> >the world seems to be of contemporary interest.

> And here Rob's political motivations are laid bare.
> Rob's posts would be more honest and more interesting if he would
> simply argue about his political beliefs rather than pretending to be
> talking about microeconomic theory.

But many economists think the stuff taught in "mainstream" intermediate
microeconomics classes is ideology, incoherent as theory. I am indeed
arguing about those incoherences.

### Chris Auld (Feb 11, 2000)

Robert Vienneau <rv...@see.sig.com> wrote:

>au...@jerry.ss.ucalgary.ca (Chris Auld) wrote:
>> resulting from solving the maximization problem. The factor demand
>> curve for any of those inputs is the schedule x_i (w_i | w_{-i} ).
>> Sum these schedules across firms to get the aggregate labor demand
>> schedule.
>
>The above is handwaving that assumes what is to be proved - that
>the (vertically integrated) firm can be in long-run equilibrium
>for some interesting range of p and w.

This is nonresponsive. How precisely does your problem not fit into
the general framework given, Rob? Yet again, one does not impose a
zero profit condition nor vary any other parameter than the factor
price of interest when deriving the schedule. Let's make it even more
general, Rob: then write the firm's profit function as P(x | w_i ;
\Omega), where x is the set of choice variables, w_i is the factor
price we are interested in deriving a factor demand curve for and
\Omega is every other parameter affecting the firm's decisions.
Maximize with respect to x. The resulting schedule x_i(w_i | \Omega)
is the factor demand function for factor i. It does not slope up, as a
simple axiomatic argument shows, and it does exist.

Simply, in Rob's context, the demand curve for unskilled labor at time
t is the responses to the query "how much unskilled labor would the
firm hire at t" as the wage of unskilled labor at t varies, ALL ELSE
EQUAL. It is permissible for the answer to be zero, unbounded, or not
unique.

Consider again a simpler presentation of the basic ideas in Rob's
posts. Let z(w,r)=0 denote an aggregate zero profit condition.
Suppose it is monotone such that we can write r=r(w) in equilibrium.
Let L(w,r) denote a labor demand schedule. L(w|r) is the factor demand
schedule, EVEN THOUGH it is generally the case that z(w,r) != 0 along
that schedule.
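The distinction the two posters keep circling can be stated in one
line. Along the schedule L(w|r) only w moves; along the factor price
frontier, r adjusts as r(w). Assuming differentiability, which the
discrete examples in this thread deliberately lack, the chain rule
gives

    dL/dw = \partial L/\partial w + (\partial L/\partial r) r'(w)

where only the first term on the right is the slope of the demand
schedule; the left-hand side is the slope of the locus of long period
equilibria, and nothing forces the two to share a sign.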
>> Rob will object that in the "long period" there is a functional
>> relationship between the elements of w.
>
>Chris doesn't seem to understand the relationship between the
>factor price frontier and profit-maximizing.

Really? How exactly did Rob draw that conclusion in response to my
remark?

>> Rob will fail, again, to
>> understand that that fact does not alter anything in the preceding
>> paragraph. If he wants to keep declaring his "challenge has not
>> been met," he must show why the problem he presents is not a
>> special case of the above.
>
>This is to disguise that Chris has not met my challenge.
>He is relying on popular non-formal understandings of "prices"
>and "inputs" to disguise that he has not shown how to formulate
>my example in his terms.

Actually, Rob, I presented a very general and very common (and formal)
statement of the problem.

> In particular, I don't know how many
>inputs he sees in my example,

It doesn't matter. The simple framework I gave is applicable for an
arbitrary number of inputs.

> whether he sees more
>than one variable denoting unskilled labor in his formulation
>(perhaps at different dates), whether he considers the rate
>of interest a price, and, if so, the price of what input.

Yet again, it doesn't matter. The rate of interest is held constant
while deriving a factor demand schedule. Yet again, it doesn't matter
if that fact places the economy in disequilibrium or violates a zero
profit condition.

>In short, Chris has not met any reasonable standard of the burden
>of proof. He has not shown how to construct long period labor
>demand curves for *this* example. His claim that I "must show
>why the problem he presents is not a special case of the
>above" is an implicit admission, as I read it, that he has
>not met the challenge.

As I've said repeatedly, I am NOT going to drag out a calculator and
work through Rob's pointlessly tedious and cumbersome presentation.
Yet again, Rob's claims are based on his faulty understanding of
elementary microeconomic theory, namely, his abject failure to
acknowledge that when many prices change, the resulting locus of
equilibria is not a labor demand curve, "long period" or otherwise.

>Chris is being silly. It would help his claim that economists
>such as himself understood my point if he would show some grasp
>of the analytical tools that I think useful for the analysis
>of the choice of technique in long period positions. I don't
>think he understands them.

Oh, he got me! Turns out junior high school algebra is far beyond me
(don't tell, 'k? I'd get fired -- lucky I've been able to bluff thus
far). Why Rob thinks that using cumbersome assumptions such as
discontinuous technologies forms an "analytical tool" which is
mysterious or difficult to understand is quite an enigma.

>It should be clear that one part of my motivation is finding
>certain analytical tools elegant.

Elegant!? Good grief, Rob, you have failed. Miserably.

>> Exactly what result that's well-known do you imagine I'm disputing,
>> Rob?
>
>I suppose Chris cannot be said to be disputing the use of the factor
>price frontier to analyze the choice of technique. He is merely
>not demonstrating any command of this tool. In fact, I honestly
>think he is still ignorant of this tool.

Damnit, Rob, when many parameters vary (along a factor price frontier)
the resulting locus of... oh, why bother?

>> I do dispute your insistence, over the span of years, that
>> your result shows that factor demand curves can slope up -- that
>> interpretation is not "well known," however, because it is dead
>> wrong.
>
> "However, as was argued in Section 3 with regard to 'perversely'
> shaped, that is, upward sloping, factor-demand functions, this
> possibility would question the validity of the entire economic
> analysis in terms of demand and supply."
> -- H. D. Kurz and N. Salvadori, _Theory of Production: A Long
> Period Analysis_, Cambridge University Press, 1995.
>
>I find Kurz and Salvadori quite credible. The above is near the
>end of a long textbook. The analysis in that textbook, as should
>be true of most textbooks, was built up over decades. Important
>economists in developing this analysis include such obscure nonentities
>as Paul Samuelson, Robert Solow, Franco Modigliani - not that
>they would necessarily agree with Kurz and Salvadori's
>conclusions.

We have discussed this quote before. Either (1) Rob is quoting out of
context or (2) Kurz and Salvadori are abusing standard terminology.
Rob's argument from authority when he thinks authority is on his side
noted (if he can find Samuelson, Solow, or Modigliani talking about
upward sloping labor demand curves, _that_ would be interesting).
>Despite Chris' insistence that this interpretation is dead wrong,
>I have yet to see him grapple much with the arguments I
>report. His demonstrated resistance to such serious intellectual
>work I find to be of most interest for psychology or the
>sociology of knowledge.

Yeah, I know Rob, I'm a dunce. Not unlike every other professional
economist who's ever posted here, and many other notable economists who
Rob has deigned to give his opinions on. Of course, it seems to me
that the "serious intellectual work" the eminent Rob Vienneau "reports"
on is mostly simplistic, archaic, and often hilariously misinterpreted.
Perhaps worthy of a solid A- and a gentle lecture from the professor on
style in an undergraduate history of thought class. But what do I
know? I'm not as clever as Rob.

I guess, given the self-aggrandizement Rob indulges in above, he needs
to be rebuked again for taking himself and this forum far, far too
seriously. This is entertainment, Rob; if you want to contribute to
the "serious" intellectual conversation of economists, a peer-reviewed
journal is the appropriate venue.

>I think he has a point that talking about upward sloping,
>factor-demand functions in this context is metaphoric. That is
>why I have moved to thinking it better to say supply and
>demand is not applicable to long period analysis.

Define, "not applicable." Economists tend to use supply and demand to
think about gross price and quantity changes, mostly in markets which
are small relative to the economy. Under such conditions, the feedback
effects Rob has been preaching about for years are vanishing relative
to own-price effects, and supply and demand can be a very useful tool.
More complex models are employed to analyze deviations from competitive
assumptions in many markets (notably, the labor market) and when
general equilibrium considerations are important (and, of course, in
the common situation when outcomes other than prices and quantities are
of interest). So, again, exactly what do you mean by "not applicable,"
Rob? And _exactly_ what are you saying that any second year economics
major isn't already aware of?

>> >Anyway, the demonstration, on one of several possible grounds, that
>> >neoliberal advocates do not have a theory to support their remaking of
>> >the world seems to be of contemporary interest.
>
>> And here Rob's political motivations are laid bare. Rob's posts
>> would be more honest and more interesting if he would simply argue about
>> his political beliefs rather than pretending to be talking about
>> microeconomic theory.
>
>But many economists think the stuff taught in "mainstream" intermediate
>microeconomics classes is ideology, incoherent as theory. I am indeed
>arguing about those incoherences.

When Rob is railing against "neoliberal advocates," he is not, of
course, talking about microeconomic theory. I would be interested if
he could name one economist who would label "the stuff" in intermediate
micro "incoherent ideology." I seem to recall learning about basic
tools and concepts such as: demand, supply, cost, equilibrium, utility,
the envelope theorem, the Slutsky equation, the core, moral hazard,
adverse selection, externalities, public goods, market structure, Nash
and subgame perfect equilibrium, rent seeking behavior, Pareto
criteria, dynamic games, a little dynamic optimization, and many others
in undergraduate microeconomics. All nonsense; the lot is neoliberal,
incoherent ideology. Right Rob?
Question: can one understand neo-Marxist, or neo-Institutional, or any
other "school" of economic thought absent any understanding of the
concepts presented in mainstream undergraduate microeconomics?

Rob also seems to think that support for neoliberal politics begins and
ends with a demonstration that (consumer surplus + producer surplus) is
at a maximum where supply and demand curves intersect. While I would
not classify my own political beliefs as consistently "neoliberal," I
will point out that Rob's charge is just, well, silly (to use his
favorite term).

### SUSUPPLY (Feb 11, 2000)

Robert brings up the obvious question (sort of):

> he has
>>...not met the challenge.

>It would help his claim that economists
>>such as himself understood my point if he would show some grasp
>>of the analytical tools that I think useful for the analysis...

>He is merely
>>not demonstrating any command of this tool. In fact, I honestly
>>think he is still ignorant of this tool.

>>I have yet to see him grapple much with the arguments I
>>report.

In other words, Robert is claiming that Chris won't answer questions.

>> His demonstrated resistance to such serious intellectual
>>work I find to be of most interest for psychology or the
>>sociology of knowledge.

We can all agree that a persistent refusal to answer straightforward
questions, all the while accusing others falsely of that charge, is the
stuff of what psychologists call "projection", can't we, Robert?

So, I forget, what was your advice to the governor about funding the
Sraffa memorial, Robert?

Patrick

### Robert Vienneau (Feb 12, 2000)

au...@jerry.ss.ucalgary.ca (Chris Auld) wrote:

> Robert Vienneau <rv...@see.sig.com> wrote:

> >au...@jerry.ss.ucalgary.ca (Chris Auld) wrote:

> >> resulting from solving the maximization problem. The factor demand
> >> curve for any of those inputs is the schedule x_i (w_i | w_{-i} ).
> >> Sum these schedules across firms to get the aggregate labor demand
> >> schedule.

> >The above is handwaving that assumes what is to be proved - that
> >the (vertically integrated) firm can be in long-run equilibrium
> >for some interesting range of p and w.

The firm is in equilibrium when it is profit maximizing:

  "Given the price system p, the jth producer chooses his production in
  his production set Yj so as to maximize his profit. The resulting
  action is called an equilibrium production of the jth producer
  relative to p." -- Debreu (1959)

> This is nonresponsive. How precisely does your problem not fit into
> the general framework given, Rob?

Chris seems to forget that he was claiming to have met the challenge in
the thread title. It is his task, if he wants to address the
challenge, to show how the problem maps into his formalism, a task he
refuses to even indicate the start of. As for his inference from his
reading comprehension problems that I am nonresponsive, see below for
one more reiteration of my point.

> Yet again, one does not impose a
> zero profit condition nor vary any other parameter than the factor
> price of interest when deriving the schedule. Let's make it even
> more general, Rob
> [ Pointless generality that still provides no guidance on ]
> [ meaning of prices and inputs in my example - deleted. ]

> Simply, in Rob's context, the demand curve for unskilled labor at time
> t is the responses to the query "how much unskilled labor would the
> firm hire at t" as the wage of unskilled labor at t varies, ALL ELSE
> EQUAL. It is permissible for the answer to be zero, unbounded, or
> not unique.
I don't know what "at time t" is doing there, considering I asked about
long period curves and Chris refuses to discuss how many inputs of
unskilled labor are in my example. If Chris is going to produce a step
function, I want a non-trivial one - e.g. with at least two steps. I
mentioned this some posts ago:

  "I'd be quite happy with a non-trivial step function as an answer to
  my challenge, if it could be answered."

> Consider again a simpler presentation of the basic ideas in Rob's
> posts. Let z(w,r)=0 denote an aggregate zero profit condition.
> Suppose it is monotone such that we can write r=r(w) in equilibrium...

Suddenly, Chris seems to be talking about homogeneous labor again,
unlike me in this thread.

> >> Rob will object that in the "long period" there is a functional
> >> relationship between the elements of w.

> >> Rob will fail, again, to
> >> understand that that fact does not alter anything in the preceding
> >> paragraph. If he wants to keep declaring his "challenge has not
> >> been met," he must show why the problem he presents is not a
> >> special case of the above.

> >This is to disguise that Chris has not met my challenge.
> >He is relying on popular non-formal understandings of "prices"
> >and "inputs" to disguise that he has not shown how to formulate
> >my example in his terms.

> Actually, Rob, I presented a very general and very common (and
> formal) statement of the problem.

Indeed it is common. However, Chris has not shown how to formulate my
example in his terms. He is relying on non-formal understandings.

> [ Non-responsiveness deleted. ]
> The rate of interest is held constant
> while deriving a factor demand schedule. Yet again, it doesn't matter
> if that fact places the economy in disequilibrium or violates a zero
> profit condition.

Consider Chris' thought experiment of varying the wage of unskilled
labor, while leaving all other prices and the interest rate constant.
This will move the firm off the factor price frontier, assuming it was
on it to begin with. The vertically integrated firm will not be profit
maximizing if it continues to manufacture both steel and corn. In
other words, if the supply curve of (the type of) labor shifted to
intersect with Chris' non-existent long period labor demand curve at
this new wage, that intersection would not be a long period
equilibrium.

The above merely reiterates my ignored point from several posts ago:

  Given Chris' understanding, he should demonstrate that the wage of
  a type of labor *can* vary in the example, leaving all other prices
  unchanged. The profit-maximizing firm should be producing at a
  positive level, then, that can be an equilibrium (when supply curves
  are at the appropriate location).

Chris, of course, substitutes insults for addressing the challenge.

There is another way of looking at my example - as a limit of an
Arrow-Debreu intertemporal equilibrium. In this case, one would find
my example has an infinite number of dated inputs of skilled and
unskilled labors. The Arrow-Debreu model is not a long run model, and
the limit point is not determined by the intersection of long run
demand and supply curves. Furthermore, the factor price frontier does
not describe an Arrow-Debreu path. But it can be the case that in a
comparison of two paths, the path converging to a point with a higher
wage of unskilled labor is also converging to the point with
profit-maximizing firms employing more labor.
My sort of example seems to have (in)stability implications for such
analyses, not that economists haven't known that there are stability
problems there anyway.

To adopt the Arrow-Debreu intertemporal equilibrium or temporary
equilibria models as the preferred framework for discussing my example
is to adopt a framework where my challenge cannot be met. This change
in the notion of equilibrium was, in fact, a common response among some
mainstream economists to this sort of challenge. Thus, the extension
by Marshall and others of supply and demand explanations to all runs
has been acknowledged to be mistaken, or at least abandoned, by many
economists. This leaves it arguable that the only coherent long period
theory is that of Ricardo and Marx. (It may also make one wonder what
some contemporary mainstream growth theorists think they are doing.)

> >In short, Chris has not met any reasonable standard of the burden
> >of proof. He has not shown how to construct long period labor
> >demand curves for *this* example. His claim that I "must show
> >why the problem he presents is not a special case of the
> >above" is an implicit admission, as I read it, that he has
> >not met the challenge.

> As I've said repeatedly, I am NOT going to drag out a calculator and
> work through Rob's pointlessly tedious and cumbersome presentation.
> [ More silliness deleted]

Chris needs a calculator to count inputs? To determine whether the
rate of interest is a price? To specify the elements of a vector?

> >Chris is being silly. It would help his claim that economists
> >such as himself understood my point if he would show some grasp
> >of the analytical tools that I think useful for the analysis
> >of the choice of technique in long period positions. I don't
> >think he understands them.

> Oh, he got me! Turns out junior high school algebra is far
> beyond me (don't tell, 'k? I'd get fired -- lucky I've been able
> to bluff thus far). Why Rob thinks that using cumbersome assumptions
> such as discontinuous technologies forms an "analytical tool" which is
> mysterious or difficult to understand is quite an enigma.

There you go again. I do indeed find linear production models elegant.
Why Chris thinks I think the analysis of the choice of technique
through the construction of the factor price frontier is mysterious or
difficult to understand is quite an enigma. Of course, I don't think
he understands the analysis.

> [ Silliness deleted. ]

> >> I do dispute your insistence, over the span of years, that
> >> your result shows that factor demand curves can slope up -- that
> >> interpretation is not "well known," however, because it is dead
> >> wrong.

> > "However, as was argued in Section 3 with regard to 'perversely'
> > shaped, that is, upward sloping, factor-demand functions, this
> > possibility would question the validity of the entire economic
> > analysis in terms of demand and supply."
> > -- H. D. Kurz and N. Salvadori, _Theory of Production: A Long
> > Period Analysis_, Cambridge University Press, 1995.

> >I find Kurz and Salvadori quite credible. The above is near the
> >end of a long textbook. The analysis in that textbook, as should
> >be true of most textbooks, was built up over decades. Important
> >economists in developing this analysis include such obscure nonentities
> >as Paul Samuelson, Robert Solow, Franco Modigliani - not that
> >they would necessarily agree with Kurz and Salvadori's
> >conclusions.
> We have discussed this quote before. Either (1) Rob is quoting out
> of context or (2) Kurz and Salvadori are abusing standard terminology.

Chris repeats assertions. It would seem kind of stupid to make either
assertion above when one hasn't read the text from which the quote is
drawn, or hasn't read with any understanding.

> Rob's argument from authority when he thinks authority is on his side
> noted (if he can find Samuelson, Solow, or Modigliani talking about
> upward sloping labor demand curves, _that_ would be interesting).

Chris was making silly statements about what is well known. It is no
logical fallacy to quote a textbook in reply. On the other hand, this
is a logical fallacy:

  "However, no one doubts that large increases in the minimum wage
  would decrease employment, and few believe that this policy is a
  good redistribution tool." -- Chris Auld

> >Despite Chris' insistence that this interpretation is dead wrong,
> >I have yet to see him grapple much with the arguments I
> >report. His demonstrated resistance to such serious intellectual
> >work I find to be of most interest for psychology or the
> >sociology of knowledge.

> [ Silliness deleted.]

> I guess, given the self-aggrandizement Rob indulges in above, he
> needs to be rebuked again for taking himself and this forum far, far
> too seriously.

Only an authoritarian would think he "needs" to do such on this forum.
Some might find Chris' posts hilarious.

> [ Irrelevancy deleted. ]

> >I think he has a point that talking about upward sloping,
> >factor-demand functions in this context is metaphoric. That is
> >why I have moved to thinking it better to say supply and
> >demand is not applicable to long period analysis.

> ...when general equilibrium considerations
> are important (and, of course, in the common situation when outcomes
> other than prices and quantities are of interest)...

But my example is not of neoclassical general equilibrium.

> And _exactly_
> what are you saying that any second year economics major isn't
> already aware of?

Well, one constructs the factor price frontier in determining the
choice of technique in long period equilibrium. Cambridge Capital
models show that factor demand curves do not exist in long run models
of prices and production. Chris will misread the latter statement as
not being about logic.

> [...]

> >But many economists think the stuff taught in "mainstream" intermediate
> >microeconomics classes is ideology, incoherent as theory. I am indeed
> >arguing about those incoherences.

> When Rob is railing against "neoliberal advocates," he is not, of course,
> talking about microeconomic theory.

Chris' complaint was that I don't do enough "railing" against
neoliberal advocates.

> I would be interested if he could
> name one economist who would label "the stuff" in intermediate micro
> "incoherent ideology." ...
> Question: can one understand neo-Marxist, or neo-Institutional,
> or any other "school" of economic thought absent any understanding of the
> concepts presented in mainstream undergraduate microeconomics?

I don't know what "neo-Marxist" means.

From some discussion list or another:

====================================================================
> michael et al
>
> i've also been following the thread with interest. i am a bit
> surprised no one has brought up the work of fred lee who i think is
> doing the best "price theory" out there--consistent with an
> institutional approach.
>
> i do not believe that neoclassical principles should be taught in
> intro or intermediate econ theory courses.
> i realize that some argue that students have to be taught this
> because otherwise they will be at a disadvantage later on since
> students who do not happen to have the "pleasure" of receiving their
> economics from institutionalists will have been well versed in
> neoclassical econ. i also used to hold that view. however, the
> majority of students will not go on to grad programs. for those who
> will, the neoclassical principles can be learned fairly quickly and
> far less painfully after learning economics. i think it best to
> offer neoclassical principles in a separate, non-economics, course
> where it is taught as an aberrant, apologetic ideology, carefully and
> dispassionately dissected in veblenian manner. altho the kansas
> school board would like to have creationism taught in science
> classes, most reasonable people agree that creationism has no place
> in a science course. it is better taught and examined in religious
> studies or elsewhere. similarly, i think neoclassicism as a belief
> system is extremely interesting and certainly has a place in the
> curriculum. but it is probably better placed within religious
> studies or deviant psychology. seriously, at denver we had a
> one-quarter upper division course set aside to teach the neoclassical
> micro and macro (what little there is of that). we encouraged all
> those who would continue studies in grad programs to take that
> course. that then freed up the intro courses for economics. i have
> no evidence that our students were dis-served by this. many went on
> to good grad programs and seem to have had above-average success at
> jumping thru the necessary hoops.
>
> i know that dan uses neoclassical principles as a means to teach
> analytical skills. but i think students would be better-served, if
> necessary, by applying higher-order math to analysis of the
> acrobatics of angels on pinheads. this way, they would never become
> confused and suppose that those math skills are useful in analysis of
> the economy.
>
> yes, this is an extreme view. but if not you, who? if not now, when?
> dump the texts; dump ideology. teach economics (in an
> interdisciplinary manner, of course).
>
> randy
========================================================================

One could also look at Tony Lawson's response to Daniel Hausman in the
current(?) issue of the journal _Economics and Philosophy_. Lawson
assumes that it is common knowledge among some of his intended audience
that many think mainstream economics intellectually bankrupt, for the
most part.

> [ Strawman deleted. ]

### Chris Auld (Feb 12, 2000)

Robert Vienneau <rv...@see.sig.com> wrote:

>> This is nonresponsive. How precisely does your problem not fit into
>> the general framework given, Rob?
>
>Chris seems to forget that he was claiming to have met the
>challenge in the thread title. It is his task, if he wants to
>address the challenge, to show how the problem maps into his
>formalism, a task he refuses to even indicate the start of.

Rob, you have profit-maximizing firms in perfectly competitive markets
(I guess, it's hard to tell from the muddled and incomplete
exposition). In any case, you at least have perfectly competitive
factor markets for the two types of labor. Let the vector of whatever
inputs the firms hire in competitive markets be x. Now, how exactly
does my general exposition not conform to the problem faced by firms in
your model?
>I don't know what "at time t" is doing there, considering I asked
>about long period curves and Chris refuses to discuss how many inputs
>of unskilled labor are in my example.

Rob has firms operating in a discrete-time economy. Inputs hired at
every point in time are then different. How many inputs does he have?
Infinite. (I suppose -- it would help if Rob wrote down the firms'
objective functions.) In any case, since there are really no dynamics
in this model, I suppose one could ask how much of one of the labor
inputs would be hired in every period given a constant set of prices
for all time. The answer is the same, I think, in this economy as if
firms were born, solved a one-shot static problem, and were destroyed
every period. So "at time t" is irrelevant in this simple economy I
guess.

>> Consider again a simpler presentation of the basic ideas in Rob's
>> posts. Let z(w,r)=0 denote an aggregate zero profit condition.
>> Suppose it is monotone such that we can write r=r(w) in equilibrium...
>
>Suddenly, Chris seems to be talking about homogeneous labor again,
>unlike me in this thread.

"... of the basic ideas in Rob's posts." Read it again and think of
"r" as the wage of unskilled labor and "w" as the price of skilled
labor if you want.

>Consider Chris' thought experiment of varying the wage of unskilled
>labor,

[ That is, deriving a demand curve for unskilled labor. Rob never
seems to know what I'm talking about when I bring up this obscure
concept. ]

> while leaving all other prices and the interest
>rate constant. This will move the firm off the factor price frontier,
>assuming it was on it to begin with. The vertically integrated firm
>will not be profit maximizing if it continues to manufacture both steel
>and corn.

No, Rob, this will move *the economy* off the factor price frontier.
Get it? As I already explained, it is *irrelevant* when deriving a
firm's factor demand schedule if moving along that schedule places
*the economy* in disequilibrium. I can still ask, "all else equal,
suppose the firm faces an unskilled wage increase of a dollar. How
will this affect its behavior?" If, of course, your firms are in
competitive factor markets (again, it's hard to tell what's going on
exactly because of the hopelessly inadequate presentation). If they
aren't, then of course factor demand curves don't exist -- see any
Econ 101 textbook for an explanation.

>in the notion of equilibrium was, in fact, a common response among
>some mainstream economists to this sort of challenge. Thus, the
>extension by Marshall and others of supply and demand explanations
>to all runs has been acknowledged to be mistaken, or at least
>abandoned, by many economists. This leaves it arguable that the only

No kidding. How many years have I been telling Rob that the models he
likes to take jabs at were largely abandoned before I was born? Why
does Rob think that criticising the way economics was done up to around
the Second World War is an attack on modern economic thought? Why does
Rob think replacing an unsatisfactory framework with a better model is
bad? Isn't that the way science is supposed to progress?

>There you go again. I do indeed find linear production models
>elegant.

Why? It makes the analysis cumbersome and opaque. Extraordinarily
inelegant.

>> We have discussed this quote before. Either (1) Rob is quoting out
>> of context or (2) Kurz and Salvadori are abusing standard terminology.
>
>Chris repeats assertions. It would seem kind of stupid to make either
>assertion above when one hasn't read the text from which the quote is
>drawn, or hasn't read with any understanding.

No, I haven't read the text (this book would not make the top 1000 on
my list of stuff I'd like to read) but my statement stands.

>> Rob's argument from authority when he thinks authority is on his side
>> noted (if he can find Samuelson, Solow, or Modigliani talking about
>> upward sloping labor demand curves, _that_ would be interesting).
>
>Chris was making silly statements about what is well known. It is
>no logical fallacy to quote a textbook in reply.

Damnit, Rob, it is NOT "well known" that factor demand curves can slope
up because factor demand curves CANNOT slope up. All of your examples
of upward-sloping relationships between an input and its price violate
the ceteris paribus *definition* of factor demand curves. The book you
quote is *wrong*.

>On the other hand, this is a logical fallacy:
>
> "However, no one doubts that
> large increases in the minimum wage would decrease employment,
> and few believe that this policy is a good redistribution tool."
> -- Chris Auld

It is? It seems to me it's a remark about beliefs and cannot be a
"logical fallacy," although it could be mistaken. Rob, can you name
some economists who think that, say, a $100 an hour minimum wage
wouldn't reduce employment?
And can you name some who think that minimum wages are good redistribution
tools? Many, including myself, are more or less ambivalent as long as the
minimum remains low, but I can't recall reading anyone who really thinks
raising the minimum is the best, or even a really good, way to redistribute.
Do you? What advantages do you think minimum wages have over, say, EITC
type programs?
>> I guess, given the self-aggrandizement Rob indulges in above, he
>> needs to be rebuked again for taking himself and this forum far, far too
>> seriously.
>
>Only an authoritarian would think he "needs" to do such on this
>forum. Some might find Chris' posts hilarious.
I hope so -- I do try to slip in some wit once in a while. Rob, on the
other hand, is dour enough to make the economics profession look like a
pack of wild, carefree optimists. Anyways, Rob, I think your egotistical
failure to sway the world through repeated spamming to the internet puts
you on the same august plane as other great thinkers such as Archimedes
Plutonium. Lighten up.
>> ...when general equilibrium considerations
>> are important (and, of course, in the common situation when outcomes
>> other than prices and quantities are of interest)...
>
>But my example is not of neoclassical general equilibrium.
So? Nice quoting job. I was explaining when supply and demand is
used in response to the charge that it "isn't applicable," ever. Do
you have a relevant comment to make in response, Rob?
>> When Rob is railing against "neoliberal advocates," he is not, of course,
>
>Chris' complaint was that I don't do enough "railing" against neoliberal
>advocates.
No, my complaint was that you talk about politics while pretending to be
talking about economic theory. I would be more interested in what you have
to say if you would explicitly talk about your opinions on various policies.
>> I would be interested if he could
>> name one economist who would label "the stuff" in intermediate micro
>> "incoherent ideology." ...
>> Question: can one understand neo-Marxist, or
>> neo-Institutional,
>> or any other "school" of economic thought absent any understanding of the
>> concepts presented in mainstream undergraduate microeconomics?
>
>I don't know what "neo-Marxist" means.
Then perhaps you should read more.
[ unattributed quote deleted ]
It isn't clear from the quote what the author thinks should be taught in
undergraduate micro, Rob, but I would still be shocked if he thought that
all of "the stuff" in most intermediate micro course is "incoherent
ideology." I do think that the program discussed does students a grave
disservice: I think a prime goal of an undergraduate program is to raise
students to a level where they can at least make a stab at understanding
the primary literature. That certainly can't happen if the basic tools
and terminology given in most intermediate micro courses are never taught.
>One could also look at Tony Lawson's response to Daniel Hausman in
>the current(?) issue of the journal _Economics and Philosophy_. Lawson
>assumes that it is common knowledge among some of his intended
>audience that many think mainstream economics intellectually bankrupt,
>for the most part.
< yawn >
Rob, why don't you just program a bot to roam around the net looking for
insulting remarks about economists? You could then have it automatically
post what it finds here, which would free up a lot of your time to toil
away on Sraffa4.ps: the Next Generation.
### SUSUPPLY
Feb 12, 2000, 3:00:00 AM
Robert tries irony:
>>Only an authoritarian would think he "needs" to do such on this
>>forum. Some might find Chris' posts hilarious.
Oh many do, trust me.
It's probably ripostes like the following we enjoy:
>>I don't know what "neo-Marxist" means.
>
>Then perhaps you should read more.
And his question about funding the Sraffa memorial, that's been keeping me
laughing for a couple months. How about you?
BTW, Robert, if it's elegance you crave I suggest renting a few Fred
Astaire-Ginger Rogers videos.
Patrick
http://search.academickids.com/encyclopedia/index.php/Euclidean_algorithm
# Euclidean algorithm
The Euclidean algorithm (also called Euclid's algorithm) is an algorithm to determine the greatest common divisor (gcd) of two integers. It is one of the oldest algorithms known, since it appeared in Euclid's Elements around 300 BCE. However, the algorithm probably was not discovered by Euclid, and it may have been known up to 200 years earlier: it was almost certainly known by Eudoxus of Cnidus (about 375 BCE), and Aristotle (about 330 BCE) hinted at it in his Topics, 158b, 29-35. The algorithm does not require factoring the two integers.
## Algorithm and implementation
Given two natural numbers a and b, check if b is zero. If yes, a is the gcd. If not, repeat the process using b and the remainder after integer division of a and b (written a modulus b below). The algorithm can be naturally expressed using tail recursion:
function gcd(a, b)
    if b = 0
        return a
    else
        return gcd(b, a modulus b)
This can be rewritten iteratively as:
function gcd(a, b)
    while b ≠ 0
        var t := b
        b := a modulus b
        a := t
    return a
For example, the gcd of 1071 and 1029 is computed by this algorithm to be 21 with the following steps:
| a    | b    | t  |
|------|------|----|
| 1071 | 1029 | 42 |
| 1029 | 42   | 21 |
| 42   | 21   | 0  |
| 21   | 0    |    |
By keeping track of the quotients occurring during the algorithm, one can also determine integers p and q with ap + bq = gcd(a, b). This is known as the extended Euclidean algorithm.
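As a rough illustration (added here, not part of the original article; the helper name ext_gcd is made up), the back-substitution can be written in a few lines of R:

ext_gcd <- function(a, b) {
  if (b == 0) return(c(a, 1, 0))          # gcd(a, 0) = a = a*1 + 0*0
  r <- ext_gcd(b, a %% b)                 # r = (g, p', q') for the pair (b, a mod b)
  c(r[1], r[3], r[2] - (a %/% b) * r[3])  # back-substitute the quotient a %/% b
}
ext_gcd(1071, 1029)                       # 21 -24 25: 1071*(-24) + 1029*25 = 21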
These algorithms can be used in any context where division with remainder is possible. This includes rings of polynomials over a field as well as the ring of Gaussian integers, and in general all Euclidean domains.
Euclid originally formulated the problem geometrically, as the problem of finding a common "measure" for two line lengths, and his algorithm proceeded by repeated subtraction of the shorter from the longer segment. This is equivalent to the following implementation, which is considerably less efficient than the method explained above:
function gcd(a, b)
    while a ≠ b
        if a > b
            a := a - b
        else
            b := b - a
    return a
## Proof of correctness
It is not difficult to prove that this algorithm is correct. Suppose a and b are the numbers whose gcd has to be determined. And suppose the remainder of the division of a by b is t. Therefore a = qb + t where q is the quotient of the division. Now any common divisor of a and b also divides t (since t can be written as t = a − qb); similarly, any common divisor of b and t will also divide a. Thus the greatest common divisor of a and b is the same as the greatest common divisor of b and t. Therefore it is enough if we continue the process with the numbers b and t. Since t is smaller in absolute value than b, we will reach t = 0 after finitely many steps.
## Running time
[Figure: plot of the running time for gcd(x, y)]
When analyzing the running time of Euclid's algorithm, it turns out that the inputs requiring the most divisions are two successive Fibonacci numbers, and the worst case requires O(n) divisions, where n is the number of digits in the input (see Big O notation). However, it must be noted that the divisions themselves are not atomic operations (if the numbers are larger than the natural size of the computer's arithmetic operations), since the size of the operands could be as large as n digits. The actual running time is therefore O(n²).
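One can see this behaviour by counting the divisions directly; the sketch below is an added illustration (count_divs is an assumed helper, not part of the article):

count_divs <- function(a, b) {
  n <- 0
  while (b != 0) {              # one division per loop iteration
    r <- a %% b; a <- b; b <- r
    n <- n + 1
  }
  n
}
count_divs(89, 55)              # consecutive Fibonacci numbers: 9 divisions
count_divs(100, 55)             # similar-sized generic inputs: 4 divisions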
The modulus-based version is, nevertheless, considerably better than Euclid's original algorithm, in which the modulus operation is effectively performed using repeated subtraction in O(2^n) steps. Consequently, the subtraction version of the algorithm requires O(n·2^n) time for n-digit numbers, or O(m log m) time for the number m.
Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. An alternative algorithm, the binary GCD algorithm, exploits the binary representation used by computers to avoid divisions and thereby increase efficiency, although it too is O(n²) asymptotically.
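A sketch of the binary GCD idea under its usual formulation (an added example, not from the article): halve even operands, which in binary is a cheap shift, and subtract when both are odd.

binary_gcd <- function(a, b) {
  if (a == 0) return(b)
  if (b == 0) return(a)
  if (!(a %% 2) && !(b %% 2)) return(2 * binary_gcd(a %/% 2, b %/% 2)) # both even
  if (!(a %% 2)) return(binary_gcd(a %/% 2, b))                        # only a even
  if (!(b %% 2)) return(binary_gcd(a, b %/% 2))                        # only b even
  if (a >= b) binary_gcd((a - b) %/% 2, b) else binary_gcd((b - a) %/% 2, a) # both odd
}
binary_gcd(1071, 1029)  # 21, as before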
## Continued fractions
The quotients that appear when the Euclidean algorithm is applied to the inputs a and b are precisely the numbers occurring in the continued fraction representation of a/b. Take for instance the example of a = 1071 and b = 1029 used above. Here is the calculation with highlighted quotients:
1071 = 1029 × 1 + 42
1029 = 42 × 24 + 21
42 = 21 × 2 + 0
From this, one can read off that
$$\frac{1071}{1029} = \mathbf{1} + \frac{1}{\mathbf{24} + \frac{1}{\mathbf{2}}}.$$
This method can even be used for real inputs a and b; if a/b is irrational, then the Euclidean algorithm won't terminate, but the computed sequence of quotients still represents the (now infinite) continued fraction representation of a/b.
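The quotient sequence is easy to extract in code; a small added sketch (helper name assumed):

cf_quotients <- function(a, b) {
  q <- integer(0)
  while (b != 0) {
    q <- c(q, a %/% b)            # record the quotient
    r <- a %% b; a <- b; b <- r   # one Euclidean step
  }
  q
}
cf_quotients(1071, 1029)          # 1 24 2, matching the highlighted quotients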
## Reference
Knuth, Donald E., The Art of Computer Programming, Volumes 1 and 2. Addison-Wesley.
https://mathoverflow.net/questions/208524/using-mellin-transform-for-a-certain-function
# Using Mellin transform for a certain function
In short, I want to use the Mellin transform to obtain the asymptotic behavior of the sequence $D_n = \frac{ [z^n] D(z)} {C_n}$ where $$D(z) = \frac 1{2z}\sum_{p \ge 1}C_p \left( \sqrt{1-4z+4z^{p+1}}-\sqrt{1-4z}\right)$$ and $C_n=\frac 1 {n+1} \binom{2n}n$ denotes the $n$-th Catalan number.
The generating function $D(z)$ describes the average size of the so-called minimal directed acyclic graph of a binary tree. It is taken from the paper Analytic variations on the common subexpression problem, and its asymptotic behavior is there proven to be $2 \sqrt{\frac{\ln 4}{\pi}}\frac{n}{\sqrt{\ln n}}$.
However, since that proof is rather complicated (a detailed proof can be found in the appendix of this paper), because the form of the generating function just screams for Mellin analysis via so-called harmonic sums, and lastly, because there is an analysis of the very similar generating function $$U(z) = \frac 1{2}\sum_{p \ge 1}\left(\sqrt{1-4z+2^{p+1}z^{p+1}}-\sqrt{1-4z}\right)$$ in this paper by F. Disanto, I believe that an analysis via Mellin is possible and insightful.
Some background. The main reference I used for Mellin transforms is the paper Mellin transforms and asymptotics: Harmonic sums. Harmonic sums are functions of the form $$G(z) = \sum_{k} \lambda_k g(\mu_k z)$$ and the aforementioned paper derives under which circumstances the Mellin transform of those functions may be "factored" into the Mellin transform of the function $g(z)$ multiplied by the so-called Dirichlet series $$\Lambda(s) = \sum_k \lambda_k \mu_k^{-s}.$$
Where I got stuck. I first carefully followed the paper by Disanto, but the derived function did not meet the requirements under which the function may be factored. This resulted in this question. However, I later noticed that I had made a mistake in the very beginning: Disanto replaces the term $\sqrt{1-4z+2^{p+1}z^{p+1}}$ by $\sqrt{1-4z+2^{-p-1}}$, and shows that the induced error is bounded. Yet in our case, if we replace $\sqrt{1-4z+4z^{p+1}}$ by $\sqrt{1-4z+4^{-p}}$, the corresponding error becomes infinite (this is caused by the Catalan series in the sum).
So I am looking for a different estimate of the term $\sqrt{1-4z+4z^{p+1}}$, possibly one that splits the $p$ away from the $z$, as I think that's the best way to later be able to factor the Dirichlet series away from the base function.
https://cstheory.stackexchange.com/questions/10196/simple-question-on-complexity-models-w-r-t-linear-algebra
# Simple question on complexity models w.r.t linear algebra
For any matrix $B$, there are matrices $X$ and $Y$ such that the product $XBY$ is a diagonal matrix. Suppose $B$ is an $n \times n$ non-singular matrix with $b$-bit integer entries and no repeated eigenvalues. What is the worst-case bit complexity of $X$ and $Y$ as a function of $b$ and $n$ (say when you fix an approximation error for the diagonal matrix as a function of $n$ and $b$)? Also what is the complexity of computing $X$ and $Y$?
• What do you mean by "the complexity of $B$"? – Shir Feb 14 '12 at 7:29
• What do you mean by "the complexity of the transformation"? – Jeffε Feb 14 '12 at 9:10
• I assume he means the complexity of computing what are $X$ and $Y$. – Shir Feb 14 '12 at 11:59
• Shir and Janoma suggest two (or three) natural answers to my question. Hence my question. @vs, could you please edit your original question to make it unambiguous? – Jeffε Feb 15 '12 at 10:14
• So do you mean the following? “For any matrix $B$, there are matrices $X$ and $Y$ such that the product $XBY$ is a diagonal matrix. Suppose $B$ is an $n\times n$ non-singular matrix with $b$-bit integer entries and no repeated eigenvalues. What is the worst-case bit complexity of $X$ and $Y$ as a function of $b$ and $n$?” – Jeffε Feb 16 '12 at 13:52
https://brilliant.org/practice/3d-coordinate-geometry-level-2-3-challenges/?subtopic=coordinate-geometry&chapter=3d-coordinate-geometry
Geometry
3D Coordinate Geometry: Level 2 Challenges
Point $P$ is some point on the surface of the sphere ${(x-1)}^{2}+{(y+2)}^{2}+{(z-3)}^{2}=1 .$ What is the shortest possible distance between $P$ and $O=(0, 0, 0)?$
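A one-line numerical check for this problem (an added sketch, not part of the problem set): the shortest distance is the distance from $O$ to the centre $(1, -2, 3)$ minus the radius $1$.

sqrt(sum(c(1, -2, 3)^2)) - 1   # sqrt(14) - 1, about 2.742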
If the two lines \begin{aligned} \frac {x-1 }{k } &=\frac { y+1 }{2 } =z,\\ \\ \\ \frac {x+2 }{-3 } & =1-y =\frac{z+2}{k} \end{aligned} are perpendicular to each other, then what is the value of $k?$
The points $A$, $B$, $P$, and $Q$ all lie on one line, with $A=(-3, 5, 8).$
Point $P$ lies in the $xy$-plane between $A$ and $B$ such that the distance from $A$ to $P$ is twice the distance from $P$ to $B$. Furthermore, $Q$ sits on the $z$-axis such that the distance from $A$ to $Q$ is twice the distance from $Q$ to $B$.
If $B=(x,y,z),$ what is the value of $x+y+z?$
An infinite column is centered along the $z$-axis. It has a square cross-section of side length 10. It is cut by the plane $4x - 7y + 4z = 25.$
What is the area of the surface cut?
If the point $Q=(a, b, c)$ is the reflection of the point $P=(-6, 2, 3)$ about the plane $3x-4y+5z-9=0,$ determine the value of $a+b+c.$
https://physics.stackexchange.com/questions/470158/hamiltonian-directly-expressed-in-q-dotq-how-to-find-what-is-p
# Hamiltonian directly expressed in $(q,\dot{q})$ : how to find what is $p$?
I am reading a book about non relativistic quantization of E.M field. But first we do classical field theory.
We directly wrote the Hamiltonian of our study, and a part of our Hamiltonian is the following (we are in Coulomb gauge).
$$H_{trans} = \frac{\epsilon_0}{2} \int d^3 k \sum_{\epsilon} \left[ \dot A_{\bot, \epsilon}^* (\vec{k},t) \dot A_{\bot, \epsilon} (\vec{k},t) + \omega^2 A_{\bot, \epsilon}^*(\vec{k},t) A_{\bot, \epsilon}(\vec{k},t) \right]$$
Where $$\epsilon$$ denotes a given polarization. Thus, we have 4 generalized coordinates: $$A_{\bot, \epsilon_1},A_{\bot, \epsilon_2},A_{\bot, \epsilon_1}^*,A_{\bot, \epsilon_2}^*$$ (one for each value of $$\epsilon$$) and four generalized velocities: $$\dot A_{\bot, \epsilon_1},\dot A_{\bot, \epsilon_2} ,\dot A_{\bot, \epsilon_1}^*, \dot A_{\bot, \epsilon_2}^*$$.
In the book, they say that this Hamiltonian looks like a sum of harmonic-oscillator Hamiltonians. Even though I agree, I would like to see it properly by finding what the momentum is, to end up with something like $$\frac{p^2}{2m}+\frac{1}{2} m \omega^2 q^2,$$ i.e. the field version of it.
Given a Lagrangian, I know how to find the momentum, but how is it done when given the Hamiltonian expressed in position/velocity?
• Which book is this? – Emilio Pisanty Apr 2 at 20:09
• @EmilioPisanty Mécanique quantique - Tome III - Claude Cohen-Tannoudji - page 399 (it is in French) – StarBucK Apr 2 at 20:12
• That is in fact not the Hamiltonian, rather the energy associated to a Lagrangian (which is a function of the generalised velocities and positions): some books still call this quantity the Hamiltonian, with abuse of terminology. The true Hamiltonian is the Legendre transform of the Lagrangian as function on the cotangent bundle of the configuration space. – gented Apr 2 at 20:43
• @gented thus it is not possible in practice, given this quantity to guess what the momentum will be ? The only way to do it is to have started from the Lagrangian ? – StarBucK Apr 6 at 17:48
• Correct, the momentum is defined only given a Lagrangian. – gented Apr 6 at 18:34
https://www.cymath.com/practice/calculus-derivative-trigonometric-functions
# Calculus: Derivative: Trigonometric Functions
1. $$\frac{d}{dx} {x}^{4}\tan{x}$$
2. $$\frac{d}{dx} 4\cos{x}-3\sin{x}$$
3. $$\frac{d}{dx} \csc{x}\cot{x}$$
4. $$\frac{d}{dx} \frac{\sin^{2}x}{\cos^{2}x}$$
5. $$\frac{d}{dx} \cot^{5}x$$
6. $$\frac{d}{dx} x\csc{x}$$
7. $$\frac{d}{dx} 3\tan{x}+\sec{x}$$
8. $$\frac{d}{dx} \frac{1}{\tan{x}}$$
9. $$\frac{d}{dx} \sin^{2}x\cos^{3}x$$
# Derivatives of Trigonometric Functions - Introduction
By now, you should have seen the derivatives of basic functions such as polynomials. We will now start exploring the derivatives of trigonometric functions. First, let us list the rules:
$$\frac{d}{dx} \sin{x}=\cos{x}$$
$$\frac{d}{dx} \cos{x}=-\sin{x}$$
$$\frac{d}{dx} \tan{x}=\sec^{2}x$$
$$\frac{d}{dx} \csc{x}=-\csc{x}\cot{x}$$
$$\frac{d}{dx} \sec{x}=\sec{x}\tan{x}$$
$$\frac{d}{dx} \cot{x}=-\csc^{2}x$$
Are you curious about how these rules were derived? Let's explore this in the next section. Note that these rules are also on our reference page on trigonometric differentiation.
# Derivatives of Trigonometric Functions - Proof
Let's see if we can prove that
$$\frac{d}{dx} \tan{x}=\sec^{2}x.$$
Start from the left-hand side:
$$\frac{d}{dx} \tan{x}$$
By the trigonometric identity $$\tan{x}=\frac{\sin{x}}{\cos{x}},$$ we have:
$$\frac{d}{dx} \tan{x}=\frac{d}{dx} \frac{\sin{x}}{\cos{x}}$$
Then, we apply the quotient rule:
$$\frac{d}{dx} \tan{x}=\frac{\cos{x}\cos{x}-\sin{x}(-\sin{x})}{\cos^{2}x}$$
Simplify:
$$\frac{d}{dx} \tan{x}=\frac{\cos^{2}x+\sin^{2}x}{\cos^{2}x}$$
Apply the trigonometric identity $$\cos^{2}x+\sin^{2}x=1$$:
$$\frac{d}{dx} \tan{x}=\frac{1}{\cos^{2}x}$$
Apply the trigonometric identity $$\frac{1}{\cos{x}}=\sec{x}$$:
$$\frac{d}{dx} \tan{x}=\sec^{2}x$$
We are done. We have shown that the left-hand side equals the right-hand side, and that the derivative of $$\tan{x}$$ is indeed $$\sec^{2}x$$.
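As a quick independent check (added here, not part of the original page), R's symbolic differentiator gives the same rule:

D(expression(tan(x)), "x")   # 1/cos(x)^2, i.e. sec^2(x)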
# What's Next
A good way to get better at finding derivatives for trigonometric functions is more practice! You can try out more practice problems at the top of this page. Once you are familiar with this topic, you can also try other practice problems. Soon, you will find all derivatives problems easy to solve.
At Cymath, we believe that sufficient practice and step-by-step guidance can help students master most differentiation and integration problems. You can try our online solver anytime, or download the Cymath homework helper app for iOS and Android today!
https://www.gamedev.net/topic/602297-i-think-all-programmers-should-know-machine-code/
## I think all programmers should know machine code...
### #1/ AndyWonHarglesis Members
Posted 18 May 2011 - 01:44 AM
I mean, with all the "unknown and weird" difficulty behind the parsing, compiling and generating of the compiler, pre-processor, other programs and steps, etc., isn't it nicer to know EXACTLY what you did, what's going on and what's what in your program?
Not to sound crazy or trolly, but actually learning machine language or Assembly is definitely more favoring to programmers in the long run. Sure,
#include <iostream>
using namespace std;

int main()
{
    cout << "Hello, boring old high-level language!";
    cin.get();
}
is definitely easier, but it's just so gloomy and dull after you realize how far you are from direct hardware access and control.
It's like ... here's an example... I want to go see a basketball game. C++ is like sitting on the 18th row about 200 feet away from the court. Assembly is like sitting on the first row only 15 feet from the center of the court. The court itself is the processor...
Understand?
I feel that more programmers should dig deeper into what they're "really doing" rather than just tire themselves out with high-level lack of control, confusion on how "APIs work", which is only 100 feet away from the game.
It's a long walk, but for what? You can't actually see the game clearly, can you? And even with all of these "tools", nothing beats sitting right in front of the players and witnessing it up close and direct.
It just, to me, seems like a big mess of disorder and tirade attempts for people to torture themselves learning these APIs, high-level languages, etc., when low-level is more right on to things and there's no memorizing/references of thousands of things that you don't even really know what they do 100%.
...I came to see a basketball game with binoculars on the last row. I can see it, but the people in the front are really witnessing the feel of it and getting more excitement, thrill and just full brain-eye accomplishment.
I'm sorry if this comes off in a bad way, it's just an opinion.
### #2/ owl Banned
Posted 18 May 2011 - 01:52 AM
I had a friend who had this special ability for cracking. He got to a point to be able to remember quite a few machine bytecodes and know what they meant.
Technically speaking, machine code is bits. LOTS of them. I think most humans lack (still) enough short-term memory to remember such long sequences. I think there was a standard limit for that but I don't remember.
I like the Walrus best.
### #3szecs Members
Posted 18 May 2011 - 01:56 AM
I know machine code. So what? I made a small scrollable text viewer/syntax highlighter program with assembly (well, that's not really machine code I admit it). It was fun. So what? I wouldn't make ""complex"" games/applications with it.
Stop trolling and start coding. You haven't coded too much yet, it's obvious. That means your opinion isn't worth JACK SHIT.
### #4FelixK15 Members
Posted 18 May 2011 - 02:02 AM
Machine code != Assembly
Learning at least a bit of Assembly = YES.
Learning machine code = useless IMHO.
If you wrote your game or application in VS you can take a look at the disassembly of your program.
The disassembly is your program in assembly, after the VS compiler did all its optimizations (maybe that's also possible with other IDEs but I'm not sure about that). If you know at least a bit of Assembly you can figure out what the compiler did optimize and -maybe- if the compiler screwed up optimization (e.g. forced inlining etc.)
Visit my blog, follow me on twitter or check out my bitbucket repositories.
### #5BitMaster Members
Posted 18 May 2011 - 02:04 AM
On the off-chance that this is actually a real opinion and not trolling like the last time:
Yes, we all go through that phase at some point. And while an understanding of machine code and how it works (as well as theoretical constructs like Turing machines and primitive recursive functions) is certainly something a programmer should be aware of, we also learn (usually the hard way) that choosing the right tool for the job is important. Machine code is seldom the right tool for any non-trivial job that does not involve work in an extremely limited or extremely performance critical system.
To stay in the metaphor: for the common project, using machine language is like watching a basketball game by analyzing the magnetic pattern of a video tape of the game by hand. It can be done, it's just neither fun nor efficient. Just very, very tedious.
### #6Tachikoma Members
Posted 18 May 2011 - 02:12 AM
I think all assembly programmers should know everything about ASIC design and electronics engineering.
I think all ASIC designers should know everything about physics.
No.
I think you should know to use the right tool and the right skill set for the right job.
Latest project: Sideways Racing on the iPad
### #7/ AndyWonHarglesis Members
Posted 18 May 2011 - 02:18 AM
I think you should know to use the right tool and the right skill set for the right job.
I can make any job complete as long as I have an OS and machine code. I can directly access hardware myself, I don't need "third-party" crap except what I already came with in my licensed-end user agreement with my computer.
I don't need any damned "tools"... Do you people understand that? Tools are not for me!
You people will never get the fulfillment of creativity... This is comparing to painting.
Painting on a canvas, to be precise.
Why paint on a canvas if you don't know the application of force to the canvas, what the canvas is made of, how you can make it and make sure you've created the perfect canvas for the job, while assuring correct depth, application of force to the canvas from a pivoted angle, direction, etc.?
BUT NO... High-level programming is too easy, yeah... NOT! It's harder...
Think about it... Go read a DirectX source code for a small racing game. 3,000-7,000 LINES OF CODE THAT'S UTTERLY, SICKLY AND PSYCHOTICALLY DIFFICULT TO EVEN UNDERSTAND OR MAINTAIN!
Compare that to: 0 and 1. Who do you think wins, besides just ease and straight-up logic?
At the tiring and sad rate people go for "these days" in programming, I'm going to have to come down to having to make my own hardware.
### #8Hodgman Moderators
Posted 18 May 2011 - 02:21 AM
It's like ... here's an example... I want to go see a basketball game. C++ is like sitting on the 18th row about 200 feet away from the court. Assembly is like sitting on the first row only 15 feet from the center of the court. The court itself is the processor...
Understand?
It's a long walk, but for what? You can't actually see the game clearly, can you? And even with all of these "tools", nothing beats sitting right in front of the players and witnessing it up close and direct.
Here's another example. I want to build shit. C++ is like sitting up on the 18th floor of an office building, drawing out awesome blueprints for a bridge that's being built over a massive fucking river, and then driving home in a Porsche that you designed. Assembly is like being one of the poor dirty cash-in-hand workers getting paid a buck fifty an hour to haul bags of cement to build a bridge, but who'll never actually get to see the magnificence of its completion because they're about to fall to their death in their unsafe work environment and never amount to anything.
Understand?
You can't actually build a bridge, can you? Even with your sandpit and little sand-bridges, nothing beats standing right there in front of that massive river-conquering fucker of a bridge that you built.
### #9BitMaster Members
Posted 18 May 2011 - 02:25 AM
You people will never get the fulfillment of creativity... This is comparing to painting.
Painting on a canvas, to be precise.
If you want the painting analogy to work, I would suggest making your own brushes, making your own canvas (neither with bought ingredient, go out and hunt some furry animals and start weaving your home-grown canvas) and grinding your own paints instead.
### #10lpcstr Members
Posted 18 May 2011 - 02:28 AM
I can directly access hardware myself, I don't need "third-party" crap except what I already came with in my licensed-end user agreement with my computer.
What? Since when does a modern operating system allow you direct access to hardware? That's handled by hardware drivers, regardless of if your program was written in C++ or x86 assembler. As far as needing "third-party crap", I don't need anything but what came with my computer to complete a task using C or C++. I use linux and GCC comes standard. Did your operating system not come with a compiler? If not, I'm surprised it came with an assembler.
I don't need any damned "tools"... Do you people understand that? Tools are not for me!
You people will never get the fulfillment of creativity... This is comparing to painting.
You are an internet troll. I have my doubts that you can write anything at all, much less a work of art in assembly.
I'm not going to quote the rest of your post because you are just rambling incoherently.
### #11Hodgman Moderators
Posted 18 May 2011 - 02:30 AM
BUT NO... High-level programming is too easy, yeah... NOT! It's harder...
Think about it... Go read a DirectX source code for a small racing game. 3,000-7,000 LINES OF CODE THAT'S UTTERLY, SICKLY AND PSYCHOTICALLY DIFFICULT TO EVEN UNDERSTAND OR MAINTAIN!
Compare that to: 0 and 1. Who do you think wins, besides just ease and straight-up logic?
Again, you're just telling us that you suck at programming. Go cry, emo kid.
If coding in machine bits is easier, you'll be able to tell me what this code does, right?
1100101111100010110010101000001011001011111000101101101010111110
1111110100110010000110100110001101010010011011110001101001101101
1111111101100100010100100001011001010001100110000100001101001001
1111010101000001001101010101000101000110010100000000000011011111
If you translated those 7,000 lines of D3D-using high-level code into plain assembly (and also replicated D3D's functionality yourself), how many lines of psychotic assembly would you end up with?
### #12/ AndyWonHarglesis Members
Posted 18 May 2011 - 02:31 AM
Here's another example. I want to build shit. C++ is like sitting up on the 18th floor of an office building, drawing out awesome blueprints for a bridge that's being built over a massive fucking river, and then driving home in a Porsche that you designed. Assembly is like being one of the poor dirty cash-in-hand workers getting paid a buck fifty an hour to haul bags of cement to build a bridge, but who'll never actually get to see the magnificence of it's completion because they're about to fall to their death in their unsafe work environment and never amount to anything.
Understand?
You can't actually build a bridge, can you? Even with your sandpit and little sand-bridges, nothing meats standing right there in front of that massive river-conquering fucker of a bridge that you built.
O.o
Someone's upset because of me just being honest! >.<
Here's the clearest example possible ... C++ is downloading an IDE to swallow up all of your RAM when compiling half a gigabyte of DirectX API files that do shit for you except hurt your head and offer little control without lots of headache and torture.
On top of that, we forgot the "size" issues... Commercial games usually take over 1GB of RAM to run on most modern OSes. Why build in high-level, memory-hogging and mentally pinching atmospheres? Exactly!
Build from the ground up only by using what IS necessary and leaving all the bulk and trash of APIs and such where they belong: in the trash.
It is a KNOWN fact that Assembly language programs usually take about 1/2 the RAM of high-level work/programs. Look it up.
If I did my lower work in Assembly, yeah, my work'd take a heck of a lot longer than yours would in your "higher languages". But break down the code and you'll see that my lower-level work is not only better organized, will have better performance regarding speed of the program, the feel of actually using building blocks rather than third-party headaches and not killing my CPU with 6GB of RAM on some crappy high-level program with tons of bulk and garbage that Assembly could've cut it down to less than half of that.
If coding in machine bits is easier, you'll be able to tell me what this code does, right? 10001010011011001010110010001100010010100110110010101010100000110111111101001011111110110110101111010100100010101100011010001001010010001000110001000110101000100110101000101100011010101000101100010100110101001000010110010100010101001001100100010101000111
Very vague and blatant. Just spewing out 0s and 1s doesn't show anything other than the fact that you're letting your anger get the best of you here. And insulting me is just pathetic... I never insulted you once. Childish.
Anyhoo, if you give me about 4 hours and some instruction sets, I could find out what that would do exactly.
But, since it's just meaningless code you spewed out, you probably don't even care, so yeah... why bother with you?
To you they're just "bits". To me they're the building blocks of everything. I use them with knowledge, not spew them with hostile emotions.
Once you learn that these "bits" are everything, you'll see that coding directly to and from them is what REALLY is efficient and beneficial in the end.
High-level does NOT and WILL NOT have the same capabilities or control that Assembly or machine language has. Deal with it.
### #13lpcstr Members
Posted 18 May 2011 - 02:36 AM
C is not a high level language. If you are as smart as you claim, then you should be able to "compile" a C program in your head, as most lines of code compile into just a few instructions each. Linus Torvalds calls C "portable assembly", which brings up one of the largest points for non-assembly languages: portability. The only thing worse than having to program an operating system, game or database in assembly would be having to write it multiple times.
### #14lpcstr Members
Posted 18 May 2011 - 02:38 AM
High-level does NOT and WILL NOT have the same capabilities or control that Assembly or machine language has. Deal with it.
Give multiple examples of common scenarios that display this "lack of control."
### #15Hodgman Moderators
Posted 18 May 2011 - 02:40 AM
O.o Someone's upset because of me just being honest! >.<
I'm not angry -- I'm trying to emphasise how fucking awesome that bridge is! It's ridiculously fucking cool! If I enjoy building shit (which you should associate with, being a painter and all), you'd know how awesome it is to complete a wonderful creation.
Seriously, how many bridges have you built? And by that I mean how many games have you finished?
On top of that, we forgot the "size" issues... Commercials game take usually over 1GB of RAM to run on most modern OS. Why build in high-level, memory hogging and mentally pincing atmospheres? Exactly!
Build from the ground up only by using what IS necessary and leaving all the bulk and trash of APIs and such where they belong: in the trash.
It is a KNOWN fact that Assembly language programs usually take 1/2 the RAM of high-level work. Look it up.
Using either C++ or assembly has no impact on the amount of memory your game ends up using. Every PS3 game has to fit into 256MB of RAM, and I assure you that they're generally written in C or C++, but never in 100% assembly.
Anyhoo, if you give me about 4 hours and some instruction sets, I could find out what that would do exactly.
If I posted the equivalent code in C++, it would take 10 seconds for you to know exactly what it does. Q.E.D. 0's and 1's are 1440 times less readable than C++ (4hrs*60mins*60secs / 10secs). Look it up.
But, since it's just meaningless ... you spewed out, you probably don't even care, so yeah... why bother with you?
This is what we're all thinking! You're spouting nonsense and ignoring reason. Why should anyone bother to listen, unless they enjoy feeding trolls?
### #16darxus Members
Posted 18 May 2011 - 02:40 AM
When I started programming around 2004, there was a cliché that a programmer should be a mathematician (probably those who were not into programming thought that it was as difficult as math). But by experience I learned that a developer has to get things working; there's simply no room for philosophy in that, you use tools to make tools.
Software development is categorized in many specific areas and there are the appropriate tools for each task. So the point is to know how to use the best method and the most suitable tool for each task and requirement.
### #17/ AndyWonHarglesis Members
Posted 18 May 2011 - 02:43 AM
High-level does NOT and WILL NOT have the same capabilities or control that Assembly or machine language has. Deal with it.
Give multiple examples of common scenarios that display this "lack of control."
Not exactly a "lack of control", but more hassle with control. HLLs like C++ and Java COULD MAYBE get as low as controlling bits, nibbles and bytes, storing and such, but it's not going to be done in the same manner as Assembly.
Since bits are really behind this "IDE you're using to hide them", it's pretty obvious that without full bit control you're not doing it right.
### #18BitMaster Members
Posted 18 May 2011 - 02:46 AM
Here's the clearest example possible ... C++ is downloading an IDE to swallow up all of your RAM when compiling half a gigabyte of DirectX API files that do shit for you except hurt your head and offer little control without lots of headache and torture.
You know, confusing C++ and an IDE for C++ is one of the prime indicators for "I like to talk but I have absolutely no clue what I am talking about". Arrogance and absence of knowledge have always been a difficult combination as your posts clearly show.
### #19lpcstr Members
Posted 18 May 2011 - 02:48 AM
Not exactly a "lack of control", but more hassle with control. HLLs like C++ and Java COULD MAYBE get as low to the point of controlling bits, nibbles and bytes, storing and such, but it's going to not be done in the same manner as Assembly.
Since bits are really behind this "IDE you're using to hide them", it's pretty obvious that without full bit control you're not doing it right.
What are you saying? Firstly, I rarely use an IDE for any language. A text editor and a command line are more than enough. Even if I did use an IDE, it wouldn't be "hiding" anything. An IDE is a fancy text editor; it doesn't hide any aspects of the language you're developing with.
Since when can you not work with bits and bytes in C or C++? It's all there, and obviously it's going to be different. That goes without saying.
### #20/ owl Banned
Posted 18 May 2011 - 02:49 AM
I like the Walrus best.
https://math.stackexchange.com/questions/1552747/how-to-prove-these-basic-concepts-about-integration
# How to prove these basic concepts about integration?
While we were introduced to integration, we were told about some basic concepts that, as we were told, could not be proved based on our level of sophistication. They are as follows:
1. $$\int_a^b \! f(x) \, \mathrm{d}x=\phi(b)-\phi(a),$$ where $\phi$ is a primitive of $f$ in $[a,b]$
2. $$\int_b^a \! f(x) \, \mathrm{d}x=-\int_a^b \! f(x) \, \mathrm{d}x$$
When I learned the Riemann Integral, I thought I would be able to prove them, as they didn't seem too out-of-this-world. So, my question is: what is the concept of a primitive according to Riemann Integration? And how can it be used to prove $(1)$? If there isn't any, then where should I look?
Also, how can I even conceptualise $(2)$? I mean, Riemann Integration is defined using Sums. How can I Sum in the opposite way? Does it even matter? And how does it become negative?
I'm just a curious high school student, familiar with the basic concepts of Riemann Integration. So, spare me if my questions are too dumb. And it'd be great if you could suggest some study material where I should look for these types of concepts. Thank you in advance.
• 1) is the fundamental theorem of calculus and 2) is usually taken by definition. For 1), look in any text-book on calculus (or search this site for the mentioned theorem). – mickep Nov 30 '15 at 6:26
• @mickep And for $(2)$? Is it just taken by definition? There must be some kind of a proof for it. – SinTan1729 Nov 30 '15 at 6:32
• (2) is in fact a consequence of (1), i.e. $\int_b^a f(x) \ dx = \phi(a) - \phi(b)$ – Siddharth Joshi Nov 30 '15 at 6:33
• @SiddharthJoshi Yes, but this is much like mechanical. I cannot accept an analytical concept that I cannot even visualise. – SinTan1729 Nov 30 '15 at 6:35
• No, it is taken by definition. The motivation is that the "usual" integration rules should work... – mickep Nov 30 '15 at 6:45
The first question is the fundamental theorem of calculus: it says that if there is a differentiable function $F$ with $\frac{dF}{dx} = f$, and $f$ is Riemann integrable on $[a,b]$, then $\int_{a}^b f(x) dx = F(b) - F(a)$. This can be proved from the definitions relatively easily, but not obviously. It will be covered in any book on elementary real analysis, a good book on calculus, or Wikipedia (NB: When I was a curious high school student, Wikipedia was a godsend for explaining things that didn't make sense or weren't explained in calc class!)
The second statement should be considered a definition. For $a < b$, we define the integral $\int_{a}^{b} f(x) dx$ to be the integral of $f$ with respect to the interval $[a,b]$ (note that the definition of the integral only really depends on the interval!). Then, for any $a,b,c$ with $a < b < c$, we have $\int_a^b f(x) dx + \int_b^c f(x) dx = \int_a^c f(x) dx$ (which is easy to see from the definitions). Now, we want to assign meaning to the symbol $\int_b^a f(x) dx$ with $a \leq b$. We can do this in a unique way such that the addition formula above holds for any $a,b,c$. First, plugging in $b = c$, the formula reads $\int_a^b f(x) dx + \int_b^b f(x) dx = \int_a^b f(x) dx$, so $\int_b^b f(x) dx = 0$. Then, if $a < b$, we can plug in $c = a$ to get $\int_a^b f(x) dx + \int_b^a f(x) dx = \int_a^a f(x) dx = 0$, so $\int_b^a f(x) dx = - \int_a^b f(x) dx$.
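A numerical illustration of the first statement (the example function is assumed here, not taken from the answer): for $f(x) = x^2$ on $[1,3]$, quadrature agrees with the difference of antiderivative values.

f <- function(x) x^2
integrate(f, 1, 3)$value   # about 8.6667, by numerical quadrature
(3^3 - 1^3) / 3            # 26/3 = 8.6667, via the primitive x^3/3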
• Thank you. This makes some sense. – SinTan1729 Nov 30 '15 at 6:57
• @Dorebell: in your first sentence, the FToC requires $f$ continuous; Riemann integrable is not enough. – Martin Argerami Nov 30 '15 at 9:35
• ah my bad! I thought that differentiability of $F$ was sufficient... – Dorebell Nov 30 '15 at 9:58
• @Dorebell: actually, you were right, my bad. I only read "$f$ integrable" and I missed the part "$F$ differentiable". Sorry! – Martin Argerami Nov 30 '15 at 10:17
• Riemann integrability of $F' (=f)$ is enough . – Tony Piccolo Nov 30 '15 at 12:15
You cannot associate any physical interpretation to $\int_b^a f(x) \ dx$
You're right, the sum would be the same irrespective of whether you sum it forward or backward. But $\int_b^a f(x) dx$ doesn't connote taking the sum in the opposite manner. If it were then $\int_b^a f(x) dx$ would have been equal to $\int_a^bf(a+b-x)dx$ which doesn't happen.
However, this is a useful concept since it allows us to write $\int_a^bf(x) dx = \int_a^cf(x) dx + \int_c^bf(x) dx$ even when $a < b < c$
• Hmm. So, we can say that this has been 'tailored' such that we get desired results. – SinTan1729 Nov 30 '15 at 6:53
• yes this is only useful in calculations, nothing else. – Siddharth Joshi Nov 30 '15 at 6:54
• I don't agree. In fact, integration is integration of a differential form on an orientated manifold (here 1-dimensional, so a line segment). If you change orientation, the sign changes, because we want integration on the closed path a → b → a to be zero. Why? It is not obvious on line segments, but it makes much more sense in higher dimensions, and there is a very good physical interpretation. For example en.wikipedia.org/wiki/Divergence_theorem – user2622016 Nov 30 '15 at 12:48
• Orientation is something different from the selection of initial and final points. For example, if you're talking about line integrals, the line integral is irrespective of the initial and final points of a curve along that path. However, only if you give the same curve with the same parameterization the opposite orientation, the line integral changes sign. More precisely, orientation is for a curve and not for a path. You can associate meaning to line integrals of vector fields although. – Siddharth Joshi Nov 30 '15 at 14:30
• Although you can associate notion of orientation of path, or surface to line integrals or surface integrals of vector fields respectively. You cannot do so for scalar fields. – Siddharth Joshi Nov 30 '15 at 14:43
$\int_a^b \! f(x) \, \mathrm{d}x=g(b)-g(a)$ is a result obtained directly from the fundamental theorem of calculus.
Say there's a function $f(x)$
Now the area under the curve from $0$ up to $x$ can be found using Riemann sums. Let $A(x) = \int_0^x f(t) dt$, where $A(x)$ is the area function that gives you the area under the curve of $f(x)$ from $0$ up to an $x$ value.
It can be easily shown that $A '(x) = f(x)$ (https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus#Geometric_meaning)
Which means $A(x)$ is an antiderivative of $f(x)$. Using this result, we can compute areas under the curve of a function easily. Depending on your choice of antiderivative, you can calculate the area from any point $a$ up to $x$ under the curve. E.g., if you want to find the area from $0$ up to $x$, you will choose an antiderivative $g(x)$ that satisfies the relation $g(0) = 0$.
Now say you want to find the area from $a$ up to $b$. To make things easier for us, we can choose any antiderivative $g(x)$ we like, evaluate $g(b)$, i.e. find the area under the curve up to $b$, and from it subtract $g(a)$, i.e. the area under the curve up to $a$, which will leave us with the area from $a$ to $b$.
The proof for the above can be found here -https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus#Proof_of_the_second_part
• Any input on how I can improve this answer will be appreciated. – User2956 Nov 30 '15 at 8:43
• Instead of your approach, which is rather a justification than a proof, the proof for the second part on the same Wikipedia page is useful, as suggested by @Dorebell. – SinTan1729 Nov 30 '15 at 9:36
• What I was trying to accomplish with this answer was to provide intuition rather than prove it. Nevertheless, I'll include the proof for the second part too. – User2956 Nov 30 '15 at 10:18
• I think we should rather treat this as 'tailored to suit our needs' just as @SiddharthJoshi suggested. – SinTan1729 Nov 30 '15 at 10:23
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/5/lesson/5.2.1/problem/5-48
5-48.
Usually, Dolores has to stock the shelves by herself and it takes her $7.2$ hours. Today Camille helped Dolores and they were able to finish the task in $2.8$ hours. How long would it have taken Camille if she were working alone?
What is Dolores' rate in tasks/hour?
The time in this situation is $2.8$ hours.
(rate)(time) = amount of task completed
$(\text{Dolores' rate})(2.8)+(\text{Camille's rate})(2.8)=1\text{ complete task}$
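In code, the hint amounts to the following arithmetic (a sketch using the problem's numbers):

dolores_rate <- 1 / 7.2                # tasks per hour
camille_rate <- 1 / 2.8 - dolores_rate # rates add when working together
1 / camille_rate                       # about 4.58 hours working alone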
https://www.jiskha.com/questions/1315874/the-radius-of-the-earths-orbit-around-the-sun-assumed-to-be-circular-is-1-50-x-108-km
# Physics
The radius of the earth's orbit around the sun (assumed to be circular) is 1.50 × 10^8 km, and the earth travels around this orbit in 365 days. (a) What is the magnitude of the orbital velocity of the earth, in m/s? (b) What is the radial acceleration of the earth toward the sun, in m/s^2? (c) Repeat parts (a) and (b) for the motion of the planet Mercury (orbit radius = 5.79 × 10^7 km, orbital period = 88.0 days).
r = 1.5 * 10^11 meters
2 pi r = 9.42 * 10^11 meters circumference
and
365d * 24h/d * 3600 s/h =3.15 * 10^7 seconds
v = 9.42*10^11 m/3.15*10^7 s = 2.99*10^4 m/s
Ac = v^2/r
etc
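Completing the "etc" (a sketch with the problem's numbers; the helper name is made up):

orbit <- function(r_km, T_days) {
  r <- r_km * 1e3                 # orbit radius in metres
  per <- T_days * 24 * 3600       # period in seconds
  v <- 2 * pi * r / per           # orbital speed, m/s
  c(v = v, a = v^2 / r)           # speed and radial acceleration, m/s^2
}
orbit(1.50e8, 365)                # Earth:   v ~ 2.99e4 m/s, a ~ 5.95e-3 m/s^2
orbit(5.79e7, 88.0)               # Mercury: v ~ 4.79e4 m/s, a ~ 3.96e-2 m/s^2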
https://www.r-bloggers.com/2019/02/le-monde-puzzle-1087/
A board-like Le Monde mathematical puzzle in the digit category:
Given a (k,m) binary matrix, what is the maximum number S of entries with only one neighbour equal to one? Solve for k=m=2,…,13, and k=6,m=8.
For instance, for k=m=2, the matrix
$\begin{matrix} 0 &0\\ 1 &1\\ \end{matrix}$
is producing the maximal number 4. I first attempted a brute-force random filling of these matrices with only a few steps of exploration and got the numbers 4, 8, 16, 34, 44, 57, … for the first cases. Since these values suggested that the maximum for a k×k matrix was (close to) the perfect square S=k², I then tried another approach based on Gibbs sampling and annealing (what else?):
gibzit=function(k,m,A=1e2,N=1e2){
temp=1 #temperature
board=sole=matrix(sample(c(0,1),(k+2)*(m+2),rep=TRUE),k+2,m+2)
board[1,]=board[k+2,]=board[,1]=board[,m+2]=0 #boundaries
maxol=counter(board,k,m) #how many one-neighbours?
for (t in 1:A){#annealing
for (r in 1:N){#basic gibbs steps
for (i in 2:(k+1))
for (j in 2:(m+1)){
prop=board
prop[i,j]=1-board[i,j]
u=runif(1)
if (log(u/(1-u))<temp*(counter(prop,k,m)-counter(board,k,m))){ #annealed logistic acceptance of the flip
board=prop
val=counter(board,k,m)
if (val>maxol){
maxol=val;sole=board}}
}}
temp=temp*2}
return(sole[-c(1,k+2),-c(1,m+2)])}
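The helper counter is not reproduced above; here is a minimal sketch consistent with its calls in gibzit, counting the inner entries of the padded (k+2)×(m+2) board with exactly one of their four neighbours equal to one (the original function may differ in detail):
counter=function(board,k,m){
s=0 #number of entries with exactly one neighbour equal to one
for (i in 2:(k+1))
for (j in 2:(m+1))
s=s+(board[i-1,j]+board[i+1,j]+board[i,j-1]+board[i,j+1]==1)
return(s)}
(gibzbbgiz, called below, is presumably a wrapper around gibzit that also reports the attained count.)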
This scheme systematically leads to the optimal solution, namely a perfect square k² when k is even and k²−1, one short of a perfect square, when k is odd. When k=6, m=8, all 48 entries can afford exactly one neighbour,
> gibzbbgiz(6,8)
[1] 48
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 1 0 0 1 1 0 0 1
[2,] 1 0 0 0 0 0 0 1
[3,] 0 0 1 0 0 1 0 0
[4,] 0 0 1 0 0 1 0 0
[5,] 1 0 0 0 0 0 0 1
[6,] 1 0 0 1 1 0 0 1
but this does not seem feasible when k=6, m=7, which only achieves 40 entries with one single neighbour.
https://zbmath.org/?q=an:0990.47024
# Asymptotic behavior of variable-coefficient Toeplitz determinants. (English) Zbl 0990.47024
Let $$\sigma$$ be a continuous function on $$[0,1]\times {\mathbb T}$$, where $$\mathbb T$$ is the unit circle. Denote by $$op_n \sigma$$ the $$(n+1)\times (n+1)$$ matrix whose $$(j,k)$$-entries are given by $$\frac{1}{2\pi} \int_0^{2\pi} \sigma(\tfrac{j}{n}, e^{i\theta})\, e^{-i(j-k)\theta}\, d\theta$$. These matrices can be thought of as variable-coefficient Toeplitz matrices or as discrete analogues of pseudodifferential operators. For functions $$\sigma$$ with sufficiently smooth logarithms, the authors establish the asymptotic formula $$\det [op_n \sigma] \sim G[\sigma]^{n+1} E[\sigma]$$ as $$n \rightarrow \infty$$. The constants $$G[\sigma]$$ and $$E[\sigma]$$ are determined explicitly.
##### MSC:
47B35 Toeplitz operators, Hankel operators, Wiener-Hopf operators
15A15 Determinants, permanents, traces, other special matrix functions
##### Keywords:
Toeplitz determinant; Szegő limit theorem
https://www.tutorialspoint.com/consider-the-grammars-ccc-c-c-dconstruct-the-parsing-table-for-lalr-1-parser
# Consider the grammar S → CC, C → cC | d. Construct the parsing table for an LALR(1) parser.
## Solution
Step 1 − Construct the LR(1) set of items. First of all, all the LR(1) sets of items have to be generated. For the augmented grammar S′ → S, S → CC, C → cC | d, the standard construction gives −
I0: S′ → ∙S, $ ; S → ∙CC, $ ; C → ∙cC, c|d ; C → ∙d, c|d
I1: S′ → S∙, $
I2: S → C∙C, $ ; C → ∙cC, $ ; C → ∙d, $
I3: C → c∙C, c|d ; C → ∙cC, c|d ; C → ∙d, c|d
I4: C → d∙, c|d
I5: S → CC∙, $
I6: C → c∙C, $ ; C → ∙cC, $ ; C → ∙d, $
I7: C → d∙, $
I8: C → cC∙, c|d
I9: C → cC∙, $
In these states, I3 and I6 can be merged because they have the same core (first component) and differ only in their lookaheads (second component).
Similarly, states I4 and I7 have the same core.
Likewise, states I8 and I9 have the same core.
So, I3 and I6 can be combined to make I36.
I4 and I7 are combined to make I47.
I8 and I9 are combined to make I89.
So, the goto transitions for the merged states are derived as follows −
∴ I3 = goto (I0, c)
But I3 and I6 were combined to make I36
∴ I36 = goto (I0, c)
∴ I4 = goto (I0, d)
But I4 and I7 were combined to make I47
∴ I47 = goto (I0, d)
∴ I6 = goto (I2, c)
∴ I36 = goto (I2, c)
∴ I7 = goto (I2, d)
∴ I47 = goto (I2, d)
∴ goto (I3, C) = I8
But I8 is now part of I89
∴ goto (I36, C) = I89
Similarly, goto(I3, d) = I4 and goto(I6, d) = I7, ∴ goto(I36, d) = I47.
Construction of LALR Parsing Table
Filling of "shift" Entries (s)
Consider goto(I0, c) = I36
∴ Action[0, c] = s36
∴ Write s36 in front of Row state 0 and column c.
Similarly, consider
goto(I2, d) = I47
∴ Action[2, d] = s47
∴ Write s47 in front of Row State 2 and column d.
Filling of "reduce" Entries (r)
Consider completed items of the form A → α ∙, a.
For example, consider state
I47 = goto(I0, d)
It contains C → d ∙, c | d | $, which is of the form A → α ∙, a.
Since C → d is production number (3) in the given grammar,
∴ Write r3 in front of Row State 47 and columns c, d, $, because c, d, $ are the lookahead symbols of the merged item C → d ∙, c | d | $.
Filling of "goto" Entries
goto entries are filled only for non-terminals. For example, consider goto(I0, S) = I1.
∴ goto[0, S] = 1
Filling of "Accept" Entry
Since S′ → S ∙, $ is in I1,
∴ write accept in front of Row State 1 and column $.
The LALR parsing table can also be obtained by merging the rows of the combined states of the CLR parsing table, i.e., merging the rows corresponding to states 3 and 6, then 4 and 7, and then 8 and 9.
The resulting LALR parsing table (reconstructed here from the merged states above) will be −
State | c    | d    | $      | S | C
0     | s36  | s47  |        | 1 | 2
1     |      |      | accept |   |
2     | s36  | s47  |        |   | 5
5     |      |      | r1     |   |
36    | s36  | s47  |        |   | 89
47    | r3   | r3   | r3     |   |
89    | r2   | r2   | r2     |   |
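As a sanity check, here is a small R sketch (not part of the original article; the state names and production numbers follow the table above) that drives this LALR(1) table on the input cdd:
action=list("0"=c(c="s36",d="s47"),
"1"=c("$"="acc"),
"2"=c(c="s36",d="s47"),
"5"=c("$"="r1"),
"36"=c(c="s36",d="s47"),
"47"=c(c="r3",d="r3","$"="r3"),
"89"=c(c="r2",d="r2","$"="r2"))
gotab=list("0"=c(S="1",C="2"),"2"=c(C="5"),"36"=c(C="89"))
prods=list("1"=c("S","2"),"2"=c("C","2"),"3"=c("C","1")) #lhs and rhs length
lalrparse=function(input){
toks=c(strsplit(input,"")[[1]],"$")
stack="0";i=1
repeat{
a=action[[tail(stack,1)]][toks[i]]
if (is.na(a)) return(FALSE) #blank table entry: error
if (a=="acc") return(TRUE) #accept
if (substr(a,1,1)=="s"){ #shift: push state, advance input
stack=c(stack,substring(a,2));i=i+1
} else { #reduce: pop rhs, push goto[top, lhs]
p=prods[[substring(a,2)]]
stack=head(stack,length(stack)-as.numeric(p[2]))
stack=c(stack,gotab[[tail(stack,1)]][p[1]])}}}
lalrparse("cdd") #TRUE: cdd parses as C(cd) C(d) via S -> CC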