https://samacheerkalvi.guide/samacheer-kalvi-12th-maths-guide-chapter-6-ex-6-2/

Tamilnadu State Board New Syllabus Samacheer Kalvi 12th Maths Guide Pdf Chapter 6 Applications of Vector Algebra Ex 6.2 Textbook Questions and Answers, Notes.
## Tamilnadu Samacheer Kalvi 12th Maths Solutions Chapter 6 Applications of Vector Algebra Ex 6.2
Question 1.
If $$\overline { a }$$ = $$\hat { i }$$ – 2$$\hat { j }$$ + 3$$\hat { k }$$, $$\overline { b }$$ = 2$$\hat { i }$$ + $$\hat { j }$$ – 2$$\hat { k }$$, $$\overline { c }$$ = 3$$\hat { i }$$ + 2$$\hat { j }$$ + $$\hat { k }$$, find $$\overline { a }$$.($$\overline { b}$$ × $$\overline { c }$$).
Solution:
$$\overline { a }$$.($$\overline { b}$$ × $$\overline { c }$$) = [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] = $$\left|\begin{array}{ccc} 1 & -2 & 3 \\ 2 & 1 & -2 \\ 3 & 2 & 1 \end{array}\right|$$
= 1(1 + 4) + 2(2 + 6) + 3(4 – 3)
= 5 + 16 + 3 = 24
Question 2.
Find the volume of the parallelepiped whose coterminous edges are represented by the vectors -6$$\hat { i }$$ + 14$$\hat { j }$$ + 10$$\hat { k }$$, 14$$\hat { i }$$ – 10$$\hat { j }$$ – 6$$\hat { k }$$ and 2$$\hat { i }$$ + 4$$\hat { j }$$ – 2$$\hat { k }$$
Solution:
Volume of the parallelepiped = [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ]
= $$\left|\begin{array}{ccc} -6 & 14 & 10 \\ 14 & -10 & -6 \\ 2 & 4 & -2 \end{array}\right|$$
= -6(20 + 24) -14(-28 + 12) + 10(56 + 20)
= -6(44) -14(-16) + 10(76)
= -264 + 224 + 760
= 720 cu. units.
Question 3.
The volume of the parallelepiped whose coterminous edges are 7$$\hat { i }$$ + λ$$\hat { j }$$ – 3$$\hat { k }$$, $$\hat { i }$$ + 2$$\hat { j }$$ – $$\hat { k }$$, -3$$\hat { i }$$ + 7$$\hat { j }$$ + 5$$\hat { k }$$ is 90 cubic units. Find the value of λ
Solution:
volume of the parallelepiped = [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ]
$$\left|\begin{array}{ccc} 7 & \lambda & -3 \\ 1 & 2 & -1 \\ -3 & 7 & 5 \end{array}\right|$$ = 90
7(10 + 7) – λ(5 – 3) – 3(7 + 6) = 90
7(17) – λ(2) – 3(13) = 90
119 – 2λ – 39 = 90
2λ = 119 – 39 – 90
2λ = -10
λ = -5
Question 4.
If $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ are three non-coplanar vectors represented by concurrent edges of a parallelepiped of volume 4 cubic units, find the value of
($$\overline { a }$$ + $$\overline { b }$$).($$\overline { b }$$ × $$\overline { c }$$) + ($$\overline { b }$$ + $$\overline { c }$$).($$\overline { c }$$ × $$\overline { a }$$) + ($$\overline { c }$$ + $$\overline { a }$$).($$\overline { a }$$ × $$\overline { b }$$).
Solution:
Given [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] = ±4.
{($$\overline { a }$$ + $$\overline { b }$$).($$\overline { b }$$ × $$\overline { c }$$) + ($$\overline { b }$$ + $$\overline { c }$$).($$\overline { c }$$ × $$\overline { a }$$) + ($$\overline { c }$$ + $$\overline { a }$$).($$\overline { a }$$ × $$\overline { b }$$)}
= $$\overline { a }$$.($$\overline { b }$$ × $$\overline { c }$$) + $$\overline { b }$$.($$\overline { b }$$ × $$\overline { c }$$) + $$\overline { b }$$.($$\overline { c }$$ × $$\overline { a }$$) + $$\overline { c }$$.($$\overline { c }$$ × $$\overline { a }$$) + $$\overline { c }$$.($$\overline { a }$$ × $$\overline { b }$$) + $$\overline { a }$$.($$\overline { a }$$ × $$\overline { b }$$)
= [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] + [ $$\overline { b }$$, $$\overline { b }$$, $$\overline { c }$$ ] + [ $$\overline { b }$$, $$\overline { c }$$, $$\overline { a }$$ ] + [ $$\overline { c }$$, $$\overline { c }$$, $$\overline { a }$$ ] + [ $$\overline { c }$$, $$\overline { a }$$, $$\overline { b }$$ ] + [ $$\overline { a }$$, $$\overline { a }$$, $$\overline { b }$$ ]
= ± 4 + 0 ± 4 + 0 ± 4 + 0 = ± 12
Question 5.
Find the altitude of a parallelepiped determined by the vectors $$\overline { a }$$ = -2$$\hat { i }$$ + 5$$\hat { j }$$ + 3$$\hat { k }$$, $$\overline { b }$$ = $$\hat { i }$$ + 3$$\hat { j }$$ – 2$$\hat { k }$$ and $$\overline { c }$$ = -3$$\hat { i }$$ + $$\hat { j }$$ + 4$$\hat { k }$$ if the base is taken as the parallelogram determined by $$\overline { b }$$ and $$\overline { c }$$.
Solution:
V = $$\overline { a }$$.($$\overline { b }$$ × $$\overline { c }$$) = [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ]
= $$\left|\begin{array}{ccc} -2 & 5 & 3 \\ 1 & 3 & -2 \\ -3 & 1 & 4 \end{array}\right|$$
= -2(12 + 2) -5(4 – 6) + 3(1 + 9)
= -2(14) -5(-2) + 3(10)
= -28 + 10 + 30 = 12
Area = |$$\overline { b }$$ × $$\overline { c }$$| = $$\left|\begin{array}{ccc} \hat{i} & \hat{j} & \hat{k} \\ 1 & 3 & -2 \\ -3 & 1 & 4 \end{array}\right|$$
= $$\hat { i }$$(12 + 2) – $$\hat { j }$$(4 – 6) + $$\hat { k }$$(1 + 9)
= 14$$\hat { i }$$ + 2$$\hat { j }$$ + 10$$\hat { k }$$
|$$\overline { b }$$ × $$\overline { c }$$| = $$\sqrt { 196+4+100 }$$ = $$\sqrt { 300 }$$
= 10√3
Altitude h = $$\frac { V }{ Area }$$ = $$\frac { 12 }{ 10√3 }$$ = $$\frac { 12×√3 }{ 10×3 }$$ = $$\frac { 2√3 }{ 5 }$$
Question 6.
Determine whether the three vectors 2$$\hat { i }$$ + 3$$\hat { j }$$ + $$\hat { k }$$, $$\hat { i }$$ – 2$$\hat { j }$$ + 2$$\hat { k }$$ and 3$$\hat { i }$$ + $$\hat { j }$$ + 3$$\hat { k }$$ are coplanar.
Solution:
If vectors are coplanar, [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] = 0
[ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] = $$\left|\begin{array}{ccc} 2 & 3 & 1 \\ 1 & -2 & 2 \\ 3 & 1 & 3 \end{array}\right|$$
= 2(-6 – 2) -3(3 – 6) + 1(1 + 6)
= 2(-8) – 3(-3) + 1(7) = -16 + 9 + 7 = 0
∴ The given vectors are coplanar
Question 7.
Let $$\overline { a }$$ = $$\hat { i }$$ + $$\hat { j }$$ + $$\hat { k }$$, $$\overline { b }$$ = $$\hat { i }$$ and $$\overline { c }$$ = c1$$\hat { i }$$ + c2$$\hat { j }$$ + c3$$\hat { k }$$. If c1 = 1 and c2 = 2, find c3 such that $$\overline { a }$$, $$\overline { b }$$ and $$\overline { c }$$ are coplanar.
Solution:
If $$\overline { a }$$, $$\overline { b }$$ and $$\overline { c }$$ are coplanar, [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] = 0
$$\left|\begin{array}{lll} 1 & 1 & 1 \\ 1 & 0 & 0 \\ c_{1} & c_{2} & c_{3} \end{array}\right|$$ = 0
1(0) – 1(c3) + 1(c2) = 0
-c3 + c2 = 0
c3 = c2 = 2
Question 8.
If $$\overline { a }$$ = $$\hat { i }$$ – $$\hat { k }$$, $$\overline { b }$$ = x$$\hat { i }$$ + $$\hat { j }$$ + (1 – x)$$\hat { k }$$, $$\overline { c }$$ = y$$\hat { i }$$ + x$$\hat { j }$$ + (1 + x – y)$$\hat { k }$$, show that [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] depends on neither x nor y.
Solution:
[ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] = $$\left|\begin{array}{ccc} 1 & 0 & -1 \\ x & 1 & 1-x \\ y & x & 1+x-y \end{array}\right|$$
= 1(1 + x – y – x + x²) – 0 – 1(x² – y)
= 1 + x – y – x + x² – x² + y
= 1
There are no x or y terms left.
∴ [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] depends on neither x nor y.
Question 9.
If the vectors a$$\hat { i }$$ + a$$\hat { j }$$ + c$$\hat { k }$$, $$\hat { i }$$ + $$\hat { k }$$ and c$$\hat { i }$$ + c$$\hat { j }$$ + b$$\hat { k }$$ are coplanar, prove that c is the geometric mean of a and b.
Solution:
Since the given vectors are coplanar, [ $$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$ ] = 0
$$\left|\begin{array}{lll} a & a & c \\ 1 & 0 & 1 \\ c & c & b \end{array}\right|$$ = 0
a(0 – c) – a(b – c) + c(c) = 0
-ac – ab + ac + c² = 0
c² – ab = 0
c² = ab
⇒ c is the geometric mean of a and b.
Question 10.
Let $$\overline { a }$$, $$\overline { b }$$ and $$\overline { c }$$ be three non-zero vectors such that $$\overline { c }$$ is a unit vector perpendicular to both $$\overline { a }$$ and $$\overline { b }$$. If the angle between $$\overline { a }$$ and $$\overline { b }$$ is $$\frac { π }{ 6 }$$, show that [$$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$]² = $$\frac { 1 }{ 4 }$$ |$$\overline { a }$$|² |$$\overline { b }$$|²
Solution:
Since $$\overline { c }$$ is a unit vector perpendicular to both $$\overline { a }$$ and $$\overline { b }$$, $$\overline { c }$$ is parallel to $$\overline { a }$$ × $$\overline { b }$$.
[$$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$] = ($$\overline { a }$$ × $$\overline { b }$$).$$\overline { c }$$ = |$$\overline { a }$$ × $$\overline { b }$$| |$$\overline { c }$$| cos θ, where θ = 0 or π
= ± |$$\overline { a }$$| |$$\overline { b }$$| sin $$\frac { π }{ 6 }$$ = ± $$\frac { 1 }{ 2 }$$ |$$\overline { a }$$| |$$\overline { b }$$|
Squaring both sides,
[$$\overline { a }$$, $$\overline { b }$$, $$\overline { c }$$]² = $$\frac { 1 }{ 4 }$$ |$$\overline { a }$$|² |$$\overline { b }$$|²
Hence proved
https://math.stackexchange.com/questions/1011388/what-matrix-a-with-dimension-n-x-n-is-always-true-given-a-a2-o/1011404

# What matrix A with dimension n x n is always true given A - A^2 = O
What matrix A with dimension n x n is always true given A - A^2 = O where O is the zero matrix with dimension n x n and I with dimension n x n is the identity matrix.
A) A is a diagonal matrix
B) A = A^2
C) A = I
D) A = O
E) I = A^2
Just got out of an exam with this question and I chose B. I thought about this question for a little bit longer after I handed in my exam and I came to the conclusion that A = I.
A - A^2 = O
A = A^2
AA^-1 = (A^-1)(A^2)
I = A
Can someone confirm if this is right?
• What is the definition of a "direction matrix"? – Omnomnomnom Nov 8 '14 at 1:54
• You can only say that $A = I$ if $A$ is non-singular. Note that $A = O$ also satisfies this equation. – Omnomnomnom Nov 8 '14 at 1:54
• Oops, it's supposed to be "diagonal matrix" – Wade Nov 8 '14 at 2:00
• Should this say "What statement about a matrix $A$ with dimension $n\times n$ is always true..."? – alex.jordan Nov 8 '14 at 4:56
• Or perhaps "What statement about a matrix $A$ with dimension $n\times n$ would always make it true that..."? – alex.jordan Nov 8 '14 at 5:00
## 3 Answers
It seems to me that the other answers have misinterpreted the question. My understanding is that the question asks:
Given that $A$ is a matrix satisfying $A - A^2 = 0$, which of the following is always true?
The only correct answer to this question is B. Certainly, if $A = A^2$, then we can say that $$A = (A - A^2) + A^2 = 0 + A^2 = A^2$$ As a counterexample to all of the other choices, consider the matrix $$A = \pmatrix{1&1\\0&0}$$
• The question may be so trivial? Unbelievable! – Przemysław Scherwentke Nov 8 '14 at 6:05
It is right (answer C, the proof needs a little modification), but D) is also true. A counterexample to E): $$A=\begin{bmatrix} 0&1\\ 1&0 \end{bmatrix}$$ And for example $2I$ is a counterexample to A).
Edit: And a corrected version of your proof: $A - A^2 = O$, $A = A^2$, and now, instead of
$AA^{-1} = (A^{-1})(A^2)$
either $AA^{-1} = A^2(A^{-1})$ or $A^{-1}A = (A^{-1})(A^2)$
and finally $I = A$.
• But is D always true? – Wade Nov 8 '14 at 2:03
• @Wade If $A$ is zero matrix, so is $A^2$. – Przemysław Scherwentke Nov 8 '14 at 2:05
• I could say the same if A is an identity matrix. So would both C and D be correct? – Wade Nov 8 '14 at 2:08
• @Wade Yes, they are. And the argument is simpler than yours. (Well, in fact, you should multiply by $A^{-1}$ either on the left or on the right). – Przemysław Scherwentke Nov 8 '14 at 2:12
I think $B$, $C$ and $D$ are all correct.
In fact, $B$ is the question rewritten in another way. And as already established, $C$ and $D$ are correct.
• This was a multiple choice question. Is there one answer that is more correct than the other? – Wade Nov 8 '14 at 2:09
• @Wade It depends on the rules of your exam. The correct answers are B), C), and D), but a subset may be partially scored. – Przemysław Scherwentke Nov 8 '14 at 2:21
https://cyclostationary.blog/category/signal-processing-toolkit/

## SPTK: Ideal Filters
Ideal filters have rectangular or unit-step-like transfer functions and so are not physical. But they permit much insight into the analysis and design of real-world linear systems.
Previous SPTK Post: Convolution Next SPTK Post: The Moving-Average Filter
We continue with our non-CSP signal-processing tool-kit series with this post on ideal filtering. Ideal filters are those filters with transfer functions that are rectangular, step-function-like, or combinations of rectangles and step functions.
## SPTK: Convolution and the Convolution Theorem
Convolution is an essential element in everyone’s signal-processing toolkit. We’ll look at it in detail in this post.
This installment of the Signal Processing Toolkit series of CSP Blog posts deals with the ubiquitous signal-processing operation known as convolution. We originally came across it in the context of linear time-invariant systems. In this post, we focus on the mechanics of computing convolutions and discuss their utility in signal processing and CSP.
Continue reading “SPTK: Convolution and the Convolution Theorem”
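As a taste of the mechanics, base R's convolve() computes the discrete convolution $y[n] = \sum_k x[k] h[n-k]$ when the second sequence is reversed and type = "open" is requested; a minimal sketch with made-up sequences:

```r
x <- c(1, 2, 3)
h <- c(1, 1)                        # a two-tap moving-sum impulse response
convolve(x, rev(h), type = "open")  # 1 3 5 3: length(x) + length(h) - 1 points
```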
## SPTK: Interconnection of Linear Systems
Real-world signal-processing systems often combine multiple kinds of linear time-invariant systems. We look here at the general kinds of connections.
Previous Post: Frequency Response Next Post: Convolution
It is often the case that linear time invariant (or for discrete-time systems, linear shift invariant) systems are connected together in various ways, so that the output of one may be the input to another, or two or more systems may share the same input. In such cases we can often find an equivalent system impulse response that takes into account all the component systems. In this post we focus on the serial and parallel connections of LTI systems in both the time and frequency domains.
Continue reading “SPTK: Interconnection of Linear Systems”
## SPTK: Frequency Response of LTI Systems
The frequency response of a filter tells you how it scales each and every input sine-wave or spectral component.
We continue our progression of Signal-Processing ToolKit posts by looking at the frequency-domain behavior of linear time-invariant (LTI) systems. In the previous post, we established that the time-domain output of an LTI system is completely determined by the input and by the response of the system to an impulse input applied at time zero. This response is called the impulse response and is typically denoted by $h(t)$.
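To see a frequency response numerically, one standard recipe is to take the DFT of the zero-padded impulse response; a minimal sketch in R, assuming a 5-tap moving-average filter:

```r
h <- rep(1/5, 5)                 # impulse response of a 5-point moving average
H <- fft(c(h, rep(0, 59)))       # zero-pad to 64 samples, then take the DFT
f <- (0:63) / 64                 # normalized frequency of each DFT bin
plot(f, abs(H), type = "l",
     xlab = "normalized frequency", ylab = "|H(f)|")
```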
## SPTK: Linear Time-Invariant Systems
LTI systems, or filters, are everywhere in signal processing. They allow us to adjust the amplitudes and phases of spectral components of the input.
In this Signal Processing Toolkit post, we’ll take a first look at arguably the most important class of system models: linear time-invariant (LTI) systems.
What do signal processors and engineers mean by system? Most generally, a system is a rule or mapping that associates one or more input signals to one or more output signals. As we did with signals, we discuss here various useful dichotomies that break up the set of all systems into different subsets with important properties–important to mathematical analysis as well as to design and implementation. Then we’ll look at time-domain input/output relationships for linear systems. In a future post we’ll look at the properties of linear systems in the frequency domain.
## SPTK: The Fourier Series
A crucial tool for developing the temporal parameters of CSP.
This installment of the Signal Processing Toolkit shows how the Fourier series arises from a consideration of representing arbitrary signals as vectors in a signal space. We also provide several examples of Fourier series calculations, interpret the Fourier series, and discuss its relevance to cyclostationary signal processing.
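A quick numerical illustration (a sketch using the classical odd-harmonic expansion of a unit square wave, $x(t) \approx \frac{4}{\pi}\sum_{k} \frac{\sin(2\pi(2k-1)t)}{2k-1}$): summing a finite number of terms shows both the convergence and the Gibbs ripple at the jumps.

```r
t <- seq(0, 2, length.out = 2000)
squarePartialSum <- function(t, K) {       # partial sum with K odd harmonics
  s <- 0
  for (k in seq_len(K))
    s <- s + sin(2 * pi * (2 * k - 1) * t) / (2 * k - 1)
  (4 / pi) * s
}
plot(t, squarePartialSum(t, 25), type = "l")   # note the overshoot at the jumps
lines(t, squarePartialSum(t, 3), col = "red")  # a cruder 3-term approximation
```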
## SPTK: Signal Representations
A signal can be written down in many ways. Some of them are more useful than others and can lead to great insights.
In this Signal Processing ToolKit post, we’ll look at the idea of signal representations. This is a branch of signal-processing mathematics that expresses one signal in terms of one or more signals drawn from a special set, such as the set of all sine waves, the set of harmonically related sine waves, a set of wavelets, a set of piecewise constant waveforms, etc.
Signal representations are a key component of understanding stationary-signal processing tools such as convolution and Fourier series and transforms. Since Fourier series and transforms are an integral part of CSP, signal representations are important for all our discussions at the CSP Blog.
## Signal Processing Toolkit: Signals
Introducing the SPTK on the CSP Blog. Basic signal-processing tools with discussions of their connections to and uses in CSP.
Next SPTK Post: Signal Representations
This is the inaugural post of a new series of posts I’m calling the Signal Processing Toolkit (SPTK). The SPTK posts will cover relatively simple topics in signal processing that are useful in the practice of cyclostationary signal processing. So, they are not CSP posts, but CSP practitioners need to know this material to be successful in CSP. The CSP Blog is branching out! (But don’t worry, there are more CSP posts coming too.)
## Can a Machine Learn a Power Spectrum Estimator?
Learning machine learning for radio-frequency signal-processing problems, continued.
I continue with my foray into machine learning (ML) by considering whether we can use widely available ML tools to create a machine that can output accurate power spectrum estimates. Previously we considered the perhaps simpler problem of learning the Fourier transform. See here and here.
Along the way I’ll expose my ignorance of the intricacies of machine learning and my apparent inability to find the correct hyperparameter settings for any problem I look at. But, that’s where you come in, dear reader. Let me know what to do!
https://www.physicsforums.com/threads/what-is-this-unit-of-force.826112/

# What is this unit of force?
1. Aug 4, 2015
### Richie9384
I'm reading a paper on a simulation of graphene and carbon nanotube growth on SiC, in the paper they give a few computational details about the structural optimization of the SiC model using a conjugate gradient method and they state the maximal force component as
$$1 \times 10^{-4}\ \mathrm{Ha}/a_0$$
What is this unit? I'm assuming Ha per angstrom...but Ha seem to be a unit of area...I need to convert to eV/Angstrom
2. Aug 4, 2015
### e.bar.goum
I would have guessed that
Ha = Hartree https://en.wikipedia.org/wiki/Hartree
$a_0$ = Bohr radius.
And that gives you energy per unit length, which is what you need.
(ETA: If it were lowercase "ha" would be "hectare" -- if that's what you were thinking. Angstrom is written like Å not $a_0$)
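The conversion itself is one line in any language; for example in R, with the usual CODATA values (quoted here to a few decimal places):

```r
Ha_in_eV <- 27.211386      # 1 hartree in electron-volts
a0_in_A  <- 0.52917721     # 1 Bohr radius in angstroms
1e-4 * Ha_in_eV / a0_in_A  # ~5.14e-3 eV per angstrom
```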
3. Aug 4, 2015
### MrAnchovy
When I bought carpet for my house it came on a roll that covered 5 × 10^-14 hectares per Ångstrom
https://ask.libreoffice.org/en/question/299694/page-numbering-out-of-control/
# Page Numbering out of control
OS: MacOS 10.13.6 Version: LO Write V.6.4.6.2 File format: .odt
This is my plan: Front material will have no page number. First chapter begins with page 1 and all subsequent pages will be numbered in sequence all the way to 312. When a chapter ends on a left page, a Blank page style fills the space. The left side page opposite the first page of each chapter will be a blank page where I add images and notes about the next chapter. Page numbers will be in Footer right or Footer left paragraph style on all numbered pages.
This is what I have: Chapter 1, First Page style, the page is numbered 1. Remainder of the chapter, Default style, all pages are numbered in sequence until the page break. Then the left side page opposite the first page of Chapter 2 is numbered 0, and the first page of Chapter 2 is numbered 1. The first page of Chapter 3 is numbered 1; the first page of Chapter 4 is numbered 0. Throughout the 21 chapters, numbering is chaotic.
I have read the Writer Guide 6.0 about page numbering at least 6 times. Rewrote the section to make the directions clearer and easier to read. Possibly I'm doing page breaks wrong. I can't find another way. And, there must be something else wrong as well.
Thanks floris for your response. I agree that I seem to have reset the page numbering to start over at each new chapter. That is how I started numbering the pages to begin on the first page of chapter 1. I have copied the book to a separate file and deleted the content of all pages after chapter 4. The pages remain, blank but numbered. The file is too large to attach (4.3 MB) so I will need to remove the blank pages to make it small enough. I don't know how to do that.
Yes, I always begin a chapter with a pic page (my name for it) and it is always a left page. Or, I always end a chapter with a pic page, which is always a left page. So I break the continuous loop of Default style pages to insert a pic page that is always a left page and is followed by a first page style which is always a right page. I have never heard or read or experienced that if I break the loop to go to a left page, that an extra page will be inserted automatically. I will review the book to see if I can find what you mean by direct formatting. I did not use Heading 1 as the paragraph style for the chapter number. I think that may be a source of my problem and I suspect (don't know yet) that not using Heading 1 paragraph style will interfere with making the Table of Contents. So I'll get on that ...
It seems that you reset the page number when you really want the page numbering to be continuous. It might help if you attach a small document with random text but with the page style and paragraph styles intact, so that we can see where you went wrong. This is, after all, one of the tougher parts of word processing. Edit your question (button below the text area), don't add an answer. Answers are reserved for real solutions, not for discussion. Use comments for that.
(2021-03-20 20:40:21 +0200)
If you always start a chapter on a right page, use the Right page style for the first page of a chapter instead of First page. It will automatically insert a blank page when necessary. Don't reset page numbering to 1 at each chapter, that's confusing for the reader.
Check out http://forum.openoffice.org/en/forum/... for more help with handling page styles. Seems to be a tough read, be warned.
(2021-03-23 18:36:43 +0200)
It is a matter of mastering page styles. The unusual fact in your scheme is the images and notes at left of the first chapter page. IMHO, this requires a more sophisticated page style sequence than the one you describe, i.e. Note/image followed by ChapterFirst followed by Default. There should be a page break to left at the beginning of the Note/Image page, and a page break to ChapterFirst (forced to right) in the Heading 1 paragraph style (used for chapter headings).
Take special care for page number reset. It should be brought by direct formatting in such a case, so that it doesn't apply to all chapters.
(2021-03-23 18:45:13 +0200)
Make a new page style for the opening pages. You may have to modify margins if you modified the margins for the default page style; you don't need to modify anything else. Apply it to the first page of the document. It will take effect on all pages of the document, but don't worry. If you want the left page opposite to the first page of a chapter to be editable, you will need to use a separate paragraph style that automatically inserts a page break and sets the page style to Left page. You won't have to modify the Left page style a lot, unless you have custom margins and the like. You can use Heading 5 if you don't use that level and modify it accordingly. Then at the bottom of the end of a chapter, just press Ctrl+5 to set that paragraph to Heading 5 (no need to enter any text) and it will force your blank but editable left page.
Here comes the dirty part, the only act of direct formatting that is permitted here. Scroll to the start of chapter 1, put the cursor in the heading (first line of the page, that's essential), then Format - Paragraph, Text Flow tab, tick Insert, With page style, (select Right page there just to be sure), and Page number, set it to 1. Don't do that anywhere else. See attached example for reference.
C:\fakepath\mixed page styles.odt
Since the page for images and notes is rather special, I'd use a dedicated page style, because using Left Page like in a chapter will result in a header/footer identical to those of the previous chapter (because of Heading 5/9 being used). This page style can't have the same header/footer as the rest of the chapter because there is no way to capture the future Heading 1. In principle this does not matter because the page is opposite to the beginning of the chapter.
Also I don't recommend using a Heading n to switch to the image/note pages because it has hidden (but predictable) effects which could surprise a newbie.
(2021-03-23 19:50:13 +0200)
@ajlittoz: I found that Ctrl+9 doesn't link Heading 9 (it stops at level 5), so I opted for Heading 5 instead; I should have added that. You can in fact use any paragraph style, but I opted for a heading because of the predefined key binding. You don't even need a paragraph style; inserting a manual break with a page style change would work as well, but it would be a bit more work. The nice thing about the separate page style is that you can suppress page numbering without any effort.
I just realized that the Right page page style should have Next page set to Default.
(2021-03-23 20:15:28 +0200)
Updated attachment.
(2021-03-23 20:20:01 +0200)
Since I would like your comments, I'm placing my tentative solution in the comments section. There were two comments that were very helpful. One was the paragraph beginning, "Here comes the dirty part…". The other was "If all of this seems very abstract, just start a new document, enter enough dummy text to fill a few pages…". The file attached is the new document. It only took 4 weeks. I filled each page with the formatting decisions so I can follow this path with my real book. It is a book within a book, so I will have to do this twice. C:\fakepath\PRACTICE BOOK.odt
(2021-04-17 01:45:27 +0200)
Is the "Practice Book" representative of the real book, style-wise? I notice several styling errors: Heading 3 used as book title (incorrect outline hierarchy), no Heading n for chapters, headings split into several paragraphs, inconsistent page style use (First Page as cover and chapter initial page), text in Default Paragraph Style (instead of Text Body and dedicated styles) with direct formatting (instead of character styles).
As there are many issues to address progressively, I think it is rather a problem of Writer style understanding than a question of general interest. Contact me on ajlittoz (at) users (dot) sourceforge (dot) net for private advice..
(2021-04-17 08:05:37 +0200)
https://testbook.com/question-answer/a-bullet-fired-from-the-rifle-has-_____--60d02341f2603237930b6b06

A bullet fired from the rifle has _____.
This question was previously asked in
Official Soldier GD Paper : [BEG Centre Roorkee (Amb)] - 28 March 2021
1. kinetic energy
2. none of these
3. potential energy
4. both kinetic and potential energy
Option 1 : kinetic energy
Detailed Solution
The correct answer is option 1) i.e. Kinetic energy.
CONCEPT:
• Kinetic energy: The energy possessed by a body by virtue of its motion is called the kinetic energy of the body.
• A body that possesses kinetic energy can do work on some other body.
The kinetic energy of a body of mass "m" moving with a velocity "v" is given as:
$$K.E. = \frac{1}{2}mv^{2}$$
EXPLANATION:
• A bullet fired from a gun possesses very high kinetic energy because of its high velocity.
• This kinetic energy of the bullet does work on the object that it hits or strikes.
• Therefore, the bullet pierces its target because of the high kinetic energy possessed by virtue of its motion.
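• For a sense of scale (illustrative numbers, not part of the original question): a 10 g bullet travelling at 500 m/s carries
$$K.E. = \frac{1}{2} \times 0.01 \times (500)^{2} = 1250 \text{ J}$$
all of which is available to do work on the target.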
http://mathhelpforum.com/advanced-statistics/122855-probability-space-print.html

# probability space
• Jan 7th 2010, 09:27 PM
jimmianlin
probability space
I was wondering if anyone can tell me what it means to have two random variables in the same probability space. The term "probability space" is what is throwing me off in trying to answer a problem.
Thanks in advance.
• Jan 8th 2010, 06:56 AM
novice
Quote:
Originally Posted by jimmianlin
I was wondering if anyone can tell me what it means to have two random variables in the same probability space. The term "probability space" is what is throwing me off in trying to answer a problem.
Thanks in advance.
A probability space is a triple $(\Omega, F, P)$: a sample space, a σ-algebra of events, and a probability measure.
Reread your book 100 times over or read wikipedia:
Probability space - Wikipedia, the free encyclopedia
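Concretely, two random variables are defined on the same probability space when both are (measurable) functions of the same outcome $\omega \in \Omega$; that shared $\Omega$ is what makes joint probabilities such as $P(X \in A, Y \in B)$ meaningful. A small simulation sketch in R, taking $\Omega$ to be the outcomes of one die roll:

```r
omega <- sample(1:6, 1e4, replace = TRUE)  # draws from Omega = {1, ..., 6}
X <- omega                                 # X(omega): the face value
Y <- as.integer(omega %% 2 == 0)           # Y(omega): 1 if the roll is even
mean(X > 3 & Y == 1)                       # estimates P(X > 3, Y = 1) = 1/3
```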
https://chemistry.stackexchange.com/questions/44833/how-can-i-understand-the-lewis-acid-base-interaction-of-ironiii-and-water

# How can I understand the Lewis acid-base interaction of iron(III) and water?
The interaction between $\ce{H+}$ and $\ce{NH3}$ as respectively a Lewis acid and a Lewis base is clear for me. $\ce{NH3}$ has a lone electron pair and $\ce{H+}$ has no electrons, or, saying politely, an incomplete duet, and can use nitrogen's lone pair to complete its duet, and a dative (coordinate) bond is formed in the process.
However when I think about $\ce{[Fe(H2O)6]^3+}$, I don't see where these six lone pairs of electrons (which are coming from water molecules) can go in the iron ion. The electron configuration of $\ce{Fe^3+}$ is: $\mathrm{1s^2\ 2s^2\ 2p^6\ 3s^2\ 3p^6\ 3d^5}$. It has five 3d electrons, and all (namely, 2) of its 4s electrons are gone. All five 3d electrons have the same spin, and they occupy five d orbitals. OK, I do remember that 3d orbitals are funny and in the transition metals 3d orbitals get divided into low and high energy level 3d orbitals (hence the color of transitional metal solutions).
But even with this, how do six water molecules position their lone pairs there? We have space for one pair of electrons (where 4s electron left) and then space for 5 more electrons, so in total, for 7 electrons. 6 molecules of water would give us at least six pairs of electrons, i.e. 12 electrons. So we are 5 places short to place all these electrons.
So the question is: how are they placed?
• you're right that 3d can't be filled, now we need space for 12 electrons {all these are coordinate bonds and each bond completely fills one orbital, i.e. 2 electrons per donor} so, 4s complete, 4p complete and 2 of 4d's orbitals are used, hence the hybridisation sp3d2; if it were NH3, it would force pairing {energy needed to pair after filling the 3rd electron is less than the energy gap created by a strong-field ligand between [dxy,dxz,dyz] and [dx2-y2,dz2] (pretty much everything stronger than H2O for the 3d series and all ligands for 4d and 5d)}; Fe would have 2 empty 3d ones, hence d2sp3 is the hybridisation. – Mrigank Feb 6 '16 at 19:44
• should i elaborate to an answer? – Mrigank Feb 6 '16 at 19:54
• Thank you very much, ELiT! It makes sense for me, at least partially. Two questions: (1) where do I read about sp3d2 hybridization? do you know where I can find picture of such? (2) Do you know where I can find picture of Fe[H2O]6 +3 with this coordinate bonds graphically examplained? Thank you again! – Sleepy Hollow Feb 6 '16 at 20:54
•  here you go, posted an answer. – Mrigank Feb 6 '16 at 23:18
So, effectively, the question is: How are electrons from the covalent bond between ligand and central metal atom or ion placed with regard to the electron configuration?
$\ce{Fe^{3+}}$ has the condensed electron configuration of [Ar]$3d^5$. When the ligands approach the iron ion along the X, Y, and Z axes, the ion forms six bonds. It would be incorrect to state that the remaining 3d electrons are involved in the bonding with the ligands. Instead, the iron ion uses 6 orbitals from the 4s, 4p, and 4d subshells to accept the lone pairs of electrons from the ligands. However, before they are used, the orbitals are hybridized in order to create six orbitals of equal energy.
If you want to dig deeper into the subject, I highly recommend the lecture notes from this webpage here.
• Thank you, Mattias! I read through ppt you recommended, it was very useful. However, there is something I am confused about. On page 7, bottom slide, it says that Crystal Field Theory "Assumes ionic bonding between the metal and the ligand instead of covalent bonding". I started to address this topic from point of view of interaction between Lewis acid and Lewis base, which is, as far as I know, implemented via coordinate covalent bond. Point of view of ppt you recommended sort of removes Fe[H2O6]+3 from the area of discussion about interaction between Lewis acid and Lewis base. – Sleepy Hollow Feb 6 '16 at 21:27
• It is curious that while Crystal Field Theory (CFT) describes the interactions between the metal ion and the ligand as electrostatic, the Ligand Field Theory (LFT) describes the bonding between the two parts as covalent (forming a dative bond). I have been researching a bit, and the Wikipedia article for [LFT] (en.wikipedia.org/wiki/Ligand_field_theory) states that LFT is used to describe octahedral complexes, such as the one in your example, while other complexes can be described with reference to the CFT. – Mattias Feb 6 '16 at 23:06
Coordination chemistry is that point in time where the simple main-group octet rule starts to lose its practicality. (In reality, not even main-group chemistry is as easy as it is often made in introductory courses, but the fact that d-orbitals of the same period are energetically too far removed to actively take part in bonding and that core orbitals are usually fully populated make things easier.)
The general principle is that electrons are represented by wavefunctions called orbitals that have an intrinsic energy value. These orbitals — being waves — can be mixed in an additive or subtractive way much like you can add or subtract sine waves to each other. When mixing orbitals, you always have one method which creates constructive interference and one which creates destructive interference — the constructive interference will always have a lower energy than the original and the destructive one will always be higher. The caveat is that the energy gain from constructive interference will always be less than the energy lost from destructive interference, so that mixing is only overall favourable if one of the resulting orbitals ends up empty (or filled by only one electron).
In the case of Lewis acids and bases, one side (the Lewis base) will always contain a pair of electrons in an orbital of relatively high energy (for filled orbitals) and the other side (the Lewis acid) will usually contain no electrons in an orbital of relatively low energy (for unfilled orbitals). These two can mix favourably: Lowering the populated ligand’s orbital and raising the unpopulated Lewis acid’s orbital creates an overall stabilisation of the system. Typically, for an octahedral complex like $\ce{[Fe(H2O)6]^3+}$, the overall picture will look something like this:
Figure 1: Orbital scheme of a basic octahedral complex. Image originally taken from Prof. Klüfers’ internet scriptum to his coordination chemistry course.
On the right-hand side in figure 1, you can see the six donating ligand orbitals that are each populated with an electron pair. The symbols underneath denote the orbitals’ symmetry labels. On the left-hand side, you can see a central metal. The lowest-lying set of orbitals corresponds to 3d, the next to 4s and the final to 4p. You can see how mixing with these unoccupied orbitals gives the ligand orbitals an overall stabilisation. It is this that makes $\ce{[Fe(H2O)6]^3+}$ more stable than a ‘naked’ $\ce{Fe^3+}$ ion. The energy gained is largest for the octahedral geometry shown which is why exactly this arrangement is adopted.
There are overarching rules, too. For the broad area of metal-carbonyl complexes, one often finds arrangements that correspond to an electron count of 18 overall — and a stable number 18 is also true for many other complexes. 18, of course, corresponds to 10 d-electrons, 2 s-electrons and 6 p-electrons and thus a fully-populated ‘shell’. (Note that the d-electrons belong to the lower shell, but energetically they are all close together; much closer than the d-subshell of the same shell.) Therefore, $\ce{[Fe(CO)5]}$, pentacarbonyliron(0) is stable, iron having eight electrons and each carbonyl donating two more. Cobalt would need nine in addition to the nine it has in its ground state, so it assumes a dimeric structure of $\ce{[Co2(CO)8]}$. Nickel is satisfied with eight additional electrons, so $\ce{[Ni(CO)4]}$. However, you should never just look at the overarching rule and thereby decide whether a complex is more stable than another one or not.
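The bookkeeping behind those counts is simple enough to script; a sketch (the inputs are the electron numbers quoted above, with one extra electron per metal from the metal–metal bond in the cobalt dimer):

```r
count18e <- function(metalElectrons, nCO, mmBonds = 0)
  metalElectrons + 2 * nCO + mmBonds   # 2 electrons donated per carbonyl
count18e(8, 5)                # Fe(CO)5: 18
count18e(10, 4)               # Ni(CO)4: 18
count18e(9, 4, mmBonds = 1)   # each Co in Co2(CO)8: 18
```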
I'm gonna give the basic results of Crystal Field theory and the related VBT, and I don't think the generalizations made about shape in that ppt are good (like Ni+4, Co+3, etc.).
Assumptions
1.Electrostatic interaction only between ligand's dipole or charge and positive charge of central metal ion
2.Ligands are point charges
[Figure: splitting of the d orbitals in an octahedral ligand field (source: pgcc.edu)]
Picture an octahedron with the x, y, z axes passing through its vertices and the central metal at its centre. The ligands approach along these axes and raise the energy of the orbitals that lie along them (dx2-y2 and dz2) due to electronic repulsion.
So we have three degenerate {equal energy} "d" orbitals at one level and two at a higher level. Now FILL the orbitals {from the start if you want, but just the d subshell is fine} by the Pauli and Hund rules. When the lower level holds 3 unpaired electrons, the decision point arrives: if the energy gap between the levels {which increases with the field strength of the ligand; look up the spectrochemical series for strength [chelating ligands have extra stability and need to be treated carefully, their strength is more than the order specifies, as that order doesn't take the entropy increase of ring formation into consideration [if we made 2 ammonias into one ethylenediamine, the reaction's products are more numerous, so entropy increases and the free energy becomes more negative]]} is more than the energy needed to piss off Hund (i.e. to pair the electrons), then pairing will start until the 3 lower orbitals hold 6 electrons; only then does the next batch start filling like you would normally do. When it's done, count the lowest-energy orbitals left with no electrons: six of them hybridise to 6 equal-energy orbitals and, as my comment said, each coordinate bond fills one orbital, so we needed 6 orbitals.
Things are a little more complicated for tetrahedral and square planar, with the energy gaps not really as simple as described above (to this day I can't convince myself about dz2 in square planar), so just google it and learn it, and try to derive it by the same method as before.
When the d is written before the s and p, it means the inner d orbitals are used (e.g. 3d for Fe+3); when it is written after, it means the next shell's d orbitals are used (4d for Fe+3).
1. Always start with the coordination number, i.e. the number of ligands [anyone would want to make an inner-orbital complex, as it gives shorter and better bonds]
When it's
4 - Tetrahedral {sp3} [only if we don't get an empty d orbital, even by hook or crook] or Square Planar {dsp2}
5 (rare) - Trigonal Bipyramidal {sp3d} and {dsp3}
6 - Octahedral {sp3d2} or {d2sp3}, by the same logic, the latter being preferred if possible
Other things include the delocalized (resonance) coordinate bond, where it will be specified over how many atoms the donated electron cloud is delocalized, and colour: the wavelength of the photon accepted and released can be correlated with the energy gap between HOMO and LUMO; the electron gets excited to the next lowest-energy orbital (the LUMO) and then de-excites, and since the photon's energy is E = hν, this can be used to measure the crystal field splitting energy, which is the energy gap we are discussing. The magnetic moment is (n(n+2))^1/2 Bohr magneton {it's a unit}, where n is the number of unpaired electrons in the configuration. Also, that energy gap increases rapidly with oxidation number and charge density {size becomes smaller as we move down the d block due to the lanthanoid/actinoid contraction}, so pretty much any ligand is strong for 4d and 5d metals; for 3d, anything after and including NH3 in the series is strong, and anything before and including H2O is weak, except for Co+3. By strong and weak I mean the gap is, or is not, big enough to cause pairing.
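The spin-only moment formula is easy to tabulate; a sketch in R, where n is the number of unpaired electrons:

```r
n  <- 0:5
mu <- sqrt(n * (n + 2))   # spin-only magnetic moment in Bohr magnetons
data.frame(unpaired = n, momentBM = round(mu, 2))  # n = 5 gives 5.92 (high-spin Fe3+)
```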
Extra :
1. If there is one unpaired electron and all ligands are strong, then transference will take place if an inner-orbital complex would thereby be formed {meaning, the electron jumps to a higher level for the good of the team, and the emptied orbital can take part in bonding}
2. If there are 2 unpaired electrons and, again, all ligands are strong, they will be forcefully paired if dsp2, i.e. CN = 4, applies {meaning dsp2 would be formed if they got paired}
3. For any metal with oxidation number ≥ 4, all ligands are strong (like Ni+4); other factors like the number of d electrons, proton number, etc. also influence it, but basically it increases with the positive charge on the central metal
From here, ligand field theory starts.
https://bioconductor.org/packages/release/bioc/vignettes/receptLoss/inst/doc/receptLoss.html

receptLoss is an R package designed to identify novel nuclear hormone receptors (NHRs) whose expression levels in cancers could serve as biomarkers for patient survival.
By utilizing both expression data from both tumor and normal tissue, receptLoss provides biological context to the process of tumor subclassification that is lacking in existing methods that rely solely on expression patterns in tumor tissue.
receptLoss is complementary to oncomix. Whereas oncomix detects genes that gain expression in subsets of tumors relative to normal tissue, receptLoss detects genes that lose expression in subsets of tumors relative to normal tissue.
## Installation
```r
## Install the development version from GitHub
devtools::install_github("dpique/receptLoss",
    build_opts=c("--no-resave-data", "--no-manual"),
    build_vignettes=TRUE)
```
## Usage
receptLoss consists of 2 main functions:
• receptLoss() takes in 2 matrices of gene expression data, one from tumor and one from adjacent normal tissue. The output is a matrix with rows representing genes and columns representing summary statistics.
• plotReceptLoss() generates a histogram visualization of the distribution of the gene desired by the user.
We begin by simulating two gene expression data matrices, one from tumor and the other from normal tissue.
```r
library(receptLoss)
library(dplyr)
library(ggplot2)
set.seed(100)

## Simulate matrix of expression values from
## 10 genes measured in both normal tissue and
## tumor tissue in 100 patients
exprMatrNml <- matrix(abs(rnorm(100, mean=2.5)), nrow=10)
exprMatrTum <- matrix(abs(rnorm(1000)), nrow=10)
geneNames <- paste0(letters[seq_len(nrow(exprMatrNml))],
    seq_len(nrow(exprMatrNml)))
rownames(exprMatrNml) <- rownames(exprMatrTum) <- geneNames
```
exprMatrNml and exprMatrTum are $$m \times n$$ matrices containing gene expression data from normal and tumor tissue, respectively, with $$m$$ genes as rows and $$n$$ patients as columns. The row names of these matrices are the gene names.
These two matrices should have the same number of rows (ie genes), with genes listed in the same order between the two matrices. However, they don’t have to have the same number of columns (ie patients).
To run receptLoss(), we also define 2 parameters:
• nSdBelow is an integer value that places a lower boundary (i.e. lowerBound, shown as the pink ‘B’ in the image below) $$n$$ standard deviations below the mean of each gene’s expression levels in normal tissue (dotted pink curve below). The larger nSdBelow is, the smaller (i.e. further to the left) the lowerBound becomes.
• We recommend setting nSdBelow=2, as ~97.7% of the normal tissue expression data should be greater than the lowerBound (assuming the expression data from normal tissue is distributed as a Gaussian).
• minPropPerGroup - a numeric value between $$(0,0.5)$$ indicating the minimum proportion of tumor samples desired within each of the two tumor subgroups defined by the lowerBound. Determines the value of meetsMinPropPerGrp (either TRUE or FALSE) in the output.
• We recommend setting minPropPerGroup=0.20. Values close to 0 may result in the inclusion of genes that subdivide tumors into very unequally-sized subgroups. Values closer to 0.5 will identify genes that subdivide tumors into nearly equal-sized groups and may be unnecessarily restrictive.
<img src="fig_2_1.png" alt="fig 2" width="60%">
```r
nSdBelow <- 2
minPropPerGroup <- .2
rl <- receptLoss(exprMatrNml, exprMatrTum, nSdBelow, minPropPerGroup)
#> # A tibble: 6 × 7
#>   geneNm lowerBound propTumLessThBound  muAb  muBl deltaMu meetsMinPropPerGrp
#>   <chr>       <dbl>              <dbl> <dbl> <dbl>   <dbl> <lgl>
#> 1 f6          0.928               0.58  1.52 0.425   1.10  TRUE
#> 2 b2          0.538               0.38  1.30 0.258   1.05  TRUE
#> 3 i9          0.379               0.24  1.10 0.203   0.893 TRUE
#> 4 g7          0.805               0.57  1.30 0.405   0.891 TRUE
#> 5 d4          0.359               0.32  1.04 0.174   0.866 TRUE
#> 6 c3          0.554               0.41  1.12 0.290   0.826 TRUE
```
The output of receptLoss() is an $$m\times7$$ matrix, with $$m$$ equaling the number of genes. The 7 columns are as follows:
• geneNm - the gene name
• lowerBound ($$B$$) - the value nSdBelow the mean of the normal tissue expression data. Can be expressed as $B=\mu_N - \sigma_N \cdot n_{sdBelow},$ where $$\mu_N$$ is the mean of the normal tissue expression data, $$\sigma_N$$ is the standard deviation of the normal tissue expression data, and $$n_{sdBelow}$$ is the value nSdBelow set by the user.
• propTumLessThBound ($$\pi_L$$) - the proportion of tumor samples with expression levels less than lowerBound. Can be expressed as: $\pi_L =\frac{1}{N_T}\sum_{j=1}^{N_T} \Bigg\{ \begin{array}{ll} 1,~ if~x_{j} < lowerBound \\ 0, ~ otherwise \end{array},$ where $$x_{j}$$ is the $$j^{th}$$ tumor sample and $$N_T$$ is the total number of tumor samples.
• muAb ($$\mu_A$$) - “mu above”, the arithmetic mean across expression values from tumors greater than (ie above) the lowerBound.
• muBl ($$\mu_B$$) - “mu below”, the arithmetic mean across expression values from tumors less than (ie below) the lowerBound.
• deltaMu ($$\Delta\mu$$) - equal to $$\mu_A - \mu_B$$. The rows in the output matrix are sorted in descending order by the deltaMu statistic, which indicates the degree of separation between the two tumor subgroups. Higher deltaMu values indicate tumor subgroups that are more cleanly separated and more likely to constitute a bimodal distribution within the tumor samples.
• meetsMinPropPerGrp - a logical indicating whether the proportion of samples in each group is greater than that set by minPropPerGroup. If $$min(\pi_L, 1-\pi_L) >$$ minPropPerGroup, then meetsMinPropPerGrp is TRUE; otherwise, it is FALSE. Genes for which meetsMinPropPerGrp equals FALSE can be filtered out - they do not have a sufficient proportion of tumors in each group to permit useful tumor subgrouping.
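Genes with meetsMinPropPerGrp equal to FALSE can then be dropped with a one-line filter; a usage sketch (dplyr is attached above, but this line is not part of the vignette’s own chunks):

```r
rlPass <- dplyr::filter(rl, meetsMinPropPerGrp)  # keep genes usable for subgrouping
head(rlPass)
```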
## Visualization
Let’s take the top-ranked gene and plot its distribution.
```r
clrs <- c("#E78AC3", "#8DA0CB")
tryCatch({plotReceptLoss(exprMatrNml, exprMatrTum, rl,
        geneName=as.character(rl[1,1]), clrs=clrs)},
    warning=function(cond){
        knitr::include_graphics("rl_fig.png")
    }, error=function(cond){
        knitr::include_graphics("rl_fig.png")
    }
)
```
Here’s what this graph is showing us:
• The x-axis represents RNA expression values, with lower values toward the left and larger values (i.e. higher expression) toward the right. The y-axis represents density. The name of the gene (“f6”) is shown in the upper left of the plot.
• The dotted curve represents a Gaussian distribution fit to the expression data from normal tissue, and the blue histogram represents expression data from tumor tissue.
• The pink vertical line corresponds to the lowerBound for the expression data from normal tissue.
• Since most normal tissue expresses the RNA above the lowerBound, any tumors that express the RNA below this value have lost RNA expression relative to normal tissue. Thus, the lowerBound forms a boundary between 2 tumor subgroups that either have or have not lost RNA expression relative to normal tissue.
## Nuclear Hormone Receptor (NHR) filtering
The question that inspired this package was whether the loss of expression of any of the ~50 NHRs (beyond the well-known estrogen, progesterone, and androgen NHRs) in uterine tumors was associated with differences in patient survival. NHRs might not only serve as survival biomarkers but also as drug targets, as their activity can be modulated by small molecules that resemble their hormonal ligands.
To facilitate the application of this question to additional cancer types, a list of all NHRs is included in this package as the object nhrs.
This object facilitates filtering of NHRs from a matrix of gene expression data, as it contains several commonly-used gene identifiers (e.g. HGNC symbol, HGNC ID, Entrez ID, and Ensembl ID) for the NHRs that might be found in different RNA expression datasets.
The source code for generating nhrs is available in “data-raw/nhrs.R”.
receptLoss::nhrs
#> # A tibble: 54 × 6
#> hgnc_symbol hgnc_id hgnc_name entrez_gene_id ensembl_gene_id synonyms
#> <chr> <dbl> <chr> <dbl> <chr> <chr>
#> 1 NR0B1 7960 nuclear rec… 190 ENSG00000169297 AHC|DSS|AHCH…
#> 2 NR0B2 7961 nuclear rec… 8431 ENSG00000131910 Small hetero…
#> 3 THRA 11796 thyroid hor… 7067 ENSG00000126351 AR7|EAR-7.1/…
#> 4 THRB 11799 thyroid hor… 7068 ENSG00000151090 THR1|THRB1|T…
#> 5 RARA 9864 retinoic ac… 5914 ENSG00000131759 RAR alpha 1|…
#> 6 RARB 9865 retinoic ac… 5915 ENSG00000077092 HAP|HBV-acti…
#> 7 RARG 9866 retinoic ac… 5916 ENSG00000172819 RARC|RAR&gam…
#> 8 PPARA 9232 peroxisome … 5465 ENSG00000186951 NUC1|nuclear…
#> 9 PPARD 9235 peroxisome … 5467 ENSG00000112033 NUCII|PPAR&b…
#> 10 PPARG 9236 peroxisome … 5468 ENSG00000132170 PPARG1|PPARG…
#> # … with 44 more rows
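As a sketch of how this table might be used (assuming the expression matrices from earlier have HGNC symbols as row names — substitute the entrez_gene_id or ensembl_gene_id column if your data are keyed differently):
nhr_symbols <- receptLoss::nhrs$hgnc_symbol
exprMatrNmlNhr <- exprMatrNml[rownames(exprMatrNml) %in% nhr_symbols, , drop = FALSE]
exprMatrTumNhr <- exprMatrTum[rownames(exprMatrTum) %in% nhr_symbols, , drop = FALSE]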
## Conclusions
• receptLoss identifies genes that subclassify tumors based on their RNA expression levels relative to normal tissue. The genes are ranked by their $$\Delta\mu$$ statistic, which reflects the cleanness of separation (i.e., bimodality) between the two tumor subgroups.
• receptLoss can be expanded for use with a variety of tumors, genes (e.g. to identify novel candidate tumor suppressors), biological data (e.g. miRNA, protein expression), and even non-biological data types where you have numeric data from two groups (one normal group and one abnormal group) and where subgroup identification is desired within the abnormal group.
• receptLoss is particularly useful when there are a large number of tumor samples (hundreds) relative to normal samples (dozens), as is the case in several cancer databases, including the uterine cancer database from the Cancer Genome Atlas/Genomic Data Commons. By assuming that the normal expression data are distributed as a single Gaussian, receptLoss can subclassify large numbers of tumors even in the presence of small numbers of normal tissue samples.
## Display this vignette
vignette("receptLoss")
## Session Info
sessionInfo()
#> R Under development (unstable) (2021-10-19 r81077)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: Ubuntu 20.04.3 LTS
#>
#> Matrix products: default
#> BLAS: /home/biocbuild/bbs-3.15-bioc/R/lib/libRblas.so
#> LAPACK: /home/biocbuild/bbs-3.15-bioc/R/lib/libRlapack.so
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
#> [3] LC_TIME=en_GB LC_COLLATE=C
#> [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] ggplot2_3.3.5 dplyr_1.0.7 receptLoss_1.7.0
#>
#> loaded via a namespace (and not attached):
#> [1] SummarizedExperiment_1.25.0 tidyselect_1.1.1
#> [3] xfun_0.27 bslib_0.3.1
#> [5] purrr_0.3.4 lattice_0.20-45
#> [7] colorspace_2.0-2 vctrs_0.3.8
#> [9] generics_0.1.1 htmltools_0.5.2
#> [11] stats4_4.2.0 yaml_2.2.1
#> [13] utf8_1.2.2 rlang_0.4.12
#> [15] jquerylib_0.1.4 pillar_1.6.4
#> [17] withr_2.4.2 glue_1.4.2
#> [19] DBI_1.1.1 BiocGenerics_0.41.0
#> [21] matrixStats_0.61.0 GenomeInfoDbData_1.2.7
#> [23] lifecycle_1.0.1 stringr_1.4.0
#> [25] MatrixGenerics_1.7.0 zlibbioc_1.41.0
#> [27] munsell_0.5.0 gtable_0.3.0
#> [29] evaluate_0.14 labeling_0.4.2
#> [31] Biobase_2.55.0 knitr_1.36
#> [33] IRanges_2.29.0 fastmap_1.1.0
#> [35] GenomeInfoDb_1.31.0 fansi_0.5.0
#> [37] highr_0.9 scales_1.1.1
#> [39] DelayedArray_0.21.0 S4Vectors_0.33.0
#> [41] jsonlite_1.7.2 XVector_0.35.0
#> [43] png_0.1-7 digest_0.6.28
#> [45] stringi_1.7.5 GenomicRanges_1.47.1
#> [47] grid_4.2.0 cli_3.1.0
#> [49] tools_4.2.0 bitops_1.0-7
#> [51] magrittr_2.0.1 sass_0.4.0
#> [53] RCurl_1.98-1.5 tibble_3.1.5
#> [55] crayon_1.4.1 tidyr_1.1.4
#> [57] pkgconfig_2.0.3 Matrix_1.3-4
#> [59] ellipsis_0.3.2 assertthat_0.2.1
#> [61] rmarkdown_2.11 R6_2.5.1
#> [63] compiler_4.2.0 | 2022-08-09 07:28:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42550161480903625, "perplexity": 10271.500074016047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00102.warc.gz"} |
https://www.topmarkessays.com/blog/read-instructions-58/ | Assignment instructions
Based on the same scenario as in Assignments 1, 2, and 3, you are now considering additional factors needed for your proposal based on RFP #123456789, dated 07/14/2014, where another local competitor intends to submit a proposal.
1. Although you have always built in a profit margin of ten percent (10%) for commercial flooring jobs, you are willing to consider a lesser profit margin in this case in order to win the contract.
2. The Navy’s Contract Administration Officer is known to be a smart, tough negotiator.
Write a two to three (2-3) page paper in which you:
1. Determine two (2) potential profit objectives that you will consider for accepting a less than normal profit margin if you win the contract. Provide a rationale for your response.
2. Determine two to three (2-3) negotiation strategies or tactics that you feel would be effective for winning the contract. Provide a rationale for your response.
3. Use at least three (3) quality references. Note: Wikipedia and other related websites do not qualify as academic resources.
\$10 per 275 words - Purchase Now | 2020-09-29 21:25:13 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8049355745315552, "perplexity": 2141.4279843566555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402088830.87/warc/CC-MAIN-20200929190110-20200929220110-00249.warc.gz"} |
https://dakep.github.io/examinr/articles/randomized_exam.html | Almost every aspect of an exam can be randomized: section order, question texts (text blocks, question titles, answer options, etc, correct answers, etc), and exercises. Randomization is based on exam-specific data providers: one for question texts and one for exercise chunks. Question texts only have access to data generated by the data provider set in the setup chunk via data_provider(). Exercise code only has access to data generated by the exercise data provider set via exam_config(exercise_data_provider=).
Randomization in questions and text blocks
The only R objects accessible in the main body of the R Markdown document are the ones returned by the data provider set via data_provider(). The data provider is a function which generates all (randomized) data which may change between users and attempts. It receives arguments section (identifier of the currently visible section), attempt (information about the current attempt), session (the current Shiny session object), and ... (for future expansions). Arguments attempt and session may be NULL if the data provider is invoked during pre-rendering of the exam. The function must return either a list or an environment (the parent environment will be stripped). Before calling the data provider, the seed for the RNG is automatically set according to the current attempt to ensure the data provider always returns the same values for a given attempt. The data provider is not only invoked when displaying the exam to the user, but also when pre-rendering the exam, when showing feedback and when grading.
For progressive exams, the data provider is invoked for a specific section, while in any other case it is invoked with section = TRUE.
As an example, consider the following data provider:
data_provider(function (section, attempt, session, ...) {
  # Create the environment and some values used in all sections:
  objs <- list(sample_size = sample(8:14, 1),
               sample_mean = round(rnorm(1, mean = 100), 1),
               sample_sd = round(runif(1, 1, 3), 1))
  section_seeds <- sample.int(.Machine$integer.max, 2)
  if (isTRUE(section) || identical(section, 'maximum-likelihood-estimation')) {
    # For MLE, users need an actual sample
    set.seed(section_seeds[1])
    objs$sample <- with(objs, round(rnorm(sample_size), 2) * sample_sd + sample_mean)
  }
  if (isTRUE(section) || identical(section, 'confidence-interval')) {
    # In the confidence interval section we ask for a specific confidence level.
    set.seed(section_seeds[2])
    objs$conf_level <- sample(c(0.995, 0.99, 0.975, 0.95, 0.90), 1)
  }
  return(objs)
})

There are a few important guidelines for writing your own data provider. First and foremost, the section-specific randomizations must not interfere with each other. This ensures that calling the data provider with section = TRUE gives the same values, e.g., for conf_level, as if calling the data provider with section = 'confidence-interval'. The data provider will always be called with section = TRUE when showing feedback to users and when grading, as in these cases all sections are shown at once. For progressive exams, however, the data provider is called only for the currently visible section. Furthermore, for performance reasons it is good practice to generate only the data for the requested sections.

In the example above, section-specific randomization is done by setting a section-specific seed when generating the data for the individual sections. Another option would be to use withr::with_preserve_seed(). If the generated data is small but takes a long time to compute, consider caching the data by setting data_provider(cache_data = TRUE). This will cache the generated data in the Shiny session and the data provider will be invoked only once for every attempt and section.

Using randomized values

Using randomized data in an exam document is the same as using R objects in a standard R Markdown document: via R code chunks. The section on confidence intervals, for example, may look something like

{r setup}
# Set the data provider in the setup chunk
data_provider(function (section, attempt, session, ...) {
  # Create the environment and some values used in all sections:
  objs <- list(sample_size = sample(8:14, 1),
               sample_mean = round(rnorm(1, mean = 100), 1),
               sample_sd = round(runif(1, 1, 3), 1))
  section_seeds <- sample.int(.Machine$integer.max, 2)
  if (isTRUE(section) || identical(section, 'maximum-likelihood-estimation')) {
    # For MLE, users need an actual sample
    set.seed(section_seeds[1])
    objs$sample <- with(objs, round(rnorm(sample_size), 2) * sample_sd + sample_mean)
  }
  if (isTRUE(section) || identical(section, 'confidence-interval')) {
    # In the confidence interval section we ask for a specific confidence level.
    set.seed(section_seeds[2])
    objs$conf_level <- sample(c(0.995, 0.99, 0.975, 0.95, 0.90), 1)
  }
  return(objs)
})
# Confidence interval
As before, the sample size is $n = r sample_size$ and the maximum likelihood estimate for the
mean is $\hat\mu = r sample_mean$.
Moreover, the sample standard deviation is $s = r sample_sd$.
{r ci_q_1}
text_question(
  title = r"(What is the lower endpoint of the r conf_level * 100% confidence interval for $\mu$?)",
  type = "numeric", accuracy = 1e-3,
  solution = {
    t_quantile <- 1 - (1 - conf_level) / 2
    ci_lower <- sample_mean - qt(t_quantile, df = sample_size - 1) * sample_sd / sqrt(sample_size)
    structure(sprintf(r"[The lower endpoint of the CI is $\displaystyle \hat\mu - t_{n,%.4f} \times \frac{s}{\sqrt{n}} = %.3f$]", t_quantile, ci_lower))
  })
(Note: the above R code chunk uses raw character constants as introduced in R 4.0, avoiding the need to double-escape backslashes in the MathJax equations.)
The R objects generated by the data provider are used via inline R code chunks in the question text and in the question title. The rendered section looks like this:
The data from the data provider is also available when computing the solution to the question. The solution expression is evaluated in an environment generated from the value returned by the data provider. The feedback to the user looks like this:
Performance considerations
Prefer inline R code chunks (i.e., r ...) over R code blocks of the form
{r}
# ...
If the section contains only inline R chunks, examinr uses commonmark::markdown_html() to parse the markdown to HTML. Whenever a section contains R code blocks, however, the rmarkdown::render() function is invoked, which is much slower. This becomes important if many users access the exam at the same time in the same R process.
Randomization in exercises
Exercises are slightly different than regular question texts in that the data available in the exercises is generated via the exercise data provider configured with exam_config(exercise_data_provider=).
The exercise data provider is invoked when the user runs their code for an exercise chunk and when examinr builds the information needed for auto-completion. The exercise data provider is invoked with arguments label (identifier of the exercise chunk), attempt (information about the current attempt), session (the current Shiny session object), and ... (for future expansions). Arguments attempt and session are NULL when the provider is invoked for determining the objects available for auto-completion. The restrictions and notes given above for the data provider also apply to the exercise data provider.
In many situations, the data provider and the exercise data provider share many objects. In these cases, it is sensible to write a helper function (below called common_data_provider()) which generates the data and is invoked by both data providers, for instance
common_data_provider <- function (section) {
  # Create the environment and some values used in all sections:
  objs <- list(sample_size = sample(8:14, 1),
               sample_mean = round(rnorm(1, mean = 100), 1),
               sample_sd = round(runif(1, 1, 3), 1))
  section_seeds <- sample.int(.Machine$integer.max, 2)
  if (isTRUE(section) || identical(section, 'maximum-likelihood-estimation')) {
    # For MLE, users need an actual sample
    set.seed(section_seeds[1])
    objs$sample <- with(objs, round(rnorm(sample_size), 2) * sample_sd + sample_mean)
  }
  if (isTRUE(section) || identical(section, 'confidence-interval')) {
    # In the confidence interval section we ask for a specific confidence level.
    set.seed(section_seeds[2])
    objs$conf_level <- sample(c(0.995, 0.99, 0.975, 0.95, 0.90), 1)
  }
  return(objs)
}

In this example, the data provider may simply forward to the common data provider:

# Set the data provider in the setup chunk
data_provider(function (section, attempt, session, ...) {
  return(common_data_provider(section))
})

The exercise data provider, on the other hand, would return only the objects the user should have access to in the exercise chunk:

# Set the data provider in a "server-start" chunk
exam_config(exercise_data_provider = function (label, attempt, session, ...) {
  if (identical(label, 'mle_q_1')) {
    objs <- common_data_provider('maximum-likelihood-estimation')
    # For this exercise, the user only needs access to the sample.
    return(objs['sample'])
  }
  return(list())
})

Note that the data provider must be set in a setup chunk (it must be available both at rendering time and on the server), while the exercise data provider is set only in a server-start context. The section on maximum likelihood estimation may then look something like

{r setup}
common_data_provider <- function (section) {
  # Create the environment and some values used in all sections:
  objs <- list(sample_size = sample(8:14, 1),
               sample_mean = round(rnorm(1, mean = 100), 1),
               sample_sd = round(runif(1, 1, 3), 1))
  section_seeds <- sample.int(.Machine$integer.max, 2)
  if (isTRUE(section) || identical(section, 'maximum-likelihood-estimation')) {
    # For MLE, users need an actual sample
    set.seed(section_seeds[1])
    objs$sample <- with(objs, round(rnorm(sample_size), 2) * sample_sd + sample_mean)
  }
  if (isTRUE(section) || identical(section, 'confidence-interval')) {
    # In the confidence interval section we ask for a specific confidence level.
    set.seed(section_seeds[2])
    objs$conf_level <- sample(c(0.995, 0.99, 0.975, 0.95, 0.90), 1)
  }
  return(objs)
}

data_provider(function (section, attempt, session, ...) {
  return(common_data_provider(section))
})
{r, context="server-start"}
exam_config(exercise_data_provider = function (label, attempt, session, ...) {
  if (identical(label, 'mle_q_1')) {
    objs <- common_data_provider('maximum-likelihood-estimation')
    # For this exercise, the user only needs access to the sample.
    return(objs['sample'])
  }
  return(list())
})
# Maximum Likelihood Estimation
Let $X_1, \dotsc, X_n$ be independent normal random variables, each with mean $\mu$ and variance
$\sigma^2$.
The goal is to find the maximum likelihood estimate for $\mu$ from such a sample of size
$n = r sample_size$.
In the code chunk below, the sample is available as R object sample.
Write the R code to compute the maximum likelihood estimate $\hat\mu$.
{r mle_q_1, exercise=TRUE, exercise.solution="mle_q_1-solution"}
# Compute the maximum likelihood estimate.
{r mle_q_1-solution, eval=FALSE}
# The MLE for the mean parameter is the mean of the observed values:
mu_mle <- mean(sample)
The user would see something like the following:
If computations in the exercise data provider are slow, but the data itself is small, consider caching the results with exam_config(cache_data = TRUE).
Randomization and setup chunks
Exercises also support setup chunks, which are always run immediately before the user code. Setup chunks see the same environment as the user code and thus have access to the data created by the exercise data provider. The RNG is not seeded before running setup chunks (or user code), and therefore it cannot be used directly to create reproducible data for users.
Setup chunks are useful for transforming the data generated by the data provider, e.g., saving the data to disk for the user to read in. An alternative version of the MLE question from above could be
<!-- setup the data providers as above -->
# Maximum Likelihood Estimation
Let $X_1, \dotsc, X_n$ be independent normal random variables, each with mean $\mu$ and variance
$\sigma^2$.
The goal is to find the maximum likelihood estimate for $\mu$ from such a sample of size
$n = r sample_size$.
In the code chunk below, the sample is available in the file _./sample.csv_.
Write the R code to read in the data and compute the maximum likelihood estimate $\hat\mu$.
{r mle_q_1-setup}
# Write data to disk and hide from the user
write.csv(data.frame(x = sample), file = "sample.csv", row.names = FALSE)
rm(sample)
{r mle_q_1, exercise=TRUE, exercise.setup="mle_q_1-setup", exercise.solution="mle_q_1-solution"}
# Read in the data and compute the maximum likelihood estimate.
{r mle_q_1-solution, eval=FALSE}
# Read in the sample values
sample <- read.csv("sample.csv")
mu_mle <- mean(sample$x)

More information on using setup and solution chunks is available in the companion vignette on exercise chunks.

Customizing the seed

By default, all attempts by a user will be seeded with the same seed determined from the user's id. Sometimes, it is useful to use different seeds for different attempts, or maybe the same seed for several users. To customize the seed used for an attempt, you can specify a seeding function via exam_config(seed_attempt=). The seeding function is called with arguments user (the current user), previous_attempts (a list of all previous attempts by this user), and ... (for future expansion). It should return a single integer which will be used to seed the RNG for the attempt. To give different seeds for every attempt, for example, you can use the following:

exam_config(seed_attempt = function (user, previous_attempts, ...) {
  digest::digest2int(paste(user$user_id, length(previous_attempts)))
}) | 2021-12-02 03:31:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4037517309188843, "perplexity": 3718.5744560921416}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.69/warc/CC-MAIN-20211202024322-20211202054322-00092.warc.gz"} |
https://physicslens.com/category/a-level-topics/ | ## Uniform vertical circular motion
The following GeoGebra app simulates the force vectors on an object in uniform vertical circular motion.
A real world example of this would be the forces acting on a cabin in a ferris wheel.
<iframe scrolling="no" title="Vertical Uniform Circular Motion " src="https://www.geogebra.org/material/iframe/id/t5jstqsm/width/640/height/480/border/888888/sfsb/true/smb/false/stb/false/stbh/false/ai/false/asb/false/sri/true/rc/false/ld/false/sdz/false/ctl/false" width="640px" height="480px" style="border:0px;"> </iframe>
## Vertical Non-Uniform Circular Motion
This is a simulation that shows the vectors of forces acting on an object rolling in a vertical loop, assuming negligible friction.
To complete the loop, the initial velocity must be sufficiently high so that contact between the object and the track is maintained. When the contact force between the object and its looping track no longer exists, the object will drop from the loop.
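For a point-like object, the borderline case is at the top of the loop, where gravity alone provides the centripetal force: $mg = \frac{mv^2}{r}$, so contact is maintained only if the speed at the top satisfies $v^2 \geq gr$; below that, the track would have to pull the object inward, which it cannot do.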
The following code is for embedding in SLS.
<iframe scrolling="no" title="Vertical non-uniform circular motion" src="https://www.geogebra.org/material/iframe/id/ny3jhhsp/width/640/height/480/border/888888/sfsb/true/smb/false/stb/false/stbh/false/ai/false/asb/false/sri/true/rc/false/ld/false/sdz/false/ctl/false" width="640px" height="480px" style="border:0px;"> </iframe>
## Aircraft Turning in a Circle: a 3-D Visualisation with GeoGebra
This GeoGebra app is a 3-D visualisation tool of the force vectors acting on an aircraft turning with uniform circular motion in a horizontal plane.
I prepared this in advance as I will be lecturing on this JC1 topic next year.
## Hydrostatic Pressure and Upthrust
This app is used to demonstrate how a spherical object with a finite volume immersed in a fluid experiences an upthrust due to the differences in pressure around it.
Given that the centre of mass remains in the same position within the fluid, as the radius increases, the pressure due to the fluid above the object decreases while the pressure below increases. This is because hydrostatic pressure at a point is proportional to the height of the fluid above it.
It can also be used to show that when the volume becomes infinitesimal, the pressure acting in all directions is equal.
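Quantitatively, the hydrostatic pressure at a depth $h$ below the surface is $p = p_0 + h\rho g$, where $p_0$ is the pressure at the surface and $\rho$ is the density of the fluid, so the pressure pushing up on the bottom of the sphere is larger than the pressure pushing down on its top.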
The following codes can be used to embed this into SLS.
<iframe scrolling="no" title="Hydrostatic Pressure and Upthrust" src="https://www.geogebra.org/material/iframe/id/xxeyzkqq/width/640/height/480/border/888888/sfsb/true/smb/false/stb/false/stbh/false/ai/false/asb/false/sri/false/rc/false/ld/false/sdz/false/ctl/false" width="640px" height="480px" style="border:0px;"> </iframe>
## Noise-cancelling AirPod Pro
The recently launched Apple AirPod Pro presents a wonderful opportunity to relate an A-level concept to a real-world example - how noise-cancelling earphones work.
Apple's website explained it in layman's terms that seem to make sense. Let your students attempt to do a better job of explaining how destructive interference of waves is applied.
I probably won't spend SGD379 on it though.
## Why is Glass Transparent?
This video relates a phenomenon that we have taken for granted to the study of quantum physics (more specifically, photon absorption) and atomic structure.
## Phase Difference GeoGebra Apps
I created a series of GeoGebra apps for the JC topics of Waves and Superposition, mainly on the concept of Phase Difference. The sizes of these GeoGebra apps are optimised for embedding into SLS. When I have time, I will create detailed instructions on how to create such apps. Meanwhile, feel free to use them.
Instructions on how to embed the apps into SLS can be found at this staging environment of the SLS user guide.
Phase difference between two particles on a progressive wave. Move the particles along the wave to see the value.
Phase difference between two particles on a stationary wave. Move the particles along the wave to observe how their velocities are different or similar.
Observe velocity vectors of multiple particles on a progressive wave.
## How to Understand the Image of the Black Hole
Here's a good explanation by Veritasium on why the image of the black hole looks the way it does.
## Idealized Stirling Cycle
I created a new GeoGebra app based on an ideal Stirling Cycle (A. Romanelli Alternative thermodynamic cycle for the Stirling machine, American Journal of Physics 85, 926 (2017)) which includes two isothermal and two isochoric processes. The Stirling engine is a very good example to apply the First Law of Thermodynamics to, as the amount of gas is fixed so the macro-variables are only pressure, temperature and volume. Simplifying the cycle makes it even easier for first time learners to understand how the engine works.
For those who prefer to be impressed by an actual working model, it can be bought for less than S\$30 on Lazada. All you need for it to run is a little hot water or some ice. Here's a video of the one I bought:
The parts of the Stirling engine are labelled here:
My simulation may not look identical to the engine shown but it does have the same power piston (to do work on the flywheel) and displacer piston (to shunt the air to and fro for more efficient heat exchange).
## Geogebra App on Maximum Power Theorem
This simulation demonstrates the power dissipated in a variable resistor given that the battery has an internal resistance (made variable in this app as well).
Since the power dissipated by the resistor is given by
$P=I^2R$
and the current is given by
$I=\frac{E}{R+r}$,
$P= E^2\times\frac{R}{(R+r)^2}=\frac{E^2}{\frac{r^2}{R}+R+2r}$
This power will be a maximum if the expression for the denominator
$\frac{r^2}{R}+R+2r$
is a minimum.
Differentiating the expression with respect to R, we get
$\frac{d(r^2/R+R+2r)}{dR}=-\frac{r^2}{R^2}+1$
When the denominator is a minimum,
$-\frac{r^2}{R^2}+1=0$, so
r = R. | 2019-12-13 10:21:51 | {"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2647538185119629, "perplexity": 942.8675711002012}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540553486.23/warc/CC-MAIN-20191213094833-20191213122833-00476.warc.gz"} |
http://letslearnnepal.com/class-11/chemistry/inorganic-chemistry/oxygen/physical-properties-chemical-properties-and-uses-of-oxygen/ | # Physical properties, chemical properties and uses of oxygen
Physical properties of oxygen:
– Oxygen is a colourless, odourless and tasteless gas.
– It is pale blue in colour in the liquid and solid states.
– It is slightly soluble in water.
– It is heavier than air.
Note: Metals like gold, silver and platinum can absorb oxygen at high temperature and expel it on cooling. This phenomenon is called spitting of silver.
Chemical properties of oxygen:
Combustibility: It does not burn itself but it supports combustion. Oxygen requires high initial heating due to the high bond dissociation energy of 493.4 kJ/mol of the O=O bond.
Note: The presence of oxygen is a must for burning anything; nothing can get burnt without oxygen. Dissociation energy is the energy required to break old bonds when a chemical species undergoes a chemical reaction.
Action with hydrogen: Oxygen when heated with hydrogen forms water.
$$\ce{2H2 + O2->[\Delta]2H2O}$$
Action with Nitrogen: Oxygen reacts with nitrogen at high temperature to give nitric oxide, which is further oxidised to nitrogen dioxide.
$$\ce{N2 + O2->[{3000^{\circ}C}]2NO}$$
$$\ce{2NO + O2->2NO2}$$
Action with carbon: Carbon when reacted with limited oxygen gives carbon monoxide and with excess oxygen, carbon gives carbon dioxide.
$$\ce{2C + \underset{\text{limited}}{\ce{O2}} ->\underset{\text{Carbon Monoxide}}{\ce{2CO}}}$$
$$\ce{C + \underset{\text{Excess}}{\ce{O2}} ->\underset{\text{Carbon Dioxide}}{\ce{CO2}}}$$
Action with ammonia: Oxygen reacts with ammonia to give nitric oxide and water. It is a reaction involved in manufacture of nitric acid in Ostwald’s process.
$$\ce{4NH3 + 5O2->[{800^{\circ}C}][\ce{Pt/MO}]4NO + 6H2O ^ }$$
Action with glucose: Glucose reacts with oxygen to give carbon dioxide, water and energy. This reaction takes place inside the human body.
$${\text{Glucose + Oxygen} \rightarrow \text{Carbon Dioxide + Water + Energy}}$$
i.e., $$\ce{C6H12O6 + 6O2->6CO2 + 6H2O + Energy}$$
Action with metals:
Oxygen combines with many metals to form their respective oxides.
$$\ce{4Na + O2->[\text{Room temperature}]2Na2O}$$
$$\ce{2Na + O2->[{300^{\circ}C}]Na2O2}$$
$$\ce{4K + O2->2K2O}$$
$$\ce{2Mg + O2->[\Delta]2MgO}$$
$$\ce{2Zn + O2->[\Delta]2ZnO}$$
$$\ce{4Al + 3O2->[\Delta]Al2O3}$$
Action with iron: Oxygen when reacted with iron gives ferrous oxide. When excess oxygen is passed, it gives ferrosoferic oxide, and further addition of oxygen gives ferric oxide.
$$\ce{2Fe + O2->[\Delta]\underset{Ferrous oxide}{2FeO}}$$
$$\ce{6FeO + O2->[\Delta]\underset{Ferrosoferic oxide}{2Fe3O4}}$$
$$\ce{4Fe + 3O2->[\Delta]\underset{Ferric oxide}{2Fe2O3}}$$
When iron reacts with oxygen in the presence of water, rust is formed.
$$\ce{4Fe + 3O2 + 2xH2O->\underset{Rust}{2Fe2O3.xH2O}}$$
Uses of oxygen:
– It is used for artificial respiration in hospitals, and by mountaineers at high altitude, miners and sea divers in the form of oxygen masks.
– It is used, as liquid oxygen, as the oxidiser for fuels in rocket engines.
– It is used for the generation of energy inside our body.
– It is used as strong oxidizing agent in laboratory.
– It is used by plants, like all aerobic organisms, for respiration.
– It is used in preparing different explosives.
– It is used as a germicide and insecticide.
– It is the main element for the formation of ozone.
#### 3 Responses to “Physical properties, chemical properties and uses of oxygen”
1. Ahmeddinabdikheir
good precise summary notes
2. Ahmeddinabdikheir
precise summarised good notes | 2018-01-23 21:23:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5605340600013733, "perplexity": 5454.709759988881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892699.72/warc/CC-MAIN-20180123211127-20180123231127-00502.warc.gz"} |
https://wordpress.stackexchange.com/questions/208256/how-to-check-if-debug-is-true-and-can-i-use-it-for-my-own-code | # How to check if debug is true and can I use it for my own code? [duplicate]
This question is an exact duplicate of:
I'm needing to debug one of my themes and I want to be able to switch on a debug mode so I can output more information or switch off debug mode and not see any information.
I noticed there is a debug variable defined in wp-config.php. I can easily set this to true or false. Is it OK to use this variable for my own debugging purposes or should I create my own?
Also, how do I check for if debug is true? My PHP is a bit rusty. Is this correct:
define('WP_DEBUG', true);
if ($WP_DEBUG) {
    // do something
}

My question is different.

## marked as duplicate by Howdy_McGee♦, Mayeenul Islam, Nicolai, cybmeta, Pieter Goosen Nov 24 '15 at 17:43

This question was marked as an exact duplicate of an existing question.

• Why can't you install your theme on a local instance and leave debug equal to true? It's not a good idea to leave debug true on a live site. – DᴀʀᴛʜVᴀᴅᴇʀ Nov 11 '15 at 2:43
• @Darth_Vader It's much easier for me to debug on the remote site right now. At a later time I might set up a local install. But as you confirmed, it's not a good idea to leave debug set to true on a live site. So I want to enable it briefly to inspect any errors and then turn it off quickly if I have to. Local installs are unfortunately different than remote. – 1.21 gigawatts Nov 11 '15 at 2:47
• @Howdy_McGee not quite the same question. They mention log files and different variables than I am asking about. – 1.21 gigawatts Nov 11 '15 at 2:49
• I wouldn't suggest doing this, but if you do, you should do a redirect before enabling debug. – DᴀʀᴛʜVᴀᴅᴇʀ Nov 11 '15 at 2:56
• WP_DEBUG is defined as a constant, not as a variable; to check it you should do if ( WP_DEBUG ) (without the $ symbol). Apart from that, the linked question by @Howdy_McGee seems to be what you need to switch debug on/off programmatically. – cybmeta Nov 24 '15 at 9:44
PHP constants don't have the leading $. Strictly, this isn't WordPress, but since there isn't a Core is_debug() function that I am aware of, what you want is:
if (defined('WP_DEBUG') && true === WP_DEBUG) {
    // debug-only code goes here
}
• I'm using if ( WP_DEBUG ) { } and it works just fine... – dev_masta Mar 30 '16 at 23:40 | 2019-10-16 09:52:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5450862050056458, "perplexity": 1453.467682211682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00214.warc.gz"} |
https://support.bioconductor.org/p/105954/ | How can I use Annotatr on CDS?
xie186 • 0
@xie186-11029
Last seen 14 months ago
USA
I'm using Annotatr on mm10.
library(annotatr)
#annots =c('mm10_cpg_islands', 'mm10_cpg_shores', 'mm10_cpg_shelves', 'mm10_cpg_inter')
annots = c('mm10_genes_promoters',
'mm10_genes_5UTRs',
'mm10_genes_3UTRs',
'mm10_genes_exons',
'mm10_genes_introns',
"mm10_genes_1to5kb",
'mm10_genes_intergenic')
# Build the annotations (a single GRanges object)
annotations = build_annotations(genome = 'mm10', annotations = annots)
UTRs are already included in exons. Is there a way that I can use CDS instead of exons here? Thanks.
Annotatr CDS Exon UTR • 813 views
rcavalca ▴ 140
@rcavalca-7718
Last seen 3.8 years ago
United States
Hi, thanks for using annotatr.
The CDS can be used with mm10_genes_cds. For a visual idea of where that falls: http://bioconductor.org/packages/release/bioc/vignettes/annotatr/inst/doc/annotatr-vignette.html#genic-annotations
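For example, the annots vector from the question could simply swap the exon entry for the CDS annotation:
annots = c('mm10_genes_promoters',
           'mm10_genes_5UTRs',
           'mm10_genes_3UTRs',
           'mm10_genes_cds',
           'mm10_genes_introns',
           'mm10_genes_1to5kb',
           'mm10_genes_intergenic')
annotations = build_annotations(genome = 'mm10', annotations = annots)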
I was using 'CDS' instead of 'cds', so it didn't work out. Thanks for developing such a nice package.
Hi @rcavalca, I'm wondering that whether the parameter "fill_order" is just for the order of the legend? Or it will also be applied to priority information. For example, if there is a region that overlap with both CDS and UTR. The fill order is c("UTR", "CDS"). Will both of them be considered or just "UTR" be considered? Thanks. | 2022-12-02 11:01:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6781247854232788, "perplexity": 10363.944040986942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710900.9/warc/CC-MAIN-20221202082526-20221202112526-00611.warc.gz"} |
http://stats.stackexchange.com/questions/4624/updating-the-lasso-fit-with-new-observations | # Updating the lasso fit with new observations
I am fitting an L1-regularized linear regression to a very large dataset (with n>>p.) The variables are known in advance, but the observations arrive in small chunks. I would like to maintain the lasso fit after each chunk.
I can obviously re-fit the entire model after seeing each new set of observations. This, however, would be pretty inefficient given that there is a lot of data. The amount of new data that arrives at each step is very small, and the fit is unlikely to change much between steps.
Is there anything I can do to reduce the overall computational burden?
I was looking at the LARS algorithm of Efron et al., but would be happy to consider any other fitting method if it can be made to "warm-start" in the way described above.
Notes:
1. I am mainly looking for an algorithm, but pointers to existing software packages that can do this may also prove insightful.
2. In addition to the current lasso trajectories, the algorithm is of course welcome to keep other state.
Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani, Least Angle Regression, Annals of Statistics (with discussion) (2004) 32(2), 407--499.
-
The lasso is fitted through LARS (an iterative process that starts at some initial estimate $\beta^0$). By default $\beta^0=0_p$, but you can change this in most implementations (and replace it by the optimal $\beta^*_{old}$ you already have). The closer $\beta^*_{old}$ is to $\beta_{new}^*$, the smaller the number of LARS iterations you will have to take to get to $\beta_{new}^*$.
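To make the warm-start idea concrete, here is a rough sketch using plain coordinate descent for a fixed penalty $\lambda$ (not LARS, and not any particular package; the function names are made up for illustration). Restarting from the previous solution after appending a small chunk of new observations typically needs far fewer sweeps than restarting from $0_p$:

# Minimizes (1/(2n))*||y - X b||^2 + lambda*||b||_1 by cyclic coordinate descent.
soft_threshold <- function(z, gamma) sign(z) * pmax(abs(z) - gamma, 0)

lasso_cd <- function(X, y, lambda, beta = rep(0, ncol(X)), tol = 1e-8, max_iter = 1000) {
  n <- nrow(X)
  for (iter in seq_len(max_iter)) {
    beta_old <- beta
    for (j in seq_len(ncol(X))) {
      r_j <- y - X[, -j, drop = FALSE] %*% beta[-j]   # partial residual without predictor j
      z_j <- sum(X[, j] * r_j) / n
      beta[j] <- soft_threshold(z_j, lambda) / (sum(X[, j]^2) / n)
    }
    if (max(abs(beta - beta_old)) < tol) break        # converged
  }
  beta
}

# Warm start: fit on the old data, then restart the solver from the previous
# solution instead of from zero when a new chunk of observations arrives.
# beta_old <- lasso_cd(X_old, y_old, lambda = 0.1)
# beta_new <- lasso_cd(rbind(X_old, X_new), c(y_old, y_new),
#                      lambda = 0.1, beta = beta_old)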
Thanks, but I am afraid I don't follow. LARS produces a piecewise-linear path (with exactly $p+1$ points for the least angles and possibly more points for the lasso.) Each point has its own set of $\beta$. When we add more observations, all the betas can move (except $\beta^0$, which is always $0_p$.) Please could you expand on your answer? Thanks. – NPE Nov 17 '10 at 11:34
I was looking to update the entire path. However, if there's a good way to do it for a fixed penalty ($\lambda$ in the formula below), this may be a good start. Is this what you are proposing? $$\hat{\beta}^{lasso} = \underset{\beta}{\operatorname{argmin}} \left \{ {1 \over 2} \sum_{i=1}^N(y_i-\beta_0-\sum_{j=1}^p x_{ij} \beta_j)^2 + \lambda \sum_{j=1}^p |\beta_j| \right \}$$ – NPE Nov 17 '10 at 17:48
@aix. Yes, it all depends on the implementation you use and the facilities you have access to. For example: if you have access to a good lp solver, you can feed it with the past optimal values of $\beta$ and it'll carry the 1-2 step to the new solution very efficiently. You should add these details to your question. – user603 Nov 20 '10 at 16:27 | 2013-12-04 22:39:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7199084162712097, "perplexity": 520.3400481264441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163037829/warc/CC-MAIN-20131204131717-00085-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://nebusresearch.wordpress.com/tag/andertoons/ | ## Reading the Comics, May 18, 2018: Quincy Doesn’t Make The Cut Edition
I hate to disillusion anyone but I lack hard rules about what qualifies as a mathematically-themed comic strip. During a slow week, more marginal stuff makes it. This past week was going slow enough that I tagged Wednesday’s Quincy rerun, from March of 1979 for possible inclusion. And all it does is mention that Quincy’s got a mathematics test due. Fortunately for me the week picked up a little. It cheats me of an excuse to point out Ted Shearer’s art style to people, but that’s not really my blog’s business.
Also it may not surprise you but since I’ve decided I need to include GoComics images I’ve gotten more restrictive. Somehow the bit of work it takes to think of a caption and to describe the text and images of a comic strip feel like that much extra work.
Roy Schneider’s The Humble Stumble for the 13th of May is a logic/geometry puzzle. Is it relevant enough for here? Well, I spent some time working it out. And some time wondering about implicit instructions. Like, if the challenge is to have exactly four equally-sized boxes after two toothpicks are moved, can we have extra stuff? Can we put a toothpick where it’s just a stray edge, part of no particular shape? I can’t speak to how long you stay interested in this sort of puzzle. But you can have some good fun rules-lawyering it.
Jeff Harris’s Shortcuts for the 13th is a children’s informational feature about Aristotle. Aristotle is renowned for his mathematical accomplishments by many people who’ve got him mixed up with Archimedes. Aristotle it’s harder to say much about. He did write great texts that pop-science writers credit as giving us the great ideas about nature and physics and chemistry that the Enlightenment was able to correct in only about 175 years of trying. His mathematics is harder to summarize though. We can say certainly that he knew some mathematics. And that he encouraged thinking of subjects as built on logical deductions from axioms and definitions. So there is that influence.
Dan Thompson’s Brevity for the 15th is a pun, built on the bell curve. This is also known as the Gaussian distribution or the normal distribution. It turns up everywhere. If you plot how likely a particular value is to turn up, you get a shape that looks like a slightly melted bell. In principle the bell curve stretches out infinitely far. In practice, the curve turns into a horizontal line so close to zero you can’t see the difference once you’re not-too-far away from the peak.
Jason Chatfield’s Ginger Meggs for the 16th I assume takes place in a mathematics class. I’m assuming the question is adding together four two-digit numbers. But “what are 26, 24, 33, and 32” seems like it should be open to other interpretations. Perhaps Mr Canehard was asking for some class of numbers those all fit into. Integers, obviously. Counting numbers. Compound numbers rather than primes. I keep wanting to say there’s something deeper, like they’re all multiples of three (or something) but they aren’t. They haven’t got any factors other than 1 in common. I mention this because I’d love to figure out what interesting commonality those numbers have and which I’m overlooking.
Ed Stein’s Freshly Squeezed for the 17th is a story problem strip. Bit of a passive-aggressive one, in-universe. But I understand why it would be formed like that. The problem’s incomplete, as stated. There could be some fun in figuring out what extra bits of information one would need to give an answer. This is another new-tagged comic.
Henry Scarpelli and Craig Boldman’s Archie for the 19th name-drops calculus, credibly, as something high schoolers would be amazed to see one of their own do in their heads. There’s not anything on the blackboard that’s iconically calculus, it happens. Dilton’s writing out a polynomial, more or less, and that’s a fit subject for high school calculus. They’re good examples on which to learn differentiation and integration. They’re a little more complicated than straight lines, but not too weird or abstract. And they follow nice, easy-to-summarize rules. But they turn up in high school algebra too, and can fit into geometry easily. Or any subject, really, as remember, everything is polynomials.
Mark Anderson’s Andertoons for the 19th is Mark Anderson’s Andertoons for the week. Glad that it’s there. Let me explain why it is proper construction of a joke that a Fibonacci Division might be represented with a spiral. Fibonacci’s the name we give to Leonardo of Pisa, who lived in the first half of the 13th century. He’s most important for explaining to the western world why these Hindu-Arabic numerals were worth learning. But his pop-cultural presence owes to the Fibonacci Sequence, the sequence of numbers 1, 1, 2, 3, 5, 8, and so on. Each number’s the sum of the two before it. And this connects to the Golden Ratio, one of pop mathematics’ most popular humbugs. As the terms get bigger and bigger, the ratio between a term and the one before it gets really close to the Golden Ratio, a bit over 1.618.
So. Draw a quarter-circle that connects the opposite corners of a 1×1 square. Connect that to a quarter-circle that connects opposite corners of a 2×2 square. Connect that to a quarter-circle connecting opposite corners of a 3×3 square. And a 5×5 square, and an 8×8 square, and a 13×13 square, and a 21×21 square, and so on. Yes, there are ambiguities in the way I’ve described this. I’ve tried explaining how to do things just right. It makes a heap of boring words and I’m trying to reduce how many of those I write. But if you do it the way I want, guess what shape you have?
And that is why this is a correctly-formed joke about the Fibonacci Division.
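If you want to check that ratio claim numerically, a few lines of R (my own little snippet, nothing from the comic) will do it:
fib <- c(1, 1)
for (i in 3:20) fib[i] <- fib[i - 1] + fib[i - 2]   # each term is the sum of the two before it
ratios <- fib[-1] / fib[-length(fib)]
tail(ratios)  # settles down to (1 + sqrt(5))/2, about 1.618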
## Reading the Comics, April 28, 2018: Friday Is Pretty Late Edition
I should have got to this yesterday; I don’t know. Something happened. Should be back to normal Sunday.
Bill Rechin’s Crock rerun for the 26th of April does a joke about picking-the-number-in-my-head. There’s more clearly psychological than mathematical content in the strip. It shows off something about what people understand numbers to be, though. It’s easy to imagine someone asked to pick a number choosing “9”. It’s hard to imagine them picking “4,796,034,621,322”, even though that’s just as legitimate a number. It’s possible someone might pick π, or e, but only if that person’s a particular streak of nerd. They’re not going to pick the square root of eleven, or negative eight, or so. There’s thing that are numbers that a person just, offhand, doesn’t think of as numbers.
Mark Anderson’s Andertoons for the 26th sees Wavehead ask about “borrowing” in subtraction. It’s a riff on some of the terminology. Wavehead’s reading too much into the term, naturally. But there are things someone can reasonably be confused about. To say that we are “borrowing” ten does suggest we plan to return it, for example, and we never do that. I’m not sure there is a better term for this turning a digit in one column to adding ten to the column next to it, though. But I admit I’m far out of touch with current thinking in teaching subtraction.
Greg Cravens’s The Buckets for the 26th is kind of a practical probability question. And psychology also, since most of the time we don’t put shirts on wrong. Granted there might be four ways to put a shirt on. You can put it on forwards or backwards, you can put it on right-side-out or inside-out. But there are shirts that are harder to mistake. Collars or a cut around the neck that aren’t symmetric front-to-back make it harder to mistake. Care tags make the inside-out mistake harder to make. We still manage it, but the chance of putting a shirt on wrong is a lot lower than the 75% chance we might naively expect. (New comic tag, by the way.)
Charles Schulz’s Peanuts rerun for the 27th is surely set in mathematics class. The publication date interests me. I’m curious if this is the first time a Peanuts kid has flailed around and guessed “the answer is twelve!” Guessing the answer is twelve would be a Peppermint Patty specialty. But it has to start somewhere.
Knowing nothing about the problem, if I did get the information that my first guess of 12 was wrong, yeah, I’d go looking for 6 or 4 as next guesses, and 12 or 48 after that. When I make an arithmetic mistake, it’s often multiplying or dividing by the wrong number. And 12 has so many factors that they’re good places to look. Subtracting a number instead of adding, or vice-versa, is also common. But there’s nothing in 12 by itself to suggest another place to look, if the addition or subtraction went wrong. It would be in the question which, of course, doesn’t exist.
Maria Scrivan’s Half-Full for the 28th is the Venn Diagram joke for this week. It could include an extra circle for bloggers looking for content they don’t need to feel inspired to write. This one isn’t a new comics tag, which surprises me.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 28th uses the M&oum;bius Strip. It’s an example of a surface that you could just go along forever. There’s nothing topologically special about the M&oum;bius Strip in this regard, though. The mathematician would have as infinitely “long” a résumé if she tied it into a simple cylindrical loop. But the M&oum;bius Strip sounds more exotic, not to mention funnier. Can’t blame anyone going for that instead.
## Reading the Comics, April 14, 2018: Friday the 13th Edition?
And now I can close out last week’s mathematically-themed comic strips. There was a bunch toward the end of the week. And I’m surprised that none of the several comics to appear on Friday the 13th had anything to do with the calendar. Or at least not enough for me to talk about them.
Julie Larson’s Dinette Set rerun for the 12th is a joke built on the defining feature of (high school) algebra. The use of a number whose value we hope to figure out isn’t it. Those appear from the start of arithmetic, often as an empty square or circle or a spot of ____ that’s to be filled out. We used to give these numbers names like “thing” or “heap” or “it” or the like. Something pronoun-like. The shift to using ‘x’ as the shorthand is a legacy of the 16th century, the time when what we see as modern algebra took shape. People are frightened by it, to suddenly see letters in the midst of a bunch of numbers. But it’s no more than another number. And it communicates “algebra” in a way maybe nothing else does.
Ruben Bolling’s Tom the Dancing Bug rerun for the 12th is one of the God-Man stories. I’m delighted by the Freshman Philosophy-Major Man villain. The strip builds on questions of logic, and about what people mean by “omnipotence”. I don’t know how much philosophy of mathematics the average major takes. I suspect it’s about as much philosophy of mathematics as the average mathematics major is expected to take. (It’s an option, but I don’t remember anyone suggesting I do it, and I do feel the lost opportunity.) But perhaps later on Freshman Philosophy-Major Man would ask questions like what do we mean by “one” and “plus” and “equals” and “three”. And whether anything could, by a potent enough entity, be done about them. For the easiest way to let an omnipotent creature change something like that. WordPress is telling me this is a new tag for me. That can’t be right.
Mike Thompson’s Grand Avenue for the 13th is another resisting-the-story-problem joke, attacking the idea that a person would have ten apples. People like to joke about story problems hypothesizing people with ridiculous numbers of pieces of fruit. But ten doesn’t seem like an excessive number of apples to me; my love and I could eat that many in two weeks without trying hard. The attempted diversion would work better if it were something like forty watermelons or the like.
Mark Tatulli’s Heart of the City for the 13th has Heart and Dean complaining about their arithmetic class. I rate it as enough to include here because they go into some detail about things. I find it interesting they’re doing story problems with decimal points; that seems advanced for what I’d always taken their age to be. But I don’t know. I have dim memories of what elementary school was like, and that was in a late New Math-based curriculum.
Nick Galifianakis’s Nick and Zuzu for the 13th is a Venn diagram joke, the clearest example of one we’ve gotten in a while. I believe WordPress when it tells me this is a new tag for the comic strip.
Mark Anderson’s Andertoons for the 14th is the Mark Anderson’s Andertoons for the week. It starts at least with teaching ordinal numbers. In normal English that’s the adjective form of a number. Ordinal numbers reappear in the junior or senior year of a mathematics major’s work, as they learn enough set theory to be confused by infinities. In this guise they describe the sizes of sets of things. And they’re introduced as companions to cardinal numbers, which also describe the sizes of sets of things. They’re different, in ways that I feel like I always forget in-between reading books about infinitely large sets. The kids don’t need to worry about this yet.
## Reading the Comics, March 24, 2018: Arithmetic and Information Edition
And now I can bring last week’s mathematically-themed comics into consideration here. Including the whole images hasn’t been quite as much work as I figured. But that’s going to change, surely. One of about four things I know about life is that if you think you’ve got your workflow set up to where you can handle things you’re about to be surprised. Can’t wait to see how this turns out.
John Deering’s Strange Brew for the 22nd is edging its way toward an anthropomorphic numerals joke.
Brant Parker and Johnny Hart’s Wizard of Id for the 22nd is a statistics joke. Really a demographics joke. Which still counts; much of the historical development of statistics was in demographics. That it was possible to predict accurately the number of people in a big city who’d die, and what from, without knowing anything about whether any particular person would die was strange and astounding. It’s still an astounding thing to look directly at.
Hilary Price and Rina Piccolo’s Rhymes with Orange for the 23rd has the form of a story problem. I could imagine turning this into a proper story problem. You’d need some measure of how satisfying the 50-dollar wines are versus the 5-dollar wines. Also how much the wines affect people’s ability to notice the difference. You might be able to turn this into a differential equations problem, but that’s probably overkill.
Mark Anderson’s Andertoons for the 23rd is Mark Anderson’s Andertoons for this half of the week. It’s a student-avoiding-the-problem joke. Could be any question. But arithmetic has the advantages of being plausible, taking up very little space to render, and not confusing the reader by looking like it might be part of the joke.
John Zakour and Scott Roberts’s Working Daze for the 23rd has another cameo appearance by arithmetic. It’s also a cute reminder that there’s no problem you can compose that’s so simple someone can’t over-think it. And it puts me in mind of the occasional bit where a company’s promotional giveaway will technically avoid being a lottery by, instead of awarding prizes, awarding the chance to demonstrate a skill. Demonstration of that skill, such as a short arithmetic quiz, gets the prize. It’s a neat bit of loophole work and does depend, as the app designers here do, on the assumption there’s some arithmetic that people can be sure of being able to do.
Teresa Burritt’s Frog Applause for the 24th is its usual bit of Dadaist nonsense. But in the talk about black holes it throws in an equation: $S = \frac{A k c^3}{4 G \hbar}$. This is some mathematics about black holes, legitimate and interesting. It is the entropy of a black hole. The dazzling thing about this is all but one of those symbols on the right is the same for every black hole. ‘c’ is the speed of light, as in ‘E = mc²’. G is the gravitational constant of the universe, a measure of how strong gravity is. $\hbar$ is Planck’s constant, a kind of measure of how big quantum mechanics effects are. ‘k’ is the Boltzmann constant, which normal people have never heard of but that everyone in physics and chemistry knows well. It’s what you multiply by to switch from the temperature of a thing to the thermal energy of the thing, or divide by to go the other way. It’s the same number for everything in the universe.
The only thing custom to a particular black hole is ‘A’, which is the surface area of the black hole. I mean the surface area of the event horizon. Double the surface area of the event horizon and you double its entropy. (This isn’t doubling the radius of the event horizon; since the area grows as the square of the radius, doubling the area means growing the radius by a factor of the square root of two.) Also entropy. Hm. Everyone who would read this far into a pop mathematics blog like this knows that entropy is “how chaotic a thing is”. Thanks to people like Boltzmann we can be quantitative, and give specific and even exact numbers to the entropy of a system. It’s still a bit baffling since, superficially, a black hole seems like it’s not at all chaotic. It’s a point in space that’s got some mass to it, and maybe some electric charge and maybe some angular momentum. That’s about it. How messy can that be? It doesn’t even have any parts. This is how we can be pretty sure there’s stuff we don’t understand about black holes yet. Also about entropy.
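To put a number on it, here’s a back-of-the-envelope sketch of my own, nothing from the strip: the entropy of a black hole with the mass of the Sun, using rounded values for the constants.

```python
# Rough numerical check of S = A k c^3 / (4 G hbar) for a solar-mass
# Schwarzschild black hole. Constant values are rounded approximations.
import math

c, G, hbar, k = 2.998e8, 6.674e-11, 1.055e-34, 1.381e-23
M_sun = 1.989e30                    # kg, mass of the Sun
r = 2 * G * M_sun / c**2            # Schwarzschild radius, about 2.95 km
A = 4 * math.pi * r**2              # area of the event horizon, in m^2
S = A * k * c**3 / (4 * G * hbar)   # Bekenstein-Hawking entropy, in J/K
print(f"r = {r:.3e} m,  S = {S:.3e} J/K")   # S comes out near 1.5e54 J/K
```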
This strip might be an oblique and confusing tribute to Dr Stephen Hawking. The entropy formula described was demonstrated by Drs Jacob Bekenstein and Stephen Hawking in the mid-1970s. Or it might be coincidence.
## Reading the Comics, March 21, 2018: Old Mathematics Jokes Edition
For this, the second of my Reading the Comics postings with all the comics images included, I’ve found reason to share some old and traditional mathematicians’ jokes. I’m not sure how this happened, but sometimes it just does.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 18th brings to mind a traditional mathematics joke. A dairy hires a mathematician to improve operations. She tours the place, inspecting the cows and their feeding and the milking machines. She speaks with the workers. She interviews veterinarians. She talks with the truckers who haul out milk. She interviews the clients. Finally she starts to work on a model of better milk production. The first line: “Assume a spherical cow.”
One big field of mathematics is model-building. When doing that you have to think about the thing you model. It’s hard. You have to throw away all the complicating stuff that makes your questions too hard to answer. But you can’t throw away all the complicating stuff or you have a boring question to answer. Depending on what kinds of things you want to know, you’ll need different models. For example, for some atmosphere problems you’ll do fine if you assume the air has no viscosity. For others that’s a stupid assumption. For some you can ignore that the planet rotates and is heated on one side by the sun. For some you don’t dare do that. And so on. The simplifications you can make aren’t always obvious. Sometimes you can ignore big stuff; a satellite’s orbit, for example, can be treated well by pretending that the whole universe except for the Earth doesn’t exist. Depends what you’re looking for. If the universe were homogenous enough, it would all be at the same temperature. Is that useful to your question? That’s the trick.
Mark Anderson’s Andertoons for the 20th is the Mark Anderson’s Andertoons for this essay. It’s just a student trying to distract the issue from fractions. I suppose mathematics was chosen for the blackboard problem because if it were, say, a history or an English or a science question someone would think that was part of the joke and be misled. Fractions, though, those have the signifier of “the thing we’d rather not talk about”.
Daniel Beyer’s Long Story Short for the 21st is a mathematicians-mindset sort of joke. Let me offer another. I went to my love’s college reunion. On the mathematics floor of the new sciences building the dry riser was labelled as “N Bourbaki”. Let me explain why this is a correctly-formed and therefore very funny mathematics joke. “Nicolas Bourbaki” was the pseudonym used by the mathematical equivalent of an artist’s commune, in France, through several decades of the mid-20th century. Their goal was setting mathematics on a rigorous and intuition-free basis, the way mathematicians sometimes like to pretend it is. Bourbaki’s influential nonexistence led to various amusing-for-academia problems and you can see why a fake office is appropriately named so, then. (This is the first time I’ve tagged this strip, looks like.)
Harley Schwadron’s 9 to 5 for the 21st is a name-drop of Einstein’s famous equation as a power tie. I must agree this meets the literal specification of a power tie since, you know, c² is in it. Probably something more explicitly about powers wouldn’t communicate as well. Possibly Fermat’s Last Theorem, although I’m not sure that would fit and be legible on the tie as drawn.
Mark Pett’s Lucky Cow rerun for the 21st has the generally inept Neil work out a geometry problem in his head. The challenge is having a good intuitive model for what the relationship between the shapes should be. I’m relieved to say that Neil is correct, to the number of decimal places given. I’m relieved because I’ve spent embarrassingly long at this. My trouble was missing, twice over, that the question gave diameters instead of radiuses. Pfaugh. Saving me was just getting answers that were clearly crazy, including at one point 21 1/3.
Zach Weinersmith, Chris Jones and James Ashby’s Snowflakes for the 21st mentions Euler’s Theorem in the first panel. Trouble with saying “Euler’s Theorem” is that Euler had something like 82 trillion theorems. If you ever have to bluff your way through a conversation with a mathematician mention “Euler’s Theorem”. You’ll probably have said something on point, if closer to the basics of the problem than people figured. But the given equation — $e^{\imath \pi} + 1 = 0$ — is a good bet for “the” Euler’s Theorem. It’s a true equation, and it ties together a lot of interesting stuff about complex-valued numbers. It’s the way mathematicians tie together exponentials and simple harmonic motion. It makes so much stuff easier to work with. It would not be one of the things presented in a Distinctly Useless Mathematics text. But it would be mentioned along the way to something fascinating and useless. It turns up everywhere. (This is another strip I’m tagging for the first time.)
Wulff and Morgenthaler’s WuMo for the 21st uses excessively complicated mathematics stuff as a way to signify intelligence. Also to name-drop Massachusetts Institute of Technology as a signifier of intelligence. (My grad school was Rensselaer Polytechnic Institute, which would totally be MIT’s rival school if we had enough self-esteem to stand up to MIT. Well, on a good day we can say snarky stuff about the Rochester Institute of Technology if we don’t think they’re listening.) Putting the “Sigma” in makes the problem literally nonsense, since “Sigma” doesn’t signify any particular number. The rest are particular numbers, though. π/2 times 4 is just 2π, a bit more than 6.28. That’s a weird number of apples to have but it’s a perfectly legitimate number. The square root of the cosine of 68 … ugh. Well, assuming this is 68 as in radians I don’t have any real idea what that would be either. If this is 68 degrees, then I do know, actually; the cosine of 68 degrees is a little smaller than ½. But mathematicians are trained to suspect degrees in trig functions, going instead for radians.
Well, hm. 68 would be between 10 times 2π and 11 times 2π. I think that’s a little more than one radian short of 11 times 2π. Oh, maybe it is something like ½. Let me check with an actual calculator. Huh. It is a little more than 0.440. Well, that’s a once-in-a-lifetime shot. Anyway the square root of that is a little more than 0.663. So you’d be left with about five and a half apples. Never mind this Sigma stuff. (A little over 5.619, to be exact.)
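If you’d rather not take my calculator work on faith, a two-line check backs it up. This is my own snippet, not anything from the strip, ignoring the Sigma and taking the 68 inside the cosine to be in radians:

```python
# Checking the strip's arithmetic: 4 times pi/2 apples, minus the
# square root of cos(68), with 68 read as radians.
import math

cos68 = math.cos(68)
apples = 4 * (math.pi / 2) - math.sqrt(cos68)
print(cos68)    # a little over 0.440
print(apples)   # a little over 5.619
```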
## Reading the Comics, February 26, 2018: Possible Reruns Edition
Comic Strip Master Command spent most of February making sure I could barely keep up. It didn’t slow down the final week of the month either. Some of the comics were those that I know are in eternal reruns. I don’t think I’m repeating things I’ve already discussed here, but it is so hard to be sure.
Bill Amend’s FoxTrot for the 24th of February has a mathematics problem with a joke answer. The approach to finding the area’s exactly right. It’s easy to find areas of simple shapes like rectangles and triangles and circles and half-circles. Cutting a complicated shape into known shapes, finding those areas, and adding them together works quite well, most of the time. And that’s intuitive enough. There are other approaches. If you can describe the outline of a shape well, you can use an integral along that outline to get the enclosed area. And that amazes me even now. One of the wonders of calculus is that you can swap information about a boundary for information about the interior, and vice-versa. It’s a bit much for even Jason Fox, though.
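That boundary-to-interior swap has a tidy statement behind it. This isn’t anything quoted from the strip, just the standard result: for a region R whose boundary you can trace out as a closed curve, Green’s theorem gives the enclosed area as a line integral along that boundary,

$A = \frac{1}{2} \oint_{\partial R} \left( x\, dy - y\, dx \right)$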
Jef Mallett’s Frazz for the 25th is a dispute between Mrs Olsen and Caulfield about whether it’s possible to give more than 100 percent. I come down, now as always, on the side that argues it depends what you figure 100 percent is of. If you mean “100% of the effort it’s humanly possible to expend” then yes, there’s no making more than 100% of an effort. But there is an amount of effort reasonable to expect for, say, an in-class quiz. It’s far below the effort one could possibly humanly give. And one could certainly give 105% of that effort, if desired. This happens in the real world, of course. Famously, in the right circles, the Space Shuttle Main Engines normally reached 104% of full throttle during liftoff. That’s because the original specifications for what full throttle would be turned out to be lower than was ultimately needed. And it was easier to plan around running the engines at greater-than-100%-throttle than it was to change all the earlier design documents.
Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 25th straddles the line between Pi Day jokes and architecture jokes. I think this is a rerun, but am not sure.
Matt Janz’s Out of the Gene Pool rerun for the 25th tosses off a mention of “New Math”. It’s referenced as a subject that’s both very powerful but also impossible for Pop, as an adult, to understand. It’s an interesting denotation. Usually “New Math”, if it’s mentioned at all, is held up as a pointlessly complicated way of doing simple problems. This is, yes, the niche that “Common Core” has taken. But Janz’s strip might be old enough to predate people blaming everything on Common Core. And it might be character, that the father is old enough to have heard of New Math but not anything in the nearly half-century since. It’s an unusual mention in that “New” Math is credited as being good for things. (I’m aware this strip’s a rerun. I had thought I’d mentioned it in an earlier Reading the Comics post, but can’t find it. I am surprised.)
Mark Anderson’s Andertoons for the 26th is a reassuring island of normal calm in these trying times. It’s a student-at-the-blackboard problem.
Morrie Turner’s Wee Pals rerun for the 26th just mentions arithmetic as the sort of homework someone would need help with. This is another one of those reruns I’d have thought has come up here before, but hasn’t.
## Reading the Comics, February 11, 2018: February 11, 2018 Edition
And it’s not always fair to say that the gods mock any plans made by humans, but Comic Strip Master Command has been doing its best to break me of reading and commenting on any comic strip with a mathematical theme. I grant that I could make things a little easier if I demanded more from a comic strip before including it here. But even if I think a theme is slight that doesn’t mean the reader does, and it’s easy to let the eye drop to the next paragraph if the reader does think it’s too slight. The anthology nature of these posts is part of what works for them. And then sometimes Comic Strip Master Command sends me a day like last Sunday when everybody was putting in some bit of mathematics. There’ll be another essay on the past week’s strips, never fear. But today’s is just for the single day.
Susan Camilleri Konar’s Six Chix for the 11th illustrates the Lemniscate Family. The lemniscate is a shape well known as the curve made by a bit of water inside a narrow tube by people who’ve confused it with a meniscus. An actual lemniscate is, as the chain of pointing fingers suggests, a figure-eight shape. You get — well, I got — introduced to them in prealgebra. They’re shapes really easy to describe in polar coordinates but a pain to describe in Cartesian coordinates. There are several different kinds of lemniscates, each satisfying slightly different conditions while looking roughly like a figure eight. If you’re open to the two lobes of the shape not being the same size there’s even a kind of famous-ish lemniscate called the analemma. This is the figure traced out by the sun if you look at its position from a set point on the surface of the Earth at the same clock time each day over the course of the year. That the sun moves north and south from the horizon is easy to spot. That it is sometimes east or west of some reference spot is a surprise. It shows the difference between the movement of the mean sun, the sun as we’d see it if the Earth had a perfectly circular orbit, and the messy actual thing. Dr Helmer Aslasken has a fine piece about this, and how it affects when the sun rises earliest and latest in the year.
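To make that polar-versus-Cartesian gap concrete (this is the textbook lemniscate of Bernoulli, not anything drawn from the strip), compare the polar form with the Cartesian one:

$r^2 = a^2 \cos 2\theta \qquad \text{versus} \qquad \left(x^2 + y^2\right)^2 = a^2 \left(x^2 - y^2\right)$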
There’s also a thing called the “polynomial lemniscate”. This is a level curve of a polynomial. That is, what are all the possible values of the independent variable which cause the polynomial to evaluate to some particular number? This is going to be a polynomial in a complex-valued variable, in order to get one or more closed and (often) wriggly loops. A polynomial of a real-valued variable would typically give you a boring shape. There’s a bunch of these polynomial lemniscates that approximate the boundary of the Mandelbrot Set, that fractal that you know from your mathematics friend’s wall in 1992.
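A minimal example of the polynomial kind, for what it’s worth: take the polynomial $z^2 - 1$ and ask where its modulus equals one. That level curve, $\left| z^2 - 1 \right| = 1$, is exactly the set of points whose distances to $+1$ and to $-1$ multiply to one, which is the figure-eight lemniscate of Bernoulli again.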
Mark Anderson’s Andertoons took care of being Mark Anderson’s Andertoons early in the week. It’s a bit of optimistic blackboard work.
Lincoln Pierce’s Big Nate features the formula for calculating the wind chill factor. Francis reads out what is legitimately the formula for estimating the wind chill temperature. I’m not going to get into whether the wind chill formula makes sense as a concept because I’m not crazy. The thinking behind it is that a windless temperature feels about the same as a different temperature with a particular wind. How one evaluates those equivalences offers a lot of room for debate. The formula as the National Weather Service, and Francis, offer looks frightening, but isn’t really hard. It’s not a polynomial, in terms of temperature and wind speed, but it’s close to that in form. The strip is rerun from the 15th of February, 2009, as Lincoln Pierce has had some not-publicly-revealed problem taking him away from the comic for about a month and a half now.
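For the curious, the National Weather Service formula, with the wind chill and the air temperature T in degrees Fahrenheit and the wind speed v in miles per hour, runs (if I’ve got the coefficients right):

$T_{wc} = 35.74 + 0.6215\, T - 35.75\, v^{0.16} + 0.4275\, T\, v^{0.16}$

Not a polynomial, because of those 0.16 powers, but close to one in spirit.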
Jim Scancarelli’s Gasoline Alley included a couple of mathematics formulas, including the famous E = mc² and the slightly less famous πr², as part of Walt Wallet’s fantasy of advising scientists and inventors. (Scientists have already heard both.) There’s a curious stray bit in the corner, writing out 6.626 x 10² x 3 that I wonder about. 6.626 is the first couple digits of Planck’s Constant, as measured in Joule-seconds. (This is h, not h-bar, I say for the person about to complain.) It’d be reasonable for Scancarelli to have drawn that out of a physics book or reference page. But the exponent is all wrong, even if you suppose he mis-wrote 10²³. It should be 6.626 x 10⁻³⁴. So I don’t know whether Scancarelli got things very garbled, or if he just picked a nice sciencey-looking number and happened to hit on a significant one. (There’s enough significant science numbers that he’d have a fair chance of finding something.) The strip is a reprint from the 4th of February, 2007, as Jim Scancarelli has been absent for no publicly announced reason for four months now.
Greg Evans and Karen Evans’s Luann is not perfectly clear. But I think it’s presenting Gunther doing mathematics work to support his mother’s contention that he’s smart. There’s no working out what work he’s doing. But then we might ask how smart his mother is to have made that much food for just the two of them. Also that I think he’s eating a potato by hand? … Well, there are a lot of kinds of food that are hard to draw.
Greg Evans’s Luann Againn reprints the strip from the 11th of February (again), 1990. It mentions as one of those fascinating things of arithmetic an easy test to see if a number’s a multiple of nine. There are several tricks like this, although the only ones anybody can remember are finding multiples of 3 and finding multiples of 9. Well, they know the rules for something being a multiple of 2, 5, or 10, but those hardly look like rules, and there’s no addition needed. Similarly with multiples of 4.
Modular arithmetic underlies all these rules. Once you know the trick you can use it to work out your own add-up-the-numbers rules to find what numbers are multiples of small numbers. Here’s an example. Think of a three-digit number. Suppose its first digit is ‘a’, its second digit ‘b’, and its third digit ‘c’. So we’d write the number as ‘abc’, or, 100a + 10b + 1c. What’s this number equal to, modulo 9? Well, 100a modulo 9 has to be equal to whatever a modulo 9 is: (100 a) modulo 9 is (100) modulo 9 — that is, 1 — times (a) modulo 9. 10b modulo 9 is (10) modulo 9 — again, 1 — times (b) modulo 9. 1c modulo 9 is … well, (c) modulo 9. Add that all together and you have a + b + c modulo 9. If a + b + c is some multiple of 9, so must be 100a + 10b + 1c.
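If you’d rather check the rule by brute force than trust the modular arithmetic, a couple of lines of code (my own, nothing to do with the strip) will do it:

```python
# Brute-force check that a number is a multiple of 9 exactly when the sum
# of its digits is, over the first hundred thousand positive integers.
def digit_sum(n):
    return sum(int(digit) for digit in str(n))

assert all((n % 9 == 0) == (digit_sum(n) % 9 == 0) for n in range(1, 100_000))
print("digit-sum rule for 9 holds up to 100,000")
```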
The rules about whether something’s divisible by 2 or 5 or 10 are easy to work with since 10 is a multiple of 2, and of 5, and for that matter of 10, so that 100a + 10b + 1c modulo 10 is just c modulo 10. You might want to let this settle. Then, if you like, practice by working out what an add-the-digits rule for multiples of 11 would be. (This is made a lot easier if you remember that 10 is equal to 11 – 1.) And if you want to show off some serious arithmetic skills, try working out an add-the-digits rule for finding whether something’s a multiple of 7. Then you’ll know why nobody has ever used that for any real work.
J C Duffy’s Lug Nuts plays on the equivalence people draw between intelligence and arithmetic ability. Also on the idea that brain size should have some particularly strong link to intelligence. Really anyone having trouble figuring out 15% of $10 is psyching themselves out. They’re too overwhelmed by the idea of percents being complicated to realize that it’s, well, ten times 15 cents.
## Reading the Comics, January 22, 2018: Breaking Workflow Edition
So I was travelling last week, and this threw nearly all my plans out of whack. We stayed at one of those hotels that’s good enough that its free Internet is garbage and they charge you by day for decent Internet. So naturally Comic Strip Master Command sent a flood of posts. I’m trying to keep up and we’ll see if I wrap up this past week in under three essays. And I am not helped, by the way, by GoComics.com rejiggering something on their server so that My Comics Page won’t load, and breaking their “Contact Us” page so that that won’t submit error reports. If someone around there can break in and turn one of their servers off and on again, I’d appreciate the help.
Hy Eisman’s Katzenjammer Kids for the 21st of January is a curiously-timed Tax Day joke. (Well, the Katzenjammer Kids lapsed into reruns a dozen years ago and there’s probably not much effort being put into selecting seasonally appropriate ones.) But it is about one of the oldest and still most important uses of mathematics, and one that never gets respect.
Morrie Turner’s Wee Pals rerun for the 21st gets Oliver the reputation for being a little computer because he’s good at arithmetic. There is something that amazes in a person who’s able to calculate like this without writing anything down or using a device to help.
Steve Kelley and Jeff Parker’s Dustin for the 22nd seems to be starting off with a story problem. It might be a logic problem rather than arithmetic. It’s hard to say from what’s given.
Mark Anderson’s Andertoons for the 22nd is the Mark Anderson’s Andertoons for the week. Well, for Monday, as I write this. It’s got your classic blackboard full of equations for the people in over their head. The equations look to me like gibberish. There’s a couple diagrams of aromatic organic compounds, which suggests some quantum-mechanics chemistry problem, if you want to suppose this could be narrowed down.
Greg Evans’s Luann Againn for the 22nd has Luann despair about ever understanding algebra without starting over from scratch and putting in excessively many hours of work. Sometimes it feels like that. My experience when lost in a subject has been that going back to the start often helps. It can be easier to see why a term or a concept or a process is introduced when you’ve seen it used some, and often getting one idea straight will cause others to fall into place. When that doesn’t work, trying a different book on the same topic — even one as well-worn as high school algebra — sometimes helps. Just a different writer, or a different perspective on what’s key, can be what’s needed. And sometimes it just does take time working at it all.
Richard Thompson’s Richard’s Poor Almanac rerun for the 22nd includes as part of a kit of William Shakespeare paper dolls the Typing Monkey. It’s that lovely, whimsical figure that might, in time, produce any written work you could imagine. I think I’d retired monkeys-at-typewriters as a thing to talk about, but I’m easily swayed by Thompson’s art and comic stylings so here it is.
Darrin Bell and Theron Heir’s Rudy Park for the 18th throws around a lot of percentages. It’s circling around the sabermetric-style idea that everything can be quantified, and measured, and that its changes can be tracked. In this case it’s comments on Star Trek: Discovery, but it could be anything. I’m inclined to believe that yeah, there’s an astounding variety of things that can be quantified and measured and tracked. But it’s also easy, especially when you haven’t got a good track record of knowing what is important to measure, to start tracking what amounts to random noise. (See any of my monthly statistics reviews, when I go looking into things like views-per-visitor-per-post-made or some other dubiously meaningful quantity.) So I’m inclined to side with Randy and his doubts that the Math Gods sanction this much data-mining.
## Reading the Comics, January 20, 2018: Increased Workload Edition
It wasn’t much of an increased workload, really. I mean, none of the comics required that much explanation. But Comic Strip Master Command donated enough topics to me last week that I have a second essay for the week. And here it is; sorry there’s no pictures.
Mark Anderson’s Andertoons for the 17th is the Mark Anderson’s Andertoons we’ve been waiting for. It returns to fractions and their frustrations for its comic point.
Jef Mallett’s Frazz for the 17th talks about story problems, although not to the extent of actually giving one as an example. It’s more about motivating word-problem work.
Mike Thompson’s Grand Avenue for the 17th is an algebra joke. I’d call it a cousin to the joke about mathematics’s ‘x’ not coming back and we can’t say ‘y’. On the 18th was one mentioning mathematics, although in a joke structure that could have been any subject.
Lorrie Ransom’s The Daily Drawing for the 18th is another name-drop of mathematics. I guess it’s easier to use mathematics as the frame for saying something’s just a “problem”. I don’t think of, say, identifying the themes of a story as a problem in the way that finding the roots of a quadratic is.
Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 18th is an anthropomorphic-geometric-figures joke that I’m all but sure is a rerun I’ve shared here before. I’ll try to remember to check before posting this.
Mikael Wulff and Anders Morgenthaler’s WuMo for the 20th gives us a return of the pie chart joke that seems like it’s been absent a while. Worth including? Eh, why not.
## Reading the Comics, January 6, 2018: Terms Edition
The last couple days of last week saw a rush of comics, although most of them were simpler things to describe. Bits of play on words, if you like.
Samson’s Dark Side of the Horse for the 4th of January, 2018, is one that plays on various meanings of “average”. The mean, alluded to in the first panel, is the average most people think of first. Where you have a bunch of values representing instances of something, add up the values, and divide by the number of instances. (Properly that’s the arithmetic mean. There’s some others, such as the geometric mean, but if someone’s going to use one of those they give you clear warning.) The median, in the second, is the midpoint, the number that half of all instances are less than. So you see the joke. If the distribution of intelligence is normal — which is a technical term, although it does mean “not freakish” — then the median and the mean should be equal. If you had infinitely many instances, and they were normally distributed, the two would be equal.
With finitely many instances, the mean and the median won’t be exactly in line, for the same reason that if you fairly toss a coin two million times it won’t turn up heads exactly one million times.
Dark Side of the Horse for the 5th delivers the Roman numerals joke of the year. And I did have to think about whether ‘D’ is a legitimate Roman numeral. This would be easier to remember before 1900.
Mike Lester’s Mike du Jour for the 4th is geometry wordplay. I’m not sure the joke stands up to scrutiny, but it lands well enough initially.
Johnny Hart’s Back to BC for the 5th goes to the desire to quantify and count things. And to double-check what other people tell you about this counting. It’s easy, today, to think of the desire to quantify things as natural to humans. I’m not confident that it is. The history of statistics shows this gradual increase in the number and variety of things getting tracked. This strip originally ran the 11th of July, 1960.
Bill Watterson’s Calvin and Hobbes for the 5th talks about averages again. And what a population average means for individuals. It doesn’t mean much. The glory of statistics is that groups are predictable in a way that individuals are not.
John Graziano’s Ripley’s Believe It Or Not for the 5th features a little arithmetic coincidence, that multiplying 21,978 by four reverses its digits. It made me think of Ray Kassinger’s question the other day about parasitic numbers. But this isn’t a parasitic number. A parasitic number is one with a value, multiplied by a particular number, that’s the same as you get by moving its last digit to the front. Flipping the order of digits seems like it should be something and I don’t know what.
Mark Anderson’s Andertoons for the 6th is a confident reassurance that 2018 is a normal, healthy year after all. Or can be. Prime numbers.
Mark O’Hare’s Citizen Dog rerun for the 6th is part of a sequence in which Fergus takes a (human) child’s place in school. Mathematics gets used as a subject that’s just a big pile of unfamiliar terms if you just jump right in. Most subjects are like this if you take them seriously, of course. But mathematics has got an economy of technical terms to stuff into people’s heads, and that have to be understood to make any progress. In grad school my functional analysis professor took great mercy on us, and started each class with re-writing the definitions of all the technical terms introduced the previous class. Also of terms that might be a bit older, but that are important to get right, which is why I got through it confident I knew what a Sobolev Space was. (It’s a collection of functions that have enough derivatives to do your differential equations problem.) Numerator and denominator, we’re experts on by now.
## Reading the Comics, December 16, 2017: Andertoons Drought Ended Edition
And now, finally, we get what we’ve been waiting so long for: my having enough energy and time to finish up last week’s comics. And I make excuses to go all fanboy over Elzie Segar’s great Thimble Theatre. Also more attention to Zach Weinersmith. You’ve been warned.
Mark Anderson’s Andertoons for the 13th is finally a breath of Mark Anderson’s Andertoons around here. Been far too long. Anyway it’s an algebra joke about x’s search for identity. And as often happens I’m sympathetic here. It’s not all that weird to think of ‘x’ as a label for some number. Knowing whether it means “a number whose value we haven’t found yet” or “a number whose value we don’t care about” is one trick, though.
It’s not something you get used to from learning about, like, ‘6’. And knowing whether we can expect ‘x’ to have held whatever value it represented before, or whether we can expect it to be something different, is another trick.
Doug Bratton’s Pop Culture Shock Therapy for the 13th I feel almost sure has come up here before. Have I got the energy to find where? Oh, yes. It ran the 5th of September, 2015.
David Gilbert’s Buckles for the 14th is a joke on animals’ number sense. In fairness, after that start I wouldn’t know whether to go for four or five barks myself.
Bud Blake’s Tiger for the 15th is a bit of kid logic about how to make a long column of numbers easier to add. I endorse the plan of making the column shorter, although I’d do that by trying to pair up numbers that, say, add to 10 or 20 or something else easy to work with. Partial sums can make the overall work so much easier. And probably avoid mistakes.
Elzie Segar’s Thimble Theatre for the 8th of July, 1931, is my most marginal inclusion yet. It was either that strip or the previous day’s worth including. I’m throwing it in here because Segar’s Thimble Theatre keeps being surprisingly good. And, heck, slowing a count by going into fractions is a viable way to do it. As the clobbered General Bunzo points out, you can drag this out longer by going into hundredths. Or smaller units. There is no largest real number less than ten; if it weren’t incredibly against the rules, boxers could make use of that.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 15th is about those mathematics problems with clear and easy-to-understand statements whose answers defy intuition. Weinersmith is completely correct about all of this. I’m surprised he doesn’t mention the one about how you could divide an orange into five pieces, reassemble the pieces, and get back two spheres each the size of a sun.
## Reading the Comics, November 25, 2017: Shapes and Probability Edition
This week was another average-grade week of mathematically-themed comic strips. I wonder if I should track them and see what spurious correlations between events and strips turn up. That seems like too much work and there’s better things I could do with my time, so it’s probably just a few weeks before I start doing that.
Ruben Bolling’s Super-Fun-Pax Comics for the 19th is an installment of A Voice From Another Dimension. It’s in that long line of mathematics jokes that are riffs on Flatland, and how we might try to imagine spaces other than ours. They’re taxing things. We can understand some of the rules of them perfectly well. Does that mean we can visualize them? Understand them? I’m not sure, and I don’t know a way to prove whether someone does or does not. This wasn’t one of the strips I was thinking of when I tossed “shapes” into the edition title, but you know what? It’s close enough to matching.
Olivia Walch’s Imogen Quest for the 20th — and I haven’t looked, but it feels to me like I’m always featuring Imogen Quest lately — riffs on the Monty Hall Problem. The problem is based on a game never actually played on Monty Hall’s Let’s Make A Deal, but very like ones they do. There’s many kinds of games there, but most of them amount to the contestant making a choice, and then being asked to second-guess the choice. In this case, pick a door and then second-guess whether to switch to another door. The Monty Hall Problem is a great one for Internet commenters to argue about while the rest of us do something productive.
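Under the assumptions I’d take (the host knows where the car is, always opens a goat door, and always offers the switch; more on those assumptions in a moment), a quick simulation shows the switcher winning about two-thirds of the time. This is a sketch of my own, not anything from Walch’s strip:

```python
# Monty Hall simulation under the standard assumptions: the host knows
# where the car is, always opens a goat door, and always offers the switch.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # about 1/3
print(play(switch=True))   # about 2/3
```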
The trouble — well, one trouble — is that whether switching improves your chance to win the car depends on the rules of the game. It’s not stated, for example, whether the host must open a door showing a goat behind it. It’s not stated that the host certainly knows which doors have goats and so chooses one of those. It’s not certain the contestant even wants a car when, hey, goats. What assumptions you make about these issues affects the outcome. If you take the assumptions that I would, given the problem — the host knows which door the car’s behind, and always offers the choice to switch, and the contestant would rather have a car, and such — then Walch’s analysis is spot on.
Jonathan Mahood’s Bleeker: The Rechargeable Dog for the 20th features a pretend virtual reality arithmetic game. The strip is of incredibly low mathematical value, but it’s one of those comics I like that I never hear anyone talking about, so, here.
Richard Thompson’s Cul de Sac rerun for the 20th talks about shapes. And the names for shapes. It does seem like mathematicians have a lot of names for slightly different quadrilaterals. In our defense, if you’re talking about these a lot, it helps to have more specific names than just “quadrilateral”. Rhomboids are parallelograms whose two pairs of sides have different lengths. A parallelogram has to have two pairs of equal-sized legs, but the two pairs’ sizes can be different. Not so a rhombus, which has all four sides the same length. Mathworld says a rhombus with a narrow angle that’s 45 degrees is sometimes called a lozenge, but I say they’re fibbing. They make even more preposterous claims on the “lozenge” page.
Todd Clark’s Lola for the 20th does the old “when do I need to know algebra” question and I admit getting grumpy like this when people ask. Do French teachers have to put up with this stuff?
Brian Fies’s Mom’s Cancer rerun for the 23rd is from one of the delicate moments in her story. Fies’s mother just learned the average survival rate for her cancer treatment is about five percent and, after months of things getting haltingly better, is shaken. But as with most real-world probability questions context matters. The five-percent chance is, as described, the chance someone who’d just been diagnosed in the state she’d been diagnosed in would survive. The information that she’s already survived months of radiation and chemical treatment and physical therapy means they’re now looking at a different question. What is the chance she will survive, given that she has survived this far with this care?
Mark Anderson’s Andertoons for the 24th is the Mark Anderson’s Andertoons for the week. It’s a protesting-student kind of joke. For the student’s question, I’m not sure how many sides a polygon has before we can stop memorizing them. I’d say probably eight. Maybe ten. Of the shapes whose names people actually care about, mm. Circle, triangle, a bunch of quadrilaterals, pentagons, hexagons, octagons, maybe decagon and dodecagon. No, I’ve never met anyone who cared about nonagons. I think we could drop heptagons without anyone noticing either. Among quadrilaterals, ugh, let’s see. Square, rectangle, rhombus, parallelogram, trapezoid (or trapezium), and I guess diamond although I’m not sure what that gets you that rhombus doesn’t already. Toss in circles, ellipses, and ovals, and I think that’s all the shapes whose names you use.
Stephan Pastis’s Pearls Before Swine for the 25th does the rounding-up joke that’s been going around this year. It’s got a new context, though.
## Reading the Comics, October 14, 2017: Physics Equations Edition
So that busy Saturday I promised for the mathematically-themed comic strips? Here it is, along with a Friday that reached the lowest non-zero levels of activity.
Stephan Pastis’s Pearls Before Swine for the 13th is one of those equations-of-everything jokes. Naturally it features a panel full of symbols that, to my eye, don’t parse. There are what look like syntax errors; for example, anyone can see the { mark that isn’t balanced by a }. But when someone works rough they will, often, write stuff that doesn’t quite parse. Think of it as an artist’s rough sketch of a complicated scene: the lines and anatomy may be gibberish, but if the major lines of the composition are right then all is well.
Most attempts to write an equation for everything are really about writing a description of the fundamental forces of nature. We trust that it’s possible to go from a description of how gravity and electromagnetism and the nuclear forces go to, ultimately, a description of why chemistry should work and why ecologies should form and there should be societies. There are, as you might imagine, a number of assumed steps along the way. I would accept the idea that we’ll have a unification of the fundamental forces of physics this century. I’m not sure I would believe having all the steps between the fundamental forces and, say, how nerve cells develop worked out in that time.
Mark Anderson’s Andertoons makes its overdue appearance for the week on the 14th, with a chalkboard word-problem joke. Amusing enough. And estimating an answer, getting it wrong, and refining it is good mathematics. It’s not just numerical mathematics that will look for an approximate solution and then refine it. As a first approximation, 15 minus 7 isn’t far off 10. And for mental arithmetic approximating 15 minus 7 as 10 is quite justifiable. It could be made more precise if a more exact answer were needed.
Maria Scrivan’s Half Full for the 14th I’m going to call the anthropomorphic geometry joke for the week. If it’s not then it’s just wordplay and I’d have no business including it here.
Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 14th tosses in the formula describing how strong the force of gravity between two objects is. In Newtonian gravity, which is why it’s the Newton Police. It’s close enough for most purposes. I’m not sure how this supports the cause of world peace.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th names Riemann’s Quaternary Conjecture. I was taken in by the panel, trying to work out what the proposed conjecture could even mean. The reason it works is that Bernhard Riemann wrote like 150,000 major works in every field of mathematics, and about 149,000 of them are big, important foundational works. The most important Riemann conjecture would be the one about zeroes of the Riemann Zeta function. This is typically called the Riemann Hypothesis. But someone could probably write a book just listing the stuff named for Riemann, and that’s got to include a bunch of very specific conjectures.
## Reading the Comics, October 4, 2017: Time-Honored Traditions Edition
It was another busy week in mathematically-themed comic strips last week. Busy enough I’m comfortable rating some as too minor to include. So it’s another week where I post two of these Reading the Comics roundups, which is fine, as I’m still recuperating from the Summer 2017 A To Z project.
This first half of the week includes a lot of rerun comics, and you’ll see why my choice of title makes sense.
Lincoln Pierce’s Big Nate: First Class for the 1st of October reprints the strip from the 2nd of October, 1993. It’s got a well-formed story problem that, in the time-honored tradition of this setup, is subverted. I admit I kind of miss the days when exams would have problems typed out in monospace like this.
Ashleigh Brilliant’s Pot-Shots for the 1st is a rerun from sometime in 1975. And it’s an example of the time-honored tradition of specifying how many statistics are made up. Here it comes in at 43 percent of statistics being “totally worthless” and I’m curious how the number attached to this form of joke changes over time.
The Joey Alison Sayers Comic for the 2nd uses a blackboard with mathematics — a bit of algebra and a drawing of a sphere — as the designation for genius. That’s all I have to say about this. I remember being set straight about the difference between ponies and horses and it wasn’t by my sister, who’s got a professional interest in the subject.
Mark Pett’s Lucky Cow rerun for the 2nd is a joke about cashiers trying to work out change. As one of the GoComics.com commenters mentions, probably the best way to do this is to count up from the purchase to the amount you have to give change for. That is, work out that $12.43 to $12.50 is seven cents, then from $12.50 to $13.00 is fifty more cents (57 cents total), then from $13.00 to $20.00 is seven dollars ($7.57 total) and then from $20 to $50 is thirty dollars ($37.57 total). It does make me wonder, though: what did Neil enter as the amount tendered, if it wasn’t $50? Maybe he hit “exact change” or whatever the equivalent was. It’s been a long, long time since I worked a cash register job and while I would occasionally type in the wrong amount of money, the kinds of errors I would make would be easy to correct for. (Entering $30 instead of $20 for the tendered amount, that sort of thing.) But the cash register works however Mark Pett decides it works, so who am I to argue?
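That counting-up routine is easy enough to write out as a little algorithm, too. Here’s a minimal sketch of my own, not anything from the strip or its commenters, working in whole cents to dodge floating-point trouble and assuming the amount tendered is a whole number of dollars:

```python
def count_up_change(price_cents, tendered_cents):
    """Count up from the purchase price to the amount tendered: small
    coins to reach the next round figure, then bills the rest of the way."""
    handed = []
    running = price_cents
    # Coins first, each used only until the next-larger coin divides evenly.
    next_bigger = {1: 5, 5: 10, 10: 25, 25: 100}
    for coin in (1, 5, 10, 25):
        while running % next_bigger[coin] and running + coin <= tendered_cents:
            running += coin
            handed.append(coin)
    # Then bills: the biggest one that still fits, up to the tendered amount.
    for bill in (2000, 1000, 500, 100):
        while running + bill <= tendered_cents:
            running += bill
            handed.append(bill)
    return handed

# $12.43 paid from a $50: pennies and a nickel to $12.50, quarters to
# $13.00, then bills the rest of the way -- $37.57 in change altogether.
print(count_up_change(1243, 5000))
```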
Keith Robinson’s Making It rerun for the 2nd includes a fair bit of talk about ratios and percentages, and how to inflate percentages. Also about the underpaying of employees by employers.
Mark Anderson’s Andertoons for the 3rd continues the streak of being the Mark Anderson’s Andertoons for this sort of thing. It has the traditional form of the student explaining why the teacher’s wrong to say the answer was wrong.
Brian Fies’s The Last Mechanical Monster for the 4th includes a bit of legitimate physics in the mad scientist’s captioning. Ballistic arcs are about a thing given an initial speed in a particular direction, moving under constant gravity, without any of the complicating problems of the world involved. No air resistance, no curvature of the Earth, level surfaces to land on, and so on. So, if you start from a given height (‘y₀’) and a given speed (‘v’) at a given angle (‘θ’) when the gravity is a given strength (‘g’), how far will you travel? That’s ‘d’. How long will you travel? That’s ‘t’, as worked out here.
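For the record, the standard textbook result for a level landing surface (which may not be exactly how Fies lays it out on the chalkboard) gives the flight time and the distance as

$t = \frac{v \sin\theta + \sqrt{v^2 \sin^2\theta + 2 g y_0}}{g}, \qquad d = v t \cos\theta$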
(I should maybe explain the story. The mad scientist here is the one from the first, Fleischer Studios, Superman cartoon. In it the mad scientist sends mechanical monsters out to loot the city’s treasures and whatnot. As the cartoon has passed into the public domain, Brian Fies is telling a story of that mad scientist, finally out of jail, salvaging the one remaining usable robot. Here, training the robot to push aside bank tellers has gone awry. Also, the ground in his lair is not level.)
Tom Toles’s Randolph Itch, 2 am rerun for the 4th uses the time-honored tradition of Albert Einstein needing a bit of help for his work.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 4th uses the time-honored tradition of little bits of physics equations as designation of many deep thoughts. And then it gets into a bit more pure mathematics along the way. It also reflects the time-honored tradition of people who like mathematics and physics supposing that those are the deepest and most important kinds of thoughts to have. But I suppose we all figure the things we do best are the things it’s important to do best. It’s traditional.
And by the way, if you’d like more of these Reading the Comics posts, I put them all in the category ‘Comic Strips’ and I just now learned the theme I use doesn’t show categories for some reason? This is unsettling and unpleasant. Hm.
## Reading the Comics, September 29, 2017: Anthropomorphic Mathematics Edition
The rest of last week had more mathematically-themed comic strips than Sunday alone did. As sometimes happens, I noticed an objectively unimportant detail in one of the comics and got to thinking about it. Whether I could solve the equation as posted, or whether at least part of it made sense as a mathematics problem. Well, you’ll see.
Patrick McDonnell’s Mutts for the 25th of September I include because it’s cute and I like when I can feature some comic in these roundups. Maybe there’s some discussion that could be had about what “equals” means in ordinary English versus what it means in mathematics. But I admit that’s a stretch.
Olivia Walch’s Imogen Quest for the 25th uses, and describes, the mathematics of a famous probability problem. This is the surprising result of how few people you need to have a 50 percent chance that some pair of people have a birthday in common. It then goes over to some other probability problems. The examples are silly. But the reasoning is sound. And the approach is useful. To find the chance of something happens it’s often easiest to work out the chance it doesn’t. Which is as good as knowing the chance it does, since a thing can either happen or not happen. At least in probability problems, which define “thing” and “happen” so there’s not ambiguity about whether it happened or not.
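The work-out-the-chance-it-doesn’t approach is easy to see in the classic birthday calculation. Here’s a quick sketch of my own, not Walch’s, assuming 365 equally likely birthdays:

```python
# Chance that at least two of n people share a birthday: compute the
# chance that nobody does, and subtract that from 1.
def shared_birthday_chance(n):
    no_match = 1.0
    for k in range(n):
        no_match *= (365 - k) / 365
    return 1 - no_match

print(shared_birthday_chance(23))  # about 0.507 -- already past fifty-fifty
```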
Piers Baker’s Ollie and Quentin rerun for the 26th I’m pretty sure I’ve written about before, although back before I included pictures of the Comics Kingdom strips. (The strip moved from Comics Kingdom over to GoComics, which I haven’t caught removing old comics from their pages.) Anyway, it plays on a core piece of probability. It sets out the world as things, “events”, that can have one of multiple outcomes, and which must have one of those outcomes. Coin tossing is taken to mean, by default, an event that has exactly two possible outcomes, each equally likely. And that is near enough true for real-world coin tossing. But there is a little gap between “near enough” and “true”.
Rick Stromoski’s Soup To Nutz for the 27th is your standard sort of Dumb Royboy joke, in this case about him not knowing what percentages are. You could do the same joke about fractions, including with the same breakdown of what part of the mathematics geek population ruins it for the remainder.
Nate Fakes’s Break of Day for the 28th is not quite the anthropomorphic-numerals joke for the week. Anthropomorphic mathematics problems, anyway. The intriguing thing to me is that the difficult, calculus, problem looks almost legitimate to me. On the right-hand-side of the first two lines, for example, the calculation goes from
$\int -8 e^{-\frac{\ln 3}{14} t}$
to
$-8 -\frac{14}{\ln 3} e^{-\frac{\ln 3}{14} t}$
This is a little sloppy. The first line ought to end in a ‘dt’, and the second ought to have a constant of integration. If you don’t know what these calculus things are let me explain: they’re calculus things. You need to include them to express the work correctly. But if you’re just doing a quick check of something, the mathematical equivalent of a very rough preliminary sketch, it’s common enough to leave that out.
It doesn’t quite parse or mean anything precisely as it is. But it looks like the sort of thing that some context would make meaningful. That there are repeated appearances of $- \frac{\ln 3}{14}$, or $- \frac{14}{\ln 3}$, particularly makes me wonder if Fakes used a problem he (or a friend) was doing for some reason.
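For whatever it’s worth, the fully dressed version of that first step, with the $dt$ and the constant of integration restored, would read

$\int -8 e^{-\frac{\ln 3}{14} t}\, dt = \frac{112}{\ln 3}\, e^{-\frac{\ln 3}{14} t} + C$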
Mark Anderson’s Andertoons for the 29th is a welcome reassurance that something like normality still exists. Something something student blackboard story problem something.
Anthony Blades’s Bewley rerun for the 29th depicts a parent once again too eager to help with arithmetic homework.
Maria Scrivan’s Half Full for the 29th gives me a proper anthropomorphic numerals panel for the week, and none too soon.
## Reading the Comics, September 22, 2017: Doughnut-Cutting Edition
The back half of last week’s mathematically themed comic strips aren’t all that deep. They make up for it by being numerous. This is how calculus works, so, good job, Comic Strip Master Command. Here’s what I have for you.
Mark Anderson’s Andertoons for the 20th marks its long-awaited return to these Reading The Comics posts. It’s of the traditional form of the student misunderstanding the teacher’s explanations. Arithmetic edition.
Marty Links’s Emmy Lou for the 20th was a rerun from the 22nd of September, 1976. It’s just a name-drop. It’s not like it matters for the joke which textbook was lost. I just include it because, what the heck, might as well.
Jef Mallett’s Frazz for the 21st uses the form of a story problem. It’s a trick question anyway; there’s really no way the Doppler effect is going to make an ice cream truck’s song unrecognizable, not even at highway speeds. Too distant to hear, that’s a possibility. Also I don’t know how strictly regional this is but the ice cream trucks around here have gone in for interrupting the music every couple seconds with some comical sound effect, like a “boing” or something. I don’t know what this hopes to achieve besides altering the timeline of when the ice cream seller goes mad.
Mark Litzler’s Joe Vanilla for the 21st I already snuck in here last week, in talking about ‘x’. The variable does seem like a good starting point. And, yeah, hypothesis block is kind of a thing. There’s nothing quite like staring at a problem that should be interesting and having no idea where to start. This happens even beyond grade school and the story problems you do then. What to do about it? There’s never one thing. Study it a good while, read about related problems a while. Maybe work on something that seems less obscure a while. It’s very much like writer’s block.
Ryan North’s Dinosaur Comics rerun for the 22nd straddles the borders between mathematics, economics, and psychology. It’s a problem about making forecasts about other people’s behavior. It’s a mystery of game theory. I don’t know a proper analysis for this game. I expect it depends on how many rounds you get to play: if you have a sense of what people typically do, you can make a good guess of what they will do. If everyone gets a single shot to play, all kinds of crazy things might happen.
Jef Mallett’s Frazz gets in again on the 22nd with some mathematics gibberish-talk, including some tossing around of the commutative property. Among other mistakes Caulfield was making here, going from “less is more” to therefore “more is less” isn’t commutation. Commutation is about binary operations, where you match a pair of things to a single thing. The operation commutes if it never matters what the order of the pair of things is. It doesn’t commute if it ever matters, even a single time, what the order is. Commutativity gets introduced in arithmetic where there are some good examples of the thing. Addition and multiplication commute. Subtraction and division don’t. From there it gets forgotten until maybe eventually it turns up in matrix multiplication, which doesn’t commute. And then it gets forgotten once more until maybe group theory. There, whether operations commute or not is as important a divide as the one between vertebrates and invertebrates. But I understand kids not getting why they should care about commuting. Early on it seems like a longwinded way to say what’s obvious about addition.
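If you want a concrete case where order matters, matrix multiplication supplies one with almost any pair you pick. For instance (my example, nothing from the strip):

$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$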
Michael Cavna’s Warped for the 22nd is the Venn Diagram joke for this round of comics.
Bud Blake’s Tiger rerun for the 23rd starts with a real-world example of your classic story problem. I like the joke in it, and I also like Hugo’s look of betrayal and anger in the second panel. A spot of expressive art will do so good for a joke.
## Reading the Comics, September 8, 2017: First Split Week Edition, Part 1
It was looking like another slow week for something so early in the (United States) school year. Then Comic Strip Master Command sent a flood of strips in for Friday and Saturday, so I’m splitting the load. It’s not a heavy one, as back-to-school jokes are on people’s minds. But here goes.
Marcus Hamilton and Scott Ketcham’s Dennis the Menace for the 3rd of September, 2017 is a fair strip for this early in the school year. It’s an old joke about making subtraction understandable.
Mark Anderson’s Andertoons for the 3rd is the Mark Anderson installment for this week, so I’m glad to have that. It’s a good old classic cranky-students setup and it reminds me that “unlike fractions” is a thing. I’m not quibbling with the term, especially not after the whole long-division mess a couple weeks back. I just hadn’t thought in a long while about how different denominators do make adding fractions harder.
Jeff Harris’s Shortcuts informational feature for the 3rd I couldn’t remember why I put on the list of mathematically-themed comic strips. The reason’s in there. There’s a Pi Joke. But my interest was more in learning that strawberries are a hybrid created in France from a North American and a Chilean breed. Isn’t that intriguing stuff?
Bill Abbott’s Specktickles for the 8th uses arithmetic — multiplication flash cards — as emblem of stuff to study. About all I can say for that.
## Reading the Comics, August 17, 2017: Professor Edition
To close out last week’s mathematically-themed comic strips … eh. There’s only a couple of them. One has a professor-y type and another has Albert Einstein. That’s enough for my subject line.
Joe Martin’s Mr Boffo for the 15th I’m not sure should be here. I think it’s a mathematics joke. That the professor’s shown with a pie chart suggests some kind of statistics, at least, and maybe the symbols are mathematical in focus. I don’t know. What the heck. I also don’t know how to link to these comics that gives attention to the comic strip artist. I like to link to the site from which I got the comic, but the Mr Boffo site is … let’s call it home-brewed. I can’t figure how to make it link to a particular archive page. But I feel bad enough losing Jumble. I don’t want to lose Joe Martin’s comics on top of that.
Charlie Podrebarac’s meat-and-Elvis-enthusiast comic Cow Town for the 15th is captioned “Elvis Disproves Relativity”. Of course it hasn’t anything to do with experimental results or even a good philosophical counterexample. It’s all about the famous equation. Have to expect that. Elvis Presley having an insight that challenges our understanding of why relativity should work is the stuff for sketch comedy, not single-panel daily comics.
Paul Trap’s Thatababy for the 15th has Thatadad win his fight with Alexa by using the old Star Trek Pi Gambit. To give a computer an unending task, any number would work. Even the decimal digits of, say, five would do. They’d just be boring if written out in full, which is why we don’t. But irrational numbers at least give us a nice variety of digits. We don’t know that Pi is normal, but it probably is. So there should be a never-ending variety in what Alexa reels out here.
By the end of the strip Alexa has only got to the 55th digit of Pi after the decimal point. That’s what follows the digits in the second panel, so the comic isn’t skipping any time. For this I used The Pi-Search Page, rather than working it out by myself.
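If you would rather work it out yourself, a sketch like this does the job, assuming you have the mpmath library available:

```python
# Compute pi to plenty of decimal places and read off the 55th digit after the point.
from mpmath import mp

mp.dps = 60            # 60 significant decimal digits is more than enough here
digits = str(mp.pi)    # "3.14159265358979..."
print(digits[1 + 55])  # the 55th decimal digit (a 9)
```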
Gene Mora’s Graffiti for the 16th, if you count this as a comic strip, includes a pun, if you count this as a pun. Make of it what you like.
Mark Anderson’s Andertoons for the 17th is a student-misunderstanding-things problem. That’s a clumsy way to describe the joke. I should look for a punchier description, since there are a lot of mathematics comics that amount to the student getting a silly wrong idea of things. Well, I learned greater-than and less-than with alligators that eat the smaller number first. Though they turned into fish eating the smaller number first because who wants to ask a second-grade teacher to draw alligators all the time? Cartoon goldfish are so much easier.
## Reading the Comics, August 12, 2017: August 10 and 12 Edition
The other half of last week’s comic strips didn’t have any prominent pets in them. The six of them appeared on two days, though, so that’s as good as a particular theme. There’s also some π talk, but there’s enough of that I don’t want to overuse Pi Day as an edition name.
Mark Anderson’s Andertoons for the 10th is a classroom joke. It’s built on a common problem in teaching by examples. The student can make the wrong generalization. I like the joke. There’s probably no particular reason seven was used as the example number to have zero interact with. Maybe it just sounded funnier than the other numbers under ten that might be used.
Mike Baldwin’s Cornered for the 10th uses a chalkboard of symbols to imply deep thinking. The symbols on the board look to me like they’re drawn from some real mathematics or physics source. There are force equations appropriate for gravity or electric interactions. I can’t explain the whole board, but that’s not essential to work out anyway.
Marty Links’s Emmy Lou for the 17th of March, 1976 was rerun the 10th of August. It name-drops the mathematics teacher as the scariest of the set. Fortunately, Emmy Lou went to her classes in the days before Rate My Professor was a thing, so her teacher doesn’t have to hear about this.
Scott Hilburn’s The Argyle Sweater for the 12th is a timely reminder that Scott Hilburn has way more Pi Day jokes than we have Pi Days for them. Also he has octopus jokes. It’s up to you to figure out whether the etymology of the caption makes sense.
John Zakour and Scott Roberts’s Working Daze for the 12th presents the “accountant can’t do arithmetic” joke. People who ought to be good at arithmetic being lousy at figuring tips is an ancient joke. I’m a touch surprised that Christopher Miller’s American Cornball: A Laffopedic Guide to the Formerly Funny doesn’t have an entry for tips (or mathematics). But that might reflect Miller’s mission to catalogue jokes that have fallen out of the popular lexicon, not merely that are old.
Michael Cavna’s Warped for the 12th is also a Pi Day joke that couldn’t wait. It’s cute and should fit on any mathematics teacher’s office door.
## Reading the Comics, August 5, 2017: Lazy Summer Week Edition
It wasn’t like the week wasn’t busy. Comic Strip Master Command sent out as many mathematically-themed comics as I might be able to use. But they were again ones that don’t leave me much to talk about. I’ll try anyway. It was looking like an anthropomorphic-symbols sort of week, too.
Tom Thaves’s Frank and Ernest for the 30th of July is an anthropomorphic-symbols joke. The tick marks used for counting make an appearance and isn’t that enough? Maybe.
Dan Thompson’s Brevity for the 31st is another entry in the anthropomorphic-symbols joke contest. This one sticks to mathematical symbols, so if the Frank and Ernest makes the cut this week so must this one.
Eric the Circle for the 31st, this installment by “T daug”, gives the slightly anthropomorphic geometric figure a joke that at least mentions a radius, and isn’t that enough? What catches my imagination about this panel particularly is that the “fractured radius” is not just a legitimate pun but also resembles a legitimate geometry drawing. Drawing a diameter line is sensible enough. Drawing some other point on the circle and connecting that to the ends of the diameter is also something we might do.
Scott Hilburn’s The Argyle Sweater for the 1st of August is one of the logical mathematics jokes you could make about snakes. The more canonical one runs like this: God in the Garden of Eden makes all the animals and bids them to be fruitful. And God inspects them all and finds rabbits and doves and oxen and fish and fowl all growing in number. All but a pair of snakes. God asks why they haven’t bred and they say they can’t, not without help. What help? They need some thick tree branches chopped down. The bemused God grants them this. God checks back in some time later and finds an abundance of baby snakes in the Garden. But why the delay? “We’re adders,” explain the snakes, “so we need logs to multiply”. This joke absolutely killed them in the mathematics library up to about 1978. I’m told.
John Deering’s Strange Brew for the 1st is a monkeys-at-typewriters joke. It faintly reminds me that I might have pledged to retire mentions of the monkeys-at-typewriters joke. But I don’t remember, so I’ll just say that I don’t think I retired them, and trust that someone will tell me if I’m wrong.
Dana Simpson’s Ozy and Millie rerun for the 2nd name-drops multiplication tables as the sort of thing a nerd child wants to know. They may have fit the available word balloon space better than “know how to diagram sentences” would.
Mark Anderson’s Andertoons for the 3rd is the reassuringly normal appearance of Andertoons for this week. It is a geometry class joke about rays, line segments with one point where there’s an end and … a direction where it just doesn’t. And it riffs on the notion of the existence of mathematical things. At least I can see it that way.
Rick Kirkman and Jerry Scott’s Baby Blues for the 5th is a rounding-up joke that isn’t about herds of 198 cattle.
Stephen Bentley’s Herb and Jamaal for the 5th tosses off a mention of the New Math as something well out of fashion. There are fashions in mathematics, as in all human endeavors. It startles many to learn this.
https://arinjayacademy.com/ncert-solutions-for-class-10-maths-chapter-8-exercise-8-3/ | # NCERT Solutions For Class 10 Maths Chapter 8 Exercise 8.3 – Introduction to Trigonometry
Download NCERT Solutions For Class 10 Maths Chapter 8 Exercise 8.3 – Introduction to Trigonometry. This exercise contains 7 questions, for which detailed answers have been provided in this note. In case you want to study the remaining exercises of the Class 10 Maths NCERT solutions for other chapters, you can click the link at the end of this note.
### NCERT Solutions For Class 10 Maths Chapter 8 Exercise 8.3 – Introduction to Trigonometry
1. Evaluate:
i. $\dfrac{\sin 18^{\circ}}{\cos 72^{\circ}}$
= $\dfrac{\sin(90^{\circ} - 72^{\circ})}{\cos 72^{\circ}}$
= $\dfrac{\cos 72^{\circ}}{\cos 72^{\circ}}$ (∵ sin(90° – A) = cos A)
= 1
ii. $\dfrac{\tan 26^{\circ}}{\cot 64^{\circ}}$
= $\dfrac{\tan(90^{\circ} - 64^{\circ})}{\cot 64^{\circ}}$
= $\dfrac{\cot 64^{\circ}}{\cot 64^{\circ}}$ (∵ tan(90° – A) = cot A)
= 1
iii. cos 48° – sin 42°
cos 48° – sin 42° = cos 48° – sin(90° – 48°)
= cos 48° – cos 48° (∵ sin(90° – A) = cos A)
= 0
iv. cosec 31° – sec 59°
cosec 31° – sec 59° = cosec(90° – 59°) – sec 59°
= sec 59° – sec 59° (∵ cosec(90° – A) = sec A)
= 0
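These four results are easy to confirm numerically; here is a quick check of my own in Python (not part of the textbook solution):

```python
import math

d = math.radians
print(math.sin(d(18)) / math.cos(d(72)))          # ≈ 1.0
print(math.tan(d(26)) * math.tan(d(64)))          # tan 26°/cot 64° = tan 26° · tan 64° ≈ 1.0
print(math.cos(d(48)) - math.sin(d(42)))          # ≈ 0.0
print(1 / math.sin(d(31)) - 1 / math.cos(d(59)))  # cosec 31° − sec 59° ≈ 0.0
```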
2. Show that
i. tan 48° tan 23° tan 42° tan 67° = 1
LHS = tan 48° tan 23° tan 42° tan 67°
= tan(90° – 42°) tan (90° – 67°) tan 42° tan 67°
= cot(42°) cot(67°) tan 42° tan 67° (∵ tan(90° – A) = cot A)
= $\dfrac{1}{\tan 42^{\circ}} \cdot \dfrac{1}{\tan 67^{\circ}} \cdot \tan 42^{\circ} \tan 67^{\circ}$ (∵ cot A = $\dfrac{1}{\tan A}$)
= 1
= RHS
ii. cos 38° cos 52° – sin 38° sin 52° = 0
LHS = cos 38° cos 52° – sin 38° sin 52°
= cos 38° cos 52° – sin(90° – 52°) sin(90° – 38°)
= cos 38° cos 52° – cos 52° cos 38° (∵ sin(90° – A) = cos A)
= 0
= RHS
3. If tan 2A = cot (A – 18°), where 2A is an acute angle, find the value of A.
tan 2A = cot(A – 18°)
cot(90° – 2A) = cot(A – 18°)
⇒ 90° – 2A = A – 18°
3A = 90° + 18°
3A = 108°
A = 36°
4. If tan A = cot B, prove that A + B = 90°.
Given, tan A = cot B
⇒ cot(90° – A) = cot B (∵ cot(90° – θ) = tan θ)
⇒ 90° – A = B
∴ A + B = 90°
5. If sec 4A = cosec (A – 20°), where 4A is an acute angle, find the value of A.
Given, sec 4A = cosec (A – 20°)
⇒ cosec(90° – 4A) = cosec (A – 20°) (∵ cosec(90° – θ) = sec θ)
⇒ 90° – 4A = A – 20°
5A = 90° + 20° = 110°
∴ A = 22°
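As a sanity check (my addition, not part of the textbook), both answers can be verified numerically:

```python
import math

d = math.radians
A = 36
print(math.isclose(math.tan(d(2 * A)), 1 / math.tan(d(A - 18))))      # tan 2A = cot(A − 18°): True
A = 22
print(math.isclose(1 / math.cos(d(4 * A)), 1 / math.sin(d(A - 20))))  # sec 4A = cosec(A − 20°): True
```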
6. If A, B and C are interior angles of a triangle ABC, then show that
$\sin\left( \dfrac{B+C}{2} \right)$ = $\cos\left( \dfrac{A}{2} \right)$
Given, A, B and C are the interior angles of triangle
∴ A + B + C = 180°
⇒ B + C = 180° – A …(i)
LHS = $\sin\left( \dfrac{B+C}{2} \right)$
= $\sin\left( \dfrac{180^{\circ} - A}{2} \right)$ (from equation (i))
= $\sin\left( 90^{\circ} - \dfrac{A}{2} \right)$
= $\cos\dfrac{A}{2}$ (∵ sin(90° – θ) = cos θ)
= R.H.S
7. Express sin 67° + cos 75° in terms of trigonometric ratios of angles between 0° and 45°.
sin 67° + cos 75°
= sin(90° – 23°) + cos(90° – 15°)
= cos 23° + sin 15° (∵ sin(90° – θ) = cos θ and cos(90° – θ) = sin θ)
NCERT Solutions for Class 10 Maths Chapter 8 Exercise 8.3 – Introduction to Trigonometry has been designed by the NCERT to test the student’s knowledge of the topic Trigonometric Ratios of Complementary Angles.
Download NCERT Solutions For Class 10 Maths Chapter 8 Exercise 8.3 – Introduction to Trigonometry
https://icsecbsemath.com/2018/10/14/class-9-rectilinear-figures-lecture-notes/ | This chapter is based on polygons. We had published lecture notes on Polygons for Class 8 students. It would be a good idea to revise the Class 8: Polygons – Lecture Notes.
Important points to remember:
1. The sum of the interior angles of a convex polygon of $n$ sides is $(2n-4)$ right angles, i.e. $(2n-4) \times 90^\circ$.
2. If there is a regular polygon of $n$ sides $(n \ge 3)$, then each of its interior angles is equal to $\frac{2n-4}{n} \times 90^\circ$.
3. Each exterior angle of a regular polygon of $n$ sides is equal to $\left(\frac{360}{n}\right)^\circ$.
4. If each exterior angle of a regular polygon is $x^\circ$, then the number of sides in the polygon is $\frac{360}{x}$.
5. As the number of sides of a regular polygon increases, the measure of each interior angle also increases.
6. If the polygon has $n$ sides, then the number of diagonals is $\frac{n(n-3)}{2}$.
7. The sum of all exterior angles formed by producing the sides of a convex polygon in the same order is equal to four right angles (or $360^\circ$).
All the problems related to polygons can be solved using the above key formulas / points; the short sketch below evaluates them for a given $n$.
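As an illustration (my own, not part of the original notes), the formulas can be checked for any $n$:

```python
# Evaluate the key polygon formulas above for a regular n-gon.
def polygon_facts(n):
    assert n >= 3
    interior_sum  = (2 * n - 4) * 90   # point 1: sum of interior angles (degrees)
    interior_each = interior_sum / n   # point 2: each interior angle of a regular n-gon
    exterior_each = 360 / n            # point 3: each exterior angle
    diagonals     = n * (n - 3) // 2   # point 6: number of diagonals
    return interior_sum, interior_each, exterior_each, diagonals

print(polygon_facts(5))  # pentagon: (540, 108.0, 72.0, 5)
```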
https://hal.archives-ouvertes.fr/hal-00931978 | # Generalizations of Poisson structures related to rational Gaudin model
Abstract: The Poisson structure arising in the Hamiltonian approach to the rational Gaudin model looks very similar to the so-called modified Reflection Equation Algebra. Motivated by this analogy, we realize a braiding of the mentioned Poisson structure, i.e. we introduce a "braided Poisson" algebra associated with an involutive solution to the quantum Yang-Baxter equation. Also, we exhibit another generalization of the Gaudin type Poisson structure by replacing the first derivative in the current parameter, entering the so-called local form of this structure, by a higher order derivative. Finally, we introduce a structure, which combines both generalizations. Some commutative families in the corresponding braided Poisson algebra are found.
Document type:
Preprint, working paper
LaTeX, 16 pp., 2013
https://hal.archives-ouvertes.fr/hal-00931978
Submitted on: Thursday, 16 January 2014 - 10:34:04
Last modified on: Wednesday, 19 December 2018 - 14:08:04
### Identifiers
• HAL Id : hal-00931978, version 1
• ARXIV : 1312.7813
### Citation
Dimitri Gurevich, Vladimir Rubtsov, Pavel Saponov, Zoran Skoda. Generalizations of Poisson structures related to rational Gaudin model. LATEX, 16 pp. 2013. 〈hal-00931978〉
### Metrics
Record views
http://www.sciforums.com/threads/unscrambling-the-cube.97493/page-8 | # Unscrambling the cube
Discussion in 'General Science & Technology' started by noodler, Nov 11, 2009.
Thread Status:
Not open for further replies.
### noodler (Banned)
I think the claim that computation is part of the cube dynamics can go untested; fairly obviously you process something, and MIT's cubing community and Singmaster's algorithmic basis pretty much sum up the details.
Of course, it depends on what category you apply: is it a virtual-register kind of device? Is it strictly monotonic (i.e. is there a linear progression of contexts and switches)?
Well, never mind all that: Prolog to the rescue. This is a strictly symbolic language. There are only lists with elements in them, which can be other lists or atomic tokens. The most atomic element is the empty list.
Constructing a Prolog program means defining a list structure and, in general, some recursive list-traversing functions: a list is traversed by first handling its head (or parsing an atom), then traversing the tail recursively, so that the traversal is ordered by the list itself.
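As a rough illustration of that head/tail pattern (my own sketch, in Python rather than Prolog):

```python
# Prolog-style traversal: handle the head, then recurse on the tail.
def traverse(lst):
    if not lst:                 # the empty list is the most atomic element
        return
    head, *tail = lst
    if isinstance(head, list):  # a head may itself be a list...
        traverse(head)
    else:
        print(head)             # ...or an atomic token
    traverse(tail)

traverse(["U", ["R", "F"], "D"])  # prints U, R, F, D in list order
```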
The Singmaster labelling scheme uses 6 Roman letters (you could use Chinese or French versions, however; the symbols are merely convenient or heuristic), which are B, D, F, L, R, U in alphabetical order.
You need to create lists that order these pairwise so they correspond to opposite faces of a cube.
Then you need to create an ordered list of inverted symbols to represent the "reverse" operator set of symbols: U',D' etc.
Then you will need functions that traverse these lists and compose at least the 2-generator set of operations on adjacent pairs, the cross and anticross subgroups. These correspond to the antitwist and twist groups respectively.
You can twist a single corner cubie and return it to where it started (you transport it in two directions, i.e. it gets pitch + roll), with an anticross move like FR. The cross moves F'R or FR' "eliminate" twist.
In that case, you can use twist parity as a symbol which is written in a read-erase-write cycle.
The other function, which is ancillary, is the copy function.
Solutions to a scrambled code use these subgroups, to eliminate twists from the corner cubies and flips from the edge cubies. Twist and flip essentially close the parity function(al).
If you generate a 6x6 matrix from the ordered list (URFDLB) you get a diagonal of square moves (the squares group UU,RR, ...,BB), this is the "identity function" because it has the parity identity (P,-P) where the cross and anticross groups generate {+1,0,-1} in the corners group. Parity is {0,-1} in the edges group.
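That 6x6 matrix of two-move words is easy to tabulate; here is a sketch of mine (an assumption about what the construction looks like, not the poster's code):

```python
# The 6x6 table of two-move words over the ordered face list (URFDLB).
# The diagonal is the squares group: UU, RR, FF, DD, LL, BB.
faces = ["U", "R", "F", "D", "L", "B"]
table = [[x + y for y in faces] for x in faces]
for row in table:
    print(" ".join(row))
```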
Suppose you have a job as an analyst at some company whose details, naturally, need not concern us (or you, for that matter). So your boss hands you the series listed (from the Wikipedia entry for the Pocket Cube, iteration #1, N = 2). He says he wants you to find out what you can about the two lists, the company has no idea what it actually is, it could be in Klingon, or it could be just the output and input response curves of some chemical or other kind of reaction.
The boss wants you to eliminate, if possible, all categories that the two lists can't be in.
You surmise initially that the shorter length curve (parameterized by f in the series) is an exhaust output of some kind from the system (generally speaking), and the longer length curve (parameterized by q in the series) is possibly an input response, since in general a system can have an exhaust cycle that leads an intake--in fact this is how heat engines deliver useful work.
If there is a work function, and if both cycles are 'exhaustive' and the longer one is a system response, then the extra length is some kind of relaxation (a rise time) so the system can input again. Maybe, maybe not....
Or maybe the series is a code of some kind, amenable to algorithmic analysis and code-breaking. Alan Turing claimed that writing any number (an integer, for example) as a decimal in a finite number of places is proving the number is real, and also computable. (Numbers have numbers too, called places or digits.)
So, the totals for both sequences are equal, but the subtotals aren't. Writing 3,674,160 as a decimal means writing it as 0.367416. You can do this "in base 10" on paper, but a binary logic computer has to use, well, binary. Number representation in binary electronic computers becomes part of a potential solution.
CS/IS has various methods for encoding integer and real numbers in binary form. Complements are standard; fixed and floating point reps are often generated with machine instructions (it saves time). Turing's formula has two terms: the left one writes a preamble (the number of digits, n) and the right one is an infinite sum of terms $(2C_r - 1)\left(\frac{2}{3}\right)^r$, for $r = 1, 2, \ldots$
You know that, for a symmetric code, there is a |G| for the generators, which is both totals, or |G| = Tf = Tq = 3,674,160, for totals T. Perhaps the sequences have this symmetry. You assume that if this is true, there will be a set of rotors (like the Enigma design) that rewrite-after-read, with shifting (pitch, roll, and yaw are available in 3-space); this is actually the encoding process.
So, suppose Cr is also related to T, the total; then 2Cr = (Tf + Tq), under some function that generates the subtotals. Write a Prolog program that orders the subtotals, and finds the recurrence that does this according to Turing's formula (assume the preamble is the program itself, or rather that n is the list length).
The program will need to declare functions that traverse, reverse, etc. the sublist elements, and these will become part of the functional list (the tail of the Prolog listing, usually).
### Blindman (Valued Senior Member)
You do know that real numbers cannot be accurately encoded in binary, decimal, or any base that allows manipulation without error, e.g. 1/3.
And once again you're posting formulas without definitions.
Prolog would be the last language to use to solve such a problem (if I can work out what you actually want). You don't have a clue what you're typing.
I am willing to discuss cube problems, but please be consistent.
The only reason I am posting to this thread is that you touched upon the problem of whether a computer can compute the possible combinations of a cube via simulation of the mechanical properties.
### noodler (Banned)
If Prolog is the last language you would use for a solution, would you consider using GAP, or a similar language instead? Would Mathematica be a good or a useful choice?
If you had no choice, but were forced to write a program in machine code, which processor would you choose, or which instruction set?
P.S. I suppose someone who has programmed for as long as you have must be aware that binary can't represent real numbers except approximately.
Perhaps you've also managed to work out that this "problem" has something to do with the way digital circuitry only deals with fixed voltages and currents, and so it's physically impossible to store analog values.
This conclusion is fairly inescapable; you have to make it if you also study electronics and digital circuitry: with fixed values of charge it is physically impossible to store analog values, because there are only so many number places and they can only be 0 or 1.
This is gob-smackingly obvious to anyone who does even a cursory analysis of digital computers.
But thanks for all the fish, eh...
### Blindman (Valued Senior Member)
Guess you did not read my post correctly.
Or 0,1,2 or 0-3, or base 10, hex, octal
C#
CMOS T800 parallel transputers. Loved them in the early 1990s; I had an array of 16 T800s.
### noodler (Banned)
I guess if I didn't read your post correctly, I should try again then.
Yes, actually I do know that real numbers can't be represented in base 2; there will always be some numbers that have only an approximate binary representation.
And yet you have stated that C# is a language you would use in preference to Prolog.
I'd like to know what motivated this decision. Why is C# a better choice? Is list-based programming not really a good approach to analyzing the puzzles? Would you not use the list-based features of C# at all, in that case?
Which features of the C# language do you think are the most useful, regarding the problem of finding a recurrence?
I'm also curious about how you store a 3 or a 10 in a single bit, which can physically only be 0 or 1. Don't you need more than one bit, and isn't there a limit to the number of bits, which logically means you can't represent every number, only a subset?
Can't you prove, fairly easily, that because the number of bits is finite, real numbers will be represented as approximations, to plus or minus 1 bit?
I have this vague recollection of doing something like that, but it was a while ago, maybe things have changed since 1 + 1 = 2.
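The finite-bits point is easy to demonstrate; this little check is my own illustration, not anything from the thread:

```python
# Finite binary places make some reals representable only approximately.
print(f"{0.1:.20f}")     # 0.10000000000000000555... (nearest double to 0.1)
print(0.1 + 0.2 == 0.3)  # False: each side rounds to a different nearest double
print(1 / 3)             # 0.3333333333333333, a rounded binary fraction
```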
### Blindman (Valued Senior Member)
Most modern versions of Prolog are implemented in C or C++. C# is a higher-level language than the aforementioned, yet it provides a lower-level implementation of any Prolog solution.
C# compiles into non-CPU-specific opcodes, yet, as with C and C++, after just-in-time compilation it can represent very tight machine code.
e.g.
Code:
for (int i = 0; i < 10; i++) { }  // empty counted loop
"i" will use a register, "10" will be a immediate value, i++ will encode into a single instruction and "i<10" will encode into two.
Prolog will run at least 10 times slower at best.
impossible.
### Blindman (Valued Senior Member)
Oops. If it's only 3 and 10, easy: 0 = 3 and 1 = 10. Hard to do math with, but still possible, as long as it's only 3 and 10 (base 10).
### noodler (Banned)
I'm sorry, I'm having difficulty with the relevance of anything you have posted.
Other than establish a few things that are basic computer science, you have contributed nothing that I can say is remotely interesting.
The problem is this: I want to find a recurrence. You want to talk about C#, but this is just one of dozens of computer languages that could be used; as I said, you can also use machine instructions. In fact the choice of a language is irrelevant to the problem, except for providing a higher level of abstraction.
Please either address the problem, or think of something else to do.
### Blindman (Valued Senior Member)
Well then present the problem
After a full turn of any face.
### noodler (Banned)
When you state: "Well then present the problem", do you mean "present the problem you have already presented over again, for my benefit because I can't be bothered going back over this thread", or do you mean "you haven't presented any problem"?
If you mean either of these, then I have a problem with that.
This problem, "find the recurrence relation for the Pocket Cube", is the one I want a solution for. Whatever your problem is, I can't really help you with it.
### Blindman (Valued Senior Member)
I have presented the symmetry (recurrence) of the 2x2 cube.
Are you after a mechanical solution???
### noodler (Banned)
You have the recurrence relation for the 2x2x2?
Why don't you post it for us then, so we can all relax?
Do you KNOW what a recurrence relation is, or are you guessing?
### Blindman (Valued Senior Member)
You mean after X iterations that the state of a system is the same?
### noodler (Banned)
'yawn'....
'skritch'..., 'skritch'.
### Blindman (Valued Senior Member)
Or "recurrence relation" which has no relevance to the cube problem???
### noodler (Banned)
That's right. The cube is just a cube, don't let it bother you or anything.
Obviously you don't know what the problem is, by which I mean 1) the problem I am trying to address, bit by bit, in which there is no deadline, no contract waiting to be finished so I get paid, no penalty clauses, or anything.
I do this because I know it's been done already, and so it can be done, and so I can work it out (actually I'm a fair way down the track, there is a shitload of stuff I haven't posted here, or anywhere else and none of it is my intellectual property).
And 2):
I guarantee you can't prove that a Rubik's cube has a recurrence relation which is irrelevant to "the cube problem", which might be the problem.
### Blindman (Valued Senior Member)
But there are so many different sequences of moves, each with its own set of values (encoded in cube configurations).
Simplest is 0,1,2,3: single face rotation. I'd have to code an alternative 8, right and front 1/4 turns, for a sequence. Short of the full set, there is no single recurrence relation.
### noodler (Banned)
Rubbish. There are three formulas for the 3x3x3 that, given a number n of full moves, return the number of positions that can be reached. These formulas are accurate up to some value of n, say.
If these recurrences are "working models" apart from the limits, there must be formulas for the smaller puzzle, this should be obvious because the smaller puzzle is part of the larger one.
(Now for some more coffee)
### noodler (Banned)
The key to this exercise appears to be the 2-ary functionality of the 2-generator group. This is characterized by selecting any two adjacent faces (which are pivots), say XY. Then assume a general swapping algorithm "swap(XY)" that changes the order of any 2-ary word, as long as the word is in the 2-generator set.
So the swap() function just determines the sense any word is read, and we select operators from the full set that correspond to two faces of a cube (which is blocked, or not sliced).
Then swap: XY -> YX is just exchanging the MSD for the LSD of the word "XY", by reading it right-to-left. If the word is "10" the swap function exchanges this for "01", etc.
In fact any dyad has its order changed, including the abstract functions/values "black" and "white", since black can be "0" and white "1"; if the cube is all black, swapping white makes it all white, etc.
The other function needed as well as swap(), is one that inverts the direction any monad acts. That is if X is a monad, -X is its inverse. This is characterized in a standard (algorithmic) basis as X'.
With inversion and swapping (changing the order of a word, starting with dyads), you can now form higher functions that use the first two, to generate flip() and twist() functions of operators. These can be generalized to any two words A and B, where A is a word of length a, B a word of length b, etc.
A.B is a pivotal word, the pivot is the "dot", which acts like a decimal point in a real fraction.
Start with U, the "up face", and select a second face, which is one of the out-degrees (any face of a cube has degree-four), say L. Then with the swap() and invert() functions, you get the following algebra: swap(UL), swap(U(invert(L))) = swap(UL'), ... for a total of 8 dyads. Add the square() function that copies a monad s.t. square(U) = UU, etc.
The squares and swaps with inversion can now be used to investigate the 2-generator set of operations. This has order 29160, and it includes (by way of the swap(), square(), flip(), and twist() functors/algorithms) the cross and anticross subgroups. Twist is a function that acts on two elements; you can twist a single element in-place, in two moves. It takes three moves to flip an edge, and the flip() is not available (except as an abstract function) in the Pocket Cube configuration.
We note that square(U) = UU and invert(UU) = (UU)' are symmetric, in that square(invert(U)) = square(U') = invert(square(U)) = invert(UU). This is the intersection of the identity with the mechanical inversion (the function apply(UU) = apply(U'U')) of a face group of elements with respect to the face-degree.
square(U) is a function that proves UU is its own inverse, since UU = -(UU), or $U^2 = -U^2 = U^{-2}$
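Those fundamentals are concrete enough to sketch in code; here is my own rough rendering (assuming Singmaster-style move words such as "UL'"), not anything the poster wrote:

```python
# swap(), invert(), and square() over two-letter move words.
def parse(word):
    # Split "UL'" into ["U", "L'"].
    moves, i = [], 0
    while i < len(word):
        m = word[i]
        if i + 1 < len(word) and word[i + 1] == "'":
            m += "'"
            i += 1
        moves.append(m)
        i += 1
    return moves

def invert(word):
    # Reverse the word and toggle the prime on each move: invert("UL") == "L'U'".
    return "".join(m[:-1] if m.endswith("'") else m + "'" for m in reversed(parse(word)))

def swap(word):
    # Exchange the order of a two-move word: swap("UL") == "LU".
    a, b = parse(word)
    return b + a

def square(move):
    # Copy a monad: square("U") == "UU".
    return move + move

print(swap("UL"), invert("UL"), square("U"))  # LU L'U' UU
```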
### noodler (Banned)
Now, of course, if we swap the notion of "stickering" with the notion of coordinatizing, or synchronizing the algebra, we have a natural way to tensor the 2-generator subset as all 2x2 matrices that correspond to squares of XY with diagonals that correspond to the above identity--which is the pivot for all functions composed of the "fundamentals" swap(XY), and invert(X).
Then the matrix algebra that corresponds to such valid compositions should include the functions: flip(XY), and twist(XY), that change the parity of individual elements without changing their position.
"Naturally" the postions are changed, but also restored (positions, as variables are involuted, but homological to parity functions, since they transport elements in two directions around B(r) the ball which is closed under all rotations in 3-space, but not [closed to] swap(black,white), swap(XY), ...).
H_color is the algebraic identity, rotations are the geometric "functionality" of H. G is the set of closed rotations that yields |G| under H.
A and B are "simply" words, composed of extensions of the swap, invert, twist and flip [functorials] to the full group. The full group includes all algorithms that construct, slice, rotate, color, along with inversions, all elements, and there is an H_0 -> H_1 homology.
When (r), the radius of B, transitions from r = 0, it cannot "go backwards" except arithmetically, and only if the swap function has exchanged "b" for "c", each an element of H_color.
Conjecture: the functional space is logarithmic, with a spiral form that has a polar equation. This is what such a space looks like when color is used to plot an infinitely recursive function:
[Image: a color plot of an infinitely recursive function with a logarithmic spiral form]
https://s160685.gridserver.com/knight-before-aljpzsq/3389b3-controllable-investment-formula | A Dictionary of Accounting », Subjects: CFI is the official provider of the global Financial Modeling & Valuation Analyst (FMVA)™ FMVA® Certification Join 350,600+ students who work for companies like Amazon, J.P. Morgan, and Ferrari certification program, designed to help anyone become a world-class financial analyst. PRINTED FROM OXFORD REFERENCE (www.oxfordreference.com). The formula in computing for the residual income is: where: Desired income = Minimum required rate of return x Operating assets Note: In most cases, the minimum required rate of return is equal to the cost of capital. Quick Reference. ROI (return on investment): Return on investment, or ROI, is a mathematical formula that investors can use to evaluate their investments and judge how well a particular investment has performed compared to others. Understanding Return on Investment (ROI) ROI is a popular metric because of its versatility and simplicity. Profitability Index. … The return on assets formula looks at the ability of a company to utilize its assets to gain a net profit. Controllable margin / Average operating assets = ROI 36. a 29. Please review the insurance contract prospectus for further description of these fees and expenses. Controllable costs are costs that can be influenced or regulated by the manager or head responsible for it.For example: direct materials, direct labor, and certain factory overhead costs are controlled by the production manager. : 5.2: All attractive assets should be physically verified at least annually, either by cyclical or … Business and Management, View all related items in Oxford Reference », Search for: 'controllable contribution' in Oxford Reference ». (Revenue — Investment) / Investment Let’s say that you’ve invested $5,000 in marketing spend and you’ve generated$10,000 in revenue from those channels. Investment Analysis and Portfolio Management 5 The course assumes little prior applied knowledge in the area of finance. The Blue Division of Dalby Company reported the following data for the current year: Sales $3,000,000 Variable costs$1,950,000 Controllable fixed costs $600,000 Average operating assets$5,000,000 Top management is unhappy with the investment center's return on investment (ROI). investment center income / investment center average income assets. Net income in the numerator of the return on assets formula can be found on a company's income statement. ... a. managers are responsible for their departments' controllable costs b. each accounting report contains all items allocated to a responsibility center Even in terms of the COVID-19 crisis, Vietnam so far appears to have emerged strong and organised, especially compared with many of its regional peers. Magic of Compounding Tool: Use this calculator to understand the astounding power of compounding. Widely used measure, so comparisons easy. Residual income . Internally, you can use whatever metric you like to view the financial results of your business. Advantages of ROI. In the example, colors are treated as unique item identifiers – imagine a product available in one size only in just three colors: red, blue, or green. I'd venture to guess … The present value of the future cash flows divided by the initial investment. Examples of fixed costs are rent and insurance . From: Return on investment (ROI) is the ratio of profit made in a financial year as a percentage of an investment. 
Residual income of a department can be calculated using the following formula: Residual Income = Controllable Margin - Required Return × Average Operating Assets. B : sales are divided by average investment center operating assets. (c) Copyright Oxford University Press, 2021. These measures indicate how effectively a company uses each dollar that is invested in assets to generate profits. Ratios and Formulas in Customer Financial Analysis. controllable investment Return on Investment (ROI) can be calculated using the DuPont formula. Total costs and total revenues can mean different things to different individuals. The formula for computing return on investment is controllable margin divided by average operating assets. 1) The formula for computing return on investment controllable margin divided average operating assets TRUE The formula for computing ROI for investment centers is Controllable Margin divided byAv view … Investment Center Return on Investment (ROI) formula. Social sciences The averageof the operating assets is used when possible. )What is the difference between a budget and a standard? Consider the continuous linear system. From: We hope this guide to the working capital formula … or. See controllable contribution. Under the terms of the licence agreement, an individual user may print out a PDF of a single entry from a reference work in OR for personal use (for details see Privacy Policy and Legal Notice). All Rights Reserved. The percent return is the difference of the current price minus the entry price, divided by the entry price: (price-entry) ÷ entry. This formula demonstrates a very simple inventory concept where current inventory is simply the result of all incoming stock minus all outgoing stock. The operating ratio formula is the ratio of the company’s operating expenses to net sales, where operating expenses include administrative expenses, selling and distribution expenses, cost of goods sold, salary, rent, other labor costs, depreciation, etc. Quantitative risk analysis formula for calculating ROSI. 35. Essentially, ROI can be used as a rudimentary gauge of an investment’s profitability. (c) controllable … C : sales are divided by net income. Your formula would look like this: An investment spreadsheet puts all your investment information in one place. Return on investment is therefore an appropriate basis for evaluation. ROI = (Gain from investment – Cost of investment) / (Cost of investment) Simple ROI Calculator Excel Template. I'd venture to guess most accountants would agree. All Rights Reserved. Profitability index. Learn how to find return on investment … Controllable Investment. c. average investment center operating … The denominator in the formula for return on investment calculation is a. investment center controllable margin. An ROI calculation is sometimes used along with other approaches to develop a business case for a given proposal. 14. You could not be signed in, please check and try again. Return on investment is a relative measure and hence … * Total assets of the company.† 2. if you want this in %, then its = Controllable income/ Revenue … Financial statement analysis is a judgmental process. Definition of Investment. Determine how much your money can grow using the power of compound interest. Evaluation of rate of income contribution of segment. What I mean by that is the income and costs are not clearly specified. 
{\displaystyle \mathbf {y} (t)=C (t)\mathbf {x} (t)+D (t)\mathbf {u} (t).} Where a division is a profit centre, depreciation is not a controllable cost, as the manager is not responsible for investment … When using the return on investment (ROI) formula, A : controllable margin is divided by average investment center operating assets. c. Controllable margin dollars and total assets. The country’s economy has grown quickly, driven in part by healthy levels of manufacturing investment. So if your total sales are £500,000 and your variable costs are £50,000, then your controllable margin is £450,000. You could not be signed in, please check and try again. The course is intended for 32 academic hours (2 credit points). Determine how much your money can grow using the power of compound interest. The capital employed that is controllable by a divisional manager. The controllable variance is: $92,000 Actual overhead expense - ($20 Overhead/unit x 4,000 Standard units) = $12,000. Policyholder Dividend Ratio: The ratio of dividends to policyholders to net premiums earned. Keep in mind that this doesnt mean that the cost can be eliminated or controlled at will. ROI = (Gain from Investment - Cost of Investment) / Cost of Investment Answer and Explanation: Return on Investment (ROI) shows a company's efficiency at generating returns from invested assets. controllable (traceable) profit - an imputed interest charge on controllable (traceable) investment. — Controllable margin (also called segment margin) is the department's revenue minus all such expenses for which the department manager is responsible. When evaluating residual income, the calculation tells management what percentage return was generated by the particular division being evaluated. ROA Formula / Return on Assets Calculation. For example, consider for particular contract, if you have generated revenue of 35000 USD and for this deliverable you had to bear 15000 expenses, then your controllable income is 20000 USD i.e. A control premium is an amount that a buyer is sometimes willing to pay over the current market price of a publicly traded company in order to acquire a controlling share in that company.. Controllable contribution is the most appropriate measure of a divisional manager’s performance. On the contrary, controllable profit (divisional revenue — divisional controllable costs) is a much better measure of divisional manager’s performance as it considers all costs — fixed or variable — which are within his control. In the formula for return on investment (ROI), the factors for controllable margin and operating assets are, respectively: (a) controllable margin percentage and total operat- ing assets. It uses the net profit margin and total asset turnover in the calculation of ROI. b. Controllable margin dollars and average operating assets. Let’s explore how to calculate each of the components in this formula. When evaluating residual income, the … Also called the benefit-cost ratio. When calculating performance measures for a division it is important to ensure that only … In the company's budget, the budgeted overhead per unit is$20, and the standard number of units to be produced as per the budget is 4,000 units. Relative merits of ROI and Residual Income. Another example: the sales manager has control over the salary and commission of sales personnel. Money handed over to a fraudster won’t grow and won’t likely be recouped. The formula for computing return on investment is controllable margin divided by average operating assets. 
Controllable contribution is the most appropriate measure of a divisional manager’s performance. Return on Assets (ROA) is a type of return on investment (ROI) ROI Formula (Return on Investment) Return on investment (ROI) is a financial ratio used to calculate the benefit an investor will receive in relation to their investment cost. It is most commonly measured as net income divided by the original capital cost of the investment. Note: In most cases, the minimum required rate of return is equal to the cost of capital.The average of the operating assets is used when possible.. )What is the formula for Return on Investment? We want our efforts to control controllable expenses to have a meaningful financial impact —disregard prejudiced assumptions such as eliminating property management. Do not use for segments or segment managers due to inclusion of non controllable expenses. D : controllable margin is divided by sales. 35000–15000. Business and Management, View all related items in Oxford Reference », Search for: 'controllable investment' in Oxford Reference ». Cost of poor quality (COPQ) or poor quality costs (PQC), are costs that would disappear if systems, processes, and products were perfect.. COPQ was popularized by IBM quality expert H. James Harrington in his 1987 book Poor Quality Costs. However, the manager of an investment centre is responsible for investments and therefore depreciation is a controllable cost. Formula. Controllable working capital is defined as accounts receivable plus inventories less accounts payable. Discount for Lack of Control (DLOC) The Discount for Lack Of Control (DLOC) is a discount that must be applied to the share price when the investor wishes to value a position in a company in which he or she will not have a controlling interest. The capital employed that is controllable by a divisional manager. The attached simple ROI calculator is an Excel template. A Dictionary of Accounting », Subjects: So before committing any money to an investment opportunity, use the “Check Out Your Investment Professional” search tool below the calculator to find out if you’re dealing with a registered investment … in Users may download the financial formulas in PDF format to use them offline to analyze mortgage, car loan, student loan, investments, insurance, retirement or tax efficiently. Simple and low cost. a 29. A method for determining the … Any practical event will ensure that the variable is greater than or equal to zero. The Portfolio is an Investment vehicle for variable annuity contracts and may be subject to fees or expenses that are typically charged by these contracts. Explore answers and all related questions Related questions If there has been a winner in South-East Asia in terms of foreign investment over the last five years, it is undoubtedly Vietnam. Annualized loss expectancy (ALE) The annualized loss expectancy (ALE) is the total annual monetary loss per year expected to result from a specific exposure factor if the security investment is not made. Creating Percent Return Formulas in Excel . In the formula for return on investment (ROI), the factors for controllable margin and operating assets are, respectively: a. Controllable margin percentage and total operating assets. 
If you have investments with several different companies, such as online brokerage firms, an investment manager, 401(k)s from previous jobs, and college savings funds, it becomes very time-consuming to track each investment individually. An investment spreadsheet puts all of your investment information in one place. Before committing any money to an investment opportunity, use the "Check Out Your Investment Professional" search tool to find out whether you are dealing with a registered investment professional; money handed over to a fraudster won't grow and won't likely be recouped. A controllable cost is an expense that a manager has the power to influence - in other words, a cost that management can increase or decrease based on its business decisions. A typical example is the salary and commission of sales personnel, which the sales manager controls. In practice, however, it can be difficult to distinguish between controllable costs and uncontrollable costs, and a cost that is uncontrollable at a low level of management may become controllable at a higher level. When calculating performance measures for a division, it is important to ensure that only those assets and liabilities that a manager can influence are included; where a division is a profit centre, depreciation is not a controllable cost, as the manager is not responsible for investment decisions, whereas an investment centre manager can control the investment funds available as well as costs and revenues. Controllable margin is total gross sales less total variable costs, and controllable contribution is the sales revenue of a division less those costs that are controllable by the divisional manager.
The formula for computing return on investment (ROI) for an investment centre is controllable margin divided by average operating assets. More generally, ROI = net income / cost of investment (the most common version), or ROI = (gain from investment - cost of investment) / cost of investment, and it can also be decomposed as net profit margin times total asset turnover. ROI is a popular metric because of its versatility and simplicity, although evaluations based on it can be unfair to segments or segment managers if non-controllable expenses are included. Residual income is the income that remains after subtracting the minimum required rate of return on a company's average operating assets: residual income = operating income - (minimum required rate of return x operating assets), or equivalently controllable profit less an imputed interest charge on controllable investment. Example: compute the residual income of an investment centre which had operating income of $500,000 and operating assets of $2,500,000 (a worked version is sketched below). Other related definitions: return on assets (ROA) is a company's net income divided by its average total assets; working capital, in financial modelling, is defined as accounts receivable plus inventories less accounts payable; a budget is a total amount while a standard is a unit amount; and one of the primary objectives of financial analysis is the identification of major changes in trends and relationships, and the investigation of the reasons underlying those changes.
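To make the residual income example above concrete, here is a small Python sketch. The operating income and operating assets come from the example; the 12% minimum required rate of return is an assumed value, since the notes above do not state one.

```python
# Worked version of the residual income / ROI example above.
operating_income = 500_000      # operating income of the investment centre
operating_assets = 2_500_000    # average operating assets
required_rate = 0.12            # assumed minimum required rate of return (not given above)

roi = operating_income / operating_assets
residual_income = operating_income - required_rate * operating_assets

print(f"ROI: {roi:.1%}")                            # 20.0%
print(f"Residual income: ${residual_income:,.0f}")  # $200,000
```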
ROI can be used along with other approaches to develop a business case for a given proposal, and because it uses figures that can be found on a company's income statement it is straightforward to compute. The course is intended for 32 academic hours (2 credit points) and requires little prior applied knowledge in the area of finance. | 2021-04-18 12:51:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20577514171600342, "perplexity": 3057.3005930350582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038476606.60/warc/CC-MAIN-20210418103545-20210418133545-00411.warc.gz"} |
https://math.stackexchange.com/questions/3064558/series-solution-to-legendre-equation | # Series Solution to Legendre Equation
My professor taught us the series solution to ODE's method today in class, and one of our homework problems was to solve the Legendre Equation.
$$\text{Legendre Equation:} \frac{d^2y}{dx^2}(1-x^2) -2x\frac{dy}{dx} + \alpha(\alpha + 1)y = 0$$
By making a series expansion about $$x = 0$$:
$$y = \sum_{n=0}^\infty{a_nx^n}$$ $$y' = \sum_{n=0}^\infty{na_nx^{n-1}}$$ $$y'' = \sum_{n=0}^\infty{n(n-1)a_nx^{n-2}}$$ Plugging in: $$\sum_{n=0}^\infty{n(n-1)a_nx^{n-2}}(1-x^2) -2x\sum_{n=0}^\infty{na_nx^{n-1}} + \alpha(\alpha + 1)\sum_{n=0}^\infty{a_nx^n} = 0$$ $$\sum_{n=0}^\infty{n(n-1)a_nx^{n-2}}-\sum_{n=0}^\infty{n(n-1)a_nx^n} -2x\sum_{n=0}^\infty{na_nx^{n-1}}+\alpha(\alpha+1)\sum_{n=0}^\infty{a_nx^n}=0$$
$$\sum_{n=0}^\infty{n(n-1)a_nx^{n-2}}-\sum_{n=0}^\infty{n(n-1)a_nx^n} -2\sum_{n=0}^\infty{na_nx^{n}}+\alpha(\alpha+1)\sum_{n=0}^\infty{a_nx^n}=0$$
$$\sum_{n=0}^\infty{(n+2)(n+1)a_{n+2}x^{n}}-\sum_{n=0}^\infty{n(n-1)a_nx^n} -2\sum_{n=0}^\infty{na_nx^{n}}+\alpha(\alpha+1)\sum_{n=0}^\infty{a_nx^n}=0$$
$$\sum_{n=0}^\infty\left\{(n+1)(n+2)a_{n+2}+[-n(n-1)-2n+\alpha(\alpha+1)]a_n\right\}x^n=0,$$
Each term must cancel so: $$(n+1)(n+2)a_{n+2} + [-n(n+1) + \alpha(\alpha +1)]a_n = 0$$
$$a_{n+2} = \frac{n(n+1)-\alpha(\alpha+1)}{(n+1)(n+2)}a_n$$ $$= -\frac{(\alpha + n+1)(\alpha -n)}{(n+1)(n+2)}a_n$$
Therefore: $$a_2 = \frac{-\alpha(\alpha+1)}{1·2}a_0$$ $$a_4 = -\frac{(\alpha-2)(\alpha+3)}{3·4}a_2$$ $$a_4= (-1)^2\frac{[(\alpha-2)\alpha][(\alpha+1)(\alpha+3)]}{1·2·3·4}a_0$$
So the even solution is: $$y_1(x)=1+\sum_{n=1}^\infty{(-1)^n\frac{[(\alpha-2n+2)...(\alpha-2)\alpha][(\alpha+1)(\alpha+3)...(\alpha+2n-1)]}{(2n)!}x^{2n}}.$$ Then, the odd solution is: $$y_2(x)=x+\sum_{n=1}^\infty{(-1)^n\frac{[(\alpha-2n+1)...(\alpha-3)(\alpha-1)][(\alpha+2)(\alpha+4)...(\alpha+2n)]}{(2n+1)!}x^{2n+1}}.$$
Does this look right? This is my first attempt at the series solution method, so I would really appreciate any help. Thanks in advance!
Since the equation is of second order, there will be two arbitrary constants $$a_0$$ and $$a_1$$. You properly established that $$a_{n+2} = \frac{n(n+1)-\alpha(\alpha+1)}{(n+1)(n+2)}a_n\tag 1$$ and to me, this is enough.
The solution is then $$y=a_0+a_1x+\sum_{n=2}^\infty a_n x^n$$ For sure, you could write it as $$y=a_0+a_1x+\sum_{n=1}^\infty a_{2n} x^{2n}+\sum_{n=1}^\infty a_{2n+1} x^{2n+1}$$ and using $$(1)$$ make the coefficients totally explicit solving the recurrence equation given by $$(1)$$.
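Not part of the original thread, but recurrence $$(1)$$ is easy to sanity-check numerically. For a non-negative integer $$\alpha$$ the even (or odd) series terminates, and the resulting polynomial should be proportional to the Legendre polynomial $$P_\alpha$$; the choice $$\alpha = 4$$ below is just an example.

```python
import sympy as sp

x = sp.symbols('x')
alpha = 4                                   # any non-negative integer works here
a = {0: sp.Integer(1), 1: sp.Integer(0)}    # a_0 = 1, a_1 = 0 selects the even solution
for n in range(10):
    a[n + 2] = sp.Rational(n*(n + 1) - alpha*(alpha + 1), (n + 1)*(n + 2)) * a[n]

y1 = sum(a[n]*x**n for n in a)
print(sp.expand(y1))                            # 35*x**4/3 - 10*x**2 + 1
print(sp.simplify(y1 / sp.legendre(alpha, x)))  # 8/3, so y1 is proportional to P_4
```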
In any manner, you did a good job and $$\to +1$$. | 2019-08-25 20:17:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9013161063194275, "perplexity": 237.0569007385665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330800.17/warc/CC-MAIN-20190825194252-20190825220252-00496.warc.gz"} |
http://www.thefullwiki.org/Bargaining_Problem | # Bargaining Problem: Wikis
# Encyclopedia
The two person bargaining problem is a problem of understanding how two agents should cooperate when non-cooperation leads to Pareto-inefficient results. It is in essence an equilibrium selection problem; Many games have multiple equilibria with varying payoffs for each player, forcing the players to negotiate on which equilibrium to target. The quintessential example of such a game is the Ultimatum game. The underlying assumption of bargaining theory is that the resulting solution should be the same solution an impartial arbitrator would recommend. Solutions to bargaining come in two flavors: an axiomatic approach where desired properties of a solution are satisfied and a strategic approach where the bargaining procedure is modeled in detail as a sequential game.
## An example
|          | Opera | Football |
|----------|-------|----------|
| Opera    | 3,2   | 0,0      |
| Football | 0,0   | 2,3      |

Battle of the Sexes 1
The Battle of the Sexes, as shown, is a two player coordination game. Both Opera/Opera and Football/Football are Nash equilibria. Any probability distribution over these two Nash equilibria is a correlated equilibrium. The question then becomes which of the infinite possible equilibria should be chosen by the two players. If they disagree and choose different distributions then they will fail to coordinate and likely receive 0 payoffs. In this symmetric case the natural choice is to play Opera/Opera and Football/Football with even probability. Indeed all bargaining solutions described below prescribe this solution. However if the game is asymmetric (for example Football/Football instead yields payoffs of 2,5) the appropriate distribution becomes less clear. Bargaining theory solves this problem.
## The Formal Description
A 2 person bargain problem consists of a disagreement point v (also known as a threat point) and a feasibility set F. v = (v1,v2), where v1 and v2 are the payoffs after disagreement to player 1 and player 2 respectively. F is a closed convex subset of $\textbf{R}^2$ representing the set of possible agreements. F is convex because an agreement could take the form of a correlated combination of other agreements. Points in F must all be better than the disagreement point as there is no sense to an agreement which is worse than disagreement. The goal of bargaining is to choose the feasible agreement φ in F that would result after thorough negotiations.
### Feasibility Set
The set of possible agreements F depends on if there is an outside regulator affording binding contracts. When binding contracts are allowed any joint action is playable so the feasibility set consists of all attainable payoffs better than the disagreement point. When binding contracts are not allowed the game is said to have moral hazard (as players can defect) and thus the feasibility set only consists of correlated equilibrium, which need no enforcement.
### Disagreement Point
The disagreement point v is the value the players can expect to receive if negotiations break down and no bargain can be reached. Naively this could just be some focal equilibrium which both players could expect to play. However, this point directly affects eventual bargaining solution, so it stands to reason that each player should attempt to choose their disagreement points in order to maximize their bargaining position. Towards this goal, it is often advantageous to simultaneously increase one’s own disagreement payoff while harming one’s opponent's disagreement payoff - hence this point is often known as the threat point. If threats are viewed as actions then we can construct a separate game where each player chooses a threat and receives a payoff according to the outcome of bargaining. This is known as Nash’s variable threat game. Alternatively each player could play a minimax strategy in case of disagreement, choosing to disregard personal reward in order to hurt the opponent as much as possible if they leave the bargaining table.
## Bargaining Solutions
Various solutions have been proposed based on slightly different assumptions about what properties are desired for the final agreement point.
### Nash bargaining solution
John Nash proposed that a solution should satisfy certain axioms: 1) Invariance to affine transformations (equivalently, invariance to equivalent utility representations), 2) Pareto optimality, 3) Independence of irrelevant alternatives, 4) Symmetry. Let us call u the utility function for player 1, v the utility function for player 2. Under these conditions, rational agents will choose what is known as the Nash bargaining solution. Namely, they will seek to maximize $(u(x) - u(d))(v(y) - v(d))$, where u(d) and v(d) are the status quo utilities (i.e. the utility obtained if one decides not to bargain with the other player). The product of the two excess utilities is generally referred to as the Nash product.
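As a quick illustration (not part of the original article), the Nash product can be maximized numerically over the correlated mixtures of the two pure equilibria of the Battle of the Sexes shown above, with the disagreement point (0,0) corresponding to miscoordination; the grid search below is only a sketch.

```python
import numpy as np

# With probability p both players go to the Opera (payoffs 3,2),
# otherwise both go to the Football game (payoffs 2,3).
p = np.linspace(0.0, 1.0, 100001)
u1 = 3*p + 2*(1 - p)
u2 = 2*p + 3*(1 - p)
nash_product = (u1 - 0)*(u2 - 0)        # disagreement utilities are 0
print(p[np.argmax(nash_product)])       # ~0.5: both equilibria weighted equally
```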
### Kalai-Smorodinsky bargaining solution
Independence of Irrelevant Alternatives can be substituted with an appropriate monotonicity condition, thus providing a different solution for the class of bargaining problems. This alternative solution has been introduced by Ehud Kalai and Meir Smorodinsky. It is the point which maintains the ratios of maximal gains. In other words, if player 1 could receive a maximum of g1 with player 2’s help (and vice-versa for g2), then the Kalai-Smorodinsky bargaining solution would yield the point φ on the Pareto frontier such that φ1 / φ2 = g1 / g2 .
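Continuing the illustration (again not from the original article), the same parametrization gives the Kalai-Smorodinsky point for the asymmetric variant mentioned earlier, in which Football/Football yields payoffs of 2,5.

```python
import numpy as np

g1, g2 = 3.0, 5.0                        # maximal gains each player could reach with help
p = np.linspace(0.0, 1.0, 100001)        # weight on Opera/Opera
u1 = 3*p + 2*(1 - p)                     # frontier of the asymmetric game (3,2) vs (2,5)
u2 = 2*p + 5*(1 - p)
idx = np.argmin(np.abs(u1/u2 - g1/g2))   # point where the ratio of gains equals g1/g2
print(p[idx], u1[idx], u2[idx])          # p ~ 0.357, giving roughly (2.36, 3.93)
```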
### Egalitarian bargaining solution
The egalitarian bargaining solution, introduced by Ehud Kalai, is a third solution which drops the condition of scale invariance while including both the axiom of independence of irrelevant alternatives and the axiom of monotonicity. It is the solution which attempts to grant equal gains to both parties.
## Applications
Recently the Nash bargaining game has been used by some philosophers and economists in order to explain the emergence of human attitudes toward distributive justice (Alexander 2000; Alexander and Skyrms 1999; Binmore 1998, 2005). These authors primarily use evolutionary game theory in order to explain how individuals come to believe that proposing a 50-50 split is the only just solution to the Nash Bargaining Game.
## References
• Alexander, Jason McKenzie (2000) "Evolutionary Explanations of Distributive Justice." Philosophy of Science 67: 490-516.
• Alexander, Jason and Brian Skyrms (1999) "Bargaining with Neighbors: Is Justice Contagious" Journal of Philosophy 96(11): 588-598.
• Binmore, K., Rubinstein, A. & Wolinsky, A. (1986). The Nash Bargaining Solution in Economic Modelling. RAND Journal of Economics 17:176-188.
• Binmore, Kenneth (1998) Game Theory and The Social Contract Volume 2: Just Playing Cambridge: MIT Press.
• Binmore, Kenneth (2005) Natural Justice.
• Kalai, Ehud (1977) "Proportional solutions to bargaining situations: Intertemporal utility comparisons" Econometrica 45(7):1623-1630.
• Kalai, Ehud, and Meir Smorodinsky (1975) "Other solutions to Nash’s bargaining problem" Econometrica 43:513-518.
• Nash, John (1950) "The Bargaining Problem" Econometrica 18: 155-162.
• Walker, Paul (2005) History of Game Theory. http://www.econ.canterbury.ac.nz/personal_pages/paul_walker/gt/hist.htm#ref94 | 2015-11-29 17:39:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7291361689567566, "perplexity": 2035.1034558093377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459214.39/warc/CC-MAIN-20151124205419-00237-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://dirkmittler.homeip.net/blog/archives/tag/wan | ## One main reason, for which Smart Home Appliances are cloud-based.
Like many other consumers, I have some ‘smart appliances’ in my home, and a harmless example which I will use, is my Dyson Air Purifier. It gives me features beyond what I would have requested, but a feature which I do appreciate is the ability to access its settings etc., from an app on my smart-phone, regardless of whether I am connected to my own Wi-Fi, the way the appliance is, or whether I am outside somewhere. And this is a feature which most smart appliances offer.
But a question which I could easily picture a consumer asking would be, ‘Why is it necessary for this device to log in to a cloud server, just so that I can access it? Why can this arrangement not work autonomously?’
And I can visualize all sorts of answers, which some consumers could come up with, that might include, ‘Because Big Brother Is Watching Us.’ I tend to be sensitive to certain privacy issues, but also know that in this case, this would not be the main answer. Is Big Brother really so curious about what the air quality is in our homes?
A big reason why these devices need to be logged in to a cloud server, has to do with that last part of what they offer: To give us access to the appliance and its controls, even when we are not on our own LAN (Local Area Network).
## Just performed a wanton reboot of my Modem/Router.
The modem/router which I use for my LAN is a Bell Hub 3000, which I still hold to be a good modem. But lately, I discovered a slight glitch in the way it works. I have given it numerous specialized settings, such as, for example, a “Reserved IP Address” for my new Chromebook.
The problem I ran into was that the modem was executing all my settings without the slightest flaw, but was failing to commit changes to certain settings to non-volatile memory. Apparently, the way the modem is organized internally is, that it has volatile as well as non-volatile memory, which mimic the RAM and the Storage of other, modern devices.
In certain cases, even a full-blown PC could be running some version of an operating system, in which a user-initiated change is accepted and enacted, but only saved to non-volatile storage, when the user logs out successfully.
Well, earlier this evening I had a power failure, after which the modem restarted, but restarted with settings, that predated the most recent settings which I had given it. This was its only offence.
Now, I could go through the ritual of changing all my special settings again, after every power failure, but in reality, that would not do. And so, what I did was to soft-boot the modem, which, just like that poorly programmed desktop manager would, saved all my settings to non-volatile memory. After the reboot, those settings have stuck.
But what it also means is twofold:
1. This blog went down again, from 20h15 until 20h25, in other words, for an extra 10 minutes.
2. And, if there are any readers who examine the IP address log in the side-bar of my blog, they will notice an additional IP address change, simply due to the modem reboot. This will be, between 20h10 and 21h10. This one was not due to any malfunction, but was deliberately triggered by my action.
The process was short but painful, and had to be done.
Dirk
## Two Hypothetical Ways, in which Push Notifications Could Work Over WiFi
The reality is that, being 52 years old and only having studied briefly in my distant past, my formal knowledge in Computing is actually lacking these days, and one subject which I know too little about, is how Push Notifications work. Back in my day, if a laptop was ‘asleep’ – i.e. In Standby – it was generally unable to be woken externally via WiFi, but did have hardware clocks that could wake it at scheduled times. Yet we know that mobile devices today, including Android and iOS devices, are able to receive push notifications from various servers, which do precisely that, and that this feature even works from behind a firewall. And so I can muse over how this might work.
I can think of two ways in which this can hypothetically work:
1. The application framework can centralize the receipt of push notifications for the client device, to one UDP port number. If that port number receives a packet, the WiFi chip-set wakes up the main CPU.
2. Each application that wants to receive them, can establish a client connection to a server in advance, which is to send them.
The problem with approach (1) is that, behind a firewall, by default, a device cannot be listening on a fixed port number, known to it. I.e., the same WAN IP Address could be associated with two devices, and a magic packet sent to one fixed port number, even if we know that IP Address, cannot be mapped to wake up the correct device. But this problem can be solved via UPnP, so that each device could open a listening port number for itself on the WAN, and know what its number is.
We do not always know that UPnP is available for every NAT implementation.
Approach (2) requires more from the device, in that a base-band CPU needs to keep a list, of which specific UDP ports on the client device will be allowed to wake up the main CPU, if that port receives a packet.
Presumably, this base-band CPU would also first verify, that the packet was received from the IP address, which the port in question is supposed to be connected to, on the other side, before waking the main CPU.
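To make approach (2) a little more concrete, here is a rough Python sketch of what such a client could look like. The server address and the one-line 'protocol' are purely made-up placeholders, not any real push service.

```python
import socket

PUSH_SERVER = ("push.example.com", 5228)        # made-up placeholder address

def wait_for_push():
    # The client connects outwards through the NAT in advance, so the mapping
    # back to its LAN port is created passively (port triggering).
    with socket.create_connection(PUSH_SERVER) as s:
        s.sendall(b"SUBSCRIBE device-1234\n")    # made-up registration message
        while True:
            data = s.recv(4096)                  # blocks until the server pushes something
            if not data:
                break                            # connection dropped; a real app would reconnect
            print("push notification:", data.decode(errors="replace"))
```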
(Edit 12/19/2016 : Google can simply decide that after a certain Android API Number – i.e., Android version – the device needs to have specific features, that earlier Android APIs did not require.
Hence, starting from , or , Google could have decided that it was no longer a special app permission, for the user to acknowledge, to wake the device. Likewise, starting from some Android version, possessing a base-band CPU might have become mandatory for the hardware, so that the API can offer a certain type of push notification.)
Also, approach (1) would have as drawback, a lack of authentication. Any networked device could just send this magic packet to any other networked device, provided that both the IP address and the port number it is sensitive to are known.
Approach (2) would bring as an advantage, that only specific apps on the client device could be enabled to receive push notifications, and the O/S would be aware of which UDP ports those are sensitive on, so that the base-band CPU would only be waking up the main CPU, if push notifications were received and associated with an app authorized to wake the device.
Also, with approach (2), the mapping of WAN port numbers back to LAN port numbers would still take place passively, through port triggering, so that the WAN-based server does not need to know, what LAN-based port number the connected port is associated with on the client device.
But, approach (2) has as a real drawback, that a server would need to keep a socket open, for every client it might want to send a push notification to. This might sound unimportant but is really not, since many, many clients could be subscribed to one service, such as Facebook. Are we to assume then, that the Facebook server also keeps one connection open to every client device? And if that connection is ever dropped, should we assume that a sea of client devices reconnect continuously, as soon as their clocks periodically wake them?
Dirk
## My router may have been flashed.
One of the ironies of my LAN is, I am hosting a Web-site from it, but I do not even own my present router. Mine is a router owned by my ISP, and which provides me with proprietary TV as well.
This means that my ISP has the right to perform a firmware update on the router, which can also be called ‘flashing the router’.
I suspect that this may have happened, around Wednesday Morning, November 9.
My main reason for suspecting this, was a subtle change in the way my router manages my LAN.
According to This Earlier Posting, I previously needed to set my router as the WINS server as well.
To explain in lay-terms what this means, I need to mention that the local IP addresses which computers have on a LAN do exist, in addition to which Windows has introduced its own way of giving the local computers names. Linux can mimic how this works, using its ‘Samba’ software suite, but also tries to avoid ‘NetBIOS’ (naming) as much as possible, outside of Samba network browsing, or ‘copying-and-pasting of files, between computers on a LAN’.
Just like domain-names need to be resolved into IP addresses on the Internet, which is the WAN to this LAN – the Wide-Area Network – on the Local Area Network, computer-names also need to be resolved into IP addresses, before the computers can actually ‘talk to each other graphically’. Traditionally, Windows offered its whimsical mechanism for doing so, which was named NetBIOS, by which any and every computer could act as the WINS server – thus offering its repository of LAN locations to the WINS clients, but alternatively, there could also exist one dedicated WINS server.
What I had grown used to, was that on my LAN, the router would insist that it be my WINS server, thus ‘not trusting’ any of my Linux boxes to do so in some way. I therefore had to defer to this service, as provided by the router.
I had previously set my ‘/etc/samba/smb.conf‘ to
wins support = no
dns proxy = yes
Well as of Wednesday, the LAN had suddenly ‘looked different’ according to client-browsers. Each computer had suddenly remained aware only of its own identity, with no Workgroup of other computers to network with.
This was all happening, while my connection to the WAN still seemed secure and operative.
Long story short, I think that my ISP may have performed the Firmware Update, and that according to the new firmware version, the router was suddenly not willing to provide this service anymore. And so what I felt I had to do next, was change these settings back to
wins support = no
dns proxy = no
Now that I have done so, each computer can ‘see’ my whole Workgroup again – which was apparently not feasible according to earlier experiences.
Further, for laypeople I might want to emphasize, that it is not just a frivolous exercise of mine, to give each computer a name. If they did not have names, then according to the screen-shot below, I would also not be able to tell them apart, since they all have the same icon anyway:
Now I suppose that an inquiring mind might ask, “Since Linux can imitate Windows, why does Dirk not just set ‘wins support = yes‘ as well?” My answer to this hypothetical question would be, that
• According to common sense, this option will just make the current machine available, as a potential WINS Server, but
• In my practical experience, the LAN will interpret this as more of an imperative gesture, of a kind that will actually cause a feud to break out between the machines.
In my experience, if I even set one of my Linux machines to do this, all the other Linux machines will refer to its repository of (4) LAN names, the others becoming clients, but the Windows 7 machine named ‘Mithral’ will refuse to have it. In this case, Mithral will insist that it must be the WINS Server, and not some Linux box. And then, further logic-testing of which machines can see which, will reveal that in practice, I must leave this option switched off, if there is also to be any Windows machine to share the LAN with.
Dirk | 2020-09-26 22:30:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21881140768527985, "perplexity": 2102.233175009042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400245109.69/warc/CC-MAIN-20200926200523-20200926230523-00771.warc.gz"} |
https://perceval.quandela.net/docs/notebooks/Boson%20Sampling.html | # Boson Sampling
We are interested in simulating a boson sampling experiment with 14 photons and 60 modes, a size comparable to what was done in Boson Sampling with 20 Input Photons and a 60-Mode Interferometer in a $$10^{14}$$-Dimensional Hilbert Space [1]
[1]:
from IPython import display
from collections import Counter
from tabulate import tabulate
from tqdm.auto import tqdm
import gzip
import pickle
import time
import sympy as sp
import random
import perceval as pcvl
import perceval.lib.symb as symb
We define all the needed values below.
## Perfect Boson sampling
[2]:
n = 14 #number of photons at the input
m = 60 #number of modes
N = 5000000 #number of samplings
### Generating a Haar random Unitary with Perceval
[3]:
Unitary_60 = pcvl.Matrix.random_unitary(m) #creates a random unitary of dimension 60
### A possible linear circuit realization of such matrix would be the following.
Here we define a 2-mode unitary circuit that we can use to decompose the 60 mode unitary
[4]:
mzi = (symb.BS() // (0, symb.PS(phi=pcvl.Parameter("φ_a")))
// symb.BS() // (1, symb.PS(phi=pcvl.Parameter("φ_b"))))
pcvl.pdisplay(mzi)
[4]:
Let us decompose the unitary into a Reck’s type circuit [2] - this makes a huge circuit…
[ ]:
Linear_Circuit_60 = pcvl.Circuit.decomposition(Unitary_60, mzi,
phase_shifter_fn=symb.PS,
shape="triangle")
### Running Simulation
Now we choose the way to perform the simulation with Perceval. The number of photons is within what we could simulate with a Naive backend (see here), however, the output space is far too big just to enumerate and store the states - so let us go with sampling using CliffordClifford2017 backend (see here).
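As an aside that is not part of the original notebook: the reason brute-force enumeration is hopeless is that every output probability involves the permanent of an n x n submatrix of the unitary, and the number of possible outputs grows combinatorially. The self-contained NumPy sketch below uses Ryser's formula on a small random unitary, just to illustrate the kind of computation a permanent-based simulation has to perform; the matrix sizes are deliberately tiny.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(M):
    """Permanent via Ryser's formula - the cost is exponential in the matrix size."""
    n = M.shape[0]
    total = 0j
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1)**(n - k) * np.prod(M[:, list(cols)].sum(axis=1))
    return total

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j*rng.normal(size=(6, 6))
U_small, _ = np.linalg.qr(A)                  # small Haar-like unitary for illustration
sub = U_small[np.ix_(range(4), range(4))]     # rows/columns picked by input/output modes
print(abs(permanent_ryser(sub))**2)           # proportional to a single output probability
```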
[7]:
Sampling_Backend = pcvl.BackendFactory().get_backend("CliffordClifford2017")
Select a random input:
[8]:
#one can choose which mode he/she wants at input, or we can choose it randomly
def Generating_Input(n, m, modes = None):
"This function randomly chooses an input with n photons in m modes."
if modes == None :
modes = sorted(random.sample(range(m),n))
state = "|"
for i in range(m):
state = state + "0"*(1 - (i in modes)) +"1"*(i in modes)+ ","*(i < m-1)
return pcvl.BasicState(state + ">")
input_state = Generating_Input(n, m)
print("The input state: ", input_state)
The input state: |0,0,0,0,0,1,1,1,0,1,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,1,1,0,0,1,0,0,0,0,0>
Just to check that it outputs state vectors with n photons in m modes.
[9]:
print("The sampled outputs are:")
for _ in range(10):
    print(Sampling_Backend(Unitary_60).sample(input_state))
The sampled outputs are:
|0,0,0,0,0,1,0,2,0,1,0,0,0,0,1,0,0,0,0,1,0,1,0,1,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0>
|0,0,0,0,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,2,2,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,1,1>
|0,0,0,1,0,0,1,0,0,1,0,0,1,0,1,0,0,0,0,0,1,0,0,1,0,0,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,2,0,0,0,0,0,0,0>
|0,0,1,0,0,0,0,1,0,0,0,0,0,1,0,2,0,0,0,1,0,0,0,0,0,1,2,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0,0>
|0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,1,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,0,1,0>
|2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,1,1>
|0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,1,3,1,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0>
|1,0,0,1,0,0,0,2,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,2,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,1,0,0,0,0,0,1,0,0>
|2,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,1,0,0,1,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,0,1,0,0,0>
|0,0,0,0,0,0,1,0,0,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,2,0,0,0,0,2,0,2,0,0,0,0,0,0,0,1,0,0,0>
We carry out the sampling, we do it N times, it will take some time, let us save the results to a file:
[ ]:
# if we want to launch parallel process
worker_id=1
#store the input and the unitary
with open("%dphotons_%dmodes_%dsamples-worker%s-unitary.pkl" %(n,m,N,worker_id), 'wb') as f:
    pickle.dump(Unitary_60, f)
with open("%dphotons_%dmodes_%dsamples-worker%s-inputstate.pkl" %(n,m,N,worker_id), 'w') as f:
    f.write(str(input_state)+"\n")
with gzip.open("%dphotons_%dmodes_%dsamples-worker%s-samples.txt.gz" %(n,m,N,worker_id), 'wb') as f:
    start = time.time()
    for i in range(N):
        f.write((str(Sampling_Backend(Unitary_60).sample(pcvl.BasicState(input_state)))+"\n").encode())
    end = time.time()
    f.write(str("==> %d\n" % (end-start)).encode())
f.close()
After a little over 4 hours on a 3.1GHz Intel we have 5M samples. We launched this on 32 threads for 2 days and collected 300 batches of 5M samples
Let us analyze the K-first mode bunching on these samples
[ ]:
import gzip
[ ]:
worker_id = 1
count = 0
bunching_distribution = Counter()
with gzip.open("%dphotons_%dmodes_%dsamples-worker%s-samples.txt.gz"%(n,m,N,worker_id), "rt") as f:
    for l in f:
        l = l.strip()
        if l.startswith("|") and l.endswith(">"):
            try:
                st = pcvl.BasicState(l)
                count += 1
                bunching_distribution[st.photon2mode(st.n-1)] += 1
            except Exception:
                pass
print(count, "samples")
print("Bunching Distribution:", "\t".join([str(bunching_distribution[k]) for k in range(m)]))
These numbers have been used on 300 samples for certification - see our article on Perceval for more details.
## Boson sampling with non perfect sources
Let us now explore performing Boson sampling with a non-perfect source. We declare a source with 90% brightness and 90% purity.
[ ]:
source = pcvl.Source(brightness=0.90, purity=0.9)
QPU = pcvl.Processor({1:source,2:source, 3:source }, Linear_Circuit_60)
We can see what is the source distribution, so how likely a state at the input of the linear circuit will be.
[ ]:
pcvl.pdisplay(QPU.source_distribution, precision=1e-7, max_v=20)
[1]: Hui Wang, et al. Boson Sampling with 20 Input Photons and a 60-Mode Interferometer in a $$10^{14}$$-Dimensional Hilbert Space. Physical Review Letters, 123(25):250503, December 2019. Publisher: American Physical Society.
[2]: Michael Reck, Anton Zeilinger, Herbert J Bernstein, and Philip Bertani. Experimental realization of any discrete unitary operator. Physical review letters, 73(1):58, 1994. | 2022-10-06 20:48:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4320037066936493, "perplexity": 11526.400325001703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00742.warc.gz"} |
http://zabradi.web.elte.hu/index.html | Gergely Zábrádi's homepage
### Email: zger 'at' cs.elte.hu
Bulletin board for my students (in Hungarian)
Research interests:
My field of interest lies in Algebraic Number Theory; however, it is twofold.
I completed my PhD in Cambridge under the supervision of John Coates on noncommutative Iwasawa theory for elliptic curves. The arithmetic of elliptic curves and especially the conjectures of Birch and Swinnerton-Dyer have long been at the centre of research in Arithmetic Algebraic Geometry. One of the most powerful tools known at present for attacking these conjectures is Iwasawa theory. The main idea of Iwasawa theory is to relate various arithmetic objects to complex $$L$$-functions via a so-called $$p$$-adic $$L$$-function. This arithmetic object could be the ideal class group of a number field, the Selmer group of an elliptic curve, or more generally of an abelian variety, or even of a motive. The $$p$$-adic $$L$$-function - in most cases conjecturally - interpolates special values of the twisted complex $$L$$-functions of the arithmetic object. On the other hand, it is supposed to be - by the Main Conjecture - a characteristic element of the Selmer groups.
I did a post-doc at the Westfälische Wilhelmsuniversität Münster with Peter Schneider. There I learnt a lot about representation theory of p-adic groups and the $$p$$-adic Langlands programme. The (global) Langlands programme is a huge web of conjectures that relates Galois representations of number fields to automorphic representations (which are - in a certain sense - generalizations of modular forms). The local Langlands conjectures (which are now theorems for $$\mathrm{GL}_n$$ ) relate the (continuous) representation theory of the absolute Galois group (or in fact the Weil-Deligne group) of local fields (such as the field $$\mathbb{Q}_p$$ of $$p$$-adic numbers) in finite dimensional vector spaces over $$\mathbb{C}$$ to the smooth representations of reductive algebraic groups over the local field in (infinite dimensional) vector spaces over $$\mathbb{C}$$. However, if we allow continuous representations in vector spaces over other fields (such as $$\overline{\mathbb{Q}_p}$$ or $$\overline{\mathbb{F}_p}$$ ) on both the Galois and reductive group sides, we obtain many more representations. Precisely formulated conjectures describing how these should correspond to each other (if at all) are still missing. However, (through the work of Fontaine, Colmez, Breuil, Paskunas, Berger, Kisin, Emerton, Schneider, Vigneras, and others) it has become increasingly clear that some kind of $$p$$-adic (and also mod $$p$$ ) Langlands correspondence should exist. In fact, Colmez managed to prove such a correspondence for $$\mathrm{GL}_2(\mathbb{Q}_p)$$.
Events:
Past Events:
Slides:
Talk at ELTE on Representations of p-adic linear groups (in Hungarian).
Documents/theses for my habilitation (partly in Hungarian)
Papers and preprints:
14. Cohomology and overconvergence for representations of powers of Galois groups (jt. with Aprameyo Pal), submitted pdf, arxiv:1705.03786
13. The p-adic Hodge decomposition according to Beilinson (jt. with Tamás Szamuely), final pdf, to appear in the proceedings of the 2015 AMS Summer Institute in Algebraic Geometry, arxiv:1606.01921
12. Multivariable (φ,Γ)-modules and products of Galois groups, revised preprint, to appear in Math. Research Letters, also available on the arxiv:1603.04231
11. On twists of modules over noncommutative Iwasawa algebras (jt. with T. Ochiai and S. Jha), Algebra & Number Theory 10(3) (2016), 685-694, arxiv:1512.07814
10. Multivariable (φ,Γ)-modules and smooth o-torsion representations, read-only pdf, to appear in Selecta Mathematica, arxiv:1511.01037
9. Links between generalized Montréal functors (jt. with M. Erdélyi), read-only pdf, Mathematische Zeitschrift 286(3-4) (2017), 1227-1275, arxiv:1412.5778
8. Algebraic functional equations and completely faithful Selmer groups (jt. with T. Backhausz), International Journal of Number Theory 11(4) (2015), 1233-1257, arxiv:1405.6180.
7,5. A note on central torsion Iwasawa modules , not intended for publication, incorporated into the paper "Algebraic functional equations and completely faithful Selmer groups", pdf
7. From étale P+-representations to G-equivariant sheaves on G/P (jt. with P. Schneider and M.-F. Vigneras), in: LMS Lecture Note Series 415 `Automorphic Forms and Galois Representations' (eds.: F. Diamond, P. Kassaei, M. Kim) Volume 2 (2014), 248-366, preprint version, arxiv:1206.1125
6. (φ,Γ)-modules over noncommutative overconvergent and Robba rings, Algebra & Number Theory 8(1) (2014), 191-242, preprint version, arxiv:1208.3347
5. Exactness of the reduction on étale modules, J. of Algebra 331 (2011), 400-415, available on the arxiv:1006.5808
4. Generalized Robba rings (with an Appendix by Peter Schneider), Israel J. Math. 191(2) (2012), 817-887, available on the arxiv:1006.4690
3. Pairings and functional equations over the GL2-extension, Proc. London Math. Soc. (2010) 101 (3), 893-930, pdf
2. Characteristic elements, pairings and functional equations over the false Tate curve extension, Math. Proc. Camb. Phil. Soc. 144 (2008), 535-574, pdf.
1. On irregularities in the graph of generalized divisor functions, Acta Arith., 110 (2003), 165-171, pdf.
Habilitation thesis, submitted at ELTE (2016): Functorial relations in the p-adic Langlands programme.
PhD thesis, Trinity College, University of Cambridge (2008): Characteristic elements, pairings, and functional equations in non-commutative Iwasawa theory.
Students:
Papers under my supervision
1. Tibor Backhausz (ELTE): Ranks of GL2 Iwasawa modules of elliptic curves, (2013) arxiv, Functiones et Approximatio, Commentarii Mathematici 52(2) (2015), 283-298, 1st prize at Hungarian student research competition (OTDK)
2. Tamás Csige (ELTE): $$K_0$$-invariance of the completely faithful property of Iwasawa-modules, (2014) arxiv
3. Márton Erdélyi (CEU): On the Schneider-Vigneras functor for principal series, Journal of Number Theory 162 (2016), 68-85, arxiv
4. Tamás Csige (ELTE): The Grothendieck group of completed distribution algebras, (2016) arxiv
1. Márton Erdélyi (CEU), 2011-2015, thesis: Computations and comparison of generalized Montréal functors
2. Tamás Csige (ELTE, Humboldt - co-supervised by Elmar Grosse-Klönne), 2012-2016, thesis: $$K$$-theoretic methods in the representation theory of $$p$$-adic analytic groups
1. Siddharth Mathur (CEU): Local Class Field Theory and Lubin-Tate Extensions: An Explicit Construction of the Artin Map (2012) pdf
2. Tamás Csige (ELTE): Fields of norms (Normák testei), in Hungarian (2012) pdf
3. Péter Kutas (ELTE): Galois representations (2013) pdf
4. Dávid Szabó (ELTE): p-adic Galois representations and (φ,Γ)-modules (2015) pdf | 2017-09-21 14:16:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7571423053741455, "perplexity": 2617.9881311077806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687820.59/warc/CC-MAIN-20170921134614-20170921154614-00301.warc.gz"} |
https://athhallcamreview.com/edges-not-covered-by-a-monochromatic-bipartite-graph-arxiv2210-11037v1-math-co/ | ##### What's Hot
Let $f_k(n,H)$ denote the maximum number of edges not contained in any monochromatic copy of $H$ in a $k$-coloring of the edges of $K_n$, and let $ex(n,H)$ denote the Turán number of $H$. Instead of $f_2(n,H)$ we simply write $f(n,H)$. Keevash and Sudakov proved that $f(n,H)=ex(n,H)$ if $H$ is an edge-critical graph or $C_4$, and asked whether this equation holds for every graph $H$. All known exact values for this question require $H$ to contain at least one cycle. This paper focuses on acyclic graphs and obtains the following results.
(1) If $H$ is a spider or a broomstick, we prove that $f(n,H)=ex(n,H)$.
(2) A \emph{tail} of $H$ is a path $P_3=v_0v_1v_2$ in $H$ such that $v_2$ is adjacent only to $v_1$ and $v_1$ is adjacent only to $v_0$ and $v_2$. If $H$ is a bipartite graph with tails, we obtain a strict upper bound on $f(n,H)$. This result provides the first bipartite graph that answers Keevash and Sudakov's question negatively.
(3) Liu, Pikhurko, and Sharifzadeh asked whether $f_k(n,T)=(k-1)ex(n,T)$ holds when $T$ is a tree. We provide an upper bound on $f_{2k}(n,P_{2k})$ and show that it is tight when $2k-1$ is prime. This provides a negative answer to their question.
Share. | 2023-03-31 10:26:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8496042490005493, "perplexity": 219.2992593520148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00609.warc.gz"} |
http://math.stackexchange.com/questions/219841/karoubi-envelope-and-factorization-of-a-functor | # Karoubi envelope and factorization of a functor
I am trying to understand the solution to the following exercise.
Let $\mathcal{C}$ be a category with idempotents $e:A\to A$ and $d:B\to B$, and a morphism $f: A \to B$.
The Karoubi envelope $\bar{\mathcal{C}}$ is a category which has as objects the idempotents of $\mathcal{C}$ and as morphism $e\to d$ those morphisms $f: \text{dom}\,e \to \text{dom}\, d$ in $\mathcal{C}$ for which $f\cdot e = f = d\cdot f$. The Karoubi envelope $\bar{\mathcal{C}}$ splits the idempotents in $\mathcal{C}$
We let the functor $E:\mathcal{C}\to \bar{\mathcal{C}}$ be the functor that maps $A\mapsto 1_A$, and the identity on morphisms. It embeds $\mathcal{C}$ in $\bar{\mathcal{C}}$.
Now if $e:A\to A$ is an idempotent in $\mathcal{C}$, in $\bar{\mathcal{C}}$ there is a pair of maps $\check{e}:1_A\to e$ and $\hat{e}: e\to 1_A$ which split the idempotent.
The claim is: any arbitrary functor $F:\mathcal{C}\to \mathcal{D}$ can be factored as $F=\tilde{F}E$ $\iff$ it sends idempotents of $\mathcal{C}$ to split idempotents of $\mathcal{D}$.
Proof:
• Forward direction: should be fairly trivial. All idempotents split in $\bar{\mathcal{C}}$, and functors preserve split idempotents.
• Reverse direction: here I have some problems. For objects, I let $\tilde{F}A = \text{dom}(FA)$. This way $\tilde{F}(E(A)) = \tilde{F}(1_A) =\text{dom}(1_{FA})$ as required. For morphisms, since for each $f:A\to B$ in $\mathcal{C}$ there is a $f:e\to d$ in $\bar{\mathcal{C}}$, I would just set $\tilde{F}(f) = F(f)$. So $\tilde{F}(f:e\to d) = F(f:A\to B)$.
However, my solutions give the following.
Suppose $F$ splits idempotents $e$ as $\hat{e}\check{e}$. Then $\tilde{F}(f) = \check{d}\, F(f)\,\hat{e}$, where $f$ is defined as above. The solutions claim that it is the only way to choose $\tilde{F}$ such that it preserves splits. I do not understand why this is necessary. What's wrong with setting $\tilde{F}f = Ff$?
Your definition of the morphisms in $\bar{\mathcal{C}}$ is incorrect. $\DeclareMathOperator{\dom}{dom}$ It consists of morphisms $f : \dom e \to \dom d$ in $\mathcal{C}$ such that $d f e = f$. This means $e : \dom e \to \dom e$ is the identity morphism in $\bar{\mathcal{C}}$. – Zhen Lin Oct 24 '12 at 6:50 | 2015-04-21 13:42:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9841861128807068, "perplexity": 118.06880278269018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641468.77/warc/CC-MAIN-20150417045721-00251-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.nature.com/articles/s41598-020-74054-4?error=cookies_not_supported&code=bfc15272-736e-4fa5-a2a0-b932aefdc4c8 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations
## Abstract
Tractography has created new horizons for researchers to study brain connectivity in vivo. However, tractography is an advanced and challenging method that has not been used so far for medical data analysis at a large scale in comparison to other traditional brain imaging methods. This work allows tractography to be used for large scale and high-quality medical analytics. BUndle ANalytics (BUAN) is a fast, robust, and flexible computational framework for real-world tractometric studies. BUAN combines tractography and anatomical information to analyze the challenging datasets and identifies significant group differences in specific locations of the white matter bundles. Additionally, BUAN takes the shape of the bundles into consideration for the analysis. BUAN compares the shapes of the bundles using a metric called bundle adjacency which calculates shape similarity between two given bundles. BUAN builds networks of bundle shape similarities that can be paramount for automating quality control. BUAN is freely available in DIPY. Results are presented using publicly available Parkinson’s Progression Markers Initiative data.
## Introduction
The human brain contains billions of axons that bundle together in tracts and fasciculi. These can be reconstructed in vivo by collecting diffusion MRI data1,2,3 and deploying tractography algorithms4,5,6,7. The outputs of tractography algorithms are called tractograms. These tractograms are represented digitally using streamlines, which are representations of 3D curves traversing the brain. Whole-brain tractograms are densely populated with millions of streamlines, which makes it difficult to visually and computationally inspect and characterize brain pathways. When streamlines of similar shapes and characteristics travel together through the white matter, they are called bundles. These bundles approximate the white matter fiber bundles that connect distant parts of the brain to each other and are also known as tracts or fasciculi. These bundles of axons carry crucial information between cortical and/or subcortical areas, and potential damage to these bundles, for example from surgery, trauma, or disease, can have tremendous consequences for the patient’s cognitive function and quality of life. Every bundle has a different functional association or set of functional associations. For example, the arcuate fasciculus is involved in the understanding of language8 while the optic radiation is involved primarily with visual processing9,10. Therefore, a valuable capability for the analysis of digital models of white matter is the ability to automatically identify known bundles contained within whole-brain tractograms. This process is known as bundle recognition11, extraction, or segmentation.
Manual virtual dissection12,13,14 and automatic bundle extraction11,15,16,17,18,19,20 of white matter bundles have enabled the scientific community to gather thousands of extracted fiber tract exemplars from source tractograms and visualize them in vivo. With the rise of machine learning in the field of neuroimaging, we are able to automate complex and unwieldy tasks such as white matter fiber bundle segmentation from whole-brain tractograms. Segmenting bundles has become convenient, efficient, and fast11,21, and this practice has resulted in the generation of exceedingly large data sets22,23,24,25,26. With a great amount of data available to them, the neuroscience community can now proceed to perform statistically sophisticated group comparisons on the acquired tracts. In the past decade, a plethora of methods for analyzing tractograms and combining them with anatomical information have been developed20,27,28,29,30,31,32,33,34. These analytical methods, which combine tractography with anatomical measures such as fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity (AD)35,36 to study pathways, are termed tractometric analysis methods31. Furthermore, tractometric analysis methods look at how bundles differ between specified groups. Recently, many studies27,28,29,30,31,32,33,34,37 have applied statistical methods to the study of group differences along the length of tracts. The resulting along-tract summaries are often called bundle profiles28.
In the field of neuroimaging, many different methods exist separately27,28,29,30,31,32,33,34 for the different tasks required to perform tractometric studies. Building a complete pipeline requires careful planning and forethought, especially for clinical applications. However, no end-to-end pipeline offering a single platform for extracting white matter bundles and performing group comparisons on them has achieved wide acceptance or prominence.
In our work, we present BUndle ANalytics (BUAN), an end-to-end computational framework that can precisely extract bundles, perform statistical analyses across groups, and compare the shapes of different bundles. Our framework starts by taking a whole-brain target tractogram as input and then performs streamline-based registration of the tractogram to MNI space using an atlas of exemplar bundles (template). It next proceeds to bundle extraction, wherein it applies the auto-calibrated version of RecoBundles11 (see "Auto-calibration" section), which is sufficiently robust to permit the extraction of particularly long or short tract bundles. For each of the tracts extracted, BUAN generates a tract profile for each subject. BUAN then performs statistical analysis on the bundle profiles to discover group differences along the length of tracts based on anatomical measures such as fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), axial diffusivity (AD)35,36, Constant Solid Angle-generalized fractional anisotropy (CSA-GFA)38,39, and Constant Solid Angle-quantitative anisotropy (CSA-QA)40. Indeed, BUAN is sufficiently robust that a wide array of anatomical information can be integrated into the analysis. Group comparisons on bundle profiles are performed using linear mixed models41,42,43. BUAN is capable of precisely localizing group differences to specific subsections of bundles. BUAN has the further advantage of permitting shape analysis of extracted tracts. In this paper, we present a novel graph-theoretic approach to comparing the shapes of two bundles of the same type using the bundle adjacency (BA) method21. Bundle adjacency is a bounded measure of how similar the shapes of two bundles are; its score ranges from 0 to 1. We construct a network of bundles across the subjects and calculate bundle adjacency (BA) among them. This results in a network graph/adjacency matrix that gives insights about inter-group and intra-group shape differences. This method can be used for quality assurance and is performed in common ’atlas’ space. Furthermore, BUAN operates without the need for nonrigid deformations, a process that could potentially make it harder to interpret group differences.
Our fully automated, streamline-based approach enables us to create a robust, fast, and flexible computational framework. It is important to note that our framework is applicable to every kind of diffusion brain imaging data, including hard-to-process clinical data. It does not require heavy training on large amounts of data, nor does it depend on any training data set. BUAN is publicly available in DIPY44 through Python scripts and command-line interfaces.
## Results
### Overview
Here, we provide an overview of BUAN. Fig. 1A shows the process of bundle extraction. We registered our input tractogram [target tractogram (A.a)] to the model atlas (A.b) space using streamline-based linear registration (SLR)45. For extracting bundles from whole-brain tractograms, we applied RecoBundles (RB)11. RB takes the registered tractogram (A.c) and model bundle (A.d) as input and extracts the bundle (A.e). A new auto-calibration step in RecoBundles (refer to "Auto-calibration" section for details) has been added for extracting small and difficult-to-find bundles. (A.f) shows the final extracted bundle after auto-calibration (refinement). Fig. 1B shows the next step in the process, wherein BUAN divides the final extracted bundle, the left arcuate fasciculus (AF_L), into small segments. We call this step the assignment step because every point of the bundle is assigned to the closest point of the model centroid. In (B.a) we see a depiction of a given bundle, and in (B.b) we see model bundle centroids projected onto the bundle (B.a). In (B.c), each point on each streamline is assigned to the nearest centroid point (shown with random colors). In Fig. 1C, we see the tractometric part of our framework, where extracted and segmented bundles from all subjects are provided as input, as shown in (C.a). Here, the red box indicates patient data while the green box indicates healthy control data. We found significant differences across populations using information from anatomical measures such as fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), axial diffusivity (AD), generalized fractional anisotropy (GFA), and several other metrics. We applied linear mixed models (LMM)41,42,43 to find group differences across subjects for all bundles at specific locations. In (C.b) we see the exact segment where anatomical measures differed between groups (e.g., FA is different between patients and controls). Notice in (C.c) that FA and CSA-GFA differed at segments 65–75 while MD and AD differed at segments 45–55, as indicated by p value < 0.001. BUAN can locate and visualize exactly where a significant difference is found between the two populations. We can then build a more complete picture of the data by looking into the shape differences. This is shown in Fig. 1D. In (D.a), we took extracted bundles from all subjects as input; once more, the red box indicates patient data while the green box indicates healthy control data (similar to Fig. 1C). However, this time the input was extracted bundles without any information associated with segments (assignment maps). Explicitly, we only provided the bundles themselves. The goal was thus to see the shape differences along the lengths of the bundles and study their sub-clusters. In (D.b), BUAN created a similarity matrix by calculating bundle adjacency (BA)21, our proposed shape similarity metric, between each subject’s bundle and every other subject’s bundle. Higher BA values (dark blue color) indicate higher shape similarity and lower BA values (light blue color) indicate lower shape similarity among bundles. Using the shape similarity matrix information, we can go back and visually inspect the shape of the bundles and the BA scores they were assigned. An example is shown in (D.c). Bundle adjacency is a bounded measure and takes values between 0 and 1, such that 0 means no shape similarity, i.e., no similar adjacent clusters of streamlines were found between the two bundles, and 1 means maximum similarity, i.e., all clusters of both bundles had at least one neighbor.
In this work we used data from the publicly available Parkinson’s Progression Markers Initiative (PPMI) database46. In the following sections, we present results generated by BUAN on 30 extracted white matter fiber bundles for 64 subjects: 32 controls and 32 patients.
The complete description of the methods and data summarized here can be found in "Methods" section.
### Bundle profiles
Bundle profiles were generated for 30 extracted white matter bundles of all 64 subjects by creating assignment maps of the extracted bundles. Each assignment map contained 100 segments per bundle. All were generated in common space. See "Assignment maps" section for details about the method. Data were then transformed back into native space to project the anatomical measures, fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), axial diffusivity (AD)35,36, Constant Solid Angle-generalized fractional anisotropy (CSA-GFA)38,39, and Constant Solid Angle-quantitative anisotropy (CSA-QA)40, onto the segments of the bundles. Throughout the process of bundle profile creation, no data was discarded. We did not smooth, deform, or sub-sample our tracts. We used all the points of each streamline and all the streamlines of the bundles. Comparisons between groups (patients vs controls) were done using linear mixed models (LMM), which adjusted for the correlations between the streamline data extracted from the same subject’s tract. Fig. 2 shows the analysis summary for the 100 segments of each tract. Information about the white matter tracts used in the study can be found in the supplementary materials, Appendix A.1 Fig. A1. Statistically significant differences (p value < 0.001) between the groups were found in FA along the IFOF_L and IFOF_R tracts between segments 55 and 65, as shown in Fig. 2. We also found group differences on the FPT_R bundle at segments 30–37, MdLF_R at segments 25–29 and 30–35, ILF_R at segments 72–78, OPT_R at segments 70–75 and 85–90, OR_L at segments 38–45, and finally UF_L at segments 85–95. The LMM result plots for the other anatomical measures can be found in the supplementary materials Appendix A.2: for MD Fig. A4, RD Fig. A2, AD Fig. A3, CSA-GFA Fig. A5, and CSA-QA Fig. A6.
We found group differences for FA, MD, RD, AD, CSA-GFA, and CSA-QA across different bundles. The LMM plots can be found in the supplementary materials in Appendix A.2: MD results are available in Fig. A4, RD in Fig. A2, AD in Fig. A3, CSA-GFA in Fig. A5, and CSA-QA in Fig. A6. The summarized version of the results for all 6 anatomical measures is given in Fig. 3. The locations on the bundles are selected based on the criterion of p values < 0.001 for group differences at that location. We found consistency in the results for the same bundles, with differences at similar locations for the FA, RD, GFA, and QA metrics/measures. We also found the results for MD, RD, and AD to have the same locations with significant differences. We examined the locations mentioned in Fig. 3 to find actual differences between patient data and control data at those locations. FA, CSA-GFA, and CSA-QA values increase in patients: the average anisotropy values of all 3 of these metrics are higher in patient data than in healthy control data. AD, RD, and MD values decrease in patients: the average diffusivity value in patients is lower than in healthy controls.
Fig. 4 depicts BUAN’s bundle profile analysis process. On the top, we have the left Inferior Fronto-occipital Fasciculus (IFOF_L) and on the bottom, the right Inferior Fronto-occipital Fasciculus (IFOF_R). For both IFOF_L and IFOF_R, we have (A) the bundle divided into 100 segments. (B) shows LMM result plots for the IFOF bundles for the FA, RD, CSA-GFA, and CSA-QA measures. We can see that FA, RD, CSA-GFA, and CSA-QA are significantly different at segments 58–62 on the IFOF bundles for the patient and control groups. The highlighted yellow segment in (C) points out exactly where segments 58–62 lie on both the left and right IFOF bundles. Locations on the bundles are selected by using segments with highly significant differences (p value < 0.001) provided by the linear mixed models results. We selected segments 58–62 on the IFOF bundles of all 64 subjects, 32 controls and 32 patients, and calculated the mean FA, RD, CSA-GFA, and CSA-QA at the highlighted yellow location shown in (C) per bundle. (D) shows histogram plots for the mean FA, RD, CSA-GFA, and CSA-QA of the control (green) and patient (red) groups at the location of segments 58–62 on the bundles. On the y-axis, we have the number of subjects, and on the x-axis, we show the FA/RD/CSA-GFA/CSA-QA values. FA, CSA-GFA, and CSA-QA values increase in patients, and RD values decrease in patients as compared to the control group.
### Bundle shape similarity
We performed shape analysis of the bundles using bundle adjacency (BA) for all 30 bundles of all 64 subjects. See "Shape analysis using bundle adjacency" section for an explanation of the method. Results generated by bundle adjacency (BA) are used to create a connected bundle graph network that is represented in a compact manner as similarity matrices, one similarity matrix for each bundle. Fig. 5A shows the similarity matrices for the 30 bundles created using BA for 64 subjects. Each similarity matrix is a $$64\times 64$$ matrix. The first 32 rows of the matrices (subjects 0–31) belong to the control data, and the last 32 rows (subjects 32–63) belong to the patient data. A darker blue color in the similarity matrix means the bundles have higher BA shape similarity scores, while blue color converging to white shows the least similarity in shape among the bundles. The BA threshold used is 5 mm, which is at the stricter end of the spectrum for calculating shape similarity among bundles. For some subjects, we cannot extract a specific bundle, or the bundle is very thin. In this case, the similarity matrix of the bundle has a white line in the subject’s row and column. Therefore, shape similarity matrices also play the role of quality assurance. Similarity matrices detect outlier subjects who have lower BA scores with other subjects’ bundles. These matrices tell us which bundles are prominent and easily extractable among subjects. Here, we observe CC_ForcepsMinor, MLF_L, MLF_R, ML_L, ML_R, UF_L, and UF_R to have dark similarity matrices. This implies that the shape of all these bundles is highly consistent across the subjects and that these sets of bundles are readily extractable in all the subjects. The AF_R, ILF_R, OPT_L, and V have white lines in their similarity matrices. This suggests these four bundles were missing from some subjects and, where they were available, differed in shape overall. Note that the BA threshold used here is extremely strict in giving similarity scores. Most of the bundles tend to have darker similarity matrices, demonstrating robust and good-quality bundle extraction by the BUAN framework. In order to find clusters and patterns in the BA similarity matrices, we applied hierarchical clustering to the similarity matrices. Fig. 5B shows hierarchical clustering of the similarity matrices of two bundles, AF_L and AF_R respectively. Here, we have added a label row and column to show the group of each observation. An observation (subject) belongs to the patient group when the label color is red and to the control group when the label color is green. Notice that the AF_L bundle has two bigger clusters and the AF_R bundle similarity matrix is divided into five clusters. Hierarchical clustering results of the similarity matrices can be interpreted as clusters comprised of subjects who have similar bundle shapes. Results for hierarchical clustering on all 30 bundles can be found in the supplementary materials Appendix A.3 Fig. A7.
### Comparisons with AFQ
The same preprocessing and bundle extraction methods were used in creating the Automated Fiber-Tract Quantification (AFQ)29 and BUAN bundle profiles. Both AFQ and BUAN bundle profile analyses were run on the PPMI data set comprised of 64 subjects, 32 controls and 32 patients. The subject bundles and anatomical files used for analysis are exactly the same for both bundle profile methods. The AFQ method generates one mean bundle profile per subject; we took an average of the 32 patient subjects’ bundle profiles to create one bundle profile for the patient group and averaged the 32 control subjects’ bundle profiles to create one mean bundle profile representing the control group. AFQ does not provide statistical analysis beyond the generation of profiles. To get the areas of significant group differences, we ran a 2-sample independent t-test on the AFQ bundle profiles along the length of the tract. The AFQ method generates a mean streamline (bundle profile) per subject with 100 equidistant points. For a given bundle, a t-test was run to get the group difference significance at each point. In Fig. 6, we present results from the two methods using 5 bundles: AF_L, CST_L, IFOF_R, OR_R, and UF_R. In Fig. 6, both (A) AFQ and (B) BUAN bundle profile analysis results are plotted. In both, the first row has mean bundle FA plots for the 5 bundles and the second row has mean bundle MD plots. The control group is shown in green and the patient group in red. Plots have mean anatomical values on their left y-axis, length along the tract on the x-axis, and the negative logarithm of p values on the right y-axis. In all the plots, the lower horizontal line indicates a p value < 0.01 and the upper horizontal line indicates a p value < 0.001. In Fig. 6A, for FA, we see group differences in AF_L at the 40–90 area of the tract; for CST_L at 30–50 and 55–65, with a slight difference throughout the rest of the tract; for IFOF_R, the 1–55 area shows the bigger difference, with a relatively small difference in the rest of the tract; for both OR_R and UF_R there are group differences throughout the tract. In the case of MD, we observe slight changes between groups for all 5 bundles. In the IFOF_R tract, we see a spike in the patient group’s mean at the 20–25 area along the tract. In UF_R, we see a huge spike in the patient group’s mean at the 25–40 area along the tract. In Fig. 6B, we consider segments with p values in the range of 0.001–0.01 to be significantly different. For FA, we observe significant group changes in AF_L at segment 70, in CST_L at segments 65–70, in IFOF_R at segments 55–60, in OR_R at segments 35–42, and in UF_R at segment 65. In the case of MD, we observe significant group differences in IFOF_R at segments 58–74, segments 78–82, and segment 90, and in the OR_R tract at segments 35–42, segments 50–60, and segments 64–76. We do not see significant group differences in the AF_L, CST_L, and UF_R tracts.
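As a sketch, the point-wise comparison described above for the AFQ profiles reduces to an independent two-sample t-test at each of the 100 nodes; the array names below are hypothetical stand-ins for the real profiles:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical inputs: mean bundle profiles (e.g., FA) for one tract,
# one row per subject, one column per node along the tract.
control_profiles = np.random.rand(32, 100)  # stand-in for real data
patient_profiles = np.random.rand(32, 100)

# Two-sample independent t-test at each of the 100 nodes.
t_vals, p_vals = ttest_ind(control_profiles, patient_profiles, axis=0)

# Nodes crossing the significance lines drawn in Fig. 6.
print("p < 0.001 at nodes:", np.where(p_vals < 0.001)[0])
print("p < 0.01  at nodes:", np.where(p_vals < 0.01)[0])
```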
## Discussion
BUndle ANalytics (BUAN) provides a unique platform for tractometry and connects it to anatomical information for the analysis of white matter bundles. BUAN is a fast and robust framework that provides a completely automatic, streamline-based pipeline for extracting bundles to facilitate the study of the anatomy and shape of those white matter structures. It does not require nonrigid image registration at any step in the pipeline. This is important because image registration is particularly prone to failure when dealing with patient data47. None of the methods used in the BUAN framework depends on training data, in contrast to deep learning methods15,48, which need labeled patient data that can be hard to find. This matters because researchers do not have training data for most clinical cases, and BUAN can be used directly in these cases. Deep learning systems15,48 can only extract bundles present in their training data. Another advantage of BUAN is therefore that new bundles can be found when new atlases or model bundles are provided. BUAN could be extended to be used in the brains of other species. There are no theoretical or other limitations preventing its use in other species. However, this is a topic for future research, and we hope that the community will apply BUAN to many other data sets, including non-human primates and other species. For statistical analysis of bundle profiles, BUAN uses linear mixed models. However, we are planning to include other methods such as functional data analysis49,50 and predictive machine learning37. Bundle adjacency (BA) is used for shape similarity analysis of the same bundles across subjects. BA can be extended to use any distance metric between streamlines; we are currently using the minimum direct flip (MDF) distance21. Other distances can be used too, such as the Hausdorff distance51 or the MAM distance52,53. We can look at shape similarity matrices not only as a method for finding shape similarity among bundles but also as a metric for checking the quality of the data. A darker similarity matrix (see Fig. 5) for a given bundle suggests that this bundle is readily extractable across all subjects. BUAN’s end-to-end processing time for 64 subjects was 46.847 hours on a single machine with 32 GB RAM and one Intel Core i7-7700K CPU with 8 cores. Details about the processing time of each of BUAN’s steps can be found in supplementary materials Appendix A.5.
Our approach is highly generic, modular, and flexible. It can be adopted for any sort of challenging clinical data set or any animal data set. In the case of animal data, the user can provide their own atlas of bundles and perform bundle analysis using the same pipeline. For adult human data we already provide an atlas, though users can also use their own. None of the methods described here requires a large amount of labeled data for training. BUAN uses all the points on the streamlines, as we do not need to reduce the number of data points to simplify the statistics. Neither do we need to cut off the extremities of a bundle: it is left to the user to decide at what lengths they want to study a pathway. BUAN does not simplify the final bundle profiles by taking the average of anatomical measure values along the length of the bundle. Most importantly, BUAN does not impose or apply any sort of deformation at any stage of the framework. How deformations affect statistical validity is still an open question47 for the field of neuroimaging, and therefore, given this uncertainty, we do not deform the data. The sparsity of the tractograms exploited by BUAN is enough to enable high-quality correspondence between the subjects and to use the anatomical measures directly in native space. Our methods can also be used for quality control: for example, if multiple model bundles cannot be found in the data, there are very likely issues with the data acquisition.
### Concerns
Our framework results in a large amount of derivative data. In this paper, the resulting HDF5 files for 64 subjects (30 bundles) occupied 10 GB. This suggests that the application of this method to 650 subjects would result in the generation of a terabyte or more of data. These derivative files are important because LMM is applied to them to provide results based on the proper accounting of correlations among the streamlines. However, the analysis can be run per bundle, as the bundles are all independent, and it is therefore permissible to discard the large intermediary files. The analysis is thus scalable to hundreds and thousands of subjects. The bundle extraction method relies on the model bundle’s definition of a bundle: it will try to extract a bundle from the target data that has similar properties to the model bundle. The more realistic the model bundles, the better the extraction. The auto-calibration step helps in extracting a bundle whose shape is influenced by its own data, because the model bundle provided is part of the target tractogram itself. Some bundles, due to their morphology, will be less well summarized by the use of a single centroid. For example, the corticospinal tract exhibits a great deal of fanning in its superior terminations. As future work, it would be worthwhile to attempt to model these more complex white matter tracts with multiple centroids.
We provided the results for the nominal significance level of 0.001 as an illustration in our pipeline. The significance level is a parameter that is chosen by the user. We reported in our manuscript the results at the 0.001 level to illustrate the pipeline. We now provide an additional option in the pipeline, where the user can specify the chosen significance level. We set as default the FWER (family-wise error rate) method. As the tests both within the tract and across tracts are correlated, we use the result on the bounds of correlated tests54 and make the following corrections: (1) the within-tract adjustment is specified as (the number of points) divided by the (correlation of the test statistics within the tracts); (2) the across-tract adjustment is specified as (the number of tracts) divided by the (correlation of the test statistics across the tracts). Thus, for example, if the testing is done at n=100 points within the tract and there are 30 bundles, the corrected significance level is 0.05 divided by $$(100/(1-\rho _{within}))$$ and $$(30/(1-\rho _{across}))$$; assuming $$\rho _{within} = 0.9$$ and $$\rho _{across} = 0.6$$, the corrected significance level is $$0.05/(10\times 5) = 0.001$$. Proper adjustment for multiple comparisons needs to take into account the very high correlations between adjacent disks. We will address this issue in our future work via a functional data analysis approach treating all the streamlines as functions clustered within a tract.
Using the same MDF metric in both the bundle extraction method and the bundle shape similarity method might overstate the similarity between the bundles. However, the estimated relative difference between any two subjects’ bundles will be very similar.
### Connections to previous studies
The inferior fronto-occipital fasciculus is a major long-range tract connecting anatomically distinct cortical brain regions. Previous studies have suggested a role of the IFOF in complex cognitive tasks, including executive function, social cognition, attention, and semantic processing of language, perhaps as different subunits55,56,57. Previous studies in PD have identified changes in the IFOF in PD-dementia (PDD) and Dementia with Lewy Bodies (DLB), as well as associations with executive function, language, attention, and depression in PD patients58,59,60,61. Previous reports in the PPMI cohort indicated increased FA and reduced diffusivity in the IFOF and other tracts in PD relative to healthy controls, particularly tremor-dominant PD62,63. Other tracts showing differences between healthy control (HC) and Parkinson’s disease (PD) patients included the frontopontine tracts, inferior longitudinal fasciculus, occipitopontine tracts, middle longitudinal fasciculus, and uncinate fasciculus. The frontopontine tracts showed degeneration in progressive supranuclear palsy (PSP), an atypical Parkinsonian syndrome64. In addition, alterations in the inferior longitudinal fasciculus were observed in previous reports in PDD, DLB, and PSP, as well as associations with cognition, color vision deficits, and depression in PD58,59,61,65,66,67,68. The uncinate fasciculus also showed alterations in PD without dementia, PDD, DLB, and PSP, as well as associations with cognition and depressive symptoms58,59,61,66,67,68,69. The uncinate fasciculus is also involved in olfactory function, which is impaired in patients with PD70. In this experiment, we observe a trend towards higher FA and lower diffusivity in PD patients relative to controls, which may reflect a compensatory response to ongoing pathological changes associated with the disease, such as deposition of alpha-synuclein and degeneration of the substantia nigra and striatum. These findings are similar to previous reports in this sample62,63. Increased FA in the motor pathways of PD patients, namely the bilateral corticospinal tracts, the bilateral thalamus-motor cortex tracts, and the right supplementary area-putamen tract, has been reported71. Increased FA in the precuneus area of patients with PD without mild cognitive impairment has also been reported72. Another study has shown an increase in FA and AD in the nigral subareas in PD73. However, previous reports have also suggested a faster longitudinal decline in FA and an increase in diffusivity in PD patients relative to controls63,74. Patients with early-stage PD were reported to have higher MD relative to healthy controls75. Decreased FA was observed in PD subjects in the frontal lobes, including the supplementary motor area, the pre-supplementary motor area, and the cingulum, with no significant differences observed in MD between PD subjects and controls76. A test–retest study on the PPMI dataset27 reported results similar to those presented in BUAN: increased FA and GFA and decreased MD and RD in patients as compared to controls in the substantia nigra (SN), the striatum, the subthalamic nucleus (STN), the pallidum, the putamen, and the thalamus. The same trend was observed in the motor and premotor part of the corpus callosum and the corticospinal tract27. Future studies with the BUAN framework extending the method to longitudinal DTI data from PPMI are warranted.
### Comparisons
Figure 6 shows plots for only 5 bundles, but AFQ and BUAN were run on all 30 bundles. In AFQ, for most bundles the average FA values increase in patients as compared to controls and the average MD values decrease in patients as compared to controls. A few bundles have areas where FA decreases in patients, as shown in Fig. 6A: FA values decrease in the AF_L and OR_R bundles. In the case of MD, we see a small spike in the MD values of patients in IFOF_R and a huge spike in patients in the UF_R bundle. This could be due to two reasons: (1) anatomical values at the endpoints (extremities) of the tracts change drastically, and (2) one weighted-averaged streamline cannot represent a whole bundle with fanning. When the average is taken, one outlier subject can cause a spike in the final plot, which happens to be the case in the MD plots for the IFOF_R and UF_R bundles. In BUAN, significant areas on the tracts with group differences show consistent results. The average FA values increase in patients as compared to healthy controls at all segments with significant group differences. The average MD values decrease in patients as compared to healthy controls at all segments with significant group differences. The reason for the consistent average FA and MD results in BUAN is that all points on all streamlines are taken into account by LMM to find group differences. BUAN utilizes information from all streamlines with different shapes and sizes in the data. In the BUAN MD plots, we see a spike in the patient mean in the CST_L and IFOF_R bundle plots. This could be due to one or two outlier subjects in the patient data. However, LMM does not give higher significance to that area, which validates that the reported areas of significant group differences in BUAN are not affected by outliers. In the plots shown in Fig. 6, we observe that the average FA and MD plots generated by AFQ and BUAN look different. AFQ has a higher mean variation between groups as compared to the BUAN results. The reason for these differences could be the algorithmic differences between the two methods. These are two different methods with different assumptions about the data. The different lengths of streamlines and how they are weighted can play a role in getting different results from AFQ and BUAN. AFQ discretizes each streamline into N equidistant points and then takes an average of every $$i$$th point of all streamlines to create one weighted averaged streamline per subject, where streamlines/points closest to the middle of the bundle are weighted more than streamlines far from the middle. As shown in Fig. 8, streamline 2 and streamline 7 end earlier and streamline 5 starts later than the rest of the streamlines (they are shorter than the other streamlines). The last point of streamline 2 and streamline 7 will contribute to the average of the 5th point of the final averaged streamline, even though the last points of both streamline 2 and streamline 7 are far from the other streamlines’ last points. The same goes for the first point of streamline 5. This length variation of the streamlines is not taken into account by AFQ in this experiment, which can cause differences in comparisons with BUAN. In BUAN, points are assigned a segment number, and the length of the streamline is taken into account this way. In the case of Fig. 8, the last points of streamline 2 and streamline 7 are assigned to the closest model centroid point and given the 4th segment number. The starting point of streamline 5 is assigned to the 2nd segment in a similar fashion. Also, all points on the streamlines are used to take the average per segment; weights are not given to points while taking the average. In summary, BUAN utilizes all points on the streamlines when taking the average and also takes the lengths of streamlines into account, whereas AFQ discretizes each streamline into N equidistant points and assigns more weight to the streamlines in the middle of the bundle while taking the average. Overall, both AFQ and BUAN report similar WM alterations between patients and controls (increased FA and decreased MD in patients as compared to healthy controls). Further research will be needed to study the impact of different tractometry methods and their assumptions. We hope this framework will help in that direction.
### Conclusion
BUndle ANalytics (BUAN) is a powerful tool for performing analytics of white matter fiber bundles. BUAN provides a completely automatic, end-to-end, streamline-based solution that connects bundle recognition, analysis of bundle anatomy, and shape analysis. More importantly, BUAN is the nexus between several successful streamline-based tractography methods11,21 and other brain imaging modalities. BUAN reports the exact locations of population differences along the length of bundles. Additionally, beyond looking at the profiles of the bundles, BUAN includes individual shape differences of the bundles. For this purpose, we introduced a novel network-based bundle shape analysis method using the bundle adjacency metric to assess and compare shapes of the same type of bundles across groups. BUAN is a generic framework that can be applied both in clinical and healthy populations. In theory, BUAN should work with other animal brains with no changes in the code. All methods of BUAN are carefully implemented, thoroughly tested, and made publicly available to the community through command-line interfaces and Python scripts. Scientists can use the BUAN framework to study different types of pathological data and find differences between populations based on tractometry and shape analysis. BUAN is available with DIPY44 at dipy.org.
## Methods
### Data
Data used in the preparation of this article were obtained from the Parkinson’s Progression Markers Initiative (PPMI)46 database. PPMI is a longitudinal, observational, multi-site study with the goal of identifying and evaluating biomarkers for detecting and monitoring the progression of Parkinson’s disease (PD). Participants were included as PD if they: (1) had two of the following symptoms: resting tremor, bradykinesia, rigidity (must have either resting tremor or bradykinesia), or either asymmetric resting tremor or asymmetric bradykinesia; (2) had a diagnosis of Parkinson’s disease for $$\le$$ 2 years at screening; (3) were Hoehn and Yahr stage I or II; (4) had a screening dopamine scan (DaTSCANTM or VMAT-2) consistent with dopamine transporter deficit; (5) were not expected to require PD medication within at least 6 months from baseline; (6) were male or female, age $$\ge$$ 30 years at PD diagnosis. Control subjects were also males or females age $$\ge$$ 30 years with no first-degree relative with idiopathic PD and normal cognition. This dataset contains 179 healthy control subjects and 412 patients recently diagnosed with PD. These details about the data set are available at the PPMI website46 and are also described in this paper27. Healthy controls and patients have a mean age of 59 and 61 years respectively. Most of the subjects are Caucasian (93$$\%$$), 71$$\%$$ of PD subjects are male, and 57$$\%$$ of healthy controls are male. PPMI dMRI data was acquired using a standardized protocol on Siemens TIM Trio and Siemens Verio 3 Tesla MRI machines from 32 different international sites. Diffusion-weighted images were acquired along 64 uniformly distributed directions using a b-value of 1000 $$s/mm^2$$ and a single b $$=$$ 0 image. A single-shot echo-planar imaging (EPI) sequence was used (matrix size = $$116\times 116$$, 2 mm isotropic resolution, TR/TE 900/88 ms, and twofold acceleration). An anatomical T1-weighted 1 $$mm^3$$ MPRAGE image was also acquired. Each patient underwent two baseline acquisitions and two more one year later. The right- and left-onset patients are distributed in proportions of 57$$\%$$ and 43$$\%$$. More information on the MRI acquisition and processing can be found online at www.ppmi-info.org. We processed 64 subjects: 32 controls and 32 patients. We selected subjects in the age range of 39–61 to eliminate the possibility of any biases introduced by age differences between the subjects. Each group is comprised of 14 female and 16 male subjects to further eliminate biases introduced by different sex distributions of the subjects in the groups. In summary, these were all the valid subjects available in the PPMI dataset that were in the specified age range while also balancing the sex composition of the two groups.
### Streamline-based bundle atlas
In this paper, we used a publicly available streamline-based bundle atlas. Our atlas was a reduced form of the HCP-842 template77,78. The original population-based atlas was modified for the BUAN framework. The original atlas comes with 80 bundles. We combined all 80 bundles together to create an atlas whole-brain tractogram. The whole-brain atlas tractogram and the 80 bundles were moved to MNI 152 space (ICBM 2009a). Lastly, 30 bundles of interest were selected out of the 80 to be used in the analysis. We excluded bundles from the atlas that did not exist in the PPMI data. Many of the bundles in the original atlas were cranial nerves (pathways outside of the brain) that do not exist in this data. Our neuro-anatomist inspected the atlas and removed any bundles or streamlines with unrealistic shapes that did not conform to the anatomy as reported in5,12. The final atlas can be downloaded from DIPY44 using the $$\text {dipy}\_\text {fetch}$$ command. The atlas contains a whole-brain tractogram and the corresponding subset of 30 bundles. Names of the bundles can be found in supplementary materials Appendix A.1 Fig. A1.
### Data preparation
The local principal component analysis (LPCA) noise reduction method79 was used for the diffusion MR images. For brain tissue extraction, the median Otsu algorithm80 was used. The distortions induced by eddy currents and motion were corrected by registering the diffusion-weighted volumes to the b0 volume. An affine transformation was computed to register the b0 volume and non-b0 3D volumes by maximization of normalized mutual information. The optimization strategy used in DIPY is similar to that implemented in ANTs81. The b-matrix (b-vectors) was rotated to preserve the correct orientational information as described in this paper82. After the diffusion data was denoised, the DTI measures fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity (AD)35,36 were extracted from each subject’s data. A spherical harmonics model based on Q-Ball Constant Solid Angle was fitted to get orientation density functions (ODFs), from which generalized fractional anisotropy (CSA-GFA)38,39 and quantitative anisotropy (CSA-QA)40 were extracted. For generating whole-brain tractograms, a constrained spherical deconvolution (CSD) model was used to get directional information from the dMRI data83, from which a simplified peaks representation was extracted. The obtained peaks were then used as the input to a local tracking algorithm. We performed deterministic tracking84,85 using EuDX52. The EuDX tracking algorithm was initialized with the following parameters: tracking starts from voxels where fractional anisotropy (FA) $$>0.3$$, number of seeds per voxel $$=$$ 15, step size $$=$$ 0.5, angular threshold $$=$$ 60 degrees, and tracking stops if the FA value drops below 0.1. Each generated tractogram comprised 5–10 million streamlines. DIPY’s44 implementation of these methods was used in all preprocessing steps.
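The tracking settings above can be sketched with DIPY roughly as follows. This is a minimal sketch, not the exact BUAN scripts: API names follow recent DIPY releases, the original EuDX tracker has since been superseded (LocalTracking with a deterministic direction getter is a close equivalent), and file paths are placeholders.

```python
from dipy.core.gradients import gradient_table
from dipy.data import default_sphere
from dipy.direction import DeterministicMaximumDirectionGetter
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
                                   auto_response_ssst)
from dipy.reconst.dti import TensorModel
from dipy.segment.mask import median_otsu
from dipy.tracking import utils
from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion
from dipy.tracking.streamline import Streamlines

# Denoised, eddy/motion-corrected data; paths are placeholders.
data, affine = load_nifti("dwi_denoised.nii.gz")
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")  # rotated b-vectors
gtab = gradient_table(bvals, bvecs)

_, mask = median_otsu(data, vol_idx=[0])  # brain extraction (median Otsu)

# DTI fit: FA map used both for seeding and as the stopping criterion.
fa = TensorModel(gtab).fit(data, mask=mask).fa

# CSD fit for directional information.
response, _ = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
csd_fit = ConstrainedSphericalDeconvModel(gtab, response).fit(data, mask=mask)

# Deterministic tracking with the stated parameters: seeds where FA > 0.3,
# 15 seeds per voxel, 0.5 step size, 60-degree angular threshold, and
# termination when FA drops below 0.1.
dg = DeterministicMaximumDirectionGetter.from_shcoeff(
    csd_fit.shm_coeff, max_angle=60., sphere=default_sphere)
seeds = utils.seeds_from_mask(fa > 0.3, affine, density=15)
streamlines = Streamlines(LocalTracking(
    dg, ThresholdStoppingCriterion(fa, 0.1), seeds, affine, step_size=0.5))
```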
### Bundle recognition
Tractography algorithms generate potentially unmanageably large data sets with millions of streamlines. Visually and/or computationally inspecting any individual streamline for aberrant or otherwise distinctive traits in a whole tractogram is a logistically challenging task. Thus, our method greatly simplifies this process by extracting specific subsets of related streamlines from the otherwise intractable mass of streamlines. This process of extracting specific groups of streamlines with similar anatomical characteristics from whole-brain tractograms is called automated bundle extraction or bundle recognition. Bundle recognition11 goes a step further than generic automated bundle extraction in that it requires no training: it learns from single examples of bundles. In our approach, in order to extract white matter fiber tracts from a whole-brain tractogram, we first register an input target tractogram (A.a) to a model tractogram (A.b) using Streamline-based Linear Registration (SLR)45 as shown in Fig. 1. After we have transformed the target tractogram from native space to common space, we begin extracting bundles. Whole-brain tractograms contain a large number of false positives, for example, unrealistically small or extremely long streamlines. We pre-processed the tractogram by removing streamlines shorter than 30 mm. The input to RecoBundles (RB) is the registered target tractogram (A.c) and a model bundle (A.d)11. The model bundle is used as a reference bundle to find corresponding streamlines in the target tractogram. The model bundle is part of the atlas used for registering the target tractogram to common space. Both the target tractogram and the model bundle are in common space. The first step in RecoBundles is to reduce the search space and find neighboring areas for the model bundle in the target tractogram. This is achieved by a process called far pruning. We exclude from our search space the streamlines whose MDF21 distance to the model bundle streamlines is greater than the reduction threshold. The default value of the reduction threshold is 15 mm. The search space now consists of only potential bundle streamlines; we call these streamlines the neighbor streamlines of the model bundle in the target tractogram. In the next step, RecoBundles applies local registration of the neighbor streamlines to the model bundle streamlines using SLR45. After local registration of the streamlines, local pruning of the neighbor streamlines is performed in a similar manner as far pruning. Neighbor streamlines whose MDF distance to the model bundle streamlines is greater than the pruning threshold are discarded. The default pruning threshold is 8 mm.
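A minimal sketch of this registration-plus-recognition step using DIPY's public API (variable names are placeholders; thresholds follow the defaults quoted above):

```python
from dipy.align.streamlinear import whole_brain_slr
from dipy.segment.bundles import RecoBundles

# atlas_tractogram, target_tractogram, model_af_l: lists of streamlines
# loaded beforehand (placeholders; streamlines < 30 mm already removed).

# Streamline-based linear registration of the target to atlas space.
moved_target, transform, _, _ = whole_brain_slr(
    atlas_tractogram, target_tractogram, x0='affine',
    rm_small_clusters=50, verbose=True)

# Recognize one bundle (e.g., a left arcuate model from the atlas) with
# the default thresholds: reduction 15 mm, pruning 8 mm.
rb = RecoBundles(moved_target, verbose=True)
recognized, labels = rb.recognize(model_bundle=model_af_l,
                                  model_clust_thr=5.,
                                  reduction_thr=15,
                                  pruning_thr=8)
```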
### Auto-calibration
Human brain pathways come in all sorts of shapes and sizes. While larger bundles are easier to locate and extract, it becomes difficult to reliably extract smaller, hard-to-find pathways in large tractograms. To resolve this problem and eliminate the need for changing parameters to deal with short bundles, we added one more step to the RecoBundles (RB)11 algorithm. We call this new step auto-calibration. As shown in Fig. 1A, during the auto-calibration step the final extracted bundle output of standard RB (e) becomes our new model bundle. RB is run again on the same target tractogram, but this time the model bundle is not part of an atlas but part of the target tractogram itself. Since the new model bundle is part of the target tractogram, this eliminates the need for local streamline-based registration with the model bundle. All steps of RB are performed again in the same order as described in "Bundle recognition" section, except the local registration of neighbor streamlines to the model bundle. The default auto-calibration reduction threshold is set to 12 mm and the auto-calibration pruning threshold is set to 6 mm. This additional step allows us to: (a) use the same parameters for short and long bundles, (b) produce denser bundles than before, and (c) reduce any issues arising from shape differences between the initial model bundle and the final target bundle. We found auto-calibration to be especially useful when dealing with noisy data or when extracting small bundles like the uncinate fasciculus (UF) from whole-brain tractograms. All bundles are extracted using the same default parameters. At the end of the bundle extraction process, we assess the quality of bundle extraction using two cost functions, bundle adjacency (BA) and bundle-based minimum distance (BMD)45. Bundle adjacency between the extracted bundle and the model bundle is calculated to report a shape similarity score between the two bundles. The bundle adjacency method is explained thoroughly in "Shape analysis using bundle adjacency" section. BMD is used for calculating a distance between two bundles. The BMD function is explained in detail with equations in Appendix A.6.2 of the supplementary materials.
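Continuing the sketch from the previous section, the auto-calibration pass amounts to a second recognition run with the first-pass output as the model, tighter thresholds (12 mm / 6 mm), and the local SLR step disabled; recent DIPY releases also expose a RecoBundles.refine helper for this purpose.

```python
# Second pass: the first-pass output becomes the model bundle; thresholds
# tighten to 12 mm (reduction) and 6 mm (pruning), and local SLR is
# skipped since model and target now come from the same tractogram.
refined, refined_labels = rb.recognize(model_bundle=recognized,
                                       model_clust_thr=5.,
                                       reduction_thr=12,
                                       pruning_thr=6,
                                       slr=False)
```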
### Assignment maps
For tractometric studies along the length of the tracts, we combine information from extracted bundles and anatomy. To perform analysis on extracted bundles, we create assignment maps by dividing the bundles into segments along their lengths, using the model bundle centroids, in common space (model tractogram space). We cluster our model bundle using QuickBundles21, resulting in a cluster centroid (B.b) with 100 points per centroid, as shown in Fig. 1B. To divide a given bundle into segments, we calculate Euclidean distances between every point on every streamline of the bundle (B.a) and all model bundle centroid points (B.b) and assign the considered point to the nearest centroid point, similar to27. However, we do not resample our streamlines to have a discrete number of points or change the distribution of points. Our approach to creating segments does not require a streamline to have a specific number of points. We use all the points of the streamlines and assign them to the closest points of the centroid on the model bundle to create assignment maps/segments (B.c). Assignments are created in a common space (model atlas), which ensures that the segment index corresponds to the same centroid across all individuals. Fig. 1B presents the visual process of creating assignment maps.
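A minimal NumPy/SciPy sketch of the assignment step follows (a hypothetical helper, not BUAN's exact code; recent versions of DIPY also ship an assignment_map utility in their stats module):

```python
import numpy as np
from scipy.spatial import cKDTree
from dipy.segment.clustering import QuickBundles
from dipy.segment.metric import AveragePointwiseEuclideanMetric
from dipy.tracking.streamline import set_number_of_points

def make_assignment_map(bundle, model_bundle, n_segments=100):
    """Label every point of every streamline in `bundle` with the index
    (0 .. n_segments-1) of the nearest point on the model centroid."""
    # One centroid with n_segments points summarizes the model bundle;
    # a large threshold forces QuickBundles to return a single cluster.
    resampled = set_number_of_points(model_bundle, n_segments)
    qb = QuickBundles(threshold=100., metric=AveragePointwiseEuclideanMetric())
    centroid = qb.cluster(resampled).centroids[0]

    # Nearest-centroid-point lookup; the target bundle is NOT resampled,
    # so its streamlines may have any number of points.
    tree = cKDTree(centroid)
    return [tree.query(np.asarray(s))[1] for s in bundle]
```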
The advantage of creating these assignment maps and dividing bundles into segments is that we are able to analyze specific areas of a bundle. White matter tracts have distinct shapes, and their shapes vary across the length of the tract. For example, in the case of the arcuate fasciculus, we see a larger spread of streamlines at both ends of the bundle. Anatomical measures such as FA also change throughout the bundle. It therefore makes sense to look at a specific segment that has a consistent shape and consistent anatomical measure values.
Information such as the positions of the points of each streamline, the index of each streamline in the bundle, the segment number, the subject ID, the group ID, and anatomical measure values like FA, MD, RD, AD, CSA-GFA, and CSA-QA was saved in HDF5 (.hd5) files. Those files were used as input to the statistical models. We used linear mixed models (LMM) for group comparisons based on a given metric (e.g. FA) over a specific tract and specific tract location (segment).
### Linear mixed models
We used linear mixed-effects models (LMMs)41,42,43 to study how anatomical measures differ between patient and control groups along the length of white matter bundles. We applied LMMs separately for each segment of a given white matter bundle. Our main interest was in differentiating between the groups: patients vs. controls. We achieved that by including the group effect as a fixed effect in the regression model describing the outcomes’ (FA, AD, RD, MD, CSA-GFA, and CSA-QA) differences between the groups. To account for the correlations among the observations from different streamlines collected from the same individual and tract, we included random subject-tract effects. See supplementary materials, Appendix A.4 for a detailed mathematical explanation.
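As a sketch, the per-segment model can be fitted with statsmodels' mixedlm; the data frame layout and column names below are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

def segment_group_pvalue(df_segment: pd.DataFrame) -> float:
    """p value of the group fixed effect for one bundle segment.

    Expected columns: 'value' (e.g., FA at every streamline point that
    falls in this segment), 'group' (0 control / 1 patient), 'subject'.
    """
    # Fixed effect: group; random intercept per subject, accounting for
    # the correlation among points from the same subject's tract.
    model = smf.mixedlm("value ~ group", df_segment,
                        groups=df_segment["subject"])
    return model.fit().pvalues["group"]
```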
### Shape analysis using bundle adjacency
We introduce a new network-based approach to performing shape analysis between bundles. To quantify the resemblance between two bundles, we use the bundle adjacency (BA) metric first introduced in21. We use bundle adjacency (BA) to calculate the shape similarity between the same type of bundle across subjects and groups. The higher the BA value, the higher the similarity between the shapes of the two bundles. We created a network graph and similarity matrix from BA values for the same bundle type across subjects. Bundle adjacency (BA) is calculated between two bundles. BA uses the minimum direct flip (MDF) distance21 to get the distance between two streamlines. The minimum average direct-flip distance21 is a symmetric distance function that deals with the streamline bi-directionality problem. It calculates both the direct and the flipped distance between two streamlines that have the same number of points and takes the minimum of the two. For equations and more details on the MDF distance, please see Appendix A.6.1 in the supplementary materials. Let B1 and B2 be two sets of streamlines (bundles), and let $$\theta > 0$$ be a selected adjacency threshold. We say that $$b1 \in B1$$ is adjacent to B2 if there is at least one streamline $$b2 \in B2$$ with $$MDF(b1, b2)\le \theta$$. We define the coverage of B1 by B2 as the fraction of B1 that is adjacent to B2. Coverage ranges between 0 (when all streamlines in B1 are too far from B2) and 1 (when every streamline in B1 is adjacent to B2). In order to compare two bundles from possibly different data sets, we define the symmetric measure bundle adjacency (BA). BA is the average of the coverage of B2 by B1 and the coverage of B1 by B2:
$$\begin{aligned} BA(B1, B2) = 0.5\,\big(\mathrm{coverage}(B1, B2) + \mathrm{coverage}(B2, B1)\big) \end{aligned}$$
BA ranges between 0 and 1. The BA score is 0 when no streamlines of B1 or B2 have neighbors in the other set, and 1 when they all do.
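A direct NumPy transcription of these definitions might look as follows (a sketch assuming streamlines already resampled to a common number of points and expressed in the same space):

```python
import numpy as np

def mdf(s1, s2):
    """Minimum average direct-flip distance between two streamlines
    (equal number of points assumed)."""
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)

def coverage(b1, b2, theta):
    """Fraction of streamlines in b1 adjacent (MDF <= theta) to b2."""
    hits = sum(any(mdf(s1, s2) <= theta for s2 in b2) for s1 in b1)
    return hits / len(b1)

def bundle_adjacency(b1, b2, theta=5.0):
    """Symmetric shape similarity in [0, 1]; theta in mm."""
    return 0.5 * (coverage(b1, b2, theta) + coverage(b2, b1, theta))
```

In practice, computing adjacency over QuickBundles cluster centroids rather than over all raw streamlines is much faster, which is how the "adjacent clusters of streamlines" phrasing used earlier should be read.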
Figure 7 shows the bundle network graph and similarity matrix. Figure 7A illustrates the concept of bundle shape analysis using the bundle adjacency metric. We can understand this by looking at the diagram as a tree where a parent node p, the left arcuate fasciculus (AF_L) bundle in this example, has 5 child nodes, c1, c2, c3, c4, and c5. Here, the tree nodes are bundles. The child bundles are created by selecting a subset of the streamlines of the parent bundle p. The c1 bundle is an exact replica of p, while the c2 bundle is missing parts of the p bundle. We created all 5 child bundles in a similar fashion, by removing significant parts of the original bundle p as we move from left to right. In the end, the c5 bundle contains only the middle part of the original p bundle.
The bundle adjacency method requires two bundles and a threshold $$\theta$$, where $$\theta$$ determines the degree of stringency of the shape comparison between the two bundles. The higher the value assigned to $$\theta$$, the easier it is for two bundles to obtain a high BA score; the lower the value assigned to $$\theta$$, the lower the resultant BA score.
In Fig. 7, the branches of the tree represent the BA score between two bundles. The color of the BA score label represents the threshold value $$\theta$$: a black-colored BA score label corresponds to $$\theta = 5$$ mm and a red-colored BA score label corresponds to $$\theta = 15$$ mm. We show in Fig. 7 how one can adjust the bundle adjacency method to produce shape similarity results based on a threshold $$\theta$$, which can be seen as a parameter for leniency. By leniency, we mean how loosely we want our method to classify shape similarity between bundles. For example, if we want our bundles to be extremely close in shape, with rigidly similar length and width, we set the threshold to a smaller number (5 mm in this example). This gives a BA score of 1 if the bundle is an exact replica of itself, and a BA score of 0 when there is no similarity whatsoever, which is the case with c5.
In the case where $$\theta = 15$$ mm, we see a perfect BA score between c1 and p, as is also the case when $$\theta = 5$$ mm. But for the c5 bundle, we get a BA score of 0.543. Although c5 is quite different in shape, it still contains part of the parent bundle p (the middle part). The selection of the threshold depends on user-specific needs. If we strictly want two bundles to have the same length and width, we can use a smaller threshold value; if we want to see whether a given bundle has sub-parts of other bundles or some relation to other bundles, we can use a higher threshold value. BA requires the two bundles to be in the same coordinate space to give interpretable results.
### Bundle networks
Results generated by the bundle adjacency function can also be represented as a fully connected network graph of bundles, as shown in Fig. 7B. We created a fully connected graph with the similarity (BA) score on the edges connecting two vertices (bundles), shown in (B.a). Here, we have created a fully connected left arcuate fasciculus (AF_L) bundle graph of 5 subjects. In this paper, for each of the 30 different bundles, we created a fully connected network graph of bundles from the 64 subjects, in which every subject’s bundle is connected to every other subject’s bundle. The weight on the edge connecting two bundles signifies the strength of the shape similarity relationship between them. The higher the weight, the higher the shape similarity.
When there are many subjects (vertices), it becomes difficult to assess shape similarity visually from a connected network graph alone. For this purpose, we interpret the results in a compact way as a similarity matrix (B.b), where a darker blue color means higher similarity and a lighter blue color indicates less shape similarity among bundles. On the diagonal, we have all 1s, as there BA is calculated between a bundle and itself. BA can be used as a quality assurance measure; it can also be used to detect outliers in the dataset. When a subject’s bundle is comparatively different from other subjects’ bundles, BA gives it a lower score, and we can detect that subject’s bundle as an outlier from the similarity matrix.
Using graph-theoretic analysis we can study differences between similar types of bundles. Hierarchical clustering of the similarity matrix is a useful technique for finding clusters in the data, and the similarity matrix can also be represented as a Voronoi diagram. In this paper, we apply Ward's hierarchical clustering86 on similarity matrices to find clusters of subjects with similar bundle shapes.
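A sketch of that clustering step with SciPy, reusing the ba_matrix from the sketch above; converting similarity to distance as 1 − BA is our assumption, not a prescription from the paper:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

dist = 1.0 - ba_matrix            # similarity in [0, 1] -> distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="ward")  # Ward linkage
labels = fcluster(Z, t=2, criterion="maxclust")  # e.g., split into two clusters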
### Method comparison
Automated Fiber-Tract Quantification (AFQ)29 has gained popularity in the past several years. AFQ is open-source, freely available software that provides methods for extracting bundles and quantifying diffusion measurements along the length of white matter fiber tracts (bundle profiles) that can be used for group comparisons. Originally, AFQ provided a region-of-interest (ROI)-based fiber tract extraction method for 18 white matter tracts; it now lets users choose between ROI-based bundle extraction and RecoBundles11 bundle extraction. After bundles are extracted from the data, bundle profiles are generated from anatomical measures. For every subject, a mean bundle profile is generated with the mean values of a given anatomical measure along the length of the tract. The mean bundle profile is generated by resampling each streamline in the bundle into 100 equidistant points and calculating the mean location of each point. Mean anatomical measurements (e.g., FA) are calculated at each point by taking a weighted average of the FA measurements of each individual streamline at that point, with weights derived from the Mahalanobis distance of each streamline point from the core streamline. The final output of AFQ per subject is a mean bundle profile with 100 equidistant points and the anatomical measurement values associated with each point along the tract.
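A simplified sketch of that weighted averaging, assuming streamlines already resampled to 100 points; plain Euclidean distances and Gaussian weights stand in here for the Mahalanobis-based weighting of the published method:

import numpy as np

def afq_mean_profile(points, values):
    # points: (n_streamlines, 100, 3) resampled streamline coordinates
    # values: (n_streamlines, 100) anatomical measure (e.g., FA) per point
    core = points.mean(axis=0)                 # core streamline, shape (100, 3)
    d = np.linalg.norm(points - core, axis=2)  # distance to core, (n, 100)
    w = np.exp(-d ** 2)                        # closer points weigh more
    w /= w.sum(axis=0, keepdims=True)          # normalize per node
    return (w * values).sum(axis=0)            # (100,) mean bundle profile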
Both AFQ and BUAN generate bundle profiles with anatomical measures associated with them along the length of the tract. Figure 8 illustrates the AFQ and BUAN bundle profile representations for one subject; the data are simulated for the sake of explanation. Here, the AFQ bundle profile is created by resampling each streamline in the input bundle into 5 equidistant points and then taking a weighted average of all points along the length of the tract, so one weighted mean streamline represents the whole bundle. In the BUAN bundle profile, by contrast, streamlines are not discretized into 5 points; instead, the input bundle is divided into 5 segments, and each point on a streamline is given a segment number (label) according to which centroid on the model centroid streamline it is closest to. In BUAN bundle profiles, all points on all streamlines are kept rather than being reduced to one simplified averaged streamline.
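A sketch of that point-to-segment assignment (with 5 segments, as in the figure); the array shapes and function name are assumptions for illustration:

import numpy as np
from scipy.spatial.distance import cdist

def buan_segment_labels(streamlines, model_centroid):
    # streamlines: list of (n_i, 3) point arrays, one per streamline
    # model_centroid: (5, 3) model centroid streamline defining 5 segments
    labels = []
    for s in streamlines:
        d = cdist(s, model_centroid)     # distance of each point to each centroid point
        labels.append(d.argmin(axis=1))  # nearest centroid point = segment label
    return labels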
Note that in the AFQ bundle profile, the last few points of streamline 2 and streamline 7 are discretized and represented by a single 5th point, which is far from the 5th points of the remaining streamlines. AFQ takes a weighted average of all streamlines' 5th points to create the averaged 5th point of the final AFQ bundle profile streamline. In the BUAN bundle profile, the last points of streamline 2 and streamline 7 are closest to the 4th model centroid point and are therefore assigned segment number 4. AFQ does not provide a statistical analysis method that can be applied to its bundle profiles: the user has to inspect the subjects' mean bundle plots and find differences manually. BUAN, on the other hand, provides a method to automatically find group differences for a given anatomical measure along the length of the tract, using linear mixed models, together with a visualization tool that highlights the exact area of the bundle that differs significantly between the two groups. AFQ does not provide a bundle shape analysis method; BUAN provides the novel graph-based bundle shape similarity method explained in the "Shape analysis using bundle adjacency" section, which can also be used for quality assurance.
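The paper does not publish this exact snippet, but the kind of per-segment linear mixed model it describes can be sketched with statsmodels as follows; the data frame df and its column names are assumptions:

import statsmodels.formula.api as smf

# df: a long-format pandas DataFrame (assumed), one row per streamline point,
# with columns 'fa' (measure), 'group' (patient/control), 'subject' (id used
# as the random effect), and 'segment' (along-tract segment label).
pvalues = {}
for seg, seg_df in df.groupby("segment"):
    fit = smf.mixedlm("fa ~ group", seg_df, groups=seg_df["subject"]).fit()
    # The coefficient name depends on how 'group' is coded, so look it up.
    pvalues[seg] = fit.pvalues.filter(like="group").iloc[0]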
## Data availability
The Parkinson's Progression Markers Initiative (PPMI)46 is a publicly available database: https://www.ppmi-info.org. A subset of the datasets generated and analyzed during the current study is released as examples with the pipeline, and the complete processed dataset is also available. PPMI data derivatives generated during this study can be found at https://doi.org/10.35092/yhjc.12033390. They cover the 64 subjects used in the paper; each subject contains 30 white matter bundles in common (MNI) space, the same 30 bundles in the subject's original space, and files containing anatomical information (FA, MD, RD, AD, CSA peaks). The streamline-based bundle atlas used in this paper can be downloaded from DIPY44 using the dipy_fetch command or from figshare. The atlas contains a whole-brain tractogram and the corresponding subset of 30 bundles.
## Code availability
A bundle extraction tutorial is available in the DIPY documentation. To reproduce the results on PPMI data, follow the BUAN tutorial in DIPY: https://github.com/dipy/dipy/blob/master/doc/interfaces/buan_flow.rst. Code for the Bundle Analytics framework is publicly available on GitHub: https://github.com/dipy/dipy.
## References
1. Basser, P. J., Mattiello, J. & LeBihan, D. MR diffusion tensor spectroscopy and imaging. Biophys. J. 66, 259–267 (1994).
2. Le Bihan, D. et al. Diffusion tensor imaging: concepts and applications. J. Magn. Reson. Imaging 13, 534–546 (2001).
3. Alexander, A. L., Lee, J. E., Lazar, M. & Field, A. S. Diffusion tensor imaging of the brain. Neurotherapeutics 4, 316–329 (2007).
4. Farquharson, S. et al. White matter fiber tractography: why we need to move beyond DTI. J. Neurosurg. 118, 1367–1377 (2013).
5. Catani, M. & De Schotten, M. T. A diffusion tensor imaging tractography atlas for virtual in vivo dissections. Cortex 44, 1105–1132 (2008).
6. Gong, G. et al. Mapping anatomical connectivity patterns of human cerebral cortex using in vivo diffusion tensor imaging tractography. Cereb. Cortex 19, 524–536 (2008).
7. Mori, S. & Van Zijl, P. C. Fiber tracking: principles and strategies—a technical review. NMR Biomed. 15, 468–480 (2002).
8. Geschwind, N. The organization of language and the brain. Science 170, 940–944 (1970).
9. Leuret, F. Anatomie comparée du système nerveux: considéré dans ses rapports avec l'intelligence, vol. 2 (J.-B. Baillière et fils, 1857).
10. Schmahmann, J. D., Schmahmann, J. & Pandya, D. Fiber Pathways of the Brain (OUP, Oxford, 2009).
11. Garyfallidis, E. et al. Recognition of white matter bundles using local and global streamline-based registration and clustering. NeuroImage 170, 283–295 (2017).
12. Catani, M., Howard, R. J., Pajevic, S. & Jones, D. K. Virtual in vivo interactive dissection of white matter fasciculi in the human brain. Neuroimage 17, 77–94 (2002).
13. Wang, R., Benner, T., Sorensen, A. & Wedeen, V. Diffusion toolkit: a software package for diffusion imaging data processing and tractography. Proc. Int. Soc. Mag. Reson. Med. 15, 3720 (2007).
14. Chamberland, M., Whittingstall, K., Fortin, D., Mathieu, D. & Descoteaux, M. Real-time multi-peak tractography for instantaneous connectivity display. Front. Neuroinf. 8, 59 (2014).
15. Wasserthal, J., Neher, P. & Maier-Hein, K. H. TractSeg: fast and accurate white matter tract segmentation. NeuroImage 183, 239–253 (2018).
16. Guevara, P. et al. Automatic fiber bundle segmentation in massive tractography datasets using a multi-subject bundle atlas. Neuroimage 61, 1083–1099 (2012).
17. Lawes, I. N. C. et al. Atlas-based segmentation of white matter tracts of the human brain using diffusion tensor tractography and comparison with classical dissection. Neuroimage 39, 62–79 (2008).
18. Jonasson, L. et al. White matter fiber tract segmentation in DT-MRI using geometric flows. Med. Image Anal. 9, 223–236 (2005).
19. Bertò, G. et al. Classifyber, a robust streamline-based linear classifier for white matter bundle segmentation. BioRxiv https://doi.org/10.1016/j.neuroimage.2020.117402 (2020).
20. Yendiki, A. et al. Automated probabilistic reconstruction of white-matter pathways in health and disease using an atlas of the underlying anatomy. Front. Neuroinform. 5, 23 (2011).
21. Garyfallidis, E., Brett, M., Correia, M. M., Williams, G. B. & Nimmo-Smith, I. QuickBundles, a method for tractography simplification. Front. Neurosci. 6, 175 (2012).
22. Jones, D. K. Challenges and limitations of quantifying brain connectivity in vivo with diffusion MRI. Imaging Med. 2, 341–355 (2010).
23. Maier-Hein, K. H. et al. The challenge of mapping the human connectome based on diffusion tractography. Nat. Commun. 8, 1–13 (2017).
24. Smith, R. E., Tournier, J.-D., Calamante, F. & Connelly, A. SIFT2: enabling dense quantitative assessment of brain white matter connectivity using streamlines tractography. Neuroimage 119, 338–351 (2015).
25. Reisert, M. et al. Global fiber reconstruction becomes practical. Neuroimage 54, 955–962 (2011).
26. Calamante, F., Tournier, J.-D., Jackson, G. D. & Connelly, A. Track-density imaging (TDI): super-resolution white matter imaging using whole-brain track-density mapping. Neuroimage 53, 1233–1243 (2010).
27. Cousineau, M. et al. A test-retest study on Parkinson's PPMI dataset yields statistically significant white matter fascicles. NeuroImage Clin. 16, 222–233 (2017).
28. Dayan, M. et al. Profilometry: a new statistical framework for the characterization of white matter pathways, with application to multiple sclerosis. Hum. Brain Mapp. 37(3), 989–1004 (2015).
29. Yeatman, J. D., Dougherty, R. F., Myall, N. J., Wandell, B. A. & Feldman, H. M. Tract profiles of white matter properties: automating fiber-tract quantification. PLoS ONE 7, e49790 (2012).
30. Colby, J. B. et al. Along-tract statistics allow for enhanced tractography analysis. Neuroimage 59, 3227–3242 (2012).
31. Bells, S. et al. Tractometry: comprehensive multi-modal quantitative assessment of white matter along specific tracts. Proc. ISMRM 678, 1 (2011).
32. Chamberland, M., St-Jean, S., Tax, C. M. & Jones, D. K. Obtaining representative core streamlines for white matter tractometry of the human brain. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 359–366 (Springer, Berlin, 2018).
33. Smith, S. M. et al. Tract-based spatial statistics: voxelwise analysis of multi-subject diffusion data. Neuroimage 31, 1487–1505 (2006).
34. Goodlett, C. B., Fletcher, P. T., Gilmore, J. H. & Gerig, G. Group analysis of DTI fiber tract statistics with application to neurodevelopment. Neuroimage 45, S133–S142 (2009).
35. Basser, P. J. & Pierpaoli, C. Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. J. Magn. Reson. Ser. B 111, 209–219 (1996).
36. Descoteaux, M. High angular resolution diffusion imaging (HARDI). Wiley Encycl. Electron. Electron. Eng. https://doi.org/10.1002/047134608X.W8258 (1999).
37. Richie-Halford, A., Yeatman, J. D., Simon, N. & Rokem, A. Multidimensional analysis and detection of informative features in diffusion MRI measurements of human white matter. BioRxiv https://doi.org/10.1101/2019.12.19.882928 (2020).
38. Aganj, I. et al. Reconstruction of the orientation distribution function in single- and multiple-shell q-ball imaging within constant solid angle. Magn. Reson. Med. 64, 554–566 (2010).
39. Tuch, D. S. Q-ball imaging. Magn. Reson. Med. 52, 1358–1372 (2004).
40. Yeh, F.-C., Wedeen, V. J. & Tseng, W.-Y. I. Generalized q-sampling imaging. IEEE Trans. Med. Imaging 29, 1626–1635 (2010).
41. Laird, N. M. et al. Random-effects models for longitudinal data. Biometrics 38, 963–974 (1982).
42. Hedges, L. V. A random effects model for effect sizes. Psychol. Bull. 93, 388 (1983).
43. Verbeke, G. & Molenberghs, G. Linear Mixed Models for Longitudinal Data (Springer, Berlin, 2009).
44. Garyfallidis, E. et al. DIPY, a library for the analysis of diffusion MRI data. Front. Neuroinform. 8, 8 (2014).
45. Garyfallidis, E., Ocegueda, O., Wassermann, D. & Descoteaux, M. Robust and efficient linear registration of white-matter fascicles in the space of streamlines. NeuroImage 117, 124–140 (2015).
46. Marek, K. et al. The Parkinson Progression Marker Initiative (PPMI). Prog. Neurobiol. 95, 629–635 (2011).
47. Rohlfing, T. Transformation model and constraints cause bias in statistics on deformation fields. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 207–214 (Springer, 2006).
48. Gupta, V., Thomopoulos, S. I., Corbin, C. K., Rashid, F. & Thompson, P. M. Fibernet 2.0: an automatic neural network based tool for clustering white matter fibers in the brain. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 708–711 (IEEE, 2018).
49. Ramsay, J. O. Functional data analysis. Encycl. Stat. Sci. 4 (2004).
50. Brumback, B. A. & Rice, J. A. Smoothing spline models for the analysis of nested and crossed samples of curves. J. Am. Stat. Assoc. 93, 961–976 (1998).
51. Corouge, I., Gouttard, S. & Gerig, G. Towards a shape model of white matter fiber bundles using diffusion tensor MRI. In 2004 2nd IEEE International Symposium on Biomedical Imaging: Nano to Macro (IEEE Cat No. 04EX821), 344–347 (IEEE, 2004).
52. Garyfallidis, E. Towards an Accurate Brain Tractography. Ph.D. Thesis, University of Cambridge (2012).
53. Garyfallidis, E., Brett, M., Correia, M. M., Williams, G. B. & Nimmo-Smith, I. QuickBundles, a method for tractography simplification. Front. Neurosci. 6, 1–13 (2012).
54. Das, N. & Bhandari, S. K. Bound on FWER for correlated normal distribution. arXiv preprint arXiv:1908.02193 (2019).
55. Duffau, H., Herbet, G. & Moritz-Gasser, S. Toward a pluri-component, multimodal, and dynamic organization of the ventral semantic stream in humans: lessons from stimulation mapping in awake patients. Front. Syst. Neurosci. 7, 44 (2013).
56. Moayedi, M., Salomons, T. V., Dunlop, K. A., Downar, J. & Davis, K. D. Connectivity-based parcellation of the human frontal polar cortex. Brain Struct. Funct. 220, 2603–2616 (2015).
57. Wu, Y., Sun, D., Wang, Y. & Wang, Y. Subcomponents and connectivity of the inferior fronto-occipital fasciculus revealed by diffusion spectrum imaging fiber tracking. Front. Neuroanat. 10, 88 (2016).
58. Hattori, T. et al. Cognitive status correlates with white matter alteration in Parkinson's disease. Hum. Brain Mapp. 33, 727–739 (2012).
59. Perea, R. D. et al. A comparative white matter study with Parkinson's disease, Parkinson's disease with dementia and Alzheimer's disease. J. Alzheimers Dis. Parkinsonism 3, 123 (2013).
60. Zheng, Z. et al. DTI correlates of distinct cognitive impairments in Parkinson's disease. Hum. Brain Mapp. 35, 1325–1333 (2014).
61. Wu, J.-Y., Zhang, Y., Wu, W.-B., Hu, G. & Xu, Y. Impaired long contact white matter fibers integrity is related to depression in Parkinson's disease. CNS Neurosci. Therap. 24, 108–114 (2018).
62. Wen, M.-C. et al. Differential white matter regional alterations in motor subtypes of early drug-naive Parkinson's disease patients. Neurorehabil. Neural Repair 32, 129–141 (2018).
63. Taylor, K. I., Sambataro, F., Boess, F., Bertolino, A. & Dukart, J. Progressive decline in gray and white matter integrity in de novo Parkinson's disease: an analysis of longitudinal Parkinson Progression Markers Initiative diffusion tensor imaging data. Front. Aging Neurosci. 10, 318 (2018).
64. Eriksson, B. et al. 3.105 Diffusion tensor tractography of the frontopontine tract in Parkinsonian disorders. Parkinsonism Relat. Disord. 13, S154 (2007).
65. Bertrand, J.-A. et al. Color discrimination deficits in Parkinson's disease are related to cognitive impairment and white-matter alterations. Mov. Disord. 27, 1781–1788 (2012).
66. Agosta, F. et al. Clinical, cognitive, and behavioural correlates of white matter damage in progressive supranuclear palsy. J. Neurol. 261, 913–924 (2014).
67. Huang, P. et al. Disrupted white matter integrity in depressed versus non-depressed Parkinson's disease patients: a tract-based spatial statistics study. J. Neurol. Sci. 346, 145–148 (2014).
68. Chen, B., Fan, G. G., Liu, H. & Wang, S. Changes in anatomical and functional connectivity of Parkinson's disease patients according to cognitive status. Eur. J. Radiol. 84, 1318–1324 (2015).
69. Kim, H. J. et al. Alterations of mean diffusivity in brain white matter and deep gray matter in Parkinson's disease. Neurosci. Lett. 550, 64–68 (2013).
70. Ward, C. D., Hess, W. A. & Calne, D. B. Olfactory impairment in Parkinson's disease. Neurology 33, 943 (1983).
71. Mole, J. P. et al. Increased fractional anisotropy in the motor tracts of Parkinson's disease suggests compensatory neuroplasticity or selective neurodegeneration. Eur. Radiol. 26, 3327–3335 (2016).
72. Nagano-Saito, A., Houde, J., Bedetti, C., Côté, M. & Monchi, O. Increased fractional anisotropy in precuneus in Parkinson's disease without mild cognitive impairment—a diffusion tensor imaging study: 1941. Mov. Disord. 34 (2019).
73. Lenfeldt, N., Larsson, A., Nyberg, L., Birgander, R. & Forsgren, L. Fractional anisotropy in the substantia nigra in Parkinson's disease: a complex picture. Eur. J. Neurol. 22, 1408–1414 (2015).
74. Zhang, Y., Wu, I., Tosun, D., Foster, E. & Schuff, N. Parkinson's Progression Markers Initiative: progression of regional microstructural degeneration in Parkinson's disease: a multicenter diffusion tensor imaging study. PLoS ONE 11, e0165540 (2016).
75. Minett, T. et al. Longitudinal diffusion tensor imaging changes in early Parkinson's disease: ICICLE-PD study. J. Neurol. 265, 1528–1539 (2018).
76. Kendi, A. K., Lehericy, S., Luciana, M., Ugurbil, K. & Tuite, P. Altered diffusion in the frontal lobe in Parkinson disease. Am. J. Neuroradiol. 29, 501–505 (2008).
77. Yeh, F.-C. & Tseng, W.-Y. I. NTU-90: a high angular resolution brain atlas constructed by q-space diffeomorphic reconstruction. Neuroimage 58, 91–99 (2011).
78. Yeh, F.-C. et al. Population-averaged atlas of the macroscale human structural connectome and its network topology. NeuroImage 178, 57–68 (2018).
79. Manjón, J. V. et al. Diffusion weighted image denoising using overcomplete local PCA. PLoS ONE 8, e73021 (2013).
80. Yang, X., Shen, X., Long, J. & Chen, H. An improved median-based Otsu image thresholding algorithm. AASRI Procedia 3, 468–473 (2012).
81. Avants, B. B., Tustison, N. & Song, G. Advanced normalization tools (ANTS). Insight J. 2, 1–35 (2009).
82. Leemans, A. & Jones, D. K. The B-matrix must be rotated when correcting for subject motion in DTI data. Magn. Reson. Med. 61, 1336–1349 (2009).
83. Tournier, J.-D., Calamante, F. & Connelly, A. Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution. NeuroImage 35, 1459–1472 (2007).
84. Mori, S., Crain, B. J., Chacko, V. P. & Van Zijl, P. C. Three-dimensional tracking of axonal projections in the brain by magnetic resonance imaging. Ann. Neurol. 45, 265–269 (1999).
85. Basser, P. J., Pajevic, S., Pierpaoli, C., Duda, J. & Aldroubi, A. In vivo fiber tractography using DT-MRI data. Magn. Reson. Med. 44, 625–632 (2000).
86. Ward, J. H. Jr. Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 58, 236–244 (1963).
## Acknowledgements
We would like to acknowledge that research reported in this publication was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under Award Number R01EB027585. We would like to acknowledge awards NIMH R01MH108467, NSF NCS-FO 1734853, NSF-NCN nanoBIO 1720625 and NIH 1RF1MH121868-01, as well as an award to the University of Washington from the Gordon & Betty Moore Foundation and the Alfred P. Sloan Foundation. We would also like to acknowledge awards NSF OAC-1916518, NSF IIS-1912270, NSF IIS-1636893, NSF BCS-1734853, and a Microsoft Faculty Fellowship to FP. We would like to acknowledge the PPMI, a public-private partnership funded by the Michael J. Fox Foundation for Parkinson's Research and funding partners. Finally, we would like to acknowledge Indiana University's supercomputing facilities; we used IU's Carbonate to run the analysis. This platform was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.
## Author information
### Contributions
B.Q.C., S.L.R., and E.G. wrote the paper. E.G. supervised B.Q.C. and guided the entire effort. B.Q.C. and E.G. wrote the code and built the interfaces in DIPY. B.Q.C. ran all the experiments. J.H. worked on the statistical comparisons and provided advice and supervision on everything related to biostatistics; J.H. also tested the LMM for differences between the R and Python versions, and no issues were found. D.B. and F.P. helped with the curation of the bundle atlas and with updating the style, language, and format of the paper. F.C.Y. provided the original whole-brain bundle atlas, which B.Q.C. curated for the purposes of this work. S.K. helped with building interfaces in DIPY. S.L.R. helped with the interpretation of the results and with comparisons to previously published results for Parkinson's disease. A.R. helped with improving the code and making BUAN more modular so that other techniques such as AFQ29 can be used under the same framework in the future. All authors provided comments and suggestions to improve the manuscript.
### Corresponding author
Correspondence to Bramsh Qamar Chandio.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Chandio, B.Q., Risacher, S.L., Pestilli, F. et al. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020). https://doi.org/10.1038/s41598-020-74054-4
• Published: | 2021-12-07 16:05:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6381815075874329, "perplexity": 3609.6027815404404}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00544.warc.gz"} |
https://forum.azimuthproject.org/plugin/ViewComment/17991 | > I suspect the monoidal preorder compatibility condition can be rephrased in terms of the left and right actions of the monoid. Something like, \$$x \le y \implies L_z(x) \le L_z(y)\$$ and \$$x \le y \implies R_z(x) \le R_z(y)\$$ for all \$$z\$$. So the left and right actions for all elements have to be monotone maps.
I agree. It appears we can say \$$\langle \leq, \otimes, I \rangle\$$ is a monoidal preorder if and only if
$$x \leq y \Longleftrightarrow \forall z.\; L_z(x) \leq L_z(y) \text{ and } R_z(x) \leq R_z(y)$$

(The forward implication is the monotonicity of the actions, which together with transitivity gives the usual compatibility condition; the reverse follows by taking \$$z = I\$$, since \$$L_I\$$ and \$$R_I\$$ are the identity.)
> Does that complexity reduction have anything to do with difference lists?
Exactly!
Difference lists are precisely *left actions* on lists.
Here's a little implementation:
-- A difference list represents a list by the function that prepends it.
newtype DList a = DList ([a] -> [a])

-- Embed a list as its left action, i.e. the partially applied (++).
toDList :: [a] -> DList a
toDList xs = DList (xs ++)

-- Recover the underlying list by applying the action to the empty list.
fromDList :: DList a -> [a]
fromDList (DList leftAction) = leftAction []
But it's not a complete implementation ;-)
**Puzzle MD1**: Give the Monoid instance for DList a, i.e., replace the undefineds below:
instance Monoid (DList a) where
(<>) = undefined
mempty = undefined
Remember that we want to obey the laws:
\begin{align} (\texttt{toDList}\, a)\ ⬦\ (\texttt{toDList}\, b) & \equiv \texttt{toDList}\, (a ⧺ b) \\\\ (\texttt{fromDList}\, a) ⧺ (\texttt{fromDList}\, b) & \equiv \texttt{fromDList}\, (a\ ⬦\ b) \\\\ \\\\ \texttt{fromDList}\, \texttt{mempty} & \equiv [] \\\\ \texttt{mempty} & \equiv \texttt{toDList}\, [] \\\\ \end{align} | 2019-04-24 22:28:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9617969989776611, "perplexity": 11361.62905728932}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578663470.91/warc/CC-MAIN-20190424214335-20190425000335-00521.warc.gz"} |
https://www.usgs.gov/publications/tungsten-skarn-potential-yukon-tanana-upland-eastern-alaska-usa-a-mineral-resource | # Tungsten skarn potential of the Yukon-Tanana Upland, eastern Alaska, USA—A mineral resource assessment
November 25, 2020
Tungsten (W) is used in a variety of industrial and technological applications and has been identified as a critical mineral for the United States, India, the European Union, and other countries. These countries rely on W imports, mostly from China, which leaves them vulnerable to supply disruption. Consequently, the U.S. government has a current initiative to understand domestic resource potential. The eastern Alaska portion of the Yukon-Tanana Upland (YTU) is prospective for W skarn deposits, the major source of global W supply. The regional geology consists of juxtaposed Paleozoic lithotectonic packages that were reaccreted to North America in the Mesozoic. Multiple subsequent episodes of arc-related magmatism intruded the lithotectonic packages, accompanied by W skarn formation mostly associated with 100–90 Ma intrusions; major W skarn deposits in Canada (e.g., Mactung, Cantung) are part of the same metallogenic event. In this paper, we present an assessment of undiscovered W skarn resources for parts of the lesser-explored western (Alaskan) portion of the YTU.
We used GIS proximity analysis to map the intersection of plutons and carbonate-bearing rocks and defined three permissive tracts for W skarn deposits. The permissive tracts were qualitatively assessed by mineral potential mapping using region-wide sediment geochemistry and mineral concentrate datasets. This analysis showed that much of the western YTU has high potential for undiscovered W skarn deposits, whereas the eastern and southern YTU have only isolated areas of medium to high potential. Historical production and the quality of the geochemistry data for the western YTU tract (ca. 9200 km2) permitted a quantitative assessment of undiscovered W resources. Probabilistic estimates by a panel of 20 experts predicted a 70% chance of one to three undiscovered W skarn deposits in the western YTU tract. The rationale for favorability employed by the expert panel included favorable lithology, previous production, clustering of previously mined deposits, W placers in the area, lack of recent exploration, pan concentrates containing W minerals, and W geochemical anomalies. The estimates were combined with a global grade and tonnage model for W skarns in a Monte Carlo simulation, yielding a median estimate of undiscovered resources of 94 kt WO3. If the undiscovered W skarn deposits are located close to infrastructure (e.g., near Fairbanks, or close to roads and/or the power grid), application of an economic filter indicates that the median total economically recoverable WO3 is 63 kt, with a net present value (NPV) of $330 million USD (2008 dollars); if the deposits are far from infrastructure, the median recoverable WO3 is only 30 kt and the NPV is $44 million.
Our models for contained WO3 resources and NPV estimates for the western YTU tract are considerably lower than the known resources in skarns in adjacent areas in Canada. Estimates for the western YTU are also lower than preliminary estimates for undiscovered W skarn deposits in areas of the western conterminous United States. We speculate that lower permeability and continuity of favorable carbonate rock horizons in the relatively higher-grade metamorphic country rocks in the Alaska portion of the YTU may explain some of the differences in prospectivity. More detailed geologic mapping, modern geochemistry, and geophysical surveys are needed to refine the resource potential of the whole YTU. Regardless, quantitative mineral resource assessment provides a useful tool for making first-order regional estimates of undiscovered resources, identifying target areas for new data acquisition, and guiding research on the fundamental controls of district-scale metallogenic endowments.
## Citation Information
Publication Year: 2022
Title: Tungsten skarn potential of the Yukon-Tanana Upland, eastern Alaska, USA—A mineral resource assessment
DOI: 10.1016/j.gexplo.2020.106700
Authors: George N.D. Case, Garth E. Graham, Erin E. Marsh, Ryan Taylor, Carlin J. Green, Philip J. Brown II, Keith A. Labay
Publication Type: Article
Publication Subtype: Journal Article
Journal: Journal of Geochemical Exploration
Index ID: 70224637
Record Source: USGS Publications Warehouse
USGS Organizations: Alaska Science Center Geology Minerals; Central Mineral and Environmental Resources Science Center; Eastern Mineral and Environmental Resources Science Center | 2023-01-27 22:19:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19032607972621918, "perplexity": 9015.209936900495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495012.84/warc/CC-MAIN-20230127195946-20230127225946-00102.warc.gz"}
https://math.libretexts.org/Bookshelves/Differential_Equations/Book%3A_Differential_Equations_for_Engineers_(Lebl)/1%3A_First_order_ODEs/1.8%3A_Exact_Equations |
# 1.8: Exact Equations
Another type of equation that comes up quite often in physics and engineering is an exact equation. Suppose $$F(x,y)$$ is a function of two variables, which we call the potential function. The naming should suggest potential energy, or electric potential. Exact equations and potential functions appear when there is a conservation law at play, such as conservation of energy. Let us make up a simple example. Let $F(x,y) = x^2+y^2 . \nonumber$
We are interested in the lines of constant energy, that is lines where the energy is conserved; we want curves where $$F(x,y) = C$$, for some constant $$C$$. In our example, the curves $$x^2+y^2=C$$ are circles. See Figure $$\PageIndex{1}$$.
We take the total derivative of $$F$$: $dF = \frac{\partial F}{\partial x} dx + \frac{\partial F}{\partial y} dy . \nonumber$
For convenience, we will make use of the notation of $$F_x = \frac{\partial F}{\partial x}$$ and $$F_y = \frac{\partial F}{\partial y}$$. In our example, $dF = 2x \, dx + 2y \, dy . \nonumber$
We apply the total derivative to $$F(x,y) = C$$, to find the differential equation $$dF = 0$$. The differential equation we obtain in such a way has the form $M \, dx + N \, dy = 0, \qquad \text{or} \qquad M + N \, \frac{dy}{dx} = 0 . \nonumber$
An equation of this form is called exact if it was obtained as $$dF = 0$$ for some potential function $$F$$. In our simple example, we obtain the equation $2x \, dx + 2y \, dy = 0, \qquad \text{or} \qquad 2x + 2y \, \frac{dy}{dx} = 0 . \nonumber$
Since we obtained this equation by differentiating $$x^2+y^2=C$$, the equation is exact. We often wish to solve for $$y$$ in terms of $$x$$. In our example, $y = \pm \sqrt{C-x^2} . \nonumber$
An interpretation of the setup is that at each point $$\vec{v} = (M,N)$$ is a vector in the plane, that is, a direction and a magnitude. As $$M$$ and $$N$$ are functions of $$(x,y)$$, we have a vector field. The particular vector field $$\vec{v}$$ that comes from an exact equation is a so-called conservative vector field, that is, a vector field that comes with a potential function $$F(x,y)$$, such that $\vec{v} = \left( \frac{\partial F}{\partial x} ,\frac{\partial F}{\partial y} \right) . \nonumber$ Let $$\gamma$$ be a path in the plane starting at $$(x_1,y_1)$$ and ending at $$(x_2,y_2)$$. If we think of $$\vec{v}$$ as force, then the work required to move along $$\gamma$$ is $\int_\gamma \vec{v}(\vec{r}) \cdot d\vec{r} = \int_\gamma M \, dx + N \, dy = F(x_2,y_2) - F(x_1,y_1) . \nonumber$
That is, the work done only depends on the endpoints, that is, where we start and where we end. For example, suppose $$F$$ is gravitational potential. The derivative of $$F$$ given by $$\vec{v}$$ is the gravitational force. What we are saying is that the work required to move a heavy box from the ground floor to the roof only depends on the change in potential energy. That is, the work done is the same no matter what path we take, whether we take the stairs or the elevator (although if we take the elevator, the elevator is doing the work for us). The curves $$F(x,y) = C$$ are those where no work need be done, such as the heavy box sliding along without accelerating or braking on a perfectly flat roof, on a cart with incredibly well oiled wheels.
An exact equation is a conservative vector field, and the implicit solution of this equation is the potential function.
## Solving exact equations
Now you, the reader, should ask: Where did we solve a differential equation? Well, in applications we generally know $$M$$ and $$N$$, but we do not know $$F$$. That is, we may have just started with $$2x + 2y \frac{dy}{dx} = 0$$, or perhaps even $x + y \frac{dy}{dx} = 0 . \nonumber$
It is up to us to find some potential $$F$$ that works. Many different $$F$$ will work; adding a constant to $$F$$ does not change the equation. Once we have a potential function $$F$$, the equation $$F\bigl(x,y(x)\bigr) = C$$ gives an implicit solution of the ODE.
##### Example $$\PageIndex{1}$$
Let us find the general solution to $$2x + 2y \frac{dy}{dx} = 0$$. Forget we knew what $$F$$ was.
Solution
If we know that this is an exact equation, we start looking for a potential function $$F$$. We have $$M = 2x$$ and $$N=2y$$. If $$F$$ exists, it must be such that $$F_x (x,y) = 2x$$. Integrate in the $$x$$ variable to find $\label{eq:exact:fint} F(x,y) = x^2 + A(y) ,$
for some function $$A(y)$$. The function $$A$$ is the "constant" of integration, though it is only constant as far as $$x$$ is concerned, and may still depend on $$y$$. Now differentiate $$\eqref{eq:exact:fint}$$ in $$y$$ and set it equal to $$N$$, which is what $$F_y$$ is supposed to be: $2y = F_y (x,y) = A'(y) . \nonumber$
Integrating, we find $$A(y) = y^2$$. We could add a constant of integration if we wanted to, but there is no need. We found $$F(x,y) = x^2+y^2$$. Next for a constant $$C$$, we solve
$F\bigl(x,y(x)\bigr) = C . \nonumber$
for $$y$$ in terms of $$x$$. In this case, we obtain $$y = \pm \sqrt{C-x^2}$$ as we did before.
##### Exercise $$\PageIndex{1}$$
Why did we not need to add a constant of integration when integrating $$A'(y) = 2y$$? Add a constant of integration, say $$3$$, and see what $$F$$ you get. What is the difference from what we got above, and why does it not matter?
The procedure, once we know that the equation is exact, is:
1. Integrate $$F_x = M$$ in $$x$$ resulting in $$F(x,y) = \text{something} + A(y)$$.
2. Differentiate this $$F$$ in $$y$$, and set that equal to $$N$$, so that we may find $$A(y)$$ by integration.
The procedure can also be done by first integrating in $$y$$ and then differentiating in $$x$$. Pretty easy huh? Let’s try this again.
##### Example $$\PageIndex{2}$$
Consider now $$2x+y + xy \frac{dy}{dx} = 0$$.
OK, so $$M = 2x+y$$ and $$N=xy$$. We try to proceed as before. Suppose $$F$$ exists. Then $$F_x (x,y) = 2x+y$$. We integrate: $F(x,y) = x^2 + xy + A(y) \nonumber$ for some function $$A(y)$$. Differentiate in $$y$$ and set equal to $$N$$: $N = xy = F_y (x,y) = x+A'(y) . \nonumber$ But there is no way to satisfy this requirement! The function $$xy$$ cannot be written as $$x$$ plus a function of $$y$$. The equation is not exact; no potential function $$F$$ exists.
Is there an easier way to check for the existence of $$F$$, other than failing in trying to find it? Turns out there is. Suppose $$M = F_x$$ and $$N = F_y$$. Then as long as the second derivatives are continuous, $\frac{\partial M}{\partial y} = \frac{\partial^2 F}{\partial y \partial x} = \frac{\partial^2 F}{\partial x \partial y} = \frac{\partial N}{\partial x} . \nonumber$ Let us state it as a theorem. Usually this is called the Poincaré Lemma.$$^{1}$$
##### Theorem $$\PageIndex{1}$$
Poincaré
If $$M$$ and $$N$$ are continuously differentiable functions of $$(x,y)$$, and $$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$$, then near any point there is a function $$F(x,y)$$ such that $$M = \frac{\partial F}{\partial x}$$ and $$N = \frac{\partial F}{\partial y}$$.
The theorem doesn’t give us a global $$F$$ defined everywhere. In general, we can only find the potential locally, near some initial point. By this time, we have come to expect this from differential equations.
Let us return to Example $$\PageIndex{2}$$ where $$M = 2x + y$$ and $$N = xy$$. Notice $$M_y = 1$$ and $$N_x = y$$, which are clearly not equal. The equation is not exact.
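If you want to check such computations mechanically, here is a small sketch using SymPy (the choice of SymPy is ours; any computer algebra system will do):

import sympy as sp

x, y = sp.symbols("x y")

def is_exact(M, N):
    # Poincare test: M dx + N dy = 0 is exact iff dM/dy == dN/dx.
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

print(is_exact(2*x + y, x*y))    # Example 2: False (not exact)
print(is_exact(2*x + y, x - 1))  # Example 3: True (exact)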
##### Example $$\PageIndex{3}$$
Solve $\frac{dy}{dx} = \frac{-2x-y}{x-1}, \qquad y(0) = 1. \nonumber$
Solution
We write the equation as $(2x+y) + (x-1)\frac{dy}{dx} = 0 , \nonumber$ so $$M = 2x+y$$ and $$N = x-1$$. Then $M_y = 1 = N_x . \nonumber$
The equation is exact. Integrating $$M$$ in $$x$$, we find $F(x,y) = x^2+xy + A(y) . \nonumber$
Differentiating in $$y$$ and setting to $$N$$, we find $x-1 = x + A'(y) . \nonumber$
So $$A'(y) = -1$$, and $$A(y) = -y$$ will work. Take $$F(x,y) = x^2+xy-y$$. We wish to solve $$x^2+xy-y = C$$. First let us find $$C$$. As $$y(0)=1$$ then $$F(0,1) = C$$. Therefore $$0^2+0\times 1 - 1 = C$$, so $$C=-1$$. Now we solve $$x^2+xy-y = -1$$ for $$y$$ to get $y = \frac{-x^2-1}{x-1} . \nonumber$
##### Example $$\PageIndex{4}$$
Solve $-\frac{y}{x^2+y^2} dx + \frac{x}{x^2+y^2} dy = 0 , \qquad y(1) = 2. \nonumber$
Solution
We leave to the reader to check that $$M_y = N_x$$.
This vector field $$(M,N)$$ is not conservative if considered as a vector field of the entire plane minus the origin. The problem is that if the curve $$\gamma$$ is a circle around the origin, say starting at $$(1,0)$$ and ending at $$(1,0)$$ going counterclockwise, then if $$F$$ existed we would expect
$0 = F(1,0) - F(1,0) = \int_\gamma F_x \, dx + F_y \, dy = \int_\gamma \frac{-y}{x^2+y^2} \, dx + \frac{x}{x^2+y^2} \, dy = 2\pi . \nonumber$
That is nonsense! We leave the computation of the path integral to the interested reader, or you can consult your multivariable calculus textbook. So there is no potential function $$F$$ defined everywhere outside the origin $$(0,0)$$.
If we think back to the theorem, it does not guarantee such a function anyway. It only guarantees a potential function locally, that is only in some region near the initial point. As $$y(1) = 2$$ we start at the point $$(1,2)$$. Considering $$x > 0$$ and integrating $$M$$ in $$x$$ or $$N$$ in $$y$$, we find
$F(x,y) = \operatorname{arctan} \left( \frac{y}{x} \right) . \nonumber$
The implicit solution is $$\operatorname{arctan} \bigl( \frac{y}{x} \bigr) = C$$. Solving, $$y = \tan(C) x$$. That is, the solution is a straight line. Solving $$y(1) = 2$$ gives us that $$\tan(C) = 2$$, and so $$y= 2x$$ is the desired solution. See Figure $$\PageIndex{1}$$, and note that the solution only exists for $$x > 0$$.
##### Example $$\PageIndex{5}$$
Solve $x^2+y^2 + 2y(x+1) \frac{dy}{dx} = 0 . \nonumber$
Solution
The reader should check that this equation is exact. Let $$M= x^2+y^2$$ and $$N=2y(x+1)$$. We follow the procedure for exact equations
$F(x,y) = \frac{1}{3}x^3 + xy^2 + A(y) , \nonumber$ and $2y(x+1) = 2xy + A'(y) . \nonumber$
Therefore $$A'(y) = 2y$$ or $$A(y) = y^2$$ and $$F(x,y) = \frac{1}{3}x^3 + xy^2 + y^2$$. We try to solve $$F(x,y) = C$$. We easily solve for $$y^2$$ and then just take the square root:
$y^2 = \frac{C-(\frac{1}{3})x^3}{x+1}, \qquad \text{so} \qquad y = \pm \sqrt{\frac{C-(\frac{1}{3})x^3}{x+1}} . \nonumber$ When $$x=-1$$, the term in front of $$\frac{dy}{dx}$$ vanishes. You can also see that our solution is not valid in that case. However, one could in that case try to solve for $$x$$ in terms of $$y$$ starting from the implicit solution $$\frac{1}{3}x^3 + xy^2 + y^2 = C$$. The solution is somewhat messy and we leave it as implicit.
## Integrating factors
Sometimes an equation $$M\, dx + N \, dy = 0$$ is not exact, but it can be made exact by multiplying with a function $$u(x,y)$$. That is, perhaps for some nonzero function $$u(x,y)$$, $u(x,y) M(x,y) \, dx + u(x,y) N(x,y) \, dy = 0 \nonumber$ is exact. Any solution to this new equation is also a solution to $$M\, dx + N \, dy = 0$$.
In fact, a linear equation $\frac{dy}{dx} + p(x) y = f(x), \qquad \text{or} \qquad \bigl( p(x) y - f(x) \bigr)\, dx + dy = 0 \nonumber$ is always such an equation. Let $$r(x) = e^{\int p(x)\,dx}$$ be the integrating factor for a linear equation. Multiply the equation by $$r(x)$$ and write it in the form of $$M + N \frac{dy}{dx} = 0$$. $r(x) p(x) y - r(x) f(x) + r(x) \frac{dy}{dx} = 0 . \nonumber$ Then $$M = r(x) p(x) y - r(x) f(x)$$, so $$M_y = r(x) p(x)$$, while $$N = r(x)$$, so $$N_x = r'(x) = r(x) p(x)$$. In other words, we have an exact equation. Integrating factors for linear functions are just a special case of integrating factors for exact equations.
But how do we find the integrating factor $$u$$? Well, given an equation $M \, dx + N \, dy = 0 , \nonumber$ $$u$$ should be a function such that $\frac{\partial}{\partial y} \bigl[ u M \bigr] = u_y M + u M_y = \frac{\partial}{\partial x} \bigl[ u N \bigr] = u_x N + u N_x . \nonumber$ Therefore, $(M_y-N_x)u = u_x N - u_y M . \nonumber$ At first it may seem we replaced one differential equation by another. True, but all hope is not lost.
A strategy that often works is to look for a $$u$$ that is a function of $$x$$ alone, or a function of $$y$$ alone. If $$u$$ is a function of $$x$$ alone, that is $$u(x)$$, then we write $$u'(x)$$ instead of $$u_x$$, and $$u_y$$ is just zero. Then $\frac{M_y-N_x}{N}u = u' . \nonumber$ In particular, $$\frac{M_y-N_x}{N}$$ ought to be a function of $$x$$ alone (not depend on $$y$$). If so, then we have a linear equation $u' - \frac{M_y-N_x}{N} u = 0 . \nonumber$ Letting $$P(x) = \frac{M_y-N_x}{N}$$, we solve using the standard integrating factor method, to find $$u(x) = C e^{\int P(x) \, dx}$$. The constant in the solution is not relevant, we need any nonzero solution, so we take $$C=1$$. Then $$u(x) = e^{\int P(x) \, dx}$$ is the integrating factor.
Similarly we could try a function of the form $$u(y)$$. Then $\frac{M_y-N_x}{M} u = - u' . \nonumber$ In particular, $$\frac{M_y-N_x}{M}$$ ought to be a function of $$y$$ alone. If so, then we have a linear equation $u' + \frac{M_y-N_x}{M} u = 0 . \nonumber$ Letting $$Q(y) = \frac{M_y-N_x}{M}$$, we find $$u(y) = C e^{-\int Q(y) \, dy}$$. We take $$C=1$$. So $$u(y) = e^{-\int Q(y) \, dy}$$ is the integrating factor.
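As a quick mechanical check of this recipe, here is a SymPy sketch that recovers the integrating factor for Example $$\PageIndex{6}$$ below (again, the use of SymPy is our choice, not part of the text):

import sympy as sp

x, y = sp.symbols("x y", positive=True)
M = (x**2 + y**2) / (x + 1)
N = 2*y

ratio = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)  # (M_y - N_x)/N = 1/(x+1)
u = sp.simplify(sp.exp(sp.integrate(ratio, x)))           # u(x) = x + 1
print(ratio, u)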
##### Example $$\PageIndex{6}$$
Solve $\frac{x^2+y^2}{x+1} + 2y \frac{dy}{dx} = 0 . \nonumber$
Solution
Let $$M= \frac{x^2+y^2}{x+1}$$ and $$N=2y$$. Compute $M_y-N_x = \frac{2y}{x+1} - 0 = \frac{2y}{x+1} . \nonumber$
As this is not zero, the equation is not exact. We notice $P(x) = \frac{M_y-N_x}{N} = \frac{2y}{x+1} \frac{1}{2y} = \frac{1}{x+1} \nonumber$ is a function of $$x$$ alone. We compute the integrating factor $e^{\int P(x) \, dx} = e^{\ln (x+1)} = x+1 . \nonumber$ We multiply our given equation by $$(x+1)$$ to obtain $x^2+y^2 + 2y(x+1) \frac{dy}{dx} = 0 , \nonumber$ which is an exact equation that we solved in Example $$\PageIndex{5}$$. The solution was $y = \pm \sqrt{\frac{C-(\frac{1}{3})x^3}{x+1}} . \nonumber$
##### Example $$\PageIndex{7}$$
Solve $y^2 + (xy+1) \frac{dy}{dx} = 0 . \nonumber$
Solution
First compute $M_y-N_x = 2y-y = y . \nonumber$ As this is not zero, the equation is not exact. We observe $Q(y) = \frac{M_y-N_x}{M} = \frac{y}{y^2} = \frac{1}{y} \nonumber$ is a function of $$y$$ alone. We compute the integrating factor $e^{-\int Q(y) \, dy} = e^{-\ln y} = \frac{1}{y} . \nonumber$ Therefore we look at the exact equation $y + \frac{xy+1}{y} \frac{dy}{dx} = 0 . \nonumber$ The reader should double check that this equation is exact. We follow the procedure for exact equations $F(x,y) = xy + A(y) , \nonumber$ and $\frac{xy+1}{y} = x+\frac{1}{y} = x+ A'(y) . \nonumber$ Consequently $$A'(y) = \frac{1}{y}$$ or $$A(y) = \ln y$$. Thus $$F(x,y) = xy + \ln y$$. It is not possible to solve $$F(x,y)=C$$ for $$y$$ in terms of elementary functions, so let us be content with the implicit solution: $xy + \ln y = C . \nonumber$ We are looking for the general solution and we divided by $$y$$ above. We should check what happens when $$y=0$$, as the equation itself makes perfect sense in that case. We plug in $$y=0$$ to find the equation is satisfied. So $$y=0$$ is also a solution.
## Footnotes
[1] Named for the French polymath Jules Henri Poincaré (1854–1912).
1.8: Exact Equations is shared under a CC BY-SA 1.3 license and was authored, remixed, and/or curated by LibreTexts. | 2022-06-27 14:04:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9600573182106018, "perplexity": 97.1694982998977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00463.warc.gz"} |
https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/output/antibody | HOME › pipelines
Antibody Outputs
Cell Ranger outputs certain files that are specific to the Antibody Capture analysis, besides the Gene Expression outputs.
Starting from Cell Ranger 3.0, all Feature Barcode counts, including Antibody Capture counts, simply become new features in addition to the standard per-gene features, and are output alongside gene counts in the feature-barcode matrix. For every row in the Feature Barcode Reference CSV file where feature_type is specified as Antibody Capture, there will be a corresponding row in the feature-barcode matrix. That row will get its title from the id field in the Feature Reference file for that feature, and the counts can be visualized via Loupe Browser by searching for the human-readable name from the name field of the Feature Reference file (for antibody applications, the id and name fields can typically be the same as long as the id is unique).
To visualize cells in 2-D space, secondary analysis dimensionality reduction outputs for Antibody Capture libraries are provided in the analysis/ directory. Log-transformed antibody counts are used to perform these analyses for Antibody Capture libraries. This is in contrast to the gene expression side of the feature-barcode matrix, where these projections are run on the PCA-reduced space from raw counts.
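For instance, one might load and plot one of these projections with pandas and matplotlib; this snippet is illustrative only, and the file path is taken from the example output tree shown below:

import pandas as pd
import matplotlib.pyplot as plt

# Load the 2-D t-SNE projection computed from the Antibody Capture counts.
proj = pd.read_csv("analysis/tsne/antibody_capture_2_components/projection.csv")
plt.scatter(proj["TSNE-1"], proj["TSNE-2"], s=4)
plt.xlabel("t-SNE 1")
plt.ylabel("t-SNE 2")
plt.title("Antibody Capture t-SNE projection")
plt.show()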
Below are some examples of the PCA, t-SNE, and UMAP output projection files.
Principal Components Analysis (PCA):
$ head -5 analysis/pca/antibody_capture_10_components/projection.csv
Barcode,PC-1,PC-2,PC-3,PC-4,PC-5,PC-6,PC-7,PC-8,PC-9,PC-10
AAACAAGCACCATACT-1,-5.574515404720648,4.1250677853049735,0.3343758325171491,-0.9529782537962408,-1.8811942105099764,-0.4217695409442901,-1.9900329330389255,-1.2255017468251315,-1.3980947791205285,-1.1176859809904909
AAACAAGCACGTAATG-1,-6.983452898884609,-1.9379476767294177,-0.042479446422044376,-1.264967360758824,4.167549425417305,0.12065395835962933,-0.707084060668425,-2.9769215409849656,0.9053984182888417,-0.061563257127632665
AAACAAGCATGCAATG-1,-4.430486543723384,3.7442086078976002,-0.9447490398187632,-1.9902233589725338,-0.6258151384415838,-1.3582451690099415,-0.107256076231657,-1.6254493516586832,-0.43820589495677176,2.3253990939137505
AAACAAGCATTTGGGA-1,-4.945904594634436,4.017097394368968,-0.16688953081917113,-0.5729444140584459,-1.8303228981840096,-0.7755095535305054,-1.4069565944426259,-0.7969252558721216,-0.0011689859429466765,0.39202448730849027

The t-distributed Stochastic Neighbor Embedding (t-SNE):

$ head -5 analysis/tsne/antibody_capture_2_components/projection.csv
Barcode,TSNE-1,TSNE-2
AAACCCAAGTGGTCAG-1,-29.97926190939189,-3.5125258285933603
AAAGGTATCAACTACG-1,20.762905594110116,-6.946344013493825
AAAGTCCAGCGTGTCC-1,11.156075443007484,-5.489821984514518
AACACACTCAAGAGTA-1,-26.08126312702518,-5.167458628104057
The Uniform Manifold Approximation and Projection (UMAP): | 2022-09-30 01:04:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33392760157585144, "perplexity": 4342.651629118589}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00580.warc.gz"} |
http://math.mercyhurst.edu/~lwilliams/about/cv.html | ## Education
• PhD, Mathematics, The University of Wisconsin Milwaukee
• MA, Mathematics, The University of Wisconsin Milwaukee
• BA, Mathematics, The College of New Jersey
## Employment
• Assistant Professor, Mercyhurst University (2013 - Present)
• GAANN Fellow, The University of Wisconsin Milwaukee (2009 - 2013)
• Graduate Teaching Assistant, The University of Wisconsin Milwaukee (2007 - 2009)
## Teaching Experience
##### Mercyhurst University
• Math 110, Math Applications: Art
• Math 110, Math Applications: Nature
• Math 111, College Algebra
• Math 118, Math for the Natural Sciences
• Math 150, Linear Algebra
• Math 170, Calculus I
• Math 265, Transition to Advanced Mathematics
• Math 280, Modern Algebra I
• Math 281, Modern Algebra II
• Math 400, Topics in Mathematics: Combinatorics
• MIS 224, Mobile Application Development
• MIS 370, Client Side Programming
• DATA 562, Data Visualization with JavaScript (Graduate)
##### University of Wisconsin Milwaukee
• Math 095, Beginning Algebra, Instructor
• Math 105, Intermediate Algebra, Instructor
• Math 117, Trigonometry, Instructor
• Math 231, Calculus and Analytic Geometry I, Instructor
• Math 232, Calculus and Analytic Geometry II, Instructor
• Math 431, Numerical Analysis, Computer Lab Instructor
## Publications
• The adjoint representation of a Lie algebra and the support of Kostant's weight multiplicity formula, with Pamela E Harris and Erik Insko. Journal of Combinatorics, Vol 7, No 1, 2016.
Abstract: Even though weight multiplicity formulas, such as Kostant's formula, exist their computational use is extremely cumbersome. In fact, even in cases when the multiplicity is well understood, the number of terms considered in Kostant's formula is factorial in the rank of the Lie algebra and the value of the partition function is unknown. In this paper we address the difficult question: What are the contributing terms to the multiplicity of the zero weight in the adjoint representation of a finite dimensional Lie algebra? We describe and enumerate the cardinalities of these sets (through linear homogeneous recurrence relations with constant coefficients) for the classical Lie algebras of Type B, C, and D, the Type A case was computed by the first author. In addition, we compute the cardinality of the set of contributing terms for non-zero weight spaces in the adjoint representation. In the Type B case, the cardinality of one such non-zero-weight is enumerated by the Fibonacci numbers. We end with a computational proof of a result of Kostant regarding the exponents of the respective Lie algebra for some low rank examples and provide a section with open problems in this area.
Earlier version available on arXiv
• Invariant polynomial functions on tensors under a product of orthogonal groups. Transactions of the American Mathematical Society, Vol 368, No 2, 2016.
Abstract: Let $$K$$ be the product $$O_{n_1} \times O_{n_2} \times \cdots \times O_{n_r}$$ of orthogonal groups. Let $$V = \bigotimes_{i=1}^r \mathbb{C}^{n_i}$$, the $$r$$-fold tensor product of defining representations of each orthogonal factor. We compute a stable formula for the dimension of the $$K$$-invariant algebra of degree $$d$$ homogeneous polynomial functions on $$V$$. To accomplish this, we compute a formula for the number of matchings which commute with a fixed permutation. Finally, we provide formulas for the invariants and describe a bijection between a basis for the space of invariants and the isomorphism classes of certain $$r$$-regular graphs on $$d$$ vertices, as well as a method of associating each invariant to other combinatorial settings such as phylogenetic trees.
Earlier version available on arXiv
• The measurement of quantum entanglement and enumeration of graph coverings, with Michael W Hero and Jeb F Willenbring. AMS Contemporary Mathematics Series, Vol 557, 2011.
Abstract: We provide formulas for invariants defined on a tensor product of defining representations of unitary groups, under the action of the product group. This situation has a physical interpretation, as it is related to the quantum mechanical state space of a multi-particle system in which each particle has finitely many outcomes upon observation. Moreover, these invariant functions separate the entangled and unentangled states, and are therefore viewed as measurements of quantum entanglement. When the ranks of the unitary groups are large, we provide a graph theoretic interpretation for the dimension of the invariants of a fixed degree. We also exhibit a bijection between isomorphism classes of finite coverings of connected simple graphs and a basis for the space of invariants. The graph coverings are related to branched coverings of surfaces.
Earlier version available on arXiv
## Conference and Seminar Talks
• MAA Allegheny Mountain Section Meeting, Westminster College, PA, 2014
• Colloquium, The United States Military Academy, West Point, NY, 2013
• Dissertation Defense, UWM, 2013
• Algebra and Combinatorics Seminar, University of Wisconsin Madison, 2013
• Joint Mathematics Meeting Special Session on Lie Algebras, Algebraic Transformation Groups, and Representation Theory, San Diego, CA, 2013
• Colloquium, The United States Military Academy, West Point, NY, 2012
• MAA Mathfest, Madison, WI, 2012
• Applied and Computational Mathematics Seminar, UWM, 2012
• Algebra Seminar, UWM, 2008 - 2013
## Student Talks
• Mary Jaskowak, "The Fourth Dimension", MAA Allegheny Mountain Section Meeting, Duquesne University, Spring 2017
• Andrea Flores, "World Income Distribution Dynamics: Analyzing the Dynamics of World Income Distribution Using a Markov Transition Method", MAA Allegheny Mountain Section Meeting, Gannon University, Spring 2016
• Michael Monaco, "Geometry and the Erlangen Program", MAA Allegheny Mountain Section Meeting, Washington and Jefferson College, Spring 2015
## Conferences and Workshops Attended
• MAA Allegheny Mountain Section Meeting, Duquesne University, Spring 2017
• Section NeXT Workshop, Indiana University of Pennsylvania, Fall 2016
• ICERM Illustrating Mathematics, Brown University, 2016
• MAA Allegheny Mountain Section Meeting, Gannon University, Spring 2016
• Section NeXT Workshop, Clarion University, Fall 2015
• MAA Allegheny Mountain Section Meeting, Washington and Jefferson College, Spring 2015
• Section NeXT Workshop, Penn State Behrend, Fall 2014
• MAA Allegheny Mountain Section Meeting, Westminster College, Spring 2014
• Section NeXT Workshop, Slippery Rock University, Fall 2013
• Joint Mathematics Meeting, San Diego CA, 2013
• MAA Mathfest, Madison WI, 2012
• Lie Theory and Its Applications Conference in Honor of Nolan Wallach, San Diego CA, 2011
## Professional Membership
• American Mathematical Society
• Mathematical Association of America
## Technical Skills
• Programming Languages: C, C++, Objective C, Java, JavaScript, Lua, Maple, Mathematica, Python, Sage
• Markup and Style: CSS, HTML, $$\LaTeX$$
## Awards and Funding
• GAANN Fellowship, 2009 - 2013
• Graduate School Travel Award, UWM, 2012
• Ernst Schwandt Teaching Award, UWM, 2011
• Chancellor’s Award, UWM, 2007 - 2009
• Student Accessibility Center Excellence Award, UWM, 2009
• Graduate Teaching Assistantship, UWM, 2007 - 2009
https://rspatial.github.io/raster/reference/raster-package.html | The raster package provides classes and functions to manipulate geographic (spatial) data in 'raster' format. Raster data divides space into cells (rectangles; pixels) of equal size (in units of the coordinate reference system). Such continuous spatial data are also referred to as 'grid' data, and can be contrasted with discrete (object-based) spatial data (points, lines, polygons).
The package should be particularly useful when using very large datasets that cannot be loaded into the computer's memory. Functions will work correctly because they process large files in chunks, i.e., they read, compute, and write blocks of data, without loading all values into memory at once.
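As a rough illustration of this read–compute–write pattern (a generic Python sketch, not the package's internal code; the file path, array shape, and block size are placeholders), a global statistic can be computed one block of rows at a time so that only a small part of the data is ever in memory:

```python
import numpy as np

def block_mean(path, shape, dtype=np.float32, rows_per_block=256):
    # Memory-map the file so that slicing reads only the requested rows.
    data = np.memmap(path, dtype=dtype, mode="r", shape=shape)
    total, count = 0.0, 0
    for r0 in range(0, shape[0], rows_per_block):
        block = np.asarray(data[r0:r0 + rows_per_block])   # read one block
        total += block.sum(dtype=np.float64)                # compute on the block
        count += block.size
    return total / count                                    # write/aggregate step
```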
Below is a list of some of the most important functions grouped by theme. See the vignette for more information and some examples (you can open it by running this command: vignette('Raster'))
## Details
The package implements classes for Raster data (see Raster-class) and supports
• Creation of Raster* objects from scratch or from file
• Handling extremely large raster files
• Raster algebra and overlay functions
• Distance, neighborhood (focal) and patch functions
• Polygon, line and point to raster conversion
• Model predictions
• Summarizing raster values
• Plotting (making maps)
• Manipulation of raster extent, resolution and origin
• Computation of row, column and cell numbers to coordinates and vice versa
• Reading and writing various raster file types
## I. Creating Raster* objects
RasterLayer, RasterStack, and RasterBrick objects are, as a group, referred to as Raster* objects. Raster* objects can be created from scratch, from files, or from objects of other classes, with the following functions:
• raster: To create a RasterLayer
• stack: To create a RasterStack (multiple layers)
• brick: To create a RasterBrick (multiple layers)
• subset: Select layers of a RasterStack/Brick
• addLayer: Add a layer to a Raster* object
• dropLayer: Remove a layer from a RasterStack or RasterBrick
• unstack: Create a list of RasterLayer objects from a RasterStack
## II. Changing the spatial extent and/or resolution of Raster* objects
• merge: Combine Raster* objects with different extents (but same origin and resolution)
• mosaic: Combine RasterLayers with different extents and a function for overlap areas
• crop: Select a geographic subset of a Raster* object
• extend: Enlarge a Raster* object
• trim: Trim a Raster* object by removing exterior rows and/or columns that only have NAs
• aggregate: Combine cells of a Raster* object to create larger cells
• disaggregate: Subdivide cells
• resample: Warp values to a Raster* object with a different origin or resolution
• projectRaster: Project values to a raster with a different coordinate reference system
• shift: Move the location of a Raster
• flip: Flip values horizontally or vertically
• rotate: Rotate values around the date-line (for lon/lat data)
• t: Transpose a Raster* object
## III. Raster algebra
• Arith-methods: Arith functions (+, -, *, ^, %%, %/%, /)
• Math-methods: Math functions like abs, sqrt, trunc, log, log10, exp, sin, round
• Logic-methods: Logic functions (!, &, |)
• Summary-methods: Summary functions (mean, max, min, range, prod, sum, any, all)
• Compare-methods: Compare functions (==, !=, >, <, <=, >=)
## IV. Cell based computation
• calc: Computations on a single Raster* object
• overlay: Computations on multiple RasterLayer objects
• cover: First layer covers second layer except where the first layer is NA
• mask: Use values from first Raster except where cells of the mask Raster are NA
• cut: Reclassify values using ranges
• subs: Reclassify values using an 'is-becomes' matrix
• reclassify: Reclassify using a 'from-to-becomes' matrix
• init: Initialize cells with new values
• stackApply: Computations on groups of layers in a Raster* object
• stackSelect: Select cell values from different layers using an index RasterLayer
## V. Spatial contextual computation
• distance: Shortest distance to a cell that is not NA
• gridDistance: Distance when traversing grid cells that are not NA
• distanceFromPoints: Shortest distance to any point in a set of points
• direction: Direction (azimuth) to or from cells that are not NA
• focal: Focal (neighborhood; moving window) functions
• localFun: Local association (using neighborhoods) functions
• boundaries: Detection of boundaries (edges)
• clump: Find clumps (patches)
• adjacent: Identify cells that are adjacent to a set of cells on a raster
• area: Compute area of cells (for longitude/latitude data)
• terrain: Compute slope, aspect and other characteristics from elevation data
• Moran: Compute global or local Moran or Geary indices of spatial autocorrelation
## VI. Model predictions
• predict: Predict a non-spatial model to a RasterLayer
• interpolate: Predict a spatial model to a RasterLayer
## VII. Data type conversion
You can coerce Raster* objects to Spatial* objects using as, as in as(object, 'SpatialGridDataFrame')
• raster: RasterLayer from SpatialGrid*, image, or matrix objects
• rasterize: Rasterizing points, lines or polygons
• rasterToPoints: Create points from a RasterLayer
• rasterToPolygons: Create polygons from a RasterLayer
• rasterToContour: Contour lines from a RasterLayer
• rasterFromXYZ: RasterLayer from regularly spaced points
• rasterFromCells: RasterLayer from a Raster object and cell numbers
## VIII. Summarizing
• cellStats: Summarize Raster cell values with a function
• summary: Summary of the values of a Raster* object (quartiles and mean)
• freq: Frequency table of Raster cell values
• crosstab: Cross-tabulate two Raster* objects
• unique: Get the unique values in a Raster* object
• zonal: Summarize a Raster* object by zones in a RasterLayer
## IX. Accessing values of Raster* object cells
Apart from the function listed below, you can also use indexing with [ for cell numbers, and [[ for row / column number combinations
• getValues: Get all cell values (fails with very large rasters), or a row of values (safer)
• getValuesBlock: Get values for a block (a rectangular area)
• getValuesFocal: Get focal values for one or more rows
• as.matrix: Get cell values as a matrix
• as.array: Get cell values as an array
• extract: Extract cell values from a Raster* object (e.g., by cell, coordinates, polygon)
• sampleRandom: Random sample
• sampleRegular: Regular sample
• minValue: Get the minimum value of the cells of a Raster* object (not always known)
• maxValue: Get the maximum value of the cells of a Raster* object (not always known)
• setMinMax: Compute the minimum and maximum value of a Raster* object if these are not known
## X. Plotting
See the rasterVis package for additional plotting methods for Raster* objects using methods from 'lattice' and other packages.
Maps:
• plot: Plot a Raster* object. The main method to create a map
• plotRGB: Combine three layers (red, green, blue channels) into a single 'real color' image
• spplot: Plot a Raster* with the spplot function (sp package)
• image: Plot a Raster* with the image function
• persp: Perspective plot of a RasterLayer
• contour: Contour plot of a RasterLayer
• filledContour: Filled contour plot of a RasterLayer
• text: Plot the values of a RasterLayer on top of a map

Interacting with a map:
• zoom: Zoom in to a part of a map
• click: Query values of Raster* or Spatial* objects by clicking on a map
• select: Select a geometric subset of a Raster* or Spatial* object
• drawPoly: Create a SpatialPolygons object by drawing it
• drawLine: Create a SpatialLines object by drawing it
• drawExtent: Create an Extent object by drawing it

Other plots:
• plot: x-y scatter plot of the values of two RasterLayer objects
• hist: Histogram of Raster* object values
• barplot: Barplot of a RasterLayer
• density: Density plot of Raster* object values
• pairs: Pairs plot for layers in a RasterStack or RasterBrick
• boxplot: Box plot of the values of one or multiple layers
## XI. Getting and setting Raster* dimensions
Basic parameters of existing Raster* objects can be obtained, and in most cases changed. If there are values associated with a RasterLayer object (either in memory or via a link to a file) these are lost when you change the number of columns or rows or the resolution. This is not the case when the extent is changed (as the number of columns and rows will not be affected). Similarly, with projection you can set the projection, but this does not transform the data (see projectRaster for that).
• ncol: The number of columns
• nrow: The number of rows
• ncell: The number of cells (cannot be set directly, only via ncol or nrow)
• res: The resolution (x and y)
• nlayers: How many layers does the object have?
• names: Get or set the layer names
• xres: The x resolution (can be set with res)
• yres: The y resolution (can be set with res)
• xmin: The minimum x coordinate (or longitude)
• xmax: The maximum x coordinate (or longitude)
• ymin: The minimum y coordinate (or latitude)
• ymax: The maximum y coordinate (or latitude)
• extent: The extent (minimum and maximum x and y coordinates)
• origin: The origin of a Raster* object
• crs: The coordinate reference system (map projection)
• isLonLat: Test if an object has a longitude/latitude coordinate reference system
• filename: Filename to which a RasterLayer or RasterBrick is linked
• bandnr: Layer (=band) of a multi-band file that this RasterLayer is linked to
• nbands: How many bands (layers) does the file associated with a RasterLayer object have?
• compareRaster: Compare the geometry of Raster* objects
• NAvalue: Get or set the NA value (for reading from a file)
## XII. Computing row, column, cell numbers and coordinates
Cell numbers start at 1 in the upper-left corner. They increase within rows, from left to right, and then row by row from top to bottom. Likewise, row numbers start at 1 at the top of the raster, and column numbers start at 1 at the left side of the raster.
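The arithmetic behind this convention is simple; the following Python sketch (illustrative only, not the package's implementation) converts between 1-based cell numbers, row/column numbers, and cell-center coordinates for a raster defined by its extent and resolution:

```python
def cell_from_rowcol(row, col, ncol):
    # Cells are numbered 1..ncell, left to right within a row, then top to bottom.
    return (row - 1) * ncol + col

def rowcol_from_cell(cell, ncol):
    row = (cell - 1) // ncol + 1
    col = (cell - 1) % ncol + 1
    return row, col

def xy_from_rowcol(row, col, xmin, ymax, xres, yres):
    # Coordinates of the cell center: x increases to the right,
    # y decreases downward from the top (ymax) of the raster.
    x = xmin + (col - 0.5) * xres
    y = ymax - (row - 0.5) * yres
    return x, y
```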
• xFromCol: x-coordinates from column numbers
• yFromRow: y-coordinates from row numbers
• xFromCell: x-coordinates from cell numbers
• yFromCell: y-coordinates from cell numbers
• xyFromCell: x and y coordinates from cell numbers
• colFromX: Column numbers from x-coordinates (or longitude)
• rowFromY: Row numbers from y-coordinates (or latitude)
• rowColFromCell: Row and column numbers from cell numbers
• cellFromXY: Cell numbers from x and y coordinates
• cellFromRowCol: Cell numbers from row and column numbers
• cellsFromExtent: Cell numbers from an Extent object
• coordinates: x and y coordinates for all cells
• validCell: Is this a valid cell number?
• validCol: Is this a valid column number?
• validRow: Is this a valid row number?
## XIII. Writing files
Basic:
• setValues: Put new values in a Raster* object
• writeRaster: Write all values of a Raster* object to disk
• KML: Save raster as KML file

Advanced:
• blockSize: Get suggested block size for reading and writing
• writeStart: Open a file for writing
• writeValues: Write some values
• writeStop: Close the file after writing
• update: Change the values of an existing file
## XIV. Manipulation of SpatialPolygons* and other vector type Spatial* objects
Some of these functions are in the sp package. The name in bold is the equivalent command in ArcGIS. These functions build on the geometry ("spatial features") manipulation functions in package rgeos. These functions are extended here by also providing automated attribute data handling.
• bind: append; combine Spatial* objects of the same (vector) type
• erase or "-": erase parts of a SpatialPolygons* object
• intersect or "*": intersect SpatialPolygons* objects
• union or "+": union SpatialPolygons* objects
• cover: update and identity for a SpatialPolygons and another one
• symdif: symmetrical difference of two SpatialPolygons* objects
• aggregate: dissolve smaller polygons into larger ones
• disaggregate: explode; turn polygon parts into separate polygons (in the sp package)
• crop: clip a Spatial* object using a rectangle (Extent object)
• select: select; interactively select spatial features
• click: identify attributes by clicking on a map
• merge: Join table (in the sp package)
• over: spatial queries between Spatial* objects
• extract: spatial queries between Spatial* and Raster* objects
• as.data.frame: coerce coordinates of SpatialLines or SpatialPolygons into a data.frame
## XV. Extent objects
• extent: Create an extent object
• intersect: Intersect two extent objects
• union: Combine two extent objects
• round: Round/floor/ceiling of the coordinates of an Extent object
• alignExtent: Align an extent with a Raster* object
• drawExtent: Create an Extent object by drawing it on top of a map (see plot)
## XVI. Miscellaneous
• rasterOptions: Show, set, save or get session options
• getData: Download geographic data
• pointDistance: Distance between points
• readIniFile: Read a (windows) 'ini' file
• hdr: Write header file for a number of raster formats
• trim: Remove leading and trailing blanks from a character string
• extension: Get or set the extension of a filename
• cv: Coefficient of variation
• modal: Modal value
• sampleInt: Random sample of (possibly very large) range of integer values
• showTmpFiles: Show temporary files
• removeTmpFiles: Remove temporary files
## XVII. For programmers
• canProcessInMemory: Test whether a file can be created in memory
• pbCreate: Initialize a progress bar
• pbStep: Take a progress bar step
• pbClose: Close a progress bar
• readStart: Open file connections for efficient multi-chunk reading
• readStop: Close file connections
• rasterTmpFile: Get a name for a temporary file
• inMemory: Are the cell values in memory?
• fromDisk: Are the cell values read from a file?
## Author
Except where indicated otherwise, the functions in this package were written by Robert J. Hijmans
## Acknowledgments
Extensive contributions were made by Jacob van Etten, Jonathan Greenberg, Matteo Mattiuzzi, and Michael Sumner. Significant help was also provided by Phil Heilman, Agustin Lobo, Oscar Perpinan Lamigueiro, Stefan Schlaffer, Jon Olav Skoien, Steven Mosher, and Kevin Ummel. Contributions were also made by Jochen Albrecht, Neil Best, Andrew Bevan, Roger Bivand, Isabelle Boulangeat, Lyndon Estes, Josh Gray, Tim Haering, Herry Herry, Paul Hiemstra, Ned Hornig, Mayeul Kauffmann, Bart Kranstauber, Rainer Krug, Alice Laborte, John Lewis, Lennon Li, Justin McGrath, Babak Naimi, Carsten Neumann, Joshua Perlman, Richard Plant, Edzer Pebesma, Etienne Racine, David Ramsey, Shaun Walbridge, Julian Zeidler and many others.
https://rot256.io/post/bornhack/ | # Introduction
Pwnies at Copenhagen University arranged this year's CTF at Bornhack.
This is a short post detailing 2 of the crypto challenges I designed for this year's CTF.
## Birthday-PRESENT
The challenge (and solution) can be found on github
The Sweet16 / birthday-PRESENT challenge is based on a variant of the Sweet32 vulnerability, with a block cipher (a small-scale variant of PRESENT) having a block size of 32 bits, which makes the attack more practical.
Participants were given the C source code of a server which writes the flag into a large buffer (repeated), then allows the user to overwrite the start of the buffer with any plaintext of their choosing. The buffer is then encrypted under a random key using Small-PRESENT in CBC mode and the ciphertext is returned to the user.
The vulnerability is exploited by overwriting half the buffer with known content and letting the remainder contain the unknown flag. After receiving the ciphertext, it is split into two sets $$A$$ and $$B$$ of blocks, containing the ciphertext of the known plaintext and the unknown flag respectively:
$ct = IV \ || \ A_{0} \ || \ A_{1} \ || \ \ldots \ || \ A_{n/2} \ || \ B_{0} \ || \ B_{1} \ || \ \ldots \ || \ B_{n/2}$
Since the block size is 32-bit, we expect collisions after $$\approx 2^{16}$$ blocks. When we detect a collision between two blocks $$C_{i}$$ and $$C_{j}$$, we know that:
$E(P_{i} \oplus C_{i-1}) = E(P_{j} \oplus C_{j-1})$
Since E is a permutation:
$P_{i} \oplus C_{i-1} = P_{j} \oplus C_{j-1}$
Hence knowing $$P_{i}$$ allows us to recover $$P_{j}$$ and vice versa. This is especially useful when $$P_{i} \in A$$ and $$P_{j} \in B$$ (or some part of $$A$$ already known). Should we fail to find all the plaintext blocks of the flag initially, we simply try again (since the plaintext remains fixed) and collect samples until the entire flag is known.
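The collision-harvesting step can be sketched in a few lines of Python (the function name and interface below are my own illustration, not the actual solution script): group identical ciphertext blocks and use the XOR relation above to turn known plaintext blocks into recovered flag blocks.

```python
from collections import defaultdict

BLOCK = 4  # 32-bit blocks

def recover_from_collisions(ciphertext, known_plaintext_blocks):
    """ciphertext = IV || C_1 || ... || C_n (CBC mode, 4-byte blocks).
    known_plaintext_blocks maps block index i -> known P_i (bytes)."""
    blocks = [ciphertext[k:k + BLOCK] for k in range(0, len(ciphertext), BLOCK)]
    seen = defaultdict(list)                       # ciphertext value -> block indices
    for i, c in enumerate(blocks[1:], start=1):    # blocks[0] is the IV
        seen[c].append(i)
    recovered = dict(known_plaintext_blocks)
    for indices in seen.values():                  # each group is a set of collisions
        for i in indices:
            for j in indices:
                # C_i == C_j implies P_i ^ C_{i-1} == P_j ^ C_{j-1}
                if i in recovered and j not in recovered:
                    recovered[j] = bytes(a ^ b ^ c for a, b, c in
                                         zip(recovered[i], blocks[i - 1], blocks[j - 1]))
    return recovered
```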
## Notec
The challenge (and solution) can be found on github
Notec is definitely not EC; it is an implementation of the simple Lamport signature scheme. The participants were given a Python server which signs any message except a specific challenge text; 4 messages are signed using the same key and SHA-256. The challenge is then to forge a signature on the challenge text; if the prover succeeds, the server returns the flag.
The primary challenge is finding messages for which the “signature bits” overlap completely with that of the challenge text, which results in the server revealing all the necessary elements of the private key needed to sign the challenge text. In other words, letting $$c$$ be the challenge string to sign and $$H_{i}(\cdot)$$ the function outputting the $$i$$th bit of the SHA-256 digest.
Then we are searching for a set of strings $$A$$ with $$| A | \leq 4$$ such that:
$\forall i \in \{0, \ldots, 255\} : H_{i}( c ) \in \bigcup_{a \ \in \ A} \ H_{i}(a)$
Notice that this process is independent of the key.
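One possible search strategy is sketched below in Python (the helper names and the greedy heuristic are my own illustration, not necessarily the intended solution): greedily pick a few messages that agree with the challenge digest on as many still-uncovered bit positions as possible, then brute-force one final message for whatever positions remain.

```python
import hashlib
from itertools import count

def digest_bits(msg: bytes) -> int:
    # SHA-256 digest interpreted as a 256-bit integer.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big")

def find_cover(challenge: bytes, pool_size: int = 1 << 20):
    target = digest_bits(challenge)
    uncovered = (1 << 256) - 1                  # bitmask of positions still to cover
    chosen = []
    for slot in range(3):                       # three greedy picks
        best_msg, best_gain = None, -1
        for i in range(pool_size):
            msg = f"greedy-{slot}-{i}".encode()  # distinct from the challenge text
            agree = ~(digest_bits(msg) ^ target) & uncovered
            gain = bin(agree).count("1")
            if gain > best_gain:
                best_msg, best_gain = msg, gain
        chosen.append(best_msg)
        uncovered &= digest_bits(best_msg) ^ target
    for i in count():                            # brute-force the leftover positions
        msg = f"final-{i}".encode()
        if (digest_bits(msg) ^ target) & uncovered == 0:
            return chosen + [msg]
```

With a large enough candidate pool per greedy pick, only a handful of positions remain for the final brute-force step, which is then cheap.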
# Final notes
I promise that there will be more algebra next year.
http://unapologetic.wordpress.com/2011/01/13/standard-polytabloids-are-independent/?like=1&source=post_flair&_wpnonce=8149a33672 | The Unapologetic Mathematician
Standard Polytabloids are Independent
Now we’re all set to show that the polytabloids that come from standard tableaux are linearly independent. This is half of showing that they form a basis of our Specht modules. We’ll actually use a lemma that applies to any vector space $V$ with an ordered basis $e_\alpha$. Here $\alpha$ indexes some set $B$ of basis vectors which has some partial order $\preceq$.
So, let $v_1,\dots,v_m$ be vectors in $V$, and suppose that for each $v_i$ we can pick some basis vector $e_{\alpha_i}$ which shows up with a nonzero coefficient in $v_i$ subject to the following two conditions. First, for each $i$ the basis element $e_{\alpha_i}$ should be the maximum of all the basis vectors having nonzero coefficients in $v_i$. Second, the $e_{\alpha_i}$ are all distinct.
We should note that the first of these conditions actually places some restrictions on what vectors the $v_i$ can be in the first place. For each one, the collection of basis vectors with nonzero coefficients must have a maximum. That is, there must be some basis vector in the collection which is actually bigger (according to the partial order $\preceq$) than all the others in the collection. It’s not sufficient for $e_{\alpha_i}$ to be maximal, which only means that there is no larger index in the collection. The difference is similar to that between local maxima and a global maximum for a real-valued function.
This distinction should be kept in mind, since now we're going to shuffle the order of the $v_i$ so that $e_{\alpha_1}$ is maximal among the basis elements $e_{\alpha_i}$. That is, none of the other $e_{\alpha_i}$ should be bigger than $e_{\alpha_1}$, although some may be incomparable with it. Now I say that $e_{\alpha_1}$ cannot have a nonzero coefficient in any other of the $v_i$. Indeed, if it had a nonzero coefficient in, say, $v_2$, then by assumption we would have $e_{\alpha_1}\prec e_{\alpha_2}$, which contradicts the maximality of $e_{\alpha_1}$. Thus in any linear combination
$\displaystyle c_1v_1+\dots+c_mv_m=0$
we must have $c_1=0$, since there is no other way to cancel off all the occurrences of $e_{\alpha_1}$. Removing $v_1$ from the collection, we can repeat the reasoning with the remaining vectors until we get down to a single one, which is trivially independent.
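As a toy illustration of the lemma (an added example, not part of the original argument): take a three-dimensional space with ordered basis $e_1\prec e_2\prec e_3$ and the vectors $v_1=e_1$, $v_2=e_1+e_2$, $v_3=e_2+e_3$. Their maximum basis elements are $e_1$, $e_2$, and $e_3$, which are distinct. In any vanishing combination, $e_3$ shows up only in $v_3$, forcing $c_3=0$; then $e_2$ shows up only in $v_2$, forcing $c_2=0$; and finally $c_1=0$, so the three vectors are linearly independent.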
So in the case we care about the space is the Young tabloid module $M^\lambda$, with the basis of Young tabloids having the dominance ordering. In particular, we consider for our $v_i$ the collection of polytabloids $e_t$ where $t$ is a standard tableau. In this case, we know that $\{t\}$ is the maximum of all the tabloids showing up as summands in $e_t$. And these standard tabloids are all distinct, since they arise from distinct standard tableaux. Thus our lemma shows that not only are the standard polytabloids $e_t$ distinct, they are actually linearly independent vectors in $M^\lambda$.
January 13, 2011
https://www.nature.com/articles/s41377-020-0298-8?error=cookies_not_supported&code=6a133b14-08bb-45dc-8aa9-e437c97817d6 | Nonlinear increase, invisibility, and sign inversion of a localized fs-laser-induced refractive index change in crystals and glasses
## Abstract
Multiphoton absorption via ultrafast laser focusing is the only technology that allows a three-dimensional structural modification of transparent materials. However, the magnitude of the refractive index change is rather limited, preventing the technology from being a tool of choice for the manufacture of compact photonic integrated circuits. We propose to address this issue by employing a femtosecond-laser-induced electronic band-gap shift (FLIBGS), which has an exponential impact on the refractive index change for propagating wavelengths approaching the material electronic resonance, as predicted by the Kramers–Kronig relations. Supported by theoretical calculations, based on a modified Sellmeier equation, the Tauc law, and waveguide bend loss calculations, we experimentally show that several applications could take advantage of this phenomenon. First, we demonstrate waveguide bends down to a submillimeter radius, which is of great interest for higher-density integration of fs-laser-written quantum and photonic circuits. We also demonstrate that the refractive index contrast can be switched from negative to positive, allowing direct waveguide inscription in crystals. Finally, the effect of the FLIBGS can compensate for the fs-laser-induced negative refractive index change, resulting in a zero refractive index change at specific wavelengths, paving the way for new invisibility applications.
## Introduction
Femtosecond (fs) laser inscription in transparent materials has unique advantages1,2. One of the most relevant advantages is the micrometer-scale processing of complex three-dimensional structures, owing to the nonlinear nature of the laser absorption that precisely confines structural changes to the focal volume. However, a severe limitation of fs-laser inscription is related to the relatively low photoinduced refractive index contrast that is achievable3,4. In particular, the miniaturization of many fs-laser-processed photonic devices is limited by the minimum bend radius of waveguides, which in turn depends on the magnitude of the induced refractive index contrast. Another important limitation is the decrease in the refractive index that occurs in most crystals5 and in a wide variety of glasses6,7,8. In fact, many applications, such as waveguide lasers9, electro-optic modulators10, and frequency converters11, require multi-scan-depressed cladding structures, which complicate or impede the fabrication of their guiding circuits.
In most applications of photonics, the propagating wavelengths are far from the material resonances to minimize the optical losses; the fiber-optic communication window around 1550 nm in fused silica is a good example. Away from resonances, the compaction and rarefaction of the structural network (affecting the number of charged particles per volume unit) and other mechanisms such as color centers, a change in the fictive temperature, and defect-induced density changes largely dominate the refractive index change in fs-laser-processed photonic circuits1,6,12. However, for propagating wavelengths approaching the electronic resonance, we show that the refractive index change exponentially increases owing to a fs-laser-induced band-gap shift (FLIBGS). For the first time, to the best of our knowledge, the effect of the FLIBGS in transparent materials is studied. Note that the propagating wavelengths are studied near the resonance, which should not be confused with the wavelength used to process the material with the fs laser (producing the FLIBGS), which is far from the resonance (fixed at 795 nm in this work). For the remainder of the paper, the stated wavelengths refer to light propagating in the waveguides.
Using this FLIBGS phenomenon, we demonstrate that the sign of the refractive index contrast can be inverted, which allows for the direct inscription of smooth waveguides (i.e., type I-positive refractive index change) in crystals. This type of inscription has several advantages over structures based solely on the stress induced by damage tracks traditionally inscribed in crystals5 or glasses6,7,8 using high-energy laser pulses (i.e., the so-called type III modifications13). Moreover, preliminary results show potential invisibility applications. The opposition between the fs-laser-induced negative refractive index change and the positive refractive index change due to the FLIBGS can result in a zero refractive index change at specific wavelengths, which theoretically enables invisibility. While invisibility cloaking has gained much attention in recent years14,15, mostly due to metamaterials16,17, the FLIBGS mechanism demonstrates a new concept for the direct fabrication of invisible structures, paving the way for new invisibility applications. Finally, we demonstrate a lower propagation loss in tightly curved waveguides mostly due to the high refractive index change induced by the FLIBGS, which creates opportunities for miniaturized devices. Supported by theoretical analysis, we experimentally demonstrate waveguide bends with a submillimeter radius of curvature, which is an important improvement over the minimum 10-mm radius reported previously3,18. It is somehow implicit that the use of the FLIBGS results in a practical range of applications characterized by a narrow spectral band near the resonance of the material. Surprisingly, the FLIBGS affects the refractive index over a certain region beyond the absorption edge bandwidth in the highly transparent region, which extends the application range. Moreover, since electronic bandgaps lying in the ultraviolet, visible, and infrared regions can be found in different materials19, the FLIBGS has great potential for the entire spectral band in photonics.
## Results
### FLIBGS theory and experiment
The ultrafast laser-induced refractive index change of transparent materials is a complex phenomenon that relies on different physical processes. First, a physical rearrangement of the structural network was observed1,20. It is believed that the densification induced by various complex phenomena, such as a fast temperature change21,22 and plasma shock waves23,24, has a great impact on the fs-laser-induced refractive index change. Another process is related to the variation in the absorption spectrum through the Kramers–Kronig relations25. Increased absorption due to photoinduced defects such as color centers26,27 produced by self-trapped excitons28 leads to a variation in the refractive index. Since most of the defects can be annealed while a partial refractive index change remains29, defects can only partially explain the laser-induced refractive index change.
To date, no one has explicitly studied the effect of the FLIBGS on the refractive index. A first way of approaching this problem is via the Kramers–Kronig relations, relating the refractive index to the absorption coefficient α integrated over frequency:30
$$n\left( \omega \right) = 1 + \frac{\pi }{c}{{\wp}} \int\nolimits_{0}^{+ \infty} {\frac{{\alpha \left( {\omega^{\prime}} \right)}}{{\omega^{\prime2} - \omega^{2}}}} d\omega ^{\prime}$$
(1)
where c is the speed of light, ω′ is the angular frequency variable running through the whole integration range, and denotes the Cauchy principal value. Clearly, from this relation, a change in the absorption α(ω′) curve will in turn affect n(ω). To illustrate this effect, Fig . 1 shows the transmission spectrum through a zinc selenide (ZnSe) crystal with a thickness of d = 1 mm (gray curve, associated with the right axis). When a band-gap shift occurs, the absorption edge (near the electronic resonance) of the transmission spectrum shifts horizontally. For illustrative purposes, a dashed gray curve has been added to represent the shifted spectrum. At a wavelength near the absorption edge (where the transmission slope is significant, denoted by the FLIBGS window in Fig. 1), the shift greatly affects the absorption (double gray arrows) and thus the refractive index.
Equation 1 is not convenient to use experimentally since it requires measurements over a very wide spectral band. Alternatively, the Lorentz dispersion relation with the Clausius–Mossotti form allows one to express the refractive index in terms of the number of charged particles per volume unit Nk31:
$$\frac{{3\left( {\bar n^2 - 1} \right)}}{{\left( {\bar n^2 + 2} \right)}} = \mathop {\sum}\limits_{\mathrm{k}} {\frac{{4\pi N_{\mathrm{k}}\varepsilon _{\mathrm{k}}^2/m_{\mathrm{k}}}}{{\omega _{0{\mathrm{k}}}^2 - \omega ^2 + i\gamma \omega }}}$$
(2)
where $$\bar n$$ is the complex refractive index and mk is the mass of particle k with charge εk. The number of charged particles per volume unit Nk, the resonance frequency $$\omega _{0k}$$, and the damping coefficient $$\gamma$$ are the only terms that can potentially be modified using fs-laser irradiation. The real part of the refractive index n can be experimentally obtained using the well-known Sellmeier equation, an empirical equation related to Eq. 2, as a function of the wavelength λ:
$$n^2 = A + \mathop {\sum}\limits_{\mathrm{k}} {\frac{{B_{\mathrm{k}}\lambda ^2}}{{\lambda ^2 - C_{\mathrm{k}}^2}}}$$
(3)
where the first (A) and second (k = 1) terms of this series represent the contributions to the refractive index due to the higher- and lower-energy bandgaps of electronic absorption, respectively, whereas the remaining terms (k > 1) account for a refractive index modification due to lattice resonance32. Equation 2 suggests that Bk is closely linked to the number of charged particles per volume unit Nk and Ck to the resonance frequency $$\omega _{0k}$$ (or wavelength λ0k). Note that the damping is not considered in the Sellmeier equation (also neglected in this work) since it is only significant in the close vicinity of the resonances. In addition, note that the damping is related to the absorption coefficient α and the Cauchy principal value in Eq. 1. Since the bandgap, absorption edge, and resonance frequency of a material are directly connected, the three models (using Eq. 1, Eq. 2, or Eq. 3) are similar in terms of studying the FLIBGS.
To experimentally study the FLIBGS, the following modified Sellmeier empirical equation that includes the effect of fs-laser irradiation is suggested:
$$n_{irr}^2 = A + \mathop {\sum}\limits_k {\frac{{\left( {B_{\mathrm{k}} + dN_{\mathrm{k}}} \right)\lambda ^2}}{{\lambda ^2 - \left( {C_{\mathrm{k}} + d\lambda _{\mathrm{k}}} \right)^2}}} \approx A + \frac{{\left( {B_1 + dN_1} \right)\lambda ^2}}{{\lambda ^2 - \left( {C_1 + d\lambda _1} \right)^2}}$$
(4)
where dNk is proportional to the laser-induced variation in the number of charged particles per volume unit and dλk is the laser-induced resonance shift (linked to the FLIBGS). The remaining terms (k > 1) are assumed to be negligible for wavelengths relatively close to the λ1 (or C1) electronic resonance, which is the case in this work. The fs-laser-induced refractive index contrast is Δn = nirr − n, where nirr is the refractive index of the irradiated region.
For illustrative purposes, Fig. 1 shows the ZnSe refractive index curve with the Sellmeier coefficients A = 4, B1 = 1.90, and C1 = 336.15 nm from ref. 33 (black curve, associated with the left axis). The effects of the variation in the number of charged particles per volume unit (blue dashed curves with dN1 = ±0.1) and the FLIBGS (green dotted curves with dλ1 = ±30 nm), both exaggerated to clearly observe their effect over the full spectrum, are plotted. The variation in the number of charged particles per volume unit tends to vertically displace the curve (blue arrows), which affects the refractive index similarly at all wavelengths, whereas the resonance shift tends to horizontally displace the curve (green arrows), which increasingly varies the refractive index when approaching the electronic resonance at lower wavelengths.
To demonstrate the effect of the FLIBGS on the refractive index contrast of a waveguide, fs-laser inscription was performed using a Ti:sapphire laser system (Coherent RegA). The system was operated at a wavelength of 795 nm with a repetition rate of 250 kHz. The temporal FWHM of the pulses was measured to be ~65 fs at the laser output. To estimate the electronic resonance shift dλ1 induced by the fs laser in ZnSe, several lines were inscribed with a scan speed of 5 mm/s and a pulse energy of 100 nJ. The inset in Fig. 2 shows the transmission spectrum of a ZnSe sample with a thickness of d = 1 mm before (black curve) and after (blue curve) photoinscription, measured using an Agilent Cary 5000 UV–vis–NIR spectrophotometer. Unfortunately, uniform irradiation over a 1-mm3 volume would take weeks. Therefore, 3300 lines were inscribed with a lateral displacement of 3 μm to form a layer (1 cm2), and 7 layers were inscribed with a vertical displacement of 10 μm, from a depth of 40–100 μm. The beam was focused beneath the surface of the sample using a 100× (1.25 NA) oil immersion microscope objective. The immersion oil refractive index (1.5) was beneficial for reducing the high aberration generated by the ZnSe refractive index (approximately 2.5 at 795 nm). However, it was impossible to write deeper due to the aberration and closer to the surface due to bubble formation in the oil.
The shift in the absorption edge in BaAlBO3F2 and borosilicate glasses has been observed by two other groups, but has not been investigated34,35. It is very difficult to obtain a quantitative measurement of the electronic resonance shift dλ1 from the transmission spectrum (inset of Fig. 2). Nevertheless, the absorption spectrum provides an efficient means to assess the band structure and width of the energy bandgap of optical materials, from which the electronic resonance frequency can be inferred. The optical bandgap Eopt can be expressed according to the Tauc law36:
$$\left( {\alpha \left( \omega \right)h\omega } \right) = B\left( {h\omega - E_{\mathrm{opt}}} \right)^m$$
(5)
where B is a constant depending on the transition probability, α is the absorption coefficient, which is calculated using the expression α = −2.303log(T)/d (d is the thickness of the sample and T is the transmission), ω is the incident light angular frequency, Eopt is the width of the bandgap, and m = 1/2 is the exponent characterizing the direct transition process.
From the experimental transmission spectrum, (α(ω)hω)2 can be plotted as a function of hω in eV, as shown in Fig. 2. The optical bandgap Eopt is obtained as the intersection of the extrapolated linear portion of the curve with the photon energy axis. The bandgap shifts from approximately 2.627 to 2.620 eV, which corresponds to an electronic resonance shift of dλ1 = 1.26 nm. As a comparison, in typical semiconductors (Eopt ≈ 10 eV) deformed using the piezospectroscopic effect, the strain-induced shift of an electronic resonance may be approximately 100 meV (dλ1 = 1.23 nm)37. Although this demonstrates an FLIBGS, the result is a lower bound since the sample is not irradiated over its whole volume.
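This extraction procedure is easy to reproduce numerically; the Python sketch below is a minimal illustration that assumes the measured transmission curve is available as arrays and that the linear fitting window in eV is chosen by the user (both are assumptions, not quantities given in the paper).

```python
import numpy as np

def tauc_bandgap(wavelength_nm, transmission, thickness, fit_window_eV):
    # Absorption coefficient from transmission (same expression as in the text);
    # the thickness unit only rescales y and does not change the intercept energy.
    alpha = -2.303 * np.log10(transmission) / thickness
    energy = 1239.84 / wavelength_nm          # photon energy in eV
    y = (alpha * energy) ** 2                 # direct-gap Tauc variable (m = 1/2)
    lo, hi = fit_window_eV
    mask = (energy >= lo) & (energy <= hi)    # restrict to the linear region
    slope, intercept = np.polyfit(energy[mask], y[mask], 1)
    return -intercept / slope                 # extrapolation to y = 0 gives Eopt
```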
Except for a few demonstrations, such as in ZnSe38, LiNbO3, and Nd:YCa4O(BO3)3, the refractive index change is generally negative in crystals5. Therefore, direct writing of waveguides in crystals is impractical. This can be explained by the fact that a positive refractive index change typically requires an increase in the material density, which is difficult to achieve in crystalline materials due to the compact structural order of the lattice, in contrast to vitreous materials with structural disorder and the existence of free space within the network. Figure 3 shows the refractive index contrast Δn for waveguides inscribed in a ZnSe crystal using the same parameters mentioned previously, with pulse energies from 100 to 195 nJ, as a function of the propagating wavelength. The results demonstrate a sign inversion of the refractive index change between 550 and 650 nm, depending on the energy. To the best of our knowledge, this is the first observation of a sign inversion of refractive index contrast as a function of the propagating wavelength. Details on the refractive index contrast measurement are provided in the “Materials and methods” section.
The green dotted curve represents the refractive index change calculated using Eq. 4 with dλ1 = 1.26 nm and dN1 = −6.5 × 10−3 (chosen to fit the experimental value at 700 nm) and using the Sellmeier coefficients from ref. 33. Although the experimental points agree well with the theoretical green dotted curve, a significant discrepancy is observed at shorter wavelengths, which supports the hypothesis of an underestimation of the band-gap shift dλ1. The inaccuracy of the empirical Sellmeier coefficients from ref. 33 could also contribute to the error, which is supported by the large difference between the different values found in the literature39.
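For reference, a minimal numerical sketch of this curve (written in Python for illustration, using the Sellmeier coefficients A = 4, B1 = 1.90, C1 = 336.15 nm and the fit values dλ1 = 1.26 nm, dN1 = −6.5 × 10−3 quoted above) reproduces the sign inversion and locates the zero crossing of Δn:

```python
import numpy as np

A, B1, C1 = 4.0, 1.90, 336.15        # ZnSe Sellmeier coefficients (C1 in nm)
dN1, dlam1 = -6.5e-3, 1.26           # fitted fs-laser-induced changes

def n(lam_nm, dN=0.0, dlam=0.0):
    # Eq. 3 / Eq. 4 truncated to the first electronic resonance term.
    return np.sqrt(A + (B1 + dN) * lam_nm**2 / (lam_nm**2 - (C1 + dlam)**2))

lam = np.linspace(480, 800, 3201)    # propagating wavelengths in nm
dn = n(lam, dN1, dlam1) - n(lam)     # refractive index contrast Δn(λ)
print("zero crossing near", lam[np.argmin(np.abs(dn))], "nm")
```

With these numbers the crossing falls near 600 nm, within the 550–650 nm range of sign inversion reported above.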
Figure 4 shows the near-field mode profiles of the two waveguides inscribed in crystalline ZnSe with pulse energies of 115 and 195 nJ. With the 115-nJ pulses, the light is weakly confined at 520 nm and not guided at 633 nm. With the 195-nJ pulses, the light is weakly confined at 633 nm and not guided at 1550 nm. At lower wavelengths, the light is strongly confined in the waveguide at both pulse energies. The trend follows the sign inversion of the refractive index contrast. These results are of great interest, since many applications, such as waveguide lasers9, electro-optic modulators10, and frequency converters11, currently require multi-scan-depressed cladding structures due to the decrease in the refractive index that arises in most crystals5 and a wide variety of glasses6,7,8.
### Point of invisible writing
A peculiar phenomenon can be observed in Fig. 3 when the sign of the refractive index change is inverted as a function of the wavelength. At a specific wavelength, the refractive index contrast becomes zero, which means that the laser inscription should be invisible at this wavelength. At Δn = 0, i.e., when n = nirr (cf. Equations 3 and 4), the propagated light is not affected by the structural modification, which appears to be invisible. Due to the highly nonlinear effect of dλ1 compared with the effect of dN1, invisibility occurs at different wavelengths depending on the laser inscription parameters. Therefore, the FLIBGS allows for the direct inscription of invisible structures, which does not require invisibility cloaking14,15,16,17 to be hidden. As a preliminary experimental proof of concept, the left side of Fig. 5 shows the top view of the waveguide inscribed in ZnSe using a pulse energy of 170 nJ. The five pictures were taken with a microscope using filters at 500, 550, 600, 650, and 700 nm. The visibility of the waveguide follows the trend of the refractive index contrast profile shown on the right of Fig. 5 (also see Fig. 3). At 500 and 550 nm, the waveguide is clearly seen. At 700 nm, the waveguide is fairly visible. At 600 and 650 nm, the waveguide is completely invisible to the naked eye and barely visible under the microscope, especially at 600 nm, where it is necessary to fine-tune the microscope focus position to make the waveguide barely visible.
However, the fs-laser-induced refractive index contrast is not perfectly uniform over the whole inscribed cross section, mostly due to the stress induced around the focal region. This prevents the refractive index contrast from being zero over the full cross-section area of the waveguide, as shown in the refractive index profile at 600 nm (see Fig. 5, right). The perfect step refractive index induced by the fs laser should theoretically enable perfect invisibility, a field that has gained much interest in the last decade14,15,16,17, including fs-laser-written devices in smartphone screens, such as temperature sensors40 and on-surface refractometric sensors for liquids41, that are effectively invisible to the naked eye. In these previous works40,41, the waveguides are undetectable to the naked eye due to the low laser-induced refractive index change, which limits the waveguide bend radii and thus the applications. Therefore, enhancing the invisibility in the visible region while increasing the refractive index change at the operating wavelength due to the FLIBGS would be of great interest. These invisible waveguide-based devices also have great potential in any see-through protection screen, such as car windshields, industrial displays, army helmets, and plane dashboards. The use of the multiscan technique or low repetition rates to avoid the heating effect42, and methods to minimize aberration such as using a spatial light modulator43 or a dual-beam technique44 in order to sharpen the Gaussian intensity profile should help obtain step refractive index inscriptions. Invisibility at specific wavelengths could enable interesting applications in photonic circuitry and gratings.
Note that a sign inversion of the refractive index contrast and invisibility is not possible via a type III modification (damage tracks). The negative refractive index contrast produced by voids formed due to microexplosions remains negative at any optical wavelength. Thus, invisibility can only be obtained via a negative refractive index change with a type I modification, which has been achieved in many materials5,45.
### High refractive index contrast allowing compact devices
An exponential increase in the refractive index contrast is observed when approaching the electronic resonance at shorter wavelengths (see Fig. 3). This feature is very interesting for the fabrication of photonic devices, such as splitters, couplers, and ring resonators, with a submillimeter size. In fact, submillimeter devices are still nearly impossible to fabricate using fs-laser writing due to the minimum waveguide bend radius limited by the refractive index contrast3,18. To date, wavelengths near a material's electronic resonance have not been used for photonics applications, obviously because of the higher material absorption. A centimeter-long device would be too lossy to be useful. However, for very compact devices, the intrinsic material absorption becomes less problematic. In the following paragraph, we address the possible benefits of the FLIBGS for the miniaturization of fs-laser-written photonic circuits.
To isolate waveguide bend losses, irradiation experiments were performed on GeS4 glass, which has an electronic bandgap lying in the visible region, in which it is easy to photoinscribe type I waveguides46. Figure 6 shows the refractive index contrast Δn as a function of the propagating wavelength for waveguides inscribed in GeS4 glass using the same parameters mentioned previously, with pulse energies from 50 to 120 nJ focused 100 μm beneath the surface using a 50× objective (Edmund Optics LWD 0.55 NA). The exponential increase in the refractive index contrast is clearly observed at short wavelengths. Positive refractive index changes up to ~1.7 × 10−2 are obtained at 500 nm for a pulse energy of 90 nJ. Note that this value of 1.7 × 10−2 is, to the best of our knowledge, the highest fs-laser-induced smooth positive type I refractive index change observed in any chalcogenide glass waveguide. The gray curve shows the transmission spectrum of the GeS4 glass through a 1.22-mm-thick sample (including Fresnel losses). For a fixed pulse energy, it is interesting to see that a significant enhancement of the refractive index change is still obtained at wavelengths within the highly transparent region. This extends the range of applications of FLIBGS-based devices.
To ensure a smooth inscription of the tightly curved waveguides, the scan speed was reduced to 1 mm/s. Then, 20-nJ pulses were focused 100 μm beneath the surface using a 100× oil immersion objective (1.25 NA). To isolate the curvature loss, several S-bend waveguides were written in a 6-mm-long GeS4 sample, as shown in Fig. 7. Six S-bend waveguides with a fixed lateral displacement of 200 μm with lengths L ranging from 0.5 mm (with a radius curvature R of 0.363 mm) to 6 mm (R = 45.05 mm) were written.
The S-bend waveguides were characterized using 520-, 633-, and 1550-nm laser sources. The light injection was performed by butt-coupling with a single-mode fiber. Simply by measuring the additional loss relative to a straight waveguide written under the same conditions, the additional loss from each S bend can be isolated. The bend loss in dB/mm is obtained by dividing this additional loss over the S-bend waveguide length. The results are plotted in Fig. 8a. At 1550 nm, the results are in agreement with prior results from the literature3,18. For a radius curvature of 5 mm, the loss is less than 0.5 dB/mm at 520 nm, while it is over four times higher at 1550 nm. For a radius curvature of 1.3 mm, the signal is completely lost at 1550 nm, while the loss is less than 6 dB/mm at 520 nm. For a radius of curvature of 363 μm, guiding occurs only with the 520-nm light, with a bend loss of 17 ± 2 dB/mm, which seems promising for sub-millimeter-size devices, considering that 520 nm is not the optimized wavelength. Note that we have not been able to guide 520-nm light through waveguides with submillimeter bend radii in a material with a bandgap far from this wavelength, such as standard glasses (e.g., soda lime, borosilicate, and fused silica). In addition to the high refractive index contrast obtained due to the FLIBGS, the smooth type I-positive refractive index change may have an important impact on the guiding property of waveguides with submillimeter bend radii. In fact, a high refractive index contrast can be achieved with mixes of positive and negative refractive index changes or with type III (microexplosion or damage tracks) waveguides. However, the high asymmetry or roughness typically obtained from these methods induces additional losses in waveguide bends.
The experimental values can be compared with the theoretical formula of the waveguide bend loss LB (dB/mm)47:
$$\begin{array}{l}L_{\mathrm{B}} = \frac{{2.171\pi ^{1/2}}}{{\left( {\rho R} \right)^{1/2}}}\left( {\frac{{V^4}}{{\left( {V + 1} \right)^2\left( {V - 1} \right)^{1/2}}}} \right)\\ \times \exp \left[ {\frac{{\left( {V - 1} \right)^2}}{{V + 1}} - \frac{{4R\left( {V - 1} \right)^3}}{{3\rho V^2}}\left( {\frac{{n_{irr}^2 - n^2}}{{2n_{irr}^2}}} \right)} \right]\end{array}$$
(6)
where ρ is the waveguide core radius and V is the waveguide parameter given by:
$$V = \frac{{2\pi \rho }}{\lambda }\left( {n_{irr}^2 - n^2} \right)^{1/2}$$
(7)
The theoretical bend loss curves for 520, 633, and 1550 nm are plotted in Fig. 8a (solid curves). The differences between the experimental values and the theoretical curves can be explained by the perturbation at the transition point (halfway point of the S bend) where the curve changes the direction of its rotation47, which is not taken into account in Eq. 6, and the fact that defects and waveguide roughness have more significant effects for curved segments. Moreover, Eq. 6 is an approximation for perfectly symmetrical single-mode waveguides, which is not exactly the case in our experiment. As shown in the inset of Fig. 7, the mode profile is slightly elongated, and few modes appear at smaller wavelengths. The refractive index values of the GeS4 glass were obtained using an interpolation from five measurements (n = 2.153, 2.109, 2.058, 2.044, and 2.039 at wavelengths λ = 532, 633, 972, 1303, and 1538 nm, respectively) using a Metricon 2010/M prism coupler.
However, the most important parameter is the total loss of such curved waveguide-based devices. The mode mismatch and Fresnel losses (at the input and output) can be easily reduced to less than 1 dB47 and remain the same for any S-bend size; therefore, they are not taken into account in the following loss estimation. At wavelengths far from the resonances, the propagation loss in straight waveguides can be as low as 0.01 dB/mm1. This waveguide propagation loss is negligible compared with the bend loss and material absorption at wavelengths near electronic resonance and even more negligible for compact devices, which is the subject of this study. Figure 8b shows the sum of the two main optical losses (bend loss and material absorption), which will be referred to as the “effective loss”, for several wavelengths as a function of the waveguide bend radius. The absorption spectrum of the GeS4 glass was measured using an Agilent Cary 5000 UV–vis–NIR system. Despite the higher absorption near electronic resonance, the experimental values and the theoretical curves in Fig. 8b clearly show the advantage of using wavelengths near resonance for tightly curved waveguides. For example, from the experimental measurements, a 1-mm-long optical splitter with a lateral displacement of the outputs of 400 μm, which is made of two S bends, as shown in Fig. 7, with a waveguide bend radius of 1.3 mm, exhibits an effective loss of 6.1 dB at 520 nm, while the signal is completely lost at 1550 nm. For a 1.6-mm-long splitter with a lateral displacement of the outputs of 250 μm, with a waveguide bend radius of 5 mm, the experimental effective loss is 2.16 dB at 633 nm. These relatively low losses are due to the fact that at 520 and 633 nm, the material absorption is still low, while the refractive index is significantly increased (see Fig. 6) due to the FLIBGS.
Despite the differences between the experimental points and the theoretical curves in Fig. 8b, both clearly show the same trend. Therefore, the theoretical calculation can be used to provide an optimized wavelength for a specific bend radius required for a specific application. The curves in Fig. 9 show the theoretical effective loss as a function of the wavelength for different waveguide bend radii photoinscribed in GeS4 glass. Optimized wavelengths of 895, 620, 545, 525, 505, 480, and 467 nm are obtained for bend radii of 5, 2, 1.3, 1, 0.75, 0.5, and 0.375 mm, respectively. Moreover, as shown in Fig. 9a, low-loss compact devices made of waveguides with a bend radius of 5 mm should be achievable over a bandwidth of 600 nm (from 550 to 1150 nm).
Figure 9b shows the experimental effective loss measurement (black circles) using an optical spectrum analyzer (Yokogawa AQ6373B) from a white-light source (Koheras SuperK Power supercontinuum source) launched in an S-bend waveguide (as shown in Fig. 7) with a bend radius of 1.3 mm. While this method of analysis is not precise enough to obtain a reliable measurement of the losses (it also includes Fresnel, mode mismatch, and misalignment losses), it provides a relative value of losses as a function of the wavelength. Therefore, the experimental values show the real optimized wavelength (524 nm), which is 21 nm shorter than the theoretical wavelength. This can be explained by any waveguide fluctuation, roughness, or defects caused by laser inscription power fluctuations, scratches on the surface, motor vibrations, or material imperfection, which results in a lower effective bend radius. Note that the Fresnel and mode mismatch losses are wavelength-dependent but should not significantly affect the value of the obtained optimized wavelength.
As shown in Fig. 9b, for very tight bends, the wavelength is more critical. In the case where the application requires the tightest bend, the use of the Tauc law (see Eq. 5 and Fig. 2) seems to be a practical way to obtain an efficient and reliable wavelength (or a good material choice for a fixed wavelength of interest). For GeS4 glass, a bandgap of 464 nm (2.67 eV) is obtained. At this wavelength, the losses (1.13 dB/100 μm) are mostly due to material absorption down to a bend radius of 430 μm. For a bend radius of 375 μm, an effective loss of 1.2 dB/100 μm is calculated.
As shown in Fig. 10, to obtain a lower bound of the FLIBGS in GeS4 glass, the same procedure using the Tauc law was executed (see section “FLIBGS theory and experiment”). The sample, with a thickness of d = 1.22 mm, was irradiated at depths from 60 to 660 μm. The bandgap shifts from approximately 2.67 to 2.655 eV, which corresponds to an electronic resonance shift of approximately 2.62 nm. The lower refractive index of GeS4 (2.1089 at 633 nm) makes deeper writing feasible, which probably contributes to the larger calculated band-gap shift compared with the shift for ZnSe. Unfortunately, since no Sellmeier coefficients were found in the literature for GeS4 glass, the theoretical curve of the refractive index contrast as a function of the wavelength could not be plotted in Fig. 6. As a comparison, a band-gap shift of approximately 0.06 eV (roughly 10 nm) was observed after illuminating a GeS2.33 film for 4 h using a 400-W high-pressure Hg lamp48. One may notice a surprising increase in the absorption over the full spectrum for the irradiated samples compared with that of the pristine samples (see the insets in Figs. 2 and 10). This is due to the light scattered from the non-uniformly inscribed sample, which is not detected by the Cary detector. To ensure that this scattered light did not affect the band-gap shift calculation, a few measurements were performed using a detector close to the sample to collect all of the scattered light, which provided the same results but with a higher experimental error. These measurements also ensured that the laser inscription did not induce significant absorption loss, which was also demonstrated by Tong et al.49.
Discussion
The origin of the FLIBGS is complex and depends on the irradiated material. In glasses, the network consists of a disordered arrangement of structural units such as tetrahedra (e.g., [SiO4] or [GeS4] in silica or germanium sulfide glasses, respectively), with the existence of free space and local defects. This network therefore provides favorable conditions for material modifications under an external stimulus such as fs-laser pulses. On the other hand, in a crystalline material (e.g., ZnSe), the structure is well organized without free space and has far fewer defects than glasses. This structure then has fewer degrees of freedom for photoinduced modifications. Nevertheless, if the difference in the amplitude of photosensitivity that distinguishes these two materials is set aside, the nature of the photoinduced changes is similar. Most of the time, the photoinduced changes are a combination of two or more of the following effects: the formation of color centers, the migration of species, the modification of structural units (bonds or bonding angles that break or change), and even crystallization or amorphization1,6,7. These phenomena then result in a highly localized contraction/dilatation of the structure (i.e., a local density increase or decrease), locally altering the electron density and thus the energy required to cross the bandgap. Although the origin of these phenomena remains complex, the phenomena are generally associated with a band-gap shift (also called a transmission or absorption edge shift, photodarkening or photobleaching, or an electronic resonance shift). This is also in agreement with previously reported band-gap increases due to a decrease in lattice spacing in a semiconductor under hydrostatic pressure50 and using the piezospectroscopic effect37. Moreover, an absorption edge shift has been observed in chalcogenide glasses after illumination whose energy equals or exceeds the band-gap energy51. Light-induced creation of dangling bonds (immobilized free radicals) has been considered to be the origin of the phenomenon52,53. Recently, the creation of high-density dangling bonds after pulsed-laser excitation has been observed in hydrogenated amorphous silicon54,55, which could partly explain the FLIBGS. Several models have been proposed to describe the mechanisms involved in the creation of dangling bonds under illumination with energy exceeding the band-gap energy, but this issue is still controversial51,56. Moreover, evidence from the illumination of chalcogenide glasses with near-band-gap light (e.g., a Hg lamp) suggests that the observed band-gap shift is due to an increase in structural intermediate-range disorder (randomness)48,57,58. This structural randomness may broaden the resonance frequency band. Similarly, the naturally random amorphous state of a material generally has a lower band-gap energy than its crystalline state59,60. This latter explanation may have an important impact on the band-gap shift in crystals, in which the structure becomes locally disordered under fs-laser illumination. Finally, despite these attempted explanations, the origins of the band-gap shift are still unclear61.
One can note the unusual behavior of the refractive index contrast for different laser pulse energies at the same wavelength in Figs. 3 and 6. In Fig. 6, at low energy, it is observed that the refractive index contrast increases with increasing pulse energy, whereas at higher energy, the refractive index contrast decreases. This behavior was reported in a previous work46 and explained by a saturation point of the refractive index change that occurs when the size of the waveguides surpasses the dimension of the fs-laser-induced plasma during the inscription. In the experiments presented in Fig. 6, the waveguide sizes surpassing the plasma size are denoted by squares (circle otherwise). The behavior follows the previous observation46, and the maximum refractive index contrast occurs for the highest energy pulse without the waveguide exceeding the plasma size, i.e., at 90 nJ. As shown in Fig. 3, the same trend is observed in the ZnSe crystal, where the maximum refractive index contrast is obtained at 130 nJ. The refractive index change is most negative in the red part of the spectrum and most positive in the blue part. Similarly, at 170 nJ, the refractive index change is less negative in the red part of the spectrum and less positive in the blue part. However, there is no such clear trend near the inversion of the sign of the refractive index change. This is probably due to the nonlinear nature of the refractive index change mechanisms, which is supported by the disordered refractive index profile shown in Fig. 5.
Finally, we have demonstrated an exponential increase in the photoinduced refractive index contrast for propagating wavelengths approaching electronic resonances. Unveiled by the Kramers–Kronig relations, this increase is caused by an FLIBGS in the irradiated region of transparent materials. For each material and laser, several writing parameters must be tuned to form a strong waveguide (far from resonance). Strong waveguides were not the focus of this work, and only the pulse energy was tuned to obtain decent waveguides with which to study the effects of an FLIBGS. Therefore, it would be of great interest to study the effects of an FLIBGS on known strong writing recipes and observe how the FLIBGS can push the limits of refractive index contrast and the waveguide bend radius. Exploring FLIBGS applications opens up great research opportunities for the entire spectral range in photonics, since electronic band gaps lying in the ultraviolet, visible, and infrared regions can be found in different materials.
Materials and methods
Refractive index modification measurement
To measure the photoinduced refractive index modifications, the structures were examined using a bright-field microscope (Olympus IX71) and a camera equipped with a bidimensional Hartmann grating (Phasics SID4Bio). The camera system acts as a wavefront analyzer that uses lateral shearing interferometry (QWLSI) to generate a quantitative phase image of transparent objects62. This methodology, described in detail in ref. 63, was carried out to recover the refractive index change (Δn) of the waveguides from the phase image. Accordingly, Δn measurements were considered to be exact within a 2% error margin or better. Since the Phasics camera operates in the visible range, ZnSe crystal and GeS4 glass, both of which have electronic bandgaps in the visible range, are excellent materials for the experiment.
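As a rough orientation only (the actual waveguide reconstruction follows the more involved procedure of ref. 63), the basic relation between a QWLSI phase measurement and the index change for a uniform object of thickness t probed at imaging wavelength λ is
$$\Delta n \approx \frac{\lambda\,\Delta\varphi}{2\pi\,t}$$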
Samples
Germanium sulfide (GeS4) glass samples were fabricated in-house following conventional melting–quenching techniques46. The polycrystalline ZnSe sample was obtained from a commercial supplier (Mellers Optic).
References
1. Gattass, R. R. & Mazur, E. Femtosecond laser micromachining in transparent materials. Nat. Photonics 2, 219–225 (2008).
2. Malinauskas, M. et al. Ultrafast laser processing of materials: from science to industry. Light 5, e16133 (2016).
3. Eaton, S. M. et al. High refractive index contrast in fused silica waveguides by tightly focused, high-repetition rate femtosecond laser. J. Noncryst. Solids 357, 2387–2391 (2011).
4. Arriola, A. et al. Low bend loss waveguides enable compact, efficient 3D photonic chips. Opt. Express 21, 2978–2986 (2013).
5. Chen, F. & De Aldana, J. R. V. Optical waveguides in crystalline dielectric materials produced by femtosecond-laser micromachining. Laser Photonics Rev. 8, 251–275 (2014).
6. Tan, D. Z. et al. Femtosecond laser induced phenomena in transparent solid materials: fundamentals and applications. Prog. Mater. Sci. 76, 154–228 (2016).
7. Fernandez, T. T. et al. Bespoke photonic devices using ultrafast laser driven ion migration in glasses. Prog. Mater. Sci. 94, 68–113 (2018).
8. Gross, S. et al. Ultrafast laser inscription in soft glasses: a comparative study of athermal and thermal processing regimes for guided wave optics. Int. J. Appl. Glass Sci. 3, 332–348 (2012).
9. Lapointe, J. et al. Fabrication of ultrafast laser written low-loss waveguides in flexible As2S3 chalcogenide glass tape. Opt. Lett. 41, 203–206 (2016).
10. Okhrimchuk, A. G. et al. Depressed cladding, buried waveguide laser formed in a YAG:Nd3+ crystal by femtosecond laser writing. Opt. Lett. 30, 2248–2250 (2005).
11. Liao, Y. et al. Electro-optic integration of embedded electrodes and waveguides in LiNbO3 using a femtosecond laser. Opt. Lett. 33, 2281–2283 (2008).
12. Burghoff, J. et al. Efficient frequency doubling in femtosecond laser-written waveguides in lithium niobate. Appl. Phys. Lett. 89, 081108 (2006).
13. Poumellec, B. et al. Modification thresholds in femtosecond laser processing of pure silica: review of dependencies on laser parameters [Invited]. Opt. Mater. Express 1, 766–782 (2011).
14. Zhang, B. L. et al. Macroscopic invisibility cloak for visible light. Phys. Rev. Lett. 106, 033901 (2011).
15. Cortés, L. R. et al. Full-field broadband invisibility through reversible wave frequency-spectrum control. Optica 5, 779–786 (2018).
16. Pendry, J. B., Schurig, D. & Smith, D. R. Controlling electromagnetic fields. Science 312, 1780–1782 (2006).
17. Leonhardt, U. Optical conformal mapping. Science 312, 1777–1780 (2006).
18. Charles, N. et al. Design of optically path-length-matched, three-dimensional photonic circuits comprising uniquely routed waveguides. Appl. Opt. 51, 6489–6497 (2012).
19. Levinshtein, M., Rumyantsev, S. & Shur, M. S. Handbook Series on Semiconductor Parameters (World Scientific, Singapore/New Jersey, 1996).
20. Beresna, M., Gecevičius, M. & Kazansky, P. G. Ultrafast laser direct writing and nanostructuring in transparent materials. Adv. Opt. Photonics 6, 293–339 (2014).
21. Sundaram, S. K. & Mazur, E. Inducing and probing non-thermal transitions in semiconductors using femtosecond laser pulses. Nat. Mater. 1, 217–224 (2002).
22. Chan, J. W. et al. Structural changes in fused silica after exposure to focused femtosecond laser pulses. Opt. Lett. 26, 1726–1728 (2001).
23. Juodkazis, S. et al. Laser-induced microexplosion confined in the bulk of a sapphire crystal: evidence of multimegabar pressures. Phys. Rev. Lett. 96, 166101 (2006).
24. Sakakura, M. et al. Observation of pressure wave generated by focusing a femtosecond laser pulse inside a glass. Opt. Express 15, 5674–5686 (2007).
25. Lucarini, V. et al. Kramers–Kronig Relations in Optical Materials Research (Springer, Berlin, 2005).
26. Davis, K. M. et al. Writing waveguides in glass with a femtosecond laser. Opt. Lett. 21, 1729–1731 (1996).
27. Hirao, K. & Miura, K. Writing waveguides and gratings in silica and related materials by a femtosecond laser. J. Noncryst. Solids 239, 91–95 (1998).
28. Mao, S. S. et al. Dynamics of femtosecond laser interactions with dielectrics. Appl. Phys. A 79, 1695–1709 (2004).
29. Streltsov, A. M. & Borrelli, N. F. Study of femtosecond-laser-written waveguides in glasses. J. Opt. Soc. Am. B 19, 2496–2504 (2002).
30. Lucarini, V. et al. Kramers–Kronig Relations in Optical Materials Research (Springer, Berlin, 2005).
31. Korff, S. A. & Breit, G. Optical dispersion. Rev. Mod. Phys. 4, 471–503 (1932).
32. Ghosh, G. Sellmeier coefficients and dispersion of thermo-optic coefficients for some optical glasses. Appl. Opt. 36, 1540–1546 (1997).
33. Marple, D. T. F. Refractive index of ZnSe, ZnTe, and CdTe. J. Appl. Phys. 35, 539–542 (1964).
34. Lin, G. et al. Different refractive index change behavior in borosilicate glasses induced by 1 kHz and 250 kHz femtosecond lasers. Opt. Mater. Express 1, 724–731 (2011).
35. Du, X. et al. Femtosecond laser induced space-selective precipitation of a deep-ultraviolet nonlinear BaAlBO3F2 crystal in glass. J. Noncryst. Solids 420, 17–20 (2015).
36. Tauc, J. Optical properties and electronic structure of amorphous Ge and Si. Mater. Res. Bull. 3, 37–46 (1968).
37. Akimov, A. V. et al. Ultrafast band-gap shift induced by a strain pulse in semiconductor heterostructures. Phys. Rev. Lett. 97, 037401 (2006).
38. Macdonald, J. R. et al. Ultrafast laser inscription of near-infrared waveguides in polycrystalline ZnSe. Opt. Lett. 35, 4036–4038 (2010).
39. Tatian, B. Fitting refractive-index data with the Sellmeier dispersion formula. Appl. Opt. 23, 4477–4485 (1984).
40. Lapointe, J. et al. Making smart phones smarter with photonics. Opt. Express 22, 15473–15483 (2014).
41. Lapointe, J. et al. Toward the integration of optical sensors in smartphone screens using femtosecond laser writing. Opt. Lett. 40, 5654–5657 (2015).
42. Eaton, S. M. et al. Heat accumulation effects in femtosecond laser-written waveguides with variable repetition rate. Opt. Express 13, 4708–4716 (2005).
43. Jesacher, A. et al. Adaptive optics for direct laser writing with plasma emission aberration sensing. Opt. Express 18, 656–661 (2010).
44. Lapointe, J. & Kashyap, R. A simple technique to overcome self-focusing, filamentation, supercontinuum generation, aberrations, depth dependence and waveguide interface roughness using fs laser processing. Sci. Rep. 7, 499 (2017).
45. Bérubé, J. P. et al. Femtosecond laser inscription of depressed cladding single-mode mid-infrared waveguides in sapphire. Opt. Lett. 44, 37–40 (2019).
46. Bérubé, J. P. et al. Tailoring the refractive index of Ge-S based glass for 3D embedded waveguides operating in the mid-IR region. Opt. Express 22, 26103–26116 (2014).
47. Snyder, A. W. & Love, J. Optical Waveguide Theory (Springer Science & Business Media, New York, 1983).
48. Shimizu, T. et al. Photo-induced ESR and optical absorption edge shift in amorphous Ge-S films. Solid State Commun. 27, 223–227 (1978).
49. Tong, L. et al. Optical loss measurements in femtosecond laser written waveguides in glass. Opt. Commun. 259, 626–630 (2006).
50. Neuberger, M. Handbook of Electronic Materials: Volume 5: Group IV Semiconducting Materials (Springer Science & Business Media, New York, 2012).
51. Singh, J. & Shimakawa, K. Advances in Amorphous Semiconductors (CRC Press, London, 2003).
52. Hirabayashi, I., Morigaki, K. & Nitta, S. New evidence for defect creation by high optical excitation in glow discharge amorphous silicon. Jpn. J. Appl. Phys. 19, L357 (1980).
53. Dersch, H., Stuke, J. & Beichler, J. Light-induced dangling bonds in hydrogenated amorphous silicon. Appl. Phys. Lett. 38, 456–458 (1981).
54. Ogihara, C. et al. Lifetime and intensity of photoluminescence after light induced creation of dangling bonds in a-Si:H. J. Noncryst. Solids 299–302, 637–641 (2002).
55. Morigaki, K. et al. Light-induced defect creation under pulsed subbandgap illumination in hydrogenated amorphous silicon. Philos. Mag. Lett. 83, 341–349 (2003).
56. Morigaki, K. Physics of Amorphous Semiconductors (World Scientific Press, London, 1999).
57. Pfeiffer, G., Paesler, M. A. & Agarwal, S. C. Reversible photodarkening of amorphous arsenic chalcogens. J. Noncryst. Solids 130, 111–143 (1991).
58. Street, R. A. Non-radiative recombination in chalcogenide glasses. Solid State Commun. 24, 363–365 (1977).
59. Feltz, A. Amorphous Inorganic Materials and Glasses (VCH, Weinheim, 1993).
60. Stuke, J. Review of optical and electrical properties of amorphous semiconductors. J. Noncryst. Solids 4, 1–26 (1970).
61. Kasap, S. & Capper, P. Springer Handbook of Electronic and Photonic Materials (Springer, Cham, 2017).
62. Roberts, A. et al. Refractive-index profiling of optical fibers with axial symmetry by use of quantitative phase microscopy. Opt. Lett. 27, 2061–2063 (2002).
63. Bélanger, E. et al. Comparative study of quantitative phase imaging techniques for refractometry of optical waveguides. Opt. Express 26, 17498–17510 (2018).
Acknowledgements
We acknowledge funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) (IRCPJ469414-13), Canada Foundation for Innovation (CFI) (33240 and 37422), Canada Excellence Research Chair (CERC in Photonic Innovations), FRQNT strategic cluster program (2018-RS-203345), Quebec Ministry of Economy and Innovation (PSRv2-352), and Canada First Research Excellence Fund (Sentinel North).
Author information
Authors
Contributions
J.L. conceived the idea, designed the experiment, and performed the optical measurements. J.L., J.-P.B., and A.D. performed the theoretical simulations. Y.L. and Y.M. fabricated the GeS4 samples. J.L. wrote the paper with contributions from J.-P.B., Y.L., V.F., and R.V. All of the authors analyzed the results and commented on the paper.
Corresponding author
Correspondence to Jerome Lapointe.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Rights and permissions
Reprints and Permissions
Lapointe, J., Bérubé, JP., Ledemi, Y. et al. Nonlinear increase, invisibility, and sign inversion of a localized fs-laser-induced refractive index change in crystals and glasses. Light Sci Appl 9, 64 (2020). https://doi.org/10.1038/s41377-020-0298-8
• Control and enhancement of photo-induced refractive index modifications in fused silica. Jerome Lapointe, Jean-Philippe Bérubé, Samuel Pouliot & Réal Vallée. OSA Continuum (2020)
• Record-high positive refractive index change in bismuth germanate crystals through ultrafast laser enhanced polarizability. T. Toney Fernandez, Karen Privat, Michael J. Withford & Simon Gross. Scientific Reports (2020)
• Recent Advances in Laser-Induced Surface Damage of KH2PO4 Crystal. Mingjun Chen, Wenyu Ding, Jian Cheng, Hao Yang & Qi Liu.
Applied Sciences (2020) | 2020-12-01 09:23:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5928865075111389, "perplexity": 2013.2734096603897}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141672314.55/warc/CC-MAIN-20201201074047-20201201104047-00465.warc.gz"} |
https://www.jiskha.com/questions/1517629/lim-x-1-x-1-as-x-approaches-1-I-know-the-answer-is-1-but-I-am-not-sure | Calculus
lim (x⁻¹ - 1)/(x - 1) as x approaches 1.
I know the answer is -1 but I am not sure how to solve the numerator x⁻¹-1.
1. (x⁻¹ - 1) / (x - 1)
posted by Jake
2. x⁻¹-1 = 1/x - 1 = (1-x)/x
[(1-x)/x] / (x-1) = -1/x
posted by Steve
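Putting Jake's and Steve's steps together, and taking the limit of -1/x at x = 1 explicitly:
$$\lim_{x\to 1}\frac{x^{-1}-1}{x-1}=\lim_{x\to 1}\frac{\frac{1-x}{x}}{x-1}=\lim_{x\to 1}\frac{-(x-1)}{x(x-1)}=\lim_{x\to 1}\left(-\frac{1}{x}\right)=-1$$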
Similar Questions
1. Calculus
Let f be a function defined for all real numbers. Which of the following statements must be true about f? Which might be true? Which must be false? Justify your answers. (a) lim of f(x) as x approaches a = f(a) (b) If the lim of
2. calculus again
Suppose lim x->0 {g(x)-g(0)} / x = 1. It follows necesarily that a. g is not defined at x=0 b. the limit of g(x) as x approaches equals 1 c.g is not continuous at x=0 d.g'(0) = 1 The answer is d, can someone please explain how?
3. Math
If f(x) = (x+1)/(x-1), what is: i) lim f(x) as x approaches 1 ii) lim f(x) as x approaches ∞ My answer: i) lim f(x) as x approaches 1 is undefined because 2/0 is undefined ii) lim f(x) as x approaches ∞ = ∞ (∞+1)/( ∞-1)
4. calculus
Help - I have three problems that I am stuck on - 1. Lim x approaches infinity (x-3/x squared + 4) 2. Lim x approaches 3 x cubed - x squared - 7 x +3/ x squared - 9 3. Lim x approaches negative infinity (x + square root x squared
5. Calculus
Find the limit lim as x approaches (pi/2) e^(tanx) I have the answer to be zero: t = tanx lim as t approaches negative infi e^t = 0 Why is tan (pi/2) approaching negative infinity is my question?
6. Calculus (Limits)
If g(x) is continuous for all real numbers and g(3) = -1, g(4) = 2, which of the following are necessarily true? I. g(x) = 1 at least once II. lim g(x) = g(3.5) as x aproaches 3.5. III. lim g(x) as x approaches 3 from the left =
7. Calculus
If g(x) is continuous for all real numbers and g(3) = -1, g(4) = 2, which of the following are necessarily true? I. g(x) = 1 at least once II. lim g(x) = g(3.5) as x aproaches 3.5. III. lim g(x) as x approaches 3 from the left =
8. math
Let h be defined by h(x)=f(x)*g(x) x less than or equal to 1 h(x)=k+x if x > 1 If lim as x approaches 1 f(x)=2 and lim as x approaches 1 g(x)=-2 then for what value of k is h continous? A. -5 B. -4 C. -2 D. 2
9. Calculus
Show that limit as n approaches infinity of (1+x/n)^n=e^x for any x>0... Should i use the formula e= lim as x->0 (1+x)^(1/x) or e= lim as x->infinity (1+1/n)^n Am i able to substitute in x/n for x? and then say that e lim
10. Calculus
Show that limit as n approaches infinity of (1+x/n)^n=e^x for any x>0... Should i use the formula e= lim as x->0 (1+x)^(1/x) or e= lim as x->infinity (1+1/n)^n Am i able to substitute in x/n for x? and then say that e lim
11. Calc: Limits
how to calculate the limit: *I got confused especially with the square root. 1. lim as x approaches 2 (squareroot x^2 +5)-3 / (x-2) 2. lim as θ (theta) approaches zero (tan^2 (5θ) / (sin (3θ) sin (2θ) )
More Similar Questions | 2018-09-24 12:35:43 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8875665068626404, "perplexity": 1465.2144197705636}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160400.74/warc/CC-MAIN-20180924110050-20180924130450-00043.warc.gz"} |
https://zong-music.com/ha2ten/complex-geometry-chemistry-9b504e | # complex geometry chemistry
In coordination chemistry, a complex consists of a central metal atom or ion bonded to one or more surrounding ligands, and the coordination number of an atom is the number of donor atoms bonded to it, a concept originally defined in 1893 by the Swiss chemist Alfred Werner. The geometry of a complex is governed almost entirely by the coordination number; only the most common coordination numbers, namely 2, 4, and 6, are considered here. Complexes with two ligands are invariably linear. When the coordination number is 4, either tetrahedral geometry (109.5° bond angles) or square planar geometry (90° bond angles) is observed. Octahedral geometry (90° bond angles) is observed when the coordination number is 6: all ligands are equidistant from the central atom, all ligand-metal-ligand angles are 90°, and the ligands lie along the x, y and z axes. Molecular geometry, the three-dimensional arrangement of atoms within a molecule, matters because many of the properties of a substance are determined by it.

When two or more different ligands are coordinated to the metal center, the complex can exist as isomers. The isomer in which two identical ligands are next to each other is called the cis isomer, while that in which they are on opposite sides is called the trans isomer; these two isomers are called geometrical isomers. Because the square planar geometry is less symmetrical than the tetrahedral geometry, it offers more possibilities for isomerism, and a well-known example is given by the two square planar platinum complexes: the cis isomer is used as an anti-tumor drug to treat cancerous cells, while the trans form shows no similar biological activity. Cis-trans isomerism is not possible in tetrahedral complexes, since any given corner of a tetrahedron is adjacent to the other three, so all the corners are cis to each other and none are trans. The octahedral structure also gives rise to geometrical isomerism; for example, two different compounds, one violet and one green, have the formula [Co(NH3)4Cl2]Cl. Though such isomers have some properties that are similar, no properties are identical and some are very different.

Ligands are classified as L or X (or a combination thereof), depending on how many electrons they provide for the bond between ligand and central atom: L ligands provide two electrons from a lone electron pair, resulting in a coordinate covalent bond, while X ligands provide one electron, with the central atom providing the other, forming a regular covalent bond. Ligands that attach to the metal at more than one point are called polydentate ligands and form chelates, in which the ligand binds the central metal atom in a cyclic or ring structure; an example of a chelate ring occurs in the ethylenediamine-cadmium complex. Ligand field theory looks at the effect of the donor atoms on the energy of the d orbitals in the metal complex: in an octahedral complex the d-subshell degeneracy is lifted, and the greater the energy gap, the shorter the wavelength of light the complex will absorb, so complex color depends on both the metal and the ligand. Metal complexes with an ammonia ligand, for instance, have a larger energy gap than the corresponding fluoride complexes.

Coordination complexes also have practical and biological importance. The Au(CN)2– complex is used to extract minute gold particles from the rock in which they occur: the crushed ore is treated with KCN solution and air is blown through it, 4 Au(s) + 8 CN–(aq) + O2(g) + 2 H2O(l) → 4 [Au(CN)2]–(aq) + 4 OH–(aq), and the resultant complex is water soluble. The analogous Ag(CN)2– complex is also water soluble and affords a method for dissolving AgCl, which is otherwise very insoluble. Copper occurs in biological systems as part of the prosthetic group of certain proteins; the red pigment in the soft-billed Turaco bird, for example, contains a copper porphyrin complex.

Outside chemistry, "complex geometry" also names a branch of mathematics that studies (compact) complex manifolds and treats algebraic as well as metric aspects; recent developments in string theory have made it a highly attractive area for both mathematicians and theoretical physicists, and books such as Complex Geometry by the Energy Method (Toshiki Mabuchi) focus on the Yau-Tian-Donaldson conjecture and the existence of extremal Kähler metrics. In plane geometry, complex numbers can be used to represent points, and thus other geometric objects such as lines, circles, and polygons; students are introduced to the complex numbers and basic operations with them in Algebra 2 and later extend this to division and the polar form of complex numbers.
You can quickly verify by examining any three-dimensional tetrahedral shape, any given corner of a complex is almost! The square planar complexes + 9 by the coordination number is 6 the prosthetic group of certain proteins fluoride.! Low symmetry we ’ ve glossed over until now concerns what organometallic complexes actually look like: what their... The pi bondscan c… factors determine the geometry of metal complexes with an ammonia ligand have a energy. To each other, none are trans 1246120, 1525057, and 1413739 complex is almost. Way to represent complex numbers—polar form ( O ) uraco Bird contains a copper porphyrin complex are 90° bookkeeping. The term was originally defined in 1893 by Swiss chemist Alfred Werner ( 1866–1919 ) such as II ) \$... C… factors determine the geometry of metal complexes by Swiss chemist Alfred Werner ( 1866–1919.... To use it solve for x and y other, none are trans explored far. D-Subshell degeneracy is lifted were matched to allow for unobstructed optical access dimensional for. Green, have the formula [ Co ( NH3 ) 4Cl2 ] Cl out our status page at:... Allow for unobstructed optical access we will consider only the most common coordination numbers namely. Be coordinated to an octahedral complex, the d-subshell degeneracy is lifted corner a! Licensed by CC BY-NC-SA 3.0 and all ligand-metal-ligand angles are 90° governed almost entirely by the coordination geometry the... | 2021-06-20 00:49:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6188601851463318, "perplexity": 2326.7001953192184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487653461.74/warc/CC-MAIN-20210619233720-20210620023720-00439.warc.gz"} |
http://www.gamedev.net/topic/505068-animation-blending/page-3 | • Create Account
# Animation Blending
### #41 haegarr Crossbones+ - Reputation: 7127
Posted 18 August 2008 - 04:53 AM
@ godmodder
Now I think I got it :) You speak of weights to disable joints, since your DCC package is only able to export animations of full-featured skeletons. That is entirely separate from weights for blending and/or layering. In such a case I would use flags at the joints, and destroy the flagged joints on import. You can of course use externally generated masks, too. You can further keep the joints in question, and ignore them later during processing, but that seems like a waste to me.
### #42 RobTheBloke Crossbones+ - Reputation: 2536
Posted 18 August 2008 - 06:53 AM
Quote:
Original post by haegarr: An example would be aiming with a gun during walking. The walking means animation of the entire body if done alone, i.e. including swinging of arms. When layering the animation of aiming on top of it, swinging of the arm that holds the gun should be suppressed.
Yeah, I think we are talking at cross purposes here. We use a node-based system (google "morpheme connect" to find out more), where an anim is a node, and so is a blend operation. We can then provide bone masks to limit evaluation on a given connection. That gives us the ability to reuse entire blend networks - not just limit the result of a single anim - we can limit the result of an entire set of blended anims.
Quote:
Original post by haegarr: Masking is a binary decision, while weighting is (more or less) continuous.
I never said it was binary - there's no reason why you can't blend the resulting transforms via a weight - or a set of weights for each track if needed. I.e., think of an anim as a single node. We can ask (via a mask) for a set of TMs to be evaluated, and the result can pass on to the next operation. We can also set a second mask to get the rest of the transforms. The result is exactly the same amount of work as evaluating the entire anim - but the results can be fed into separate networks and blended with IK or physics as needed. So if you request the results of the upper body, you can blend that with an aim anim - or an aim IK modifier. As I said before, we have over 20 ways of blending anims because there is no single blending op that does everything you need.
Quote:
I don't weight individual channels, i.e. the modifier of an individual position or orientation, but the entirety of channels of a particular _active_ animation (I use "animation" as the modification of a couple of bones).
What happens when you have a locomotion blend system set up for basic movement, but you wish to make modifications to specific bones? For example, the character is walking (via blended anims), aiming his gun (via IK or an anim), and has just been shot in the arm (with the reaction handled in physics)? You can't simply blend the physics with the animation data, for a pretty obvious reason - the physics has to modify the result of the animations.
So in your case, the aim would be done first as a layer, then what? You need to get the current walk pose from a set of blended anims, and then add in the physics? But then you can't do that, because the physics needs higher priority? But then the physics can't have a higher priority since it needs the results of the blended animations?
Quote:
And masking would need to have more information on the active animation tracks than weighting need. This is because weighting works without the knowledge of other animations, while masks must be build over all (or at least all active) animations.
Who says the masks have to be associated with *each* animation? There are N masks for a given network - you determine how big N is - and then reuse them when and where needed. Given that a 32-bit unsigned can store masks for 32 anim tracks, the size of the mask is not that big. Ultimately you can then provide a blend-channels node if needed.
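To make that concrete, here is a toy sketch of the idea (the names are invented and this is not morpheme's actual API): one 32-bit unsigned acts as the bone mask, and only the masked bones take part in the blend.

#include <cmath>
#include <cstdint>
#include <vector>

struct Transform {
    float t[3];   // translation
    float q[4];   // rotation quaternion (x, y, z, w)
};

// Blend two transforms: lerp the translation, nlerp the quaternion.
// (A real implementation would also pick the shorter quaternion hemisphere.)
static Transform lerpTransform(const Transform& a, const Transform& b, float w)
{
    Transform r;
    for (int i = 0; i < 3; ++i) r.t[i] = a.t[i] + (b.t[i] - a.t[i]) * w;
    float len = 0.0f;
    for (int i = 0; i < 4; ++i) { r.q[i] = a.q[i] + (b.q[i] - a.q[i]) * w; len += r.q[i] * r.q[i]; }
    len = std::sqrt(len);
    for (int i = 0; i < 4; ++i) r.q[i] /= (len > 0.0f ? len : 1.0f);
    return r;
}

// One 32-bit unsigned holds the mask for up to 32 bones/tracks: bit i set
// means bone i takes part in this blend, otherwise it is copied from poseA.
static void maskedBlend(const std::vector<Transform>& poseA,
                        const std::vector<Transform>& poseB,
                        std::uint32_t boneMask, float weight,
                        std::vector<Transform>& out)
{
    out = poseA;
    for (unsigned i = 0; i < out.size() && i < 32u; ++i)
        if ((boneMask >> i) & 1u)
            out[i] = lerpTransform(poseA[i], poseB[i], weight);
}

int main()
{
    std::vector<Transform> walk(4, Transform{{0, 0, 0}, {0, 0, 0, 1}});
    std::vector<Transform> aim (4, Transform{{0, 1, 0}, {0, 0, 0, 1}});
    std::vector<Transform> result;
    const std::uint32_t upperBodyMask = 0x0000000Cu; // bones 2 and 3 only
    maskedBlend(walk, aim, upperBodyMask, 0.5f, result);
    return 0;
}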
Quote:
For example: I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that.
Using DCC file formats for game anims isn't a good idea - for the reasons that you've noticed. I'd recommend using your own custom file format that does have the data you need - add in some form of asset converter/compiler and be as hacky as you want there. If you do happen to change DCC packages, it's trivial to extend the asset converter.
Quote:
If we still speak of the weights of layers: There is simply no sense in pre-defining them, and hence no need for DCC package support. It is a decision of the runtime system. The runtime system decides from the gamer's input and the state of the game object that pulling out a gun and aiming at a target is to be animated.
I disagree - you just need a DCC package and ingame format that's better suited to the job ;)
Quote:
You can of course use externally generated masks, too. You can further keep the joints in question, and ignore them later during processing, but that seems like a waste to me.
Not at all. In some cases it would be a waste, but then if you have a way of describing all possible blend combinations, you can determine which tracks on which anims you can discard as a pre-processing step (i.e. asset converter stage). But then again, the system i'm describing is 100% data driven so part of the asset is the description of the blending system for a character.
### #43 haegarr Crossbones+ - Reputation: 7127
Posted 18 August 2008 - 07:40 AM
@ RobTheBloke
Due to the very different animation systems, we've talked at cross purposes for sure. Even the understanding of masking wasn't the same.
I don't doubt that my animation system, whichever approach I choose, can never compete with a professional solution ... but that should not mean that I won't try ;) So you got me curious about morpheme, but unfortunately the information available at naturalmotion.com seems rather limited. I'll have to try to find free tutorials or something similar ...
### #44 haegarr Crossbones+ - Reputation: 7127
Posted 18 August 2008 - 08:50 AM
Quote:
Original post by RobTheBloke: What happens when you have a locomotion blend system set up for basic movement, but you wish to make modifications to specific bones? For example, the character is walking (via blended anims), aiming his gun (via IK or an anim), and has just been shot in the arm (with the reaction handled in physics)? You can't simply blend the physics with the animation data, for a pretty obvious reason - the physics has to modify the result of the animations. So in your case, the aim would be done first as a layer, then what? You need to get the current walk pose from a set of blended anims, and then add in the physics? But then you can't do that, because the physics needs higher priority? But then the physics can't have a higher priority since it needs the results of the blended animations?
Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as goal.
However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist. The physics-based influence you've described above is, say, "passive" instead. I think you're right when saying that it cannot be implemented as an animation when using an animation system like mine (at least not without modification). Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.
But the weakness you've pointed out is perhaps a flawed integration of the systems. The morpheme solution seems to allow an intertwining (hopefully the correct word) where I have a strict separation. Intertwining may obviously allow better control by the designer. That is certainly worth thinking about ...
### #45 Gasim Members - Reputation: 207
Posted 18 August 2008 - 03:37 PM
Quote:
Original post by haegarr: Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as goal.
I want to create an animation system and a physics ragdoll, but I've read that a ragdoll needs to be FK. Suppose I use an animation system like:
class CKeyframe {
public:
    float time;
    float value;
};

enum AnimationTrackType {
    TRANSLATEX = 0,
    TRANSLATEY,
    TRANSLATEZ,
    ROTATEX,
    ROTATEY,
    ROTATEZ,
    // And lots of other stuff
};

class CAnimationTrack {
public:
    int trackID; // To find a track by ID
    int boneID;
    AnimationTrackType trackType;
    std::vector<CKeyframe *> keys;
};

class CAnimation {
public:
    std::vector<CAnimationTrack *> tracks;
    char animation_name[32];
    float startTime, endTime;
    CAnimation();
    ~CAnimation();
    void Animate(CSkeleton *skel);
    void AddTrack(int trackId, AnimationTrackType trackType);
    void AddKeyframeToTrack(int trackId, float time, float value);
    void SetName(char name[32]);
    CAnimationTrack *FindTrackById(int trackID);
    CKeyframe *FindKeyframeInTrack(int trackID, float time);
    static void Blend(CKeyframe *outKeyframe, CKeyframe *keyPrev, CKeyframe *keyCurrent, float blend_factor);
};
Can I use FK? And can I create a ragdoll? If yes, I'll write my animation system.
If no, how do I need to write it?
Thanks,
Kasya
[Edited by - Kasya on August 18, 2008 10:37:15 PM]
### #46 haegarr Crossbones+ - Reputation: 7127
Posted 18 August 2008 - 08:21 PM
All structures with local co-ordinate frames are inherently FK enabled. AFAIK skeletons are always implemented with local frames. Even if more than a single bone chain can be used, they are still bone _chains_. I.e. a joint's position and orientation denote the axes of a local frame w.r.t. its parental bone. So, as long as you don't set all joint parameters skeleton-globally or even world-globally from the animations, you automatically have FK working.
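To make that concrete, here is a minimal FK sketch with simplified stand-in types (Mat4/Joint are illustrative, not the classes discussed in this thread); it assumes parents are stored before their children:

#include <cstddef>
#include <vector>

struct Mat4 { float m[16]; }; // row-major 4x4 transform

// naive matrix concatenation: r = a * b
static Mat4 Mul(const Mat4 &a, const Mat4 &b) {
    Mat4 r = {};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

struct Joint {
    int parent;      // index of the parental joint, -1 for the root
    Mat4 local;      // frame w.r.t. the parental bone (what animations write)
    Mat4 modelSpace; // accumulated local-to-model transform (the FK result)
};

// Forward kinematics falls out of the local frames: one pass concatenates
// each joint's local frame onto its parent's accumulated frame.
static void ComputeFK(std::vector<Joint> &joints) {
    for (std::size_t i = 0; i < joints.size(); ++i) {
        if (joints[i].parent < 0)
            joints[i].modelSpace = joints[i].local;
        else
            joints[i].modelSpace = Mul(joints[joints[i].parent].modelSpace, joints[i].local);
    }
}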
### #47 Gasim Members - Reputation: 207
Posted 18 August 2008 - 09:16 PM
What does that mean? If it's in local co-ordinates, how can I transform it? If I transform it, that means I'm making world co-ordinates. That means I first need to use FK and then transform.
I forgot to ask one question: What is FK for? For physics or something else?
Thanks,
Kasya
### #48 haegarr Crossbones+ - Reputation: 7127
Posted 18 August 2008 - 09:31 PM
Any co-ordinate is meaningful if and only if you also know the co-ordinate system (or frame) to which the co-ordinate is related. Meshes are normally defined w.r.t. a local frame, i.e. their vertex positions and normals and so on are to be interpreted in that frame.
When you transform the co-ordinates but still interpret the result relative to the same frame, you've actually changed the positions and normals and so on. When you look at the mesh in the world, you'll see it translated or rotated or whatever.
On the other hand, if you interpret the frame as a transformation, apply it to the mesh, and (important!) interpret the result w.r.t. the _parental_ frame, then you've changed the co-ordinates but in the world the mesh is at the same location and shows also the same orientation!
Hence you have to distinguish for what purpose you apply a transformation: Do you want to re-locate and/or re-orientate the mesh, or do you want to change the reference system only? A local-to-global transformation is for changing the reference system only, nothing else.
FK means that co-ordinate frames are related together, and transforming a parental frame has an effect on the child frames, but changing a child frame has no effect on the parental frame. IK, on the other hand, means that changing a child frame _has_ an effect on the parental frame.
### #49 RobTheBloke Crossbones+ - Reputation: 2536
Posted 18 August 2008 - 11:07 PM
Quote:
So you got me curious about morpheme, but unfortunately the information available at naturalmotion.com seems somewhat sparse. I'll have to try to find free tutorials or something similar ...
There isn't much info about it on the internet. Unlike endorphin, there's no learning edition, so no tutorials available.
Quote:
Original post by haegarr: Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as goal.
IK gives you a rigid fixed solution - by blending that result with animation you can improve the visual look - i.e. overlay small motions caused by breathing etc.
Quote:
Original post by haegarr: However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist.
Unless the IK solution is running in response to game input - i.e. aiming at a moving point. The only way the animator can foresee that is to have tools that interact with the engine - more or less what morpheme is designed to do really.
Quote:
Original post by haegarr: Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.
yup - but we need more fidelity and control than that would allow.
Quote:
The morpheme solution seems to allow an intertwining (hopefully the correct word) where I have a strict separation.
More or less what it was designed to do. Most of the stuff we do is AI driven animation, utilising physics and other bits and bobs (you can probably find some euphoria demos around relating to GTA and Star Wars) - it just so happens we need a good way to combine all of those things - hence morpheme.
Quote:
Can I use FK? And can I create a ragdoll? If yes, I'll write my animation system. If no, how do I need to write it?
That's a pretty good starting point. I'd probably suggest trying 2 small simple demos first though - the anim system, and a basic physics ragdoll. It's then a bit easier to see how to merge the two solutions into one all-singing, all-dancing system.
Quote:
What does that mean? If it's in local co-ordinates, how can I transform it? If I transform it, that means I'm making world co-ordinates. That means I first need to use FK and then transform.
A bit old, but it might give you a few ideas. A lot of the code can be improved fairly drastically, but I'll leave that as an exercise for the reader ;)
Quote:
FK means that co-ordinate frames are related together, and transforming a parental frame has an effect on the child frames, but changing a child frame has no effect on the parental frame. IK, on the other hand, means that changing a child frame _has_ an effect on the parental frame.
Not quite. FK means you transform a series of parented transforms until you get an end pose. IK means you start with an end pose (the effector), and use CCD or the Jacobian to determine the positions/orientations of those bones. FK is just a standard hierarchical animation system.
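For reference, here is a minimal 2D CCD sketch (a planar chain with angles relative to the parent bone); it is only meant to show the tip-to-root iteration mentioned above, not a production solver:

#include <cmath>
#include <cstddef>
#include <vector>

struct Bone2D { float angle; float length; }; // angle is relative to the parent

// Compute joint positions; joint i sits at (x[i], y[i]), the effector at index bones.size().
static void ForwardPositions(const std::vector<Bone2D> &bones,
                             std::vector<float> &x, std::vector<float> &y) {
    x.assign(bones.size() + 1, 0.0f);
    y.assign(bones.size() + 1, 0.0f);
    float a = 0.0f;
    for (std::size_t i = 0; i < bones.size(); ++i) {
        a += bones[i].angle;
        x[i + 1] = x[i] + bones[i].length * std::cos(a);
        y[i + 1] = y[i] + bones[i].length * std::sin(a);
    }
}

// One CCD pass: from the tip towards the root, rotate each bone so that the
// joint->effector direction lines up with the joint->target direction.
static void CCDStep(std::vector<Bone2D> &bones, float tx, float ty) {
    std::vector<float> x, y;
    for (std::size_t k = bones.size(); k-- > 0; ) {
        ForwardPositions(bones, x, y);
        float ex = x.back() - x[k], ey = y.back() - y[k]; // joint -> effector
        float gx = tx - x[k],       gy = ty - y[k];       // joint -> target
        bones[k].angle += std::atan2(gy, gx) - std::atan2(ey, ex);
    }
}

// In practice one repeats CCDStep until the effector is close enough to the target.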
### #50 haegarr Crossbones+ - Reputation: 7127
Posted 18 August 2008 - 11:40 PM
Quote:
Original post by RobTheBloke
Quote:
Original post by haegarr: Well, IK by itself is no problem, since animations need not be key-framed (I've mentioned it somewhere in one of the long posts above). Animations are all kinds of time-based modifiers. An IK based animation simply interpolates with the IK pose as goal.
IK gives you a rigid fixed solution - by blending that result with animation you can improve the visual look - i.e. overlay small motions caused by breathing etc.
Seconded, and since an IK controlled animation is an animation, it can be blended with others.
Quote:
Original post by RobTheBloke
Quote:
Original post by haegarr: However, IK is no problem since it is an "active" animation, and as such can be foreseen by the artist.
Unless the IK solution is running in response to game input - i.e. aiming at a moving point. The only way the animator can forsee that is to have a tools that interact with the engine - more or less what morpheme is designed to do really.
That the artist cannot _precompute_ the animation is clear. Otherwise one could use key-framing instead of IK driven animation. However, he can foresee the need for IK driven aiming, and hence integrate scripts/nodes/tracks or however that stuff is implemented, so the system has something available that can be used for blending at all.
Quote:
Original post by RobTheBloke
Quote:
Original post by haegarr: Nevertheless, a physics system, being separate from animations as defined so far, can have an effect on the already computed pose, and can overwrite the pose partly or totally.
yup - but we need more fidelity and control than that would allow.
I understand that. See the remaining text in the original post.
### #51 Gasim Members - Reputation: 207
Posted 22 August 2008 - 07:21 PM
Hello,
That's my latest updated code:
Animation components:
class CKeyframe {
public:
float time;
float value;
};
enum CAnimationTrackType {
TRANSLATEX = 0,
TRANSLATEY,
TRANSLATEZ,
ROTATEX,
ROTATEY,
ROTATEZ
};
class CAnimationTrack {
public:
unsigned int trackID;
unsigned int nodeID;
CAnimationTrackType trackType;
std::vector<CKeyframe *> keys;
};
Animation Class:
class CAnimation {
protected:
std::vector<CAnimationTrack *> m_Tracks;
float startTime, endTime;
char name[32];
public:
CAnimation();
~CAnimation();
CAnimationTrack * FindTrackByID(unsigned int trackID);
CKeyframe * FindKeyframeByTime(unsigned int trackID, float keyframe_time);
void AddTrack(unsigned int trackID, unsigned int nodeID, CAnimationTrackType trackType);
void AddKeyframe(unsigned int trackID, float value, float time);
void SetTime(float start_time, float end_time);
CAnimationTrack * operator[](unsigned int index);
CAnimationTrack *AnimationTrack(unsigned int index);
unsigned int NumAnimationTracks();
};
CBlend.cpp
void CBlend::Blend(CVector3 &vPos, CQuat &qRotate, CKeyframe *curKf, CKeyframe *prevKf, float blend_factor, CAnimationTrackType trackType) {
CVector3 vTmpPrev, vTmpCur;
CQuat qTmpPrev, qTmpCur;
if(trackType == TRANSLATEX) {
vTmpPrev.x = prevKf->value;
vTmpCur.x = curKf->value;
}
if(trackType == TRANSLATEY) {
vTmpPrev.y = prevKf->value;
vTmpCur.y = curKf->value;
}
if(trackType == TRANSLATEZ) {
vTmpPrev.z = prevKf->value;
vTmpCur.z = curKf->value;
}
if(trackType == ROTATEZ) {
qTmpPrev.SetAxis(prevKf->value, 1, 0, 0);
qTmpCur.SetAxis(curKf->value, 1, 0, 0);
}
if(trackType == ROTATEZ) {
qTmpPrev.SetAxis(prevKf->value, 0, 1, 0);
qTmpCur.SetAxis(curKf->value, 0, 1, 0);
}
if(trackType == ROTATEZ) {
qTmpPrev.SetAxis(prevKf->value, 0, 0, 1);
qTmpCur.SetAxis(curKf->value, 0, 0, 1);
}
vPos = vTmpCur * (1.0f - blend_factor) + vTmpPrev * blend_factor;
qRotate.Slerp(qTmpCur, qTmpPrev, blend_factor);
}
void CBlend::Blend(CAnimation *pFinalAnim, CAnimation *pCurAnimation, CAnimation *pNextAnimation, float blend_factor) {
CVector3 vTmp;
CQuat qTmp;
for(unsigned int i = 0; i < pCurAnimation->NumAnimationTracks(); i++) {
for(unsigned int n = 0; n < pNextAnimation->NumAnimationTracks(); n++) {
if(pCurAnimation->AnimationTrack(i)->nodeID == pNextAnimation->AnimationTrack(n)->nodeID && pCurAnimation->AnimationTrack(i)->trackType == pNextAnimation->AnimationTrack(n)->trackType) {
for(unsigned int j = 0; j < pCurAnimation->AnimationTrack(i)->keys.size(); j++) {
for(unsigned int k = 0; k < pNextAnimation->AnimationTrack(n)->keys.size(); k++) {
Blend(vTmp, qTmp, pCurAnimation->AnimationTrack(i)->keys[j], pNextAnimation->AnimationTrack(n)->keys[k], blend_factor, pNextAnimation->AnimationTrack(n)->trackType);
for(unsigned int v = 0; v < pFinalAnim->NumAnimationTracks(); v++) {
if(pFinalAnim->AnimationTrack(v)->nodeID == pCurAnimation->AnimationTrack(i)->nodeID && pFinalAnim->AnimationTrack(v)->trackType == pCurAnimation->AnimationTrack(i)->trackType) {
//What to write here??
}
}
}
}
}
}
}
}
Animate Skeleton:
void CSkeletalAnimation::Animate(CSkeleton *pFinalSkel) {
float time = g_Timer.GetSeconds();
//Haven't written loop yet. :)
for(unsigned int i = 0; i < m_Tracks.size(); i++) {
unsigned int uiFrame = 0;
CBone * bone = pFinalSkel->FindBone(m_Tracks[i]->nodeID);
CAnimationTrackType trackType = m_Tracks[i]->trackType;
CVector3 vTmp;
CQuat qTmp;
while(uiFrame < m_Tracks[i]->keys.size() && m_Tracks[i]->keys[uiFrame]->time < time) uiFrame++;
if(uiFrame == 0) {
if(trackType == TRANSLATEX) {
vTmp.x = m_Tracks[i]->keys[0]->value;
}
if(trackType == TRANSLATEY) {
vTmp.y = m_Tracks[i]->keys[0]->value;
}
if(trackType == TRANSLATEZ) {
vTmp.z = m_Tracks[i]->keys[0]->value;
}
if(trackType == ROTATEX) {
qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 1, 0, 0);
}
if(trackType == ROTATEY) {
qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 0, 1, 0);
}
if(trackType == ROTATEZ) {
qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 0, 0, 1);
}
}
if(uiFrame == m_Tracks[i]->keys.size()) {
if(trackType == TRANSLATEX) {
vTmp.x = m_Tracks[i]->keys[uiFrame-1]->value;
}
if(trackType == TRANSLATEY) {
vTmp.y = m_Tracks[i]->keys[uiFrame-1]->value;
}
if(trackType == TRANSLATEZ) {
vTmp.z = m_Tracks[i]->keys[uiFrame-1]->value;
}
if(trackType == ROTATEX) {
qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 1, 0, 0);
}
if(trackType == ROTATEY) {
qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 0, 1, 0);
}
if(trackType == ROTATEZ) {
qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 0, 0, 1);
}
}
else {
CKeyframe *prevKf = m_Tracks[i]->keys[uiFrame-1];
CKeyframe *curKf = m_Tracks[i]->keys[uiFrame];
float delta = curKf->time - prevKf->time;
float blend = (time - prevKf->time) / delta;
CBlend::Blend(vTmp, qTmp, curKf, prevKf, blend, trackType);
}
bone->qRotate = bone->qRotate * qTmp;
bone->vPos = bone->vPos * vTmp;
}
}
Is that code right?? And please answer my question inside the CBlend::Blend function.
Thanks,
Kasya
P.S. The timer is temporary, I'll change it anyway. :)
### #52 haegarr Crossbones+ - Reputation: 7127
Posted 22 August 2008 - 09:59 PM
IMHO there are several problems within the code.
(1) You use ROTATEZ 3 times but never ROTATEX and ROTATEY in the CBlend::Blend for 2 key-frames.
(2) The implementation of the same CBlend::Blend as above is very inefficient. My main grumble is that the trackType is exactly 1 of the possible values, but you always process all of them. Consider at least choosing a structure like
// either a rotation ...
if( trackType >= ROTATEX ) {
    CQuat qTmpPrev, qTmpCur;
    if( trackType == ROTATEX ) {
        qTmpPrev.SetAxis(prevKf->value, 1, 0, 0);
        qTmpCur.SetAxis(curKf->value, 1, 0, 0);
    } else if( trackType == ROTATEY ) {
        // insert appropriate code here
    } else {
        // insert appropriate code here
    }
    qRotate.Slerp(qTmpCur, qTmpPrev, blend_factor);
}
// ... or else a translation ...
else {
    // insert appropriate code here, similarly to the part above
}
As you can see, this snippet tries to avoid senseless computations. You can use a switch statement for the inner decisions, of course. (Yes, you can say that this is a kind of premature optimization, if you like. But choosing another implementation here has an impact on the invoking code, so changing it later can IMO introduce more problems than expected.)
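For instance, the inner decision as a switch, reusing the CQuat/CKeyframe declarations already posted in this thread (a fragment, not a complete function):

switch( trackType ) {
    case ROTATEX:
        qTmpPrev.SetAxis(prevKf->value, 1, 0, 0);
        qTmpCur.SetAxis(curKf->value, 1, 0, 0);
        break;
    case ROTATEY:
        qTmpPrev.SetAxis(prevKf->value, 0, 1, 0);
        qTmpCur.SetAxis(curKf->value, 0, 1, 0);
        break;
    case ROTATEZ:
        qTmpPrev.SetAxis(prevKf->value, 0, 0, 1);
        qTmpCur.SetAxis(curKf->value, 0, 0, 1);
        break;
    default:
        // the translation cases are handled analogously with CVector3
        break;
}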
(3) I would not implement CBlend::Blend of 2 animations the way you've done. Your way is appropriate for blending exactly 2 animations, but not necessarily for blending more than 2. Instead of blending animations pairwise, I would use the skeleton instance as an accumulator for the blending. I.e. preset the skeleton instance when entering the blending group, process each animation of the group without knowledge of the other animations, and post-process the skeleton instance when exiting the group.
I'm not sure what kind of animation system you're trying to implement, so I cannot really evaluate the code at the higher logical levels (strictly speaking this is already relevant for point (3) above). Consider telling us exactly what your goals are if we should discuss them.
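A rough sketch of that accumulator structure, with hypothetical BeginBlend/EndBlend hooks on the skeleton (these names are invented for the sketch; the thread's classes don't define them, and the Animate signature follows the suggestion made later in this thread):

// preset the skeleton when entering the blending group, let every animation
// contribute independently, post-process when leaving the group
void BlendGroup(CSkeleton *skel, std::vector<CAnimation *> &anims, float currentTime) {
    skel->BeginBlend();                        // hypothetical: clear accumulated pose/weights
    for (std::size_t i = 0; i < anims.size(); ++i)
        anims[i]->Animate(skel, currentTime);  // each animation knows nothing of the others
    skel->EndBlend();                          // hypothetical: normalize, fall back to bind pose
}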
### #53 Gasim Members - Reputation: 207
Posted 22 August 2008 - 10:26 PM
Hello,
Is there a problem inside CSkeletalAnimation::Animate(CSkeleton *pFinalSkel), except the time?
And that's my new blending:
void CBlend::Blend(CVector3 &vPos, CQuat &qRotate, CKeyframe *curKf, CKeyframe *prevKf, float blend_factor, CAnimationTrackType trackType) {
if(trackType >= TRANSLATEX && trackType <= TRANSLATEZ) {
CVector3 vTmpPrev, vTmpCur;
if(trackType == TRANSLATEX) {
vTmpPrev.x = prevKf->value;
vTmpCur.x = curKf->value;
}
else if(trackType == TRANSLATEY) {
vTmpPrev.y = prevKf->value;
vTmpCur.y = curKf->value;
}
else if(trackType == TRANSLATEZ) {
vTmpPrev.z = prevKf->value;
vTmpCur.z = curKf->value;
}
vPos = vTmpCur * (1.0f - blend_factor) + vTmpPrev * blend_factor;
}
else if(trackType >= ROTATEX && trackType <= ROTATEZ) {
CQuat qTmpPrev, qTmpCur;
if(trackType == ROTATEX) {
qTmpPrev.SetAxis(prevKf->value, 1, 0, 0);
qTmpCur.SetAxis(curKf->value, 1, 0, 0);
}
else if(trackType == ROTATEY) {
qTmpPrev.SetAxis(prevKf->value, 0, 1, 0);
qTmpCur.SetAxis(curKf->value, 0, 1, 0);
}
else if(trackType == ROTATEZ) {
qTmpPrev.SetAxis(prevKf->value, 0, 0, 1);
qTmpCur.SetAxis(curKf->value, 0, 0, 1);
}
qRotate.Nlerp(qTmpCur, qTmpPrev, blend_factor);
}
}
void CBlend::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blend_factor) {
for(unsigned int i = 0; i < pLastSkel->NumBones(); i++) {
pFinalSkel->bones[i]->vPos = pLastSkel->bones[i]->vPos * ( 1.0 - blend_factor) + pNextSkel->bones[i]->vPos * blend_factor;
pFinalSkel->bones[i]->qRotate.Nlerp(pLastSkel->bones[i]->qRotate, pNextSkel->bones[i]->qRotate, blend_factor);
}
}
Thanks,
Kasya
### #54 haegarr Crossbones+ - Reputation: 7127
Posted 22 August 2008 - 11:44 PM
Okay; although the implementation of CBlend::Blend for 2 key-frames is still far from optimal, it is good enough until the animation system works as expected. I only hint here at the potential to optimize it and urge you to come back to this topic at an appropriate time.
In CSkeletalAnimation::Animate there is IMO an "else" missing. The structure should probably be
if(uiFrame == 0) {
    ...
}
else if(uiFrame == m_Tracks[i]->keys.size()) { // <-- notice the else at the beginning
    ...
}
else {
    ...
}
Next, vTmp is a CVector3, and I assume vPos to be one, too. You are multiplying the two in CSkeletalAnimation::Animate
bone->vPos = bone->vPos * vTmp;
but that isn't the correct operation. If you don't want to do animation blending, then addition would be the correct operation. If, on the other hand, you want to do animation blending, then blending would be the correct operation. In the latter case, bone->qRotate must also be blended, of course.
I can't tell you about a working structure if you don't tell us what you want to do. So please:
(a) Do you want to integrate animation blending?
(b) Are the tracks of a particular animation unique w.r.t. the affected bone attribute, or else is blending already necessary at this level? (I would suggest the former.)
(c) What about layering? What kind of layer blending do you prefer, if any?
### #55 Gasim Members - Reputation: 207
Posted 23 August 2008 - 12:07 AM
Hello,
(a) I want to use animation blending.
(b) The tracks are animating bones.
(c) I have no layering and no animation groups. I want to do that too, but after blending.
I changed lots of things in the CSkeletalAnimation::Animate(CSkeleton *pFinalSkel) function:
void CSkeletalAnimation::Animate(CSkeleton *pFinalSkel) {
float time = g_Timer.GetSeconds() * 0.01f;
static float lastTime = startTime;
time += lastTime;
lastTime = time;
sprintf(t, "%f", time);
SetWindowText(GetHWND(), t);
if(time >= endTime) {
if(loop) {
lastTime = startTime;
time = startTime;
}
}
for(unsigned int i = 0; i < m_Tracks.size(); i++) {
unsigned int uiFrame = 0;
CBone * bone = pFinalSkel->FindBone(m_Tracks[i]->nodeID);
CAnimationTrackType trackType = m_Tracks[i]->trackType;
CVector3 vTmp;
CQuat qTmp;
while(uiFrame < m_Tracks[i]->keys.size() && m_Tracks[i]->keys[uiFrame]->time < time) uiFrame++;
if(uiFrame == 0) {
if(trackType >= TRANSLATEX && trackType <= TRANSLATEZ) {
if(trackType == TRANSLATEX) {
vTmp.x = m_Tracks[i]->keys[0]->value;
}
else if(trackType == TRANSLATEY) {
vTmp.y = m_Tracks[i]->keys[0]->value;
}
else if(trackType == TRANSLATEZ) {
vTmp.z = m_Tracks[i]->keys[0]->value;
}
}
else if(trackType >= ROTATEX && trackType <= ROTATEZ) {
if(trackType == ROTATEX) {
qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 1, 0, 0);
}
else if(trackType == ROTATEY) {
qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 0, 1, 0);
}
else if(trackType == ROTATEZ) {
qTmp.SetAxis(m_Tracks[i]->keys[0]->value, 0, 0, 1);
}
}
}
else if(uiFrame == m_Tracks[i]->keys.size()) {
if(trackType >= TRANSLATEX && trackType <= TRANSLATEZ) {
if(trackType == TRANSLATEX) {
vTmp.x = m_Tracks[i]->keys[uiFrame-1]->value;
}
else if(trackType == TRANSLATEY) {
vTmp.y = m_Tracks[i]->keys[uiFrame-1]->value;
}
else if(trackType == TRANSLATEZ) {
vTmp.z = m_Tracks[i]->keys[uiFrame-1]->value;
}
}
else if(trackType >= ROTATEX && trackType <= ROTATEZ) {
if(trackType == ROTATEX) {
qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 1, 0, 0);
}
else if(trackType == ROTATEY) {
qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 0, 1, 0);
}
else if(trackType == ROTATEZ) {
qTmp.SetAxis(m_Tracks[i]->keys[uiFrame-1]->value, 0, 0, 1);
}
}
}
else {
CKeyframe *prevKf = m_Tracks[i]->keys[uiFrame-1];
CKeyframe *curKf = m_Tracks[i]->keys[uiFrame];
float delta = curKf->time - prevKf->time;
float blend = (time - prevKf->time) / delta;
CBlend::Blend(vTmp, qTmp, curKf, prevKf, blend, trackType);
}
bone->qRotate = bone->qRotate * qTmp;
bone->vPos += vTmp;
}
}
I only have a time problem. But I'm getting rid of it.
Thanks,
Kasya
### #56 haegarr Crossbones+ - Reputation: 7127
Posted 23 August 2008 - 01:08 AM
Quote:
Original post by Kasya: (a) I want to use animation blending (b) The tracks are animating bones (c) I have no layering and no animation groups. I want to do that too, but after blending.
Okay. Answer (b) isn't complete w.r.t. my question, but I assume trackIDs are unique per animation until you explicitly contradict that. I further assume that more than 2 animations should be able to be blended.
Coming to the timing, I suggest the following:
How is g_Timer.GetSeconds() advanced? Is it a timer provided by the OS? As mentioned earlier, the time used should be frozen for the current video frame. Due to this behaviour, I would expect the current time to be handed over like in
void CSkeletalAnimation::Animate(float currentTime,CSkeleton *pFinalSkel) ...
instead of being fetched inside that routine.
Next, what is lastTime good for? Especially declaring it as static is probably a bad idea. I suggest something like this:
CSkeletalAnimation::CSkeletalAnimation( float startTime, bool loop )
: m_startTime( startTime ),
m_loop( loop ) ...
void CSkeletalAnimation::Animate( CSkeleton *pFinalSkel, float currentTime ) {
// relating current time to animation start
currentTime -= m_startTime;
// handling looping if necessary ...
if( currentTime>=m_duration && m_loop ) {
do {
currentTime -= m_duration;
m_startTime += m_duration;
} while( currentTime>=m_duration );
}
// pre-processing the skeleton
// (nothing to do yet)
// iterating tracks
for( unsigned int i=0; i<m_Tracks.size(); ++i ) {
m_Tracks[i]->contribute( pFinalSkel, currentTime );
}
// post-processing the skeleton
// (nothing to do yet)
}
What do you think about that? Notice that the animation doesn't care here that tracks are built of key-frames.
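One possible shape for that track abstraction, so the look-up of surrounding key-frames lives in the track and other track kinds become possible (class and method names are illustrative, not definitions from this thread):

class CTrack {
public:
    virtual ~CTrack() {}
    // sample the track at currentTime and write its contribution into the skeleton
    virtual void contribute(CSkeleton *skel, float currentTime) = 0;
};

class CKeyframeTrack : public CTrack {
public:
    virtual void contribute(CSkeleton *skel, float currentTime) {
        // look up the surrounding key-frames, interpolate, apply to the bone
        // (exactly the logic currently inlined in CSkeletalAnimation::Animate)
    }
};

class CIKTrack : public CTrack {
public:
    virtual void contribute(CSkeleton *skel, float currentTime) {
        // a different kind of time-based modifier, possible without touching Animate
    }
};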
### #57 Gasim Members - Reputation: 207
Posted 23 August 2008 - 07:24 AM
Hello,
What do you mean by
Quote:
Notice that the animation doesn't care here that tracks are built of key-frames.
?
Does mine care about that? Where?
Isn't m_Tracks[i]->Contribute(pFinalSkel, currentTime)'s implementation like mine, just inside a function?
And for blending more than one animation, do I need to do it like this:
CSkeleton *pFinalSkel;
CSkeleton tempSkel1, tempSkel2, tempSkel3;
CSkeleton tempFinal;
m_Animation[0].Animate(&tempSkel1, currentTime);
m_Animation[1].Animate(&tempSkel2, currentTime);
m_Animation[2].Animate(&tempSkel3, currentTime);
CBlend::Blend(&tempFinal, &tempSkel1, &tempSkel2, blend_factor);
CBlend::Blend(pFinalSkel, &tempFinal, &tempSkel3, blend_factor);
Thanks,
Kasya
### #58 haegarr Crossbones+ - Reputation: 7127
Posted 24 August 2008 - 12:25 AM
Quote:
Original post by Kasya
What do you mean by
Quote:
Notice that the animation doesn't care here that tracks are built of key-frames.
?
Does mine care about that? Where?
Your CSkeletalAnimation::Animate method executes the entire logic of the tracks. Hence it also executes the look-up for the surrounding key-frames of the track. So yes, your solution requires the tracks to be key-frame tracks. That is not an error; it is not even a real problem if you stick with key-frame tracks only. But it is far from a clean OOP solution, IMHO. And you'll get into trouble if you decide to allow other kinds of tracks. At the least, externalizing that functionality slims down CSkeletalAnimation::Animate.
Quote:
Original post by Kasya: Isn't m_Tracks[i]->Contribute(pFinalSkel, currentTime)'s implementation like mine, just inside a function?
Mostly.
Quote:
Original post by Kasya: And for blending more than one animation, do I need to do it like this: ...
Looking at your code snippet shows me a hardcoded handling of an unknown number of animations. Well, I assume you didn't really mean that but gave it as an illustrative example, did you? However, the real problems lie elsewhere.
First, look at the total weightings. What you suggest is something like
f1 := p2 * w12 + p1 * ( 1 - w12 )
f2 := p3 * w23 + f1 * ( 1 - w23 )
== p3 * w23 + ( p2 * w12 + p1 * ( 1 - w12 ) ) * ( 1 - w23 )
== p3 * w23 + p2 * w12 * ( 1 - w23 ) + p1 * ( 1 - w12 ) * ( 1 - w23 )
Do you notice the double weighting of p2 and p1? If you don't take countermeasures then animations incorporated at the beginning are more and more suppressed due to their multiple weightings.
The 2nd problem is that, if you don't have complete animations w.r.t. the set of tracks (i.e. not all skeleton attributes are animated), then you eventually need to set the untouched attributes to the state of the bind pose. So at the end of processing all animations you need to detect whether or not an attribute has been touched.
Both problems can be handled by using a sum of weights for each attribute. It has an additional advantage: it allows weights to be independent; it is sufficient if each weight is greater than 0 (an animation with weight 0 plays no role by definition, and negative weights are disallowed anyway).
Say an animation has a track that influences f. Since it is the first animation doing so, the value returned by the track is used as is, but the weight is remembered:
s = w1
f = p1
If no other animation has a track bound to the same attribute, then nothing more happens. If, on the other hand, another animation has a track bound to that attribute, then a blending happens but with an adapted weight:
s += w2
f = blend( f, p2, w2 / s )
So what happens? The adapted weight is
w2 / s == w2 / ( w1 + w2 )
so that the blending is actually computed as
p1 * ( 1 - w2 / ( w1 + w2 ) ) + p2 * w2 / ( w1 + w2 )
== p1 * w1 / ( w1 + w2 ) + p2 * w2 / ( w1 + w2 )
Assuming a 3rd animation track comes into play, then again
s += w3
f = blend( f, p3, w3 / s )
resulting in
p1 * w1 / ( w1 + w2 + w3 ) + p2 * w2 / ( w1 + w2 + w3 ) + p3 * w3 / ( w1 + w2 + w3 )
You see that the weights get normalized automatically, and each track has an influence with just the weight defined by its animation rather than a mix of weights of various animations!
Moreover, when the animations are all processed, you can inspect the sum of weights and determine whether _any_ track had influence; if not, then set the value of the attribute to that of the bind pose.
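A minimal sketch of that sum-of-weights accumulation per attribute, reusing the CVector3/CQuat types from this thread (the accumulator type itself is invented for this sketch):

struct BlendAccumulator {
    float weightSum; // s in the formulas above
    CVector3 pos;    // f for the translation attribute
    CQuat rot;       // f for the rotation attribute

    void Reset() { weightSum = 0.0f; }

    void Contribute(const CVector3 &p, const CQuat &q, float w) {
        if (w <= 0.0f) return;        // weight 0 plays no role by definition
        weightSum += w;               // s += wi
        float t = w / weightSum;      // adapted weight wi / s
        if (weightSum == w) {         // first contributor: take the value as is
            pos = p;
            rot = q;
        } else {                      // f = blend(f, pi, wi/s)
            pos = pos * (1.0f - t) + p * t;
            CQuat tmp = rot;          // avoid aliasing rot as source and destination
            rot.Nlerp(tmp, q, t);
        }
    }

    bool Touched() const { return weightSum > 0.0f; } // else: use the bind pose
};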
### #59 haegarr Crossbones+ - Reputation: 7127
Posted 24 August 2008 - 01:02 AM
Notice please how the above scheme of blending animations fits perfectly with the track->contribute thingy. That is because the scheme blends animations one-by-one, so it only needs to know a single animation at a time. In other words, the blending can be done by the animation (or its track, in this case) itself. That is an advantage from the implementation's point of view.
To do so, you need to transport the animation's weight to the track, of course, like so
// iterating tracks
for( unsigned int i=0; i<m_Tracks.size(); ++i ) {
m_Tracks[i]->contribute( pFinalSkel, currentTime, m_weight );
}
And you can see why the comments "pre-processing" and "post-processing" in one of my previous posts are meaningful ...
### #60 Gasim Members - Reputation: 207
Posted 24 August 2008 - 08:35 AM
Hello,
You said
Quote:
the blending can be done by the animation (or its track, in this case) itself
and used
m_Tracks[i]->contribute( pFinalSkel, currentTime, m_weight );
That means I need to calculate the weight. But when it's inside the key-frame interpolation like
CKeyframe *prevKf = m_Tracks[i]->keys[uiFrame-1];
CKeyframe *curKf = m_Tracks[i]->keys[uiFrame];
float delta = curKf->time - prevKf->time;
float blend = (time - prevKf->time) / delta;
CBlend::Blend(vTmp, qTmp, curKf, prevKf, blend, trackType);
I calculated a weight in float blend. How can I calculate it before the m_Tracks[i]->contribute loop?
Thanks,
Kasya
https://dev.heuristiclab.com/trac.fcgi/wiki/ReviewHeuristicLab3.3.0Application?action=diff&version=205 | # Changes between Version 204 and Version 205 of ReviewHeuristicLab3.3.0Application
Timestamp:
04/26/10 11:20:20 (12 years ago)
v204
=== Priority: HIGHEST ===
* !InvalidOperationException: Selecting a !MultiPermutationManipulator without selecting Mutators and starting the run leads to an !InvalidOperationException ("Please add at least one permutation manipulator to choose from"); the exception persists if another Mutator is chosen
=== Priority: HIGH ===
=== Priority: MEDIUM ===
* Is it really intended that runs in the run tab can be copied (through drag & drop)?
* The "Show Algorithm" button in the run tab opens a new tab containing the algorithm but no runs and no results - shouldn't it rather show the problem tab in the current algorithm?
* In the results view I would show the quality data table by default (instead of first having to stop (pause) the algorithm, select the data table and resume the algorithm). When the tour visualization is done, that would be best to show by default.
* swagner: Showing results by default is critical, as the continuous redraw of a result's value (e.g. quality chart, tour visualization) might consume a lot of runtime. Therefore the user should have to select a result manually, if he wants to look at a result.
* vdorfer: I used the default parameter settings, which have an ExhaustiveTwoOptMoveGenerator as move generator.
* abeham: Seems to run "fluently" in r3527, at least with a ch130.
* vdorfer: runs fluently, problem caused by infrequent GUI updates, see comment of mkommend
=== Priority: LOW ===
http://ptp.ipap.jp/cgi-bin/findarticle?journal=PTP&author=Y.Myozyo | ## Search Result
### Search Conditions
Years
All Years
for journal 'PTP'
author 'Y.* Myozyo' : 7
total : 7
### Search Results : 7 articles were found.
1. Progress of Theoretical Physics Vol. 45 No. 5 (1971) pp. 1694-1696 : (5)
Exotic Exchange Processes and Urbaryon Rearrangement Diagrams
Kazuo Ghoroku, Yasuo Myozyo and Hujio Noda
2. Progress of Theoretical Physics Vol. 46 No. 2 (1971) pp. 668-669 : (5)
On the Behaviour of $N$-Particle Production Cross Section
Yasuo Myozyo and Hujio Noda
3. Progress of Theoretical Physics Vol. 46 No. 3 (1971) pp. 820-834 : (5)
Low-Lying Exotic Meson Trajectories and Processes $p\bar{p} \to Y\bar{Y}$
Kazuo Ghoroku, Yasuo Myozyo and Hujio Noda
4. Progress of Theoretical Physics Vol. 51 No. 3 (1974) pp. 859-864 : (5)
Peripheral Resonance Production in the Inclusive Reaction
Yasuo Myozyo
5. Progress of Theoretical Physics Vol. 52 No. 6 (1974) pp. 1873-1882 : (5)
Two-Body Reactions at All Angles as the Exclusive Limit of Inclusive Reactions
Kisei Kinoshita and Yasuo Myozyo
6. Progress of Theoretical Physics Vol. 55 No. 1 (1976) pp. 211-228 : (5)
Whole-Region Description of Inclusive Spectra and Composite Structure of Hadrons
Kisei Kinoshita, Yukio Kinoshita and Yasuo Myozyo
7. Progress of Theoretical Physics Vol. 56 No. 6 (1976) pp. 1973-1975 : (5)
Rich Yield of High Momentum Hadrons by Two-Fireball Model in $e\bar{e}$ Annihilation
Kazuo Ghoroku, Yasuo Myozyo, Hiroyuki Nagai and Kunio Shiga
https://www.physicsforums.com/threads/when-do-total-differentials-cancel-with-partial-derivatives.849598/ | # When do total differentials cancel with partial derivatives
1. Dec 25, 2015
### sunrah
I've just done a derivation and had to use the following
$u_{b}u^{c}\partial_{c}\rho = u_{b}\frac{dx^{c}}{d\tau}\frac{\partial\rho}{\partial x^{c}} = u_{b}\frac{d\rho}{d\tau}$
We've done this cancellation a lot during my GR course, but I'm not clear exactly when/why this is possible.
EDIT: is this only true in inertial coordinates?
2. Dec 25, 2015
### Fightfish
Are you familiar with the multivariable chain rule
$$\frac{d f (x,y)}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}?$$
The 'cancellation' you performed there is simply a simplification using the chain rule (remember that you are using the Einstein summation convention).
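To spell it out: along the worldline $x^{c}(\tau)$, the summed expression is exactly the multivariable chain rule,
$$u^{c}\partial_{c}\rho = \frac{dx^{c}}{d\tau}\frac{\partial\rho}{\partial x^{c}} = \frac{d\rho}{d\tau}.$$
Since this is nothing more than the chain rule applied to a scalar field evaluated along a curve, it holds in any coordinate system, not only in inertial coordinates.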
3. Dec 25, 2015
### sunrah
Thanks, I did notice that of course after posting.
4. Dec 27, 2015
### HallsofIvy
Staff Emeritus
And, while it may be a useful "mnemonic", the derivative, ordinary or partial, is NOT a fraction and the "chain rule" does NOT involve "cancelling".
5. Dec 27, 2015
### bcrowell
Staff Emeritus
https://mathematica.stackexchange.com/questions/165801/mean-rgb-value-of-a-locator-region | # Mean RGB value of a locator region?
I am working on a color deconvolution using Ruifrok and Johnston's method (as described here).
To get a custom stain, I need to build a vector with the mean RGB values of a region in a given image. So far I have worked with the mean RGB value of multiple locators using a locator pane and auto-create, but I am wondering if there is a function I missed that would let the user select a whole region and calculate the mean RGB value for that chosen region.
I hope somebody can give me a hint.
Edit:
I am adding an excerpt of my code to concretise my problem.
Manipulate[Column@{Show[z], u = ImageValue[z, Mean[pts]]},
{{pts, {{1, 1}}}, Locator, LocatorAutoCreate -> True}]
As you can see, I am calculating a mean value from 6 locator image values. Instead I want to calculate it over a whole area: for example, the user sets 4 locators and the area between them is used for the calculation. Or is there an implemented function in Mathematica for this?
Edit 2:
Manipulate[Column@{Show[z, Graphics[{EdgeForm[Thick], Opacity[0, White],
pol = Polygon[pts]}]], u = ImageValue[z, Mean[pts]]}, {{pts, {{1, 1}}},
Locator, LocatorAutoCreate -> True}]
I continued my work and had the idea of using ImageMeasurements combined with masking, but unfortunately I cannot get a transparent white polygon to act as the mask:
{Button["Measure", ImageMeasurements[z, "Mean", Masking -> pol]]},
• Your question needs to supply more information on exactly what you are trying to accomplish. Editing your question by adding the code you are currently using would probably be the best way to inform us. – m_goldberg Feb 13 '18 at 22:04
• Thanks for the example. I tried in version 11, unfortunately it failed to create a region with the polygon. – fathah Jul 17 '18 at 15:04
z= Import["http://www.mecourse.com/landinig/software/cdeconv/tric.png"];
Manipulate[Column@{Show[z, Graphics[{Opacity[.3, Yellow],
Polygon[pts[[FindShortestTour[pts][[-1]]]]]}], ImageSize->400],
u = ImageValue[z, If[Length@pts >= 3,
RegionRegionCentroid[Polygon[pts[[FindShortestTour[pts][[-1]]]]]], Mean[pts]]]},
{{pts, {{1, 1}}}, Locator, LocatorAutoCreate -> True}]
This works as is in version 9. In versions 10+, you can use RegionCentroid in place of RegionRegionCentroid.
• That should be exactly what i was looking for. Thanks for your help. Going to have look at it right now. – blackcore Feb 14 '18 at 2:36
• @kglr why did you use RegionRegionCentroid? – user5601 Feb 14 '18 at 4:17
• @user5601, if you mean why not just RegionCentroid, I am still on version 9. – kglr Feb 14 '18 at 4:37
https://www.ademcetinkaya.com/2022/12/cyrx-cryoport-inc-common-stock.html | Outlook: CryoPort Inc. Common Stock assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy : Hold
Time series to forecast n: 27 Dec 2022 for (n+8 weeks)
Methodology : Modular Neural Network (DNN Layer)
## Abstract
We present an Artificial Neural Network (ANN) approach to predict stock market indices, particularly with respect to the forecast of their trend movements up or down. Exploiting different Neural Networks architectures, we provide numerical analysis of concrete financial time series. In particular, after a brief résumé of the existing literature on the subject, we consider the Multi-layer Perceptron (MLP), the Convolutional Neural Networks (CNN), and the Long Short-Term Memory (LSTM) recurrent neural networks techniques. (O'Connor, N. and Madden, M.G., 2005, December. A neural network approach to predicting stock exchange movements using external factors. In International Conference on Innovative Techniques and Applications of Artificial Intelligence (pp. 64-77). Springer, London.) We evaluate CryoPort Inc. Common Stock prediction models with Modular Neural Network (DNN Layer) and Stepwise Regression1,2,3,4 and conclude that the CYRX stock is predictable in the short/long term. According to price forecasts for the (n+8 weeks) period, the dominant strategy among neural network is: Hold
## Key Points
2. Is now a good time to invest?
3. Why do we need predictive models?
## CYRX Target Price Prediction Modeling Methodology
We consider CryoPort Inc. Common Stock Decision Process with Modular Neural Network (DNN Layer) where A is the set of discrete actions of CYRX stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Stepwise Regression)5,6,7 = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & \vdots & \ddots & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix}$ × R(Modular Neural Network (DNN Layer)) × S(n): → (n+8 weeks), where $R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
n: Time series to forecast
p: Price signals of CYRX stock
j: Nash equilibria (Neural Network)
k: Dominated move
a: Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## CYRX Stock Forecast (Buy or Sell) for (n+8 weeks)
Sample Set: Neural Network
Stock/Index: CYRX CryoPort Inc. Common Stock
Time series to forecast n: 27 Dec 2022 for (n+8 weeks)
According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Hold
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for CryoPort Inc. Common Stock
1. An entity must look through until it can identify the underlying pool of instruments that are creating (instead of passing through) the cash flows. This is the underlying pool of financial instruments.
2. An entity's estimate of expected credit losses on loan commitments shall be consistent with its expectations of drawdowns on that loan commitment, ie it shall consider the expected portion of the loan commitment that will be drawn down within 12 months of the reporting date when estimating 12-month expected credit losses, and the expected portion of the loan commitment that will be drawn down over the expected life of the loan commitment when estimating lifetime expected credit losses.
3. Accordingly the date of the modification shall be treated as the date of initial recognition of that financial asset when applying the impairment requirements to the modified financial asset. This typically means measuring the loss allowance at an amount equal to 12-month expected credit losses until the requirements for the recognition of lifetime expected credit losses in paragraph 5.5.3 are met. However, in some unusual circumstances following a modification that results in derecognition of the original financial asset, there may be evidence that the modified financial asset is credit-impaired at initial recognition, and thus, the financial asset should be recognised as an originated credit-impaired financial asset. This might occur, for example, in a situation in which there was a substantial modification of a distressed asset that resulted in the derecognition of the original financial asset. In such a case, it may be possible for the modification to result in a new financial asset which is credit-impaired at initial recognition.
4. An entity is not required to restate prior periods to reflect the application of these amendments. The entity may restate prior periods if, and only if, it is possible without the use of hindsight and the restated financial statements reflect all the requirements in this Standard. If an entity does not restate prior periods, the entity shall recognise any difference between the previous carrying amount and the carrying amount at the beginning of the annual reporting period that includes the date of initial application of these amendments in the opening retained earnings (or other component of equity, as appropriate) of the annual reporting period that includes the date of initial application of these amendments.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
CryoPort Inc. Common Stock assigned short-term Ba1 & long-term Ba1 estimated rating. We evaluate the prediction models Modular Neural Network (DNN Layer) with Stepwise Regression1,2,3,4 and conclude that the CYRX stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Hold
### CYRX CryoPort Inc. Common Stock Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | Ba3 | C |
| Balance Sheet | Baa2 | C |
| Leverage Ratios | Baa2 | B1 |
| Cash Flow | B2 | Baa2 |
| Rates of Return and Profitability | B1 | Caa2 |
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 75 out of 100 with 868 signals.
## References
1. S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45(11): 2471–2482, 2009
2. J. Baxter and P. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Re- search, 15:319–350, 2001.
3. D. S. Bernstein, S. Zilberstein, and N. Immerman. The complexity of decentralized control of Markov Decision Processes. In UAI '00: Proceedings of the 16th Conference in Uncertainty in Artificial Intelligence, Stanford University, Stanford, California, USA, June 30 - July 3, 2000, pages 32–37, 2000.
4. E. van der Pol and F. A. Oliehoek. Coordinated deep reinforcement learners for traffic light control. NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems, 2016.
5. Athey S, Bayati M, Doudchenko N, Imbens G, Khosravi K. 2017a. Matrix completion methods for causal panel data models. arXiv:1710.10251 [math.ST]
6. Abadie A, Imbens GW. 2011. Bias-corrected matching estimators for average treatment effects. J. Bus. Econ. Stat. 29:1–11
7. L. Busoniu, R. Babuska, and B. D. Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions of Systems, Man, and Cybernetics Part C: Applications and Reviews, 38(2), 2008.
## Frequently Asked Questions
Q: What is the prediction methodology for CYRX stock?
A: CYRX stock prediction methodology: We evaluate the prediction models Modular Neural Network (DNN Layer) and Stepwise Regression
Q: Is CYRX stock a buy or sell?
A: The dominant strategy among neural network is to Hold CYRX Stock.
Q: Is CryoPort Inc. Common Stock stock a good investment?
A: The consensus rating for CryoPort Inc. Common Stock is Hold and assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of CYRX stock?
A: The consensus rating for CYRX is Hold.
Q: What is the prediction period for CYRX stock?
A: The prediction period for CYRX is (n+8 weeks).
http://mathinsight.org/directional_derivative_gradient_introduction | # Math Insight
### An introduction to the directional derivative and the gradient
#### The directional derivative
Let the function $f(x,y)$ be the height of a mountain range at each point $\vc{x} = (x,y)$. If you stand at some point $\vc{x}=\vc{a}$, the slope of the ground in front of you will depend on the direction you are facing. It might slope steeply up in one direction, be relatively flat in another direction, and slope steeply down in yet another direction.
The partial derivatives of $f$ will give the slope $\pdiff{f}{x}$ in the positive $x$ direction and the slope $\pdiff{f}{y}$ in the positive $y$ direction. We can generalize the partial derivatives to calculate the slope in any direction. The result is called the directional derivative.
The first step in taking a directional derivative, is to specify the direction. One way to specify a direction is with a vector $\vc{u}=(u_1,u_2)$ that points in the direction in which we want to compute the slope. For simplicity, we will insist that $\vc{u}$ is a unit vector. We write the directional derivative of $f$ in the direction $\vc{u}$ at the point $\vc{a}$ as $D_{\vc{u}}f(\vc{a})$. We could define it with a limit definition just as an ordinary derivative or a partial derivative \begin{align*} D_{\vc{u}}f(\vc{a}) = \lim_{h \to 0} \frac{f(\vc{a}+h\vc{u}) - f(\vc{a})}{h}. \end{align*} However, it turns out that for differentiable $f(x,y)$, we won't need to worry about that definition.
The concept of the directional derivative is simple; $D_{\vc{u}}f(\vc{a})$ is the slope of $f(x,y)$ when standing at the point $\vc{a}$ and facing the direction given by $\vc{u}$. If $x$ and $y$ were given in meters, then $D_{\vc{u}}\vc{f}(\vc{a})$ would be the change in height per meter as you moved in the direction given by $\vc{u}$ when you are at the point $\vc{a}$.
Note that $D_{\vc{u}}f(\vc{a})$ is a number, not a matrix. In fact, the directional derivative is the same as a partial derivative if $\vc{u}$ points in the positive $x$ or positive $y$ direction. For example, if $\vc{u}=(1,0)$, then $\displaystyle D_{\vc{u}}f(\vc{a}) = \pdiff{f}{x}(\vc{a})$. Similarly if $\vc{u}=(0,1)$, then $\displaystyle D_{\vc{u}}f(\vc{a}) = \pdiff{f}{y}(\vc{a})$.
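In fact, for differentiable $f$, one can compute the directional derivative from the partial derivatives without the limit (a standard fact, stated here for concreteness):
\begin{align*}
D_{\vc{u}}f(\vc{a}) = \pdiff{f}{x}(\vc{a})\, u_1 + \pdiff{f}{y}(\vc{a})\, u_2.
\end{align*}
For example, if $f(x,y)=x^2y$, $\vc{a}=(1,2)$, and $\vc{u}=(3/5,4/5)$, then
\begin{align*}
D_{\vc{u}}f(\vc{a}) = 2xy\Big|_{(1,2)}\cdot\frac{3}{5} + x^2\Big|_{(1,2)}\cdot\frac{4}{5} = 4\cdot\frac{3}{5} + 1\cdot\frac{4}{5} = \frac{16}{5}.
\end{align*}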
In the following applet, the height $f(x,y)$ of a mountain range is shown both as a surface plot (left) and as a level curve plot (right). The interpretation of the two-dimensional point $\vc{a}$ and two-dimensional direction vector $\vc{u}$ defining the directional derivative $D_{\vc{u}}f(\vc{a})$ may be clearer in the two-dimensional level curve plot, so we focus on that panel first. You can recognize steep mountain peaks in the level curve plot by the closely spaced circular level curves. In this applet, you can move the point $\vc{a}$ around, change the direction $\vc{u}$ and observe how the directional derivative $D_{\vc{u}}f(\vc{a})$ changes. If you set $\vc{u}$ to point straight east ($\theta=0$ in the applet), then $\vc{u}$ points in the positive $x$ direction ($\vc{u}=(1,0)$) so that $\displaystyle D_{\vc{u}}f(\vc{a}) = \pdiff{f}{x}(\vc{a})$. Similarly, when $\vc{u}$ points straight north ($\theta=\pi/2$), then $\vc{u}$ points in the positive $y$ direction ($\vc{u}=(0,1)$) so that $\displaystyle D_{\vc{u}}f(\vc{a}) = \pdiff{f}{y}(\vc{a})$.
Directional derivative on a mountain. The height of a mountain range described by a function $f(x,y)$ is shown as surface plot in three-dimensions (left) and a two-dimensional level curve plot (right). In each panel, a red point can be moved by the mouse to change where the directional derivative is evaluated. The directional derivative is computed in the direction of the two-dimensional vector $\vc{u}$. This direction is illustrated by the light green vectors as well shown in the lower left. The direction of $\vc{u}$ is determined by the angle $\theta$ it makes with straight east (positive $x$ direction). The angle $\theta$, and hence $\vc{u}$, can be changed using the slider. The two-dimensional point $\vc{a}$ where the directional derivative is computed is illustrated by the shadow of the red point on the $xy$-plane below the surface plot and by the red point itself on the level curve plot. The value of the directional derivative $D_{\vc{u}}f(\vc{a})$ is shown at the bottom of the panel, along with the value of $\vc{a}$ itself. The value of $D_{\vc{u}}f(\vc{a})$ is the slope of the dark green vector to its right. This dark green vector is also shown emanating from the red point on the surface plot, where it is tangent to the surface, indicating that this slope is indeed the slope of the surface in the direction given by $\vc{u}$. The height of the surface $f(\vc{a})$ is illustrated by the bar in the lower right.
If you make $\vc{u}$ point in a direction parallel to the level curve, what happens to $D_{\vc{u}} f(\vc{a})$? (Since the height is constant along a level curve, you should be able to infer what the slope in that direction should be.) Starting in any direction $\vc{u}$, what happens to $D_{\vc{u}}f(\vc{a})$ when you turn $\vc{u}$ to point in the opposite direction (i.e., add or subtract $\pi$ from $\theta$)?
In the surface plot, the steepness of the mountain may be easier to see. However, this view is a little misleading because it may obscure the fact that the point $\vc{a}$ and the direction vector $\vc{u}$ are two-dimensional objects. In the surface plot, the red dot now floats in three dimensions on the surface of the mountain. Hence, the red dot in the surface plot is not $\vc{a}$; instead, $\vc{a}$ is represented by the shadow of the red dot on the $xy$-plane. Second, the light green vector representing $\vc{u}$ is floating on the surface. A better representation of the two-dimensional direction vector $\vc{u}$ is the shadow of the light green vector on the $xy$-plane.
The surface plot, though, is useful for recognizing that the directional derivative $D_{\vc{u}}f(\vc{a})$ is the slope of the surface. The dark green vector points up or down the mountain in the direction given by $\vc{u}$. The slope of this vector (which is the same thing as the slope of the surface) is the directional derivative. This vector (rotated to point toward the right) is displayed next to the value of $D_{\vc{u}}f(\vc{a})$ to further emphasize this point.
In most cases, there is a single direction $\vc{u}$ in which the directional derivative $D_{\vc{u}}f(\vc{a})$ is the largest. This is the “uphill” direction. (In some cases, such as when you are at the top of a mountain peak or at the lowest point in a valley, this might not be true.) Let's call this direction of maximal slope $\vc{m}$. Both the direction $\vc{m}$ and the maximal directional derivative $D_{\vc{m}}f(\vc{a})$ are captured by something called the gradient of $f$ and denoted by $\nabla f(\vc{a})$. The gradient is a vector that points in the direction of $\vc{m}$ and whose magnitude is $D_{\vc{m}}f(\vc{a})$. In math, we can write this as $\displaystyle \frac{\nabla f(\vc{a})}{\| \nabla f(\vc{a})\|} = \vc{m}$ and $\| \nabla f(\vc{a})\| = D_{\vc{m}}f(\vc{a})$.
The below applet illustrates the gradient, as well as its relationship to the directional derivative. The definition of $\theta$ is different from that of the above applets. Here $\theta$ is the angle between the gradient and vector $\vc{u}$. When $\theta=0$, $\vc{u}$ points in the same direction as the gradient (and is hidden in the applet).
Gradient and directional derivative on a mountain shown as level curves. The height of a mountain range described by a function $f(x,y)$ is shown as a level curve plot. A point $\vc{a}$ (in dark red) can be moved with the mouse. The height $f(\vc{a})$ is shown on the bottom cyan slider labeled by “f”. The direction of steepest increase of $f$ is given by the gradient vector $\nabla f(\vc{a})$ (the dark blue vector is ten times longer than the actual gradient). The actual length of the gradient $\| \nabla f(\vc{a})\|$ is shown by the dark blue line on the middle (light green) slider. The light green line on that slider indicates the value of the directional derivative $D_{\vc{u}}f(\vc{a})$, where $\vc{u}$ is represented by the light green vector coming out of $\vc{a}$. The direction of $\vc{u}$ is controlled by $\theta$ (changed via top slider), where $\theta$ is the angle between $\nabla f(\vc{a})$ and $\vc{u}$.
Notice how the dark blue gradient vector always points up the mountains (in fact, the gradient is always perpendicular to the level curves). When the level curves are close together, the gradient is large. What happens to the gradient at the tops of the mountains?
Note that when $\theta=0$ (or $\theta = 2\pi$), the directional derivative $D_{\vc{u}}f(\vc{a})$ (shown by the light green line on the middle slider) and the magnitude of the gradient $\|\nabla f (\vc{a})\|$ (shown by the dark blue line on the middle slider) are identical, i.e., $D_{\vc{u}}f(\vc{a}) = \| \nabla f(\vc{a})\|$. When $\theta=\pi$, then $\vc{u}$ points in the opposite direction of the gradient, and $D_{\vc{u}}f(\vc{a}) = - \| \nabla f(\vc{a})\|$. For what values of $\theta$ is $D_{\vc{u}}f(\vc{a}) = 0$?
By moving $\vc{a}$ (the dark red point) around and changing $\theta$, I hope you can convince yourself that, for a fixed $\vc{a}$, the maximal value of $D_{\vc{u}}f(\vc{a})$ occurs when $\vc{u}$ and $\nabla f (\vc{a})$ point in the same direction (i.e., when $\theta=0$ or $\theta=2\pi$), and the minimum value occurs when $\vc{u}$ and $\nabla f (\vc{a})$ point in opposite directions (i.e., when $\theta=\pi$). Hence $D_{\vc{u}}f(\vc{a})$ always lies between $-\| \nabla f(\vc{a})\|$ and $\| \nabla f(\vc{a})\|$. It turns out that the relationship between the gradient and the directional derivative can be summarized by the equation \begin{align*} D_{\vc{u}}f(\vc{a}) &= \nabla f(\vc{a}) \cdot \vc{u}\\ &= \|\nabla f(\vc{a})\|\, \| \vc{u}\| \cos\theta\\ &= \| \nabla f(\vc{a})\| \cos\theta \end{align*} where $\theta$ is the angle between $\vc{u}$ and the gradient. (Recall that $\vc{u}$ is a unit vector, meaning that $\| \vc{u}\|=1$.)
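A small numerical sketch of this relationship (again with an arbitrary example function rather than the applet's mountain range) confirms that $\nabla f(\vc{a})\cdot\vc{u}$ equals $\|\nabla f(\vc{a})\|\cos\theta$:

```python
import numpy as np

# Gradient of the example f(x,y) = x**2 + sin(y), written out by hand
def grad_f(a):
    x, y = a
    return np.array([2 * x, np.cos(y)])

a = np.array([1.0, 0.0])
g = grad_f(a)
m = g / np.linalg.norm(g)          # unit vector in the uphill direction

for theta in (0.0, np.pi / 2, np.pi):
    c, s = np.cos(theta), np.sin(theta)
    u = np.array([[c, -s], [s, c]]) @ m       # rotate m by theta to get u
    print(theta, g @ u, np.linalg.norm(g) * np.cos(theta))  # the two columns agree
```

The output runs from $+\|\nabla f(\vc{a})\|$ at $\theta=0$ through $0$ at $\theta=\pi/2$ to $-\|\nabla f(\vc{a})\|$ at $\theta=\pi$, matching the behavior described above.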
The applet is repeated using a plot of $z=f(x,y)$, below. Although its steepness may be easier to see, recall from the above discussion that the dark red point is no longer really $\vc{a}$ and the light green vector is no longer really $\vc{u}$. Similarly, since the dark blue vector points up the mountain, it is no longer really the gradient $\nabla f(\vc{a})$, which, for a function $f(x,y)$ of two variables, is a two-dimensional vector. Despite its shortcomings, this applet may help you see how the gradient always points in the direction where the mountain rises most steeply.
Gradient and directional derivative on a mountain shown as mesh plot. The dark red point can be moved along the mountain range whose height is given by $f(x,y)$. The dark blue vector points in the direction of the gradient. The magnitude of the gradient is shown by the dark blue line on the light green slider. The light green vector points at an angle $\theta$ (changeable via the top slider) from the gradient; the directional derivative in that direction is shown by the light green line on the light green slider. The dark blue and the light green vectors are shown as three-dimensional vectors tilting up or down the mountain, and hence are not exactly the two-dimensional vectors $\nabla f$ or the $\vc{u}$ of $D_{\vc{u}}f$.
#### But what exactly is the gradient?
This page was designed to give you an intuitive feel for what the directional derivative and gradient are. But we haven't yet said what exactly the gradient is. The above formula for the directional derivative is nice, but it's not very useful if you don't know how to calculate $\nabla f$. Fortunately, the end result is fairly simple, as the gradient is just a reformulation of the matrix of partial derivatives. You can check out a simple derivation of the gradient to see why this is true.
Once you know how to calculate the gradient, you can follow these examples. | 2014-04-25 07:35:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707267642021179, "perplexity": 140.04167567342725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://homework.cpm.org/category/CCI_CT/textbook/int3/chapter/7/lesson/7.1.2/problem/7-28 | Home > INT3 > Chapter 7 > Lesson 7.1.2 > Problem7-28
7-28.
Complete the table below. Write an equation that represents this relationship. 7-28 HW eTool (Desmos) Homework Help ✎
| $x$ | $y$   |
|-----|-------|
| $1$ | $3$   |
| $2$ | $9$   |
|     | $27$  |
| $4$ |       |
|     | $243$ |
| $6$ |       |
| $7$ |       |
| $8$ |       |
Does $y$ appear to follow a certain pattern?
What is the obvious increase in $x$?
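The filled-in rows are consistent with each $y$ being three times the previous one, i.e., $y=3^x$ (a guess to be checked against the table, not a stated answer). A quick sketch of that pattern:

```python
# If y triples each time x increases by 1, the table follows y = 3**x.
for x in range(1, 9):
    print(x, 3 ** x)   # 3, 9, 27, 81, 243, 729, 2187, 6561
```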
Use the eTool below to graph the equation.
Click the link at right for the full version of the eTool: INT3 7-28 HW eTool | 2020-02-20 21:07:28 | {"extraction_info": {"found_math": true, "script_math_tex": 14, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7374154329299927, "perplexity": 2069.3422080618006}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145282.57/warc/CC-MAIN-20200220193228-20200220223228-00154.warc.gz"} |
https://tex.stackexchange.com/questions/411099/how-to-check-if-a-definition-of-a-command-is-used-more-than-once | # How to check if a definition of a command is used more than once?
Last time I asked: How to check if command is used more than once?
This time I want LaTeX to throw an error message if two or more commands have the same definition.
This is allowed
\documentclass{article}
\newcommand{\one}{Definition 1}
\newcommand{\two}{Definition 2}
\newcommand{\three}{Definition 3}
\begin{document}
\one
\two
\three
\end{document}
This should throw an error message
\documentclass{article}
\newcommand{\one}{Definition 1}
\newcommand{\two}{Definition 1} % Definition already exists!
\newcommand{\three}{Definition 3}
\begin{document}
\one
\two
\three
\end{document}
EDIT
• I'd like to have the error at point of definition.
• It's an error if the definition already exists in another command.
• Do you want the error at point of definition or at point of use? – Manuel Jan 19 '18 at 8:25
• you can easily test whether any two specified commands have the same definition, but you cannot check whether any other command has the same definition as the one you have just defined. Please clarify your question to make it clear what you want to test. – David Carlisle Jan 19 '18 at 8:39
• @Manuel: I updated my question. – Sr. Schneider Jan 19 '18 at 9:10
• @Sr.Schneider That's what I give in my answer. – Manuel Jan 19 '18 at 9:38
Use \newuniquecommand instead of \newcommand.
\usepackage{xparse}
\ExplSyntaxOn
\NewDocumentCommand \newuniquecommand { m O{0} o +m }
{
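% Scan all definitions previously recorded in the prop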
\prop_map_inline:Nn \g_schneider_commands_prop
{
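% ##2 is a stored definition body; if it equals the new body #4, stop scanning (a duplicate exists)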
\str_if_eq:nnT { ##2 } { #4 }
{
\prop_map_break:
}
} | 2020-04-03 21:37:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8763001561164856, "perplexity": 1722.9313883298644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518622.65/warc/CC-MAIN-20200403190006-20200403220006-00023.warc.gz"} |
https://en.academic.ru/dic.nsf/enwiki/4931199 | # Pointclass
In the mathematical field of descriptive set theory, a pointclass is a collection of sets of points, where a "point" is ordinarily understood to be an element of some perfect Polish space. In practice, a pointclass is usually characterized by some sort of "definability property"; for example, the collection of all open sets in some fixed collection of Polish spaces is a pointclass. (An open set may be seen as in some sense definable because it cannot be a purely arbitrary collection of points; for any point in the set, all points sufficiently close to that point must also be in the set.)
Pointclasses find application in formulating many important principles and theorems from set theory and real analysis. Strong set-theoretic principles may be stated in terms of the determinacy of various pointclasses, which in turn implies that sets in those pointclasses (or sometimes larger ones) have regularity properties such as Lebesgue measurability (and indeed universal measurability), the property of Baire, and the perfect set property.
Basic framework
In practice, descriptive set theorists often simplify matters by working in a fixed Polish space such as Baire space or sometimes Cantor space, each of which has the advantage of being zero dimensional, and indeed homeomorphic to its finite or countable powers, so that considerations of dimensionality never arise. Moschovakis provides greater generality by fixing once and for all a collection of underlying Polish spaces, including the set of all naturals, the set of all reals, Baire space, and Cantor space, and otherwise allowing the reader to throw in any desired perfect Polish space. Then he defines a "product space" to be any finite Cartesian product of these underlying spaces. Then, for example, the pointclass $\boldsymbol{\Sigma}^0_1$ of all open sets means the collection of all open subsets of one of these product spaces. This approach prevents $\boldsymbol{\Sigma}^0_1$ from being a proper class, while avoiding excessive specificity as to the particular Polish spaces being considered (given that the focus is on the fact that $\boldsymbol{\Sigma}^0_1$ is the collection of open sets, not on the spaces themselves).
Boldface pointclasses
The pointclasses in the Borel hierarchy, and in the more complex projective hierarchy, are represented by sub- and super-scripted Greek letters in boldface fonts; for example, $\boldsymbol{\Pi}^0_1$ is the pointclass of all closed sets, $\boldsymbol{\Sigma}^0_2$ is the pointclass of all $F_\sigma$ sets, $\boldsymbol{\Delta}^0_2$ is the collection of all sets that are simultaneously $F_\sigma$ and $G_\delta$, and $\boldsymbol{\Sigma}^1_1$ is the pointclass of all analytic sets.
Sets in such pointclasses need be "definable" only up to a point. For example, every singleton set in a Polish space is closed, and thus $\boldsymbol{\Pi}^0_1$. Therefore it cannot be that every $\boldsymbol{\Pi}^0_1$ set must be "more definable" than an arbitrary element of a Polish space (say, an arbitrary real number, or an arbitrary countable sequence of natural numbers). Boldface pointclasses, however, may (and in practice ordinarily do) require that sets in the class be definable relative to some real number, taken as an oracle. In that sense, membership in a boldface pointclass is a definability property, even though it is not absolute definability, but only definability with respect to a possibly undefinable real number.
Boldface pointclasses, or at least the ones ordinarily considered, are closed under Wadge reducibility; that is, given a set in the pointclass, its inverse image under a continuous function (from a product space to the space of which the given set is a subset) is also in the given pointclass. Thus a boldface pointclass is a downward-closed union of Wadge degrees.
Lightface pointclasses
The Borel and projective hierarchies have analogs in effective descriptive set theory in which the definability property is no longer relativized to an oracle, but is made absolute. For example, if one fixes some collection of basic open neighborhoods (say, in Baire space, the set of all sets of the form $\{x\in\omega^\omega \mid x \supseteq s\}$ for any fixed finite sequence $s$ of natural numbers), then the open, or $\boldsymbol{\Sigma}^0_1$, sets may be characterized as all (arbitrary) unions of basic open neighborhoods. The analogous $\Sigma^0_1$ sets, with a lightface $\Sigma$, are no longer "arbitrary" unions of such neighborhoods, but computable unions of them (that is, a set is $\Sigma^0_1$ if there is a computable set $S$ of finite sequences of naturals such that the given set is the union of all $\{x\in\omega^\omega \mid x \supseteq s\}$ for $s$ in $S$). A set is lightface $\Pi^0_1$ if it is the complement of a $\Sigma^0_1$ set. Thus each $\Sigma^0_1$ set has at least one index, which describes the computable function enumerating the basic open sets from which it is composed; in fact it will have infinitely many such indices. Similarly, an index for a $\Pi^0_1$ set $B$ describes the computable function enumerating the basic open sets in the complement of $B$.
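As an informal illustration (not part of the original article), one can think of a lightface $\Sigma^0_1$ subset of Baire space as being presented by a program that enumerates the finite sequences $s$; membership is then only semi-decidable, as in this sketch:

```python
from itertools import count, islice

def extends(x, s):
    """Does the sequence x (given by an initial segment) extend the finite sequence s?"""
    return tuple(x[:len(s)]) == tuple(s)

def semi_decide_membership(x, enumerate_S, stages=1000):
    """Semi-decide x ∈ ⋃_{s ∈ S} {y : y ⊇ s}, where enumerate_S computably lists S.

    Returns True if some enumerated s is an initial segment of x within `stages`
    steps; otherwise returns None, reflecting that a lightface Σ^0_1 set is in
    general only semi-decidable, not decidable.
    """
    for s in islice(enumerate_S(), stages):
        if extends(x, s):
            return True
    return None

# Example: the Σ^0_1 set of sequences whose first entry is even,
# given by the computable enumeration S = (0,), (2,), (4,), ...
def enumerate_S():
    for n in count():
        yield (2 * n,)

x = [4, 7, 1, 0, 3]   # only a finite initial segment of x is ever inspected
print(semi_decide_membership(x, enumerate_S))   # True, since (4,) is an initial segment
```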
A set "A" is lightface $Sigma^0_2$ if it is a union of a computable sequence of $Pi^0_1$ sets (that is, there is a computable enumeration of indices of $Pi^0_1$ sets such that "A" is the union of these sets). This relationship between lightface sets and their indices is used to extend the lightface Borel hierarchy into the transfinite, via recursive ordinals. This produces that hyperarithmetic hierarchy, which is the lightface analog of the Borel hierarchy. (The finite levels of the hyperarithmetic hierarchy are known as the arithmetical hierarchy.)
A similar treatment can be applied to the projective hierarchy. Its lightface analog is known as the analytical hierarchy.
| 2019-11-22 12:27:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865689039230347, "perplexity": 590.9112678626319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00128.warc.gz"}
https://www.nature.com/articles/s42003-022-03198-y?error=cookies_not_supported&code=27d069d3-bfb2-48c4-bab3-f2180f65e63d | ## Introduction
The intense community effort of SARS-CoV-2 sequencing has yielded a wealth of information about the mutations that have occurred in the virus since it first appeared in humans.
Understanding the evolutionary dynamics of the virus is critical for inferring its origin1,2, understanding its underlying biological mechanisms like mutagenic immune system responses3,4 and recombination5,6, predicting virus variants7,8,9, and for vaccine and drug development10,11. Recently there has been a spur of interest in analyzing substitution rates for SARS-CoV-212,13,14. Common analyses relate to explaining factors such as genes15,16,17, CpG pairs18,19, context13,20, and codon and amino acid frequency21,22. However, all previous work relied on a statistical analysis of the effect of each factor in isolation through summary statistics. If we seek to gain a deeper understanding and utility, we should consider these factors in tandem and aspire to build models that describe the entire mutation process as a function of all relevant information. In addition, while phylogenetic methods have been useful for finding and categorizing current variants, they have not been used for predicting new variants.
In this work, we employ regression in a big-data approach to identify the best statistical models for explaining the substitution rate distribution in observed sequences. We build a dataset containing 51,527 inferred substitutions for training the models based on a phylogenetic tree reconstruction from 61,835 available sequences23 (as of 8 February 2021). We use the inferred substitutions in these sequences to identify the factors affecting substitution rates at different locations in the viral genome. We use our learned model to predict which sites in the genome are likely to mutate in the future and contribute to the formation of novel variants. Our methods can help vaccine design, medical research, and other tasks in the ongoing battle against COVID-19 and future viral epidemics.
We consider two different candidate phylogenetic trees: Tree of complete SARS-CoV-2 sequences reconstructed by NCBI23 and a phylogenetic tree we reconstructed by applying the sarscov2phylo method developed by Lanfear24 on the same sequences. Here we show results on the latter; we provide results for the NCBI phylogenetic tree in the Supplementary Information.
In our models, we consider ten potential explanatory factors for explaining substitution rates based on sequence, biological function, gene location, and others. We compare 43,254 possible regression models and choose between them based on statistical goodness of fit scores.
We evaluate the ability of these models to predict new variants appearing in sequences that were added to the NCBI database between 10 February 2021 and 10 April 2021 (the test period). Our evaluation scheme does not depend on the correctness of the inferred tree or the family of regression models, thus objectively evaluating our models’ ability to rank potential variants. For example, while the overall rate of occurrence of new amino acid substitutions in the test period was 2.2% among all candidate sites, the top 100 predictions of our selected model included 19 substitutions that actually occurred in the test period, for a lift (excess precision compared to random ranking) of 8.62.
## Results
### SARS-CoV-2 substitution model
We briefly describe our statistical modeling approach here; See the “Methods” section for more details.
We inferred a phylogenetic tree and its mutations from the 44,080 sequences that passed quality control (out of the 61,835 sequences available in the NCBI dataset as of 2/8/2021). We then built a training dataset describing all potential substitutions in terms of the following explanatory factors:
1. Locus (Gene) of the site considered
2. Input nucleotide base (A/C/G/U)
3. Input amino acid
4. Input codon
5. The position of the site in the codon (1–3)
6. Mature peptide indicator
7. Stem loop indicator (different categorical values for each one of the stem loop genes ORF10 and ORF1ab)
8. CG pair indicator (different value for each position of the CG pair or NULL for non-CG)
9. Right neighboring nucleotide
10. Left neighboring nucleotide
We considered all possible combinations of using each factor in a generalized linear model (GLM)25: omitting it, including it as an explanatory factor, or using it to split the GLM into sub-models such that a separate sub-model is built for each possible value. In our nomenclature, a model denotes a specific choice among these three options for each one of the categorical factors, and we fit the data to the sub-models created by splitting according to the partitioning factors. Subsequently, a total of 43,254 models were examined (each comprised of multiple sub-models). To account for over-dispersion, we considered a Negative-Binomial (NB) regression model in addition to the standard Poisson regression model in our GLM. All our models were fitted separately to synonymous and non-synonymous substitutions and accounted for the difference in rates between transitions and transversions.
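The fitting code itself is not reproduced in the text; as a rough sketch of the kind of fit described (a count GLM with a log-exposure offset, compared by AIC), one might write something like the following, where the toy data frame and its column names are purely hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical training table: one row per combination of factor values, with the
# number of inferred substitutions and the exposure (summed branch lengths).
df = pd.DataFrame({
    "count":    [3, 0, 7, 1, 2, 5, 0, 4],
    "exposure": [12.5, 4.0, 30.1, 8.2, 9.9, 21.3, 3.5, 15.0],
    "base":     ["A", "C", "G", "U", "A", "C", "G", "U"],
})

# One-hot encode a factor that enters the regression as an explanatory variable.
X = sm.add_constant(pd.get_dummies(df["base"], drop_first=True).astype(float))

# Poisson regression with log-exposure as an offset.
poisson = sm.GLM(df["count"], X, family=sm.families.Poisson(),
                 offset=np.log(df["exposure"])).fit()

# Negative-Binomial regression to allow for over-dispersion.
negbin = sm.GLM(df["count"], X, family=sm.families.NegativeBinomial(alpha=1.0),
                offset=np.log(df["exposure"])).fit()

print(poisson.aic, negbin.aic)   # AIC scores of the kind used to compare models
```

Splitting on a factor would then correspond to fitting one such regression per value of that factor and summing the resulting AIC scores, as described in the Methods.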
Figure 1 shows the top three NB and Poisson regression models based on their AIC (penalized log-likelihood) score26 on the training dataset. Please refer to the Supplementary Information for similar analyses of the NCBI phylogenetic tree (Supplementary Note 1 and Fig. S1) and ten top models (Fig. S2). In addition, we provide all models in the files Supplementary Data 1 and Supplementary Data 2.
### Predictions
We next evaluated the ability of our top models to predict novel substitutions. Our prediction data set was constructed as follows. We considered the 32,495 test sequences that were added to the NCBI database in the period between 10 February 2021 and 10 April 2021. We then identified 10,409 sites with zero substitutions in the training data, i.e., identical or missing in all training data sequences. Of these, 9696 sites had at most one base different from the base appearing in the training data, allowing us to confidently identify the substitution that occurred without inferring a phylogenetic tree for the test sequences. Among these, we identified 2697 sites that had at least one substitution in the test sequences. To avoid labeling sequencing errors, we required a minimum of two different test sequences with the mutated state; hence only 1266 sites remained. Sites that had a single test sample with a mutated state were entirely ignored in the evaluation phase. For an illustration of the training and test datasets and our labeling procedure, see Fig. 2.
We evaluated the ability of the top regression models to successfully rank the sites by their likelihood to mutate during the test period, thus creating new variants. Our evaluation is done at the amino acid level rather than the individual site (nucleotide) level to express the notion that non-synonymous amino acid changes are the true object of interest in predicting new variants. The transition from predicting sites to predicting amino acids is done by careful post-processing and aggregation of the prediction model results (see the “Methods” section). We used the area under the ROC curve (AUC) and the lift (ratio of true positives compared to a baseline model) to assess our results. The lift compares our model to two baselines: the random ordering of all possible relevant substitutions and a base model, which takes into account exposure, i.e., the number of ways in which a specific amino acid can be created, and also the transition/transversion (ti/tv) ratio, but not the other explanatory factors. We compared to the base model as a sanity check that our models were indeed finding additional information to characterize amino acid substitution rates beyond the exposure and ti/tv effect.
The results for our top models are shown in Fig. 3, both for the entire viral genome and the spike gene only, due to its biological importance27. We use both Poisson and Negative Binomial regressions to predict the substitution rate for each model. Synonymous and non-synonymous substitution rates are modeled separately due to the fundamentally different biological and evolutionary mechanisms they trigger. The community interest in non-synonymous substitutions also supports this separation28,29. Note that the substitutions are aggregated per amino acid and location on the genome, as explained in the “Methods” section. Figures S3 and S4 show the results for NCBI’s phylogenetic tree and our top ten models.
Based on these results, we chose the third Poisson model of non-synonymous amino acid substitutions for a more detailed presentation here. The lift curves for this model are shown in Fig. 4, demonstrating in more detail our models’ ability to identify likely substitutions. Note that in the test dataset, there are roughly 2% positives. Using the calculated lifts at 1%, the number of true positives is 7.51 times greater than the random model and 3.125 times greater than the base model. In numbers, this 1% represents 337 candidate substitutions, of which 50 actually occurred in the test period (compared to 6.66 expected under the random model and 16 in the top base model predictions). The lift curve against the base model is lower than that against the random model, yet still much higher than 1 for the highly ranked candidates (left side of the plot). This demonstrates that the exposure information used in the base model is essential for successful prediction, but the detailed models can still identify a substantial signal beyond the exposure. Figure S5 shows similar results for NCBI’s phylogenetic tree (source data: supplementary data 4).
In order to further validate our model, we have used an additional test set that contains sequences collected between 15 September 2021 and 1 October 2021. This test set also produces similar results to the one presented here (see Supplementary Note 2 and Figs. S6 and S7, source data for Fig. S7 appears in supplementary data 5).
To help the community predict and analyze future substitutions, we provide a complete list of predicted non-synonymous amino acid substitution rates in the spike protein in the file Supplementary Data 6. In addition, we note for each substitution whether or not it was observed in the training and test datasets.
As an additional demonstration of our models’ success in ranking amino acid substitutions of interest, we analyzed the following variants: Alpha (lineage B.1.1.7), Beta (lineage B.1.351), Gamma (lineage P.1), Delta (lineage B.1.617.2), Omicron (lineage B.1.1.529), Lambda (lineage C.37), Mu (lineage B.1.621), Epsilon (lineages B.1.429, B.1.427), Zeta (lineage P.2), Eta (lineage B.1.525) and Theta (lineage P.3). Many of the amino acid substitutions are common to several variants. Overall, there are 72 different amino acid substitutions in the spike protein comprising these variants. Of these, 45 were included in the training data, while 27 were recorded after our training cutoff date of 2/8/2021. According to our chosen model (third-ranked Poisson model), we examined their ranking among the 13,544 possible spike protein amino acid substitutions. A list of all 72 amino acid substitutions and their rankings is given in Fig. S8, demonstrating that 68% of the substitutions (49/72, including 16 substitutions not observed in training) were ranked in the top 2735 predictions (that is, top 20% of predictions) according to our model. In Fig. 5 we provide a similar analysis separately for the latest Omicron variant, showing that 70% of its spike protein amino acid substitutions (21/30, including 11 substitutions not observed in training) were ranked in the top 2733 predictions (that is, top 20% of predictions) according to our model. This result is also very significant (p < 2.2e − 16, one-sided Wilcoxon rank-sum test, test statistic: W = 777,362, 95% CI: (3476, ∞)).
Some of the substitutions comprising the inspected variants are hypothesized to be the result of positive selection30,31,32,33. As our model does not take positive selection into account, we would expect them to be ranked as less likely to occur by our model, compared to the non-selected mutations. In order to test this hypothesis, we conducted a one-sided Wilcoxon rank-sum test of whether substitutions having a survival advantage come from the same distribution as the rest of the 72 substitutions comprising the inspected variants. We identified a list of mutations noted in the literature as potentially conferring a selective advantage: S477G/N34, E484Q35, N501Y36, N501S37 enhancing binding of the spike protein to the hACE2 receptor; L452R38, N440K39, D614G40 conferring increased infectivity; G446V41, E484K42 affecting the affinity of monoclonal antibodies; and F490S41 reducing susceptibility to an antibody generated by those who were infected with other strains. Our test rejects the null hypothesis that this sub-group of substitutions comes from the same distribution as the rest of the 72 substitutions (p = 0.0066, test statistic: W = 106,589, 95% CI: (987, ∞)).
## Discussion
In this work, we model substitution rates in the SARS-CoV-2 as a function of several possible affecting factors describing sequence and coding information. We fit our models to training data that is based on inferring the phylogenetic tree connecting tens of thousands of sequences collected before February 2021 and also inferring the specific substitutions that have occurred on this tree. This phylogenetic reconstruction task is extremely challenging, and it is unlikely that the inferred tree or substitutions are completely accurate14. This is also evident by the different trees, substitutions, and slightly different models we get when we use the sarscov2phylo method24 to reconstruct the tree, with results given in the main text, compared to using NCBI’s reconstruction of the tree (see Supplementary Note 1 and Figs. S1, S3, and S5).
However, a critical point is that our evaluation approach on the test set of sequences added after the training cutoff date does not rely on any phylogenetic reconstruction or assumptions on the phylogenetic context between the test sequences and training sequences (as illustrated in Fig. 2). The fact that the test set shows high AUC and lift curves demonstrates that regardless of doubts about the accuracy of the training phylogenetic reconstruction, the models we fit to the training data are indeed useful to predict future substitutions.
The specific substitutions we include in the test set were carefully chosen to avoid sequencing errors and phylogenetic uncertainty in the evaluation. However, we emphasize that our models can be used to predict the likelihood of all possible substitutions and variants, including ones that have already appeared in the training data (as we did in our analysis of known variants in Fig. 5). Furthermore, the nucleotide level predictions we generate can be easily transformed into amino acid level predictions, as we did in our actual evaluation and AUC and lift calculations (with the methodology described in the “Methods” section). This is critical since the discussion of variants in the literature is typically focused on the amino acid level43,44.
Our top regression models shown in Fig. 1 suggest that all of the factors we consider are potentially useful for predicting future substitutions and variants, but some are more important than others. Specifically, most of the best models split into sub-models by amino acid rather than by codon (the amino acid factor is used for splitting in all top models according to NB AIC), suggesting that codon usage bias effects such as those described in refs. 17,28 may not be major.
An important property of our regression approach is that regression models consider all candidate explanatory factors at once. They are thus able to identify factors that appear essential when considered on their own but whose effect can be explained away by other, better factors. For instance, the neighboring nucleotides’ identities (context) seem to have a minor role once the amino acid and codon position are taken into account, and are not included at all in some of our top models (they are omitted entirely in two of the top three models). While it is true that in an analysis examining only the connection between neighbors and likelihood of substitution, the context would appear very significant, this effect is mitigated and may disappear when taking into account the better factors (see model 21,532 in Supplementary Data 1).
Our analysis includes ten variables that can affect the substitution rate. Many others can be proposed, including sequence-based variables such as more elaborate sequence contexts than immediate neighbors and external information such as conservation scores. As more data and knowledge accumulate, we expect our prediction models to improve by adding such relevant variables.
In summary, our statistical modeling approach offers two substantial benefits: A better understanding and modeling of the factors affecting substitution rates in the SARS-CoV-2 virus, and by implication in other viruses, and the resulting predictive models, which can be used to rank future variants by their likelihood.
Our contributions can potentially play a role in vaccine design, medical research, and other tasks in the ongoing battle against COVID-19 and future viral epidemics. Specifically, for the important task of vaccine design, one can imagine future pipelines where vaccines for many different potential variants can be prepared in advance using mRNA technology. Prioritizing which potential variants are more relevant can be done based on a combination of mutation likelihood prediction tools like we offer, with tools for inferring other relevant aspects like infectiousness9 and target effectiveness45. In addition, we demonstrated that high prevalence substitutions that hold a survival advantage are typically not identified by our models as having a high mutation rate. This observation suggests our models can be used to flag additional candidates for study as potentially inducing positive selection.
## Methods
### Statistics and reproducibility
The sequences used in this work were all downloaded from the NCBI website23,46. As a training set, we used 61,835 available sequences as of 8 February 2021. For a test set, we used 32,495 sequences released between 10 February 2021 and 10 April 2021. NCBI’s tree47 and the sarscov2phylo method24 exclude noisy sequences. These include low-quality sequences and sequences missing sufficient data, making it hard to place them meaningfully in the phylogeny. In addition to the results presented in the main text for the phylogenetic tree reconstructed according to the sarscov2phylo method, we provide results for the NCBI phylogenetic tree showing similar results (see Supplementary Note 1, Figs. S1, S3, and S5). To further validate our model, we have used an additional test set that contains sequences collected between 15 September 2021 and 1 October 2021. This test set also produces similar results to the one presented in the main text (see Supplementary Note 2 and Figs. S6 and S7). The threshold date separating the training and test sequences was chosen once and arbitrarily. The code used in this work, which can be used to reproduce the results, is available at ref. 48.
### Phylogeny of SARS-CoV-2
We used two phylogenetic reconstructions of SARS-CoV-2 following related works in the literature13,49,50:
1. The tree of complete SARS-CoV-2 Sequences by NCBI47. This is a distance-based phylogenetic tree. Further information is available online51,52.
2. A tree reconstructed by us using the sarscov2phylo method developed by Lanfear24. Following Lanfear’s method, we estimated the global phylogeny using IQ-TREE253 and FastTree 254. The resulting tree was then rooted with the NCBI reference sequence (accession NC_045512.2) using nw_reroot55. Finally, we removed sequences from very long branches using TreeShrink56.
We used the global sequence alignment method implemented in the sarscov2phylo method which aligns every sequence to the reference sequence (accession NC_045512.2) from NCBI and then joins the individually aligned sequences into a global alignment using MAFFT v7.47157, faSplit58, faSomeRecords59, and GNUparallel60.
### Internal nodes reconstruction
The internal nodes of the tree phylogeny are necessary to infer the substitutions that occurred on the tree edges. We now describe our heuristic, inspired by Fitch’s algorithm61, used to reconstruct the sequences in the internal nodes. Model-based approaches for ancestral sequence reconstruction (such as FastML62) cannot be applied here due to a large number of sequences.
Every site holds a probability vector over the bases A/C/G/U defined as follows:
1. For every leaf, assign probability 1 to the base in the respective site and probability 0 to all other bases. The probability is split uniformly among the possible bases whenever there is base ambiguity.
2. Pass from bottom to top. The probability vector of an internal node is the average of the probability vectors of its children.
3. Pass from top to bottom. We descend the tree from the root and add to each node ϵ = 1/(# of children) multiplied by its parent’s probability vector (and normalize by 1 + ϵ to keep it in the l1-simplex).
4. The chosen base at every node is determined by the highest probability value. This procedure also solves ambiguous sites in the leaves.
By doing this, we break ties between the highest probabilities (such ties are frequent) and allow information to flow between nodes that have a common ancestor.
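A minimal sketch of this two-pass heuristic on a toy tree (the node representation and the example data are illustrative assumptions, not the authors’ implementation):

```python
import numpy as np

BASES = "ACGU"

def leaf(base):
    return {"children": [], "base": base}

# Toy tree: a root with two internal children, each with two leaf children.
root = {"children": [
    {"children": [leaf("A"), leaf("A")]},
    {"children": [leaf("G"), leaf("A")]},
]}

def init_probs(node):
    """Step 1: each leaf gets probability 1 on its observed base."""
    if not node["children"]:
        p = np.zeros(len(BASES))
        p[BASES.index(node["base"])] = 1.0
        node["p"] = p
    for child in node["children"]:
        init_probs(child)

def bottom_up(node):
    """Step 2: an internal node's vector is the average of its children's vectors."""
    for child in node["children"]:
        bottom_up(child)
    if node["children"]:
        node["p"] = np.mean([child["p"] for child in node["children"]], axis=0)

def top_down(node):
    """Step 3: add eps times the parent's vector to each child and renormalize."""
    if not node["children"]:
        return
    eps = 1.0 / len(node["children"])
    for child in node["children"]:
        child["p"] = (child["p"] + eps * node["p"]) / (1.0 + eps)
        top_down(child)

init_probs(root)
bottom_up(root)
top_down(root)
print(BASES[int(np.argmax(root["p"]))])   # Step 4: most probable base at the root
```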
Finally, we applied a battery of statistical tests to validate the phylogenetic tree and its internal nodes (details in Supplementary Note 3).
### Substitution model
By reconstructing the tree’s internal nodes, we can generate a tabular dataset consisting of the list of factors and the number of substitutions that occurred for each instantiation of these factors. We use the multiple regression approach described in ref. 25, which considers, for every factor in the tabular data, the options to either join the regression linearly, not join at all, or partition the data according to it. We use the term model to denote a specific choice of inclusion for each categorical factor that might affect the substitution rate as listed.
A partitioning splits the regression model into multiple smaller regressions, one for each value of the partitioning factor. Consider, for example, that there are only two factors, the base and the codon position. If both join the regression linearly, then only one regression will be applied with a one-hot encoding of both factors. However, if the base is a partitioning factor, we will use four regression models, partitioning the data according to the base (A/C/G/U). We use the term sub-model for each of the actual models fitted after splitting. The AIC26 score is given by $$\mathrm{AIC}=2k-2\log(\hat{L})$$ where k is the number of free parameters and $$\hat{L}$$ is the maximum likelihood. The AIC score is calculated separately for each sub-model regression. Then, the AIC scores of these sub-models are summed up to form one unified score for this model.
Consequently, the number of models we consider is, in theory, combinatorial in the number of values each factor can have. However, the number of models can be substantially reduced since some factors are dependent on one another (for example, the codon determines the amino acid and base). In our data, we score 43,254 models. We apply both Poisson regression and Negative-Binomial regression63 for each model, where the latter is used to account for overdispersion, specifically to account for latent factors not included in the model. The complete list of factors is given in the main paper. Finally, our experiments infer different regression coefficients for synonymous and non-synonymous sub-models and combine the AIC scores. We also considered doing the same for transitions/transversions and different output nucleotides, but we got strictly worse AIC scores.
Another critical notion is that of exposure64, which weights the states we train on according to the frequency of their occurrence. For instance, a specific combination of frequently appearing factors in the dataset has relatively higher exposure than a rare set. When we learn the regression model, taking exposure into account is crucial to reduce bias in the dataset and improve the predictions. The exposure is proportional to the total amount of time a specific set of factors was observed. To calculate that duration, we summarize the lengths of relevant branches in the phylogenetic tree and use the sum as an offset variable in the regression. For the test set, exposure is unnecessary (or can be set to an arbitrary constant) as we calculate the exposure for the leaves of the tree, which are the training sequences, and we only consider sites for which there were no substitutions along the phylogenetic tree.
Finally, we apply additional normalization. We first define the non-synonymous ti/tv ratio65:
$$r_{ti:tv}^{\text{non-syn}}=\frac{\#\,\text{non-synonymous transitions}}{\#\,\text{non-synonymous transversions}}$$
in the training data. Then, for each state, we count the number of possible transitions and transversions and normalize the substitution rate accordingly. For example, the codon GCG has one possible non-synonymous transition and two possible non-synonymous transversions in the first codon position. The non-synonymous substitution rate for that state is hence normalized by $$1+2/r_{ti:tv}^{\text{non-syn}}$$. An identical procedure is applied to the synonymous substitutions.
### Prediction
Our main prediction task is focused on predicting amino acid substitutions. As our basic predictions are always at the single nucleotide level, we carefully aggregate them to form amino acid predictions—the substitution rate of an amino acid output at a given location is the sum of the rates of all the substitutions leading to it. Note that in most but not all cases, there is only a simple correspondence, in that there is a single non-synonymous nucleotide substitution that leads to a given amino acid change. However, more complex settings can occur, such as the substitution from Histidine to Glutamine through four different non-synonymous transversions in the third codon position.
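A small sketch of this aggregation step (the per-nucleotide predictions below are hypothetical, purely to illustrate the summation):

```python
from collections import defaultdict

# Hypothetical per-nucleotide predictions: (codon index, resulting amino acid, predicted rate).
nucleotide_predictions = [
    (452, "R", 0.8e-4),   # two different nucleotide substitutions that both
    (452, "R", 0.3e-4),   # produce the same amino acid at the same codon
    (452, "Q", 0.5e-4),
]

amino_acid_rates = defaultdict(float)
for codon_index, amino_acid, rate in nucleotide_predictions:
    amino_acid_rates[(codon_index, amino_acid)] += rate   # sum all rates leading to the same output

print(dict(amino_acid_rates))
```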
To test the performance of our predictions, we compare them to two baselines. The first baseline is the random model which places equal probability on all amino acid substitutions. While a naive random model would consider all 21 amino acids per location, we permit only one substitution per codon since multiple substitutions per codon are highly unlikely (<0.5% of the substitutions occurred at adjacent sites in the same tree branch). This limitation drastically improves the random model’s predictions and reduces possible amino acid substitutions throughout the molecule from 121,653 to 33,684.
The second baseline model is called base model. This model considers the exposure and ti/tv normalization for each substitution and uses it for prediction. Hence it is a lot less naive than the random model and relies on careful evaluation of the different likelihoods for different substitutions based on the observed states in the tree and the ti/tv effect. It differs from our true prediction models in ignoring the ten potential affecting factors, and comparing to it is our way to quantify the contribution of these factors to predictive power within our regression approach.
To compare the top models to the baseline models, we use two scoring methods—AUC and lift (we emphasize here again that all comparisons are made on data in the test period not used for building the models, as explained in Fig. 2 of the main text). First, we transform the predicted substitution rate into a binary prediction vector of 0/1 predictions. We do this by applying a threshold on the predicted substitution rate where all rates above a specific value are deemed positive. By varying the threshold, we can derive the ROC curve (using the test dataset as the ground truth), from which we can calculate the AUC score. Lift66,67 measures how well a targeting model performs at predicting compared to a random choice method. We compute the lift for each threshold by taking the ratio of “precision at x%” between our model and each baseline model separately.
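As a rough illustration of these two scores (with toy labels and scores rather than the paper’s data), the AUC and a lift at a given fraction might be computed as follows:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy ground truth (did the candidate substitution occur in the test period?)
# and toy model scores (predicted substitution rates).
y_true = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])
y_score = np.array([0.1, 0.9, 0.2, 0.05, 0.7, 0.3, 0.15, 0.02, 0.8, 0.4])

print("AUC:", roc_auc_score(y_true, y_score))

def lift_at(y_true, y_score, frac, baseline_score=None):
    """Precision among the top `frac` candidates divided by a baseline's precision.

    With baseline_score=None the baseline is random ranking, whose expected
    precision is the overall positive rate.
    """
    k = max(1, int(round(frac * len(y_true))))
    top = np.argsort(-y_score)[:k]
    precision = y_true[top].mean()
    if baseline_score is None:
        base_precision = y_true.mean()
    else:
        base_top = np.argsort(-baseline_score)[:k]
        base_precision = y_true[base_top].mean()
    return precision / base_precision

print("Lift at 20% vs. random ranking:", lift_at(y_true, y_score, 0.2))
```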
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article. | 2023-03-26 20:13:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5764635801315308, "perplexity": 1220.3135007394553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00699.warc.gz"} |
https://digital-library.theiet.org/content/books/10.1049/pbcs021e_ch3 | ## Diffusion and oxidation of SiGe/SiGeC films
In this chapter, a brief discussion on the nature of point defect-mediated diffusion, boron diffusion in silicon and SiGe, and reasons for strain relaxation in SiGe has been made. Because of the relationship between dopant diffusion and point defect diffusion, both the movement of point defects and dopants need to be modelled simultaneously. It has been shown that boron in strained, low Ge-composition SiGe layers diffuses primarily via an interstitial-mediated mechanism. Mobile misfit dislocations can act as a strong interstitial sink, but immobile dislocations appear to have very little effect on the point defect population. However, experiments should be performed to determine the segregation coefficient across the Si/SiGe interface as a function of germanium and dopant concentration. Further studies should also be taken up to find more closely the relationship between relaxation and interstitial absorption.
| 2019-07-21 21:23:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19541925191879272, "perplexity": 6795.007515399948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527204.71/warc/CC-MAIN-20190721205413-20190721231413-00040.warc.gz"}
https://brilliant.org/problems/exploring-divisor-function-4/ | # Exploring Divisor Function 4
$\sum _{n=1}^{\infty}\frac{d\left(2016n\right)}{n^2}=\frac{A\pi^B}{C}$
Where $$d(n)$$ counts the number of divisors of $$n$$. Find $$A+B+C$$
Hint: Prime factorise 2016
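As a rough numerical sanity check (a sketch only, assuming SymPy's divisor_count is available; the partial sum converges slowly, so the truncated value only approximates the closed form):

```python
from sympy import divisor_count   # d(n): the number of divisors of n

N = 5000
partial = sum(divisor_count(2016 * n) / n**2 for n in range(1, N + 1))
print(float(partial))   # compare against candidate values of A*pi**B / C
```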
| 2018-01-16 10:03:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5374499559402466, "perplexity": 2385.9619109949754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886397.2/warc/CC-MAIN-20180116090056-20180116110056-00699.warc.gz"}
https://intelligencemission.com/electricity-free-utilities-free-electricity-solar-panels.html | Ex FBI regional director, Free Electricity Free Energy, Free Power former regional FBI director, created Free Power lot of awareness about ritualistic abuse among the global elite. It goes into satanism, pedophilia, and child sex trafficking. Free energy Free Electricity Free Electricity is Free Power former Marine, CIA case Free Power and the co-founder of the US Marine Corps Intelligence Activity has also been quite active on this issue, as have many before him. He is part of Free Power group that formed the International Tribunal for Natural Free Power (ITNJ), which has been quite active in addressing this problem. Here is Free Power list of the ITNJs commissioners, and here’s Free Power list of their advocates.
Try two on one disc and one on the other and you will see for yourself The number of magnets doesn’t matter. If you can do it width three magnets you can do it with thousands. Free Energy luck! @Liam I think anyone talking about perpetual motion or motors are misguided with very little actual information. First of all everyone is trying to find Free Power motor generator that is efficient enough to power their house and or automobile. Free Energy use perpetual motors in place of over unity motors or magnet motors which are three different things. and that is Free Power misnomer. Three entirely different entities. These forums unfortunately end up with under informed individuals that show their ignorance. Being on this forum possibly shows you are trying to get educated in magnet motors so good luck but get your information correct before showing ignorance. @Liam You are missing the point. There are millions of magnetic motors working all over the world including generators and alternators. They are all magnetic motors. Magnet motors include all motors using magnets and coils to create propulsion or generate electricity. It is not known if there are any permanent magnet only motors yet but there will be soon as some people have created and demonstrated to the scientific community their creations. Get your semantics right because it only shows ignorance. kimseymd1 No, kimseymd1, YOU are missing the point. Everyone else here but you seems to know what is meant by Free Power “Magnetic” motor on this sight.
### “These are not just fringe scientists with science fiction ideas. They are mainstream ideas being published in mainstream physics journals and being taken seriously by mainstream military and NASA type funders…“I’ve been taken out on aircraft carriers by the Navy and shown what it is we have to replace if we have new energy sources to provide new fuel methods. ” (source)
NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! rychu Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power has the credentials and knowledge to answer these questions and Bedini is the visionary for them!
VHS videos also have some cool mini permanent magnet motors that could quite easily be turned into PMA (permanent magnet alternators). I pulled one apart about Free Power month ago. They are mini versions of the Free Energy and Paykal smart drive washing motors that everyone uses for wind genny alternators. I have used the smart drive motors on hydro electric set ups but not wind. You can wire them to produce AC or DC. Really handy conversion. You can acess the info on how to do it on “the back shed” (google it). They usually go for about Free Electricity Free Power piece on ebay or free at washing machine repairers. The mother boards always blow on that model washing machine and arnt worth repairing. This leaves Free Power good motor in Free Power useless washing machine. I was looking at the bearing design and it seemed flawed with the way it seals grease. Ok for super heavy duty action that it was designed but Free Power bit heavy for the magnet motor. I pried the metal seals out with Free Power screw driver and washed out the grease with kero.
Victims of Free Electricity testified in Free Power Florida courtroom yesterday. Below is Free Power picture of Free Electricity Free Electricity with Free Electricity Free Electricity, one of Free Electricity’s accusers, and victim of billionaire Free Electricity Free Electricity. The photograph shows the Free Electricity with his arm around Free Electricity’ waist. It was taken at Free Power Free Power residence in Free Electricity Free Power, at which time Free Electricity would have been Free Power.
Free Energy The type of magnet (natural or man-made) is not the issue. Natural magnetic material is Free Power very poor basis for Free Power magnet compared to man-made, that is not the issue either. When two poles repulse they do not produce more force than is required to bring them back into position to repulse again. Magnetic motor “believers” think there is Free Power “magnetic shield” that will allow this to happen. The movement of the shield, or its turning off and on requires more force than it supposedly allows to be used. Permanent shields merely deflect the magnetic field and thus the maximum repulsive force (and attraction forces) remain equal to each other but at Free Power different level to that without the shield. Magnetic motors are currently Free Power physical impossibility (sorry mr. Free Electricity for fighting against you so vehemently earlier).
Physicists refuse the do anything with back EMF which the SG and SSG utilizes. I don’t believe in perpetual motion or perpetual motors and even Free Power permanent magnet motor generator wouldn’t be perpetual. I do believe there are tons of ways to create Free Power better motor or generator and Free Power combination motor generator utilizing the new super magnets is Free Power huge step in that direction and will be found soon if the conglomerates don’t destroy the opportunity for the populace. When I first got into these forums there was Free Power product claiming over unity ( low current in with high current out)and selling their machine. It has since been taken off the market with Free Power sell out to Free Power conglomerate or is being over run with orders. I don’t know! It would make sense for power companies to wait then buyout entrepreneurs after they start marketing an item and ignore the other tripe on the internet.. Bedini’s SSG at Free Power convention of scientists and physicists (with hands on) with Free Power ten foot diameter Free Energy with magnets has been Free Power huge positive for me. Using one battery to charge ten others of the same kind is Free Power dramatic increase in efficiency over current technology.
I wanted to end with Free Power laugh. I will say, I like Free Electricity Free Power for his comedy. Sure sometimes I am not sure if it comes across to most people as making fun of spirituality and personal work, or if it just calls out the ridiculousness of some of it when we do it inauthentically, but he still has some great jokes. Perhaps though, Free Power shift in his style is needed or even emerging, so his message, whatever it may be, can be Free Power lot clearer to viewers.
Why? Because I didn’t have the correct angle or distance. It did, however, start to move on its own. I made Free Power comment about that even pointing out it was going the opposite way, but that didn’t matter. This is Free Power video somebody made of Free Power completed unit. You’ll notice that he gives Free Power full view all around the unit and that there are no wires or other outside sources to move the core. Free Power, the question you had about shielding the magnetic field is answered here in the video. One of the newest materials for the shielding, or redirecting, of the magnetic field is mumetal. You can get neodymium magnets via eBay really cheaply. That way you won’t feel so bad when it doesn’t work. Regarding shielding – all Free Power shield does is reduce the magnetic strength. Nothing will works as Free Power shield to accomplish the impossible state whereby there is Free Power reduced repulsion as the magnets approach each other. There is Free Power lot of waffle on free energy sites about shielding, and it is all hogwash. Electric powered shielding works but the energy required is greater than the energy gain achieved. It is Free Power pointless exercise. Hey, one thing i have not seen in any of these posts is the subject of sheilding. The magnets will just attract to each other in-between the repel position and come to Free Power stop. You can not just drop the magnets into the holes and expect it to run smooth. Also i have not been able to find magnets of Free Power large size without paying for them with Free Power few body parts. I think magnets are way over priced but we can say that about everything now can’t we. If you can get them at Free Power good price let me know.
The torque readings will give the same results. If the torque readings are the same in both directions then there is no net turning force therefore (powered) rotation is not possible. Of course it is fun to build the models and observe and test all of this. Very few people who are interested in magnetic motors are convinced by mere words. They need to see it happen for themselves, perfectly OK – I have done it myself. Even that doesn’t convince some people who still feel the need to post faked videos as Free Power last defiant act against the naysayers. Sorry Free Power, i should have asked this in my last post. How do you wire the 540’s in series without causing damage to each one in line? And no i have not seen the big pma kits. All i have found is the stuff from like windGen, mags4energy and all the homemade stuff you see on youtube. I have built three pma’s on the order of those but they don’t work very good. Where can i find the big ones? Free Power you know what the 540 max watts is? Hey Free Power, learn new things all the time. Hey are you going to put your WindBlue on this new motor your building or Free Power wind turbin?
Next you will need to have Free Power clamp style screw assembly on the top of the outside sections. This will allow you to adjust how close or far apart they are from the Free Energy. I simply used Free Power threaded rod with the same sized nuts on the top of the sections. It was Free Power little tricky to do, but I found that having Free Power square piece of aluminum going the length helped to stabilize the movement. Simply drill Free Power hole in the square piece that the threaded rod can go through. Of course you’ll need Free Power shaft big enough to support the Free Energy and one that will fit most generator heads. Of course you can always adapt it down if needed. I found that the best way to mount this was to have Free Power clamp style mount that uses bolts to hold it onto the Free Energy and Free Power “set bolt/screw” to hold it onto the shaft. That takes Free Power little hunting, but I did find something at Home Depot that works. If you’re handy enough you could create one yourself. Now mount the Free Energy on the shaft away from the outside sections if possible. This will keep it from pushing back and forth on you. Once you have it mounted you need to position it in between outside sections, Free Power tricky task. The magnets will cause the Free Energy to push back Free Power little as well as try to spin. The best way to do this is with some help or some rope. Why? Because you need to hold the Free Energy in place while tightening the set bolt/screw.
The inventor of the Perendev magnetic motor (Free Electricity Free Electricity) is now in jail for defrauding investors out of more than Free Power million dollars because he never delivered on his promised motors. Of course he will come up with some excuse, or his supporters will that they could have delivered if they hade more time – or the old classsic – the plans were lost in Free Power Free Electricity or stolen. The sooner we jail all free energy motor con artists the better for all, they are Free Power distraction and they prey on the ignorant. To create Free Power water molecule X energy was released. Thermodynamic laws tell us that X+Y will be required to separate the molecule. Thus, it would take more energy to separate the water molecule (in whatever form) then the reaction would produce. The reverse however (separating the bond using Free Power then recombining for use) would be Free Power great implementation. But that is the bases on the hydrogen fuel cell. Someone already has that one. Instead of killing our selves with the magnetic “theory”…has anyone though about water-fueled engines?.. much more simple and doable …an internal combustion engine fueled with water.. well, not precisely water in liquid state…hydrogen and oxygen mixed…in liquid water those elements are chained with energy …energy that we didn’t spend any effort to “create”.. (nature did the job for us).. and its contained in the molecular union.. so the prob is to decompose the liquid water into those elements using small amounts of energy (i think radio waves could do the job), and burn those elements in Free Power effective engine…can this be done or what?…any guru can help?… Magnets are not the source of the energy.
That is what I envision. Then you have the vehicle I will build. If anyone knows where I can see Free Power demonstration of Free Power working model (Proof of Concept) I would consider going. Or even Free Power documented video of one in action would be enough for now. Burp-Professor Free Power Gaseous and Prof. Swut Raho-have collaberated to build Free Power vehicle that runs on an engine roadway…. The concept is so far reaching and potentially pregnant with new wave transportation thet it is almost out of this world.. Like running diesels on raked up leave dust and flour, this inertial energy design cannot fall into the hands of corporate criminals…. Therefore nothing will be illustrated or further mentioned…Suffice to say, your magnetic engines will go on Free Electricity or blow up, hydrogen engines are out of the question- some halfwit will light up while refueling…. America does not deserve the edge anymore, so look to Europe, particuliarly the scots to move transportation into the Free Electricity century…
If there is such Free Power force that is yet undiscovered and can power an output shaft and it operates in Free Power closed system then we can throw out the laws of conservation of energy. I won’t hold my breath. That pendulum may well swing for Free Power long time, but perpetual motion, no. The movement of the earth causes it to swing. Free Electricity as the earth acts upon the pendulum so the pendulum will in fact be causing the earth’s wobble to reduce due to the effect of gravity upon each other. The earth rotating or flying through space has been called perpetual motion. Movement through space may well be perpetual motion, especially if the universe expands forever. But no laws are being bent or broken. Context is what it is all about. Mr. Free Electricity, again I think the problem you are having is semantics. “Perpetual- continuing or enduring forever; everlasting. ” The modern terms being used now are “self-sustaining or sustainable. ” Even if Mr. Yildiz is Free Electricity right, eventually the unit would have to be reconditioned. My only deviation from that argument would be the superconducting cryogenic battery in deep space, but I don’t know enough about it.
Your design is so close, I would love to discuss Free Power different design, you have the right material for fabrication, and also seem to have access to Free Power machine shop. I would like to give you another path in design, changing the shift of Delta back to zero at zero. Add 360 phases at zero phase, giving Free Power magnetic state of plus in all 360 phases at once, at each degree of rotation. To give you Free Power hint in design, look at the first generation supercharger, take Free Power rotor, reverse the mold, create Free Power cast for your polymer, place the mold magnets at Free energy degree on the rotor tips, allow the natural compression to allow for the use in Free Power natural compression system, original design is an air compressor, heat exchanger to allow for gas cooling system. Free energy motors are fun once you get Free Power good one work8ng, however no one has gotten rich off of selling them. I’m Free Power poor expert on free energy. Yup that’s right poor. I have designed Free Electricity motors of all kinds. I’ve been doing this for Free Electricity years and still no pay offs. Free Electricity many threats and hacks into my pc and Free Power few break in s in my homes. It’s all true. Big brother won’t stop keeping us down. I’ve made millions if volt free energy systems. Took Free Power long time to figure out.
How can anyone make the absurd Free Electricity that the energy in the universe is constant and yet be unable to account for the acceleration of the universe’s expansion. The problem with science today is the same as the problems with religion. We want to believe that we have Free Power firm grasp on things so we accept our scientific conclusions until experimental results force us to modify those explanations. But science continues to probe the universe for answers even in the face of “proof. ” That is science. Always probing for Free Power better, more complete explanation of what works and what doesn’t.
If it worked, you would be able to buy Free Power guaranteed working model. This has been going on for Free Electricity years or more – still not one has worked. Ignorance of the laws of physics, does not allow you to break those laws. Im not suppose to write here, but what you people here believe is possible, are true. The only problem is if one wants to create what we call “Magnetic Rotation”, one can not use the fields. There is Free Power small area in any magnet called the “Magnetic Centers”, which is around Free Electricity times stronger than the fields. The sequence is before pole center and after face center, and there for unlike other motors one must mesh the stationary centers and work the rotation from the inner of the center to the outer. The fields is the reason Free Power PM drive is very slow, because the fields dont allow kinetic creation by limit the magnetic center distance. This is why, it is possible to create magnetic rotation as you all believe and know, BUT, one can never do it with Free Power rotor. | 2019-03-23 03:41:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3169144093990326, "perplexity": 1720.751372684634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202711.3/warc/CC-MAIN-20190323020538-20190323042538-00298.warc.gz"} |
http://www.insect.org.cn/CN/Y2005/V48/I6/837 | ›› 2005, Vol. 48 ›› Issue (6): 837-848.
• Research Papers •
### Ultra-morphology and chemical composition of waxes secreted by the wax scale insects Ceroplastes ceriferus and C. japonicus
1. College of Life Science and Technology, Shanxi University
• Online: 2005-12-29 Published: 2005-12-20
• Corresponding author: XIE Ying-Ping
### Ultra-morphology and chemical composition of waxes secreted by two wax scale insects, Ceroplastes ceriferus (Fabricius) and C. japonicus Green (Homoptera: Coccidae)
XIE Ying-Ping, XUE Jiao-Liang
1. College of Life Science and Technology, Shanxi University
• Online:2005-12-29 Published:2005-12-20
Abstract:
The ultra-morphology and chemical composition of waxes secreted by the scale insects Ceroplastes ceriferus (Fabricius) and C. japonicus Green (Homoptera: Coccidae) were studied with the techniques of scanning electron microscopy (SEM) and gas chromatography/mass spectrometry (GC/MS). The results indicate that the two wax scale insects have a similar waxy secretion and wax-test-forming process. The scale insects in their first and second instars secreted dry wax that formed a star-shaped test. Every wax horn around the margin of the test consists of two segments, corresponding to the two developmental instars. Furthermore, each of the two segments of the wax horn includes many sub-segments. Meanwhile, the wax accumulated into a cap-like structure with many layers on the dorsal region of the body. It is believed that some kind of rhythm exists in the wax secretion. Many striate punctures form wax glands that usually cannot be found in slide specimens of wax scale insects observed under light microscopy. As the scales developed into the 3rd instar and adult stage, the wax secretion changed into a "wet state" and formed a waxy test of tortoise-shell shape. The wax glands on the dorsal surface are mainly trilocular and quadrilocular pores. Dense wax pores arranged in longitudinal strips were also found over the anal plates. The main chemical compositions of the wax secretions of the two scale insects were determined with GC/MS by two methods, esterification and unesterification. For C. ceriferus, 14 and 14 compounds were determined from its wax secretion with the two methods respectively, while 10 and 25 compounds were determined respectively from the wax secretion of C. japonicus. The main components of the wax secretions include a series of long-chain saturated and unsaturated hydrocarbons, fatty acids, fatty alcohols, esters, and some compounds with multi-ring, macrocyclic, or heterocyclic structures. Their biological functions are discussed.
https://www.physicsforums.com/threads/theorem-in-qm.355689/ | # Theorem in QM
1. Nov 17, 2009
### woodywood
aqqw
Last edited: Nov 18, 2009
2. Nov 17, 2009
### turin
How about ψ = ψe + ψo (where, of course, ψ must satisfy the Schroedinger equation), and then prove that ψe ≠ 0 and ψo ≠ 0 cannot be simultaneously true.
3. Nov 17, 2009
### kof9595995
Emm.....By wave function I assume you're talking about eigenfunction. Even though I can only prove that we can always find even solutions and odd solutions. As for what you said "prove wave function can be only even or odd." I really have no idea how to do it. I tried turin's method but didn't manage to get the desired answer. Let's wait for other people's opinion.
4. Nov 17, 2009
### jdwood983
No, I'm pretty sure he needs wavefunctions and not eigenfunctions
woodywood, you may want to consider the parity operator:
$$P\Psi(x,y,z,t)=\Psi(-x,-y,-z,t)$$
and apply it to both the $H\Psi$ and $\Psi$ (after which you take the Hamiltonian of this latter one too--you should have $PH\Psi$ & $HP\Psi$) then see how the two relate.
5. Nov 17, 2009
### turin
I'm pretty sure the OP needs eigenfunctions, not just wavefunctions in general. In fact, I'm pretty sure it is even more restricted to stationary eigenstates, because it is trivial to construct, by superposition, a general wavefunction that lacks definite parity, from any spectrum that includes both even and odd states.
6. Nov 18, 2009
### woodywood
oijluhkjb
Last edited: Nov 18, 2009
7. Nov 18, 2009
### woodywood
njhgfjhklh;ih;oij
Last edited: Nov 18, 2009 | 2017-12-12 15:03:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004953503608704, "perplexity": 1480.6099848589968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517181.32/warc/CC-MAIN-20171212134318-20171212154318-00224.warc.gz"} |
https://www.physicsforums.com/threads/differentiation-chain-rule.779464/ | # Differentiation - Chain Rule
1. Nov 1, 2014
### PhyAmateur
In one physics problem if $$r^2= \lambda^2(1+\frac{m}{2\lambda})^2$$
what is $dr^2 ?$
Should I find $dr$ starting from $r= \lambda(1+\frac{m}{2\lambda})$ first and then square or find $dr^2$ starting from r^2? I know this is a basic question in differentiation using chain rule but it seems I am stuck over this..
Last edited by a moderator: Nov 1, 2014
2. Nov 1, 2014
### SteamKing
Staff Emeritus
A little more context would be helpful here.
Are you trying to find $(dr)^2$ or $dr^2$? I think there's a difference.
3. Nov 1, 2014
### PhyAmateur
It is written as $$dr^2$$ I was just wondering if I should derive once and then after finding the answer, derive twice.. What do you say?
4. Nov 1, 2014
### SteamKing
Staff Emeritus
This notation is ambiguous to me. I can't advise you further without more information about this problem where you found it.
5. Nov 2, 2014
### Fredrik
Staff Emeritus
I would interpret $dr^2$ as $(dr)^2$, not as $d(r^2)$. The straightforward way is to solve for r, compute $dr=\frac{dr}{d\lambda}d\lambda$, and then square the result.
6. Nov 2, 2014
### PhyAmateur
It is the same $$dr^2$$ found in the 3 sphere metric for example..
7. Nov 2, 2014
### PeroK
$r^2 = λ^2(1 + \frac{m}{λ} + \frac{m^2}{4λ^2}) = λ^2 + mλ + \frac{m^2}{4}$
$r = λ(1 + \frac{m}{2λ}) = λ + \frac{m}{2}$
$\frac{d(r^2)}{dλ} = 2λ + m$
$d(r^2) = (2λ + m)dλ \ \ (A)$
$\frac{dr}{dλ} = 1$
$dr = dλ$
$(dr)^2 = (dλ)^2 \ \ (B)$
A or B, take your pick. | 2018-03-22 14:24:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7860668301582336, "perplexity": 1626.4970376089802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647885.78/warc/CC-MAIN-20180322131741-20180322151741-00712.warc.gz"} |
https://www.controlpaths.com/2021/04/19/implementing-a-digital-biquad-filter-in-verilog/ | Filtering is likely the most common DSP algorithm that any embedded engineer has to design, no matter if they are developing for an STM32, TI DSP or, of course, an FPGA. Filtering is important because the majority of the applications are connected to the real world, so they need to capture real signals with the undesirable elements that all real signals have, i.e noise, offset… Also, signals that we need to acquire, not always will be the dominant signals we acquire, for example, for biological signals, we will acquire a high level of 50 or 60 Hz corresponding with the grid frequency. In this case, we will need a filter.
As you surely know as a reader of this blog, in digital systems we have 2 kinds of filters: FIR (Finite Impulse Response) filters, which only use the present and past values of the input, and IIR (Infinite Impulse Response) filters, which use the present and past values of the input and also the values of past outputs. FIR filters are easy to design for many applications, and on this blog we did that here, but the response achievable with low-order FIR filters is limited. On the other hand, with IIR filters we can achieve more aggressive responses, but they have the disadvantage that the filter can be unstable in certain cases. On this blog we already designed IIR filters here, but in that post we used MATLAB to design the filter and HDL Coder to implement it. In this post, I will show how we can implement a second-order filter, or second-order system, in Verilog step by step.
### Why a second-order system?
Second-order systems are one of the two building blocks with which we can model any linear system; the other is the first-order system. The reason is that any higher-order system can be decomposed: a third-order system can be implemented as a first-order system in series with a second-order system, and a 7th-order filter can be built as three second-order filters in series with one first-order filter.
For example, a 5th-order system looks like the following equation.
$H(z)= \frac{b_0 + b_1 \cdot z^{-1} + b_2 \cdot z^{-2} + b_3 \cdot z^{-3} + b_4 \cdot z^{-4} + b_5 \cdot z^{-5}}{1 + a_1 \cdot z^{-1} + a_2 \cdot z^{-2} + a_3 \cdot z^{-3} + a_4 \cdot z^{-4} + a_5 \cdot z^{-5}}$
We can rewrite this equation as two 2nd-order systems in series with a first-order system:
$H(z)= \frac{b_{01} + b_{11} \cdot z^{-1} + b_{21} \cdot z^{-2}}{1 + a_{11} \cdot z^{-1} + a_{21} \cdot z^{-2}} \cdot \frac{b_{02} + b_{12} \cdot z^{-1} + b_{22} \cdot z^{-2}}{1 + a_{12} \cdot z^{-1} + a_{22} \cdot z^{-2}} \cdot \frac{b_{03} + b_{13} \cdot z^{-1}}{1 + a_{13} \cdot z^{-1}}$
This kind of decomposition is known as sections; for second-order systems, it appears on the internet and in the literature as second-order sections (SOS). Actually, in the post where I talk about the bandpass filter designed in MATLAB, the final implementation of the 8th-order filter was four 2nd-order filters in series (4 sections). In the MATLAB command window, you can use the command tf2sos to convert a high-order filter into a set of second-order filters. In case you don't have a MATLAB license, the Python library SciPy has the command scipy.signal.tf2sos, which performs the same function. Besides the simplicity that second-order filters give us, there is a numerical reason: when using a fixed-point encoding, implementing high-order filters directly can bring stability issues, so splitting the filter into second-order systems improves the stability of the system.
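As a sketch of that Python workflow (my own addition; butter and tf2sos are documented SciPy functions, while the cutoff and sample rate are arbitrary example values), designing a 5th-order low-pass filter and splitting it into sections looks like this:
from scipy import signal
# 5th-order Butterworth low pass: 1 kHz cutoff at a 100 kHz sample rate.
b, a = signal.butter(5, 1000, btype='low', fs=100000)
sos = signal.tf2sos(b, a)
# Each row of sos is one section: [b0, b1, b2, a0, a1, a2] with a0 = 1.
print(sos.shape)  # (3, 6): three sections, one of them effectively 1st order
print(sos)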
### Verilog implementation.
Before starting to code, we have to think about what we want from the module we will create. In my case, it is very important to be able to parametrize the module; this way, if I change the width of the input or the output, that only means changing a parameter, without modifying the code of the module, since modifying the code means new tests for the module. Also, we have to think about what numeric format we will use. For this kind of filter, it is obvious that we will need a signed format, and the capability to manage decimal numbers. To achieve these requirements, we have to choose between fixed point and floating point, and for me the decision is clear: I want a module that can be instantiated alone, without an external floating-point unit, so the format will be fixed point. Adding the requirement of parametrization, we will need parameters to define the width of the signals, and also the width of the decimal part, which is related to the precision we need. The precision will be selected so that the response of the implemented filter is as similar as possible to the designed one. Now, how many widths do we need to parametrize? We could define a single width for the inputs, outputs and coefficients, but then the width of the coefficients would be dictated by the data generator, and for some filters, whose coefficients are close to the stability limit, this could be a problem. So we will define at least 2 different widths: one for the inputs and outputs, according to the data source and the data sink, and another according to the coefficient resolution we need. Another thing to consider is the resolution of the internal operations, because in some cases a stability problem is not related to the width of the coefficients themselves but to the resolution of the operations; at this point it is interesting to decouple the width of the coefficients generated in MATLAB or Python from the width of the internal filter operations. Finally, the filter parameters will look like the following.
parameter inout_width = 16,
parameter inout_decimal_width = 15,
parameter coefficient_width = 16,
parameter coefficient_decimal_width = 15,
parameter internal_width = 16,
parameter internal_decimal_width = 15
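The coefficients computed in MATLAB or Python are floating-point numbers, so before driving the coefficient inputs they must be quantized to the coefficient Q format. A minimal sketch of that conversion (my own helper; the name to_fixed is not part of the module):
def to_fixed(value, width=16, decimal_width=15):
    # Round to a signed Q(width-decimal_width).(decimal_width) integer.
    scaled = round(value * (1 << decimal_width))
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    if not lo <= scaled <= hi:
        raise ValueError(f"{value} does not fit in {width} bits")
    return scaled & ((1 << width) - 1)  # two's-complement bit pattern
print(hex(to_fixed(0.5)))    # 0x4000 in the default I1Q15 format
print(hex(to_fixed(-0.25)))  # 0xe000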
Next, we have to think about the interfaces. For applications where data is transferred between modules in a continuous way, AXI4-Stream is the best option. On the input and output side, we will define slave and master AXI4-Stream interfaces to receive and send data. Although the input and output widths are parametrized, if we want to connect the module to an existing AXI4-Stream IP, these widths are restricted to the width of the bus.
input aclk,
input resetn,
/* slave axis interface */
input [inout_width-1:0] s_axis_tdata,
input s_axis_tlast,
input s_axis_tvalid,
/* master axis interface */
output reg [inout_width-1:0] m_axis_tdata,
output reg m_axis_tlast,
output reg m_axis_tvalid,
Regarding the coefficients, the best way to insert them would be an AXI4-Lite interface, but I want to design this module without the need for a Zynq or MicroBlaze, so the easiest way to insert the coefficients is as inputs.
/* coefficients */
input signed [coefficient_width-1:0] b0,
input signed [coefficient_width-1:0] b1,
input signed [coefficient_width-1:0] b2,
input signed [coefficient_width-1:0] a1,
input signed [coefficient_width-1:0] a2
Now, with all the inputs and outputs defined, we have to think about the data flow. First, since we have different formats defined, to be able to operate on the coefficients and data we have to convert all signals to the internal format, which is defined by the internal width and its decimal width. The integer width used is defined as a localparam and corresponds to the difference between the total width and the decimal width. To change the format, we fill the low side of the signal with zeros until the decimal part is complete. For the integer part, as the format used is signed, we have to perform a sign extension, that is, fill with the value of the MSb until the integer part is complete. Notice that this works because the internal width is greater than or equal to the inout and coefficient widths. Digital filters, both FIR and IIR, need to store the values of past inputs, and IIR filters also the values of past outputs, so we will need a pipeline structure to store the past values.
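Before looking at the pipeline registers below, here is a bit-level model of the conversion just described (a sketch of mine; the helper name widen is an assumption). Zero-filling the extra decimal bits is a left shift, and sign extension is implicit when the sample is held in a signed Python integer:
def widen(value, decimal_width, internal_decimal_width):
    # value is a signed integer holding the Q-format sample; shifting left
    # by the difference in decimal widths zero-fills the new fractional
    # LSBs. In Verilog the wider integer field is filled by replicating
    # the MSb; Python's arbitrary-precision signed ints do this implicitly.
    return value << (internal_decimal_width - decimal_width)
print(widen(-8192, 15, 30))  # -0.25 in I1Q15 -> -268435456 in I2Q30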
/* pipeline registers */
always @(posedge aclk)
if (!resetn) begin
input_pipe1 <= 0;
input_pipe2 <= 0;
output_pipe1 <= 0;
output_pipe2 <= 0;
end
else
if (s_axis_tvalid) begin
input_pipe1 <= input_int;
input_pipe2 <= input_pipe1;
output_pipe1 <= output_int;
output_pipe2 <= output_pipe1;
end
Now, the next step is to perform the filter calculations. A second-order filter needs to perform 5 multiplications, and remember that this does not necessarily mean that the module uses 5 DSP slices. The code I developed performs combinational multiplications. This allows 0 clock cycles of delay, but limits the speed of the filter. If your timing constraints are not met, you can register the inputs of the multipliers and then apply retiming to let Vivado select where it is more efficient to put the registers.
/* combinational multiplications (b-side assigns reconstructed, same pattern) */
assign input_b0 = input_int * b0_int;
assign input_b1 = input_pipe1 * b1_int;
assign input_b2 = input_pipe2 * b2_int;
assign output_a1 = output_pipe1 * a1_int;
assign output_a2 = output_pipe2 * a2_int;
The results of the multiplications are added, in the case of the inputs, and subtracted, in the case of the outputs, to obtain the filter output. As the output of each product has twice the width of the operands, the addition must have the same width. To use the output in the multiplications, we have to perform a shift to remove the extra decimal positions; the extra integer positions are truncated in the assignment.
assign output_2int = input_b0 + input_b1 + input_b2 - output_a1 - output_a2;
assign output_int = output_2int >>> (internal_decimal_width);
Finally, the value of the output will be reformatted to the inout widths.
assign m_axis_tdata = output_int >>> (internal_decimal_width-inout_decimal_width);
Regarding the AXI4-Stream management signals, as the filter behaves as a bridge with one cycle of delay, the management signals pass through the filter with one cycle of delay.
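Putting the pieces together, a bit-accurate software model of one filter update is useful for generating expected testbench values. This is my own sketch, mirroring the difference equation and the '>>>' truncation above; all names are assumptions:
def biquad_step(x, state, coeffs, internal_decimal_width=30):
    # One biquad update. state = [x1, x2, y1, y2] and coeffs =
    # (b0, b1, b2, a1, a2), all signed integers in the internal Q format.
    x1, x2, y1, y2 = state
    b0, b1, b2, a1, a2 = coeffs
    # Double-width accumulation, like the Verilog output_2int signal.
    acc = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
    y = acc >> internal_decimal_width  # arithmetic shift, like '>>>'
    return y, [x, x1, y, y1]           # updated pipeline registers
Feeding a unit step through this model should match the simulation log sample by sample, which is essentially what the verification below checks.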
Once the system is implemented we can test it.
### Module verification.
In order to verify the behavior of the filter, we will configure the system as a low-pass filter with a cutoff frequency of 1 kHz and a sampling frequency of 100 kHz (script lowpass_sos.m). Then, we apply a unit step to the filter. To obtain the response of the implemented filter, we configure the coefficients with a format of 20 bits, 18 of them decimal. The internal width will be the same as the coefficient width. In MATLAB, we can also test the response of the quantized filter. The testbench used (axis_biquad_tb.v) logs the output data at the sampling frequency, so the outputs of both MATLAB and XSIM must be identical, with a gain of 1000 in the case of the simulation. The first response is made with an internal format of I2Q18, and we can see how the gain at DC is attenuated. This is due to the resolution of the internal operations, since MATLAB performs the operations with a resolution of 32 bits.
Now, with the same resolution of the coefficients, we will change the internal width to 32 bits, with 30 bits on the decimal side, and we obtain a response identical to the one obtained on the MATLAB model.
Configuring again an internal width of I2Q18, we can see how, when increasing the Q factor, the response of the filter remains stable and keeps an acceptable gain.
The use of a higher or lower resolution will depend on the application. Whether it is worth doubling the multiplier resources to achieve a DC gain almost identical to the continuous model will depend on the number of DSP slices available on the part you are using, but in most cases, applying a gain to the transfer function of the filter will be enough. Another option would be to use fewer multipliers than multiplications, and use a pipeline to perform the operations. This option is valid if there are enough clock cycles between data valid signals to perform all the operations. This technique is known as folding, and the ratio between the number of operations and the number of multipliers is called the folding factor; for example, performing this filter's 5 products on a single multiplier gives a folding factor of 5, so at least 5 clock cycles are needed between input samples.
One important thing about this post is that we have implemented a generic second-order system, which can be used as a filter, as a regulator, or even as a plant model to simulate plant behavior on an FPGA. This kind of system is powerful, and surely in a future post we will use this module to create some cool projects.
https://de.maplesoft.com/support/help/maplesim/view.aspx?path=Algebraic%2FPseudoDivision | Algebraic - Maple Programming Help
Algebraic
PseudoDivision
pseudo-division of polynomials with algebraic number coefficients
Calling Sequence
PseudoDivision(a, b, x, options)
PseudoDivision(a, b, x, m, q, options)
Parameters
a, b - multivariate polynomials in x with algebraic number coefficients
x - name
m, q - (optional) unevaluated names
options - (optional) equation(s) of the form keyword = value, where keyword is either 'symbolic', 'makeindependent', or 'characteristic'
Options
• If the option 'symbolic' = true is given and a RootOf whose minimal polynomial factors nontrivially is detected, then it will be reduced to a RootOf of lower degree by picking one of the factors arbitrarily. This will eliminate the possibility of a "reducible RootOf detected" error. The default is 'symbolic'=false.
• If the option 'characteristic' = p is given, where p is a non-negative integer, the pseudo-division is performed over an extension of the ring ${\mathrm{ℤ}}_{p}$ of integers modulo p. The default is 'characteristic'=0 and means that the pseudo-division is performed over an extension of the rational numbers.
• If the option 'makeindependent'=true is given, then PseudoDivision will always try to find a field representation for algebraic numbers in the input, regardless of how many algebraic objects the input contains. If the input contains many RootOfs, then this can be a very expensive calculation. If 'makeindependent'=false is given, then no independence checking is performed. The default is 'makeindependent'=FAIL, in which case algebraic dependencies will only be checked for if there are $4$ or fewer algebraic objects in the input.
Description
• The PseudoDivision command performs pseudo-division on multivariate polynomials a and b with respect to the variable x. It returns the pseudo-remainder r such that the relationship $ma=bq+r$ is satisfied, where m is the multiplier and q is the pseudo-quotient. r and q are polynomials in x, with degree(r,x) < degree(b,x). The multiplier m is defined as lcoeff(b, x) ^ (degree(a, x)-degree(b, x)+1) and is free of x.
• If the optional arguments m and q are included, they will be assigned the values of the multiplier and pseudo-quotient respectively.
• The inputs a and b may contain algebraic number coefficients. These may be represented by radicals or with the RootOf notation (see type,algnum, type,radnum). In general, algebraic numbers will be returned in the same representation as they were received in. Nested radicals and RootOfs are also supported.
• The property $ma=bq+r$ will hold in the domain K[x], where K is an algebraic field generated over the rationals and any algebraic number coefficients occurring in a and b (unless the option 'characteristic' is given; see below).
• Non-algebraic sub-expressions such as $\mathrm{sin}\left(x\right)$ that are neither variables, rational numbers, or algebraic objects are frozen and temporarily replaced by new local variables, which are not considered to be constant in what follows.
• The arguments a and b must be polynomials in the variable x, but may contain other names, which are considered as elements of the coefficient field.
• The x parameter can also be a function such as $\mathrm{sin}\left(x\right)$, in which case it will be frozen and treated as a variable. However, functions that are also of type AlgebraicObject such as $\mathrm{sin}\left(\frac{\mathrm{\pi }}{3}\right)$ will be converted to algebraic numbers before proceeding, so they cannot be treated as variables. Proceed with caution when using a function for x, as treating some functions as variables may produce mathematically unsound results.
• The inputs a and b can be polynomials disguised as rational functions, in which case they are normalized first using Algebraic[Normal].
• The output will be normalized as follows:
– All non-constant factors are monic with respect to x.
– There are at most two constant factors, and at most one of them is not a rational number.
– All algebraic numbers occurring in the result are reduced modulo their minimal polynomial (see Reduce), and all arguments of functions, if any, are normalized recursively (see Normal).
• If the set of radicals and RootOfs in the input cannot be embedded into a field algebraically, then PseudoDivision may not be able to perform the division. PseudoDivision will try to find a field representation if there are at most $4$ algebraic objects in the input (unless option 'makeindependent' is given; see below), and otherwise attempt to proceed anyway. If unsuccessful, a "reducible RootOf detected" error will be returned. (unless the option 'symbolic'=true is given; see below).
• This function does not support input containing floats or radical functions such as $\sqrt{x}$.
Examples
> with(Algebraic):
Introductory examples:
> PseudoDivision(x^2+2,2*x+sqrt(2),x,'m1','q1');
${10}$ (1)
> [m1,q1];
$\left[{4}{,}{2}{}{x}{-}\sqrt{{2}}\right]$ (2)
> expand(4*(x^2+2)-(2*x+sqrt(2))*(2*x-sqrt(2)));
${10}$ (3)
> [PseudoDivision(x^3+y^3+5,I*x+I*y,x,'m2','q2'),m2,q2];
$\left[{-}{5}{}{I}{,}{-I}{,}{-}{{x}}^{{2}}{+}{y}{}{x}{-}{{y}}^{{2}}\right]$ (4)
> r1:=RootOf(_Z^2-_Z-1);
${\mathrm{r1}}{≔}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{\mathrm{_Z}}{-}{1}\right)$ (5)
> [PseudoDivision(x^2+1,r1*x,x,'m3','q3'),m3,q3];
$\left[{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{\mathrm{_Z}}{-}{1}\right){+}{1}{,}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{\mathrm{_Z}}{-}{1}\right){+}{1}{,}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{\mathrm{_Z}}{-}{1}\right){}{x}\right]$ (6)
The input may contain both radicals and RootOfs, and PseudoDivision will embed the coefficients into an algebraic field, if possible:
> [PseudoDivision(x*y*z,sqrt(2)*x-RootOf(_Z^2-8,index=1),x,'m4','q4'),m4,q4];
$\left[{2}{}\sqrt{{2}}{}{z}{}{y}{,}\sqrt{{2}}{,}{z}{}{y}\right]$ (7)
> [PseudoDivision(sqrt(2)*x,sqrt(3)*x-sqrt(6),x,'m5','q5'),m5,q5];
$\left[{2}{}\sqrt{{3}}{,}\sqrt{{3}}{,}\sqrt{{2}}\right]$ (8)
Nested and mixed radicals and RootOfs are handled as well:
> [PseudoDivision(x^2-4^(-2/3)+1,2*x+RootOf(_Z^3-2,index=1),x,'m6','q6'),m6,q6];
$\left[{4}{,}{4}{,}{2}{}{x}{-}{{2}}^{{1}{/}{3}}\right]$ (9)
> [PseudoDivision(sqrt(RootOf(_Z^2-3,index=1))*x,RootOf(_Z^2-RootOf(_Z^2-3,index=1),index=1),x,'m7','q7'),m7,q7];
$\left[{0}{,}\sqrt{{3}}{,}\sqrt{{3}}{}{x}\right]$ (10)
Multivariate input is accepted, but pseudo-division will only be performed on the input with respect to the single variable given in the x parameter, with all other names being considered as elements of the coefficient field:
> [PseudoDivision(x^2*y^2+7,x*y,x,'m8','q8'),m8,q8];
$\left[{7}{}{{y}}^{{2}}{,}{{y}}^{{2}}{,}{{y}}^{{3}}{}{x}\right]$ (11)
> [PseudoDivision(x^2*y^2+7,x*y,y,'m9','q9'),m9,q9];
$\left[{7}{}{{x}}^{{2}}{,}{{x}}^{{2}}{,}{{x}}^{{3}}{}{y}\right]$ (12)
> [PseudoDivision(x^2*y^2+7,x*y,z,'m10','q10'),m10,q10];
$\left[{0}{,}{y}{}{x}{,}{{x}}^{{2}}{}{{y}}^{{2}}{+}{7}\right]$ (13)
If the degree of a is less than the degree of b, then the multiplier will be $1$ and the pseudo-quotient will be $0$:
> [PseudoDivision(x^2,x^3,x,'m11','q11'),m11,q11];
$\left[{{x}}^{{2}}{,}{1}{,}{0}\right]$ (14)
If b is a non-zero constant, the pseudo-remainder will always be zero:
> [PseudoDivision(x^10,sqrt(2),x,'m11','q11'),m11,q11];
$\left[{0}{,}{32}{}\sqrt{{2}}{,}{32}{}{{x}}^{{10}}\right]$ (15)
The output will always be fully reduced and normalized:
> [PseudoDivision(x^2,RootOf(_Z^2-_Z-1),x,'m12','q12'),m12,q12];
$\left[{0}{,}{2}{}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{\mathrm{_Z}}{-}{1}\right){+}{1}{,}{{x}}^{{2}}{}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{\mathrm{_Z}}{-}{1}\right){+}{{x}}^{{2}}\right]$ (16)
> [PseudoDivision(x/RootOf(_Z^2+6*_Z-1)+7,2*x,x,'m13','q13'),m13,q13];
$\left[{14}{,}{2}{,}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{+}{6}{}{\mathrm{_Z}}{-}{1}\right){+}{6}\right]$ (17)
Algebraic Objects will be converted to algebraic numbers, if possible:
> [PseudoDivision(x+sin(Pi/4),sqrt(2)*x+exp(I*Pi),x,'m14','q14'),m14,q14];
$\left[{2}{,}\sqrt{{2}}{,}{1}\right]$ (18)
Non-algebraic sub-expressions such as $\mathrm{sin}\left(x\right)$ will be frozen and temporarily replaced by new local variables:
> [PseudoDivision(z^2+1,z*sin(x),z,'m15','q15'),m15,q15];
$\left[{{\mathrm{sin}}{}\left({x}\right)}^{{2}}{,}{{\mathrm{sin}}{}\left({x}\right)}^{{2}}{,}{z}{}{\mathrm{sin}}{}\left({x}\right)\right]$ (19)
The x parameter can also be a function, as long as it is not an Algebraic Object:
> [PseudoDivision(sin(x)+4,2*sin(x),sin(x),'m16','q16'),m16,q16];
$\left[{8}{,}{2}{,}{1}\right]$ (20)
> a:=Pi/4;
${a}{≔}\frac{{\mathrm{\pi }}}{{4}}$ (21)
> PseudoDivision(sin(a)+4,2*sin(a),sin(a));
Arguments of functions in the input will be recursively normalized:
> [PseudoDivision(tan((x^2-1)/(x+1))+exp(RootOf(_Z^2-_Z-1)^3),tan(x-sin(Pi/2)),tan(x-1),'m17','q17'),m17,q17];
$\left[{{ⅇ}}^{{2}{}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{\mathrm{_Z}}{-}{1}\right){+}{1}}{,}{1}{,}{1}\right]$ (22)
Non-algebraic sub-expressions may become algebraic after recursive normalization occurs:
> [PseudoDivision(sin((x^2-1)/(x-1)/(x+1)*Pi/4)*x,x+1,x,'m18','q18'),m18,q18];
$\left[{-}\frac{\sqrt{{2}}}{{2}}{,}{1}{,}\frac{\sqrt{{2}}}{{2}}\right]$ (23)
Rational functions are not accepted:
> PseudoDivision(1/x,1/x^2,x);
Floats are not accepted.
Algebraic functions such as $\sqrt{x}$ are not accepted:
> PseudoDivision(x^2+2*x*sqrt(x)+x,x+sqrt(x),x);
When non-indexed RootOfs are given in the input, the pseudo-division can still be performed and the output expressed in terms of the non-indexed RootOfs:
> [PseudoDivision(x^2-1+RootOf(_Z^2-3),2*x-RootOf(_Z^2-4),x,'m19','q19'),m19,q19];
$\left[{4}{}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{3}\right){,}{4}{,}{2}{}{x}{+}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{4}\right)\right]$ (24)
> [PseudoDivision(x+sqrt(2),2*x+RootOf(_Z^2-8),x,'m20','q20'),m20,q20];
$\left[{-}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{8}\right){+}{2}{}\sqrt{{2}}{,}{2}{,}{1}\right]$ (25)
Even if the leading coefficients of the input contain zero divisors, PseudoDivision can still compute the pseudo-remainder, multiplier, and pseudo-quotient in terms of the input such that the identity $ma=bq+r$ is preserved:
> [PseudoDivision(sqrt(2)*x,RootOf(_Z^2-2,index=1)*x+sqrt(2)*x+1,x,'m21','q21'),m21,q21];
$\left[{-}\sqrt{{2}}{,}{2}{}\sqrt{{2}}{,}\sqrt{{2}}\right]$ (26)
> [PseudoDivision(sqrt(2)*x,RootOf(_Z^2-2,index=2)*x+sqrt(2)*x+1,x,'m22','q22'),m22,q22];
$\left[{0}{,}{1}{,}{x}{}\sqrt{{2}}\right]$ (27)
> [PseudoDivision(sqrt(2)*x,RootOf(_Z^2-2)*x+sqrt(2)*x+1,x,'m23','q23'),m23,q23];
$\left[{-}\sqrt{{2}}{,}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{2}\right){+}\sqrt{{2}}{,}\sqrt{{2}}\right]$ (28)
Also, if the second argument is a zero divisor, then the computation will be performed anyway and an answer will be returned such that the identity $ma=bq+r$ is preserved:
> [PseudoDivision(x,sqrt(2)-RootOf(_Z^2-2),x,'m24','q24'),m24,q24];
$\left[{0}{,}{-}{2}{}\sqrt{{2}}{}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{2}\right){+}{4}{,}{x}{}\sqrt{{2}}{-}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{2}}{-}{2}\right){}{x}\right]$ (29)
Using option characteristic, pseudo-division can be performed over finite fields:
> [PseudoDivision(x^2+2*x+2,2*x+1,x,'m25','q25','characteristic'=0),m25,q25];
$\left[{5}{,}{4}{,}{2}{}{x}{+}{3}\right]$ (30)
> [PseudoDivision(x^2+2*x+2,2*x+1,x,'m26','q26','characteristic'=3),m26,q26];
$\left[{2}{,}{1}{,}{2}{}{x}\right]$ (31)
> [PseudoDivision(x^2,I*x-sqrt(2),x,'m27','q27','characteristic'=7),m27,q27];
$\left[{2}{,}{6}{,}{I}{}{x}{+}\sqrt{{2}}\right]$ (32)
If a RootOf with a non-invertible leading coefficient is detected, an error may be returned:
> PseudoDivision(x,RootOf(RootOf(_Z^2-_Z)*_Z+1)*x+1,x);
In such a case, using option 'symbolic'=true will force PseudoDivision to select one of the factors and perform the computation. Here, it makes the substitution $\mathrm{RootOf}\left(\mathrm{RootOf}\left({\mathrm{_Z}}^{2}-\mathrm{_Z}\right)\mathrm{_Z}+1\right)=-1$:
> [PseudoDivision(x,RootOf(RootOf(_Z^2-_Z)*_Z+1)*x+1,x,'m29','q29','symbolic'=true),m29,q29];
$\left[{-1}{,}{-1}{,}{1}\right]$ (33)
With option 'makeindependent'=true, the input will be checked for algebraic dependencies even if there are more than $4$ algebraic objects in the input:
> CubeRootOf[-4]:=RootOf(_Z^3+4,index=1);
${{\mathrm{CubeRootOf}}}_{{-4}}{≔}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{+}{4}{,}{\mathrm{index}}{=}{1}\right)$ (34)
> CubeRootOf[-2]:=RootOf(_Z^3+2,index=1);
${{\mathrm{CubeRootOf}}}_{{-2}}{≔}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{+}{2}{,}{\mathrm{index}}{=}{1}\right)$ (35)
> CubeRootOf[2]:=RootOf(_Z^3-2,index=1);
${{\mathrm{CubeRootOf}}}_{{2}}{≔}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{2}{,}{\mathrm{index}}{=}{1}\right)$ (36)
> CubeRootOf[3]:=RootOf(_Z^3-3,index=1);
${{\mathrm{CubeRootOf}}}_{{3}}{≔}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{3}{,}{\mathrm{index}}{=}{1}\right)$ (37)
> CubeRootOf[4]:=RootOf(_Z^3-4,index=1);
${{\mathrm{CubeRootOf}}}_{{4}}{≔}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{4}{,}{\mathrm{index}}{=}{1}\right)$ (38)
> CubeRootOf[6]:=RootOf(_Z^3-6,index=1);
${{\mathrm{CubeRootOf}}}_{{6}}{≔}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{6}{,}{\mathrm{index}}{=}{1}\right)$ (39)
> [PseudoDivision(x+CubeRootOf[4]*CubeRootOf[-2]*CubeRootOf[3],x+CubeRootOf[-4]*CubeRootOf[6],x,'m30','q30'),m30,q30];
$\left[{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{+}{2}{,}{\mathrm{index}}{=}{1}\right){}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{3}{,}{\mathrm{index}}{=}{1}\right){}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{4}{,}{\mathrm{index}}{=}{1}\right){-}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{+}{4}{,}{\mathrm{index}}{=}{1}\right){}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{6}{,}{\mathrm{index}}{=}{1}\right){,}{1}{,}{1}\right]$ (40)
> [PseudoDivision(x+CubeRootOf[4]*CubeRootOf[-2]*CubeRootOf[3],x+CubeRootOf[-4]*CubeRootOf[6],x,'m31','q31','makeindependent'=true),m31,q31];
$\left[{0}{,}{1}{,}{1}\right]$ (41)
With option 'makeindependent'=false, the input will never be checked for algebraic dependencies:
> [PseudoDivision(x+CubeRootOf[2]*CubeRootOf[3],x+CubeRootOf[6],x,'m32','q32'),m32,q32];
$\left[{0}{,}{1}{,}{1}\right]$ (42)
> [PseudoDivision(x+CubeRootOf[2]*CubeRootOf[3],x+CubeRootOf[6],x,'m33','q33','makeindependent'=false),m33,q33];
$\left[{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{2}{,}{\mathrm{index}}{=}{1}\right){}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{3}{,}{\mathrm{index}}{=}{1}\right){-}{\mathrm{RootOf}}{}\left({{\mathrm{_Z}}}^{{3}}{-}{6}{,}{\mathrm{index}}{=}{1}\right){,}{1}{,}{1}\right]$ (43) | 2021-05-17 14:10:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 60, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8993458151817322, "perplexity": 2290.4053062312328}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00069.warc.gz"} |
https://zbmath.org/?q=0824.35038 | ## A note on the existence of two nontrivial solutions of a resonance problem.(English)Zbl 0824.35038
Existence of two nontrivial solutions of a semilinear problem at resonance is proved in this paper. The problem $$- \Delta u = \lambda_1 u + g(x,u)$$ in $$G$$, $$u = 0$$ on $$\partial G$$ is studied, where $$G$$ is a smooth bounded domain in $$\mathbb{R}^N$$, $$\Delta$$ is the usual Laplacian, $$\lambda_1$$ is the first eigenvalue of $$- \Delta$$, and $$g(x,u)$$ is a Carathéodory function such that $$g(x,0) = 0$$, $$| g(x,s) | \leq a | s |^p + b$$ with $$a,b > 0$$ and $$0 < p < (N + 2)/(N - 2)$$ if $$N \geq 3$$, and $$| G(x,s) | \leq k(x)$$ (with $$G(x,s) = \int_0^s g(x,t)\,dt$$) for some $$k \in L^1(G)$$. If, moreover, $$\lim_{s \to 0} G(x,s)/s^2 = m(x)$$ in the $$L^1(G)$$ sense with $$m \geq 0$$, $$\int_G \limsup_{| s | \to \infty} G(x,s)\,dx \leq 0$$, and $$G(x,s) \leq (\lambda_2 - \lambda_1)s^2/2$$ for all $$s$$ (here $$\lambda_2$$ is the second eigenvalue), then the problem has (at least) two nontrivial solutions. The method of proof is variational: the associated functional is $$C^1$$ on $$H^1_0(G)$$ and bounded from below, but it is not coercive. However, it is possible to show that the Palais-Smale condition holds on some interval, and this together with a deformation lemma allows one to conclude by using minimax arguments.
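For orientation (not spelled out in the review, but standard for this problem), the associated functional referred to above is
$$J(u) = \frac{1}{2}\int_G |\nabla u|^2\,dx - \frac{\lambda_1}{2}\int_G u^2\,dx - \int_G G(x,u)\,dx, \qquad u \in H^1_0(G),$$
whose critical points are precisely the weak solutions of the boundary value problem.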
### MSC:
35J65 Nonlinear boundary value problems for linear elliptic equations
58E05 Abstract critical point theory (Morse theory, Lyusternik-Shnirel’man theory, etc.) in infinite-dimensional spaces
35J20 Variational methods for second-order elliptic equations
Full Text: | 2022-08-12 18:37:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9219858050346375, "perplexity": 127.19589468423777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571745.28/warc/CC-MAIN-20220812170436-20220812200436-00097.warc.gz"} |
http://mathoverflow.net/questions/133461/visualizing-functions-with-a-number-of-independant-variables | # Visualizing functions with a number of independant variables
I need to graph real-valued functions (for exposition and analysis). The issue is that there are more independent variables than conventional graphing methods can handle, and furthermore I don't want to slice the function.
1. These functions are like s =f1(x,y,z,t) and s= f2(x,y,z,t,k)
2. I also have vector functions of the same type v = g1 (x,y,z,t) and v = g2 (x,y,z,t,k)
The motivation is to see the function intuitively in one go and maybe compare them. I know that the limitations of physical dimensions are there, and domain coloring has its own limits in this regard. My question is:
Q: Do we have a visualization methodology for such requirements? It would be a benefit if one could refer to a software tool.
Any thought/suggestion is welcome.
I have magnificent results with ContourPlot3D in Mathematica for functions and vectors depending on 3 independent variables. Maybe something can be done with local surface coloring to interpret the 4th dimension. – John Compter Dec 11 '13 at 19:19
Often in my experience it's not necessary to show 5-way interactions – a 5-input function is really separable, with interesting interactions being 2-way but rarely 3-way. It also pays, imo, to look for ways to reduce dimensionality when you don't absolutely have to use necessarily overwrought visualisation techniques (Chernoff-Fleury faces, symphonies) which, like a time-lapse not over time, are entertaining but not super clear. – isomorphismes Mar 23 at 0:06
For time-dependent and three-dimensional data, there exist a number of established programs already; among the most popular free and open source ones are Paraview and VisIt. Both support a variety of plots such as isosurfaces, volume rendering or (for vector-valued data) arrow or streamline plots.
They all have a bit of a learning curve, though, and you should not expect to be able to see a four- or five-dimensional function on a two-dimensional screen "at a glance". Visualization of, and data-mining from, high-dimensional data are very active research topics in computational science, though. I would thus suggest thinking about what kind of information about your functions you would like to see, and then asking on the Computational Science SE. (See for example this question.)
Using a perspective / "volume rendering" covers 3 variables (x,y,z); "using transparency and different colours ... to visually grasp multiple of those surfaces" adds 1 variable (k); "use the time and animate your plot" adds 1 variable (t); and representing each point in a 3-dimensional colour space adds 3 variables (u,v,w). So we CAN see a 3d vector plot of 5 independent variables – ARi Jun 12 '13 at 16:10
Not quite, because color and transparency is already used to visualize a three-dimensional volume on a two-dimensional screen (that's how volume rendering works), so they are no longer (fully) available for visualizing your other four independent variables. – Christian Clason Jun 12 '13 at 16:35
Thanks any way. – ARi Jun 12 '13 at 16:38
You are very welcome. And if you do succeed in visualizing a 3d vector plot of 5 independent variables, I'd love to see a screenshot :) – Christian Clason Jun 12 '13 at 16:43
A picture is worth a thousand words, a movie is worth a thousand pictures, and an interactive app is worth a thousand movies. I would suggest making a picture that dynamically responds to your changing variables. For instance,
http://www.math.osu.edu/~fowler.291/phase/
lets you type in a function of two complex variables (z and mouse), and you get a phase plot of the function where the z domain is being colored.
I am currently working on an improved version of this which will use WebGL, accept arbitrarily many variables (slide points around), and admit several view options (Riemann sphere, 3d graph of modulus colored by phase, etc.). This will be used in an online introduction to complex analysis that I will be helping to run next spring.
If you have a function in three variables $f(x,y,z)$, you can try to plot surfaces solving the equation $f(x,y,z)=r_i$. Using transparency and different colours you might be able to visually grasp multiple of those surfaces at once (choose $r_0,\ldots r_4$ wisely and plot these $5$ surfaces (you also have to choose the perspective accordingly)). For $t$ use the time and animate your plot. But for five variables I have no idea.
I have no practical experience with it, but Octave seems to have some support for such implicit surfaces; I don't know whether it will be sufficient in your case.
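One concrete way to do this (not mentioned in the thread, so purely a sketch): extract each level set $f(x,y,z)=r_i$ with marching cubes and draw it as a translucent mesh. A minimal Python example, assuming scikit-image and Matplotlib are available:

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older Matplotlib)
from skimage import measure

# example scalar field f(x, y, z) sampled on a regular grid over [-2, 2]^3
x, y, z = np.mgrid[-2:2:60j, -2:2:60j, -2:2:60j]
f = np.sin(x * y) + np.cos(y * z) + x**2 - z

levels = [-1.0, 0.0, 1.0]                        # the r_i values
colors = ["tab:blue", "tab:orange", "tab:green"]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for r, c in zip(levels, colors):
    verts, faces, _, _ = measure.marching_cubes(f, level=r)
    verts = verts / (np.array(f.shape) - 1) * 4.0 - 2.0   # grid indices -> [-2, 2]
    ax.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2],
                    triangles=faces, color=c, alpha=0.3)
plt.show()
```

Animating such a figure over an extra parameter (e.g. with matplotlib.animation) then covers the fourth variable $t$, as suggested above.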
You are describing isosurfaces, to give a keyword for searching further information. – Christian Clason Jun 12 '13 at 14:56
Using a "video" of the plot for representing time t seems interesting, though I would need to find a good software for it.. Thanks What about vector plot..any ideas are appreciated. – ARi Jun 12 '13 at 15:17
A tool for visualizing functions that is sometimes as powerful as graphs is the mapping diagram. The simple idea is that each real variable is represented on its own parallel axis in 3-space. One places a point not on any axis to represent the function, draws arrows from points on the domain axes to that function point, and draws arrows from the function point to the points on the target axes corresponding to the values of the function at those domain points.
See Alfred Inselberg. Parallel Coordinates: Visual Multidimensional Geometry and Its Applications (Springer, Oct 8, 2009) for more on multi-dimensionsal connections and http://users.humboldt.edu/flashman/MD/section-1.1VF.html for an (draft) introduction to visualizing functions of one variable with mapping diagrams.
- | 2015-04-19 03:08:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.378083735704422, "perplexity": 1189.1835990431907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246637364.20/warc/CC-MAIN-20150417045717-00087-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/67983-factors-polynomial.html | # Math Help - factors of a polynomial
1. ## factors of a polynomial
The polynomial $2x^4-ax^3+19x^2-20x+12$ has a factor of the form $(x-k)^2$, where $k$ is a natural number. Find the values of $k$ and $a$.
2. Hello,
The polynomial $2x^4-ax^3+19x^2-20x+12$ has a factor of the form $(x-k)^2$, where $k$ is a natural number. Find the values of $k$ and $a$.
$f(x)=2x^4-ax^3+19x^2-20x+12$
If it has such a factor, then $f(k)=0$ and $f'(k)=0$
This should give you a system of 2 equations, and you can solve for a and k !
3. Originally Posted by Moo
Hello,
$f(x)=2x^4-ax^3+19x^2-20x+12$
If it has such a factor, then $f(k)=0$ and $f'(k)=0$
This should give you a system of 2 equations, and you can solve for a and k !
Thanks Moo, just wondering about $f(k')$: does it mean differentiate, and why does it equal 0?
4. It's f'(k), and yes, it means that you differentiate f, and then you take the value x=k.
Okay, let's say (x-k)² is a factor for f.
There exists a polynomial Q such that :
$f(x)=(x-k)^2 Q(x)$
It is obvious that $f(k)=0$
Now if you differentiate, use the product rule and you'll have :
$f'(x)=(x-k)^2 Q'(x)+2(x-k)Q(x)=(x-k)[(x-k)Q'(x)+2Q(x)]$
Is it clear that $f'(k)=0$ ?
In fact, it is true for higher powers : if f has a factor $(x-k)^n$, then $f(k)=f'(k)=f^{(2)}(k)=\dots=f^{(n-1)}(k)=0$
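For reference (not part of the original posts), the two conditions $f(k)=0$ and $f'(k)=0$ can be handed straight to a computer algebra system; a short SymPy sketch for the problem above:

```python
import sympy as sp

x, a, k = sp.symbols('x a k')
f = 2*x**4 - a*x**3 + 19*x**2 - 20*x + 12

# a repeated factor (x - k)^2 forces f(k) = 0 and f'(k) = 0
sols = sp.solve([f.subs(x, k), sp.diff(f, x).subs(x, k)], [a, k], dict=True)
print(sols)  # the only solution with k a natural number is k = 2, a = 10

# sanity check: with a = 10 the division by (x - 2)^2 is exact
print(sp.div(f.subs(a, 10), (x - 2)**2, x))   # (2*x**2 - 2*x + 3, 0)
```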
5. ## Re :
Thanks a lot Moo for telling me these extra things which are not in my book . | 2014-08-30 07:44:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352405071258545, "perplexity": 369.6303200193126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834494.74/warc/CC-MAIN-20140820021354-00180-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.nature.com/articles/s41467-022-28366-w?error=cookies_not_supported&code=66451203-5d42-498c-8833-1eaf78566686 | ## Introduction
In recent decades, worldwide energy demand has risen dramatically1, and nearly 30% of industrial energy use is tied to chemical production2 that relies heavily on heterogeneous catalysis. To increase the efficiency and sustainability of catalytic processes, multidisciplinary analytical strategies are required beyond the conventional trial-and-error approach to design more efficient catalysts3. Specifically, rational catalyst design aims to establish and apply fundamental structure-activity relationships at the nanoscale. Two central requirements are (i) the identification of the active sites and (ii) a theoretical framework for the simulation and prediction of reaction mechanisms and kinetics. Combined experimental characterizations and first-principles modeling have been employed to investigate predefined active site structures4,5,6. However, accurate identification of the active site remains a significant challenge due to the dynamic nature of surface composition and morphology evolving in time7,8,9.
The class of dilute alloy catalysts has tremendous industrial importance because they can enhance the activity and selectivity of chemical reactions while minimizing the use of precious metals10,11,12,13,14. Major advancements are sought in understanding the nature of their catalytically active sites14. In bimetallic nanoparticle systems, intra- and inter-particle heterogeneities give rise to a diverse population of sites and particle morphologies15. Furthermore, the active site can dynamically restructure under reaction conditions16,17,18,19. Although density functional theory (DFT) is widely used to investigate the thermodynamics of surface segregation in alloy systems20,21,22,23,24, the large computational cost precludes its use for direct sampling of the large underlying configurational space as well as the long timescale. As such, the relationship between the active site structure and the corresponding reaction pathways and kinetics remains hidden.
A useful tool for unlocking this relationship is an operando experiment, in which changes in structural and chemical features are measured simultaneously with the catalytic activity25,26,27. However, the small amount of active component in dilute alloys, compounded with low weight loading of the particles, require characterization tools with sufficient surface and chemical sensitivities. Among many scattering and absorption-based techniques, in situ X-ray absorption spectroscopy (XAS), in the form of extended X-ray absorption fine structure (EXAFS) and X-ray absorption near edge structure (XANES), has proven to be well-suited for studying dilute alloy catalysts28,29,30. In particular, the recently developed neural network-assisted XANES inversion method (NN-XANES) has enabled the extraction of local coordination numbers directly from XANES31. This technique makes the analysis of active sites in dilute alloys possible under reaction conditions, far exceeding the capabilities of conventional EXAFS fitting28. To the best of our knowledge, NN-XANES, while demonstrated to be useful for investigating the structure and local composition of bimetallic nanoparticles28,32, has not yet been applied to decoding active site structure and activity mechanisms in any functional nanomaterial systems.
Obtaining coordination numbers and other structural descriptors of a particular catalytic component is necessary but not sufficient to conclusively determine the active site geometry. To do so, two challenges must be resolved. First, one needs to decouple XAS signals originating from spectator species that contribute to the spectrum but not to the reaction. Second, given a set of possible active site geometries, more than one candidate structure may agree with the structural descriptors obtained from NN-XANES. We overcome these challenges by combining catalytic activity measurements, machine-learning enabled spectroscopic analysis, and first-principles-based kinetic modeling (Fig. 1). Through joint experimental and theoretical methods, we determine (i) the local structural descriptors of the catalyst and (ii) the apparent kinetic parameters of the reaction network. Both the structural and kinetic criteria must be satisfied to ascertain the dominant active site species. This multimodal approach is demonstrated for the prototypical HD exchange reaction on Pd8Au92/RCT-SiO2 (RCT = raspberry--colloid--templated), previously shown to have excellent catalytic performance and stability for this reaction33, CO oxidation29, and selective hydrogenation30,34. Remarkably, we find that the activity of HD exchange is determined by the size of surface Pd ensembles at the order of only a few atoms, which can be controlled directly through different catalyst treatments.
## Results and discussion
### Treatment alters the catalyst activity
The dilute alloy Pd8Au92/RCT-SiO2 catalyst was synthesized according to the established procedures33 (Methods). The catalyst is composed of 4.6 ± 0.6 nm (mean size) bimetallic nanoparticles with 8.3 at% Pd, embedded in RCT-SiO2. From electron microscopy and energy dispersive spectroscopy (EDS) measurements (Methods), the majority of Pd is homogeneously mixed with Au prior to any treatments and catalysis (Supplementary Fig. 1).
Previous experiments have established that treating the catalyst with O2 and H2 alters the activity30, implying treatment-induced changes of the catalyst surface structure. The exact kinetic mechanism of surface rearrangement in dilute alloys is still unknown, but it has long been established that oxidative environments result in a thermodynamic preference for Pd to reside on the surface of PdAu alloy nanoparticle and model surfaces35,36. To investigate the result of the treatment-induced restructuring of the surface, the HD exchange reaction was examined after three sequential treatments (A, B, C) of our sample (Fig. 2a; Methods). State 1 (S1) was obtained after O2 treatment at 500 °C for 30 min (treatment A), followed by State 2 (S2) after H2 treatment at 150 °C for 30 min (treatment B), followed by State 3 (S3) after another H2 treatment over 210 min with step-wise heating to 150 °C (treatment C). Throughout both heating and cooling phases, a steady-state HD formation rate was established at each temperature (Supplementary Fig. 3a–c).
The three starting states exhibited three distinct HD exchange reaction kinetics (numbered correspondingly) (Fig. 2b). The apparent activation energies (Ea) and axis intercepts (A) were obtained from the Arrhenius analysis (Supplementary Fig. 3d, e). The HD-1 heating step exhibited the largest Ea = 0.67 ± 0.05 eV and A = 30. Starting with the HD-1 cooling phase, the values decreased rapidly, settling to Ea = 0.34 ± 0.05 eV and A = 18 by HD-2. The pronounced hysteresis between HD-1 heating and cooling suggests a change occurred during the reaction in addition to that induced by the treatment. Overall, the changes in the kinetic parameters suggest that the number and the nature of the active species present in S1 have changed in S2 and S3. This hypothesis is further supported and elucidated by spectroscopy and theoretical modeling.
### Treatment restructures the catalyst
Pd K-edge XAFS spectra were collected (Methods) for the catalyst in the initial state (S0), as well as S1 to S3 after sequential treatments A through C, respectively (Fig. 2a). Treatment-induced changes in the XANES were observed, quantified by the shift in the spectral center of mass (Fig. 2c, inset). After treatment A with O2 (S0 to S1), the center of mass shifts toward the limit defined by the bulk Pd reference, and, after treatment B with H2 (S1 to S2)—away from the bulk Pd past S0, remaining at the same position after further treatment C with H2 (S2 to S3). These shifts are direct evidence of the structural changes induced by the catalyst treatments. The shift away from the bulk Pd upon H2 treatment has been attributed to an increase in the number of Pd-Au neighbors resulting from Pd dissolution into the Au host28.
To understand these structural changes more precisely, quantitative analysis was performed using partial coordination numbers (CNs) ($${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Pd}}}}}}}$$ and $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Au}}}}}}}$$) obtained from the NN-XANES inversion method6 (Fig. 2d; Methods) and conventional EXAFS fitting (Supplementary Fig. 4; Methods). Because of the relatively low sensitivity of EXAFS to the Pd–Pd contribution in dilute Pd limit, only a weak Pd-Pd contribution was detected in S0 and S1 but not in S2 and S3 (Supplementary Table 1). The NN-XANES analysis reveals the same trend in the CNs as the EXAFS fitting but with lower relative uncertainties, allowing us to detect Pd-Pd bonding at all regimes. In all samples, $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Pd}}}}}}}$$ < 1.0, as expected for dilute alloys where Pd thermodynamically prefers to be fully coordinated with Au30.
From S0 to S1, treatment A with O2 slightly increases $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Pd}}}}}}}$$ and decreases $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Au}}}}}}}$$, consistent with mild segregation of Pd to the surface. In contrast, from S1 to S2, treatment B with H2 decreases $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Pd}}}}}}}$$ from 0.73 to 0.25 and increases $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Au}}}}}}}$$ from 10.57 to 11.38, consistent with Pd dissolution into the subsurface. Further evidence of dissolution is seen in the increase of the total Pd CN from 11.30 to 11.63. The full 12-fold coordination would correspond to 100% of Pd residing in the subsurface and being inaccessible for heterogeneous catalysis. The total Pd CN below 12 indicates the presence of some undercoordinated Pd on the surface. From S2 to S3, additional treatment C with H2 exhibits the same trend to a much lesser extent, i.e., $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Pd}}}}}}}$$ decreases and the total Pd CN increases.
The extent of Pd mixing with Au was analyzed by comparing the ratio of coordination numbers $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Pd}}}}}}}:{C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Au}}}}}}}$$ to the ratio of compositions $${x}_{{{{{{\rm{Pd}}}}}}}:{x}_{{{{{{\rm{Au}}}}}}}$$ (0.083:0.917). In all states, $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Pd}}}}}}}:{C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Au}}}}}}}$$ is less than $${x}_{{{{{{\rm{Pd}}}}}}}:{x}_{{{{{{\rm{Au}}}}}}}$$, which corresponds to a tendency for Pd to disperse in Au, consistent with the EDS observations (Supplementary Fig. 1b, c). The dispersion tendency decreases after O2 treatment and increases after H2 treatment. These results are also consistent with DFT-computed segregation free energies ($${G}_{{{{{{\rm{seg}}}}}}}$$) of representative surface Pd ensembles in the presence of chemisorbed oxygen and hydrogen, referenced to gas-phase molecules and subsurface Pd monomers (Supplementary Fig. 5; Sec. 1, Supplementary Methods). Globally, Pd prefers to remain dispersed in the subsurface ($${G}_{{{{{{\rm{seg}}}}}}}=0$$), highlighting the metastable nature of these ensembles. The next favorable structure is the extended surface Pd oxide model7 ($${G}_{{{{{{\rm{seg}}}}}}}=0.17\,{{{{{\rm{eV}}}}}}/{{{{{\rm{Pd}}}}}}$$), considered as a limiting case of larger ensembles, but it is precluded from our reactivity modeling as it is inconsistent with the EELS map (Supplementary Fig. 1d) as well as the observed $${C}_{{{{{{\rm{Pd}}}}}}-{{{{{\rm{Pd}}}}}}} \, < \, 1.0$$ across all samples. The precise mechanistic and kinetic relevance of such oxide phases remains beyond the scope of this work and would require much more advanced atomistic modeling approaches.
The relative stability of the ensembles is inverted upon chemisorption. Across all cases, O2 provides the largest thermodynamic driving force to form larger metastable ensembles ($${G}_{{{{{{\rm{seg}}}}}}}=0 \sim 0.45\,{{{{{\rm{eV}}}}}}/{{{{{\rm{Pd}}}}}}$$; Pd5O4 to trimer). Once O2 is removed after the initial pretreatment, these larger ensembles are expected to lower the surface free energy by fragmenting toward smaller-sized ensembles. Under H2, the same driving force for segregation becomes less pronounced ($${G}_{{{{{{\rm{seg}}}}}}}=0.35 \sim 0.4\,{{{{{\rm{eV}}}}}}/{{{{{\rm{Pd}}}}}}$$; Pd monolayer to trimer). Moreover, H2 chemisorption remains largely endergonic on Pd ensembles under the pretreatment condition ($${G}_{{{{{{\rm{ads}}}}}}}=0 \sim 0.5\,{{{{{\rm{eV}}}}}}$$; trimer to monomer), which would favor H2 desorption and partial dissolution of Pd into even smaller ensembles.
### Modeling resolves Pd ensemble reactivity
To ascertain the atomic-level structure of the active sites, HD exchange reaction pathways were characterized via transition state modeling using DFT calculations (Supplementary Figs. 68; Methods). On the basis of the quantitative CN analysis that established dilute alloy motifs, several types of model surfaces with close-packed facets were considered. These structures enable systematic investigation of the effect of (i) the atomic arrangement of Pd in the active site (ensemble effect) and (ii) the local coordination environment of the active site (facet effect). The zero-point-corrected energy barriers associated with H2/D2 dissociative adsorption and HD recombinative desorption are shown in Fig. 3. Spillover and migration of atomic H/D are relatively facile and of secondary importance in determining the overall catalytic kinetics (Sec. 2, Supplementary Methods). Moreover, migration into the subsurface37,38 was not considered in our dilute systems, where Pd largely remains dispersed as isolated atoms in the interior of Au. A recent study of Pd/Ag(111) showed that subsurface H can become metastable only in the presence of locally extended Pd under high H2 pressures39.
The Sabatier optimum is clearly demonstrated by the dilute Pd ensembles, bounded by the chemisorption-limited Au surface on the left and the desorption-limited Pd monolayer on the right. As the surface Pd content increases, chemisorption becomes more facile, with barriers decreasing from ~0.3 to ~0.1 eV across Pd monomer to trimer, finally becoming barrierless on Pd monolayer which exhibits pure Pd-like behavior (Supplementary Fig. 8c). At the same time, desorption becomes more difficult, with barriers increasing from ~0.3 to ~0.6 eV across Pd monomer to trimer, finally exceeding 1.0 eV on Pd monolayer. This trade-off is driven by the ensemble effect, with minimal influence of the facet effect, except in the case of pure Au. Moreover, there is good agreement between the computed energy barriers and the experimentally measured apparent activation energies (Fig. 3; Supplementary Fig. 3d). Specifically, the measured apparent activation energies of 0.29–0.34 eV for S2 and S3 (H2-treated) agree well with the computed energy barriers of ~0.3-0.4 eV for Pd monomer and dimer, whereas the measured value of 0.67 eV for S1 (O2-treated) matches the computed value of ~0.65 eV for the Pd trimer. This trend is consistent with a previous study33 and proposes Pd trimers and monomers/dimers as compelling active site candidates upon O2- and H2-treatment, respectively.
To bridge theory and experiment, we performed microkinetic simulations using DFT-derived kinetic parameters as inputs (Methods; Sec. 3-4, Supplementary Methods). This method circumvents the previously employed assumption of a single rate-limiting step33. Pd dimers have the highest activity for HD exchange, with a low apparent barrier of 0.22 eV at 50 °C (Fig. 4b). Although Pd monomers have a similarly low apparent barrier of 0.25 eV, their activity is lower by ~2 orders of magnitude due to the greater required loss of entropy of gas-phase H2/D233 (Fig. 4a). Due to more difficult desorption, Pd trimers exhibit an intermediate level of activity, with a higher apparent activation energy of 0.52 eV at 50 °C (Fig. 4c). Although the degree of rate control analysis shows that there is no single rate-limiting transition state in all three cases, the transition states of D2 dissociation and HD recombination control the rate of H/D exchange over Pd monomers/dimers (Supplementary Fig. 9a–f), and desorption of the HD molecule controls the rate of H/D exchange over Pd trimers (Supplementary Fig. 9g–i), consistent with the DFT energetics of Fig. 3. These observations reinforce the catalytic predominance of Pd trimers in O2-treated samples, in contrast to Pd monomers/dimers in H2-treated samples.
### Treatment controls the Pd distribution
To quantitatively analyze the distribution of Pd ensembles in terms of the structural descriptors obtained from NN-XANES, we parametrized the partial CNs in terms of the distribution of Pd in Au, i.e., the number of Pd monomers, dimers, and trimers on a model catalyst surface and in the interior (Methods). Figure 4d summarizes the distribution of the Pd ensembles responsible for the observed CNs in samples S0-S3 using a representative icosahedral Au nanoparticle model of size 4.6 nm (in agreement with the TEM measurements). The distributions were obtained with constraints on the type of surface species present, as inferred from theoretical modeling. Samples S0 and S1 (calcined in air and O2-treated, respectively) are characterized by a surface consisting of Pd trimers and an interior of monomers and dimers. In contrast, the H2-treated samples S2 and S3 are characterized by a surface consisting of a small number of Pd dimers and/or monomers with an interior dominated by monomers. The surface Pd content decreases from S2 to S3, consistent with H2-induced dissolution of Pd. This numerical analysis confirms that the dilute Pd ensembles satisfy both the kinetic and structural criteria of the active site.
A multipronged strategy has been developed to resolve the active site structure of a dilute bimetallic catalyst at the atomic level, a central requirement for advancing rational catalyst design. By combining catalysis, machine learning-enabled spectroscopic analysis, and first-principles-based kinetic modeling, we demonstrate the effect of catalyst treatment on its nanostructure, active site distribution, and reactivity toward the prototypical HD exchange reaction. A dilute Pd-in-Au alloy supported on raspberry-colloid-templated SiO2 was chosen for its excellent catalytic performance and stability29,30,33,34. Upon H2 treatment, the activity and the apparent activation energy decreased significantly. These observations are attributed to treatment-induced catalyst restructuring, quantitatively analyzed in terms of the coordination numbers extracted from neural network-assisted inversion of the X-ray absorption spectra28. The majority of Pd remained dispersed inside the Au host, with a small amount of Pd segregating toward the surface upon O2 treatment and dissolving into the bulk upon H2 treatment.
On the basis of this motif, theoretical modeling of the reaction network on several model surfaces has established dilute Pd ensembles as the catalytically predominant active sites. These ensembles numerically correspond to the observed coordination numbers, thereby satisfying both the structural and kinetic criteria of the active site. Remarkably, the reactivity is tuned by modulating the active site on the order of only a few atoms (n = 1–3) through catalyst treatment. Our multidisciplinary approach considerably narrows down the large configurational space to enable precise identification of the active site and can be applied to more complex reactions in related dilute alloy systems, such as selective hydrogenation of alkynes34,40 and CO oxidation29.
## Methods
### Catalyst synthesis
For the RCT (raspberry-colloid-templated) synthesis, we refer to the procedure reported by van der Hoeven et al.33. The gold nanoparticles were prepared using a procedure described by Piella et al.41 on a 450 mL scale at 343 K. The reaction mixture contained 0.3 mL 2.5 mM tannic acid, 3.0 mL 150 mM K2CO3, and 3.0 mL 25 mM HAuCl4 in H2O. The raspberry colloids were prepared by attaching the gold nanoparticles to the sacrificial polystyrene (PS) colloids (dPS = 393 nm). To 150 mL AuNPs, 1.5 mL aqueous PVP solution (0.1 g PVP per mL H2O) and 12 mL thiol-functionalized PS colloids (5.0 wt% in water) were added. After washing three times with MQ H2O the colloids were redispersed in 12 mL MilliQ H2O. The Pd growth on the AuNPs attached to polystyrene colloids was done at low pH to ensure sufficiently slow reaction rates and selective growth on the AuNPs42. To 12 mL raspberry colloid dispersion (5.0 wt% PS in water), 150 mL in MQ H2O, 1.5 mL 0.1 M HCl, 270 µL 10 mM Na2PdCl4 and 270 µL 40 mM ascorbic acid were added to obtain the Pd8Au92 NPs. The raspberry colloids were washed twice, redispersed in 12 mL MQ H2O, and dried at 65 °C in air. Next, the colloidal crystal was infiltrated with a pre-hydrolyzed TEOS solution (33 vol% of a 0.10 M HCl in H2O solution, 33 vol% ethanol, 33 vol% TEOS). Finally, the samples were calcined to remove PS colloids by heating them in static air from room temperature to 773 K with 1.9 K/min and held at 773 K for 2 h. Inductively coupled plasma mass spectrometry (ICP-MS, Agilent Technologies 7700x) was used for compositional analysis (metal composition and metal weight loading). The exact composition is 8.3 at% Pd, 4.4 wt% total metal loading.
### HD exchange experiments
Catalysis experiments were performed using the same flow cell as the Synchrotron in situ XAS experiments to directly correlate the structural changes observed in XAS with the activity of the sample toward HD exchange. A total of three HD exchange experiments were performed for the Pd8Au92 sample after three different sequential treatments (A, B, C) (Fig. 2a). Treatment A consisted of 30 min heating at 500 °C in 20% O2 atmosphere (balance Ar); treatment B consisted of 30 min heating at 150 °C in 25% H2 atmosphere (balance Ar); and treatment C consisted of the same H2 treatment for 210 min, with temperatures maintained at 100, 120, 140, 120, 100 °C in sequence for 30 min at each temperature to allow full equilibration of the structural changes induced by the treatment. The temperature was increased and decreased at a rate of 10 °C/min.
The three sequential treatments resulted in three distinct starting states: State 1 (S1), State 2 (S2), and State 3 (S3) after treatment A, B, and C, respectively. In HD reaction 1 (HD-1) with starting state S1, HD exchange was monitored during heating (HD-1 heating) and cooling (HD-1 cooling) stages (Fig. 2b). The same procedure applies for HD reactions 2 and 3 (HD-2 and HD-3, respectively).
Each treatment resulted in different HD exchange activities, so the sample amount was varied to keep the conversion well below 50% (maximum conversion in the reaction with a statistical mixture of 1H2:1D2:1HD) in the entire temperature range so as to accurately measure the catalytic performance (Supplementary Fig. 3e). The undiluted sample was loaded into quartz capillary of internal diameter 1 mm. The ends of the sample bead were blocked with quartz wool to avoid the powder displacement in the gas flow. The reactions were performed in the gas mixture of 12.5% H2, 12.5% D2, 72% Ar, and 3% N2, with a total flow of 15 mL/min. The temperature was increased between 30 °C and 150 °C with 10/20 °C steps and kept at each step for 5–30 min to achieve equilibrium (Supplementary Fig. 3a–c). The reaction products were measured with an online mass spectrometer (RGA, Hiden Analytical).
The MS signals of H2 and D2 changed upon consumption, forming the basis of extracting the HD formation rate (Fig. 2b) and the apparent activation energy from the Arrhenius analysis (Supplementary Figs. 2, 3). The baseline signal, indicating the sensitivity of the MS toward H2/D2/HD, was obtained from the bypass. The conversion was calculated using both H2 and D2 signals from which the average activity was extracted. The activity values in the range of 1–20% were used from each temperature step for the Arrhenius analysis.
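The Arrhenius analysis itself is a one-line fit of ln(rate) against 1/T; a minimal sketch with illustrative numbers (not the measured data):

```python
import numpy as np

kB = 8.617e-5                                               # Boltzmann constant, eV/K
T = np.array([303.0, 323.0, 343.0, 363.0, 383.0])           # K, hypothetical
rate = np.array([1.2e-3, 3.1e-3, 7.4e-3, 1.6e-2, 3.3e-2])   # hypothetical HD rates

slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea = -slope * kB                                            # apparent activation energy, eV
print(f"Ea = {Ea:.2f} eV, axis intercept = {intercept:.1f}")
```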
### Transmission electron microscopy (TEM) and energy-dispersive X-ray spectroscopy (EDS)
TEM was performed with a JEOL NEOARM operating at 200 kV. All images are high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) data. The diameter of the used condenser lens aperture was 40 μm, the probe current was 150 pA, and the camera length was 4 cm. EDS was performed with two detectors provided by JEOL Ltd. and the maps were obtained with DigitalMicrograph, a software developed by Gatan Inc.
### In situ X-ray absorption fine structure spectroscopy (XAS)
XAS experiments at the Pd K-edge were carried out at the QAS beamline of the NSLSII. Ten milligrams of sample was loaded into a borosilicate capillary. The sample treatments were identical to those performed for activity measurements: treatment A was performed with O2 (500 °C, 20% O2 balance He, 30 min, cooled down in O2 to room temperature for data collection); treatment B with H2 (150 °C, 25% H2 balance N2, 30 min, cooled down in H2 to room temperature for data collection); and treatment C with H2 and slow step-wise heating up to 140 °C (25% H2 balance N2, 30 min, cooled down in H2 to room temperature for data collection). The entire experiment, including treatments and data collection steps, was performed with 15 scc gas flow. Data were collected at room temperature after each treatment. Each step was equilibrated for at least 30 min.
### Extended X-ray absorption fine structure (EXAFS) analysis
The analysis of EXAFS data was performed with IFEFIT package (Supplementary Fig. 4)43. The S02 amplitude reduction factor value was obtained from the fitting of Pd foil that was previously measured at the same beamline. The obtained value of 0.78 was used in all subsequent fittings of S0-S3 spectra. For the fitting of S0 & S1 data, two nearest-neighbor photoelectron paths Au–Au and Pd–Au were chosen. For S2 & S3, only Au–Au path was chosen for the fitting, as adding Pd–Au path made the fit unstable. Fitting k range values were: 2–12 Å−1, 2–11 Å−1, 2–9 Å−1, and 2–10 Å−1 for S0-S3, respectively. Fitting R range values were 1.75–3.7 Å, 1.4–3.5 Å, 1.45–3.5 Å, and 1.4–3.6 Å for S0-S3, respectively. Both k and R fitting ranges were optimized for each dataset separately, in order to minimize the standard deviations. The best-fitting results are summarized in Supplementary Table 1.
### Neural network-assisted X-ray near edge structure inversion analysis (NN-XANES)
The development and validation of the NN-XANES inversion method for PdAu bimetallic nanoparticles are reported in our previous work28. Before the application of the trained NN object in Mathematica 12, the experimental data were preprocessed with Athena. The catalyst Pd K-edge spectra of S0-S3 were aligned, edge step normalized, and then interpolated to a 95 point non-uniform energy mesh that spanned energies from Emin = 24339.8 eV to Emax = 24416.6 eV with a step size of 0.6 eV for data points near the absorption edge, which gradually increased to 1.7 eV for points approaching Emax. The same procedure was completed for a Pd K-edge spectrum of Pd foil that was collected at the same time as the catalyst spectra. The normalized and interpolated Pd foil spectrum was subtracted from the normalized and interpolated spectra of S0-S3. Finally, the energies and absorption coefficients were normalized between 0 and 1 using the commonly used min-max normalization procedure:
$$Z=\frac{X-{{\min }}(X)}{{{\max }}(X)-{{\min }}(X)}.$$
(1)
Here X is the training data input, min(X) is the smallest input, max(X) is the largest input, and Z is the normalized input. To estimate relative errors in the coordination number predictions, 10 independently trained NNs were applied to the processed data28,31. The coordination numbers and their uncertainties are presented in Supplementary Table 1.
### Density functional theory (DFT)
We perform DFT calculations using plane-wave basis sets and the projector augmented-wave (PAW) method44 as implemented in the Vienna Ab Initio Simulation Package (VASP)45. The plane-wave kinetic energy cutoff is set at 450 eV. The Methfessel-Paxton smearing scheme46 is employed with a broadening value of 0.2 eV. All structures are optimized via ionic relaxation, with the total energy and forces converged to 10−5 eV and 0.02 eV/Å, respectively. Gas-phase species are optimized in a 14 × 15 × 16 Å3 cell at the Γ-point with spin polarization. Lattice constants of bulk face-centered cubic Au and Pd are optimized according to the third-order Birch-Murnaghan equation of state, using a 19 × 19 × 19 k-point grid. We maintain the lattice constant of pure Au for all Pd-doped Au systems, given the dilute concentration of Pd in our model systems. All slab models are spaced by 16 Å of vacuum along the direction normal to the surface in order to avoid spurious interactions between adjacent unit cells. We fix the bottom layer(s) at bulk positions to mimic bulk properties. Supplementary Table 2 describes the computational set-up for the slab models of close-packed surfaces considered in our study.
We employ the Perdew-Burke-Ernzerhof (PBE) parametrization47 of the generalized gradient approximation (GGA) of the exchange-correlation functional. PBE provides Au and Pd lattice constants of 4.16 and 3.94 Å, within <0.1 Å of the experimental benchmark of 4.08 and 3.88 Å, respectively48. We also examine the dissociative adsorption energy of H2 on Pd(111), defined as the change in energy from an isolated slab and a gas-phase H2 to a combined system interacting as atomic H adsorbed on the slab:
$${E}_{{{{{{\rm{ads}}}}}}}=E\left[{{{{{{\rm{H}}}}}}}_{({{{{{\rm{ads}}}}}})}/{{{{{\rm{Pd}}}}}}(111)\right]-E\left[{{{{{\rm{Pd}}}}}}\left(111\right)\right]-\frac{1}{2}E\left[{{{{{{\rm{H}}}}}}}_{2\left({{{{{\rm{g}}}}}}\right)}\right].$$
(2)
PBE provides H adsorption energy of −0.57 eV at the low-coverage limit, within 0.1 eV of the experimental benchmark of −0.47 eV49. Based on these observations, we conclude that PBE is an appropriate standard reference functional that can provide a reasonable comparison with experiments for H2 chemisorption on Pd/Au systems.
We perform transition state modeling using the VASP Transition State Tools (VTST). Transition state pathways are first optimized via the climbing-image nudged elastic band (CI-NEB) method50, using three intermediate images generated by linear interpolation with a spring constant of 5 eV/Å2. The total forces, defined as the sum of the spring force along the chain and the true force orthogonal to the chain, are converged to 0.05 eV/Å. Then, the image with the highest energy is fully optimized to a first-order saddle point via the dimer method51, this time converging the total energy and forces to 10−7 eV and 0.01 eV/Å, respectively. We confirm that the normal modes of all transition states contain only one imaginary frequency by calculating the Hessian matrix within the harmonic approximation, using central differences of 0.01 Å at the same level of accuracy as the dimer method.
Vibrational frequencies associated with geometrically inequivalent configurations of all isotopic species (H2, D2, HD, H, D) are obtained by calculating the Hessian matrix in a similar manner. Given the large difference in the masses of the adsorbates and the metal substrate, only the adsorbate degrees of freedom are considered in the calculations.
Zero-point energy corrections (ZPC) are applied to all dissociative adsorption and recombinative desorption processes by correcting the electronic energies of the corresponding initial, transition, and final states with their respective zero-point energies:
$${E}_{{{{{{\rm{ZPC}}}}}}}=\frac{1}{2}\mathop{\sum}\limits_{i}h{\nu }_{i}.$$
(3)
Here, $$h$$ is the Planck constant and $${\nu }_{i}$$ is the non-imaginary vibrational frequency of normal mode $$i$$.
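For instance, Eq. (3) amounts to a one-line sum over the real modes (a sketch; the cm^-1 to eV conversion factor is standard, and the H2 stretch frequency below is the textbook value, not the computed one):

```python
CM1_TO_EV = 1.239842e-4      # energy of a 1 cm^-1 vibrational quantum, in eV

def zero_point_energy(freqs_cm1):
    """Eq. (3): harmonic ZPE = 1/2 * sum(h * nu_i) over real (non-imaginary) modes."""
    return 0.5 * sum(CM1_TO_EV * nu for nu in freqs_cm1 if nu > 0)

print(zero_point_energy([4401.0]))   # gas-phase H2 stretch -> roughly 0.27 eV
```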
Conversion from the electronic energy at 0 K ($${E}_{{{{{{\rm{DFT}}}}}}}$$) to the ideal-gas Gibbs free energy ($$G$$) at a given temperature $$T$$ and pressure $$P$$ is given by:
$$G\left(T,P\right)={E}_{{{{{{\rm{DFT}}}}}}}+{E}_{{{{{{\rm{ZPC}}}}}}}+{k}_{{{{{{\rm{B}}}}}}}T+{\int }_{0}^{T}{c}_{V}{dT}-T\cdot S\left(T,P\right)$$
(4)
where $${c}_{V}$$ is the constant-volume heat capacity. See Supplementary Table 3 for the statistical thermodynamics expressions of the integrated heat capacity and the entropy. Translational and rotational degrees of freedom are included for gas-phase molecules only.
### Microkinetic modeling
The microkinetic models were parameterized using the energetics of the H/D exchange reaction network, computed with DFT using the PBE functional (Methods). For a dilute Pd/Au alloy, the effect of H coverage on reaction energetics can be considered to be small because the Pd sites are far apart. On Pd(111), the H coverage also does not significantly impact the adsorption energy52. The rate constant of an elementary step is given by the Eyring equation:
$$k=\frac{{k}_{{{{{{\rm{B}}}}}}}T}{h}{{\exp }}\left(-\frac{{\triangle G}^{o{{\ddagger}} }}{{k}_{{{{{{\rm{B}}}}}}}T}\right)$$
(5)
where $${k}_{{{{{{\rm{B}}}}}}}$$ is the Boltzmann constant, $$h$$ is the Planck constant, $$T$$ is the temperature, and $${\triangle G}^{o{{\ddagger}} }$$ is the Gibbs free energy of activation at standard pressure.
The rate constant of adsorption for species $$i$$ is given by the collision theory:
$${k}_{{{{{{\rm{ads}}}}}},i}=\frac{\sigma {AP}^\circ }{\sqrt{2\pi {m}_{i}{k}_{{{{{{\rm{B}}}}}}}T}}{{\exp }}\left(-\frac{\triangle {E}_{{{{{{\rm{ads}}}}}}}^{{{\ddagger}} }}{{k}_{{{{{{\rm{B}}}}}}}T}\right)$$
(6)
where $$\sigma$$ is the sticking coefficient, $$A$$ is the surface area of the Pd ensemble, $$P^\circ$$ is the standard pressure (1 bar), $${m}_{i}$$ is the mass of the adsorbate, and $$\triangle {E}_{{{{{{\rm{ads}}}}}}}^{{{\ddagger}} }$$ is the activation energy of adsorption. In this work, the sticking coefficient is set to 1, and the molecular adsorption process is taken to be barrierless. The surface areas are calculated using the bulk lattice constants of Pd and Au optimized with the PBE functional (3.94 and 4.16 Å, respectively). The atomic fraction of Pd in the alloy is set to 10%, in line with the concentration of 8% in the experimental samples. From Vegard’s law, the area occupied by one atom on (111) facet is $$7.41\times {10}^{-20}\,{{{{{{\rm{m}}}}}}}^{2}$$, which is then multiplied by the number of Pd atoms in each ensemble.
The corresponding rate constant for desorption is given by
$${k}_{{{{{{\rm{des}}}}}},i}=\frac{{k}_{{{{{{\rm{ads}}}}}},i}}{{K}_{{{{{{\rm{ads}}}}}},i}}$$
(7)
with the equilibrium constant of adsorption, $${K}_{{{{{{\rm{ads}}}}}},i}$$:
$${K}_{{{{{{\rm{ads}}}}}},i}={{\exp }}\left(-\frac{{\triangle G}_{{{{{{\rm{ads}}}}}},i}^{^\circ }}{{k}_{{{{{{\rm{B}}}}}}}T}\right)$$
(8)
where $${\triangle G}_{{{{{{\rm{ads}}}}}},i}^{^\circ }$$ is the Gibbs free energy of adsorption at standard pressure. The translational, rotational, and vibrational degrees of freedom are considered for gaseous species, whereas only the vibrational degrees of freedom are included for surface intermediates and transition states. All vibrational frequencies below 100 cm−1 are rounded up to 100 cm−1.
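As a concrete illustration, Eqs. (5)-(8) translate into a few lines of code (a sketch using standard physical constants; the site area is the (111) value quoted above, while the barrier and free-energy inputs in the example calls are placeholders, not the DFT values):

```python
import numpy as np

kB_eV = 8.617333e-5     # Boltzmann constant, eV/K
h_eV = 4.135668e-15     # Planck constant, eV*s
kB_J = 1.380649e-23     # Boltzmann constant, J/K
amu = 1.66054e-27       # atomic mass unit, kg
P0 = 1.0e5              # standard pressure (1 bar), Pa

def k_eyring(dG_act_eV, T):
    """Eq. (5): rate constant from the Gibbs free energy of activation."""
    return (kB_eV * T / h_eV) * np.exp(-dG_act_eV / (kB_eV * T))

def k_ads(area_m2, mass_amu, T, Ea_ads_eV=0.0, sticking=1.0):
    """Eq. (6): collision-theory adsorption rate constant (per site, at unit activity)."""
    prefactor = sticking * area_m2 * P0 / np.sqrt(2.0 * np.pi * mass_amu * amu * kB_J * T)
    return prefactor * np.exp(-Ea_ads_eV / (kB_eV * T))

def k_des(k_adsorption, dG_ads_eV, T):
    """Eqs. (7)-(8): desorption rate constant via the adsorption equilibrium constant."""
    return k_adsorption / np.exp(-dG_ads_eV / (kB_eV * T))

A_site = 7.41e-20                       # m^2 per (111) surface atom, from the text
print(k_ads(A_site, 2.016, 323.0))      # H2 hitting one site at 1 bar, ~8e8 per second
print(k_eyring(0.5, 323.0))             # ~1e5 per second for a 0.5 eV barrier
```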
The rate of elementary step $$j$$ was computed as follows:
$${r}_{j}={k}_{j}^{{{{{{\rm{fwd}}}}}}}\mathop{\prod}\limits_{i}{\alpha }_{i,{{{{{\rm{IS}}}}}}}^{{\nu }_{{ij}}^{{{{{{\rm{fwd}}}}}}}}\mathop{\prod}\limits_{i}{\alpha }_{i,{{{{{\rm{gas}}}}}}}^{{\nu }_{{ij}}^{{{{{{\rm{fwd}}}}}}}}-{k}_{j}^{{{{{{\rm{rev}}}}}}}\mathop{\prod}\limits_{i}{\alpha }_{i,{{{{{\rm{IS}}}}}}}^{{\nu }_{{ij}}^{{{{{{\rm{rev}}}}}}}}\mathop{\prod}\limits_{i}{\alpha }_{i,{{{{{\rm{gas}}}}}}}^{{\nu }_{{ij}}^{{{{{{\rm{rev}}}}}}}}.$$
(9)
Here, $${k}_{j}^{{{{{{\rm{fwd}}}}}}}$$ and $${k}_{j}^{{{{{{\rm{rev}}}}}}}$$ are the forward and reverse rate constants, and $${\nu }_{{ij}}^{{{{{{\rm{fwd}}}}}}}$$ and $${\nu }_{{ij}}^{{{{{{\rm{rev}}}}}}}$$ are the stoichiometric coefficients of reactant $$i$$ in the forward and reverse directions, respectively. The activity $${\alpha }_{i}$$ is taken as the surface coverage fraction $${\theta }_{i}$$ for intermediate states (labeled IS; including bare sites) and as the ratio of the partial pressure to the standard pressure, $${P}_{i}/P^\circ$$, for gaseous species53.
The time-dependent coverages of surface intermediates are obtained as the steady-state solution of the following system of ordinary differential equations:
$$\frac{d{\theta }_{i}}{{dt}}=-\mathop{\sum}\limits_{j}{\nu }_{{ij}}^{{{{{{\rm{fwd}}}}}}}{r}_{j}+\mathop{\sum}\limits_{j}{\nu }_{{ij}}^{{{{{{\rm{rev}}}}}}}{r}_{j}$$
(10)
Following Wang et al.54, the steady-state solution is achieved in two steps. Starting from a bare surface, the equations are first integrated over 50 s until they have approximately reached a steady state. The resulting coverages are then used as an initial guess for numerical solution as follows:
$$0=-\mathop{\sum}\limits_{j}{\nu }_{{ij}}^{{{{{{\rm{fwd}}}}}}}{r}_{j}+\mathop{\sum}\limits_{j}{\nu }_{{ij}}^{{{{{{\rm{rev}}}}}}}{r}_{j}$$
(11)
$${\theta }_{{{{{{{\rm{Pd}}}}}}}_{n}}\left(t=0\right)=\mathop{\sum}\limits_{i}{\theta }_{{{{{{{\rm{Pd}}}}}}}_{n},i}$$
(12)
$$1=n\mathop{\sum}\limits_{i}{\theta }_{{{{{{{\rm{Pd}}}}}}}_{n},i}+\mathop{\sum}\limits_{i}{\theta }_{{{{{{\rm{Au}}}}}},i}$$
(13)
Here, $${\theta }_{{{{{{{\rm{Pd}}}}}}}_{n},{i}}$$ and $${\theta }_{{{{{{\rm{Au}}}}}},{i}}$$ are the surface coverages of species i on Pdn and Au sites, respectively, and n is the number of Pd atoms in the ensemble.
The steady-state rates of HD formation over Pdn/Au(111) are solved at temperatures of 25–150 °C. The partial pressures of H2 and D2 are set to 9.9 kPa and that of HD to 0.2 kPa, with a balance of inert for the total pressure of 100 kPa. The reaction pathways are analyzed by computing the apparent activation energy, steady-state intermediate coverages, and the degrees of rate control for all surface intermediates and transition states55. The derivatives are evaluated numerically using step sizes of 0.1 °C and $${10}^{-4}\,{{{{{\rm{eV}}}}}}$$ for the apparent activation energy and the degree of rate control, respectively.
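The two-step steady-state procedure of Eqs. (10)-(13) can be sketched for a deliberately simplified H/D exchange network on a single site type (made-up rate constants; not the DFT-parameterized model used in the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# made-up rate constants (per second, at unit activity) and partial pressures (bar)
k_a, k_d = 1.0e6, 1.0e4            # taken equal for H2, D2, HD for simplicity
p_H2, p_D2, p_HD = 0.099, 0.099, 0.002

def rates(theta):
    th_H, th_D = theta
    th_free = 1.0 - th_H - th_D
    r_H2 = k_a * p_H2 * th_free**2 - k_d * th_H**2
    r_D2 = k_a * p_D2 * th_free**2 - k_d * th_D**2
    r_HD = k_a * p_HD * th_free**2 - k_d * th_H * th_D   # net HD adsorption
    return r_H2, r_D2, r_HD

def dtheta_dt(t, theta):
    r_H2, r_D2, r_HD = rates(theta)
    return [2 * r_H2 + r_HD, 2 * r_D2 + r_HD]

# step 1: relax from a bare surface for 50 s (stiff, so use BDF)
theta0 = solve_ivp(dtheta_dt, (0.0, 50.0), [0.0, 0.0], method="BDF").y[:, -1]
# step 2: polish to the exact steady state
theta_ss = fsolve(lambda th: dtheta_dt(0.0, th), theta0)

r_H2, r_D2, r_HD = rates(theta_ss)
print("coverages:", theta_ss, "HD formation rate:", -r_HD)   # net HD desorbed per second
```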
### Coordination number parameterization
For a bimetallic system of Pd and Au atoms, the average first nearest neighbor coordination numbers from the Pd perspective can be represented as the vector $$\{{\widetilde{C}}_{{{{{{\rm{Pd}}}}}}},{\widetilde{C}}_{{{{{{\rm{Au}}}}}}}\}$$, where $${\widetilde{C}}_{{{{{{\rm{Pd}}}}}}}$$ is the average Pd–Pd coordination number and $${\widetilde{C}}_{{{{{{\rm{Au}}}}}}}$$ is the average Pd–Au coordination number. In general, the first nearest neighbor coordination numbers can be parametrized in terms of the Pd speciation with the following system of equations:
$$\left[\begin{array}{c}{\widetilde{C}}_{{{{{{\rm{Pd}}}}}}}\\ {\widetilde{C}}_{{{{{{\rm{Au}}}}}}}\end{array}\right]=\frac{1}{N}\left[\begin{array}{ccc}{\tilde{C}}_{{{{{{{\rm{Pd}}}}}}}_{1}} & \cdots & {\tilde{C}}_{{{{{{{\rm{Pd}}}}}}}_{i}}\\ {\tilde{C}}_{{{{{{{\rm{Au}}}}}}}_{1}} & \cdots & {\tilde{C}}_{{{{{{{\rm{Au}}}}}}}_{i}}\end{array}\right]\left[\begin{array}{ccc}{N}_{{{{{{\rm{Pd}}}}}}}^{(1)} & \cdots & 0\\ \vdots & \ddots & \vdots \\ 0 & \cdots & {N}_{{{{{{\rm{Pd}}}}}}}^{(i)}\end{array}\right]\left[\begin{array}{c}{s}_{1}\\ \vdots \\ {s}_{i}\end{array}\right]$$
(14)
where $${s}_{i}$$ refers to the speciation, i.e. the number of occurrences of a specific structural configuration of Pd (e.g. surface monomer, dimer, etc.), $${N}_{{{{{{\rm{Pd}}}}}}}^{(i)}$$ is the number of Pd atoms that are part of the motif $${s}_{i}$$, $${\tilde{C}}_{{{{{{{\rm{Pd}}}}}}}_{i}}$$ and $${\tilde{C}}_{{{{{{{\rm{Au}}}}}}}_{i}}$$ are the partial Pd–Pd and Pd–Au coordination numbers of each Pd atom in $${s}_{i}$$, respectively, and N is the total number of Pd atoms in the nanoparticle, i.e., $$N={x}_{{{{{{\rm{Pd}}}}}}}\cdot {N}_{{{{{{\rm{tot}}}}}}}$$, where $${x}_{{{{{{\rm{Pd}}}}}}}$$ is the atomic ratio of Pd and $${N}_{{{{{{\rm{tot}}}}}}}$$ is the total number of atoms in the particle.
To investigate the effect of surface monomers, dimers, and trimers, as well as subsurface dimers and monomers, we modify the system of equations:
$$\left[\begin{array}{c}{\widetilde{C}}_{{{{{{\rm{Pd}}}}}}}\\ {\widetilde{C}}_{{{{{{\rm{Au}}}}}}}\end{array}\right]=\frac{1}{N}\,\left[\begin{array}{ccccc}0 & 1 & 2 & 1 & 0 \\ 9 & 8 & 7 & 11 & 12\end{array}\right]\,\left[\begin{array}{ccccc}1 & 0 & 0 & 0 & 0\\ 0 & 2 & 0 & 0 & 0\\ 0 & 0 & 3 & 0 & 0\\ 0 & 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 0 & 1\end{array}\right]\,\left[\begin{array}{c}{N}_{{{{{{\rm{m}}}}}}}\\ {N}_{{{{{{\rm{d}}}}}}}\\ {N}_{{{{{{\rm{t}}}}}}}\\ {N}_{{{{{{\rm{sd}}}}}}}\\ {N}_{{{{{{\rm{b}}}}}}}\end{array}\right]\,$$
(15)
where $$N_{\rm m}$$, $$N_{\rm d}$$, $$N_{\rm t}$$, $$N_{\rm sd}$$, and $$N_{\rm b}$$ are the numbers of surface monomers, surface dimers, surface trimers, subsurface dimers, and subsurface (bulk) monomers. The number of Pd atoms in each species is 1, 2, 3, 2, and 1, respectively, and the partial coordination numbers are determined by assuming a (111) surface orientation with bulk face-centered cubic packing. The assumption of a (111) facet is supported by DFT calculations showing that exposed Pd atoms are thermodynamically more stable at close-packed terrace sites. The total number of Pd atoms is estimated as $$N=x_{\rm Pd}\cdot N_{\rm tot}$$, where $$x_{\rm Pd}$$ is obtained from ICP-MS (0.083) and $$N_{\rm tot}$$ from TEM (particle diameter 4.6 $$\pm$$ 0.8 nm) as follows:
$${N}_{{{{{{\rm{tot}}}}}}}=\frac{1}{3}\left(2n+1\right)(5{n}^{2}+5n+3)$$
(16)
Here, $$n$$ is the side length of an icosahedron, and n = 8.4 for an icosahedron that is 4.6 nm, resulting in 2360 total atoms. We use the equation for an icosahedron because the original Au nanoparticles are icosahedra, and they are doped with a dilute amount of Pd which we assume does not substantially distort the morphology of the nanoparticle. | 2023-01-30 23:55:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6223059892654419, "perplexity": 2660.4816676439723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00783.warc.gz"} |
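As a rough illustration of Eqs. (15) and (16), a short sketch is given below; the speciation vector s is an arbitrary example of mine, chosen only so that the motifs account for roughly all of the ~196 Pd atoms, and is not a fitted result from the paper.

```python
import numpy as np

# Eq. (16): total atoms in an icosahedron of order n
n = 8.4
N_tot = (2 * n + 1) * (5 * n**2 + 5 * n + 3) / 3   # ~2360 atoms for a 4.6 nm particle
N_Pd = 0.083 * N_tot                               # total Pd atoms, x_Pd from ICP-MS

# Eq. (15): columns are surface monomer, dimer, trimer, subsurface dimer, bulk monomer
C_partial = np.array([[0, 1, 2, 1, 0],             # partial Pd-Pd coordination numbers
                      [9, 8, 7, 11, 12]])          # partial Pd-Au coordination numbers
Pd_per_motif = np.diag([1, 2, 3, 2, 1])            # Pd atoms per motif
s = np.array([100, 20, 5, 15, 11])                 # example speciation (assumed values)

C_avg = C_partial @ Pd_per_motif @ s / N_Pd        # [C_Pd-Pd, C_Pd-Au] averages
print(round(N_tot), C_avg)
```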
http://mathhelpforum.com/differential-equations/73769-finding-general-solution-exact-differential-equation-print.html | # Finding a general solution to an exact differential equation.
• February 15th 2009, 02:20 PM
Finding a general solution to an exact differential equation.
Hellooo.
$dy/dx = 5^x\ln 5+\frac{1}{x^2+1}$
I know I need to move the dx to the other side, but after that, Idk what direction to go in.
I'm thinking u and du?
• February 15th 2009, 02:39 PM
Scott H
Have you learned how to differentiate functions of the form $a^x$ yet? Try differentiating $a^x=e^{x\ln a}$ and see what you get.
The derivative of $\arctan x$ is also $\frac{1}{1+x^2}$. | 2016-08-30 15:45:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8526608347892761, "perplexity": 825.205296949854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982984973.91/warc/CC-MAIN-20160823200944-00206-ip-10-153-172-175.ec2.internal.warc.gz"} |
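Putting the two hints together (a worked solution added here, not part of the original thread): since $\frac{d}{dx}\,5^x = 5^x\ln 5$ and $\frac{d}{dx}\arctan x = \frac{1}{1+x^2}$, integrating term by term gives the general solution $y = 5^x + \arctan x + C$.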
https://senshido.info/and-relationship/information-on-limits-and-derivatives-relationship.php | # Information on limits and derivatives relationship
### 1. Limits and Differentiation
So, we are going to have to do some work. In this case that means multiplying everything out and distributing the minus sign through on the second term.
After that we can compute the limit. However, outside of that it will work in exactly the same manner as the previous examples.
Also note that we wrote the fraction in a much more compact manner to help us with the work.
So, we will need to simplify things a little. In this case we will need to combine the two terms in the numerator into a single rational expression as follows.
So, upon canceling the h we can evaluate the limit and get the derivative. You do remember rationalization from an Algebra class right? In an Algebra class you probably only rationalized the denominator, but you can also rationalize numerators. Remember that in rationalizing the numerator in this case we multiply both the numerator and denominator by the numerator except we change the sign between the two terms.
So, cancel the h and evaluate the limit. So, plug into the definition and simplify. So let's figure out what the change in why over the change in x is for this particular case. So the change in y is equal to what? Well, let's just take, you can take this guy as being the first point, or that guy as being the first point.
But since this guy has a larger x and a larger y, let's start with him. The change in y between that guy and that guy is this distance, right here. So let me draw a little triangle.
That distance right there is a change in y. Or I could just transfer it to the y-axis. This is the change in y. That is your change in y, that distance. So what is that distance? It's f of b minus f of a. So it equals f of b minus f of a. That is your change in y.
Now what is your change in x? The slope is change in y over change in x. So what is our change in x? Remember, we're taking this to be the first point, so we took its y minus the other point's y. So to be consistent, we're going to have to take this point x minus this point x. So this point's x-coordinate is b. So it's going to be b minus a.
And just like that, if you knew the equation of this line, or if you had the coordinates of these 2 points, you would just plug them in right here and you would get your slope. And that comes straight out of your Algebra 1 class. And let me just, just to make sure it's concrete for you, if this was the point 2, 3, and let's say that this, up here, was the point 5, 7, then if we wanted to find the slope of this line, we would do 7 minus 3, that would be our change in y, this would be 7 and this would be 3, and then we do that over 5 minus 2.
Because this would be a 5, and this would be a 2, and so this would be your change in x. So 7 minus 3 is 4, and 5 minus 2 is 3. Now let's see if we can generalize this. And this is what the new concept that we're going to be learning as we delve into calculus.
Let's see if we can generalize this somehow to a curve. So let's say I have a curve. We have to have a curve before we can generalize it to a curve. Let me scroll down a little.
Well, actually, I want to leave this up here, show you the similarity.
Let's say I have, I'll keep it pretty general right now. Let's say I have a curve. I'll make it a familiar-looking curve. Let's say it's the curve y is equal to x squared, which looks something like that.
And I want to find the slope. Let's say I want to find the slope at some point. And actually, before even talking about it, let's even think about what it means to find the slope of a curve.
Here, the slope was the same the whole time, right? But on a curve your slope is changing.
And just to get an intuition for what that means: what's the slope over here? Your slope over here is the slope of the tangent line. The line just barely touches it. That's the slope over there. It's a negative slope. Then over here, your slope is still negative, but it's a little bit less negative. It goes like that.
I don't know if I did that, drew that. Let me do it in a different color. Let me do it in purple. So over here, your slope is slightly less negative. It's a slightly less downward-sloping line. And then when you go over here, at the 0 point, right here, your slope is pretty much flat, because the horizontal line, y equals 0, is tangent to this curve. And then as you go to more positive x's, then your slope starts increasing. I'm trying to draw a tangent line. And here it's increasing even more, it's increased even more.
So your slope is changing the entire time, and this is kind of the big change that happens when you go from a line to a curve. A line, your slope is the same the entire time. You could take any two points of a line, take the change in y over the change in x, and you get the slope for the entire line. But as you can see already, it's going to be a little bit more nuanced when we do it for a curve. Because it depends what point we're talking about.
We can't just say, what is the slope for this curve? The slope is different at every point along the curve. If we go up here, it's going to be even steeper. It's going to look something like that. So let's try a bit of an experiment. And I know how this experiment turns out, so it won't be too much of a risk. Let me draw better than that.
So that is my y-axis, and that's my x-axis. Let's call this, we can call this y, or we can call this the f of x axis. And let me draw my curve again. And I'll just draw it in the positive coordinate, like that. And what if I want to find the slope right there?
What can I do? Well, based on our definition of a slope, we need 2 points to find a slope, right? Here, I don't know how to find the slope with 1 point. So let's just call this point right here, that's going to be x.
We're going to be general. This is going to be our point x. But to find our slope, according to our traditional algebra 1 definition of a slope, we need 2 points. So let's get another point in here.
Let's just take a slightly larger version of this x.
So let's say, we want to take, actually, let's do it even further out, just because it's going to get messy otherwise. So let's say we have this point right here. And the difference, it's just h bigger than x. Or actually, instead of saying h bigger, let's just, well let me just say h bigger.
So this is x plus h. That's what that point is right there. So what going to be their corresponding y-coordinates on the curve? Well, this is the curve of y is equal to f of x.
So this point right here is going to be f of our particular x right here. And maybe to show you that I'm taking a particular x, maybe I'll do a little 0 here. This is x naught, this is x naught plus h. This is f of x naught.
And then what is this going to be up here, this point up here, that point up here? Its y-coordinate is going to be f of this x-coordinate, which I shifted over a little bit; that is, f of x naught plus h.
So what is a slope going to be between these two points that are relatively close to each other? Remember, this isn't going to be the slope just at this point.
This is the slope of the line between these two points. And if I were to actually draw it out, it would actually be a secant line between, to the curve. So it would intersect the curve twice, once at this point, once at this point. You can't see it. | 2019-11-13 05:46:50 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.839348554611206, "perplexity": 251.29020668748873}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00332.warc.gz"} |
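To tie this down numerically (an addition, not part of the original page), here is a tiny sketch of the secant slope just described for f(x) = x squared: as h shrinks, the slope of the secant line through the two points settles toward the tangent slope at x naught.

```python
def secant_slope(f, x0, h):
    """Slope of the secant line through (x0, f(x0)) and (x0 + h, f(x0 + h))."""
    return (f(x0 + h) - f(x0)) / h

f = lambda x: x**2
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, secant_slope(f, 3.0, h))   # approaches 6.0, the tangent slope at x0 = 3
```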
https://math.stackexchange.com/questions/1872747/first-and-second-fundamental-form-with-rotational-surfaces-check | # First and second fundamental form with rotational surfaces (check)
I'm working out some examples for surfaces in differential geometry. I was working out simple rotational surface, but I think I've done something wrong. Let $\gamma\left(t\right)$ a curve parametrized with length of arc and given by $$\gamma\left(t\right)=\left(a\left(t\right),\,0,\,b\left(t\right)\right)$$ Let us consider the rotational surface given by $$\varphi\left(\theta,t\right)=R_{z}\left(\theta\right)\gamma\left(t\right)=\left(a\left(t\right)\cos\theta,\,a\left(t\right)\sin\theta,\,b\left(t\right)\right).$$. I then have tangent and normal vectors $$\frac{\partial\varphi}{\partial\theta}= \left(-a\left(t\right)\sin\theta,\,a\left(t\right)\cos\theta,\,0\right),$$ $$\frac{\partial\varphi}{\partial t}= \left(\dot{a}\left(t\right)\cos\theta,\,\dot{a}\left(t\right)\sin\theta,\,\dot{b}\left(t\right)\right),$$ $$N= \left(\dot{b}\left(t\right)\cos\theta,\,\dot{b}\left(t\right)\sin\theta,\,-\dot{a}\left(t\right)\right).$$ So calculating the first and second fundamental form I should have $$E=\dot{a}\left(t\right)^{2}+\dot{b}\left(t\right)^{2}, \,\,F=0, \,\,G=a\left(t\right)^{2},$$ $$e=a\left(t\right)\left(\dot{a}\left(t\right)\ddot{b}\left(t\right)-\dot{b}\left(t\right)\ddot{a}\left(t\right)\right), f=0, g=a\left(t\right)^{2}\dot{b}\left(t\right).$$ And curvature $$K=\frac{a\left(t\right)\dot{b}\left(t\right)\left(\dot{a}\left(t\right)\ddot{b}\left(t\right)-\dot{b}\left(t\right)\ddot{a}\left(t\right)\right)}{\dot{a}\left(t\right)^{2}+\dot{b}\left(t\right)^{2}}.$$ Now If I try with the torus as a special case then $$\gamma\left(t\right)=\left(R+r\cos\left(t\right),\,0,\,r\sin\left(t\right)\right).$$ And first and second fundamental forms are $$E=r^{2}, \,\,F=0, \,\,G=\left(R+r\cos\left(t\right)\right)^{2},$$ $$e=r^{2}\left(R+r\cos\left(t\right)\right), f=0, g=\left(R+r\cos\left(t\right)\right)^{2}r\cos\left(t\right).$$ And therefore curvature is $$K=r\cos\left(t\right)\left(R+r\cos\left(t\right)\right).$$ What am I doing wrong?
• Didn't check all, but (1) the third component of $\frac{\partial \varphi}{\partial \theta}$ should be zero, (2) it seems that you did not normalize $N$. – user99914 Jul 27 '16 at 14:50
• indeed the third component was 0, I copied wrong – Dac0 Jul 27 '16 at 15:02
• For the normalization of N, I forgot to write that the curve was parametrized for length of arc so the the formula of N should be ok... – Dac0 Jul 27 '16 at 16:28
• Did you really checked that the length of $N$ is one? Which curve was parametrized by arc length? – user99914 Jul 27 '16 at 17:27
• Yes I did, the curve $\gamma$ is the one parametrized by lenght of arc. Indeed we have $\dot{a}\left(t\right)^{2}+\dot{b}\left(t\right)^{2}=1$ – Dac0 Jul 27 '16 at 23:26
## 1 Answer
The key issue was that $\dot{a}\left(t\right)^{2}+\dot{b}\left(t\right)^{2}=1$ because the curve was parametrized for lenght of arc. Then we had $$\frac{\partial\varphi}{\partial t}= \left(\dot{a}\left(t\right)\cos\theta,\,\dot{a}\left(t\right)\sin\theta,\,\dot{b}\left(t\right)\right),$$ $$\frac{\partial\varphi}{\partial\theta}= \left(-a\left(t\right)\sin\theta,\,a\left(t\right)\cos\theta,\,0\right),$$ $$N= \left(\dot{b}\left(t\right)\cos\theta,\,\dot{b}\left(t\right)\sin\theta,\,-\dot{a}\left(t\right)\right).$$ First fundamental form was $$E=1, \,\,F=0, \,\,G=a\left(t\right)^{2},$$ and second fundamental form had an error $$e=\dot{a}\left(t\right)\ddot{b}\left(t\right)-\dot{b}\left(t\right)\ddot{a}\left(t\right), f=0, g=a\left(t\right)\dot{b}\left(t\right).$$ Now the curvature is $$K=\frac{\dot{b}\left(t\right)\left(\dot{a}\left(t\right)\ddot{b}\left(t\right)-\dot{b}\left(t\right)\ddot{a}\left(t\right)\right)}{a\left(t\right)}.$$ Case of the Torus: $$\gamma\left(t\right)=\left(R+r\cos\left(t\right),\,0,\,r\sin\left(t\right)\right).$$ And if we want $\gamma\left(t\right)$ to be parametrized for lenght of arc we need$r^{2}=1$ So that $$E=1, \,\,F=0, \,\,G=\left(R+r\cos\left(t\right)\right)^{2},$$ $$e=1, f=0, g=\left(R+r\cos\left(t\right)\right)r\cos\left(t\right).$$ And the curvature is correctly $$K=\frac{r\cos\left(t\right)}{R+r\cos\left(t\right)}.$$ | 2019-06-17 18:41:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9596894383430481, "perplexity": 247.70604134605404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998558.51/warc/CC-MAIN-20190617183209-20190617205209-00345.warc.gz"} |
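As a sanity check (added here, not part of the original thread), the computation can be repeated symbolically with SymPy for the general torus parametrization, without the arc-length normalization; the expected simplified result is cos(t)/(r(R + r cos(t))), which matches the accepted answer once r = 1.

```python
import sympy as sp

t, theta, R, r = sp.symbols('t theta R r', positive=True)
a, b = R + r*sp.cos(t), r*sp.sin(t)                      # profile curve of the torus
phi = sp.Matrix([a*sp.cos(theta), a*sp.sin(theta), b])   # surface of revolution

phi_t, phi_th = phi.diff(t), phi.diff(theta)
E, F, G = phi_t.dot(phi_t), phi_t.dot(phi_th), phi_th.dot(phi_th)
W2 = sp.simplify(E*G - F**2)                             # |phi_t x phi_theta|^2

n_raw = phi_t.cross(phi_th)                              # unnormalized normal
L = phi.diff(t, 2).dot(n_raw)                            # e * sqrt(W2)
M = phi.diff(t).diff(theta).dot(n_raw)                   # f * sqrt(W2)
N = phi.diff(theta, 2).dot(n_raw)                        # g * sqrt(W2)

K = sp.simplify((L*N - M**2) / W2**2)                    # Gaussian curvature (eg - f^2)/(EG - F^2)
print(K)   # expect cos(t)/(r*(R + r*cos(t)))
```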
http://math.stackexchange.com/questions/9520/is-this-a-known-special-function | # Is this a known special function?
Is this a known special function:
$$\int\nolimits_0^1 a^p(1-a)^{1-p}\,b^{1-p}\,(1-b)^p \,dp\qquad ?$$
I am really only interested in maximizing this over $(a,b)$ in $[0,1] \times [0,1]$, so a pointer to a nice numerical evaluation is appreciated as much or more so than an unstable exact formula.
Thanks for any help
-
title: "known" :) – futurebird Nov 9 '10 at 3:50
What's the range of your $p$? – J. M. Nov 9 '10 at 8:43
You can get a closed-form answer.
$$\int_0^1 a^p (1-a)^{1-p} b^{1-p} (1-b)^p dp = b(1-a) \int_0^1 \left(\frac{a(1-b)}{b(1-a)}\right)^p dp = \left. \frac{b(1-a)}{\ln \frac{a(1-b)}{b(1-a)}} \left(\frac{a(1-b)}{b(1-a)}\right)^p \right|_0^1$$ $$= \frac{a(1-b) - b(1-a)}{\ln a + \ln (1-b) - \ln b - \ln (1-a)} = \frac{a-b}{\ln a + \ln (1-b) - \ln b - \ln (1-a)}.$$
This holds if $a \neq b$ and if neither of $a$ or $b$ is 0 or 1. If $a = b$, then instead we have $$a(1-a) \int_0^1 dp = a - a^{2}.$$ And, of course, if $a$ or $b$ is 0 or 1 then the value of the integral is 0.
So, as far as maximizing, you can use the usual approach of finding where both partial derivatives are 0. I haven't worked through the calculations, but I strongly suspect that because of the symmetry in $a$ and $b$ that the maximum value will occur at $a = b$.
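On the numerical side (an addition, not part of the original answer), the closed form above can be checked against direct quadrature and then maximized with a simple grid search; the function names here are my own, and the grid search should land on the suspected diagonal maximum at a = b = 1/2, where the value is 1/4.

```python
import numpy as np
from scipy.integrate import quad

def closed_form(a, b):
    """The closed-form value derived above (0 < a, b < 1)."""
    if np.isclose(a, b):
        return a - a**2
    return (a - b) / (np.log(a) + np.log(1 - b) - np.log(b) - np.log(1 - a))

def by_quadrature(a, b):
    return quad(lambda p: a**p * (1 - a)**(1 - p) * b**(1 - p) * (1 - b)**p, 0, 1)[0]

print(closed_form(0.3, 0.7), by_quadrature(0.3, 0.7))   # the two should agree

grid = np.linspace(0.01, 0.99, 197)
best = max((closed_form(a, b), a, b) for a in grid for b in grid)
print(best)   # expect the maximum near a = b = 0.5, with value 0.25
```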
-
I think the denominator should be $\ln a - \ln(1-a) - \ln b + \ln(1-b)$ instead. Note that the integrand is identical for $a = b = t$ and for $a = b = 1-t$, but your result is not. – Rahul Nov 9 '10 at 4:29
@Rahul: Thanks. Fixed. – Mike Spivey Nov 9 '10 at 4:32
Shouldn't the numerator be $\left(\frac{a(1-b)}{b(1-a)}\right)-1$? After all $\int x^p dp=x^p/\ln{x}$ so make the big fraction x. – Ross Millikan Nov 9 '10 at 5:09
@Ross: I think I've accounted for that. Remember that the expression you give is being multiplied by $b(1-a)$ to obtain the numerator in my expression. – Mike Spivey Nov 9 '10 at 5:28
You're right, I missed the $b(1-a)$ in front of the integral sign. – Ross Millikan Nov 9 '10 at 5:36 | 2016-06-28 18:47:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9590283036231995, "perplexity": 342.44064964765647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00145-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://forum.math.toronto.edu/index.php?PHPSESSID=iomshjma37n3nl8cp0vs9lbpj7&topic=907.0;prev_next=next | ### Author Topic: Week 2 Quiz? (Read 1857 times)
#### Patrick Fraser
• Newbie
• Posts: 2
• Karma: 0
##### Week 2 Quiz?
« on: January 16, 2018, 07:29:24 PM »
My apologies if this was mentioned somewhere else; I looked for it and did not find it. In the practice problems form, it says that there will be no quiz during week 3 however, will there be one this week? Thank you.
#### Victor Ivrii
• Elder Member
• Posts: 2599
• Karma: 0
##### Re: Week 2 Quiz?
« Reply #1 on: January 16, 2018, 09:36:22 PM »
This is week 3!
#### Patrick Fraser
• Newbie
• Posts: 2
• Karma: 0
##### Re: Week 2 Quiz?
« Reply #2 on: January 16, 2018, 09:40:32 PM »
Oh whoops! I am in your Wednesday lecture and so I was confused since the second lecture is tomorrow but this clears that up. Thank you very much for the clarification.
#### Victor Ivrii
• Elder Member
• Posts: 2599
• Karma: 0
##### Re: Week 2 Quiz?
« Reply #3 on: January 16, 2018, 09:45:19 PM »
This is why weeks 1--2 are together: for other sections week 1 was very short but for us it was really short ! | 2022-07-04 00:32:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097298741340637, "perplexity": 13223.12921342372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104277498.71/warc/CC-MAIN-20220703225409-20220704015409-00037.warc.gz"} |
https://ham.stackexchange.com/questions/9778/how-does-an-swr-meter-really-work/9779 | # How does an SWR meter really work?
For ham radio operators, the SWR meter is a ubiquitous piece of equipment. There are dozens of standalone models on the market and most modern transceivers and antenna tuners have this functionality built in.
Any ham with an interest in antennas has probably had their hands on SWR meter. Operating them is relatively straight forward but understanding how they work seems to be another matter.
There is an abundance of descriptions in texts, magazines, and on the Internet that attempt to describe how an SWR meter (or its close cousin a directional watt-meter) works. Some of these even state that the meter is able to actually separate the forward and reverse power, voltage, or current. After looking at the schematics for many of these devices, this seems dubious. Others attempts at describing its underlying mechanisms, such as How Should this SWR Meter's Directional Coupler Work don't seem to come to a consensus.
How do these instruments really work? What is the basic math behind it?
• Isn't it true that the final power amplifier stage(s) of a transmitter designed/rated to drive a 50Ω load Z typically have a source Z of several ohms (not 50)? If it was 50, then 1/2 of the r-f output power of the transmitter when connected to a 50Ω load connected as suggested in Example 1 would be dissipated within the transmitter.We know both from theory and accurate, calorimetric power measurements of such transmitters when driving a 50Ω load that their final stage d-c input to r-f output power conversion efficiencies can be >80%. If that is true for *100*Ω, is it also true for 0Ω and ∞Ω? – Richard Fry Jan 24 '18 at 12:23
• There are several flaws in the experiments above. In #1 the result is true only if the "SWR meter" was calibrated to a Zo of 50Ω or 200Ω. If it were calibrated to 100Ω it would read 1:1. The same comment applies to #2. The output impedance of a transmitter can be anything. In a tube transmitter with a Pi output network, the effective output impedance is LESS than 50Ω. If it were 50Ω, the amplifier could never achieve more than 50% efficiency. In a broadband solid state amplifier, the output transformer and lowpass network are designed to operate into a 50Ω impedance. The efficiency is usually – Dick Frey Oct 9 '18 at 21:34
• Dick - welcome to Amateur Radio on Stack Exchange. Your "answer" should really have been entered as a comment to my answer. Nevertheless, I did amend my answer to clarify that the SWR meter for the two test cases is a 50 ohm SWR meter. – Glenn W9IQ Oct 9 '18 at 21:49
• @MikeWaters I already flagged it but thought I would give Dick a response. – Glenn W9IQ Oct 9 '18 at 23:52
• @DickFrey Regarding the output impedance of the transmitter, this will have no effect on the SWR. It will affect the maximum voltage and current but that is out of scope of this question. – Glenn W9IQ Oct 11 '18 at 10:10
Dispelling the Myth
To begin with, the typical HF SWR meter does not have the ability to separately sample the forward and reverse power, voltage, or current. Any description of the device or its circuitry that suggests this capability is flawed. We can show this empirically with two different experiments.
Experiment 1
Connect a 100 ohm resistor directly to the output of a 50 ohm SWR meter (no coax cable) and directly connect the input of the SWR meter to the transmitter (no coax cable). The resistor will dissipate all the power that the transmitter can put out into the 100 ohm load - no reflections of voltage, current or power since there is no transmission line. Yet the meter will show a 2:1 SWR.
Experiment 2
Connect the transmitter directly to the input of the 50 ohm SWR meter. On the output of the SWR meter connect a 75 ohm coaxial cable and attach a 75 ohm load to its end. Since the load matches the Zo (characteristic impedance) of the coax cable, there are no reflections of voltage, current or power on the coax cable. Yet the meter will show an SWR of 1.5:1.
How Does it Work?
The typical HF SWR meter works by sampling the complex voltage and current at the point of insertion from which it calculates the effective SWR at the point of insertion on a transmission line with a characteristic impedance of 50 ohms (or whatever impedance for which the SWR meter is designed).
The term "effective" is used here because the calculation is performed regardless of whether a transmission line is present or not and regardless of the actual characteristic impedance of any transmission line that is present.
Sampling the Voltage
SWR meters use one of three different methods to sample the complex voltage at the point of insertion.
(Schematic: the three voltage sampling circuit variants, created using CircuitLab.)
In each case, the voltage sampling circuit steps down the higher voltage that is present on the transmission line at the point of insertion to a more manageable lower voltage for the SWR meter circuit. In the case of the resistive and capacitive divider circuits, the upper element is typically adjustable to allow the SWR meter to be calibrated (more about this later).
The voltage present on the transmission line at the point of sampling is the complex sum of all of the forward voltages plus the sum of any reflected voltages resulting from a mismatched load and source. This can be expressed as:
$$V_\text{line}=V_f+V_r \tag 1$$
where Vf is the complex forward voltage and Vr is the complex reflected voltage at the point of sampling.
The sampled voltage from any of the above circuits can then be expressed as:
$$V_1=(V_f+V_r)*k_1 \tag 2$$
where k1 is the scaling constant as determined by the voltage sampling circuit design.
Sampling the Current
Nearly every SWR meter uses the same technique to sample the complex current that is present on the transmission line at the point of insertion. The technique involves a circuit that at first glance looks like a step-up voltage transformer with a load resistor on the secondary, but it is actually a special configuration known as a wide band current transformer.
A wide band current transformer converts the RF current that is passing through its primary side, to a proportional RF voltage on the secondary side. The conversion takes place by placing a load resistor (sometimes called the burden) on the secondary side. The load resistor must be much smaller than the characteristic impedance of the secondary side of the transformer for this current to voltage conversion to be proportional.
(Schematic: the wide band current transformer sampling circuit.)
The transformer is typically a toroid device with the center conductor of the transmission line passing through the hole of the toroid to form a 1 turn primary winding with the secondary winding wrapped multiple times through the toroid form.
The complex current that is present on the transmission line at the point of sampling is the difference between the forward current and the reflected current:
$$I_\text{line}=I_f-I_r \tag 3$$
where If is the complex forward current and Ir is the complex reflected current.
The complex voltage resulting from the sampled current using a wide band current transformer can then be expressed as:
$$V_2=(I_f-I_r)*k_2 \tag 4$$
where k2 is the transformation factor, in volts/amp, as determined by the wide band current transformer circuit design.
Calculating SWR and Power
We now need a way to use the sampled voltage and current to calculate the SWR as well as the forward and reflected power. Most equations for calculating these values involve knowing the forward voltage and the reflected voltage. But so far we only have V1 which is proportional to the sum of these complex voltages as shown in equation 2. There is, however, another way of expressing the complex current that is present at the sampling point that can help us:
$$I_\text{line}=\frac{V_f-V_r}{Z_o} \tag 5$$
where Zo is the characteristic impedance of the transmission line, typically 50 ohms in amateur radio applications.
We can then substitute equation 5 into equation 4:
$$V_2=(V_f-V_r)*\frac{k_2}{Z_o} \tag 6$$
Now we subtract V2 in equation 6 from V1 in equation 2:
$$V_1-V_2=\Bigl((V_f+V_r)*k_1\Bigr)-\Bigl((V_f-V_r)*\frac{k_2}{Z_o}\Bigr) \tag 7$$
With a Zo of 50 ohms, if we set the k2 to k1 ratio to 50, equation 7 is greatly simplified:
[Edit: Formulas 8 and 9 have been updated]
$$V_1-V_2=((V_f+V_r)-(V_f-V_r))*k_1=V_r*2*k_1 \tag 8$$
Keeping the same k2 to k1 ratio but adding V1 and V2:
$$V_1+V_2=((V_f+V_r)+(V_f-V_r))*k_1=V_f*2*k_1 \tag 9$$
Since the 2*k1 term is a constant known to the designer, it is easily factored out in subsequent applications of equations 8 and 9.
This adding and subtracting of V1 and V2 had in the past been accomplished with a switch on the SWR meter. Now it is more common that V1 is fed into a center tap of the secondary of the wide band current transformer. With the appropriate circuit values, one leg of the transformer is then V1+V2 while the other leg is V1-V2.
Since power is proportional to voltage squared, equation 8 gives us a voltage that is proportional to reflected power while equation 9 gives us a voltage that is proportional to forward power. Each of these voltages are fed to their respective meter movement where the logarithmic scale drawn on the meter face does the conversion of the linear meter deflection, that is based on voltage, to power.
The conversion of forward power and reflected power to SWR is given as:
$$SWR=\frac{1+\sqrt{\frac{P_r}{P_f}}}{1-\sqrt{\frac{P_r}{P_f}}} \tag {10}$$
Or alternatively the conversion of Vf and Vr to SWR is given as:
$$SWR=\frac{1+(V_r/V_f)}{1-(V_r/V_f)} \tag {11}$$
The meter face pictured above shows the intersection of the two needles on the 2:1 SWR line which corresponds to equation 10 for the powers shown. The SWR meter designer simply plots a number of SWR values on the meter corresponding to the intersection of the forward and reflected powers.
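To make equations (1) through (11) concrete, here is a small numeric sketch (my own, not taken from any actual SWR meter design) that models the two sampled voltages for a load connected directly at the insertion point, as in Experiment 1, and recovers the effective forward and reflected voltages and the SWR; the value of k1 is an arbitrary sampler scale factor.

```python
Z0 = 50.0            # design impedance of the meter
k1 = 0.01            # voltage-sampler scale factor (arbitrary assumed value)
k2 = Z0 * k1         # current-sampler volts/amp, chosen so k2/k1 = Z0 (the calibration condition)

def swr_meter(Z_load, V_line=100.0):
    """Model the sampled voltages for a load placed right at the insertion point."""
    I_line = V_line / Z_load            # line current at the sampling point
    V1 = V_line * k1                    # eq. (2): proportional to Vf + Vr
    V2 = I_line * k2                    # eq. (6): proportional to Vf - Vr
    Vf = abs(V1 + V2) / (2 * k1)        # eq. (9)
    Vr = abs(V1 - V2) / (2 * k1)        # eq. (8)
    swr = (1 + Vr / Vf) / (1 - Vr / Vf) if Vr < Vf else float('inf')   # eq. (11)
    return Vf, Vr, swr

for Z_load in (50, 100, 25, 75, 100 + 50j):
    print(Z_load, swr_meter(complex(Z_load)))
# Expect SWR 1.0 into 50 ohms, 2.0 into 100 or 25 ohms (Experiment 1), 1.5 into 75 ohms.
```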
Calibrating the SWR Meter
The typical calibration routine for the SWR meter is to attach a resistive load that is equal to the Zo of the feedline directly to the output of the meter. A transmitter outputs the appropriate amount of power to the input of the SWR meter. The voltage dividing network is then adjusted such that Pr is equal to 0.
Since Pr is proportional to (Vr)2, we can see from equation 7 that this adjustment is simply ensuring that k2/k1=50 for an SWR meter that has been designed for 50 ohm feedline.
• In Experiment 1: How can a 100 Ω (non-inductive) resistor dissipate "all the power that the transmitter can put out," under the stated conditions? If that is true for 50 Ω, is it also true for 0 Ω and ∞ Ω? – Richard Fry Jan 24 '18 at 12:14
• @richardfry In both cases, the transmitter is putting out zero power so the 0 Ω and ∞ Ω load are not dissipating anything yet they are dissipating everything... These polar cases are not usable, however, as a test case for the SWR meter. – Glenn W9IQ Jan 24 '18 at 12:46
• RE: "without a transmission line, there is no reflection...." A reflection is produced by an impedance discontinuity -- which can occur either with or without a transmission line in the circuit. – Richard Fry Jan 24 '18 at 14:55
• The Ham Shack chat might be good, gentlemen. Also, I would like to see if you can answer the question about that dish antenna there. (Or, I can move all these comments here to its own chat.) I find the discussion between you two intriguing, but it should really not continue here. And once this is resolved, it might be good to start a new question about this. – Mike Waters Jan 24 '18 at 16:00
• @Philfrost-w8ii I see it quite differently. If you simply measure voltage and current to compute watts in this case (like your simple resistor example), it will be the transmitter output power less attendant losses. That has nothing directly to do with forward or reflected power nor SWR. – Glenn W9IQ Mar 22 '18 at 18:26 | 2019-01-16 19:33:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5219877362251282, "perplexity": 1001.8291779974132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657557.2/warc/CC-MAIN-20190116175238-20190116201238-00635.warc.gz"} |
https://skyciv.com/docs/load-generator/wind/site-analysis-for-wind-load-calculations/ | SkyCiv Documentation
Your guide to SkyCiv software - tutorials, how-to guides and technical articles
# Site Analysis for Wind Load Calculations
## Using the “Select Worst Case Wind Source Direction” in Load Generator
Site analysis is crucial in wind load calculations. By conducting this analysis, we can determine the worst wind source direction to generate the largest design wind speed and pressures.
In the SkyCiv Load Generator, this can be done by clicking the “Select Worst Case Wind Source Direction” button. However, this is only available for ASCE 7, NSCP 2015, and AS/NZS 1170 for now. Soon, we will be adding the other reference codes to aid your design process. The button will be enabled after the Basic Wind Speed has been pulled from our server.
Try our SkyCiv Load Generator!
Site data for wind load calculation.
By clicking the “Select Worst Case Wind Source Direction” button above the map image, it will generate the terrain sectors for each direction. It will generate the parameters for each direction and calculate the corresponding velocity pressures. Note that the radius of the sectors is equal to 2 miles for imperial units and 2 kilometers for metric units.
For ASCE 7 and NSCP 2015, the default Exposure Category is B. This will affect the value of Velocity Pressure Exposure Coefficient $${K}_{z}$$ as this depends on the Exposure Category of the upwind direction. The $${K}_{z}$$ value will be calculated at 15 ft for imperial and 4.5m for metric units. This is only to compare the calculated velocity pressure differs for each direction. This factor will be then recalculated at the mean roof height, $$h$$, for the design wind pressure. Moreover, the topographic factor, $${K}_{zt}$$, is calculated at $$z = 0 m$$ as this is the location where the maximum effect of topography can be considered.
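As a rough illustration of the per-direction comparison described above (not how SkyCiv implements it), a sketch in the spirit of ASCE 7 is shown below; the velocity-pressure formula is the standard q = 0.00256·Kz·Kzt·Kd·V² in psf with V in mph, while the Kz values per exposure category and the per-sector categories are illustrative assumptions rather than values taken from the standard's tables.

```python
# Illustrative only: compare candidate wind source directions by velocity pressure.
KZ_AT_15FT = {'B': 0.57, 'C': 0.85, 'D': 1.03}   # assumed example Kz values at 15 ft

def velocity_pressure_psf(V_mph, exposure, Kzt=1.0, Kd=0.85):
    """q = 0.00256 * Kz * Kzt * Kd * V^2 (lb/ft^2), evaluated at 15 ft for comparison."""
    return 0.00256 * KZ_AT_15FT[exposure] * Kzt * Kd * V_mph**2

# Example: upwind exposure category assigned to each of eight sectors.
sectors = {'N': 'B', 'NE': 'B', 'E': 'C', 'SE': 'C', 'S': 'D', 'SW': 'C', 'W': 'B', 'NW': 'B'}
worst = max(sectors, key=lambda d: velocity_pressure_psf(115, sectors[d]))
print(worst, round(velocity_pressure_psf(115, sectors[worst]), 1))  # the governing direction
```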
Initial values for the worst case wind source direction data.
The generated editable terrain/exposure category sectors in the map.
For AS/NZS 1170, the default Terrain Category is 2.5. This will affect the value of Terrain/Height Multiplier, $${M}_{z}$$, as this depends on the Terrain Category of the upwind direction. This $${M}_{z}$$ factor is calculated at 3m height for comparison with all the other directions. This factor will be then recalculated at the mean roof height, $$h$$, for the design wind pressure. Moreover, each direction has a corresponding Direction Multiplier, $${M}_{d}$$, which definitely has an impact on the Design wind speed. Moreover, the topographic multiplier, $${M}_{t}$$ is calculated at $$z = 0$$ as this is the location where the maximum effect can be considered.
To update each Exposure or Terrain Category, you just need to click the sectors inside the Google Map. It will change its color to indicate that the Exposure/Terrain Category is updated and will also show the information of exposure or terrain category selected. After this process, just click the “Select Worst Case Wind Source Direction” button again to update the values in the table.
The updated terrain category for each direction.
You just need to re-click the “Select Worst Case Wind Source Direction” button again to check the changes. The maximum velocity pressure value in the table will be highlighted and can be clicked to automatically load the direction and the corresponding exposures/terrain category to the site data tab.
The updated parameters in determining the worst case wind source direction.
In addition, you can double-check and edit the calculated topographic factor/multiplier on the Ground Elevation chart per direction. This factor will be then saved and used in the recalculation of the Table data to determine the worst-case wind source direction.
The elevation chart where you can modify the topographic factor/multiplier per wind source direction in the site analysis.
Once you’ve edited the elevation data per wind direction, you just need to re-click the “Select Worst Case Wind Source Direction” button to show you the changes in the the wind speed/pressure.
The updated velocity pressure after changing the topographic factor in the Elevation chart.
All of this process with just a few clicks! Take advantage of this feature by signing up for a Professional Account or by purchasing the standalone Load Generator module!
For additional resources, you can use these links: | 2023-02-06 06:54:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4001447260379791, "perplexity": 2078.540128530713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500304.90/warc/CC-MAIN-20230206051215-20230206081215-00025.warc.gz"} |
https://usatt.simplycompete.com/exp/index?tri=14368&uai=8666 | ## Rating Explainer
Jiwei Xia
USATT#: 86921
| Coach Level: National | Certified Umpire
#### 2628 2613
$6000 Presper Financial Architects Open, 6 May 2022 - 7 May 2022. This page explains how Jiwei Xia (USATT# 86921)'s rating went from 2628 to 2613 at the $6000 Presper Financial Architects Open held on 6 May 2022 - 7 May 2022. The links below take you to pages that describe various aspects of the ratings processor.
### Initial Rating
The ratings processor goes through 4 passes in order. The links take you to pages that provide detailed explanations of the calculations during each of the 4 passes. | 2023-02-07 11:54:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.410100519657135, "perplexity": 7225.218239306101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00233.warc.gz"} |
https://thewestcoastreader.com/quiz/concern-over-hand-sanitizer/ | # Concern over hand sanitizer
What is a safe hand sanitizer made from?
What can happen if you use an unsafe hand sanitizer?
Where can you see if a hand sanitizer has been recalled? | 2021-07-28 18:25:59 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8709148168563843, "perplexity": 5957.496441150939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153739.28/warc/CC-MAIN-20210728154442-20210728184442-00111.warc.gz"} |
https://www.csdn.net/tags/MtjaAgxsMjc0NjUtYmxvZwO0O0OO0O0O.html |
Note 1: Translating every video's English subtitles into Chinese takes too much time, so to speed up my progress I will pause that work and only add a few annotations to the English subtitles.
Note 2: Put the .flv video file and the .srt subtitle file from the Subtitles folder into the same folder, then open and play it in Xunlei Kankan and the subtitles will load automatically.
Word Games
What you can learn:
Managing complexity.
Large sets of words.
Appropriate data structures.
Lesson 6
Video link:
Lesson 6 - Udacity
Course Syllabus
Lesson 6: Word Games
Lesson 6 Course Notes (mainly the web page with the English subtitles for the course videos.)
Lesson 6 Code
Lesson 6 words4k.txt file
01 Welcome Back
Hi, welcome back. So far in this class we covered a lot of programming techniques, but we mostly done it with small examples of code. In this unit, we’re going to look at a larger example than anything we seen before. We’re going to write an algorithm for finding the highest scoring play in a crossword纵横字谜) tile(牌;麻将牌) game. Now, versions of this game go by names like Scrabble(乱摸;扒寻) and Words with Friends. So we’re going to have to represent everything about the words, the tiles, the board, the scoring and the algorithm for finding the highest scoring word. That’s going to be more code I’m going to be writing a lot of it and you’re going to get practice reading, that’s an important skill but I’m also going to stop and leave you plenty of places where you can write some other code and at any point, if you want a bigger challenge, you can stop the video and go ahead yourself and try to solve as much of it as you can on your own and I would encourage you to do that. This is a big step-up(stepup 加速的;增强的). I think you are all ready for it. So let’s get started.
02 Word Games
We’ve got a lot to cover and not much time to do it so let’s dig right in. Here’s a game I’m playing online with my friend, Ken.
I’m winning by a little bit mostly because I got good letters like the Z and so on but Ken is catching up.
Let’s dig right in(dig in 全力以赴地做起来) and come up with our concept inventory. What have we got?
Well, the most obvious thing, there’s a board.
there’s letters–both letters on the board and letters in the hand, and the letters on the board have to form words and in the hand they’re not.
There’s the notion(概念,观念;) of a legal(合法的) play on the board, so RITZY is a word, and it’s a word independent of where it appears, but it’s legal to have placed it here where it hooks up(hook up 连接) with another letter, and it wouldn’t have been legal to place it where it bumps into(撞上;偶然遇见) the H or where it’s not attached to anything else.
There’s the notion of score and the score for individual letters. Z is worth 10. An I is worth 1.
And there are scores for a play where you add up the letters. Part of that is that there are bonuses(bonus 奖金,额外津贴;红利) on the board. DL means double letter score. A letter that’s placed there gets doubled. DW means double word score. If any letter of the word is on that square, then the whole word score is doubled, and we also have triples as well.
Somewhere behind the scenes, there’s a dictionary and all these words are in the dictionary and other combinations of letters are not.
Then not shown here is the notion of a blank tile. Part of the hand might be a blank that isn’t indicating any particular letter, but you’re free to use for any one, similar to the way we had jokers in the game of poker.
(Incomplete notes:
Z : 10 points
I : 1 point
DL : Double Letter score
DW : Double Word score
TL : Triple Letter
TW : Triple Word
blank tile : part of the hand may be a blank that does not stand for any particular letter, but you are free to use it as any one letter.
)
03 Concept Inventory
Now let’s talk about how to implement any of these, see if there’s any difficulties, any areas that we think might be hard to implement.
The board can be some kind of two-dimensional array, maybe a list of lists is one possibility. One thing I’m not quite clear on now is do I need one board or two? It’s clear I need one board to hold all the letters, but then there’s also the bonus squares. Should that be part of the same board or should that be a separate board and the letters are layered on top of this background of bonus squares? I’m not quite sure yet, but I’m not too worried about it, because I can make either approach work.
A letter can be one character string.
A word can be a string.
A hand can also be a string. It could also be a list of letters. Either one would be fine. Any collection of letters would be okay. Note that a set would not work for the hand. The hand can’t be a set of letters, because we might have duplicates, and sets don’t allow duplicates.
Now, for the notion of a legal play, we’ll have some function that generates legal plays, given a board position and a hand, and then the plays themselves will need some representation. Maybe they can be something like a tuple of say starting position– for example, “RITZY” starts in this location, the direction in which they’re going– are they going across or down, the two allow about directions–and the word itself. In this case, RITZY. That seems like a good representation for a legal play.
I’m not quite sure yet what the representation of a position or a direction should be, but that’s easy enough.
A score–we’ll have some function to compute the score.
For letters, we can have a dictionary that says the value of Z is 10.
For plays we’ll need some function to compute that.
For the bonus squares, we’ll need some mapping from a position on the board to double word or triple letter or whatever.
A dictionary is a set of words.
The blank letter–well, we said letters were strings, so that’s probably okay. We could use the string space or the string underscore, to represent the blank. Then it’s dealing with it that will be an issue later on. Now, I’m a little bit worried about blanks, because in poker Jokers(joker 纸牌百搭;纸牌中可当任何点数用的一张) were easy. We just said, replace them by any card and just deal with all the possibilities. Our routines are fast enough that we could probably deal with them all. Here I’m pretty confident we can make it fast enough that that approach will work, but it doesn’t quite work because not only do we have to try all possibilities for the letter, but the scoring rules are actually different. When you use a blank instead of a letter, you don’t get the letter scores for that blank. We’ll have to have scoring know about blanks and not just know about filling things in. That’ll be a complication. But overall I went through all the concepts. I’ve got an implementation for both.
Some of them are functions that I don’t quite know how to do, but I don’t see anything that looks like a show stopper. I think I can go ahead. The difficulty then is not that I have to invent something new in order to solve one of the problems.
The difficulty is just that there’s so much.
When faced with a problem of this size or problems can be much larger, the notion(概念;观念) of pacing(领先于) is an important one.
What do I mean by that? It means I want to attack this, and I know I’m not going to solve it all at once. I’m not just going to sit down for 20 minutes and knock out(淘汰;击败;出局) the whole problem. It’s going to be a lot longer than that.
I want to have pacing in that I have intermediate(中间的) goals along the way where I can say, okay, now I’m going focus on one part of the problem, and I’m going to get that done. Then when I’m done with that part, then I can move on to the next part.
If you don’t have that pacing, you can lose your focus. You can get discouraged that there’s so much left to do. But if you break it up into bite-sized(很小的) pieces, then you can say, okay, I’m almost there. I just have to finish a little bit more, and now this piece will be done, and then I can move on to the next piece.
The first piece I’m going to look at is finding words from a hand. In other words, I’m going ignore the whole board. I’m going to say pretend the board isn’t there and pretend all we have is the hand, and we have the dictionary, a set of legal words. I want to know out of(由于;用…(材料);得自(来源)) that hand, what words in the dictionary can I make?
04 Finding Words
Let’s get started. The first thing I need is to come up with a dictionary of all the words.
Now, we’ve created a small file with about 4,000 words in it, called “words4k.txt.”
Let’s take that file, read it, convert it to uppercase, because Scrabble(乱摸;扒寻) with Words with Friends use only uppercase letters, split it into a list of words, assign that to a global variable– we’ll call it WORDS and put it in all uppercase, just make sure that it stands out. Let’s make this a set so that access to it is easy. We can figure out very quickly whether a word is in the dictionary. Okay, so now we’re done.
(Supplementary note: a quick look at the file() method.
file() is essentially an alias of open(), a built-in function used to create a file object. As far as I can tell, file() only exists in Python 2.x and was removed in Python 3.x; time is limited, so I won't dig into it further.
)
WORDS = set(file('words4k.txt').read().upper().split())
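Since file() no longer exists in Python 3, an equivalent line using open() (my own adaptation, not from the course code) would be:

```python
with open('words4k.txt') as f:
    WORDS = set(f.read().upper().split())
```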
We have our words. Then I want to find all the words within a hand. So the hand will be seven letters, and I want to find all the words of seven letters or less that can be made out of those letters. I’m going start with a very straightforward approach, and then we’re going to refine(提炼;改善) it over time. Here is what I’ve done:
(Why at most 7 letters?
Look at the very first image at the top: the hand at the bottom of the image holds at most 7 letters.
)
def find_words(hand):
    "Find all words that can be made from the letters in hand."
    results = set()
    for a in hand:
        if a in WORDS: results.add(a)
        for b in removed(hand, a):
            w = a+b
            if w in WORDS: results.add(w)
            for c in removed(hand, w):
                w = a+b+c
                if w in WORDS: results.add(w)
                for d in removed(hand, w):
                    w = a+b+c+d
                    if w in WORDS: results.add(w)
                    for e in removed(hand, w):
                        w = a+b+c+d+e
                        if w in WORDS: results.add(w)
                        for f in removed(hand, w):
                            w = a+b+c+d+e+f
                            if w in WORDS: results.add(w)
                            for g in removed(hand, w):
                                w = a+b+c+d+e+f+g
                                if w in WORDS: results.add(w)
    return results
(Listing the results of a few calls to the removed function should help clarify what it does:
>>> removed('letter', 'l')
'etter'
>>> removed('letter', 't')
'leter'
>>> removed('letter', 'set')
'lter'
>>> removed('letter', 'setter')
'l'
)
I haven’t worried about repeating myself and about making the code long. I just wanted to make it straightforward. Then I said, the first letter a can be any letter in the hand. If that’s a word, then go ahead and add that to my set of results. I start off with an empty set of results, and I’m going to add as I go. Otherwise, b can be any letter in the result of removing a from the hand. Now the word that I’m building up is a + b–two-letter word. If that’s a word, add it. Otherwise, c can be any letter in the hand without w in it– the remaining letters in the hand. A new word can is a + b + c. If that’s in WORDS, then add it, and we just keep on going through, adding a letter each time, checking to see if that’s in the WORDS, adding them up.
(Supplementary note: a quick look at how replace() is used:
>>> s
'cheese'
>>> s.replace('e', 'f')
'chffsf'
>>> s.replace('e', 'f', 1)
'chfese'
s.replace(a, b) replaces every occurrence of the character a in the string s with the character b;
s.replace(a, b, 1) replaces only the first occurrence of the character a in s with b.
)
Here’s my definition of removed:
(The comments below contain my understanding of this function.)
# It takes a hand or a sequence of letters and then the letter or letters to remove.
def removed(letters, remove):
    "Return a str of letters, but with each letter in remove removed once."
    # iterate over each character L in the remove string
    for L in remove:
        # replace the first occurrence of L in letters with '' (i.e. delete it once)
        letters = letters.replace(L, '', 1)
    return letters
It takes a hand or a sequence of letters and then the letter or letters to remove. For each of those letters just replace the letter in the collection of letters with the empty string and do that exactly once, so don’t remove all of them. Then return the remaining letters.
Does it work? Well, if I find words with this sequence of letters in my hand, it comes back with this list.
>>> find_words('LETTERS')
set(['ERS', 'RES', 'RET', 'ERE', 'STREET', 'ELS', 'REE', 'SET', 'LETTERS', 'SER', 'TEE', 'RE', 'SEE', 'SEL', 'TET', 'EL', 'REST', 'ELSE', 'LETTER', 'ET', 'ES', 'ER', 'LEE', 'EEL', 'TREE', 'TREES', 'LET', 'TEL', 'TEST'])
>>>
That looks pretty good. It’s hard for me to verify right now that I found everything that’s in my dictionary, but it looks good, and I did a little bit of poking around in the dictionary for likely things, and all the words I could think of that weren’t in this set were not in the dictionary–that’s why they weren’t included. So that looks pretty good. I’m going to be doing a lot of work here, and I’m going to be modifying this function and changing it, so I’d like to have a better set of tests than just one test.
05 Regression Tests
I made up a bigger test: a dictionary that maps from each hand to the set of words that I found.
hands = { ## Regression test
'ABECEDR': set(['BE', 'CARE', 'BAR', 'BA', 'ACE', 'READ', 'CAR', 'DE', 'BED', 'BEE',
'BEAR', 'AR', 'REB', 'ER', 'ARB', 'ARC', 'ARE', 'BRA']),
'AEINRST': set(['SIR', 'NAE', 'TIS', 'TIN', 'ANTSIER', 'TIE', 'SIN', 'TAR', 'TAS',
'RAN', 'SIT', 'SAE', 'RIN', 'TAE', 'RAT', 'RAS', 'TAN', 'RIA', 'RISE',
'ANESTRI', 'RATINES', 'NEAR', 'REI', 'NIT', 'NASTIER', 'SEAT', 'RATE',
'RETAINS', 'STAINER', 'TRAIN', 'STIR', 'EN', 'STAIR', 'ENS', 'RAIN', 'ET',
'STAIN', 'ES', 'ER', 'ANE', 'ANI', 'INS', 'ANT', 'SENT', 'TEA', 'ATE',
'RAISE', 'RES', 'RET', 'ETA', 'NET', 'ARTS', 'SET', 'SER', 'TEN', 'RE',
'NA', 'NE', 'SEA', 'SEN', 'EAST', 'SEI', 'SRI', 'RETSINA', 'EARN', 'SI',
'SAT', 'ITS', 'ERS', 'AIT', 'AIS', 'AIR', 'AIN', 'ERA', 'ERN', 'STEARIN',
'TEAR', 'RETINAS', 'TI', 'EAR', 'EAT', 'TA', 'AE', 'AI', 'IS', 'IT',
'REST', 'AN', 'AS', 'AR', 'AT', 'IN', 'IRE', 'ARS', 'ART', 'ARE']),
'DRAMITC': set(['DIM', 'AIT', 'MID', 'AIR', 'AIM', 'CAM', 'ACT', 'DIT', 'AID', 'MIR',
'CAT', 'ID', 'MAR', 'MA', 'MAT', 'MI', 'CAR', 'MAC', 'ARC', 'MAD', 'TA',
'ARM']),
'ADEINRST': set(['RIA', 'ENDS', 'RISE', 'IDEA', 'ANESTRI', 'IRE', 'RATINES', 'SEND',
'NEAR', 'REI', 'DETRAIN', 'DINE', 'ASIDE', 'SEAT', 'RATE', 'STAND',
'DEN', 'TRIED', 'RETAINS', 'RIDE', 'STAINER', 'TRAIN', 'STIR', 'EN',
'END', 'STAIR', 'ED', 'ENS', 'RAIN', 'ET', 'STAIN', 'ES', 'ER', 'AND',
'ANE', 'SAID', 'ANI', 'INS', 'ANT', 'IDEAS', 'NIT', 'TEA', 'ATE', 'RAISE',
'ARTS', 'SET', 'SER', 'TEN', 'TAE', 'NA', 'TED', 'NE', 'TRADE', 'SEA',
'AIT', 'SEN', 'EAST', 'SEI', 'RAISED', 'SENT', 'ADS', 'SRI', 'NASTIER',
'RETSINA', 'TAN', 'EARN', 'SI', 'SAT', 'ITS', 'DIN', 'ERS', 'DIE', 'DE',
'AIS', 'AIR', 'DATE', 'AIN', 'ERA', 'SIDE', 'DIT', 'AID', 'ERN',
'STEARIN', 'DIS', 'TEAR', 'RETINAS', 'TI', 'EAR', 'EAT', 'TA', 'AE',
'AD', 'AI', 'IS', 'IT', 'REST', 'AN', 'AS', 'AR', 'AT', 'IN', 'ID', 'ARS',
'ART', 'ANTIRED', 'ARE', 'TRAINED', 'RANDIEST', 'STRAINED', 'DETRAINS']),
'ETAOIN': set(['ATE', 'NAE', 'AIT', 'EON', 'TIN', 'OAT', 'TON', 'TIE', 'NET', 'TOE',
'ANT', 'TEN', 'TAE', 'TEA', 'AIN', 'NE', 'ONE', 'TO', 'TI', 'TAN',
'TAO', 'EAT', 'TA', 'EN', 'AE', 'ANE', 'AI', 'INTO', 'IT', 'AN', 'AT',
'IN', 'ET', 'ON', 'OE', 'NO', 'ANI', 'NOTE', 'ETA', 'ION', 'NA', 'NOT',
'NIT']),
'SHRDLU': set(['URD', 'SH', 'UH', 'US']),
'SHROUDT': set(['DO', 'SHORT', 'TOR', 'HO', 'DOR', 'DOS', 'SOUTH', 'HOURS', 'SOD',
'HOUR', 'SORT', 'ODS', 'ROD', 'OUD', 'HUT', 'TO', 'SOU', 'SOT', 'OUR',
'ROT', 'OHS', 'URD', 'HOD', 'SHOT', 'DUO', 'THUS', 'THO', 'UTS', 'HOT',
'TOD', 'DUST', 'DOT', 'OH', 'UT', 'ORT', 'OD', 'ORS', 'US', 'OR',
'SHOUT', 'SH', 'SO', 'UH', 'RHO', 'OUT', 'OS', 'UDO', 'RUT']),
'TOXENSI': set(['TO', 'STONE', 'ONES', 'SIT', 'SIX', 'EON', 'TIS', 'TIN', 'XI', 'TON',
'ONE', 'TIE', 'NET', 'NEXT', 'SIN', 'TOE', 'SOX', 'SET', 'TEN', 'NO',
'NE', 'SEX', 'ION', 'NOSE', 'TI', 'ONS', 'OSE', 'INTO', 'SEI', 'SOT',
'EN', 'NIT', 'NIX', 'IS', 'IT', 'ENS', 'EX', 'IN', 'ET', 'ES', 'ON',
'OES', 'OS', 'OE', 'INS', 'NOTE', 'EXIST', 'SI', 'XIS', 'SO', 'SON',
'OX', 'NOT', 'SEN', 'ITS', 'SENT', 'NOS'])}
The idea here is that this test is not so much proving that I’ve got the right answer, because I don’t know for sure that these are the right answers. Rather, this is what we call a regression test, meaning as we change our program we want to make sure that we haven’t broken anything–that we haven’t changed the behavior of our functions.
Even if I don’t know this is exactly the right set, I want to know, when I make a change, whether I have changed the result here. I’ll be able to rerun this and ask: have we done exactly the same thing? I’ll also be able to time the results of running these various hands and see if we can make our function faster. Here is my list of hands–I’ve got eight hands.
Then I did some further tests here.
def test_words():
    assert removed('LETTERS', 'L') == 'ETTERS'
    assert removed('LETTERS', 'T') == 'LETERS'
    assert removed('LETTERS', 'SET') == 'LTER'
    assert removed('LETTERS', 'SETTER') == 'L'
    t, results = timedcall(map, find_words, hands)
    for ((hand, expected), got) in zip(hands.items(), results):
        assert got == expected, "For %r: got %s, expected %s (diff %s)" % (
            hand, got, expected, expected ^ got)
    return t
>>> test_words()
0.5527249999
I’m testing removing letters–got all those right. Then I’m going through the hands, using the timedcall() function that we built last time. That returns the elapsed time and a list of results. I make sure all the results are what I expected, and then I return the time elapsed for finding all the words in those eight hands.
It turns out it takes half a second. That kind of worries me; it doesn’t sound very good. Sure, if I were playing Scrabble with a friend and they replied in half a second, that’d be pretty good–much better than me, for example. In this game here it says I haven’t replied to my friend Ken in 22 hours. Half a second is a lot better than that, but still, if we’re going to be doing a lot of work trying to find the best possible play, half a second to evaluate eight hands doesn’t seem fast enough.
Why is find_words() so slow? One thing is that it’s got a lot of nested loops, and it always does all of them. A lot of that is wasteful. For example, say the first two letters in the hand are z and q. At the very start w is z + q, and now I loop through all the combinations of all the other letters in the hand, trying to find words that start with zq–but there aren’t any words in the dictionary that start with zq. As soon as I get here, I should be able to figure that out and not do all of the rest of these nested loops.
(
Why is find_words() so slow?
Say the hand holds two letters starting z, q: the dictionary has no word beginning with zq, yet the function above still keeps descending into the inner loops–extremely inefficient.
)
What I’m going to do is introduce a new concept that we didn’t see before in our initial listing of the concepts, but which is an important one–the notion of a prefix of a word. It’s important only for efficiency and not for correctness–that’s why it didn’t show up the first time. The idea is that given a word there are substrings, which are prefixes of the word.
The empty string is such a prefix. Just W is a prefix. W-O is a prefix. W-O-R is a prefix.
Now, we always have to decide what we want to do with the endpoints. For the way I want to use it, I do want to include the empty string as a valid prefix, but I don’t want to include the entire string W-O-R-D. I’m not going to count that as a prefix of the word; that is the word. I’m going to define this function prefixes(word). It’s pretty straightforward: just iterate through the range, and the prefixes of W-O-R-D are the empty string and the three longer strings. Now here’s the first bit that I want you to do for me. Reading in our list of words from the dictionary is a little complicated in that we want to compute two things–a set of words, and the set of prefixes of all the words in the dictionary, the union of the prefixes of each word. I’m going to put that together into a function readwordlist(), which takes the file name and returns these two sets. I want you to write the code for that function here.
(My answer:
def readwordlist(filename):
    """Read the words from a file and return a set of the words
    and a set of the prefixes."""
    file = open(filename)   # opens file
    text = file.read()      # gets file into string
    wordset = set(text.split())
    prefixset = []
    for word in wordset:
        prefixset += prefixes(word)
    prefixset = set(prefixset)
    return wordset, prefixset
)
Here’s my answer. The wordset is just like before.
def readwordlist(filename):
    wordset = set(open(filename).read().upper().split())
    # The line below iterates over every word in wordset and, for each word,
    # over all of its prefixes; in other words, it collects every prefix of
    # every word in wordset into one set.
    prefixset = set(p for word in wordset for p in prefixes(word))
    return wordset, prefixset
Read the file, uppercase it, and split it to get the wordset. For the prefixset, we go through each word in the wordset and then each prefix p of that word, collect all those prefixes into a set, and return both. Now let’s see what these prefixes can do for us.
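The rest of these notes use WORDS and PREFIXES as globals, so presumably the two sets get assigned once at load time, something like this (my own sketch; the exact sizes depend on the contents of words4k.txt):

WORDS, PREFIXES = readwordlist('words4k.txt')
len(WORDS)      # about 4,000 words, per the lecture
len(PREFIXES)   # rather more, since every word contributes several prefixes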
(PREFIXES is the prefixset from above.
Each loop in the code below gains one line, if a/w not in PREFIXES: continue. The idea: draw a tile a from the hand; if a is not in PREFIXES, don’t descend into the inner loops–skip ahead and continue with the next tile a. If a is in PREFIXES, draw a second tile b; if the combination a+b is in PREFIXES, keep descending, and if not, continue to the next b, and so on.
)
def find_words(letters):
    results = set()
    for a in letters:
        if a in WORDS: results.add(a)
        if a not in PREFIXES: continue
        for b in removed(letters, a):
            w = a+b
            if w in WORDS: results.add(w)
            if w not in PREFIXES: continue
            for c in removed(letters, w):
                w = a+b+c
                if w in WORDS: results.add(w)
                if w not in PREFIXES: continue
                for d in removed(letters, w):
                    w = a+b+c+d
                    if w in WORDS: results.add(w)
                    if w not in PREFIXES: continue
                    for e in removed(letters, w):
                        w = a+b+c+d+e
                        if w in WORDS: results.add(w)
                        if w not in PREFIXES: continue
                        for f in removed(letters, w):
                            w = a+b+c+d+e+f
                            if w in WORDS: results.add(w)
                            if w not in PREFIXES: continue
                            for g in removed(letters, w):
                                w = a+b+c+d+e+f+g
                                if w in WORDS: results.add(w)
    return results
I can define a new version of find_words(). It looks exactly like the one before, except that at each level of the loop we add one statement that says: if the word we’ve built up so far is not a prefix of any word in the dictionary, then there’s no sense doing any of the nested loops below, and we can continue on to the next iteration of the current loop. That’s what the continue statement says: don’t do anything below; rather, go back to the for loop we’re nested in and go through its next iteration.

Normally, I don’t like the continue statement, and instead of saying if w not in PREFIXES: continue, I would’ve said if w in PREFIXES, then do this. But that would’ve introduced another level of indentation for each of these seven levels, and I’d be running off the edge of the page, so here I grudgingly accepted the continue statement. The code looks just like before; I’ve just added seven lines–the exact same line, indented to different levels, all the way through a, b, c, d, e, f, and g.

Now if I run the test_words function again, I get not half a second but 0.003 seconds. That’s nice and fast–about 150 times faster than before, 2000 hands per second. The function is long and ugly, but it’s fast enough. Still, I’d like to clean it up. I don’t like repeating myself with code like this, and I don’t like that it only works for exactly seven letters. I may need more than that: there are only seven letters in a hand, but sometimes you combine letters in the hand with letters on the board, and this function won’t be able to deal with that.
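If you want to see the speedup yourself, you could keep the unpruned version around under a different name and compare the two with timedcall (a sketch: find_words_slow is a hypothetical name for the earlier version, and timedcall is the helper from the previous unit, listed later in these notes):

t_slow, _ = timedcall(map, find_words_slow, hands)
t_fast, _ = timedcall(map, find_words, hands)
print t_slow / t_fast    # roughly 150x on these eight hands, per the lecture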
07 Extend Prefix
In order to improve this function, I have to ask myself: what does each of the loops do, and can I implement that some way other than with nested loops? The answer seems to be that each of the nested loops increments the word by one letter–from abcd to abcde–then checks to see if we have a new word, and checks whether we should stop because what we have so far is not a prefix of any word in the dictionary. If I don’t want nested loops, what I want instead is a recursive procedure. I’m going to have the same structure as before: start off by initializing the results to the empty set, have something that adds elements to that set, and then return the results I’ve built up. Then I’m going to start the loops in motion by making a call to this recursive routine. What I want you to do is fill in the code here.
Problem:
# -----------------
# User Instructions
#
# Write a function, extend_prefix, nested in find_words,
# that checks to see if the prefix is in WORDS and
# adds that to results if it is.
#
# If not, your function should check to see if the prefix
# is in PREFIXES, and if it is should recursively add letters
# until the prefix is no longer valid.
def prefixes(word):
    "A list of the initial sequences of a word, not including the complete word."
    return [word[:i] for i in range(len(word))]

def removed(letters, remove):
    "Return a str of letters, but with each letter in remove removed once."
    for L in remove:
        letters = letters.replace(L, '', 1)
    return letters
def readwordlist(filename):
    file = open(filename)
    text = file.read().upper()
    wordset = set(word for word in text.splitlines())
    prefixset = set(p for word in wordset for p in prefixes(word))
    return wordset, prefixset
def find_words(letters):
    results = set()
    def extend_prefix(w, letters):
        if w in WORDS: ### Your code here.
        if w not in PREFIXES: return
        for L in letters:
            ### Your code here.
    extend_prefix('', letters)
    return results
(I spent a while on this and couldn’t work it out, so let’s look at Peter’s answer.)
def find_words(letters):
    results = set()
    def extend_prefix(w, letters):
        if w in WORDS: results.add(w)
        if w not in PREFIXES: return
        for L in letters:
            extend_prefix(w+L, removed(letters, L))
    extend_prefix('', letters)
    return results
(The first if statement in extend_prefix(w, letters) looked incomplete on screen, yet in the video Peter runs it just fine. The full line must be if w in WORDS: results.add(w)–that’s what adds each completed word to results–and it’s restored above.)
The answer is that we do the nested loop the same way we did the first loop–by calling extend_prefix. What is the word we’ve built up so far? It’s the w we had before, plus the letter L we’re looping over, appended to the end. And what are the remaining letters we have for extending that word? The letters we had before, with L removed. That’s all there is to it. Now if we run test_words again, the speed is almost the same–0.003 something–but the function is more concise, more readable, and more general, in that it will take any number of letters.

Now, there are a lot of variations on this. If you type “import this” into a Python interpreter, you get a little set of aphorisms, almost like a poem, called “The Zen of Python” by Tim Peters. One of them says “Flat is better than nested.” We can take out this nested function–instead of having it inside, we can make it flat, like this:
def find_words(letters):
    return extend_prefix('', letters, set())

def extend_prefix(pre, letters, results):
    if pre in WORDS: results.add(pre)
    if pre in PREFIXES:
        for L in letters:
            extend_prefix(pre+L, letters.replace(L, '', 1), results)
    return results
I’ve also made a small change here: removed works when you’re removing any number of letters, but if I only want to remove one letter, I can just call the built-in method letters.replace directly. When we call test_words() on this, just to make sure we haven’t broken anything, it verifies okay, and the speed is about the same. You could keep it like this–it’s a good approach, and I’m pretty happy with this one. But notice what we’re doing here: find_words() is just a wrapper around extend_prefix(), which takes letters plus two extra arguments–the prefix found so far and the results set to accumulate into. Instead of having one function call a second, we could do it all in one function if we made those two extra things optional arguments.
def find_words(letters, pre='', results=None):
    if results is None: results = set()
    if pre in WORDS: results.add(pre)
    if pre in PREFIXES:
        for L in letters:
            find_words(letters.replace(L, '', 1), pre+L, results)
    return results
We could do it like that–just one function, find_words(), which takes letters plus an optional prefix and an optional results set to accumulate into. Now, in terms of pacing, let’s stop here. Let’s congratulate ourselves and say we’ve done our job. We’ve come up with find_words(), and for any set of letters it finds all the words in the dictionary that can be made from that hand. Furthermore, it does that at a speed of 2000 hands per second, which seems pretty good. We’ve achieved our first milestone. Now we should relax, congratulate ourselves, have a drink or whatever it is you need to do, and when you’re ready to come back, we can start the next leg of the journey.
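(One Python detail worth flagging in the version above: the signature says results=None with an explicit if results is None: results = set(), rather than results=set() in the signature itself. Default argument values are evaluated once, at definition time, so a mutable default would be shared across calls. A small illustration of the trap–my own example, not from the lecture:

def collect(x, results=set()):    # WRONG: the one default set is shared by every call
    results.add(x)
    return results

collect('A')    # set(['A'])
collect('B')    # set(['A', 'B']) -- stale state left over from the first call
)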
Let’s go back to our list of concepts and ask: what have we done so far, and what’s next? We think we did a good job with the dictionary, and we did a good job with our hands here. In terms of legal play, well, we’ve got words, so we’re partway there, but we haven’t hooked up the words to the board. Maybe that’s the next thing to do: do a better job of hooking up the letters in the hand with the words in the dictionary and placing them in the right place on the board.

I don’t want to have to deal with the whole board yet; let’s just deal with one letter at a time. Say there is one letter on the board, and I have my hand, and I want to say I can play W-O-R and make up a word. To make it a little more satisfying than a single letter, let’s say there is a set of possible letters already on the board, and we can place our words anywhere. We’re not going to worry about placing letters that run off the board, and we’re not going to worry about letters that run into another word. We’re just going to ask: what words can I make out of my hand that connect with either a D or an X or an L?

So I need a strategy for that. Let’s consider one letter at a time. What I need to find is all the plays that take letters in my hand–say [HANDSIE] are the seven letters in my hand–combine them with a D, and form a word. What can those words consist of? They can have some prefix, which can be any prefix in our set of prefixes that comes solely from the letters in my hand; then the letter D, which is already there and doesn’t have to come from my hand; and then some more letters. I’ll think of this as a prefix plus a suffix, where I make sure I know that the D is already there.

Here is word_plays. It takes a hand and a set of letters that are on the board, and it’s going to find all possible words that can be made from that hand, connecting to exactly one of the letters on the board. We break each word up into a prefix that comes only from the hand, then the letter from the board, and then the remainder–the suffix–that comes from the hand. The same structure as before: we start off with an empty set of result words, and in the end we return that set. In between, we go through all the possible prefixes that come exclusively from the hand, then the possible letters on the board, and we add a suffix to the prefix-plus-board-letter from the letters in the hand–except we can no longer use the letters in the prefix. find_prefixes is just like find_words, except we’re collecting things that are in PREFIXES rather than things that are in the list of words.

Now I want you to write add_suffixes. Given a hand, a prefix that we found before, and a results set that you want to put things into, find me all the words that can be made by adding letters from the hand onto the prefix.
(When implementing this function, don’t worry about words running off the edge of the board, or about words covering other letters.)
Problem:
# -----------------
# User Instructions
#
# Write a function, add_suffixes, that takes as input a hand, a prefix we
# have already found, and a result set we'd like to add to, and returns
# the result set we have added to. For testing, you can assume that you
# have access to a file called 'words4k.txt'.
import time
## (prefixes, readwordlist, and find_words here repeat the versions shown above; omitted.)
def word_plays(hand, board_letters):
    "Find all word plays from hand that can be made to abut with a letter on board."
    # Find prefix + L + suffix; L from board_letters, rest from hand
    results = set()
    for pre in find_prefixes(hand, '', set()):
        for L in board_letters:
            add_suffixes(removed(hand, pre), pre+L, results)
    return results
def find_prefixes(hand, pre='', results=None):
    "Find all prefixes (of words) that can be made from letters in hand."
    if results is None: results = set()
    if pre in PREFIXES:
        results.add(pre)
        for L in hand:
            find_prefixes(hand.replace(L, '', 1), pre+L, results)
    return results
"""Return the set of words that can be formed by extending pre with letters in hand."""
## (removed, as defined above; omitted.)
def timedcall(fn, *args):
    "Call function with args; return the time in seconds and result."
    t0 = time.clock()
    result = fn(*args)
    t1 = time.clock()
    return t1-t0, result
## (the hands regression-test dict, test_words(), and print test_words() repeat verbatim from above; omitted.)
(I couldn’t work this one out; let’s look at Peter’s answer.)
def add_suffixes(hand, pre, results):
    return extend_prefix(pre, hand, results)
(Another answer:)
def add_suffixes(hand, pre, results):
    "Return the set of words that can be formed by extending pre with letters in hand."
    if pre in WORDS: results.add(pre)
    if pre in PREFIXES:
        for L in hand:
            add_suffixes(hand.replace(L, '', 1), pre+L, results)
    return results
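(As a quick sanity check of the completed add_suffixes, here’s the kind of call one could try–my own example; CAR and CAT both appear in the regression sets above, so they must be in words4k.txt:
>>> add_suffixes('RT', 'CA', set())    # extend 'CA' with letters from the hand 'RT'
set(['CAR', 'CAT'])                    # plus 'CART', if the word list has it
)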
09 Longest Words
We can write some assertions here. We have seven letters in my hand and some possible letters on the board, and here’s a long list of possible plays I could make. We can already see that this would be useful for cheating–I mean, augmenting or studying your word-game play. To make it even more useful, let’s write a function that tells us what the longest possible words are. Given the definition of word_plays, write a definition of longest_words.
Problem:
# -----------------
# User Instructions
#
# Write a function, longest_words, that takes as input a hand, and a set
# of letters on the board, and returns all word plays, the longest first.
# For testing, you can assume that you have access to a file called
# 'words4k.txt'
import time
## (the support code here–prefixes, readwordlist, find_words, removed, word_plays,
## find_prefixes, and the completed add_suffixes–repeats verbatim from above; omitted.)
(This stray set looks like the expected value of a word_plays assertion–judging from the Q words, a call along the lines of word_plays('ADEQUAT', set('IRE')); the surrounding test code didn’t survive:)
set(['DIE', 'ATE', 'READ', 'AIT', 'DE', 'IDEA', 'RET', 'QUID', 'DATE', 'RATE',
'ETA', 'QUIET', 'ERA', 'TIE', 'DEAR', 'AID', 'TRADE', 'TRUE', 'DEE',
'RED', 'RAD', 'TAR', 'TAE', 'TEAR', 'TEA', 'TED', 'TEE', 'QUITE', 'RE',
'RAT', 'QUADRATE', 'EAR', 'EAU', 'EAT', 'QAID', 'URD', 'DUI', 'DIT', 'AE',
'AI', 'ED', 'TI', 'IT', 'DUE', 'AQUAE', 'AR', 'ET', 'ID', 'ER', 'QUIT',
'ART', 'AREA', 'EQUID', 'RUE', 'TUI', 'ARE', 'QI', 'ADEQUATE', 'RUT']))
def longest_words(hand, board_letters):
    "Return all word plays, longest first."
## (timedcall, the hands regression-test dict, test_words(), and print test_words() repeat verbatim from above; omitted.)
(My answer:
def longest_words(hand, board_letters):
    "Return all word plays, longest first."
    return sorted(word_plays(hand, board_letters), key=len, reverse=True)
My thought process: we need to return all the plays, so clearly we call word_plays(hand, board_letters). “Longest first” means sorting by length; I couldn’t immediately see how to write that ordering by hand, so I reached for the standard library–sort() or sorted()–which takes a key argument for sorting by length, plus a reverse flag.
)
There we go–we just generate the words from word plays, and then we sort them by length in reverse order so that longest are first.
Peter’s answer:
def longest_words(hand, board_letters):
    words = word_plays(hand, board_letters)
    return sorted(words, key=len, reverse=True)
10 Word Score
Problem:
# -----------------
# User Instructions
#
# Write a function, word_score, that takes as input a word, and
# returns the sum of the individual letter scores of that word.
# For testing, you can assume that you have access to a file called
# 'words4k.txt'
POINTS = dict(A=1, B=3, C=3, D=2, E=1, F=4, G=2, H=4, I=1, J=8, K=5, L=1, M=3, N=1, O=1, P=3, Q=10, R=1, S=1, T=1, U=1, V=4, W=4, X=8, Y=4, Z=10, _=0)
def word_score(word):
    "The sum of the individual letter point scores for this word."
(My answer:
def word_score(word):
    total = 0
    for letter in word:
        total += POINTS[letter]
    return total
)
Here’s my solution. It’s pretty straightforward. We just sum the points of each letter for every letter in the word.
Peter’s answer:
def word_score(word):
    return sum(POINTS[L] for L in word)
(Peter’s code is concise and elegant–worth learning from!)
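(A couple of worked examples, using nothing but the POINTS table above:
>>> word_score('BOOK')    # B=3 + O=1 + O=1 + K=5
10
>>> word_score('QUIZ')    # Q=10 + U=1 + I=1 + Z=10
22
)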
11 Top N Hands
Now, I want you to write me a function called topn. Again, it takes a hand, the set of letters on the board, and a number n, which defaults to 10, and gives me the n best words–the highest-scoring words according to the word_score function.
Problem:
# -----------------
# User Instructions
#
# Write a function, topn, that takes as input a hand, the set of
# current letters on the board, and a number n, and returns the
# n best words we can play, sorted by word score.
# For testing, you can assume that you have access to a file called
# 'words4k.txt'
#
# Enter your code at line 83.
import time
## (the support code–prefixes, readwordlist, removed, find_words, word_plays,
## find_prefixes, add_suffixes–and the expected word_plays test set repeat verbatim from above; omitted.)
def longest_words(hand, board_letters):
    "Return all word plays, longest first."
    words = word_plays(hand, board_letters)
    return sorted(words, reverse=True, key=len)
POINTS = dict(A=1, B=3, C=3, D=2, E=1, F=4, G=2, H=4, I=1, J=8, K=5, L=1, M=3, N=1, O=1, P=3, Q=10, R=1, S=1, T=1, U=1, V=4, W=4, X=8, Y=4, Z=10, _=0)
def word_score(word):
    "The sum of the individual letter point scores for this word."
    return sum(POINTS[L] for L in word)
def topn(hand, board_letters, n=10):
    "Return a list of the top n words that hand can play, sorted by word score."
## (timedcall, the hands regression-test dict, test_words(), and print test_words() repeat verbatim from above; omitted.)
(My answer:
def topn(hand, board_letters, n=10):
    "Return a list of the top n words that hand can play, sorted by word score."
    words = word_plays(hand, board_letters)
    return sorted(words, reverse=True, key=word_score)[:n]
Although this little snippet came out right, I still have a vague feeling that something about the first argument to sorted() is slightly off...
)
Again, pretty straightforward. We get the word plays, and we sort them in reverse order again so that the biggest are first–this time by word score–and then we just take the first n. By doing the subscripting like that, it works when n is too big, and it works when n equals None.

Now, just an aside here. As the great American philosopher Benjamin Parker once said, “With great power comes great responsibility.” We have a great power here: to go through all the words in the dictionary and come up with all the best plays. I could read in the official Scrabble dictionary, apply the board position that you saw in my game with Ken, and come up with a bunch of good plays. But that wouldn’t be fair to my friend Ken, unless we had previously agreed that it was legal and fair to do so. I’m not going to do that; I’ve got to resist that temptation. Throughout your career as an engineer, these kinds of temptations and possibilities will come up, and having strong ethics is part of learning to be a good software engineer.

So now, in terms of our pacing, we’ve achieved milestone #2. We can stop sprinting again. We can relax, have a drink, lie down, congratulate ourselves, or do whatever we want to do.
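(The claim about the slice is easy to check: Python slices clip silently at the end of a list, and a None bound means “no bound at all”, so [:n] never raises:
>>> words = ['ONE', 'TWO', 'SIX']
>>> words[:10]      # n bigger than the list: just the whole list
['ONE', 'TWO', 'SIX']
>>> words[:None]    # n is None: an open-ended slice, same result
['ONE', 'TWO', 'SIX']
)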
12 Cross Words
Let’s go back to our list of concepts, go back to our diagram of where we were, and ask: what should be the next step? What can we do now? We can take a hand and a single letter on the board and say, yes, I can pick letters out of the hand and maybe play S-I-D-E. That would be a good play–except that if there were an X here, it would not be a good play. Similarly, if there were letters in the opposite direction, that could be a bad play. But sometimes there are letters in the opposite direction and it still makes a good play, where I’m forming two words at once. The rules are that the play I make has to be all in one direction and all adjacent, forming one consecutive word; if it incidentally forms some other words in the other direction, that’s okay. But I can’t put some letters in this direction and then put some others down in that direction. I think my next goal will be to place words in a row while worrying about the crosswords in the opposite direction.
13 Anchors
Now let’s be a little more precise about what the rules are: what it means to play a word within a row, and how that hooks up to the other columns. The rules say that at least one letter you play has to be adjacent to an existing letter on the board. We’ll mark such squares with red asterisks and call them anchor squares. These are the squares we can start from; then we build out in each direction, forming consecutive letters into a single word. The anchor squares do have to be adjacent to an existing letter, but they don’t have to be adjacent within the row; they can be adjacent in either direction.

Let’s expand the board beyond a single row and populate it with some more letters. Imagine that this board goes on in both directions–there’s probably an E here or something like that. If we restrict our attention to just this row, notice that we’ve now introduced a new anchor point: this square is adjacent to an existing letter, so it also counts as an anchor.

Now we want to find a word, which consists of a prefix plus a suffix. We get to define the game: we can say that for every anchor point, the prefix is zero or more letters to the left of the anchor point, not counting the anchor point itself, and the suffix is the anchor point and everything to the right. Of course, we have to arrange things so that prefix plus suffix together form a word that is in the dictionary.

Here’s a cool play that comes from the dictionary. BACKBENCH is a word, and note that if we just have this rule of word equals prefix plus suffix, where the suffix has to start with an anchor, then there would be four possible ways of specifying this one move. We could anchor it here with no suffix (should be: prefix). We could anchor it here with these three letters as a suffix (should be: prefix). We could anchor it here with these letters as a suffix (should be: prefix). Or we could anchor it here with all these as a suffix (should be: prefix) and just H as the prefix (should be: suffix). It seems wasteful to generate the same result four times, so we can arbitrarily, and without loss of completeness, make up a rule that says there is no anchor within a prefix. We couldn’t use this one as the anchor, because then there would be anchors within the prefix; likewise, we couldn’t use this one or this one. We can only use this one as the anchor in order to generate this particular word. The anchor will also come from the hand, and the suffix can be a mix of hand and board. Here, this is the anchor; the prefix is empty; the anchor letter comes from the hand; and then there’s a mix of letters for the rest of the word.

Now, what are the rules for a prefix? Let’s summarize. A prefix is zero or more characters, it can’t cover up an anchor square, and it can only cover empty squares. For example, for this anchor square here, the prefix can go backward, but it can’t cover this anchor, so the possible lengths for this prefix are zero to two characters: any prefix can be zero characters, and here there’s room for two, but not for three, because then it would cover up an anchor. In that case, all the letters in the prefix come from the hand. But consider this anchor. For this anchor, we’re required to take these two letters as part of the prefix; we can’t go without them, because the word abuts them.
These two must be part of the prefix, and this one can’t be part of the prefix because it’s an anchor; if we wanted that, we would generate it from this anchor rather than from this one. That means the length of the prefix for this anchor has to be exactly two. Similarly, the length of the prefix for this anchor has to be exactly one: it has to include this character, because if we place a letter here, this is adjacent–it’s got to be part of the word–and this is an anchor, so we can’t cover it. So we see that the letters in a prefix either come all from the hand or come all from the board. What I want you to do is, for the remaining anchors here, tell me what the possible lengths are. Either put a single number, or a range: number-number.
(
My mistake: I confused the terms “prefix” and “suffix” between 1:40 and 1:55 in this video. Sorry about that.
-Peter
)
(
My notes:
An anchor square, marked with a red asterisk, must be adjacent to an existing letter.
For each anchor point, the prefix is zero or more letters.
)
The answers: for this anchor the prefix has got to be one character–the A. This anchor–we can’t cover another anchor, so it’s got to be zero. This anchor–we can include this one if we want, but we can’t go on to the other anchor, so it’s zero to one. Here we’ve got to include the D but nothing else, so it’s 1.

Now, there’s one more thing about anchors I want to cover: how we deal with the words in the other direction. For these five anchors there are no letters in the other direction, so they’re completely unconstrained–we say that any letter can go into those spots. But for these two anchors there are adjacent letters, and it would be okay to form a word going in this direction, but only if we can also form a word going in the other direction. Let’s say there are no more letters beyond–this is either the edge of the board or the next row is all blank. Then what letters can go here? Only the letters that form a word when that letter comes first and the second letter is U. In our dictionary, it turns out that that set of letters is M, N, and X: MU, NU, and XU are all words in our dictionary, believe it or not. The Scrabble dictionaries are notorious for having two- and three-letter words you’ve never heard of. Similarly here–what are the two-letter words that end in Y? It’s the set M, O, A, B. You’ve probably heard of most of those.

When we go to place words on a particular row, we can pre-compute the crosswords and make that be part of the anchor. We’ll have a process that goes through, finds all the anchor points, and finds the set of letters for each–whether it’s any letter, as for these five anchors, or a constrained set of anchor letters, as for these two. It sounds complicated, but we can make it all work. Let me say that once you’ve got this concept–anchor sets and crosswords–then basically we’re done. We can handle a complete board, no matter how complicated, and we can get all the plays. It’s just a matter of implementing this idea and fleshing it out.
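Pre-computing a crossword set like {M, N, X} is a one-liner once WORDS is loaded. A sketch of how it might look–cross_letters is my own hypothetical helper name; the lecture only describes the idea:

import string

def cross_letters(below):
    "Letters L such that L + below forms a word, e.g. MU, NU, XU when below is 'U'."
    return set(L for L in string.ascii_uppercase if L + below in WORDS)

cross_letters('U')    # should come out as something like set(['M', 'N', 'X'])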
14 Bird By Bird
We’ve congratulated ourselves for getting this far, but we’ve still got a ways to go. Now the question is what to do next. It may seem a little daunting–there’s so much to do–and when I get that feeling, I remember the book Bird by Bird by Anne Lamott, a very funny book. In it, she tells the story of how, when she was in elementary school, a big book report was due in which she had to write up descriptions of many different birds. She was behind, it was due soon, and she went to her father and complained, “How am I ever going to get done? I’m behind.” Her father just told her, “Bird by bird. Take the first bird, write up a report on that, then take the next bird off the list, and keep going until you’re done.” Let’s go bird by bird and finish this up. What do we have left to do? We’ve got to figure out how to put letters on one particular row while dealing with the crosswords; then we’ve got to expand from that to all the rows, and the columns as well; and then we’ve got to worry about the scoring. There are also a couple of minor things to put off, like dealing with the blanks. That’s a lot to do, so let’s go bird by bird.
15 Anchor Class
The thing I want to do next: let’s just deal with a single row at a time. Let’s not worry about the rest of the board or about going in columns–just deal with one row, but have that row handle the cross letters, the crosswords.

I’m going to need a representation for a row, and I think I’ll make that be a list. There are many possible choices, but a list is good. I choose a list rather than a tuple because I want to be able to change it in place–to modify the row as the game evolves. A row is going to be a list of squares. If a square holds a letter, we’ll just use that letter as the value of the square; if a square is an empty spot with nothing in it, I’ll use a dot to say nothing is there.

A trick you learn once you’ve done this kind of thing a lot of times: I’m going to be looking at letters, at locations in the row, and at their adjacent squares, going to the right and to the left. If I’m trying to fill in from this anchor, I’ll move left to put in the prefix and move right to extend the word. It seems like I’ll always be making checks: what’s the next square–is it a letter, is it an anchor, what is it? And, oops, did I get past the end of the board? It seems like I’d have to duplicate code to check both the case where I go off the board and the case where I don’t. One way to avoid that is to make sure you never go off the board; it cuts the amount of code roughly in half. A way to do that is to put in extra squares: one extra square on each side whose value says it is a border–not a real square you can play on. Then if I’m here at position 0 and ask for the value of square i-1, I get an answer saying it’s a border, rather than an error. I’ll use a vertical bar to indicate a border: one at the start of my row, and another at the end.

Now I’ve got borders, letters, and empty squares; the only thing left is anchors. I’ll introduce a special type for anchors. I could have used something like a tuple or a set of characters, but I want to have something in my code that says: if the value of row[i] is an instance of anchor, then do something. So we’ll make anchor be a class–a class that contains a set of letters. I can do that in a very easy way: a class statement that says the class is called anchor and is a subclass of the set class. I don’t need anything else for the definition; all I have to know is that anchors are a particular type of set–a set of allowable anchor letters. Here’s the code for that: I define the class anchor, I have all of my allowable letters, and then I say ANY is an anchor, which allows you to put any letter onto that anchor spot.
Now I want to represent this row: here are the borders, here are the empty spots, and here are the particular letters. The schematic representation as a string does not account for the fact that after the A we’re going to have two restricted anchors that have to contain particular characters, so we’ll define them, using the names mnx and moab for the two anchors restricted to only those letters. Now our row is: the border square is element number 0; the A is element number 1; then we have the two restricted anchors, two more empty spots, another anchor where anything can go, the B and the E, and so on.
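Putting that description into code, it might look like the following sketch (the names anchor, ANY, mnx, and moab and the row layout follow the prose above; the tail of the row is abridged, since the transcript only says “and so on”):

LETTERS = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')

class anchor(set):
    "An anchor square: the set of letters allowed to be placed there."

ANY = anchor(LETTERS)                        # an unconstrained anchor
mnx, moab = anchor('MNX'), anchor('MOAB')    # constrained by the cross words

# '|' is a border, '.' is an empty square, plain letters are tiles on the board
a_row = ['|', 'A', mnx, moab, '.', '.', ANY, 'B', 'E', ANY, '|']   # abridged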
16 Row Plays
There’s our whole row, and while I’m at it I might as well define a hand. Now my next target, the next bird to cross off the list, is to define a function row_plays, which takes a hand and a row in this format and returns a set of legal plays from the row. Now, rather than just return legal words, I’m using this notion of a play, where a play is a pair of a location within the row and the word that we want to play. You can imagine it’s going to take the same general approach that we’ve used before: start with an empty set, do something to it, and then return the results that we built up. What is it that we want to do? We want to consider each possible allowable prefix, and to that we want to add all the suffixes, keeping the words. Now, prefixes of what? That’s the first thing to figure out. What I’m going to do is enumerate the row–enumerate actually just the good bits, the row from the first position to the last position–and that tells me I don’t want the borders. I don’t want to consider playing on the borders. I just want to consider playing on the interior of the row. Enumerate that starting from position number 1. One would be where the A is. Now I have an index–a number 1, 2, 3–and I have the square, which is going to be A, and then an anchor and then an anchor and so on. Where do I want to consider my plays? We’re going to anchor them on an anchor, so I can ask: is the square an instance of anchor? If it is an anchor, then there are two possibilities. If it’s an anchor like this, there’s only one allowable prefix: the prefix which is the letters that are already there just to the left of the anchor. We want to consider just that one prefix and then add all the suffixes. If it’s an anchor like this one, then there can be many prefixes. We want all possible prefixes that fit into these spots here, consider each one of those, and for each one of those consider adding on the suffixes. What I’m going to do is define a function, legal_prefix, which gives me a description of the legal prefix that can occur at position i within a row. There are two possibilities. I could combine the possibilities into one, but I’m going to have a tuple of two values returned. I’m going to have legal_prefix return the actual prefix as a string if there is one, like in this case, and return the maximum size otherwise. For this anchor here, this would be legal_prefix of one, two, three, four, five, six–that’s legal_prefix when i = 6. The result would be that there are no characters to the left. It’ll be the empty string for the first element of the tuple. The maximum size of the prefix that I’m going to allow is two characters. Now, if I ask here–that’s index number one, two, three, four, five, six, seven, eight, nine–when i = 9, the result would be that the prefix is BE, and the maximum size is the same as the minimum size. It’s the exact size of 2. I define legal_prefix in order to tell me what to do next based on the two types of anchors. Now, I can go back to row_plays. I can call legal_prefix, get my results, and say if there is a prefix, then I want to add to the letters already on the board. Otherwise, I have empty space to the left, and I want to go through all possible prefixes. Here’s what we do if there is a prefix already there. Now we can calculate the start of our position. Remember a row play is going to return the starting location of the word. We can figure that out. It’s the i position of the anchor minus the length of the prefix.
In fact, let me go and change this comment here. i is not very descriptive. Let’s just call that start. Now we know what the starting location is for the word. When we find any words we can return that. Then we go ahead and add suffixes. With the suffixes, some of the letters are going to come out of the hand. We’re adding suffixes to the prefix that’s already there on the board, starting in the start location, going through the row, accumulating the results into the result set, and then I needed this one more argument. I actually made a mistake and left this out the first time, and it didn’t work. We’ll see in a bit what that’s there for. Now if we have empty space to the left of the anchor, we’ve got to go through all the possible prefixes, but we already wrote that function–find_prefixes. That’s good. Looks like we’re converging. We’re not writing that much new stuff. Now, out of all the possible prefixes for the hand, we only want to look at the ones that are less than or equal to the maximum size. If the prefix is too big, it won’t fit into the empty spot. It will run into another word, and we don’t want to allow that. We can calculate the start position again. Then we do the same thing. We add suffixes. What do we add them to? Well, the prefix that we just found from the hand. Since the prefix came from the hand, we have to subtract those prefix letters out of the remaining letters left in the hand. Here we didn’t have to subtract them out, because the prefix letters were already on the board. We’re adding to the prefix from the start, from the row, results are accumulated, and we have this anchored equals False again. We’re almost there. Just two things left to do–add_suffixes and legal_prefix. Add_suffixes we had before, but it’s going to be a little bit more complicated now, because we’re dealing with the anchors. legal_prefix is just a matter of looking to the left and seeing how much space is there.
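Putting that description together, here is a sketch of row_plays in the same Python 2 style. It leans on legal_prefix, add_suffixes, and find_prefixes, which appear below; the keyword name anchored is taken from the problem code later, and removed is a small helper I am assuming here to subtract the prefix letters from the hand:

def removed(letters, remove):
    "Return letters, with each letter in remove removed once (assumed helper)."
    for L in remove:
        letters = letters.replace(L, '', 1)
    return letters

def row_plays(hand, row):
    "Return a set of legal plays in row: (start, 'WORD') pairs."
    results = set()
    ## To each allowable prefix, add all suffixes, keeping the words
    for (i, sq) in enumerate(row[1:-1], 1):
        if isinstance(sq, anchor):
            pre, maxsize = legal_prefix(i, row)
            if pre:   ## Add onto the letters already on the board
                start = i - len(pre)
                add_suffixes(hand, pre, start, row, results, anchored=False)
            else:     ## Empty to the left: try every prefix from the hand that fits
                for pre in find_prefixes(hand):
                    if len(pre) <= maxsize:
                        start = i - len(pre)
                        add_suffixes(removed(hand, pre), pre, start, row,
                                     results, anchored=False)
    return results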
17 Legal Prefixes
Here is legal_prefix. It’s pretty easy. We start out at position i within the row, and we define s to be the starting position of any prefix to the left. We say: while there is a letter just to the left of us, decrement s by 1 (s -= 1 is the same as saying s = s - 1). If there are any letters, then s is going to be changed. If s was changed–if it’s less than i–then there is a prefix on the board, so we return those letters combined into a single word, like BE, and the size of the prefix, which is i minus s–which is two in that case. If there are no letters already on the board to the left of the anchor, then what we want to do is ask how many squares there are that are empty squares but are not anchors–because remember, we agreed that a prefix wouldn’t span over an anchor, because we don’t want to have duplicates. We only want empty squares that are not anchors. Keep going backwards, keep decrementing s while that’s true, and then return: no, there are no letters on the board to the left of me, and this is the number of empty squares that I found. In this definition I introduced the predicates is_letter and is_empty. They’re pretty straightforward. An empty square is one which is the empty character; also, there’s one character on the board that I’m going to mark with an asterisk–that’s the starting location, usually in the middle of the board. Anchors are also empty, and I have to check specifically here if I want an empty thing that’s not an anchor. And what’s a letter? A letter is a string that’s one of the allowable letters. Wow, so that was a lot. I think it’s time to put in some tests to make sure that all the code we wrote is doing the right thing, so I wrote one test here: the legal prefix for location number two within the row should be just the letter A. That makes sense–here’s position two, to the left of that is the prefix A, and we don’t want to go further and run into the border. What I want you to do is just fill in the rest of these tests–put in the proper values for this and all the others of what legal_prefix should return for this example–and figure out what makes sense to you for what the legal prefixes should be. You can also look at this definition of a row and make sure that we got that right.
Here’s the answers of what they should be for each of these positions within a row.
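For reference, the assertions would look roughly like this; the expected values are derived from the sample row I sketched above, so they are only correct under that assumed layout:

def test_legal_prefix():
    assert legal_prefix(2, a_row)  == ('A', 1)    # the A is just to the left
    assert legal_prefix(3, a_row)  == ('', 0)     # an anchor to the left: no room
    assert legal_prefix(6, a_row)  == ('', 2)     # two empty squares to the left
    assert legal_prefix(9, a_row)  == ('BE', 2)   # the letters BE are on the board
    assert legal_prefix(11, a_row) == ('C', 1)
    assert legal_prefix(13, a_row) == ('', 1)
    assert legal_prefix(15, a_row) == ('D', 1)
    return 'tests pass'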
18 Life is Good
(bump into: to run into, to meet by chance)
One more function to write–add_suffixes–and then we’ll be done with this bird. We’re given the start location. We recover the location in which we want to place the anchor letter: start at the start and then add in however much of the prefix we have. It could be the empty string. It could be more. If the prefix is already a word, then we want to add in a possible play. The play is: starting in the start location, we put in this word. But there is also this test. This test says we want to make sure that we’ve at least anchored the prefix. What does that mean? If we go back to our diagram, if we’re right here, we get passed in to add_suffixes the prefix B-E–that is, in fact, a word. B-E is in our dictionary, but we wouldn’t want to say the answer is B-E, because that’s not a play. That was already on the board. We can’t say our play is making this word that’s already there. We’ve got to add at least one letter. That’s why, when we were defining row_plays, we said we’re adding suffixes, but we haven’t anchored it yet. Anchored is equal to False. We haven’t anchored our potential prefix so far, so don’t go reporting that prefix as a valid play. Now in the definition of add_suffixes, we say, okay, if it’s the first time through, we’re not anchored, so we’re not going to return BE as a possible play, but when I do a recursive call to add_suffixes, I’ll just have the default value and anchored will be True. I will have played on the anchor, and then I’ll be okay from then on. One additional test here says that if there are existing letters already on the board, and you’re bumping into them, then you have to count them. If there’s a letter already there, don’t report everything up to that letter. That takes care of the case when we get up to this C: we can’t just put a T here and have B-E-T and report that by itself as a word, because the T is running into the C. We’ve got to continue and see what words we can have–we have to account for this C here. Okay. That takes care of adding in the play. Whether or not we found a valid play to add to our list of results, we still want to ask: are we going to continue? Can we keep going to the right adding on more letters? Well, if the prefix we have so far is within the prefixes, then yes, we do want to try to continue. What we’ll do is say: tell me what square is in the current position in the row. If it’s a letter, try to add suffixes to that–to the prefix we have so far plus the letter. If there is a letter already on the board, it’s mandatory that we use it. Otherwise, if the square is empty, then we want to figure out what are all the possible letters that we could place into that empty square. If the square is an anchor, then the anchor will tell us what the possibilities are. Remember an anchor is a set of possible letters, so if the square is an anchor, let’s use that as the set of possibilities. Otherwise, if the square is empty and it’s not an anchor, then any letter is a possibility. Now we just go through the letters in our hand and say, if that letter is a possibility, then we want to add a new suffix by placing that letter onto the prefix, removing the letter from our hand, and continuing to add suffixes from there. When we’re done with that, return the results. That’s it. Now we’re done with that bird, but let’s go back and look at our test routine. We had these tests for legal_prefix. Now, if we go into our interpreter and we run row_plays with the given hand and the row–that’s this row, where the hand is maybe ABCEHKN.
This is the result we get. That’s an awesome result. Look, we got the BACKBENCH that we saw before. We got all the smaller words. We can go through and check that each of these makes sense. They are the right ones. They don’t run into any letters or do anything wrong. It’s hard to check that we got all of them right, but we can still go ahead and make this an assertion. When we run this, all the tests pass and life is good.
19 Increasing Efficiency
(slap: to hit hard; to set something down forcefully)
However, here’s something that bothers me. In the definition of row_plays, we’re calling find_prefixes of hand inside this loop where we’re enumerating over the row, so it’s going to happen multiple times. It’s going to happen one time for every anchor. Eventually, when we have multiple rows, it’s going to happen for every anchor on every row. But notice that find_prefixes only depends on the hand. It’s not dependent on the row at all, so it seems wasteful to be recomputing find_prefixes of hand multiple times. If we were just dealing with row_plays, that would be easy enough. Up here we could say found_prefixes equals find_prefixes of hand, assign it to a variable, and then just reference the variable down here. We’d be computing it once outside of the loop rather than many times inside the loop. Eventually, we’re going to have a bigger function that calls row_plays once for each row, and we wouldn’t want to have to compute that each time within row_plays. We want to compute it just once. We could pass into row_plays the set of prefixes, but that’s just complicating the interface. I’d like to cut down on the computation without complicating the interface. In other words, I want find_prefixes, when you pass it a hand, if it hasn’t seen the hand before, to go ahead and compute all the prefixes. If it has seen the hand before, then I don’t have to re-compute it. I’ll just look it up and return it immediately. What I want you to do is take find_prefixes and make it more efficient. We could just slap a memo decorator on the front of find_prefixes, but that’s probably not exactly what we want, because note that find_prefixes is recursive–it’s going to call itself on each subcomponent of the hand. Really, we want to say I only want to remember the top-level hand. If I’ve seen that exact hand before, then give me the answer. Don’t give me the answer for all the sub-parts of the hand. You could write a different decorator that works just on those parts, although that might be hard given that it’s recursive, or you could modify the function itself, or you could have two levels of functions: one top-level function–find_prefixes–that calls another function recursively. Your choice as to how you want to handle it. However you handle it, the idea is that if you call find_prefixes with a certain hand and you get back this result, then if you make the same top-level call again, it should immediately return the same result that it saved away rather than trying to recompute it.
Problem:
# -----------------
# User Instructions
#
# The find_prefixes function takes a hand, a prefix, and a
# results list as input.
# Modify the find_prefixes function to cache previous results
# in order to improve performance.
def prefixes(word):
    "A list of the initial sequences of a word, not including the complete word."
    return [word[:i] for i in range(len(word))]

def readwordlist(filename):
    "Return a pair of sets: all the words in a file, and all the prefixes. (Uppercased.)"
    wordset = set(open(filename).read().upper().split())
    prefixset = set(p for word in wordset for p in prefixes(word))
    return wordset, prefixset

WORDS, PREFIXES = readwordlist('words4k.txt')

class anchor(set):
    "An anchor is where a new word can be placed; has a set of allowable letters."

LETTERS = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
ANY = anchor(LETTERS) # The anchor that can be any letter

def is_letter(sq):
    return isinstance(sq, str) and sq in LETTERS

def is_empty(sq):
    "Is this an empty square (no letters, but a valid position on board)."
    return sq == '.' or sq == '*' or isinstance(sq, set)

def add_suffixes(hand, pre, start, row, results, anchored=True):
    "Add all possible suffixes, and accumulate (start, word) pairs in results."
    i = start + len(pre)
    if pre in WORDS and anchored and not is_letter(row[i]):
        results.add((start, pre))
    if pre in PREFIXES:
        sq = row[i]
        if is_letter(sq):
            add_suffixes(hand, pre+sq, start, row, results)
        elif is_empty(sq):
            possibilities = sq if isinstance(sq, set) else ANY
            for L in hand:
                if L in possibilities:
                    add_suffixes(hand.replace(L, '', 1), pre+L, start, row, results)
    return results

def legal_prefix(i, row):
    """A legal prefix of an anchor at row[i] is either a string of letters
    already on the board, or new letters that fit into an empty space.
    Return the tuple (prefix_on_board, maxsize) to indicate this.
    E.g. legal_prefix(a_row, 9) == ('BE', 2) and for 6, ('', 2)."""
    s = i
    while is_letter(row[s-1]): s -= 1
    if s < i: ## There is a prefix
        return ''.join(row[s:i]), i-s
    while is_empty(row[s-1]) and not isinstance(row[s-1], set): s -= 1
    return ('', i-s)

### Modify this function. You may need to modify
# variables outside this function as well.

prev_hand, prev_results = '', set() # cache for find_prefixes

def find_prefixes(hand, pre='', results=None):
    """Find all prefixes (of words) that can be made from letters in hand."""
    if results is None: results = set()
    if pre in PREFIXES:
        results.add(pre)
        for L in hand:
            find_prefixes(hand.replace(L, '', 1), pre+L, results)
    return results
Here’s what I did. I introduced two global variables–the previous hand and previous results. I am making a cache, like a memoization cache, but it’s only for one hand, because we’re only dealing with one hand at a time. Then I say, in find_prefixes, if the hand that you were given is equal to the previous hand, then return the previous results. I’m only going to update the previous hand and the previous results in the case where the prefix is the empty string, and that’s how I know I’m at the top-level call: when the prefix is the empty string. For all the recursive calls, the prefix will be something else. I’m only storing away the results when I’m at the top-level call, and that’s where I update previous hand and previous results. With that efficiency improvement to find_prefixes, when I now do timedcalls of row_plays for this fairly complex row, it’s only about a thousandth of a second. If I had a complete board that was similarly complex–say fifteen rows or so in the board–then it’d still be around one or two hundredths of a second, and that’s pretty good performance.
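A sketch of that caching change in the same style; the global names come from the problem code above, and the structure follows the description here rather than being the verbatim solution:

prev_hand, prev_results = '', set() # cache for the most recent top-level call

def find_prefixes(hand, pre='', results=None):
    """Find all prefixes (of words) that can be made from letters in hand."""
    global prev_hand, prev_results
    if hand == prev_hand:
        return prev_results                   # same top-level hand as last time
    if results is None: results = set()
    if pre == '':                             # top-level call: remember this hand
        prev_hand, prev_results = hand, results
    if pre in PREFIXES:
        results.add(pre)
        for L in hand:
            find_prefixes(hand.replace(L, '', 1), pre+L, results)
    return results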
20 Show and Spell
(engage in: to take part in, to be occupied with)
Now back to our diagram. Let’s figure out where we are. We did row_plays, so I can check off that bird. What’s left? Well, now I want to be able to do all the plays–all the plays in all the rows and all the columns. Another thing I want to deal with is that I cheated a little bit. I engaged in wishful thinking, which is always a good design strategy, in that when I called row_plays, I gave it a hand and a row, but I made the row myself–I built that sample row that I called a_row by making a list and saying, okay, I know A is here, I know an anchor called mnx is here, and so on. I didn’t have my program construct that row. All_plays is going to have to somehow do that type of construction. It’s going to somehow have to set the anchors within the row rather than having me give them explicitly as a test. Then there’s just one more thing to deal with, which is scoring. After I’ve got all the plays, I want to be able to figure out how much each one scores and pick out the top-scoring play. I talked about pacing at the beginning of this. Now I’m starting to pick up the pace. I’m feeling pretty good now. I’m saying it was a long way, we had to run hard, but now I can start to see the finish line. We can put together one final sprint to get it all done. What do I want to do next? I want to handle complete boards, not just individual rows. Just as we did with rows, where we made up a sample row, let’s make a sample board. I define a function a_board, which returns a sample board. It’s the same one we were dealing with here. Note that I’m making this a function rather than a variable definition. The reason I’m doing this is because every time I reference a_board I want to create a new one, and I want to create a new one because I’m going to be modifying the old one. I’m going to be placing letters onto the board. I’m going to be inserting anchors into the board and modifying the board structure itself. I don’t want to be dealing with the old one that I’ve already modified. I want to make sure I have a fresh one from scratch, and so I’m going to say the only way to access this is through the function. What it does is take these strings and map list over each one. Rather than have this first row be a string, the first row will then be a list of characters. Same for all the other rows. There we can see when we call a_board we get this board, but that’s not very pretty to look at. I’d rather look at something like this, where here I’ve printed the results. Notice that I put spaces between each letter to make the board more square-like. What I’d like you to do is define a function show, which takes a board as input, prints out the result looking just like that, and returns None as its value.
Problem:
# -----------------
# User Instructions
#
# Write the function show that takes a board
# as input and outputs a pretty-printed
# version of it as shown below.
## Handle complete boards
def a_board():
    return map(list, ['|||||||||||||||||',
                      '|J............I.|',
                      '|A.....BE.C...D.|',
                      '|GUY....F.H...L.|',
                      '|||||||||||||||||'])

def show(board):
    "Print the board."
# >>> a_board()
# [['|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|'],
# ['|', 'J', '.', '.', '.', '.', '.', '.', '.', '.', '.', '.', '.', '.', 'I', '.', '|'],
# ['|', 'A', '.', '.', '.', '.', '.', 'B', 'E', '.', 'C', '.', '.', '.', 'D', '.', '|'],
# ['|', 'G', 'U', 'Y', '.', '.', '.', '.', 'F', '.', 'H', '.', '.', '.', 'L', '.', '|'],
# ['|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|', '|']]
# >>> show(a_board())
# | | | | | | | | | | | | | | | | |
# | J . . . . . . . . . . . . I . |
# | A . . . . . B E . C . . . D . |
# | G U Y . . . . F . H . . . L . |
# | | | | | | | | | | | | | | | | |
(My answer:
def show(board):
    "Print the board."
    for row in board:
        for column in row:
            print column,
        print ''
)
Here’s my answer–very simple–iterate over the rows and over the squares in each row, and print each one out. A comma at the end of the print statement says put in a space but not a new line. At the end of each row, that’s where I put in a new line.
Peter's answer:
def show(board):
    for row in board:
        for sq in row:
            print sq,
        print
21 Horizontal Plays
(To get a feel for enumerate(), here is a simple example:
>>> enumerate([5, 6], 1)
<enumerate object at 0x7fb21bc79730>
>>> for e in enumerate([5, 6], 1):
... print e
...
(1, 5)
(2, 6)
)
Now let’s do a little bit of planning. We did row_plays. What does row_plays return? Well, it’s a set of plays where each play is an (i, word) pair, where i is the index into the row where the word starts. We eventually want to get all plays. Before we can get there, I’m going to introduce another function called horizontal_plays, which does row plays across all the possible rows, but only going in the across direction, not in the down direction. That’ll take a hand and a board as input. A board is just a list of rows. It’ll return a set of plays where a play, like in a row play, is the position and the word, except now the position is not going to be just i; the position is an (i, j) pair. It’s going to be at this column in this row, along with the word. It’s a set of tuples that look like that. Let’s define horizontal_plays. Well, you know the drill by now–familiar structure. We start out with an empty set of results. We’re going to build them up somehow and then return the results. Now, how are we going to do that? Let’s enumerate over all the rows in the board. We just want the good ones–the ones from 1 to -1. We don’t want the rows at the top and the bottom, which are off the board–the border squares. For each good row, I’m going to write a function called set_anchors, which takes the row and modifies that row–mutates the row–to have all the anchors in it. Remember, before, when I called row_plays, I passed in all the anchors manually. Here, I’m going to have the program do it for me. Now, for each row, I want to find all the plays within that row and properly add them into results. I want to do something with the row plays of the hand within that row. And I want you to tell me what code should go here. It could be a single line or it could be a loop over the results that come back from row_plays. Figure out what goes here so that it can return the proper results.
Here is my answer: I call row_plays on the row, and that gives me a set of results which are of the form i–the index into the row–and a word. I can’t just add that into my results set, because that doesn’t tell me what row number I’m in. Instead, I want to add the tuple (i, j). I’ve already got the row number j. Add the tuple of the position (i, j) along with the word into the results. That’s all we have to do.
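In code, a sketch of horizontal_plays at this stage (before scoring gets added later); it assumes set_anchors(row, j, board) as described in the following sections:

def horizontal_plays(hand, board):
    "Find all horizontal plays: ((i, j), word) pairs, across every row."
    results = set()
    for (j, row) in enumerate(board[1:-1], 1):
        set_anchors(row, j, board)
        for (i, word) in row_plays(hand, row):
            results.add(((i, j), word))
    return results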
22 All Plays
Okay. Back here, check off another bird. Just one left. Well, okay, I lied. It’s not quite one left. There is also scoring we’ll have to do next, but one left for this part. So all_plays, like horizontal_plays, takes a hand and the board. What should it return? Well, it’s going to be a set. The position in which we start the word can be the same as before–an (i, j) position is perfectly good. Now, we’re also going to have some words going across and some words going down, so we want our results to be three-tuples. It’s a set of: an (i, j) position, followed by a direction–across or down–followed by a word. Now onto the all_plays function. It takes a hand and the board and is going to return all plays in both directions on any square, so a play is a position, a direction, and a word, where the position is an (i, j) pair–j picks out the row number, i the column number. The direction is either across or down, and those will just be global variables. We don’t have to decide for now how to represent them. I used a trick here: all the horizontal plays–the across plays–we get from calling horizontal_plays directly on the hand and the board. For the vertical plays, I didn’t have to write a separate function. All I have to do is transpose the board–flip the i and the j–and call horizontal_plays on that, and that gives me a set of results, but they’re the results in the wrong direction. They’re (j, i) pairs rather than (i, j) pairs. Now your task is to write the code to put that all together: take these two sets, one of which is in reversed order, so it has to be swapped around, and neither of which has a direction associated with it, and assemble them all into the resulting set that should be returned by all_plays.
Here’s my answer. I took all the (i, j, w) triples from the horizontal plays and just reassembled them with the (i, j), putting in the indication that they’re going in the across direction and keeping the same word. Then I do the same thing for the vertical plays. They came out in (j, i) order. I reassembled them back in the proper (i, j) order with an indication that we are going in the down direction. Then I took these two sets and just unioned them together. Now, I need some definition for across and down. I could have just used strings or any unique values, but I’m going to say that across is incrementing one at a time in the i direction and zero in the j direction, and down is incrementing zero in the i direction and one in the j direction.
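As a sketch (again Python 2, following the description rather than the verbatim solution):

ACROSS, DOWN = (1, 0), (0, 1)   # (delta_i, delta_j) increments for each direction

def all_plays(hand, board):
    """All plays in both directions. A play is a (pos, direction, word) triple,
    where pos is an (i, j) pair and direction is ACROSS or DOWN."""
    hplays = horizontal_plays(hand, board)              # set of ((i, j), word)
    vplays = horizontal_plays(hand, transpose(board))   # set of ((j, i), word)
    return (set(((i, j), ACROSS, w) for ((i, j), w) in hplays) |
            set(((i, j), DOWN,   w) for ((j, i), w) in vplays))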
23 Set Anchors
We’re almost there. Things are coming together very nicely. The only thing we’re missing is set_anchors. We have to somehow take a row and the row number within the board and figure out where all the anchors are and what the values of those anchors are. To find the anchors we can mostly work within the row. An anchor is something that’s next to an existing letter. That will get most of the anchors but not quite all of them, because notice, if we didn’t have this row here, we wouldn’t know that that spot is an anchor. To find all the anchors, we’re going to have to look at all the rows, or at least the two adjacent rows on either side. To find what the anchors are in terms of the set–can anything go there, as in this anchor, where any letter can appear, or is it a restricted set like this one–we’re also going to have to know what the other cross words are. Here, if there’s only a U there and nothing down below–this is the edge of the board or there is empty stuff down below–then this anchor can only be the letters that fit in there to make a word going in this direction. Let’s dive right into defining set_anchors. This is different from most of the functions written so far in that it actually mutates the row rather than returning a result. We start in the normal way. We’re going to iterate over the row–the good parts of the row. Then what I’m going to do is take the (i, j) position on the board and find all the neighbors of that position–that is, all the squares in the north, south, east, and west locations. Then I’m going to say: what are the anchors? Well, if the square is the star, the starting square, then that’s an anchor by definition. Otherwise, if the square is empty and any of the neighbors is a letter, then that’s an anchor. Now, I’ve arranged that neighbors(board) is a function that returns the neighbors in this order–north, south, east, and west–and now I’m saying, since we’re operating on a row, if the neighbor to the north or the south is a letter, then we have a cross word that we have to deal with. If not, then it’s an unrestricted anchor. What do I want to do if I have these cross words? I want to find the cross word on the board. What does that mean? In this location right here, which would be row 2 and column 2, I want to say that the word on the board is an empty square followed by a U. If I go into the interpreter, I want to be able to have this interaction, where here is my board. Now if I find cross words within that board from position 2, 2–that’s that position right there, right after the A and above the U–what I want to say is that there is a word, and the word is a dot followed by a U–there it is. If we fill in that anchor, we’re going to have a word which is a dot followed by a U. Where does the word start? Well, it starts in position 2. It could have started someplace else. If we had a big board, there might have been a word that started all the way up here and went down. We’re not actually going to call find_cross_word on positions that aren’t anchors, but if we did, it would still work. If we find cross words from position 1, 2–that’s where the A is–what’s the cross word that intersects through that A? Well, that’s JAG. It begins in row number 1. Now we’re up here: we found the cross words, we found the row that the cross word begins in, we found what the word is, and now we’re saying we’re going to fill in this location. It’s going to be an anchor.
It’s an anchor whose letters are all the letters with which we can replace the dot in '.U' and make something which is a word. We can go back to our interpreter and test that out. We can say: if W is the '.U', then what is the anchor of all the letters L where W with the dot replaced by L is in WORDS? That’s the anchor with X, M, and N as possibilities. Now we’re going to insert that anchor into row[i]–insert into this spot to the right of the A, above the U, the anchor that says an X, an M, or an N can occur right in that location. That’s setting the anchor if there are cross letters above or below. Otherwise, we have an unrestricted anchor. For example, this anchor here to the left of the D–any letter can go in there. We’ve already defined the global variable ANY to be the anchor that allows any letter to occur.
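A sketch of set_anchors along those lines; it assumes neighbors(board, i, j) returning the N, S, E, W squares and find_cross_word(board, i, j) returning (start_row, word), as described in the next section:

def set_anchors(row, j, board):
    """Anchors are empty squares with a neighboring letter (plus the '*' start
    square). Some are restricted by a cross word to a subset of letters."""
    for (i, sq) in enumerate(row[1:-1], 1):
        N, S, E, W = neighbors(board, i, j)
        if sq == '*' or (is_empty(sq) and any(map(is_letter, [N, S, E, W]))):
            if is_letter(N) or is_letter(S):
                # Restricted: only letters that complete the vertical cross word
                (j2, w) = find_cross_word(board, i, j)
                row[i] = anchor(L for L in LETTERS if w.replace('.', L) in WORDS)
            else:
                row[i] = ANY   # unrestricted: any letter fits here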
24 Final Birds
Sprinting to the finish line. We’re almost there. Just a little bit left to go. Only one more slightly difficult part. Here is find_cross_word, and because I can see the finish line, I’m not going to explain it line-by-line, but you go ahead and read it. Two other small bits: we need this list of neighbors in the north, south, east, west order, and we’ve got to be able to transpose a matrix. It turns out that one way to transpose a matrix is an old trick: you map list over the zip of the application of all the rows in the matrix. If that makes your head hurt, don’t worry about it. You can play with it a little bit to see why it works, or you could use this expression here–[[M[j][i] for j in range(...)] for i in range(...)]. Now I’m excited to see it work. I ran it. It did, in fact, work. I solidified the results by putting them into a test–examples of finding cross words, finding neighbors, transpose–and that looks like it works. Transpose of a transpose–you get back what you started with. If we set anchors on our sample board, we get back the sample row that we did by hand. If we call horizontal_plays, we get back this list now. I feel a little bit bad here that I wrote this as a test, and yet I can’t really verify that this is exactly the right answer. I haven’t gone through every word in the dictionary to figure out if I got this exactly right. This serves not so much as a unit test–a unit test is something that verifies that your function is doing the right thing–rather, this serves as a regression test. A regression test means that we’re just testing to see if we broke something. We want to get the same result this time as we got last time if we make a small change to our program. We can verify that this looks reasonable–that we got the big word we were looking for, BACKBENCH, other components of it like BENCH, and then a bunch of three-letter words, which always show up in Scrabble. That was the horizontal plays. We can also do the all_plays. It’s gratifying that all_plays is a bit longer. It’s gratifying that BACKBENCH is still there, and when we run it, it passes. Bird-by-bird–we can check off one more. Now there is only one left–scoring.
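The two small helpers, sketched in the same style (find_cross_word is longer, so I'll leave that one to the course code):

def neighbors(board, i, j):
    "The contents of the four neighboring squares, in the order N, S, E, W."
    return [board[j-1][i], board[j+1][i], board[j][i+1], board[j][i-1]]

def transpose(matrix):
    "Transpose e.g. [[1,2,3], [4,5,6]] to [[1,4], [2,5], [3,6]]."
    return map(list, zip(*matrix))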
25 Scoring
I could take the results from all_plays, where each play is a triple of position, direction, and word, and then I could add a score to those, but it seems like I’m just taking these lists apart and putting them back together so many times. I’ve already done it three times. I did a row play–took it apart, added in j for the horizontal plays–took that apart, added in a direction for all_plays. I could do that one more time to insert the score. Maybe that would have been the right choice. Maybe I just got fatigued and made a mistake in my design sense, but what I decided to do was modify my horizontal_plays and all_plays functions. There are two modifications. Here in horizontal_plays, after I get the play, I calculate a score, and then I insert that into my result. Now my results are no longer just (position, word) plays–they’re (score, position, word) plays. Then I want to do the same thing in all_plays. I want to make my play be a (score, position, direction, word) tuple, and I’m, just as before, ripping these things apart and putting them back together. Now, remember the board. It’s got these double letters, triple word scores, and so on. If you’re old-school it looks like this. This is on a piece of cardboard–a physical material. We’ve also written triple and double scores, so I need to come up with some way of representing these spots on the board and how they’re special. Now, could I squeeze it into my existing representation of a row? Remember a row is a list, and it can have things like that border, which is a string. Then we have a letter. Then we can have anchors like ANY. Could I have room for putting in information about bonus squares on the board–the double and triple letters? Could I have, say, 3W to mean triple-word score as an element of this row? I guess my intuition is I don’t think that’s going to work very well. The problem is that one of these squares–say this one–could be both an anchor and a triple-word score, so we’d need some representation that allowed both of those. That just doesn’t seem to be easy to extend from what I already have. Let’s not override our row notation. Our rows, as we have them–I’m pretty happy with them. I’m not going to allow this. I’m going to keep rows exactly as they are. I’m going to introduce another data structure–a parallel board. I’m going to have two boards: one board that I play on and another board that holds the bonuses. Think of it as two layers. One representation of a two-dimensional matrix just holds these double words and triple letter scores and so on, and then on top of that there’s a second two-dimensional array that holds the values of the letters and also holds the anchors. Oh, I got a good score. Now board[ j ][ i ] will hold the letter or anchor, and bonus[ j ][ i ] will hold the corresponding bonus–a double word, a triple letter, or just nothing.
26 Scoring 2
Now, just to give you a review of scoring if you aren’t familiar with the rules. The letters score their value–the number that’s on them–times whatever you’ve placed them on, if that’s a double or triple letter score. Here I don’t have any double or triple letter scores, so I get the individual values of the letters–10, 11, 12, 22–times the total word multiplier, and the pink squares are double word scores, so that’s a double multiplier–22 times 2 is 44 points for that. Then if the next play goes here, it has to connect. Now, this was a double word, but I don’t get any double, because I didn’t actually place a new tile over the existing double word square, so nothing is doubled, but I do get credit for the Z, even though I didn’t place the Z. I get credit for that as 10–not as 20 as it originally scored, but as its face value–10, plus 1, then this is a triple letter on a 1, so that’s plus 3, plus 1–10, 11, 14, 15. Then if the next play, say, was here, that would just score 3, 4, 5. I didn’t have any bonuses whatsoever. This bonus here doesn’t count, because I didn’t play over it. If a bonus is already covered up, it doesn’t count any more, so this would just be 5. One more scoring rule. If this is the word on the board–great play, putting the Q on a double letter and a double word as well–this would score 20, plus 1, 2, 3, 4–that’s 24–times 2 is 48. The next play could be this, which simultaneously forms 3 words–NO in this direction and then IN and NO in this direction. For NO, we’d get a double letter, so 2 plus 1 is 3. For IN, we get just 1 plus 1–that’s 3 plus 2 is 5–and for NO we get another 3, so that would be a score of 8 altogether. Here is calculate_score. It takes all these variables that we need to specify the play. We’re going to start out with a total and my word multiplier–that is, have I got any double or triple word squares–and there might be more than one of them. If I had a long word that matches up with some existing letters on the board, then I’d get credit, and if I covered a double and a triple word, then I’d multiply by 6, so I need to keep track of that. Then I also want to keep track of the cross word totals–not the word I’m playing but the other cross words–and they are separate from this word multiplier. Figure out where my starting position is from that position. Figure out the direction that I’m moving in–down or across–what the deltas are. Figure out what the other direction is. If I’m moving in the across direction, I want to know that the other direction is the down direction. Now just enumerate the word: enumerate the letters in the word and the position within the word. Figure out the square on the board and the bonus square, the word multiplier, and whether the square was already placed on the board–so, whether it is a letter–and I’ve got a function for that. I should be using it here, so this is a bad piece of code: I should be calling is_letter(sq) rather than testing directly, since I decided to make that more abstract. Figure out the word multiplier from the bonuses. You only get the bonus if the letter wasn’t already on the board. Figure out the letter multiplier–same thing. Increment my points by the points from the letter times the letter multiplier, and now, if the square is an anchor and the anchor is not one of these unrestricted anchors, then we want to look for the cross words. If there is a cross word, figure out the cross word score and add that into the total. Why do I have this "direction is not down" here?
Because cross_word_score is going to recursively call calculate_score, and we don’t want it to recurse infinitely. We just want it to recurse once. To explain that a little bit more, note that up here in horizontal_plays we’re calling calculate_score, so the only place we call calculate_score is here, and we’re calling it with the across direction. The way we get the down direction is by transposing the board. So for calculate_score, we know we’re going to be called with the across direction the first time and then the down direction the second time. Although, I guess I feel a little bit bad that the assumption that we’re always going to be called with across the first time is kind of hardwired into this. That makes calculate_score a little bit brittle. Probably I should refactor this to stop the recursion in some other way, but I’m so close to the finish line now that I don’t want to stop to clean things up. I want to get to the end. Here is cross_word_score. Figure out the position. We find the cross word–that’s a function we already wrote. Then we recursively call calculate_score.
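Pulling that description together, a sketch of the scoring pair. The bonus codes (DW, TW, DL, TL), the BONUS matrix, and the POINTS table of letter values are names I am assuming here to match the discussion, and the "across is always called first" assumption discussed above is kept:

def calculate_score(board, pos, direction, hand, word):
    "Return the total score for playing word at pos in the given direction."
    total, crosstotal, word_mult = 0, 0, 1
    starti, startj = pos
    di, dj = direction
    other_direction = DOWN if direction == ACROSS else ACROSS
    for (n, L) in enumerate(word):
        i, j = starti + n*di, startj + n*dj
        sq = board[j][i]
        b = BONUS[j][i]
        # Bonuses only count for tiles placed now, not letters already on the board.
        word_mult *= (1 if is_letter(sq) else
                      3 if b == TW else 2 if b in (DW, '*') else 1)
        letter_mult = (1 if is_letter(sq) else
                       3 if b == TL else 2 if b == DL else 1)
        total += POINTS[L] * letter_mult
        if isinstance(sq, anchor) and sq is not ANY and direction is not DOWN:
            crosstotal += cross_word_score(board, L, (i, j), other_direction)
    return crosstotal + word_mult * total

def cross_word_score(board, L, pos, direction):
    "Score the cross word made by placing letter L at pos."
    i, j = pos
    (j2, word) = find_cross_word(board, i, j)
    return calculate_score(board, (i, j2), DOWN, L, word.replace('.', L))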
27 Making the Board
Now all that’s left is to set up this bonus matrix, saying where the double and triple bonuses are. Here’s what I’ve done. I’ve just drawn a picture of the bonuses, and I called it the bonus_template. But I only drew one quadrant–one quarter of the board–because I noticed that they were all symmetric. This function bonus_template takes a quadrant in terms of a string, mirrors each row, and then mirrors the set of rows, where the mirror of a sequence is just the sequence plus the rest of the sequence reversed, except for the last element, so there’s a middle piece that we reflect it around. I made one template for the Scrabble game and one for the Words With Friends game, and then you choose which bonus you want to use. Then I defined these constants for double words, triple words, double letters, and triple letters, and I wasn’t quite sure what to use. I knew I didn’t want to use letters like d and t because I’d get confused with the letters in the hand. I used 2 and 3 for double and triple words, a colon–because it has 2 dots–for double letters, and a semicolon–because it’s a little bit bigger than a colon–for triple letters. Even though we’re so close to the end, it’s still good hygiene to write tests, so I took some time to write some, ran them, and all the tests pass. You can see here’s a tiny little bonus template–a quarter of an array–which looks like that. When you apply bonus_template to it, you get this nice symmetric array. Now what I’d like you to do is modify the show(board) function so that it prints out these bonus entries, so that if there’s no letter over this 3 in the corner it should print the 3 rather than just printing a dot.
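A sketch of the mirroring trick; the constants follow the choices described above, but the tiny quadrant in the comment is a made-up example, not the course's actual template:

DW, TW, DL, TL = '2', '3', ':', ';'   # double word, triple word, double letter, triple letter

def mirror(sequence):
    "Reflect a sequence around its last element, e.g. '|3.' -> '|3.3|'."
    return sequence + sequence[-2::-1]

def bonus_template(quadrant):
    "Expand a string picturing one quadrant into the full, symmetric bonus board."
    return mirror(map(mirror, quadrant.split()))

# For example, a hypothetical 4x4 corner like '|||| / |3.. / |.2. / |..*'
# expands into a 7x7 array that is symmetric left-right and top-bottom.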
Here’s my solution. I capture the j and i coordinates by using the enumerate function. I print the square if it’s a letter or if it’s a border square at the edge of the board. Otherwise, I print the bonus.
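Roughly like this, with BONUS being the bonus matrix chosen above (this is my reconstruction of the modified show, not the verbatim solution):

def show(board):
    "Print the board, showing bonus squares wherever no letter has been played."
    for (j, row) in enumerate(board):
        for (i, sq) in enumerate(row):
            print sq if (is_letter(sq) or sq == '|') else BONUS[j][i],
        print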
28 Making Plays
Now, one thing I’d like to be able to do is, when I get a play, actually modify the board to indicate what that play is. I want you to write a function to do that for me. It takes a play–but remember a play is a tuple of a score, a start position indicated by i and j, a direction indicated by delta i and delta j, and the actual word, a string. Let’s make this look a little bit better by unpacking it as a tuple. Write the code that will modify the board, and in addition to modifying it, let’s also return the board.
Here’s my answer–I just enumerated the letters in the word and the position into the word, updated the board, marching down from the start position j, i, and multiplying n, the position into the word, by the deltas specified by the direction.
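A sketch of that, matching the play format (score, (i, j), (di, dj), word):

def make_play(play, board):
    "Put the word from this play onto the board, and return the board."
    (score, (i, j), (di, dj), word) = play
    for (n, L) in enumerate(word):
        board[j + n*dj][i + n*di] = L
    return board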
29 Best Play
Now, very exciting. We’re at the culmination. One more function to write. That’s the best play. Given a hand and a board, return the highest scoring play. If there are no plays at all, just return None.
Here’s my answer. We’ve got all the pieces. We call all_plays and get back a collection of plays. We sort them and take the last one–that’ll be the highest. We don’t even have to specify what to sort by, because the score is the first element of the play, so we’re automatically sorting by that. Then we return the best one if there are any plays; otherwise I specified NOPLAY here, in case I change my mind, but for now I can say NOPLAY equals None. Now, I could write something that plays a complete game, but instead I’m just going to have a simple function show_best, which takes a hand and a board, displays the current board, and then displays the best play. When I type it into the interpreter, this is what I get. It found the BACKBENCH that we had sort of laid out there, scoring 64 points, and out of all the possible plays, it found the optimal one. So, we did it. We made it all the way through. Congratulations.
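A sketch of that final pair, in the same Python 2 style (the exact printed messages are mine, not the course's):

NOPLAY = None

def best_play(hand, board):
    "Return the highest-scoring play, or NOPLAY if there are none."
    plays = all_plays(hand, board)
    return sorted(plays)[-1] if plays else NOPLAY

def show_best(hand, board):
    "Display the current board and then the best play on it."
    print 'Current board:'
    show(board)
    play = best_play(hand, board)
    if play is NOPLAY:
        print 'Sorry, no legal plays'
    else:
        print 'New word: %r scores %d' % (play[-1], play[0])
        show(make_play(play, board))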
... | 2021-06-16 08:18:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5502861142158508, "perplexity": 2298.4048261724847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487622234.42/warc/CC-MAIN-20210616063154-20210616093154-00271.warc.gz"} |
https://www.rocketryforum.com/threads/drill-presses-and-drill-bits.162903/page-3 | # Drill presses and drill bits?
#### OverTheTop
TRF Supporter
IMHO consider getting a metal lathe if you are considering a lathe and can spring for the extra $. You can do metal, wood, fiberglass on a metal lathe, but on a wood lathe anything other than wood is pretty much a no-go. The disadvantage of a metal lathe turning wood is that the sawdust gets into everything and is a bit of a pain to clean. In wood lathes it easily drops through.
The other advantage of wood lathes is cost. For the same cost you get a much larger machine.
#### Bowman
##### Well-Known Member
IMHO consider getting a metal lathe if you are considering a lathe and can spring for the extra $. You can do metal, wood, fiberglass on a metal lathe, but on a wood lathe anything other than wood is pretty much a no-go. The disadvantage of a metal lathe turning wood is that the sawdust gets into everything and is a bit of a pain to clean. In wood lathes it easily drops through.
The other advantage of wood lathes is cost. For the same cost you get a much larger machine.
Good points.
Also the Metal (engine) lathe will have provision for many more procedures and fixtures like knurling and threading to name a couple.
#### Senior Space Cadet
##### Well-Known Member
TRF Supporter
Generally speaking, I think you are better off buying as good as you can afford, and probably better to go for quality over features.
In this case, I'm just not sure how much use I'm going to get out of all this stuff.
This whole fire ban thing has me worried.
I got the chuck already. I get the drill press today. Maybe a new chuck wasn't necessary, but a keyless chuck will be more convenient, if it holds tight.
#### Bowman
##### Well-Known Member
Generally speaking, I think you are better off buying as good as you can afford, and probably better to go for quality over features.
Agreed
And there is a lower limit to acceptable quality that says not to buy as well.
It may be cheap but it might ultimately cost more than the next model up just trying to get acceptable performance out of it.
#### Senior Space Cadet
##### Well-Known Member
TRF Supporter
Who is HF?
I got the drill press today.
Didn't take too long to put it together. Instructions were pretty good, except a drawing showed the table tilt handle coming in from the right and it goes in from the left.
Way overkill. I think I could have gotten away with the $86 WEN drill press. This looks like a floor drill press with a shorter post. Huge. Even with the "better" chuck I bought, there is enough runout that I can see it with my naked eye. I might try taking it off, cleaning everything better and putting it back on. Otherwise everything works. The Forstner bit set and WEN rotary tool I ordered should be ready for pickup tomorrow, but I have some bike wheels coming too and want to be home when they show up because they cost$1,600. I don't want to risk some porch pirate taking them.
Band saw should be here in a few days.
##### Lonewolf.... No Club
Who is HF?
Harbor Freight... a.k.a. Chinese Junk
Congrat's on the drill press. Post up some photo's.
#### Senior Space Cadet
##### Well-Known Member
TRF Supporter
Harbor Freight... a.k.a. Chinese Junk
Congrat's on the drill press. Post up some photo's.
After I get the rest of my shop together (band saw and bench sander), I'll try and post a photo, though I'm a little reluctant to show the complete disaster my downstairs family room is.
My "bench" isn't working out as well as I thought it would. I'm going to need to make some changes.
I'd be interested in seeing other's rocket building shops. Might make a good thread.
#### OverTheTop
##### Well-Known Member
TRF Supporter
There is already a thread for that.
#### Senior Space Cadet
##### Well-Known Member
TRF Supporter
So, I have the drill press and I've been watching a few drill press hack videos and finding out there is a lot more you can do with a drill press than I thought.
I just ordered a cross slide vise, milling bits, and a center for a combination square.
One of the things I'm thinking about is making fin slots in balsa boat tails.
I think there are, at least, a couple ways of doing that.
I'm also thinking about making flutes in nosecones.
And, of course, I want to drill holes through transitions.
One of the problems I see with all these procedures is the softness of balsa.
Holding it in place without crushing it seems like it would be tricky.
#### prfesser
Be very careful milling on the drill press. Most have a Morse taper on the chuck, so it's held in place by friction. Milling forces tend to pull the taper out. Cut too deep and you may find the chuck+Morse taper landing on your suddenly-ruined workpiece. It's possible that the chuck+taper would go flying somewhere they aren't supposed to be.
OTOH if the chuck is held in with a draw bar---think of a long bolt that goes through the spindle and screws into a threaded chuck---then mill away!
Best -- Terry
#### Senior Space Cadet
##### Well-Known Member
TRF Supporter
Be very careful milling on the drill press. Most have a Morse taper on the chuck, so it's held in place by friction. Milling forces tend to pull the taper out. Cut too deep and you may find the chuck+Morse taper landing on your suddenly-ruined workpiece. It's possible that the chuck+taper would go flying somewhere they aren't supposed to be.
OTOH if the chuck is held in with a draw bar---think of a long bolt that goes through the spindle and screws into a threaded chuck---then mill away!
Best -- Terry
Hopefully that won't happen with balsa wood. Very soft. Unless I take up another hobby, I can't see milling anything else. Well, maybe plastic.
#### Senior Space Cadet
##### Well-Known Member
TRF Supporter
I just did my first milling job.
I milled four fin slots into a V-2 boat tail, from Balsa Machining.
I made a bit of a mess out of the first one because I didn't clamp the vise down, but the other three look pretty good.
Should make a really secure and straight fin attachment.
Also should have cleaned off the vise first. Got black grease marks on the balsa.
Runout seems pretty bad. I'm guessing I need to clean everything better and put it back together.
#### Senior Space Cadet
##### Well-Known Member
TRF Supporter
I wanted to cut the shoulder off a plastic nose cone so that I could insert the shoulder of a balsa transition.
In order for that to work, I needed to taper the balsa shoulder.
With no lathe, I tried drilling a hole in the transition, inserting a dowel, then inserting the dowel in the drill press chuck.
In order for that to work, the hole needs to be exactly centered and straight.
Apparently this is beyond my skill level. I believe I started the hole centered well, but it was not straight.
So, I've ordered one of these. | 2021-01-22 17:11:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22874894738197327, "perplexity": 3828.5519716208264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703530835.37/warc/CC-MAIN-20210122144404-20210122174404-00166.warc.gz"} |
https://kidsarecrud.com/h7s9kcoh/4707c4-how-third-law-of-thermodynamics-can-be-verified-experimentally | State and explain Newton's third law of motion: a force is a push or pull acting on an object resulting from its interaction with another object, and if one object is exerting a force on another object, the other object must also be exerting a force on the first object. Relatedly, the zeroth law of thermodynamics states that two systems in thermal equilibrium with a third system are in thermal equilibrium with each other.
According to the second law, for any spontaneous process $$dS^{\mathrm{universe}} \geq 0$$, where $$dS^{\mathrm{universe}} = dS^{\mathrm{sys}} + dS^{\mathrm{surr}}$$. This criterion helps us to predict whether a process will take place or not, although it gives no information about the time required for the transformation. For heating at constant volume, $$\Delta S^{\mathrm{sys}} \approx n C_V \ln \frac{T_f}{T_i}$$, while the surroundings always absorb heat reversibly, behaving as an infinite reservoir (for example, an exothermal chemical reaction occurring in a beaker will not affect the overall temperature of the room substantially); in the worked example, $$\Delta S^{\mathrm{surr}} = \frac{-Q_{\mathrm{sys}}}{T} = \frac{5.6 \times 10^3}{263} = +21.3 \; \text{J/K}$$. The calculation of the entropy change for an irreversible adiabatic transformation requires a substantial effort and is not covered at this stage; we will return to the Clausius theorem when we seek more convenient indicators of spontaneity.
The third law of thermodynamics, in the form postulated by Nernst, states that the entropy of a system approaches a constant value as its temperature approaches absolute zero, and that the entropy of a perfect crystal can be chosen to be zero at $$T = 0 \; \text{K}$$. At absolute zero a system must be in the state with the minimum thermal energy (the ground state); if an object reached absolute zero (0 K = −273.15 °C = −459.67 °F), its atoms would stop moving, and it is impossible to achieve such a temperature experimentally. Unlike the enthalpy scale, which lacks an unambiguous zero so that standard formation enthalpies must be agreed upon, the third law fixes an unambiguous zero for entropy, and absolute entropies of pure substances can be calculated between 0 K and any temperature T. The standard reaction entropy is then $$\Delta_{\mathrm{rxn}} S^{-\kern-6pt{\ominus}\kern-6pt-} = \sum_i \nu_i S_i^{-\kern-6pt{\ominus}\kern-6pt-}$$, and a comprehensive list of standard entropies of inorganic and organic compounds is reported in appendix 16.
The third law can be verified experimentally, within the accuracy of the experiment, because entropy can be measured both calorimetrically and spectroscopically. The calorimetric entropy is measured from experimental heat capacities using $$dS = \frac{\delta q_{\mathrm{rev}}}{T}$$ (the definition of entropy involves the heat exchanged only for reversible processes, since these happen through a series of equilibrium states), and the integral of $$C_p/T$$ from 0 K to $$T$$ can only remain bounded if the heat capacity itself goes to zero as $$T \to 0$$. Where the two determinations disagree, a residual entropy can still be present even at absolute zero. An early empirical benchmark is Trouton's rule, named after Frederick Thomas Trouton (1863-1922), which puts molar entropies of vaporization at roughly 85-88 J/(mol K).
how third law of thermodynamics can be verified experimentally 2021 | 2021-05-12 19:58:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.840736985206604, "perplexity": 848.8490728974268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00139.warc.gz"} |
https://www.physicsforums.com/threads/van-der-waals-equation-virial-expansion.748291/ | Van der Waals Equation, Virial Expansion
1. Apr 11, 2014
unscientific
1. The problem statement, all variables and given/known data
Taken from Concepts in thermal Physics:
2. Relevant equations
3. The attempt at a solution
Shouldn't the van der Waals equation be:
$$p = \frac{RT}{V_m -b} - \frac{a}{V_m^2}$$
$$pV_m = \frac{V_m RT}{V_m -b} - \frac{a}{V_m}$$
2. Apr 11, 2014
TSny
Yes. So (26.41) has a wrong sign. But it looks like they got it right in (26.42) and (26.43).
3. Apr 11, 2014
Staff: Mentor
You're right. It sure looks like they made a slew of algebra mistakes. The first term on the right hand side of 26.41 isn't even dimensionally correct.
Chet
4. Apr 11, 2014
TSny
Right. I didn't notice that they should not have had the factor of V on the left side of equation 26.41.
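(An illustrative aside, not part of the original thread: with the sign corrected, expanding the first term in powers of $1/V_m$ gives the virial form
$$pV_m = RT\left(1 - \frac{b}{V_m}\right)^{-1} - \frac{a}{V_m} = RT\left[1 + \left(b - \frac{a}{RT}\right)\frac{1}{V_m} + \frac{b^2}{V_m^2} + \cdots\right],$$
so the second virial coefficient of a van der Waals gas is $B(T) = b - a/(RT)$, presumably what the book's (26.42) and (26.43) arrive at.)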
| 2017-08-22 06:47:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7786790728569031, "perplexity": 1630.7545431039846}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110471.85/warc/CC-MAIN-20170822050407-20170822070407-00421.warc.gz"}
http://drhuang.com/science/mathematics/math%20word/math/l/l330.htm | ## Liouville Function
The function
$$\lambda(n) \equiv (-1)^{\Omega(n)} \qquad (1)$$
where $\Omega(n)$ is the number of not necessarily distinct Prime Factors of $n$, with $\lambda(1) = 1$. The first few values of $\lambda(n)$ are 1, $-1$, $-1$, 1, $-1$, 1, $-1$, $-1$, 1, 1, $-1$, $-1$, .... The Liouville function is connected with the Riemann Zeta Function by the equation
$$\frac{\zeta(2s)}{\zeta(s)} = \sum_{n=1}^{\infty} \frac{\lambda(n)}{n^s} \qquad (2)$$
(Lehman 1960).
The Conjecture that the Summatory Function
$$L(n) \equiv \sum_{k=1}^{n} \lambda(k) \qquad (3)$$
satisfies $L(n) \leq 0$ for $n \geq 2$ is called the Pólya Conjecture and has been proved to be false. The first few $n$ for which $L(n) = 0$ are $n = 2$, 4, 6, 10, 16, 26, 40, 96, 586, 906150256, ... (Sloane's A028488), and $n = 906150257$ is, in fact, the first counterexample to the Pólya Conjecture (Tanaka 1980). However, it is unknown if $L(n)$ changes sign infinitely often (Tanaka 1980). The first few values of $L(n)$ are 1, 0, $-1$, 0, $-1$, 0, $-1$, $-2$, $-1$, 0, $-1$, $-2$, $-3$, $-2$, $-1$, 0, $-1$, $-2$, $-3$, $-4$, ... (Sloane's A002819). $L(n)$ also satisfies
$$\sum_{k=1}^{n} \lambda(k) \left\lfloor \frac{n}{k} \right\rfloor = \left\lfloor \sqrt{n} \right\rfloor \qquad (4)$$
where $\lfloor x \rfloor$ is the Floor Function (Lehman 1960). Lehman (1960) also gives the formulas
(5)
and
(6)
where the summation variables range over the Positive integers, $\mu$ is the Möbius Function, $M$ is the Mertens Function, and the remaining parameters are Positive real numbers.
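As an illustrative aside (our addition, not part of the original article), a few lines of Python reproduce the sequences above and check identity (4) for small n; sympy is assumed available for factorization:

import math
from sympy import factorint

def liouville(n):
    # lambda(n) = (-1)**Omega(n), with Omega(n) the number of prime factors counted with multiplicity
    return (-1) ** sum(factorint(n).values())

def L(n):
    # summatory Liouville function L(n) = sum_{k <= n} lambda(k)
    return sum(liouville(k) for k in range(1, n + 1))

print([liouville(n) for n in range(1, 13)])  # 1, -1, -1, 1, -1, 1, -1, -1, 1, 1, -1, -1
print([L(n) for n in range(1, 21)])          # 1, 0, -1, 0, -1, 0, -1, -2, ...

# Identity (4): sum_{k <= n} lambda(k) * floor(n/k) == floor(sqrt(n))
for n in range(1, 500):
    assert sum(liouville(k) * (n // k) for k in range(1, n + 1)) == math.isqrt(n)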
References
Fawaz, A. Y. "The Explicit Formula for $L_0(x)$." Proc. London Math. Soc. 1, 86-103, 1951.
Lehman, R. S. "On Liouville's Function." Math. Comput. 14, 311-320, 1960.
Sloane, N. J. A. Sequences A028488 and A002819/M0042 in "An On-Line Version of the Encyclopedia of Integer Sequences." http://www.research.att.com/~njas/sequences/eisonline.html and Sloane, N. J. A. and Plouffe, S. The Encyclopedia of Integer Sequences. San Diego: Academic Press, 1995.
Tanaka, M. "A Numerical Investigation on Cumulative Sum of the Liouville Function." Tokyo J. Math. 3, 187-189, 1980.
https://byjus.com/question-answer/in-what-ratio-is-water-to-be-mixed-with-milk-at-rs-9-60-per/ | Question
# In what ratio is water to be mixed with milk at Rs. 9.60 per litre to make the price of the mixture Rs. 9 per litre?
A) 15:2
B) 15:4
C) 15:1
D) 15:3
Solution
## The correct option is C: 15:1
Assume that the cost of water is 0. Let the quantity of water be y litres and the quantity of milk be x litres. Then the quantity of the mixture is (x + y) litres, and the given cost of the mixture is Rs. 9 per litre. Now,
9.6x = 9(x + y)
9.6x = 9x + 9y
9.6x - 9x = 9y
0.6x = 9y
x/y = 9/0.6 = 90/6 = 15
So milk : water = 15 : 1.
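A quick numerical check of the algebra (our illustration, not part of the original solution):

milk_price, mix_price = 9.60, 9.00   # Rs per litre; water assumed to cost 0
ratio_milk_to_water = mix_price / (milk_price - mix_price)
print(ratio_milk_to_water)           # 15.0, i.e. milk : water = 15 : 1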
| 2022-01-23 02:55:48 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.825718343257904, "perplexity": 6475.015361877231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00265.warc.gz"}
https://www.vedantu.com/physics/webers-law | # Weber's Law
## Introduction
Weber was a famous scientist who formulated this law of psychophysics in the 19th century. It became a significant advancement in the fields of physics and psychology, and it paved the way for several innovations and principles that simplify human effort. Let us explore what Weber's law is, its equation, and the related terms in detail.
## What is Weber's law?
Weber's law states that the just noticeable difference in stimulus intensity is proportional to the initial intensity of the stimulus.
In simple terms, the stronger the stimulus, the larger the change in intensity needed before a difference in sensation is noticed.
$\frac{\Delta I}{I}=k$
Where ΔI(Delta I) represents the difference threshold, I represents the initial stimulus intensity and k signifies that the proportion on the left side of the equation remains constant despite variations in the I term.
Here ΔI is the just noticeable difference, I is the intensity of the stimulus, and k is Weber's constant.
### Fechner Weber Law
Fechner's law emerged after Weber's law as an extension of it. According to Fechner's law, the intensity of our sensations increases as the logarithm of the stimulus energy rather than in direct proportion to it. Merging these two statements gives the Weber-Fechner law.
### Weber's Law Equation
According to Weber's law, the size of the just noticeable difference in a stimulus is proportional to the reference stimulus. Denoting it by ΔS,
ΔS = K·S
where S is the reference stimulus
and K is a constant.
Alternatively, Weber's law equation can be written in Fechner's form as
Ψ = k logS
Where Ψ = sensation
K = constant
S = stimulus intensity.
### Weber's Fraction
As Weber's law states that the intensity and the sensation are proportional to each other, the relation can also be expressed as a fraction. This fraction is known as Weber's fraction.
### Explanation
The Weber-Fechner law can be explained using a simple experiment.
Assume you are holding a weight of 3.0 kg, which requires some effort. If a small additional weight, say 0.05 kg, is added, you may not notice any difference. But as the added weight increases gradually, the required effort also increases, and at some point the difference becomes noticeable. This just noticeable difference is what Weber's law equation and fraction describe.
Substituting the values from the example above:
The magnitude of the weight is I = 3.0 kg and the increment threshold is ΔI = 0.3 kg.
The ratio ΔI/I for this instance is
0.3/3.0 = 0.1.
This is Weber's law.
Weber's equation can be verified for different instances by changing the weights.
Hence, the fraction ΔI/I is known as the Weber fraction.
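A tiny numerical illustration of the Weber fraction (our own sketch; the reference weights are assumed values):

k = 0.1                      # Weber fraction from the example above
for I in (3.0, 6.0, 12.0):   # reference weights in kg
    delta_I = k * I          # predicted just noticeable difference
    print(I, delta_I)        # JND grows with the reference: 0.3, 0.6, 1.2 kg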
### Exceptions
The Weber-Fechner law has been verified in many settings, but exceptions show that it is not true in all cases, particularly near the threshold of perception. To handle these exceptions, Weber's law has been modified slightly. The modified law can be expressed as
$\frac{\Delta I}{I+a}=K$
Here, K = Weber's Constant
I = Intensity
a = constant.
The lowercase constant a corrects the law for the baseline (threshold) intensity.
### Weber's Law Perception
Both Weber and Fechner performed experiments showing how a just noticeable difference in a stimulus produces a noticeable difference in perception. Weber's law applies with varying success across the senses. Namely:
• Weight Perception: Weber's law holds well for weight perception. The Weber fraction has been shown to stay approximately constant across a range of weights.
• Vision Perception: The Weber-Fechner law gives a workable relationship for the perceived brightness of the eye. The magnitude of brightness can be expressed on a logarithmic scale and substituted into Weber's law equation or fraction.
• Sound Perception: Unfortunately, Weber's law does not hold well for sound perception. Particularly for loudness at increasing intensities, the law fails to predict the perceived magnitude consistently.
These are the various Weber's law perceptions, and the results differ for each kind of perception.
### Conclusion
Hence, Weber's law states that the just noticeable difference is proportional to the intensity of the stimulus. When this relationship is expressed using a logarithmic scale, it is called Fechner's law. Both scientists contributed foundational hypotheses to psychophysics, and the two laws are interrelated with each other.
1. What are the Applications of Weber's Law?
Ans. Along with the human senses, Weber's law also applies in various fields. The major applications of Weber's law are as follows:
Numerical Cognition: Several psychological studies have shown that discriminating between numerical quantities follows Weber's law; the discriminability of two numbers depends on their ratio, which is naturally expressed on a logarithmic scale, even when the comparisons are repeated continuously.
Public Finance: Public finance is a newer sector that uses the Weber-Fechner hypothesis to calculate and understand demand and production. It is a recent development and a new direction in the literature.
Pharmacy: It is no surprise that Weber's law is also used in pharmacy. In pharmacology, the dosage of medicines is scaled in proportion to the severity of the injury or infection.
In this way, there are several applications of Weber's law in various sectors.
2. Explain Weber's Law Concept Briefly With an Example From Our Daily Routine?
Ans. Weber's law says that the just noticeable difference in stimulus intensity is proportional to the intensity itself. This can be explained using a simple, everyday example that makes the concept engaging to learn.
Let's take the morning freshener as an example. A cup of tea with a single spoon of sugar tastes good. If more sugar is added one negligible amount at a time, you will not notice a change at first; the difference in taste only becomes perceptible once the added amount is a noticeable fraction of what is already there. This proportional relation between the stimulus (sugar) and the response of the taste buds is Weber's law; when the relation is expressed on a logarithmic scale, it is called Fechner's law. | 2021-09-24 12:07:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7130840420722961, "perplexity": 1690.0674997642666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00511.warc.gz"}
https://questions.examside.com/past-years/gate/gate-ee/engineering-mathematics/differential-equations/ | NEW
New Website Launch
Experience the best way to solve previous year questions with mock tests (very detailed analysis), bookmark your favourite questions, practice etc...
## Marks 1
With initial condition $$x\left( 1 \right) = 0.5,$$ the solution of the differential equation $$\,\...
GATE EE 2012
With $$K$$ as constant, the possible solution for the first order differential equation $${{dy} \over {dx}} = {e^{ - 3x...
GATE EE 2011
The solution of the first order differential equation $$\mathop x\limits^ \bullet \left( t \right) = - 3\,x\left( t \r...
GATE EE 2005
If at every point of a certain curve, the slope of the tangent equals $${{ - 2x} \over y},$$ the curve is _______.
GATE EE 1995
## Marks 2
Consider the differential equation $$\left( {{t^2} - 81} \right){{dy} \over {dt}} + 5ty = \sin \left( t \right)\,\,$$ w...
GATE EE 2017 Set 1
Let $$y(x)$$ be the solution of the differential equation $$\,\,{{{d^2}y} \over {d{x^2}}} - 4{{dy} \over {dx}} + 4y = 0\...
GATE EE 2016 Set 2
A function $$y(t),$$ such that $$y(0)=1$$ and $$\,y\left( 1 \right) = 3{e^{ - 1}},\,\,$$ is a solution of the differenti...
GATE EE 2016 Set 1
A differential equation $${{di} \over {dt}} - 0.2i = 0$$ is applicable over $$- 10 < t < 10.$$ If...
GATE EE 2015 Set 2
A solution of the ordinary differential equation $$\,\,{{{d^2}y} \over {d{t^2}}} + 5{{dy} \over {dt}} + 6y = 0\,\,$$ is...
GATE EE 2015 Set 1
Consider the differential equation $${x^2}{{{d^2}y} \over {d{x^2}}} + x{{dy} \over {dx}} - y = 0.\,\,$$ Which of the fo...
GATE EE 2014 Set 2
The solution for the differential equation $${{{d^2}x} \over {d{t^2}}} = - 9x,$$ with initial conditions $$x(0...
GATE EE 2014 Set 1
For the differential equation $${{{d^2}x} \over {d{t^2}}} + 6{{dx} \over {dt}} + 8x = 0$$ with initial conditions $$x(...
GATE EE 2010
For the equation $$\mathop x\limits^{ \bullet \bullet } \left( t \right) + 3\mathop x\limits^ \bullet \left( t \ri...
GATE EE 2005
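As an illustrative aside (not part of the original question bank), one of the complete equations above, $d^2x/dt^2 = -9x$, can be checked symbolically with sympy; the variable names here are our own:

import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# Solve x'' = -9x; the general solution is C1*sin(3t) + C2*cos(3t)
sol = sp.dsolve(sp.Eq(x(t).diff(t, 2), -9 * x(t)), x(t))
print(sol)  # Eq(x(t), C1*sin(3*t) + C2*cos(3*t))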
| 2022-08-16 04:27:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987465739250183, "perplexity": 7670.924473106933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00213.warc.gz"}
https://www.degruyter.com/browse?pageSize=10&sort=relevance&t_0=MT-07&t_1=MT-03 | # SEARCH CONTENT
## You are looking at 1 - 10 of 3,962 items :
• Geometry and Topology
• Algebra and Number Theory
From Commutative Algebra to Algebraic Geometry
## Abstract
We focus on the description of the automorphism group Γ of a Clifford-like parallelism ∥ on a 3-dimensional projective double space (ℙ(H_F), ∥ℓ, ∥r) over a quaternion skew field H (of any characteristic). We compare Γ with the automorphism group Γℓ of the left parallelism ∥ℓ, which is strictly related to Aut(H). We build up and discuss several examples showing that over certain quaternion skew fields it is possible to choose ∥ in such a way that Γ is either properly contained in Γℓ or coincides with Γℓ even though ∥ ≠ ∥ℓ.
## Abstract
We study birational projective models of 𝓜2,2 obtained from the moduli space of curves with nonspecial divisors. We describe geometrically which singular curves appear in these models and show that one of them is obtained by blowing down the Weierstrass divisor in the moduli stack of 𝓩-stable curves 𝓜2,2(𝓩) defined by Smyth. As a corollary, we prove projectivity of the coarse moduli space M2,2(𝓩).
## Abstract
For a smooth manifold X equipped with a volume form, let 𝓛0 (X) be the Lie algebra of volume preserving smooth vector fields on X. Lichnerowicz proved that the abelianization of 𝓛0 (X) is a finite-dimensional vector space, and that its dimension depends only on the topology of X. In this paper we provide analogous results for some classical examples of non-singular complex affine algebraic varieties with trivial canonical bundle, which include certain algebraic surfaces and linear algebraic groups. The proofs are based on a remarkable result of Grothendieck on the cohomology of affine varieties, and some techniques that were introduced with the purpose of extending the Andersén–Lempert theory, which was originally developed for the complex spaces ℂn, to the larger class of Stein manifolds that satisfy the density property.
## Abstract
Let S be a surface of genus g at least 2. A representation $\rho \colon \pi_1 S \to \mathrm{PSL}_2 \mathbb{R}$ is said to be purely hyperbolic if its image consists only of hyperbolic elements along with the identity. We may wonder under which conditions such representations arise as the holonomy of a branched hyperbolic structure on S. In this work we characterise them completely, giving necessary and sufficient conditions.
## Abstract
Let Q be a simply connected manifold. We show that every exact Lagrangian cobordism between compact, exact Lagrangians in $T^*Q$ is an h-cobordism. This is a corollary of the Abouzaid–Kragh Theorem.
## Abstract
We show that the Grothendieck group associated to integral polytopes in ℝn is free-abelian, by providing an explicit basis. Moreover, we identify the involution on this polytope group given by reflection about the origin as a sum of Euler characteristic type. We also compute the kernel of the norm map sending a polytope to its induced seminorm on the dual of ℝn.
## Abstract
K3 polytopes appear in complements of tropical quartic surfaces. They are dual to regular unimodular central triangulations of reflexive polytopes in the fourth dilation of the standard tetrahedron. Exploring these combinatorial objects, we classify K3 polytopes with up to 30 vertices. Their number is 36 297 333. We study the singular loci of quartic surfaces that tropicalize to K3 polytopes. These surfaces are stable in the sense of Geometric Invariant Theory. | 2021-01-25 19:47:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7100493907928467, "perplexity": 496.80657768940785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703644033.96/warc/CC-MAIN-20210125185643-20210125215643-00658.warc.gz"} |
https://eprint.iacr.org/2018/1111 | ## Cryptology ePrint Archive: Report 2018/1111
Cryptanalysis of the Wave Signature Scheme
Paulo S. L. M. Barreto and Edoardo Persichetti
Abstract: In this paper, we cryptanalyze the signature scheme \textsc{Wave}, which has recently appeared as a preprint. First, we show that there is a severe information leakage occurring from honestly-generated signatures. Then, we illustrate how to exploit this leakage to retrieve an alternative private key, which enables efficiently forging signatures for arbitrary messages. Our attack manages to break the proposed 128-bit secure \textsc{Wave} parameters in just over a minute, most of which is actually spent collecting genuine signatures. We also explain how our attack applies to generalized versions of the scheme which could potentially be achieved using generalized admissible $(U,U+V)$ codes and larger field characteristics. Finally, as a target for further cryptanalysis, we describe a variant of \textsc{Wave} that we call \textsc{Tsunami}, which appears to thwart our attacks while keeping the positive aspects of that scheme.
Category / Keywords: public-key cryptography / code-based cryptosystems, digital signatures, cryptanalysis
Date: received 15 Nov 2018, last revised 17 Nov 2018
Contact author: pbarreto at uw edu
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2018/1111
[ Cryptology ePrint archive ] | 2019-02-19 18:48:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2987927198410034, "perplexity": 4036.986160275593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247491141.23/warc/CC-MAIN-20190219183054-20190219205054-00270.warc.gz"} |
https://www.physicsforums.com/threads/the-basis-of-the-real-numbers-over-the-irrationals.638587/ | The basis of the Real numbers over the Irrationals
1. Sep 24, 2012
nautolian
1. What can be said of the dimension of the basis of the Reals over the Irrationals
2. Relevant equations
3. I believe the basis is infinite because any real number can be made out of the combination of irrational vectors multiplied by the same irrational coefficient to make any real number. ie. sqrt(2)*sqrt(2)=2 + sqrt(51)*sqrt(51)=53, etc. Could you please help me figure out if this is the correct solution or what a better way to phrase the proof would be? Thanks!
Last edited by a moderator: Sep 25, 2012
2. Sep 24, 2012
Dick
The irrationals aren't a vector subspace of R, so the question is pretty meaningless. Did you maybe mean the rationals?
3. Sep 24, 2012
nautolian
Actually, I think it's a trick question, because I thought that at first but I wasn't sure. How would I prove that? Does it break the multiplication-by-zero axiom, since zero is not an irrational? Thanks for your help!
4. Sep 24, 2012
Dick
It sure does. 0 is not an irrational. Lots of other things break. You can also add two irrationals and get a rational, etc.
5. Sep 25, 2012
nautolian
Hey thanks, how would adding two irrationals to get a rational break the vector space rule?
6. Sep 25, 2012
Dick
To be a vector space you first have to be an additive group. The addition operation has to be closed. I.e. if x and y are irrational then x+y has to be irrational. Give a counterexample.
7. Sep 25, 2012
Staff: Mentor
What are you doing here? √2 · √2 = 2, and 2 + √51 · √51 = 53, but in connecting the expressions as you did above, you are saying that 2 = 53, which is clearly not true.
https://www.gradesaver.com/textbooks/science/chemistry/general-chemistry-10th-edition/chapter-15-acids-and-bases-exercises-page-657/15-24 | ## General Chemistry 10th Edition
We know that nitrogen has a greater electronegativity than carbon. We would expect the $H-O$ bond in the $H-O-N$ group to be more polar, with the H atom having a larger positive partial charge, than the $H-O$ bond in the $H-O-C$ group. | 2019-11-17 10:04:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4165579676628113, "perplexity": 1667.7721778494442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668910.63/warc/CC-MAIN-20191117091944-20191117115944-00171.warc.gz"}
https://codereview.stackexchange.com/questions/119390/multiple-jquery-onclick-events/119529#119529 | # Multiple jQuery onClick events
I have a ton of jQuery onClick events. On click, I hide/show different UI elements. I was wondering how I can tidy the code up and make multiple onClick events more readable?
$('.info_2').on('click', function() {
    $('#nav-wrapper').toggleClass('hidden_nav');
    $('#card-wrapper').toggleClass('centre_share');
    $('.E_info').toggleClass('display');
    $('#info-btn').css('opacity', '0');
    $('#nav-wrapper').delay(300).toggleClass('hidden');
    $('#nav-wrapper').removeClass('display_nav');
    $('#nav-wrapper').removeClass('display');
});
$('.info_back').on('click', function() {
    $('#nav-wrapper').removeClass('hidden_nav');
    $('#nav-wrapper').addClass('display_nav');
    $('#nav-wrapper').addClass('display');
    $('#info-btn').css('opacity', '1');
    $('#nav-wrapper').removeClass('hidden');
    $('.E_info').removeClass('display');
    $('.E_info').addClass('hidden');
    $('#card-wrapper').removeClass('centre_share');
});
$('#info-btn').on('click', function() {
    $('#info-btn').toggleClass('close_btn');
    $('.o-card_border').toggleClass('info_display card_active');
    $('.start_title').toggleClass('hidden remove_flow');
    $('#svg_full').attr('class', 'test');
    $('#svg_top').attr('class', 'test');
    $('#svg_bot').attr('class', 'test');
    $('#svg_bot_bot').attr('class', 'test');
    $('#svg_bot_right').attr('class', 'test');
    $('.rectangle_style_frame3 display').toggleClass('hidden');
    $('.triangle_style').toggleClass('hidden');
    $('.bg-info').toggleClass('display');
    $('.info_CharactersInvolved').toggleClass('display');
    $('.info_themes').toggleClass('display');
    $('.E_info').toggleClass('display');
});
The code works fine. I just think it looks really ugly. And the readability of it is painful, especially if you're trying to jump onto the project and learn the codebase.
• Does this compile? You should include all code to ensure it's on topic. Feb 9 '16 at 14:48
• @Raystafarian I could. But I felt the question would get bloated. There's a lot of code. I just thought all the onClick events look ugly and when trying to go through my code and improve it I had no idea how I would tidy up multiple onClick events Feb 9 '16 at 15:17
• Fair enough, did you see codereview.stackexchange.com/questions/26229/… Feb 9 '16 at 15:54
Is this the only part of your code regarding these onClick events? If yes, I don't see any way to make it more readable: factorization is possible only on a few elements/actions, and will have a cost in readability. On the other hand, if this is only an example and a lot of other onClick events exist, then it might be worth considering. In that case, post the entire code. Feb 10 '16 at 0:33
Review the code organization concepts here: learn.jquery.com/code-organization/concepts. You can create a separate object to encapsulate the show/hide logic. For example, create a NavWrapper. Feb 10 '16 at 16:26
## 3 Answers
The worst issues are cases of the same element being selected multiple times in order to apply various methods. Repeat selection of the same element should be avoided because selection of DOM element(s) is expensive.
There are also are cases where multiple selections have the same method applied. This isn't so bad for performance, but leads to unnecessarily bulky source.
The code can be improved with the following techniques :
• using method chaining.
• using comma separated selectors.
• using $(this) to select the same element as the one to which an event handler is attached.
• passing space-separated lists of class names to addClass() and .removeClass().

You might end up with something like this :

$('.info_2').on('click', function() {
    $('#card-wrapper').toggleClass('centre_share');
    $('.E_info').toggleClass('display');
    $('#info-btn').css('opacity', '0');
    $('#nav-wrapper').toggleClass('hidden_nav').delay(300).toggleClass('hidden').removeClass('display_nav').removeClass('display');
});

$('.info_back').on('click', function() {
    $('#nav-wrapper').removeClass('hidden_nav hidden').addClass('display_nav display');
    $('#info-btn').css('opacity', '1');
    $('.E_info').removeClass('display').addClass('hidden');
    $('#card-wrapper').removeClass('centre_share');
});

$('#info-btn').on('click', function() {
    $(this).toggleClass('close_btn');
    $('.o-card_border').toggleClass('info_display card_active');
    $('.start_title').toggleClass('hidden remove_flow');
    $('#svg_full, #svg_top, #svg_bot, #svg_bot_bot, #svg_bot_right').attr('class', 'test');
    $('.rectangle_style_frame3 display, .triangle_style').toggleClass('hidden');
    $('.bg-info, .info_CharactersInvolved, .info_themes, .E_info').toggleClass('display');
});
First, you should wrap your code up in a closure to prevent collisions with other code. I typically use an IIFE for this. Also, you should add 'use strict'; to your code to help prevent certain issues.
(function($) {
    'use strict';
    // your code here
})( jQuery );

You can abstract away all of those class names so that, if you ever have to make a change, you only have to change them in one place.

var classes = {
    hideNav : 'hidden-nav',
    hide : 'hidden'
};

// example of how to use
$('#nav-wrapper').toggleClass( classes.hide );
I would also recommend not using the display classes. Just removing the hidden class should be sufficient. If you have additional CSS in the display class, then you should create a class called nav (or whatever makes sense) for styling.
Next, you need to DRY your code. In your functions, you are repeating a LOT of code. These repeated calls should be their own function. For example, you do the same manipulations on the .nav-wrapper in each set of code. Move that code into its own function and just call it wherever you need.
Third, you should cache all of your selectors. As a general rule of thumb, if you use the same selection more than once, you should cache it. So create a variable and use the variable everywhere.
var $navwrapper = $('#nav-wrapper');
$navwrapper.removeClass( classes.hide );

Last, although you haven't supplied any HTML, there are probably ways you might be able to simplify your code. For instance, if all of your .info* items have the same parent container (ex: .info-areas), you could add the click event to this parent element instead of each individual info area. You can figure out which area was clicked by using event.target.

$('.info-areas').on('click', function(evt) {
    var current = $(evt.target);
    // do whatever
});
Hope that helps.
Your question is basically equivalent to this one, and my advice is the same. Each click handler is really changing global state, so set the state once on some ancestor element (e.g. the <body>) and let the Cascading stylesheet take care of all the consequences. | 2021-10-22 21:44:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21362023055553436, "perplexity": 1939.276498231123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585522.78/warc/CC-MAIN-20211022212051-20211023002051-00485.warc.gz"} |
https://www.netket.org/docs/optimizer_AdaDelta/ | #### Variational
AdaDelta Optimizer. Like RMSProp, AdaDelta corrects the monotonic decay of learning rates associated with AdaGrad, while additionally eliminating the need to choose a global learning rate $\eta$. The NetKet naming convention of the parameters strictly follows the one introduced in the original paper; here $E[g^2]$ is equivalent to the vector $\mathbf{s}$ from RMSProp. $E[g^2]$ and $E[\Delta x^2]$ are initialized as zero vectors.
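For reference, a sketch of the updates this optimizer performs, following Zeiler's original AdaDelta paper in the notation above (our addition; $g_t$ denotes the gradient at step $t$ and $\epsilon$ corresponds to epscut):

$$E[g^2]_t = \rho\, E[g^2]_{t-1} + (1-\rho)\, g_t^2$$
$$\Delta x_t = -\frac{\sqrt{E[\Delta x^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}}\; g_t$$
$$E[\Delta x^2]_t = \rho\, E[\Delta x^2]_{t-1} + (1-\rho)\, \Delta x_t^2$$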
## Class Constructor
Constructs a new AdaDelta optimizer.
| Argument | Type | Description |
|---|---|---|
| rho | float = 0.95 | Exponential decay rate, in [0,1]. |
| epscut | float = 1e-07 | Small $\epsilon$ cutoff. |
### Examples
>>> from netket.optimizer import AdaDelta | 2019-05-22 15:07:14 | {"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7812661528587341, "perplexity": 5113.475498491124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256858.44/warc/CC-MAIN-20190522143218-20190522165218-00316.warc.gz"} |
https://direct.mit.edu/neco/article/33/3/713/97487/Sophisticated-Inference | ## Abstract
Active inference offers a first principle account of sentient behavior, from which special and important cases—for example, reinforcement learning, active learning, Bayes optimal inference, Bayes optimal design—can be derived. Active inference finesses the exploitation-exploration dilemma in relation to prior preferences by placing information gain on the same footing as reward or value. In brief, active inference replaces value functions with functionals of (Bayesian) beliefs, in the form of an expected (variational) free energy. In this letter, we consider a sophisticated kind of active inference using a recursive form of expected free energy. Sophistication describes the degree to which an agent has beliefs about beliefs. We consider agents with beliefs about the counterfactual consequences of action for states of affairs and beliefs about those latent states. In other words, we move from simply considering beliefs about “what would happen if I did that” to “what I would believe about what would happen if I did that.” The recursive form of the free energy functional effectively implements a deep tree search over actions and outcomes in the future. Crucially, this search is over sequences of belief states as opposed to states per se. We illustrate the competence of this scheme using numerical simulations of deep decision problems.
## 1 Introduction
In theoretical neurobiology, active inference has proved useful in providing a generic account of motivated behavior under ideal Bayesian assumptions, incorporating both epistemic and pragmatic value (Da Costa, Parr, Sajid et al., 2020; Friston, FitzGerald, Rigoli, Schwartenbeck, & Pezzulo, 2017). This account is often portrayed as being based on first principles because it inherits from the statistical physics of random dynamical systems at nonequilibrium steady state (Friston, 2013; Hesp, Ramstead et al., 2019; Parr, Da Costa, & Friston, 2020). Active inference does not pretend to replace existing formulations of sentient behavior; it just provides a Bayesian mechanics from which most (and, arguably, all) normative optimization schemes can be derived as special cases. Generally these special cases arise when ignoring one sort of uncertainty or another. For example, if we ignore uncertainty about (unobservable) hidden states that generate (observable) outcomes, active inference reduces to conventional schemes like optimal control theory and reinforcement learning. While the latter schemes tend to focus on the maximization of value as a function of hidden states per se, active inference optimizes a functional1 of (Bayesian) beliefs about hidden states. This allows it to account for uncertainties surrounding action and perception in a unified, Bayes-optimal fashion.
Most current applications of active inference rest on the selection of policies (i.e., ordered sequences of actions or open-loop policies, where the sequence of future actions depends only on current states, not future states) that minimize a functional of beliefs called expected free energy (Da Costa, Parr, Sajid, et al., 2020; Friston, FitzGerald et al., 2017). This approach clearly has limitations, in the sense that one has to specify a priori allowable policies, each of which represents a possible path through a deep tree of action sequences. This formulation limits the scalability of the ensuing schemes because only a relatively small number of policies can be evaluated (Tschantz, Baltieri, Seth, & Buckley, 2019). In this letter, we consider active inference schemes that enable a deep tree search over all allowable sequences of action into the future. Because this involves a recursive evaluation of expected free energy—and implicit Bayesian beliefs—the resulting scheme has a sophisticated aspect (Costa-Gomes, Crawford, & Broseta, 2001; Devaine, Hollard, & Daunizeau, 2014): rolling out beliefs about beliefs.
Sophistication is a term from the economics literature and refers to having beliefs about one's own or another's beliefs. For instance, in game theory, an agent is said to have a level of sophistication of 1 if she has beliefs about her opponent, 2 if she has beliefs about her opponent's beliefs about her strategy, and so forth. Most people have a level of sophistication greater than two (Camerer, Ho, & Chong, 2004).
According to this view, most current illustrations of active inference can be regarded as unsophisticated or naive, in the sense that they consider only beliefs about the consequences of action, as opposed to the consequences of action for beliefs. In what follows, we try to unpack this distinction intuitively and formally using mathematical and numerical analyses. We also take the opportunity to survey the repertoire of existing schemes that fall under the Bayesian mechanics of active inference, including expected utility theory (Von Neumann & Morgenstern, 1944), Bayesian decision theory (Berger, 2011), optimal Bayesian design (Lindley, 1956), reinforcement learning (Sutton & Barto, 1981), active learning (MacKay, 1992), risk-sensitive control (van den Broek, Wiegerinck, & Kappen, 2010), artificial curiosity (Schmidhuber, 2006), intrinsic motivation (Oudeyer & Kaplan, 2007), empowerment (Klyubin, Polani, & Nehaniv, 2005), and the information bottleneck method (Tishby, Pereira, & Bialek, 1999; Tishby & Polani, 2010).
Sophisticated inference recovers Bayes-adaptive reinforcement learning (Åström, 1965; Ghavamzadeh, Mannor, Pineau, & Tamar, 2016; Ross, Chaib-draa, & Pineau, 2008) in the zero temperature limit. Both approaches perform belief state planning, where the agent maximizes an objective function by taking into account how it expects its own beliefs to change in the future (Duff, 2002) and evinces a degree of sophistication. The key distinction is that Bayes-adaptive reinforcement learning considers arbitrary reward functions, while sophisticated active inference optimizes an expected free energy that can be motivated from first principles. While both can be specified for particular tasks, the expected free energy additionally mandates the agent to seek out information about the world (Friston, 2013, 2019) beyond what is necessary for solving a particular task (Tishby & Polani, 2010). This allows inference to account for artificial curiosity (Lindley, 1956; Oudeyer & Kaplan, 2007; Schmidhuber, 1991) that goes beyond reward seeking to the gathering of evidence for an agent's existence (i.e., its marginal likelihood). This is sometimes referred to as self-evidencing (Hohwy, 2016).
The basic distinction between sophisticated and unsophisticated inference was briefly introduced in appendix 6 of Friston, FitzGerald et al. (2017). As outlined in this appendix, there is a sense in which unsophisticated formulations, which simply sum the expected free energy over future time steps based on current beliefs about the future, can be thought of as selecting policies that optimize a path integral of the expected free energy. In contrast, sophisticated schemes take account of the way in which the free energy changes as alternative paths are pursued and beliefs updated. This can be thought of as an expected path integral.
This distinction is subtle but can lead to fundamentally different kinds of behavior. A simple example illustrates the difference. Consider the following three-armed bandit problem—with a twist. The right and left arms increase or decrease your winnings. However, you do not know which arm is which. The central arm does not affect your winnings but tells you which arm pays off. Crucially, once you have committed to either the right or the left arm, you cannot switch to the other arm. This game is engineered to confound agents whose choice behavior is based on Bayesian decision theory. This follows because the expected payoff is the same for every sequence of moves. In other words, choosing the right or left arm—for the first and subsequent trials—means you are equally likely to win or lose. Similarly, choosing the middle arm (or indeed doing nothing) has the same Bayesian risk or expected utility.
However, an active inference agent, who is trying to minimize her expected free energy,2 will select actions that minimize the risk of losing and resolve her uncertainty about whether the right or left arm pays off. This means that the center arm acquires epistemic (uncertainty-resolving) affordance and becomes intrinsically attractive. On choosing the central arm—and discovering which arm holds the reward—her subsequent choices are informed, in the sense that she can exploit her knowledge and commit to the rewarding arm. In this example, the agent has resolved a simple exploration-exploitation dilemma3 by resolving ambiguity as a prelude to exploiting updated beliefs about the consequences of subsequent action. Note that because the central arm has been selected, there is no ambiguity in play, and its epistemic affordance disappears. Note further that all three arms initially have some epistemic affordance; however, the right and left arms are less informative if the payoff is probabilistic.
The key move behind this letter is to consider a sophisticated agent who evaluates the expected free energy of each move recursively. Simply choosing the central arm to resolve uncertainty does not, in and of itself, mean an epistemic action was chosen in the service of securing future rewards. In other words, the central arm is selected because all the options had the same Bayesian risk4 while the central arm had the greatest epistemic affordance.5 Now consider a sophisticated agent who imagines what she will do after acting. For each plausible outcome, she can work out how her beliefs about hidden states will be updated and evaluate the expected free energy of the subsequent move under each action and subsequent outcome. By taking the average over both, she can evaluate the expected free energy of the second move that is afforded by the first. If she repeats this process recursively, she can effectively perform a deep tree search over all ordered sequences of actions and their consequences.
Heuristically, the unsophisticated agent simply chooses the central arm because she knows it will resolve uncertainty about states of affairs. Conversely, the sophisticated agent follows through—on this resolution of ambiguity—in terms of its implications for subsequent choices. In this instance, she knows that only two things can happen if she chooses the central arm: either the right or left arm will be disclosed as the payoff arm. In either case, the subsequent choice can be made unambiguously to minimize risk and secure her reward. The average expected free energy of these subsequent actions will be pleasingly low, making a choice of the central arm more attractive than its expected free energy would otherwise suggest. This means the sophisticated agent is more confident about her choices because she has gone beyond forming beliefs about the consequences of action to consider the effects of action on subsequent beliefs and the (epistemic) actions that ensue. The remainder of this letter unpacks this recursive kind of planning, using formal analysis and simulations.
This letter is intended to introduce a sophisticated scheme for active inference and provide some intuition as to how it works in practice. We validate this scheme through reproducing simulation results from previous formulations of active inference in a simple and a more complex navigation task. This is not intended as proof of superiority of sophisticated inference over existing schemes, which we assess in a companion paper (Da Costa, Sajid et al. 2020), but to demonstrate noninferiority in some illustrative settings. Note that it is possible to show that on reward maximization tasks, sophisticated active inference will significantly outperform, as it accommodates the backward induction algorithm as a special case.
This paper has four sections. Section 2 provides a brief overview of active inference in terms of free energy minimization and the various schemes that can be used for implementation. This section starts with the basic imperative to optimize Bayesian beliefs about latent or hidden states of the world in terms of approximate Bayesian (i.e., variational) inference (Dayan, Hinton, Neal, & Zemel, 1995). It then goes on to cast planning as inference (Attias, 2003; Botvinick & Toussaint, 2012) as the minimization of an expected free energy under allowable sequences of actions or policies (Friston, FitzGerald et al., 2017). The foundations of expected free energy are detailed in an appendix from two complementary perspectives, the second of which is probably more fundamental as it rests on the first-principle account mentioned above (Friston, 2013, 2019; Parr et al., 2020). The third section considers sophisticated schemes using a recursive formulation of expected free energy. Effectively, this enables the efficient search of deep policy trees (that entail all possible outcomes under each policy or path). This search is efficient because only paths that have a sufficiently high predictive posterior probability need to be evaluated. This restricted tree search is straightforward to implement in the present setting because we are propagating beliefs (i.e., probabilities) as opposed to value functions. The fourth section provides some illustrative simulations that compare sophisticated and unsophisticated agents in the three-armed bandit (or T-maze paradigm) described above. It also considers deeper problems, using navigation and novelty seeking as an example. We conclude with a brief summary of what sophisticated inference brings to the table.
## 2 Active Inference and Free Energy Minimization
Most of the active inference literature concerns itself with partially observable Markov decision processes. In other words, it considers generative models of discrete hidden states and observable outcomes, with uncertainty about the (likelihood) mapping between hidden states and outcomes and (prior) probability transitions among hidden states. Crucially, sequential policy selection is cast as an inference problem by treating sequences of actions (i.e., policies) as random variables. Planning then simply entails optimizing posterior beliefs about the policies being pursued and selecting an action from the most likely policy.
On this view, there are just two sets of unknown variables: hidden states and policies. Belief distributions over this bipartition can then be optimized with respect to an evidence bound in the usual way, using an appropriate mean-field approximation (Beal, 2003; Winn & Bishop, 2005). In this setup, we can associate perception with the optimization of posterior beliefs about hidden states, while action follows from planning based on posterior beliefs about policies. Implicit in this formulation is a generative model: a probabilistic specification of the joint probability distribution over policies, hidden states, and outcomes. This generative model is usually factorized into the likelihood of outcomes, given hidden states, the conditional distribution over hidden states, given policies, and priors over policies. In active inference, the priors over policies are determined by their expected free energy, noting that this energy, which depends on future courses of action, furnishes an empirical prior over subsequent actions.
In brief, given some prior beliefs about the initial and final states of some epoch of active inference, the game is to find a posterior belief distribution over policies that brings the initial distribution as close as possible to the final distribution, given observations. This objective can be achieved by optimizing posterior beliefs about hidden states and policies with respect to a variational bound on (the logarithm of) the marginal likelihood of the generative model (i.e., log evidence). This evidence bound is known as a variational free energy or (negative) evidence lower bound. In what follows, we offer an overview of the formal aspects of this enactive kind of inference.
### 2.1 Discrete State-Space Models
Our objective is to optimize beliefs (i.e., an approximate posterior) over policies $π$ and their consequences, namely, hidden states $s≡s≤τ$ from some initial state $s1$, until some policy horizon $τ$, given some observations $o≤t$ up until the current time $t$. This optimization can be cast as minimizing a (generalized) free energy functional $F[Q(s,π)]$ of the approximate posterior (Parr & Friston, 2019b). This generalized free energy has two parts: a generative model for state transitions, given policies, and a generative model for policies that depend on the final states (omitting constants for clarity):
$F[Q(s,\pi)] = E_{Q(\pi)}[F(\pi)] + D_{KL}[Q(\pi)\,\|\,P(\pi)] = E_{Q(\pi)}[\ln Q(\pi) + E(\pi) + F(\pi) + G(\pi)]$

$F(\pi) = E_{Q(s_{\leq\tau}|\pi)}[\ln Q(s_{\leq\tau}|\pi) - \ln P(o_{\leq t}, s_{\leq\tau}|\pi)]$

$G(\pi) = E_{Q(o_\tau, s_\tau|\pi)}[\ln Q(s_\tau|\pi) - \ln P(o_\tau, s_\tau)]$

$Q(o_\tau, s_\tau|\pi) = P(o_\tau|s_\tau)\, Q(s_\tau|\pi)$

$-\ln P(\pi) = E(\pi) + G(\pi)$
(2.1)
This generalized free energy includes the variational free energy6 of each policy $F(π)$ that depends on priors over state transitions and an expected free energy of each policy $G(π)$ that underwrites priors over policies. The priors over policies $lnP(π)=-E(π)-G(π)$ ensure the expected free energy at time $τ$ (i.e., the policy horizon) is minimized. Here, $E(π)$ represents an empirical prior that is usually conditioned on hidden states at a higher level in deep (i.e., hierarchical) generative models. Note that outcomes on the horizon are random variables with a likelihood distribution, whereas outcomes in the past are realized variables. The distributions indicated by $Q$ are variational distributions that have various interpretations throughout this letter. They inherit these interpretations in virtue of when we are in time. This means they are posterior probabilities when we account for data that have already been observed but can play the role of (empirical) priors when thinking about observations that have yet to be observed.
The first equality shows that the variational free energy, expected under the posterior over policies, plays the role of an accuracy, while the complexity of posterior beliefs about policies is the divergence from prior beliefs.7 In other words, variational free energy scores the evidence for a particular policy that accrues from observed outcomes. The priors over policies also have the form of a free energy. For interested readers, the appendix provides a fairly comprehensive motivation of this functional form, from complementary perspectives. In addition, Table 1 provides a glossary of variables used in this letter. We now consider the role of free energy in exact, approximate, and amortized inference, before turning to active inference and policy selection.
### 2.2 Perception as Inference
Optimizing the posterior over hidden states renders the variational free energy equivalent to (negative) log evidence—or marginal likelihood—in the usual way while optimizing the posterior over policies renders the generalized free energy zero:
$Q(s|\pi) = \arg\min_Q F(\pi) = P(s|o_{\leq t}, \pi) \;\Rightarrow\; F(\pi) = -\ln P(o_{\leq t}|\pi)$

$Q(\pi) = \arg\min_Q F[Q(s,\pi)] = \sigma[-E(\pi) - F(\pi) - G(\pi)] \;\Rightarrow\; F[Q(s,\pi)] = 0$
(2.2)
The first equalities correspond to exact Bayesian inference based on a softmax function (i.e., normalized exponential, $σ[·]$) of the log probability over outcomes and hidden states, under a particular policy. To finesse the numerics of optimizing the posterior over all hidden states, a mean-field approximation usually leverages the Markovian form of the generative model to optimize an approximate posterior over hidden states at each time point (where $s∖τ$ denotes the Markov blanket of $sτ$):
$Q(s_\tau|\pi) = \sigma\big[E_{Q(s_{\setminus\tau}|\pi)}[\ln P(o_{\leq t}, s_{\leq\tau}|\pi)]\big] = \sigma\big[E_{Q(s_{\setminus\tau}|\pi)}[\ln P(o_\tau|s_\tau) + \ln P(s_\tau|s_{\tau-1},\pi) + \ln P(s_{\tau+1}|s_\tau,\pi)]\big]$

$Q(s|\pi) = Q(s_1|\pi)\, Q(s_2|\pi) \cdots Q(s_\tau|\pi)$

$P(s|\pi) = P(s_1|\pi)\, P(s_2|s_1,\pi) \cdots P(s_\tau|s_{\tau-1},\pi)$
(2.3)
This corresponds to a form of approximate Bayesian inference (i.e., variational Bayes) in which equation 2.3 is iterated over the factors of the mean-field approximation to perform a coordinate descent or fixed-point iteration (Beal, 2003). An alternative formulation rests on an explicit minimization of variational free energy using iterated gradient flows to each fixed point (expressed in terms of sufficient statistics):
$\dot{v}_\tau^\pi = -\partial_{s_\tau^\pi} F(\pi) = E_{Q(s_{\setminus\tau}|\pi)}[\ln P(o_{\leq t}, s_{\leq\tau}|\pi)] - \ln Q(s_\tau|\pi)$

$s_\tau^\pi = \sigma(v_\tau^\pi)$

$Q(s_\tau|\pi) = \mathrm{Cat}(s_\tau^\pi)$
(2.4)
This solution can be read as (neuronal) dynamics that implement variational message passing8 (Beal, 2003; Friston, Parr, & de Vries, 2017; Parr, Markovic, Kiebel, & Friston, 2019). In this form, the free energy gradients constitute a prediction error: the difference between the posterior surprisal9 and its predicted value.
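As a concrete illustration of this update, the following minimal numpy sketch (ours; the matrices and names are illustrative, not taken from any published implementation) performs one coordinate-ascent sweep of equation 2.3 for a single hidden state factor under one policy, combining likelihood and transition messages in log space and passing them through a softmax.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative model: 4 hidden states, 3 outcomes, one policy.
A = np.array([[0.8, 0.1, 0.1, 0.50],   # likelihood P(o|s); columns sum to 1
              [0.1, 0.8, 0.1, 0.25],
              [0.1, 0.1, 0.8, 0.25]])
B = np.full((4, 4), 0.05)              # sticky transitions P(s_tau|s_{tau-1}, pi)
np.fill_diagonal(B, 0.85)

o = 1                                  # outcome observed at time tau
q_prev = np.ones(4) / 4                # Q(s_{tau-1}|pi)
q_next = np.ones(4) / 4                # Q(s_{tau+1}|pi)

# One sweep of equation 2.3: expected log messages from the likelihood,
# the past and the future, normalized by a softmax.
log_msg = (np.log(A[o, :])             # ln P(o_tau|s_tau)
           + np.log(B) @ q_prev        # E_q[ln P(s_tau|s_{tau-1})]
           + np.log(B).T @ q_next)     # E_q[ln P(s_{tau+1}|s_tau)]
q_tau = softmax(log_msg)
print(q_tau)
```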
Table 1:
Glossary of Variables.
Notation / Variable
$P(·)$ Probability distribution
$Q(·)$ Variational posterior or empirical prior distribution
$F$ Variational free energy
$G$ Expected free energy
$uτ$ Action at time $τ$
$o=(o1,o2,…,oτ,…)$ Observation
$s=(s1,s2,…,sτ,…)$ Hidden (latent) states
$π$ Policy (sequence of actions)
$sτπ$ Expectation of state at time $τ$ under $Q(sτ|π)$
$sτu$ Expectation of state at time $τ$ under $Q(sτ|uτ)$
$vτπ$ Log expectation of state at time $τ$ under $Q(sτ|π)$
$oτu$ Expectation of observation at time $τ$ under $Q(oτ|u<τ)$
$uτo$ Expectation of action at time $τ$ under $Q(uτ|oτ)$
A Parameters of categorical likelihood distribution
B Parameters of categorical transition probabilities
C Parameters of categorical prior preferences
D Parameters of categorical initial state probabilities
H Conditional entropy of likelihood distribution
$a,a$ Prior and posterior Dirichlet parameters for A
$b,b$ Prior and posterior Dirichlet parameters for B
$d,d$ Prior and posterior Dirichlet parameters for D
$Cat(·)$ Categorical probability distribution
$Dir(·)$ Dirichlet probability distribution
$EP[·]$ Expectation under the subscripted probability distribution
$H[·]$ Shannon entropy of a probability distribution
$DKL[·∥·]$ Kullback-Leibler divergence between probability distributions
$ψ(·)$ Digamma function
$σ(·)$ Softmax (normalized exponential) function
Finally, one can consider amortizing inference using standard procedures from machine learning to optimize the parameters $ϕ$ of a recognition model with regard to variational free energy. In the present setting, this approach can be summarized as using universal function approximators (e.g., deep neural networks) to parameterize equation 2.2, namely, the mapping between observations and the sufficient statistics of the approximate posterior—for example,
$s_\tau^\pi = f_\phi(o_{\leq t}, s_{\leq\tau}^\pi, \pi)$

$\phi = \arg\min_\phi F[Q(s,\pi)]$

$Q(s_\tau|\pi) = \mathrm{Cat}(f_\phi)$
(2.5)
Effectively, amortized inference is “learning to infer” (Çatal, Nauta, Verbelen, Simoens, & Dhoedt, 2019; Lee & Keramati, 2017; Millidge, 2019; Toussaint & Storkey, 2006; Tschantz et al., 2019; Ueltzhöffer, 2018). Variational autoencoders can be regarded as an instance of amortized inference, if we ignore conditioning on policies (Suh, Chae, Kang, & Choi, 2016). Clearly, amortization precludes online inference and may appear biologically implausible. However, it might be the case that certain brain structures learn to infer; for example, the cerebellum might learn from inferential processes implemented by the cerebral cortex (Doya, 1999; Ramnani, 2014).
### 2.3 Planning as Inference
The posterior over policies is somewhat simpler to evaluate—as a softmax function of their empirical,10 variational, and expected free energy. This can be expressed in terms of a generalized free energy that includes the parameters of the generative model (e.g., the likelihood parameters, $A$):
$Q(\pi) = \arg\min_Q F[Q(s,\pi,A)] = \sigma[-E(\pi) - F(\pi) - G(\pi)]$

$G(\pi) = E_{Q(o_\tau, s_\tau|\pi)Q(A)}\big[\ln\{Q(s_\tau|\pi)\, Q(A)\} - \ln P(o_\tau, s_\tau, A)\big]$
(2.6)
The expected free energy of a policy can be unpacked in a number of ways. Perhaps the most intuitive is in terms of risk and ambiguity:11
$G(\pi) = \underbrace{D_{KL}\big[Q(s_\tau, A|\pi)\,\|\,P(s_\tau, A)\big]}_{\text{Risk}} + \underbrace{E_{Q(o_\tau, s_\tau|\pi)}\big[-\ln P(o_\tau|s_\tau, A)\big]}_{\text{Ambiguity}}$
(2.7)
The equivalence between the expected free energy as shown in equations 2.6 and 2.7 rests on a mean-field assumption that equates the variational posterior for states and parameters with the product of their marginal posteriors. This means that policy selection minimizes risk and ambiguity. Risk, in this setting, is simply the difference between predicted and prior beliefs about final states. In other words, policies will be deemed more likely if they bring about states that conform to prior preferences. In the optimal control literature, this part of expected free energy underwrites KL control (Todorov, 2008; van den Broek et al., 2010). In economics, it leads to risk-sensitive policies (Fleming & Sheu, 2002). Ambiguity reflects the uncertainty about future outcomes, given hidden states. Minimizing ambiguity therefore corresponds to choosing future states that generate unambiguous and informative outcomes (e.g., switching on a light in the dark).
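To make these two terms concrete, the following numpy sketch (ours, with purely illustrative numbers and a fixed, known likelihood A) evaluates the risk and ambiguity of a single action: risk as the KL divergence between the predicted state distribution and prior preferences over states, and ambiguity as the expected conditional entropy of the likelihood.

```python
import numpy as np

def kl(q, p):
    return float(np.sum(q * (np.log(q) - np.log(p))))

A = np.array([[0.9, 0.5],          # P(o|s): rows are outcomes, columns states
              [0.1, 0.5]])
p_s = np.array([0.8, 0.2])         # prior preference P(s_tau)
q_s = np.array([0.3, 0.7])         # predicted Q(s_tau) under some action

risk = kl(q_s, p_s)                            # divergence from preferred states
H = -np.sum(A * np.log(A), axis=0)             # conditional entropy H(o|s) per state
ambiguity = float(q_s @ H)                     # expected under predicted states

G = risk + ambiguity                           # expected free energy (cf. eq. 2.7)
print(risk, ambiguity, G)
```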
Sometimes it is useful to express risk in terms of outcomes as opposed to hidden states—for example, when the generative model is unknown or one can only quantify preferences about outcomes (as opposed to the inferred causes of those outcomes). In these cases, the risk over hidden states can be replaced by the risk over outcomes by assuming the divergence between the predicted and true posterior is small (omitting parameters for clarity):
$\underbrace{D_{KL}\big[Q(s_\tau|\pi)\,\|\,P(s_\tau)\big]}_{\text{Risk (states)}} = \underbrace{D_{KL}\big[Q(o_\tau|\pi)\,\|\,P(o_\tau)\big]}_{\text{Risk (outcomes)}} + \underbrace{E_{Q(o_\tau|\pi)}\big[D_{KL}[Q(s_\tau|o_\tau,\pi)\,\|\,P(s_\tau|o_\tau)]\big]}_{\text{Expected evidence bound}}$
(2.8)
This divergence constitutes an expected evidence bound that also appears if we unpack expected free energy in terms of intrinsic and extrinsic value:12
$G(\pi) = \underbrace{-E_{Q(o_\tau|\pi)}[\ln P(o_\tau)]}_{\text{Extrinsic value}} + \underbrace{E_{Q(o_\tau|\pi)}\big[D_{KL}[Q(s_\tau, A|o_\tau,\pi)\,\|\,P(s_\tau, A|o_\tau)]\big]}_{\text{Expected evidence bound}} - \underbrace{E_{Q(o_\tau|\pi)}\big[D_{KL}[Q(s_\tau|o_\tau,\pi)\,\|\,Q(s_\tau|\pi)]\big]}_{\text{Intrinsic value (states) or salience}} - \underbrace{E_{Q(o_\tau, s_\tau|\pi)}\big[D_{KL}[Q(A|o_\tau,s_\tau,\pi)\,\|\,Q(A)]\big]}_{\text{Intrinsic value (parameters) or novelty}}$

$\;\;\geq \underbrace{-E_{Q(o_\tau|\pi)}[\ln P(o_\tau)]}_{\text{Expected log evidence}} - \underbrace{E_{Q(o_\tau|\pi)}\big[D_{KL}[Q(s_\tau, A|o_\tau,\pi)\,\|\,Q(s_\tau, A|\pi)]\big]}_{\text{Expected information gain}}$
(2.9)
The inequality in the final line of equation 2.9 is obtained by omitting the expected evidence bound that appears on the previous lines. As a KL-divergence, this is never negative and so ensures the final line is never greater than the expected free energy. In addition, the intrinsic value terms have been combined into the intrinsic value of both parameters and states. Extrinsic value is just the expected value of log prior preferences (i.e., log evidence), which can be associated with reward and utility in behavioral psychology and economics, respectively (Barto, Mirolli, & Baldassarre, 2013; Kauder, 1953; Schmidhuber, 2010). In this setting, extrinsic value is the complement of Bayesian risk (Berger, 2011). The intrinsic value of a policy is its epistemic value or affordance (Friston et al., 2015). This is just the expected information gain afforded by a particular policy, which can be about hidden states (i.e., salience) or model parameters (i.e., novelty). It is this term that underwrites artificial curiosity (Schmidhuber, 2006). The final inequality above shows that extrinsic value is the expected log evidence under beliefs about final outcomes, while the intrinsic value ensures that this expectation is maximally informed when outcomes are encountered. Collectively, these two terms underwrite the resolution of uncertainty about hidden states (i.e., information gain) and outcomes (i.e., expected surprisal) in relation to prior beliefs.
Intrinsic value is also known as intrinsic motivation in neurorobotics (Barto et al., 2013; Oudeyer & Kaplan, 2007; Ryan & Deci, 1985), the value of information in economics (Howard, 1966), salience in the visual neurosciences, and (rather confusingly) Bayesian surprise in the visual search literature (Itti & Baldi, 2009; Schwartenbeck, Fitzgerald, Dolan, & Friston, 2013; Sun, Gomez, & Schmidhuber, 2011). In terms of information theory, intrinsic value is mathematically equivalent to the expected mutual information between hidden states in the future and their consequences, consistent with the principles of minimum redundancy or maximum efficiency (Barlow, 1961, 1974; Linsker, 1990). Finally, from a statistical perspective, maximizing intrinsic value (i.e., salience and novelty) corresponds to optimal Bayesian design (Lindley, 1956) and machine learning derivatives, such as active learning (MacKay, 1992). On this view, active learning is driven by novelty—namely, the information gain afforded to beliefs about model parameters, given future states and their outcomes. Heuristically, this curiosity resolves uncertainty about “what would happen if I did that?” (Schmidhuber, 2010). Figure 1 illustrates the compass of expected free energy, in terms of its special cases, ranging from optimal Bayesian design through to Bayesian decision theory.
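The epistemic term can be computed just as directly. The sketch below (again ours and purely illustrative) evaluates the expected information gain about hidden states, $E_{Q(o)}[D_{KL}[Q(s|o)\,\|\,Q(s)]]$, for a predictive state distribution under a categorical likelihood.

```python
import numpy as np

A = np.array([[0.9, 0.1],          # P(o|s)
              [0.1, 0.9]])
q_s = np.array([0.5, 0.5])         # predictive Q(s) under some action

q_o = A @ q_s                      # predictive outcomes Q(o)
post = (A * q_s) / q_o[:, None]    # Bayes rule: post[o, s] = Q(s|o)

# Expected information gain (salience): E_Q(o)[ KL[Q(s|o) || Q(s)] ].
info_gain = sum(q_o[o] * np.sum(post[o] * (np.log(post[o]) - np.log(q_s)))
                for o in range(len(q_o)))
print(info_gain)   # largest when the likelihood is precise and Q(s) is uncertain
```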
## 3 Sophisticated Inference
So far, we have considered generative models of policies—namely, a fixed number of ordered action sequences. These generative models can be regarded as placing priors over actions that stipulate a small number of allowable action sequences. In what follows, we consider more general models, in which the random variables are actions at each point in time, such that policies become a prior over transitions among action or control states. If we relax this prior, such that successive actions are conditionally independent, we can simplify belief updating, and implicit planning, at the expense of having to consider a potentially enormous number of policies.
The simplification afforded by assuming actions are conditionally independent follows because both actions and states become Markovian. This means we can use belief propagation (Winn & Bishop, 2005; Yedidia, Freeman, & Weiss, 2005) to update posterior beliefs about hidden states and actions, given each new observation. In other words, we no longer need to evaluate the posterior over hidden states in the past to evaluate a posterior over policies. Technically, this is because policies introduced a semi-Markovian aspect to belief updating by inducing conditional dependencies between past and future hidden states. The upshot of this is that one can use posterior beliefs from the previous time step as empirical priors for hidden states and actions at the subsequent time step. This is formally equivalent to the forward pass in the forward-backward algorithm (Ghahramani & Jordan, 1997), where the empirical prior over hidden states depends on the preceding (i.e., realized) action. Put simply, we are implementing a Bayesian filtering scheme in which observations are generated by action at each time step. Crucially, the next action is sampled from an empirical prior based on (a free energy functional of) posterior beliefs about the current hidden state.
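A minimal sketch of one cycle of this filtering scheme is given below (ours, with illustrative matrices and names): predict the next state under a chosen action, generate an outcome, and update the posterior, with the action itself sampled from a softmax of a one-step expected free energy.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],                       # P(o|s)
              [0.1, 0.9]])
B = [np.eye(2),                                 # action 0: stay
     np.array([[0., 1.], [1., 0.]])]            # action 1: switch
C = np.log(np.array([0.7, 0.3]))                # log prior preferences over states
H = -np.sum(A * np.log(A), axis=0)              # conditional entropy of the likelihood
q_s = np.array([0.5, 0.5])                      # current posterior Q(s_t)

def efe(q_next):
    """One-step expected free energy: risk plus ambiguity."""
    return float(q_next @ (np.log(q_next + 1e-16) - C) + q_next @ H)

# Empirical prior over the next action and (maximum a posteriori) selection.
G = np.array([efe(B[u] @ q_s) for u in range(len(B))])
u = int(np.argmax(softmax(-G)))

# Predict, observe and update (Bayesian filtering).
prior = B[u] @ q_s                              # Q(s_{t+1} | u)
o = rng.choice(2, p=A @ prior)                  # outcome sampled from the predictive density
q_s = A[o] * prior
q_s = q_s / q_s.sum()                           # posterior Q(s_{t+1})
print(u, o, q_s)
```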
Figure 1:
Active inference. This figure illustrates the various ways in which minimizing expected free energy can be unpacked. The upper panel casts action and perception as the minimization of variational and expected free energy, respectively. Crucially, active inference introduces beliefs over policies that enable a formal description of planning as inference (Attias, 2003; Botvinick & Toussaint, 2012; Kaplan & Friston, 2018). In brief, posterior beliefs about hidden states of the world, under plausible policies, are optimized by minimizing a variational (free energy) bound on log evidence. These beliefs are then used to evaluate the expected free energy of allowable policies, from which actions can be selected (Friston, FitzGerald et al., 2017). Crucially, expected free energy subsumes several special cases that predominate in psychology, machine learning, and economics. These special cases are disclosed when one removes particular sources of uncertainty from the implicit optimization problem. For example, if we ignore prior preferences, then the expected free energy reduces to information gain (Lindley, 1956; MacKay, 2003) or intrinsic motivation (Barto et al., 2013; Oudeyer & Kaplan, 2007; Ryan & Deci, 1985). This is mathematically the same as expected Bayesian surprise and mutual information that underwrites salience in visual search (Itti & Baldi, 2009; Sun et al., 2011) and the organization of our visual apparatus (Barlow, 1961, 1974; Linsker, 1990; Optican & Richmond, 1987). If we now remove risk but reinstate prior preferences, one can effectively treat hidden and observed (sensory) states as isomorphic. This leads to risk sensitive policies in economics (Fleming & Sheu, 2002; Kahneman & Tversky, 1979) or KL control in engineering (van den Broek et al., 2010). Here, minimizing risk corresponds to aligning predicted outcomes to preferred outcomes. If we then remove intrinsic value, we are left with extrinsic value or expected utility in economics (Von Neumann & Morgenstern, 1944) that underwrites reinforcement learning and behavioral psychology (Sutton & Barto, 1998). Bayesian formulations of maximizing expected utility under uncertainty are also known as Bayesian decision theory (Berger, 2011). Finally, if we just consider a completely unambiguous world with uninformative priors, expected free energy reduces to the negative entropy of posterior beliefs about the causes of data, in accord with the maximum entropy principle (Jaynes, 1957). The expressions for variational and expected free energy correspond to those described in the main text (omitting model parameters for clarity). They are arranged to illustrate the relationship between complexity and accuracy, which become risk and ambiguity, when considering the consequences of action. This means that risk-sensitive policy selection minimizes expected complexity or computational cost (Sengupta & Friston, 2018). The faces shown are, from left to right, H. Barlow, W. H. Fleming, D. Kahneman, A. Tversky, and E. T. Jaynes.
Note that we do not need to evaluate a posterior over action, because action is realized before the next observation is generated. In other words, we can sample realized actions from an empirical prior over actions that inherits from the posterior over all previous states. This leads to a simple belief-propagation scheme for planning as inference that can be expressed as follows:
$Q(s_\tau|u_{<\tau}) = P(s_\tau|o_{<\tau}, u_{<\tau}) = E_{Q(s_{\tau-1})}\big[P(s_\tau|s_{\tau-1}, u_{\tau-1})\big]$

$Q(s_\tau) = P(s_\tau|o_{\leq\tau}, u_{<\tau}) \propto P(o_\tau|s_\tau)\, Q(s_\tau|u_{<\tau})$

$Q(u_\tau) = \sigma[-G(u_\tau)]$

$G(u_\tau) = E_{P(o_{\tau+1}|s_{\tau+1})\, Q(s_{\tau+1}|u_{<\tau+1})}\Big[\underbrace{\underbrace{\ln Q(s_{\tau+1}|u_{<\tau+1}) - \ln P(s_{\tau+1})}_{\text{Risk}}\; \underbrace{-\, \ln P(o_{\tau+1}|s_{\tau+1})}_{\text{Ambiguity}}}_{\text{Expected free energy of next action}}\Big]$
(3.1)
Here, $Q(sτ|u<τ)$ denotes an empirical prior—from the point of view of state estimation—or a predictive posterior—from the point of view of action selection—over hidden states, given realized actions $u<τ$. Similarly, $Q(sτ)$ denotes the corresponding posterior, given subsequent outcomes. The first line follows immediately from the operation of marginalization, the second is an application of Bayes's theorem, and the third is from equation 2.6. This scheme is exact because we have made no mean-field approximations of the sort required by variational message passing (Dauwels, 2007; Friston, Parr, et al., 2017; Parr et al., 2019; Winn & Bishop, 2005). Note that $Q(s1|u<1)=P(s1)$, with all subsequent $Q$ distributions derived recursively from this, meaning no variational approximation is required. However, it is worth noting a subtle difference between the $Q$ distributions used here, and those encountered in equation 2.1). The difference is that equation 3.1 only takes account of those outcomes acquired at or before the time associated with the state. In equation 2.1), the posteriors depend on all the outcomes collected, that is, smoothing as opposed to the filtering in equation 3.1. The difference between these largely dissolves when dealing with beliefs about future states (when all relevant outcomes are earlier). Furthermore, there are no conditional dependencies on policies, which have been replaced by realized actions. However, equation 3.1 only considers the next action. The question now arises: How many future actions should we consider?
At this point, the cost of the Markovian assumption arises: if we choose a policy horizon that is too far into the future, the number of policies could be enormous. In other words, we could effectively induce a deep tree search over all possible sequences of future actions that would be computationally prohibitive. However, we can now turn to sophisticated schemes to finesse the combinatorics. This rests on the straightforward observation that if we propagate beliefs and uncertainty into the future, we only need to evaluate policies or paths that have a nontrivial likelihood of being pursued. This selective search over plausible paths is constrained at two levels. First, by propagating probability distributions, we can restrict the search over future outcomes—for any given action at any point in the future—that have a nontrivial posterior probability (e.g., greater than 1/16). Similarly, we only need to evaluate those policies that are likely to be pursued—namely, those with an expected free energy that renders their prior probability nontrivial (e.g., greater than 1/16).
This deep search involves evaluating all actions under all plausible outcomes so that one can perform counterfactual belief updating at each point in time (given all plausible outcomes). However, it is not necessary to evaluate outcomes per se; it is sufficient to evaluate distributions over outcomes, conditioned on plausible hidden states. This is a subtle but important aspect of finessing the combinatorics of belief propagation into the future and rests on having a generative model (that generates outcomes).
Heuristically, one can imagine searching a tree with diverging branches at successive times in the future but terminating the search down any given branch when the prior probability of an action (and the predictive posterior probability of its subsequent outcome) reaches a suitably small threshold (Keramati, Smittenaar, Dolan, & Dayan, 2016; Solway & Botvinick, 2015). To form a marginal empirical prior over the next action, one simply accumulates the average expected free energy from all the children of a given node in the tree recursively. A softmax function of this accumulated average then constitutes the empirical prior over the next action. Algorithmically, this can be expressed as follows, based on appendix 6 (Friston, FitzGerald et al. 2017), where $uτ$ denotes action at $τ≥t$ (omitting novelty terms associated with model parameters for clarity):
$G(o_\tau, u_\tau) = \underbrace{E_{P(o_{\tau+1}|s_{\tau+1})\, Q(s_{\tau+1}|u_{<\tau+1})}\Big[\underbrace{\ln Q(s_{\tau+1}|u_{<\tau+1}) - \ln P(s_{\tau+1})}_{\text{Risk}}\; \underbrace{-\, \ln P(o_{\tau+1}|s_{\tau+1})}_{\text{Ambiguity}}\Big]}_{\text{Expected free energy of next action}} + \underbrace{E_{Q(u_{\tau+1}|o_{\tau+1})\, Q(o_{\tau+1}|u_{\leq\tau})}\big[G(o_{\tau+1}, u_{\tau+1})\big]}_{\text{Expected free energy of subsequent actions}}$

$Q(u_\tau|o_\tau) = \sigma[-G(o_\tau, u_\tau)]$

$Q(o_\tau|u_{<\tau}) = E_{Q(s_\tau|u_{<\tau})}\big[P(o_\tau|s_\tau)\big]$
(3.2)
Posterior beliefs over hidden states and empirical priors over action are then recovered from the above recursion as follows, noting that one's most recent action $(ut-1)$ and current outcome $(ot)$ are realized (i.e., known) variables:
$Q(s_t) \propto P(o_t|s_t)\, Q(s_t|u_{<t})$

$Q(u_t) = Q(u_t|o_t) = \sigma[-G(o_t, u_t)]$
(3.3)
Equation 3.2 expresses the expected free energy of each potential next action $(uτ)$ as the risk and ambiguity of that action plus the average expected free energy of future beliefs, under counterfactual outcomes and actions $(uτ+1)$. Readers familiar with the Bellman optimality principle (Bellman, 1952) may recognize a formal similarity between equation 3.2 and the Bellman equation because both inherit from the same recursive logic. The sophisticated inference scheme deals with functionals (functions of belief distributions over states), while the Bellman equation deals directly with functions of states.
Figure 2 provides a schematic that casts this recursive formulation as a deep tree search. This search can be terminated at any depth or horizon. Later, we will rewrite this recursive scheme in terms of sufficient statistics to illustrate its simplicity. It would be possible to formulate each path through the tree of actions as an alternative policy and simply sum the expected free energy, based on current posterior beliefs, along each of those paths. This is the approach that has traditionally been pursued in active inference (Friston et al., 2016; Friston, FitzGerald et al., 2017), and accounts for the consequences of action on belief updating. The advance offered by the sophisticated formulation is that it also accounts for the consequences of anticipated belief updates for future actions. In other words, an unsophisticated creature may entertain the belief that if I did that, I would find out about this. A sophisticated creature additionally believes that if I found that out, I would then do this. An intuitive example would be in deciding whether to check the news, look at the weather forecast, read a novel, or go for a walk. The first two options might offer similar information gain and would appeal to an unsophisticated agent. Without knowing the weather, the latter two will be hard to disambiguate given preferences for walking in the sun or reading indoors if it were raining. A more sophisticated agent will find the weather forecast more salient than the news: knowing the weather will determine whether the next action will be to go for a walk or stay in and read, given that the preferred option is more likely to be chosen once the weather is known.
Figure 2:
Deep policies searches. This schematic summarizes the accumulation of expected free energy over paths or trajectories into the future. This can be construed as a deep tree search, where the tree branches over allowable actions at each point in time and the likely outcomes consequent on each action. The arrows between actions and outcomes have been drawn in the reverse direction (directed from the future) to depict the averaging of expected free energy over actions (green arrows) and subsequent averaging over the outcomes entailed by the preceding action (pink arrows). This dual averaging over actions (given outcomes) and outcomes (given actions) is depicted by the equations in the upper panel. Here, the green nodes of this tree correspond to outcomes, with one (realized) outcome at the current time (at the top). The pink nodes denote actions—here, just four. Note that the search terminates whenever an action is deemed unlikely or an outcome is implausible. The panel on the lower right represents the conditional dependencies in the generative model as a probabilistic graphical model. The parameters of this model are shown on squares, and the variables are shown on circles. The arrows denote conditional dependencies. Filled circles are realized variables at the current time—namely, the preceding action and the subsequent outcome. Note that the expected free energy is shown here as a functional of beliefs about states, where these beliefs are updated based on actions and outcomes. In the main text, we drop the explicit dependence on $Q$ and express the expected free energy directly as a function of outcomes and actions.
This sort of approach to evaluating a tree of possible policies, using a recursive form for the expected free energy, has been suggested by others (Çatal, Verbelen, Nauta, Boom, & Dhoedt, 2020; Çatal, Wauthier, et al., 2020), who have applied this in the context of robot vision and navigation. The distinction between this and the formulation presented here is the sophisticated aspect: here, each additional step into the future evaluates the expected free energy in terms of the beliefs anticipated at that time point, as opposed to beliefs held (at the present) about that time point. Despite this difference, the similarities in these approaches speak to the feasibility of scaling sophisticated inference to high-dimensional problems.
Having established the formal basis of sophisticated planning, in terms of belief propagation, we now turn to some illustrative examples to show how it works in practice.
## 4 Simulations
In this section, we provide some simulations to compare sophisticated and unsophisticated schemes on the three-arm bandit task described in section 1. Here, we frame this paradigm in terms of a rat foraging in a three-arm T-maze, where the right and left upper arms are baited with rewards and punishments, and the bottom arm contains an instructional cue indicating whether the bait is likely to be on the right or left. In these examples, cue validity was 95%. The details of this setup have been described elsewhere (Friston et al., 2016; Friston, FitzGerald et al., 2017). In brief, the generative model comprises a likelihood mapping between hidden states and outcomes and probability transitions among states. Here, there are two outcome modalities. The first reports the experience of the rat in terms of its location (with distinct outcomes for the instructional cue location – right versus left). The second modality registered rewarding outcomes, with three levels (none, reward, and punishment—for example, foot shock). There were two hidden factors: the rat's location (with four possibilities) and the latent context (i.e., whether the rewarding arm was on the right or the left). With these hidden states and outcomes, we specify the generative model in terms of:
• The sensory mapping A, which maps from the two hidden state factors (location and context) to each of the two sensory modalities (location and reward).
• The transition matrices B, which govern how states at one time point map onto the next, given a particular action $(ut)$. The transitions among locations are action dependent, with four actions (moving to one of the four locations), while the context did not change during any particular trial (i.e., there were no context transitions within trials).
• The cost vectors C for each hidden state factor, which also specify the agent's preferences for each outcome modality. The latter allows for an alternative formulation that we discuss below.
• The priors over initial states, D.
In the following simulations, the rat experienced 32 trials, each comprising two moves with three outcomes, including an initial outcome that located the rat at the start (i.e., center) location. The rat encountered the first trial with ambiguous prior beliefs about the context, that is, the reward was equally likely to be right or left.
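For readers who find it easier to see this structure in code, a minimal numpy sketch of the T-maze parameterization follows. The array names, shapes, and exact probabilities are our own illustrative choices (not the settings used to generate the reported results), but they respect the description above: a 95% valid cue, deterministic location outcomes, and a context that does not change within a trial.

```python
import numpy as np

n_loc, n_ctx = 4, 2        # locations: 0 centre, 1 left, 2 right, 3 cue arm
p_valid = 0.95             # cue validity

# Reward modality A_reward[o, loc, ctx]: outcomes 0 none, 1 reward, 2 punishment.
A_reward = np.zeros((3, n_loc, n_ctx))
A_reward[0, [0, 3], :] = 1.0                    # centre and cue arm: neutral outcome
A_reward[1, 1, 0] = A_reward[2, 2, 0] = 1.0     # context 0: left baited, right punished
A_reward[1, 2, 1] = A_reward[2, 1, 1] = 1.0     # context 1: right baited, left punished

# Location/cue modality A_where[o, loc, ctx]: outcomes 0 to 3 report the location;
# outcomes 4 and 5 are the "left" and "right" cues, visible only in the cue arm.
A_where = np.zeros((6, n_loc, n_ctx))
for loc in range(3):
    A_where[loc, loc, :] = 1.0
A_where[4, 3, 0] = A_where[5, 3, 1] = p_valid
A_where[5, 3, 0] = A_where[4, 3, 1] = 1.0 - p_valid

# Action-dependent transitions over location (the context never changes in a trial).
B_loc = np.zeros((4, n_loc, n_loc))
for u in range(4):
    B_loc[u, u, :] = 1.0                        # action u moves the rat to location u

# Prior costs over reward outcomes and priors over initial hidden states.
C_reward = np.array([0.0, -2.0, 2.0])           # none 0, reward -2, punishment +2
D_loc = np.array([1.0, 0.0, 0.0, 0.0])          # always start at the centre
D_ctx = np.array([0.5, 0.5])                    # context initially unknown
```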
Given this parameterization of the generative model, the expected free energy of an action, given outcomes (equation 4.1), can be expressed in terms of the sufficient statistics of posterior beliefs and model parameters as follows:
$G(u_\tau, o_\tau) = \underbrace{\mathbf{s}_{\tau+1}^{u} \cdot \big[\ln \mathbf{s}_{\tau+1}^{u} + \mathbf{C} + \mathbf{H}\big]}_{\text{Expected free energy of next action}} + \underbrace{\mathbf{u}_{\tau+1}^{o} \cdot G(u_{\tau+1}, o_{\tau+1})\, \mathbf{o}_{\tau+1}^{u}}_{\text{and subsequent actions}}$

$\mathbf{s}_\tau \propto (\mathbf{A} \cdot o_\tau) \odot \mathbf{s}_\tau^{u}$

$\mathbf{s}_\tau^{u} = \mathbf{B}(u_{\tau-1})\, \mathbf{s}_{\tau-1}$

$\mathbf{o}_\tau^{u} = \mathbf{A}\, \mathbf{s}_\tau^{u}$

$\mathbf{u}_\tau^{o} = \sigma[-G(u_\tau, o_\tau)]$
(4.1)
Here, $⊙$ denotes a Hadamard (i.e., element-wise) product, and the dot notation means $A·oτ≡AToτ$. H is the conditional entropy of the likelihood distribution. The sufficient statistics are the parameters of the categorical distributions in equation 3.2, where model parameters are usually hyperparameterized in terms of the concentration parameters of Dirichlet distributions (denoted by capital and lowercase bold variables, respectively):
$Q(s_\tau) = \mathrm{Cat}(\mathbf{s}_\tau), \quad Q(s_{\tau+1}|u_{\leq\tau}) = \mathrm{Cat}(\mathbf{s}_{\tau+1}^{u}), \quad Q(u_\tau|o_\tau) = \mathrm{Cat}(\mathbf{u}_\tau^{o}), \quad Q(o_\tau|u_{<\tau}) = \mathrm{Cat}(\mathbf{o}_\tau^{u})$

$P(o_\tau|s_\tau) = \mathrm{Cat}(\mathbf{A}), \quad P(s_{\tau+1}|s_\tau, u_\tau) = \mathrm{Cat}(\mathbf{B}(u_\tau)), \quad P(s_1) = \mathrm{Cat}(\mathbf{D})$

$\mathbf{C} = -\ln P(s_\tau), \quad \mathbf{H} = -\mathrm{diag}(\mathbf{A} \cdot \ln \mathbf{A})$

$P(\mathbf{A}) = \mathrm{Dir}(a), \quad P(\mathbf{B}) = \mathrm{Dir}(b), \quad P(\mathbf{D}) = \mathrm{Dir}(d)$
(4.2)
The equivalent scheme, when specifying preferences in terms of outcomes $C=lnP(oτ)$, is
$G(u_\tau, o_\tau) = \underbrace{\mathbf{o}_{\tau+1}^{u} \cdot \big[\ln \mathbf{o}_{\tau+1}^{u} + \mathbf{C}\big] + \mathbf{s}_{\tau+1}^{u} \cdot \mathbf{H}}_{\text{Expected free energy of next action}} + \underbrace{\mathbf{u}_{\tau+1}^{o} \cdot G(u_{\tau+1}, o_{\tau+1})\, \mathbf{o}_{\tau+1}^{u}}_{\text{and subsequent actions}}$
(4.3)
As noted, it is usually more convenient to search over distributions over outcomes that are generated by (plausible) hidden states as opposed to (plausible) outcomes per se. This approach produces a slightly simpler form for expected free energy:
$G(u_\tau, o_\tau) = \mathbf{s}_{\tau+1}^{u} \cdot \Big[\underbrace{\ln \mathbf{s}_{\tau+1}^{u} + \mathbf{C} + \mathbf{H}}_{\text{Next action}} + \underbrace{G(u_{\tau+1}, \mathbf{A}\mathbf{s}_{\tau+1}^{u}) \cdot \mathbf{u}_{\tau+1}^{o}}_{\text{and subsequent actions}}\Big]$
(4.4)
Finally, as intimated above, the recursive estimation of expected free energy from subsequent states can be terminated when the probability of an action or outcome can be plausibly discounted. In the simulations here, searches over paths were terminated when the predictive probability fell below 1/16. This choice of threshold is a little arbitrary and could itself be optimized either in relation to the accumulated free energy for a synthetic agent or in fitting empirical behavior. However, the 1/16 value offers a useful balance as it enables elimination of policies that are highly unlikely, improving efficiency of planning while also being relatively conservative. It corresponds to a probability of about 0.06, close to the ubiquitous 0.05 used to reject null hypotheses in frequentist statistics. We make no claim as to 1/16 being the optimal threshold in the context of all tasks—or even in those shown here. However, this is something that could be optimized in relation to a specific task by finding the threshold that minimizes the free energy accumulated over time.
While crude, this works under the assumption that if one policy is 16 times less likely than alternatives given how far it has been evaluated, it is unlikely to be redeemed by evaluating it further. As such, there are savings to be had in not doing so. If there were no constraints on computational resources (temporally or thermodynamically), the pruning threshold could be set to be zero, ensuring an exhaustive evaluation of all possible policies. The principles that underwrite sophisticated inference do not depend on this specific implementational detail, and alternative methods could be used.
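Putting equation 4.4 and the pruning threshold together, a minimal recursive implementation might look like the sketch below (ours, for a single flattened state factor; only the outcome-level pruning is shown, and A, B, C, H follow the format of equation 4.2).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

THRESH = 1.0 / 16.0      # prune branches whose predictive probability is below this

def sophisticated_G(q_s, A, B, C, H, depth):
    """Expected free energy over actions, given current beliefs q_s (cf. eq. 4.4)."""
    G = np.zeros(len(B))
    for u in range(len(B)):
        s_next = B[u] @ q_s                                   # predictive states
        G[u] = s_next @ (np.log(s_next + 1e-16) + C + H)      # risk plus ambiguity
        if depth > 1:
            o_next = A @ s_next                               # predictive outcomes
            for o in np.flatnonzero(o_next > THRESH):         # plausible outcomes only
                q_o = A[o] * s_next
                q_o = q_o / q_o.sum()                         # counterfactual posterior
                G_next = sophisticated_G(q_o, A, B, C, H, depth - 1)
                q_u = softmax(-G_next)                        # beliefs about the next action
                G[u] += o_next[o] * (q_u @ G_next)            # average over subsequent actions
    return G

# Tiny illustrative model: two states, two outcomes, two actions (stay, switch).
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B = [np.eye(2), np.array([[0., 1.], [1., 0.]])]
C = -np.log(np.array([0.75, 0.25]))              # cost = negative log preference over states
H = -np.sum(A * np.log(A), axis=0)

G = sophisticated_G(np.array([0.5, 0.5]), A, B, C, H, depth=3)
print(softmax(-G))                               # empirical prior over the next action
```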
Other approaches to searching through policy trees include schemes like Thompson sampling (Ortega & Braun, 2010; Osband, Van Roy, Russo, & Wen, 2019; Thompson, 1933), which sample from the posterior probability for states and select policies that maximize preferences given this sample. Like the threshold we have selected, this simplifies the search through alternative policies by using samples in place of evaluating the full posterior probabilities. With enough exposure to a task, Thompson sampling ensures that the full space of plausible policies is attempted, possibly finding “optimal” policies that are discounted by early pruning under our approach. In our setting, Thompson sampling would not be appropriate because our focus is on inference (selecting the best policy within a trial) as opposed to learning a policy over many exposures to a trial. Having said this, it is worth highlighting that action selection using the sophisticated inference scheme involves sampling from the posterior distribution over actions—subject to some temperature parameter. While this parameter is typically very large so that the maximum a posteriori action is chosen, this could be relaxed to ensure the occasional selection of unlikely actions, in the spirit of Thompson sampling.
The simulations were chosen to illustrate the fidelity of beliefs about action (i.e., what to do next) with and without a sophisticated update scheme (see equations 3.1 and 3.2). We anticipated that sophisticated schemes would outperform unsophisticated schemes, in the sense that they would learn any contingencies more efficiently, via more confident action selection. This learning was elicited by baiting the left arm consistently, after a couple of trials, so that priors about the initial (latent context) state could be accumulated, in the form of posterior (Dirichlet) concentration parameters (d). In these generative models, learning is straightforward and involves the accumulation of posterior concentration parameters (Friston et al., 2016). For example, to learn the likelihood mapping and initial hidden states, we have14
$$\begin{aligned} \mathbf{A} &= \mathbf{a} \odot \mathbf{a}_{0}^{\odot -1}, \quad a_{0ij} = \sum_i a_{ij}, \quad \mathbf{a} = \mathbf{a} + \sum_\tau o_\tau \otimes s_\tau \\ \mathbf{D} &= \mathbf{d} \odot \mathbf{d}_{0}^{\odot -1}, \quad d_{0i} = \sum_i d_{i}, \quad \mathbf{d} = \mathbf{d} + s_{1} \end{aligned}$$
(4.5)
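A minimal sketch of these updates, assuming one-hot outcome vectors and posterior state vectors, might look as follows; the function and variable names are ours, for illustration only.

```python
import numpy as np

def update_dirichlet(a, d, outcomes, states):
    """Accumulate Dirichlet counts as in equation 4.5 (illustrative names).

    a : (n_outcomes, n_states) concentration parameters of the likelihood
    d : (n_states,) concentration parameters over initial states
    outcomes : list of one-hot outcome vectors o_tau
    states : list of posterior state vectors s_tau (states[0] is s_1)
    """
    for o_t, s_t in zip(outcomes, states):
        a = a + np.outer(o_t, s_t)              # a <- a + sum_tau o_tau (x) s_tau
    d = d + states[0]                           # d <- d + s_1
    A = a / a.sum(axis=0, keepdims=True)        # A = a (./) a0, column-normalized
    D = d / d.sum()                             # D = d (./) d0
    return a, d, A, D
```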
In these sorts of simulation, the agent succumbs to the epistemic affordance of the instructional cues until it learns that the reward is always on the left-hand side—at which point, the expected utility (or extrinsic value) of going directly to the baited arm exceeds the epistemic affordance (or intrinsic value) of soliciting the instructional cue. At this point, there is a switch from explorative to exploitative behavior—the behavioral measure we used to compare sophisticated and unsophisticated schemes.
### 4.1 Exploration and Exploitation in a T-Maze
Figure 3 shows the results of three simulations. In these simulations, the rat performed 32 trials, where each trial had two moves, starting from the central location. The prior preferences for reward and punishment outcomes were specified with prior costs (C) of −2 and +2, respectively.15 In these and subsequent simulations, actions were selected as the most likely (maximum a posteriori) action. Therefore, all subsequent simulations are deterministic realizations of (Bayes) optimal behavior based on expected free energy. The simulations start with a sophisticated agent with a planning horizon of two (this corresponds to the depth of action sequences considered into the future). In other words, it accumulates the expected free energy of all plausible paths until the end of each trial. This enables a confident and definitive epistemic policy selection that gives way to exploitation, when the rat realizes the reward is always located in the left arm.
Figure 3:
Epistemic foraging in a T-maze: This figure shows the results of simulations based on the T-maze paradigm described in the main text. The left panel shows the results of simulating 32 trials, where the rat started at the central location. Each trial comprises two moves. The insert on the upper left illustrates foraging for information by interrogating the instructional cue in the lower arm and then securing the reward in the left arm. The results in each of the three panels have the same format. The upper row illustrates the predictive distribution over actions (moves to the central location, the left, the right, and lower arm, respectively). The darker the color, the more likely the action. The cyan dots are the actions that were sampled and executed at each epoch, within each trial. The colored dots above indicate the hidden context—namely, whether the left or right arm was baited. The middle panel shows the resulting performance in terms of the expected utility or negative Bayesian risk. The colored circles show the final outcome (blue location 3—right arm—and green location 2—left arm). The lowest panel (on the left) shows the posterior beliefs about the hidden context (right versus left) based on Dirichlet concentration parameters, accumulated over trials. The left panel of results shows confident epistemic behavior with a planning horizon of two. As is typical in these kinds of simulations, the agent starts off by foraging for information and responding to the epistemic affordance of the instructional cue in the lower arm. However, because the reward is always encountered in the left arm (after the first couple of trials), the rat loses interest in the instructional cue as it becomes more confident about where the reward is located. This experience-dependent loss of epistemic affordance leads to a switch from exploratory to exploitative behavior—here, at trial 16. A similar kind of behavior is shown in the upper right panels; however, here, the planning horizon was reduced to one. In other words, the rat considered only the expected free energy of one move ahead. The key difference here is a less confident (i.e., precise) belief distribution over early actions (highlighted by the red circles). Although the lower arm has the greatest posterior probability, there is a nontrivial probability that the rat thinks it should stay where it is. This mild ambiguity about what should be done means that exploratory behavior yields to exploitative behavior slightly earlier, at trial 10. Finally, the lower right panels show the results when expected free energy is replaced by Bayesian risk. In other words, any epistemic affordance of the instructional cue is precluded. This renders the posterior probability of staying or moving to the lower arm the same. When, by chance, the instructional cue is encountered, exploitative behavior follows; however, there are times when the rat simply stays at the central location and learns nothing about the prevailing context. Note that in this example, there are costly trials in which the rat fails to visit either baited arm.
If we compare this performance with that of an unsophisticated rat, which looks just one move ahead, we see a similar behavior. However, there are two differences. First, the rat is less confident about its behavior because it does not evaluate the consequences of its actions in terms of belief updating. Although it finds the instructional cue more attractive, in virtue of its epistemic affordance, it is still partially compelled to remain at the central location, which ensures that it will avoid aversive outcomes. Because the unsophisticated agent underestimates the epistemic affordance of the instructional cue, it paradoxically performs better in terms of suspending its information foraging earlier and switching to exploitative behavior a few trials before the sophisticated agent (but see below).
For completeness, we show the results of an unsophisticated agent, whose behavior is predicated on Bayesian risk, that is, with no epistemic value in play. As might be anticipated, this agent exposes itself to Bayesian risk, forgoing a visit to the right or left arm, in a way that is precluded by agents who minimize expected free energy. Here, the starting and instructional cue locations are equally attractive. When the rat is lucky enough to select the lower arm, it knows what to do; however, it has no sense that this is the right kind of behavior. After a sufficient number of trials, it realizes that the reward is always on the left-hand side and starts to respond in an exploitative fashion, albeit with relatively low confidence. These results highlight the distinction between sophisticated and unsophisticated agents who predicate their policy selection on expected free energy and between unsophisticated agents using expected free energy with and without epistemic affordance.
In the simulations, the sophisticated agent persevered with its epistemic behavior for longer than the unsophisticated agent. At first glance, this may seem a paradoxical result if we were measuring performance in terms of Bayesian risk. However, this is not the case, as illustrated in Figure 4. Here, we repeated the simulations above with one small change: we made the epistemic cue mildly aversive by giving it a cost of one. This has no effect on the sophisticated agent other than slightly abbreviating the exploratory phase of activity. However, the unsophisticated agent has, understandably, been caught in a bind. The starting location is now marginally preferable to the instructional cue—and it has no reason to leave the center of the maze. While this ensures aversive outcomes are avoided, it also precludes epistemic foraging and subsequent exploitation. Heuristically, only the sophisticated agent can see past the short-term pain for the long-term gain. We will pursue this theme in the final simulations, where the agent's planning horizon becomes nontrivial.
Figure 4:
This reproduces the results of Figure 3 with a deep policy search (of horizon or depth 2). However, here, we have made the lower arm slightly aversive. This is no problem for the sophisticated agent who sees through the short-term cost to visit the instructional cue as usual. Because this location is mildly aversive, the switch to exploitative behavior is now slightly earlier (at trial 12). Contrast this behavior with an unsophisticated agent that does not look beyond its next move. The resulting behavior is shown in the lower panels. Unsurprisingly, the agent just stays at the starting position and learns nothing about its environment—and safely avoids all adverse outcomes at the expense of forgoing any rewards.
### 4.2 Deep Planning and Navigation
The simulations show that a sophisticated belief-updating scheme enables more confident and nuanced policy selection, which translates into more efficient exploitative behavior. To illustrate how this scheme scales up to deeper policy searches, we revisit a problem that has been previously addressed using a bespoke prior, based on the graph Laplacian (Kaplan & Friston, 2018). This problem was previously framed in terms of navigation to a target location in a maze. Here, we forgo any special priors to see if the sophisticated scheme could handle deep tree searches that underwrite paradoxical behaviors, like moving away from a target to secure it later (see the mountain car problem). Crucially, in this instance, there was no ambiguity about the hidden states. However, there was ambiguity or uncertainty about the likelihood mapping that determines whether a particular location should be occupied. In other words, this example uses a more conventional foraging setup in which the rat has to learn about the structure of the maze while simultaneously pursuing its prior preferences to reach a target location. Here, exploratory behavior is driven by the intrinsic value or information gain afforded to beliefs about parameters of the likelihood model (as opposed to hidden states). Colloquially, one can think of this as epistemic affordance that is underwritten by novelty as opposed to salience (Barto et al., 2013; Parr & Friston, 2019a; Schwartenbeck et al., 2019). Having said this, we anticipated that exactly the same kind of behavior would arise and that the sophisticated scheme would be able to plan to learn and then exploit what it has learned.
In this paradigm, a rat has to navigate an 8 × 8 grid maze, where each location may or may not deliver a mildly aversive stimulus (e.g., a foot shock). Navigation is motivated by prior preferences to occupy a target location—here, the center. In the simulations below, the rat starts at the entrance to the maze and has a prior preference for safe outcomes (cost of −1) and against aversive outcomes (cost of +1). Prior preferences for location depend on the distance from the current position to the target location. The generative model for this setup is simple: there was one hidden factor with 64 states corresponding to all possible locations. These hidden states generate safe or aversive (somatosensory) outcomes, depending on the location. In addition, (exteroceptive) cues are generated that directly report grid location. The five allowable actions comprise one step in any of the four directions or staying put.
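For concreteness, the transition structure of such a model might be built as follows. This is a minimal sketch under the stated assumptions (64 locations, five actions, moves off the grid leave the rat in place); it is not the code used for the reported simulations.

```python
import numpy as np

N = 8                                   # the maze is an 8 x 8 grid: 64 hidden states
ACTIONS = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]   # stay, up, down, left, right

def transition_tensors():
    """Build one deterministic transition matrix per action, B[a][s_next, s].

    Moves that would leave the grid keep the rat where it is. A minimal
    sketch of the model structure described in the text, not the authors' code.
    """
    B = np.zeros((len(ACTIONS), N * N, N * N))
    for a, (dx, dy) in enumerate(ACTIONS):
        for x in range(N):
            for y in range(N):
                s = x + N * y                       # current location index
                nx = min(max(x + dx, 0), N - 1)     # clip to the grid
                ny = min(max(y + dy, 0), N - 1)
                B[a, nx + N * ny, s] = 1.0          # deterministic transition
    return B

B = transition_tensors()                # each column of B[a] sums to one
```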
Figure 5 shows the results of typical simulations when increasing the planning horizon from 1 through to 4. The key point here is that there is a critical horizon, which enables our subject to elude local minima of expected free energy as it pursues its goal. In these simulations, our subject was equipped with full knowledge of the aversive locations and simply planned a route to its target location. However, relatively unsophisticated agents get stuck on the other side of aversive barriers that are closest to the target location. In other words, they remain in locations in which the expected free energy of leaving is always greater than staying put (Cohen, McClure, & Yu, 2007). This can happen when the planning horizon is insufficient to enable the rat to contemplate distal (and potentially preferable) outcomes (as seen in the lower left and middle panels of Figure 5). However, with a planning horizon of 4 (or more), these local minima are vitiated, and the rat easily plans—and executes—the shortest path to the target. In these simulations, the total number of moves was eight, which is sufficient to reach the target via the shortest path. This sort of behavior is reminiscent of the prospective planning required to solve things like the mountain car problem. In other words, the path of least expected free energy can often involve excursions through state (and belief) space that point away from the ultimate goal.
Figure 5:
Navigation as inference: This figure reports the results of a simulated maze navigation. The upper panels illustrate the form of this maze, which comprises an 8 × 8 grid. Each location may or may not deliver a mildly aversive outcome (e.g., a foot shock). At the same time, the rat's prior preference is to be near the center of the maze. These prior preferences are shown in image format in the top right panel, where the log prior preference is illustrated in pink, with white being the most preferred location. The middle three panels record the trajectory or path taken by a rat from the starting location on the lower left, showing the (deterministic) solutions for planning horizons of 1, 3, and 4. With horizons of fewer than four, the rat gets stuck on the other side of an aversive barrier that is closest to the central (i.e., target) location. This is because any move away from this location (with a small excursion) has a smaller expected free energy than staying put. However, if the policy search is sufficiently deep (i.e., a planning horizon greater than 3), the rat can effectively imagine what would happen if it pressed deeper into the future, enabling long-term gains to supervene over short-term losses. The result is that the rat infers and pursues the shortest path to the target location, even though it occasionally moves away from the center. The bottom three panels illustrate the behavior of an unsophisticated agent. This is as described in Kaplan and Friston (2018) but with constant preferences as in the upper panels and variable policy depths. In this example, a planning horizon of three is sufficient for the rat to find the shortest path. However, this depends on the rat choosing the left path at the first junction—which is not guaranteed, as four moves along the left or the right path lead to squares that are equally preferred. Consistent with this, for the policy depth of 4, the right path is chosen. After the first four moves, the rat decides to cross the aversive square to reach the target location. This four-step policy allows the rat to entertain the benefits of spending multiple steps at the target location, at the cost of a single foot shock. In these simulations, the rat knew the locations of the aversive outcomes and was motivated by minimizing Bayesian risk.
To aid with intuition as to the evaluation of alternative policies, we explicitly evaluated some of the policies that could be chosen with a planning horizon of two. Assuming the maze layout is known, there is little uncertainty to resolve, and preferences (i.e., costs) will be the primary determinant of behavior. Starting from the maze entrance (2,8), the options are shown in Table 2.
Here, we can see that when we consider only the first step, there is a cost of +2.6 associated with choosing up and a cost of +6.0 for choosing left. Remembering that cost is formulated as a negative log probability, this means up is about 30 times more likely than left, suggesting we do not need to evaluate policies starting with left (whose probability falls below the 1/16 threshold) any further. Inspection of the options for the second step of these policies, and comparison with those for policies starting with up, suggests that the cost incurred at the first step cannot be recovered at the second.
For all policies surviving the 1/16 threshold, we then have to consider the next step. For the example in Table 2, we can do this simply by taking the total cost of each action at the second step, using a softmax operator as in equation 4.1 to compute the relative probability of each action, and averaging the cost under these probabilities. Adding this average to the cost from the first step, and repeating for all policies not eliminated by the 1/16 threshold, we arrive at the (log) probability distribution over the first action—here, favoring up (see the worked sketch following Table 2).
Table 2:
Example Policy Evaluation. All costs are in nats; in the Square Color columns, − marks a safe square (contributing a cost of −1) and + an aversive square (contributing +1); Target Proximity is the distance-dependent cost.

| Step 1 Action | Square Color | Target Proximity | Step 2 Action | Square Color | Target Proximity |
| --- | --- | --- | --- | --- | --- |
| Stay at (2,8) | − | +4.2 | Up to (2,7) | − | +3.6 |
| | | | Down to (2,8) | − | +4.2 |
| | | | Left to (1,8) | + | +5.0 |
| | | | Right to (3,8) | + | +3.6 |
| | | | Stay at (2,8) | − | +4.2 |
| Up to (2,7) | − | +3.6 | Up to (2,6) | + | +2.2 |
| | | | Down to (2,8) | − | +4.2 |
| | | | Left to (1,7) | + | +4.4 |
| | | | Right to (3,7) | − | +2.8 |
| | | | Stay at (2,7) | − | +3.6 |
| Down to (2,8) | − | +4.2 | Up to (2,7) | − | +3.6 |
| | | | Down to (2,8) | − | +4.2 |
| | | | Left to (1,8) | + | +5.0 |
| | | | Right to (3,8) | + | +3.6 |
| | | | Stay at (2,8) | − | +4.2 |
| Left to (1,8) | + | +5.0 | Up to (1,7) | + | +4.4 |
| | | | Down to (1,8) | + | +5.0 |
| | | | Left to (1,8) | + | +5.0 |
| | | | Right to (2,8) | − | +4.2 |
| | | | Stay at (1,8) | + | +5.0 |
| Right to (3,8) | + | +2.8 | Up to (3,7) | − | +2.8 |
| | | | Down to (3,8) | + | +3.6 |
| | | | Left to (2,8) | − | +4.2 |
| | | | Right to (4,8) | − | +4.2 |
| | | | Stay at (3,8) | + | +2.8 |
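To make this procedure concrete, the following sketch reproduces the calculation using costs transcribed from Table 2 (the transcription, ordering, and variable names are ours; the softmax stands in for equation 4.1):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

actions = ['stay', 'up', 'down', 'left', 'right']

# first-step costs (nats): square color (-1 or +1) plus target proximity
step1 = np.array([3.2, 2.6, 3.2, 6.0, 3.8])

# eliminate first actions whose predictive probability falls below 1/16
p1 = softmax(-step1)
survivors = np.where(p1 >= 1.0 / 16.0)[0]       # 'left' is pruned here

# second-step costs for each surviving first action (rows ordered as `actions`)
step2 = {
    0: np.array([3.2, 2.6, 3.2, 6.0, 4.6]),    # after staying at (2,8)
    1: np.array([2.6, 3.2, 3.2, 5.4, 1.8]),    # after moving up to (2,7)
    2: np.array([3.2, 2.6, 3.2, 6.0, 4.6]),    # after moving down (back at (2,8))
    4: np.array([3.8, 1.8, 4.6, 3.2, 3.2]),    # after moving right to (3,8)
}

total = np.full(len(actions), np.inf)           # pruned policies retain infinite cost
for a in survivors:
    p2 = softmax(-step2[a])                     # relative probability of each follow-up
    total[a] = step1[a] + p2 @ step2[a]         # first-step cost + averaged second-step cost

print(dict(zip(actions, softmax(-total).round(3))))   # the distribution favors 'up'
```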
We have characterized the degree of sophistication in terms of planning as inference. In this setting, there was no ambiguity about outcomes that would license an explanation in terms of epistemic affordance or salience of the sort that motivated behavior in the T-maze examples of section 4.1. However, we can reintroduce epistemics by introducing uncertainty about the locations that deliver aversive outcomes. Exploration now becomes driven by curiosity about the parameters of the likelihood mapping (see equation 2.9). One can illustrate the minimization of expected free energy in terms of curiosity and novelty (Barto et al., 2013; Schmidhuber, 2006) by simulating a rat that has never been exposed to the maze previously. This was implemented by setting the prior (Dirichlet) parameters of the likelihood mapping between hidden states and somatosensory outcomes to a small value (i.e., 1/64). In terms of sufficient statistics, the expected free energy is now supplemented with a novelty term based on posterior expectations about the likelihood mapping (Friston, Lin, et al., 2017):
$$\begin{aligned} G(u_{\tau-1},o_{\tau-1}) &= \underbrace{s_\tau^u \cdot \left[\ln s_\tau^u + C + H\right]}_{\text{next action}} - \underbrace{o_\tau^u \cdot W s_\tau^u}_{\text{novelty}} + \underbrace{u_\tau^o \cdot G(u_\tau,o_\tau) \cdot o_\tau^u}_{\text{subsequent actions}} \\ W &= \tfrac{1}{2}\left(a^{\odot -1} - a_0^{\odot -1}\right) \end{aligned}$$
(4.6)
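The novelty term W is straightforward to compute from the Dirichlet parameters. A small sketch, assuming the likelihood concentration parameters are held in an (outcomes × states) array:

```python
import numpy as np

def novelty(a):
    """Novelty weights W = (a^(-1) - a0^(-1)) / 2 from Dirichlet parameters.

    a : (n_outcomes, n_states) concentration parameters of the likelihood.
    Large entries of W mark state-outcome pairs that have rarely been
    observed, so sampling them would most change beliefs about A.
    """
    a0 = a.sum(axis=0, keepdims=True)   # a0_j = sum_i a_ij
    return 0.5 * (1.0 / a - 1.0 / a0)

# a naive rat: flat concentration parameters of 1/64 make every location
# maximally (and equally) novel before any exploration has taken place
a = np.full((2, 64), 1.0 / 64.0)
W = novelty(a)
```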
In addition, we removed preferences for a particular location in order to study purely exploratory behavior. The results of the ensuing simulation are shown in Figure 6. In this example, the rat was allowed to make 64 consecutive moves, updating the Dirichlet parameters after every move. The top panels of Figure 6 show the resulting trajectory. The key point to observe here is that nearly every location has been explored. This rests on a trajectory in which previously visited locations lose their novelty or epistemic affordance, thereby promoting policies that take the rat into uncharted territory. This kind of exploratory behavior disappears if we replace expected free energy with Bayesian risk. In this setting, after the first move, the rat returns to its original location and just sits there for the remainder of the 64 moves (see the bottom panels of Figure 6).
Finally, to simulate curiosity under a task set, we reinstated prior preferences about location. In this simulation, the rat has to resolve the dual imperative to satisfy its curiosity, while at the same time realizing preferences for being at the center of the maze. In other words, it has to contextualize its goal-seeking behavior in relation to what it knows about how to realize those goals. Figure 7 shows the results of a simulation in which the rat was given five exposures to the maze, each comprising eight moves with a planning horizon of four. Within four exposures, it has learned what it needs to learn—about the aversive locations—to plan the shortest path to its target location and execute that path successfully (dotted black line in the left panel of Figure 7). In contrast to Figure 6, the exploration is now limited to preferred locations with precise likelihood mappings that are sufficient to encompass the shortest path (compare the left panels of Figures 6 and 7).
This completes our numerical analyses, in which we have looked at deep policy searches predicated on expected free energy, where expected free energy supplements Bayesian risk with epistemic affordance in terms of either salience (resolving uncertainty about hidden states) or novelty (resolving uncertainty about hidden model parameters).
## 5 Conclusion
This letter has described a recursive formulation of expected free energy that effectively instigates a deep tree search for planning as inference. The ensuing planning is sophisticated, in the sense that it entails beliefs about beliefs—in virtue of accumulating predictive posterior expectations of expected free energies down plausible paths. In other words, instead of just propagating beliefs about the consequences of successive actions, the scheme simulates belief updating in the future, based on preceding beliefs about the consequences of action. This scheme was illustrated using a simple T-maze problem and a navigation problem that required a deeper search.
Figure 6:
Exploration and novelty: This figure reports the results of a simulation in the same maze as in Figure 5. However, here we removed prior knowledge about which locations should be avoided and prior preferences for being near the center. This means that the only incentives for movement are purely epistemic in nature: curiosity, or the novelty of finding out “what would happen if I did that.” This produces a trajectory of moves that explore the locations, building up a picture of where aversive (a foot shock) stimuli are elicited and where they are not. The key aspect of this trajectory is that it avoids revisiting previously explored locations, to provide a nearly optimal coverage of the exploration space. The number of moves was 64 (with an updating of the posterior beliefs about likelihood parameters after each move). This means that in principle, the rat could have visited every location. Indeed, nearly every location has been visited, as shown on the upper right, in terms of the final likelihood of receiving an aversive stimulus at each location. The bottom panels show the same results, but after replacing expected free energy (that includes the novelty term) with Bayesian risk (that does not). Unsurprisingly, the Bayesian risk agent has no imperative to move, because it has no preferences about its location and, after the first move, realizes it is in a safe location. In other words, after the first move, it returns to the starting location and remains there for the remainder of available trials. As such, it learns nothing about the mapping between location and sensory outcomes.
Figure 7:
Exploration under a task set: This figure reproduces the same paradigm as in Figure 6 but reinstating prior preferences about being near the center of the maze (i.e., a task set). In this instance, the imperatives for action include both curiosity and pragmatic drives to realize prior preferences. The upper left panel shows a sequence of trajectories over five trials, where the rat was replaced at the initial location following eight moves. The upper right panel shows the final accumulated Dirichlet counts depicting the probability of an aversive outcome at each location. This accumulated evidence—or familiarity with the environment—enables the rat to plan the shortest path to its target after just four exposures. This path is shown as the black dashed line in the left panel. Compare the likelihood mapping with Figure 6. Here, the agent restricted its exploration to those parts of the maze that encompass the path to its goal. The lower panels show an even more restrictive exploration for an unsophisticated rat, which fails to find the shortest path along the white squares. This speaks to the enhanced explorative drive resulting from sophisticated inference.
In section 1, we noted that active inference may be difficult to scale, although remarkable progress has been made in this direction recently using amortized inference and sampling. For example, Ueltzhöffer (2018) parameterized both the generative model and approximate posterior with function approximators, using evolutionary schemes to minimize variational free energy when gradients were not available. Similarly, Millidge (2019) amortized perception and action by learning a parameterized approximation to expected free energy. Çatal et al. (2019) focused on learning prior preferences, using a learning-from-example approach. Tschantz et al. (2019) extended previous point-estimate models to include full distributions over parameters. This allowed them to apply active inference to continuous control problems (e.g., the mountain car problem, the inverted pendulum task, and a challenging hopper task) and demonstrate an order of magnitude increase in sampling efficiency relative to a strong model-free baseline (Lillicrap et al., 2015). (See Tschantz et al., 2019, for a full discussion and a useful deconstruction of active inference, in relation to things like model-based reinforcement learning; Schrittwieser et al., 2019.)
Note that the navigation example is an instance of planning to learn. As such, it solves the kinds of problems that reinforcement learning and its variants usually address. In other words, we were able to solve a learning problem from first (i.e., variational) principles, without recourse to backward induction or other (belief-free) schemes like Q-learning, SARSA, or successor representations (e.g., Dayan, 1993; Gershman, 2017; Momennejad et al., 2017; Russek, Momennejad, Botvinick, Gershman, & Daw, 2017). This is potentially important because predicating an optimization scheme on inference, as opposed to learning, endows it with a context sensitivity that eludes many learning algorithms (Daw, Gershman, Seymour, Dayan, & Dolan, 2011). In other words, because there are probabilistic representations of time-sensitive hidden states (and implicit uncertainty about those states), behavior is motivated by resolving uncertainty about the context in which an agent is operating. This may be the kind of (Bayesian) mechanics that licenses the notion of competent schemes that can both learn to plan and plan to learn.
The current formulation of active inference does not call on sampling or matrix inversions; the Bayes optimal belief-updating deals with uncertainty in a deterministic fashion. Conceptually, this reflects the difference between the stochastic aspects of random dynamical systems and the deterministic behavior of the accompanying density dynamics, which describe the probabilistic evolution of those systems (e.g., the Fokker-Planck equation). Because active inference works in belief spaces, that is, on statistical manifolds (Da Costa, Parr, Sengupta, et al., 2020), there is no need for sampling or random searches; the optimal paths are instead evaluated by propagating beliefs or probability distributions into the future to find the path of least variational free energy (Friston, 2013).
In the setting of deep policy searches, this approach has the practical advantage of terminating searches over particular paths when they become implausible. For example, in the navigation example, there were five actions and 64 hidden states, leading to a large number of potential paths ($1.0486 \times 10^{10}$ for a planning horizon of four and $1.0737 \times 10^{15}$ for a planning horizon of six). However, only a tiny fraction of these paths is actually evaluated—usually several hundred, which takes a few hundred milliseconds on a personal computer. Given reasonably precise beliefs about current states and state transitions, only a small number of paths are eligible for evaluation, which leads us to our final comment on the scalability of active inference.
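For what it is worth, the quoted counts can be recovered by assuming that each move branches over the five actions and 64 outcome locations:

```python
# one way to recover the quoted figures: each move branches over 5 actions
# and 64 outcome locations, giving (5 * 64)**h action-outcome paths
for h in (4, 6):
    print(h, f"{(5 * 64) ** h:.4e}")    # 1.0486e+10 and 1.0737e+15
```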
### 5.1 Limitations
In one sense, we have addressed scaling through the computational efficiency afforded by belief propagation using a sophisticated scheme. However, we have illustrated this scheme only on rather trivial problems. In principle, one can scale up the dimensionality of state spaces (and outcomes) with a degree of impunity. This follows from the fact that the number of plausible states (and transitions) can be substantially constrained, using the right kind of generative model—one that leverages factorizations and sparsity. For example, the factorization between hidden states and actions used above rests on the implicit assumption that every action is allowed from every state. This is a strong assumption but perfectly apt for many generative models.
One could also call on a related symmetry—namely, a hierarchical separation of temporal scales in deep models, where one Markov decision process is placed on top of another (Friston, Rosch, et al., 2017; George & Hawkins, 2009; Hesp, Smith, et al., 2019; Rikhye et al., 2019). In these models, transitions at the higher level usually unfold at a slower timescale than the level below. This engenders semi-Markovian dependencies that can generate complicated and structured behaviors. In this setting, one could consider hidden states at higher levels that generate the initial and final states of the level below. Policy optimization within each level, using a sophisticated scheme, could then realize the trajectory between the initial states (i.e., empirical priors over initial states) and final states (i.e., priors that determine the cost function and subsequent empirical priors over action).
Finally, it should be noted that in many applications, the states and actions of real-world processes are continuous, which presents a further scaling challenge for discrete state-space models. However, it is possible to combine sophisticated (discrete) schemes with continuous models, provided one uses the appropriate message passing between the continuous and discrete levels. For example, Friston, Parr, et al. (2017) used a Markov decision process to drive continuous eye movements. Indeed, it would be interesting to revisit simulations of saccadic searches using sophisticated inference, especially in the context of reading.
## Appendix: Expected Free Energy
This appendix considers two lemmas that underwrite expected free energy from two complementary perspectives. The first is based on a generative model that combines the principles of optimal Bayesian design (Lindley, 1956) and decision theory (Berger, 2011), while the second is based on a principled account of self-organization (Friston, 2019; Parr et al., 2020). Finally, we consider several corollaries that speak to the notions of active inference (Friston et al., 2015), empowerment (Klyubin, Polani, & Nehaniv, 2005), information bottlenecks (Tishby et al., 1999), self-organization (Friston, 2013), and self-evidencing (Hohwy, 2016). In what follows, $Q(oτ,sτ,π)$ denotes a predictive distribution over future variables and policies, conditioned on initial observations, while $P(oτ,sτ,π)$ denotes a generative model—that is, a marginal distribution over final states and policies. For simplicity, we omit model parameters and assume policies start from the current time point, allowing us to omit the variational free energy from the generalized free energy (since observational evidence is the same for all policies).
### A.1 Objective
Our objective is to establish a generalized free energy functional that can be minimized with respect to a posterior over policies, noting that this posterior is necessary to marginalize the joint posterior over hidden states and policies to infer hidden states. To comply with Bayesian decision theory, generalized free energy can be constructed to place an upper bound on Bayesian risk, which corresponds to the divergence between the predictive distribution over outcomes and prior preferences. In other words, Bayesian risk is the expected surprisal or negative log evidence. Confusingly, Bayesian risk and expected risk are two different quantities. The former is the expected surprisal, while the latter is a KL-divergence between predicted and preferred outcomes (or states). To comply with optimal Bayesian design, one can specify priors over policies that lead to states with a precise likelihood mapping to observable outcomes.
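To keep the two quantities apart, they can be written side by side (in the notation of this appendix):

$$\underbrace{-\,E_{Q(o_\tau)}\left[\ln P(o_\tau)\right]}_{\text{Bayesian risk (expected surprisal)}} \qquad \text{versus} \qquad \underbrace{D_{KL}\left[Q(o_\tau)\,\|\,P(o_\tau)\right]}_{\text{(expected) risk: divergence from preferences}}$$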
Lemma 1
(Bayes Optimality). Generalized free energy16 is an upper bound on risk, under a generative model whose priors over policies lead to states with precise likelihoods:
$$\begin{aligned} \underbrace{F[Q(s,\pi)] = E_Q\left[\ln Q(s_\tau,\pi) - \ln P(o_\tau,s_\tau)\right]}_{\text{generalized free energy}} &\geq \underbrace{D_{KL}\left[Q(o_\tau)\,\|\,P(o_\tau)\right]}_{\text{risk}} \\ \ln P(\pi) &= \underbrace{E_{Q(o_\tau,s_\tau|\pi)}\left[\ln P(o_\tau|s_\tau)\right]}_{\text{empirical prior}} \end{aligned}$$
(A.1)
Note that $P(π)$ is an empirical prior because it depends on the predictive density that depends on past observations. The priors over hidden states and outcomes can be regarded as a target distribution or prior preferences.
Proof.
By substituting the empirical prior, equation A.1, into the expression for free energy, we have (noting that policies and outcomes are conditionally independent, given hidden states):
$$\begin{aligned} F[Q(s,\pi)] &= \underbrace{E_Q\left[D_{KL}\left[Q(s_\tau|\pi)\,\|\,P(s_\tau)\right]\right]}_{\text{expected risk (states)}} + \underbrace{D_{KL}\left[Q(\pi)\,\|\,P(\pi)\right]}_{\text{complexity (policies)}} \\ &\geq \underbrace{E_Q\left[D_{KL}\left[Q(s_\tau|\pi)\,\|\,P(s_\tau)\right]\right]}_{\text{expected risk (states)}} \\ &= \underbrace{E_Q\left[D_{KL}\left[Q(o_\tau|\pi)\,\|\,P(o_\tau)\right]\right]}_{\text{expected risk (outcomes)}} + \underbrace{E_Q\left[D_{KL}\left[Q(s_\tau|o_\tau,\pi)\,\|\,P(s_\tau|o_\tau)\right]\right]}_{\text{expected evidence bound}} \\ &\geq \underbrace{E_Q\left[D_{KL}\left[Q(o_\tau|\pi)\,\|\,P(o_\tau)\right]\right]}_{\text{expected risk (outcomes)}} \\ &= \underbrace{D_{KL}\left[Q(o_\tau)\,\|\,P(o_\tau)\right]}_{\text{risk}} + \underbrace{E_Q\left[D_{KL}\left[Q(o_\tau|\pi)\,\|\,Q(o_\tau)\right]\right]}_{\text{mutual information}} \\ &\geq \underbrace{D_{KL}\left[Q(o_\tau)\,\|\,P(o_\tau)\right]}_{\text{risk}} \end{aligned}$$
(A.2)
These inequalities show that generalized free energy upper bounds the predictive divergence from the marginal likelihood over outcomes (i.e., model evidence). When this bound is minimized, (1) the complexity cost of policies is minimized, enforcing prior beliefs about policies; (2) the predictive posterior over hidden states becomes the posterior under the generative model; and (3) policies and outcomes become independent. This independence follows by construction of the free energy functional and means that final outcomes do not depend on initial conditions, implying a form of steady state (see below).
Corollary 1
(Expected Free Energy). The free energy can now be minimized with regard to the posterior over policies by expressing free energy in terms of expected free energy:
$$\begin{aligned} F[Q(s,\pi)] &= E_{Q(\pi)}\left[G(\pi) + \ln Q(\pi)\right] \\ Q(\pi) &= \arg\min_Q F[Q(s,\pi)] \;\Rightarrow\; -\ln Q(\pi) = G(\pi) \\ G(\pi) &= \underbrace{E_{Q(o_\tau,s_\tau|\pi)}\left[\ln Q(s_\tau|\pi) - \ln P(o_\tau,s_\tau)\right]}_{\text{expected free energy}} \\ &= \underbrace{D_{KL}\left[Q(s_\tau|\pi)\,\|\,P(s_\tau)\right]}_{\text{expected risk}} - \underbrace{E_{Q(o_\tau,s_\tau|\pi)}\left[\ln P(o_\tau|s_\tau)\right]}_{\text{expected ambiguity}} \end{aligned}$$
(A.3)
This renders free energy $F[Q(s,\pi)] = E_{Q(\pi)}[G(\pi)] - H[Q(\pi)]$ an expected energy minus the entropy of the posterior over policies, in the usual way. Finally, we can express the expected free energy of a policy as a bound on information gain and Bayesian risk:
$$\begin{aligned} G(\pi) &= \underbrace{E_Q\left[D_{KL}\left[Q(s_\tau|o_\tau,\pi)\,\|\,P(s_\tau|o_\tau)\right]\right]}_{\text{expected evidence bound}} - \underbrace{E_Q\left[\ln P(o_\tau)\right]}_{\text{expected log evidence}} - \underbrace{E_Q\left[D_{KL}\left[Q(s_\tau|o_\tau,\pi)\,\|\,Q(s_\tau|\pi)\right]\right]}_{\text{expected information gain}} \\ &\geq -\underbrace{E_Q\left[D_{KL}\left[Q(s_\tau|o_\tau,\pi)\,\|\,Q(s_\tau|\pi)\right]\right]}_{\text{expected information gain}} - \underbrace{E_Q\left[\ln P(o_\tau)\right]}_{\text{Bayesian risk}} \end{aligned}$$
(A.4)
This inequality shows that the free energy of a policy upper bounds a mixture of its expected information gain (Lindley, 1956) and Bayesian risk (Berger, 2011), where Bayesian risk is expected log evidence.
Remark.
Here, policies are treated as random variables, which means planning as inference (Attias, 2003; Botvinick & Toussaint, 2012) becomes belief updating under optimal Bayesian design priors (Lindley, 1956; MacKay, 1992). One might ask what licenses these priors above. Although they can be motivated in terms of information gain (see equation A.4), there is a more straightforward motivation that arises as a steady-state solution. We now turn to this complementary perspective that inherits from the Bayesian mechanics described in Friston (2019). Here, we are interested in situations when the predictive distribution attains its steady-state or target distribution.
It may seem odd to predicate optimal behavior on a steady-state distribution. However, the fact that action and its consequences can be expressed probabilistically implies the existence of a (steady-state) joint distribution that does not change over time. In what follows, we use the existence of this steady-state distribution to express the posterior over policies as a functional of the distribution over other variables, given a particular policy. This functional is expected free energy. This represents a deflationary approach to optimality, in the sense that optimal policies are just those that underwrite a steady state. The question now is, What kind of steady state are we interested in?
We will make a distinction between simple and general steady states in terms of the degeneracy (i.e., many-to-one mapping) of policies to any final state. Simple steady states are characterized by a unique path of least action from some initial observations to a final state. This would be appropriate for describing classical systems, such as a pendulum or planetary bodies. Conversely, a general steady state allows for multiple paths from initial observations to the final state, which means the entropy or uncertainty about which path was actually taken is high. We will be particularly interested in the autonomous behavior of systems whose steady state is maintained by multiple (degenerate) paths or policies. The ensuing distinction can be characterized by a scalar quantity corresponding to the relative entropy or precision of policies and outcomes, conditioned on final states (and initial observations). This scalar $β≥0$ is not a free parameter; it just characterizes the kind of steady state at hand. Note that in this setup, the notion of optimality is replaced by (or reduces to) the existence of a steady state, which may or may not be simple.
### A.2 Objective
We seek distributions over policies that afford steady-state solutions, that is, when the final distribution does not depend on initial observations. Such solutions ensure that on average, stochastic policies lead to a steady-state or target distribution specified by the generative model. These solutions exist in virtue of conditional independencies, where the hidden states provide a Markov blanket (cf. information bottleneck) that separates policies from outcomes. In other words, policies cause final states that cause outcomes. Put simply, policies influence outcomes, but only via hidden states. We will see below that there is a family of such solutions, where the Bayes optimality solution above is a special (canonical) case. In what follows, $Q(oτ,sτ,π):=P(oτ,sτ,π|o≤)$ can be read as a posterior distribution, given initial conditions.
Lemma 2
(Nonequilibrium Steady State). When the surprisal of policies corresponds to a Gibbs free energy $G(π,β)$, the final distribution attains steady state:
$$\begin{aligned} -\ln Q(\pi) &= G(\pi,\beta) \;\Rightarrow\; D_{KL}\left[Q\,\|\,P\right] = 0 \\ G(\pi,\beta) &= \underbrace{D_{KL}\left[Q(s_\tau|\pi)\,\|\,P(s_\tau)\right]}_{\text{expected risk}} - \underbrace{E_{Q(o_\tau,s_\tau|\pi)}\left[\beta \ln P(o_\tau|s_\tau)\right]}_{\text{expected ambiguity}} \\ \beta &= \frac{E_Q\left[\ln Q(\pi|s_\tau)\right]}{E_Q\left[\ln P(o_\tau|s_\tau)\right]} = \frac{H(\Pi|S_\tau)}{H(O_\tau|S_\tau)} \\ P &= P(o_\tau|s_\tau)\,Q(\pi|s_\tau)\,P(s_\tau) \\ Q &= P(o_\tau|s_\tau)\,Q(s_\tau|\pi)\,Q(\pi) \end{aligned}$$
(A.5)
Here, $β≥0$ characterizes a steady state in terms of the relative precision of policies and final outcomes, given final states. The generative model stipulates steady state, in the sense that distribution over final states (and outcomes) does not depend on initial observations. Here, the generative and predictive distributions simply express the conditional independence between policies and final outcomes, given final states. Note that when $β=1$, Gibbs free energy becomes expected free energy.
Proof.
Substituting equation A.5 into the KL divergence between the predictive and generative distributions gives
$$\begin{aligned} D_{KL}\left[Q\,\|\,P\right] &= E_Q\left[\ln \frac{Q(s_\tau|\pi)\,Q(\pi)}{Q(\pi|s_\tau)\,P(s_\tau)}\right] \\ &= E_Q\left[\ln Q(\pi) + \ln Q(s_\tau|\pi) - \ln P(s_\tau)\right] - E_Q\left[\ln Q(\pi|s_\tau)\right] \\ &= E_{Q(\pi)}\left[\ln Q(\pi) + E_{Q(o_\tau,s_\tau|\pi)}\left[\ln Q(s_\tau|\pi) - \ln P(s_\tau) - \beta \ln P(o_\tau|s_\tau)\right]\right] \\ &= E_{Q(\pi)}\left[\ln Q(\pi) + G(\pi,\beta)\right] \\ &\Rightarrow -\ln Q(\pi) = G(\pi,\beta) \;\Rightarrow\; D_{KL}\left[Q\,\|\,P\right] = 0 \;\Rightarrow\; \beta = \frac{H(\Pi|S_\tau)}{H(O_\tau|S_\tau)} \end{aligned}$$
(A.6)
This solution describes a particular kind of steady state, where policies lead to (steady) states with more or less precise likelihoods, depending on the value of $β$.
Remark.
At steady state, hidden states (and outcomes) “forget” about initial observations, placing constraints on the distribution over policies that can be expressed in terms of a Gibbs free energy. In the limiting case that $β=0$ (i.e., when $Q(π|s)$ tends to a delta function), we obtain a simple steady state where
$$G(\pi,0) = E_{Q(o_\tau,s_\tau|\pi)}\left[\ln \frac{Q(s_\tau|\pi)}{P(s_\tau)}\right] = D_{KL}\left[Q(s_\tau|\pi)\,\|\,P(s_\tau)\right]$$
(A.7)
This solution corresponds to standard stochastic control, variously known as KL control or risk-sensitive control (van den Broek et al., 2010). In other words, one picks policies that minimize the divergence between the predictive and target distributions. In different sorts of systems, the relationship between the entropies ($β$) may differ, and different values of this parameter may be appropriate for describing those systems. More generally (i.e., $β>0$), policies are more likely when they lead to states with a precise likelihood mapping. One perspective on the distinction between simple and general steady states is in terms of conditional uncertainty about policies. For example, simple (i.e., $β=0$) steady states preclude uncertainty about which policy led to a final state. This would be appropriate for describing classical systems (which follow a unique path of least action), where it would be possible to infer which policy had been pursued given the initial and final outcomes. Conversely, in general steady-state systems (e.g., mice and men), simply knowing that “you are here” does not tell me “how you got here,” even if I knew where you were this morning. Put another way, there are many paths or policies open to systems that attain a general steady state.
The treatment in Friston (2019) effectively turns the steady-state lemma on its head by assuming the steady state in equation A.5 is stipulatively true—and then characterizes the ensuing self-organization in terms of Bayes optimal policies. In active inference, we are interested in a certain class of systems that self-organize to general steady states: those that move through a large number of probabilistic configurations from their initial state to their final (steady) state. In terms of information geometry, this means that the information distance between any initial and the final (steady) state is large. In the current setting, we could replace information distance (Crooks, 2007; Kim, 2018) with information gain (Lindley, 1956; MacKay, 1992; Still & Precup, 2012). That is, we are interested in systems that attain steady state (i.e., target distributions) with policies associated with a large information gain.17 Although not pursued here, general steady states with precise likelihood mappings have precise Fisher information matrices and information geometries that distinguish general forms of self-organization from simple forms (Amari, 1998; Ay, 2015; Caticha, 2015; Ikeda, Tanaka, & Amari, 2004; Kim, 2018). This perspective can be unpacked in terms of information theory with the following corollaries, which speak to active inference, empowerment, information bottlenecks, self-organization, and self-evidencing.
Corollary 2
(Active Inference). If a system attains a general steady state, then by the Bayes optimality lemma, it will appear to behave in a Bayes optimal fashion in terms of both optimal Bayesian design (i.e., exploration) and Bayesian decision theory (i.e., exploitation). Crucially, the loss function defining Bayesian risk is the negative log evidence for the generative model entailed by an agent. In short, systems (i.e., agents) that attain general steady states will look as if they are responding to epistemic affordances (Parr & Friston, 2017).
Corollary 3
(Empowerment). At its simplest, empowerment (Klyubin et al., 2005) underwrites exploration (i.e., intrinsic motivation) by exploring as many states in the future as possible—and thereby keeping options open. This exploratory imperative is evinced clearly if we generalize free energy to include $β$:
$$F[Q(s,\pi)] = E_Q\left[\ln \frac{Q(s_\tau,\pi)}{P(o_\tau|s_\tau)^{\beta}\,P(s_\tau)}\right] = E_Q\left[\ln \frac{Q(s_\tau|\pi)\,Q(\pi)}{P(s_\tau)\,Q(\pi|s_\tau)}\right] = \underbrace{D_{KL}\left[Q(s_\tau|\pi)\,\|\,P(s_\tau)\right]}_{\text{risk}} - \underbrace{I(\Pi;S_\tau|o_{\leq})}_{\text{empowerment}}$$
(A.8)
This expresses the free energy of the predictive distribution over final states and policies in terms of risk and empowerment. Minimizing free energy with respect to policies therefore maximizes empowerment—namely, the mutual information between policies and their final states, given initial observations. The epistemic aspect of empowerment can be seen by expressing it in terms of expected ambiguity:
$$\underbrace{I(\Pi;S_\tau|o_{\leq})}_{\text{empowerment}} = \underbrace{H(\Pi|o_{\leq})}_{\text{entropy}} - \underbrace{E_Q\left[-\beta \ln P(o_\tau|s_\tau)\right]}_{\text{expected ambiguity}}$$
(A.9)
On this reading, empowerment corresponds to minimizing expected ambiguity while maximizing the entropy of policies—in other words, keeping (policy) options open by avoiding situations from which there is only one escape route. Note that empowerment is a special case of active inference when we can ignore risk (i.e., when all policies are equally risky).
Corollary 4
(Information Bottleneck). The information bottleneck method and related formulations (Bialek, Nemenman, & Tishby, 2001; Still, Sivak, Bell, & Crooks, 2012; Tishby et al., 1999; Tishby & Polani, 2010) can be seen as generalizations of rate distortion theory. According to this view, we can consider hidden states as an information bottleneck (cf. Markov blanket) that plays the role of a compressed representation of past outcomes that best predict future outcomes. Here, we can regard the policies as mapping between initial and final observations via hidden states. The information bottleneck method provides an objective function that can be minimized with respect to the distribution over policies. This (information bottleneck) objective function can be expressed in terms of the expected Gibbs energy as follows:
$$E_{P(o_{\leq}|\pi)}\left[G(\pi,\beta)\right] = E_{P(o_\tau,s_\tau,o_{\leq}|\pi)}\left[\ln \frac{P(s_\tau|o_{\leq},\pi)}{P(s_\tau)} + \beta \ln \frac{P(o_\tau)}{P(o_\tau|s_\tau)} - \beta \ln P(o_\tau)\right] = \underbrace{I(O_{\leq};S_\tau|\pi) - \beta\, I(S_\tau;O_\tau)}_{\text{information bottleneck}} - \underbrace{E_P\left[\beta \ln P(o_\tau)\right]}_{\text{Bayesian risk}}$$
(A.10)
This means the average Gibbs energy of a policy, over initial observations, combines the information bottleneck objective function and Bayesian risk. Minimizing the first term of the objective function (i.e., the mutual information between initial outcomes and hidden states) plays the role of compression, while maximizing the second (i.e., the mutual information between hidden states and final outcomes) ensures the information gain that characterizes general steady states. Indeed, when the relative precision $β=1$, it is straightforward to show that the information bottleneck is an upper bound on expected information gain:
$$\begin{aligned} \underbrace{I(O_{\leq};S_\tau|\pi) - I(S_\tau;O_\tau)}_{\text{information bottleneck}} &= E_{P(o_\tau,s_\tau,o_{\leq}|\pi)}\left[\ln Q(s_\tau|\pi) - \ln P(s_\tau|o_\tau)\right] \\ &= E_{P(o_\tau|\pi)}\Big[\underbrace{D_{KL}\left[Q(s_\tau|o_\tau,\pi)\,\|\,P(s_\tau|o_\tau)\right]}_{\text{expected evidence bound}} - \underbrace{D_{KL}\left[Q(s_\tau|o_\tau,\pi)\,\|\,Q(s_\tau|\pi)\right]}_{\text{expected information gain}}\Big] \\ &\geq -E_{P(o_\tau|\pi)}\Big[\underbrace{D_{KL}\left[Q(s_\tau|o_\tau,\pi)\,\|\,Q(s_\tau|\pi)\right]}_{\text{expected information gain}}\Big] = -I(S_\tau;O_\tau|O_{\leq},\pi) \end{aligned}$$
(A.11)
Because the information bottleneck objective function is an average over initial observations, it cannot be used directly for online (active) planning as inference; however, it can be used to learn fixed outcome-action policies (Hafez-Kolahi & Kasaei, 2019; Tishby & Zaslavsky, 2015). Note that the information bottleneck method is a special case of active inference, when we can ignore Bayesian risk (i.e., when all policies are equally risky).
Corollary 5
(Self-Organization). The average of expected free energy over policies can be decomposed into risk and conditional entropy:
$$E_{Q(\pi)}[G(\pi)] = \underbrace{E_Q[\ln Q(s_\tau|\pi)-\ln P(o_\tau,s_\tau)]}_{\text{Expected free energy}} = \underbrace{E_Q\big[D_{KL}[Q(s_\tau|\pi)\,\|\,P(s_\tau)]\big]}_{\text{Expected risk}} + \underbrace{E_Q[-\ln Q(o_\tau|s_\tau)]}_{\text{Expected ambiguity}} = \underbrace{E_Q\big[D_{KL}[Q(s_\tau|\pi)\,\|\,P(s_\tau)]\big]}_{\text{Expected risk}} + \underbrace{H(O_\tau|S_\tau,o_{\leq})}_{\text{Conditional entropy}} \geq 0$$
(A.12)
This decomposition means that if the expected free energy of policies is small on average, the predictive distribution over hidden states will converge to the prior or preferred distribution, while uncertainty about consequent outcomes will be small. In the limit, the predictive distribution over hidden states becomes the prior distribution, with no uncertainty about outcomes. This can be read as the limiting case of self-organization to prior beliefs.
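A small numerical check of this decomposition, with invented distributions, is sketched below; it simply verifies that risk plus ambiguity reproduces the directly evaluated expected free energy and is non-negative.

```python
# A toy check (assumed numbers, not from the paper) of the decomposition
# E_Q[ln Q(s|pi) - ln P(o, s)] = KL[Q(s|pi) || P(s)] + E_Q[-ln P(o|s)].
import numpy as np

q_s = np.array([0.7, 0.3])                 # predictive dist. over hidden states
p_s = np.array([0.5, 0.5])                 # prior / preferred dist. over states
p_o_given_s = np.array([[0.9, 0.1],        # likelihood P(o|s), rows = states
                        [0.2, 0.8]])

risk = np.sum(q_s * np.log(q_s / p_s))                        # KL divergence
ambiguity = np.sum(q_s * -(p_o_given_s *
                           np.log(p_o_given_s)).sum(axis=1))  # E_Q[H[P(o|s)]]

# Direct evaluation of the expected free energy for comparison
q_os = q_s[:, None] * p_o_given_s          # Q(o, s) = Q(s) P(o|s)
p_os = p_s[:, None] * p_o_given_s          # P(o, s) = P(s) P(o|s)
efe = np.sum(q_os * (np.log(q_s)[:, None] - np.log(p_os)))

print(risk + ambiguity, efe)   # both ~0.46; the sum is non-negative
```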
Corollary 6
(Self-Evidencing). The average of expected free energy over policies furnishes an upper bound on the (negative) expected log evidence of outcomes and the mutual information between these outcomes and their causes (i.e., hidden states):
$$E_{Q(\pi)}[G(\pi)] = \underbrace{E_Q[\ln Q(s_\tau|\pi)-\ln P(o_\tau,s_\tau)]}_{\text{Expected free energy}} = -\underbrace{E_{Q(o_\tau,\pi)}\big[D_{KL}[Q(s_\tau|o_\tau,\pi)\,\|\,Q(s_\tau|\pi)]\big]}_{\text{Expected information gain}} - \underbrace{E_{Q(o_\tau)}[\ln P(o_\tau)]}_{\text{Expected log evidence}} + \underbrace{E_{Q(o_\tau,\pi)}\big[D_{KL}[Q(s_\tau|o_\tau,\pi)\,\|\,P(s_\tau|o_\tau)]\big]}_{\text{Expected evidence bound}} \geq -\underbrace{I(S_\tau;O_\tau|\Pi,o_{\leq})}_{\text{Mutual information}} - \underbrace{E_{Q(o_\tau)}[\ln P(o_\tau)]}_{\text{Expected log evidence}}$$
(A.13)
This decomposition means that if the expected free energy of policies is, on average, small, the expected log evidence and the mutual information between predicted states and the outcomes they generate will be large. In the limit, expected log evidence is maximized, with no uncertainty about outcomes, given hidden states. This can be read as the limiting case of self-evidencing with unambiguous outcomes.
It can sometimes be difficult to see the relationships between the various conditional entropy and mutual information terms that constitute the free energy functional. Figure 8 tries to clarify these relationships using information diagrams. This schematic highlights the complementary decompositions of expected free energy in terms of risk and ambiguity—and information gain and entropy. These decompositions are summarized in terms of the imperative to minimize various segments of the information diagrams. Figure 8 then highlights the particular components that figure in special cases, such as optimal Bayesian decisions and design.
Figure 8:
Active inference and other schemes. This schematic summarizes the various imperatives implied by minimizing a free energy functional of posterior beliefs about policies, ensuing states, and subsequent outcomes. The information diagrams in the upper panels represent the entropy of the three variables, where intersections correspond to shared information or mutual information. A conditional entropy corresponds to an area that precludes the variable on which the entropy is conditioned. Note that there is no overlap between policies and outcomes that is outside hidden states. This is because hidden states form a Markov blanket (i.e., information bottleneck) between policies and outcomes. Two complementary formulations of minimizing expected free energy are shown on the right (in terms of risk and ambiguity) and left (in terms of information gain and entropy), respectively. Both will tend to increase the overlap or mutual information between hidden states and outputs while minimizing entropy or Bayesian risk. In these diagrams, we have assumed steady state, such that risk becomes the mutual information between policies and hidden states. For simplicity, we have omitted dependencies on initial observations. The various schemes or formulations considered in the text are shown at the bottom. These demonstrate that Bayesian decision theory (i.e., KL control and Bayesian risk) and optimal Bayesian design figure as complementary imperatives.
## Software Note
Although the generative model changes from application to application, the belief updates described in this letter are generic and can be implemented using standard routines (here, spm_MDP_VB_XX.m). These routines are available as Matlab code in the SPM academic software: http://www.fil.ion.ucl.ac.uk/spm/. The simulations in this letter can be reproduced (and customized) via a graphical user interface by typing >> DEM and selecting the appropriate (T-maze or Navigation) demo.
## Notes
1. Technically, a functional is defined as a function whose arguments (in this case, beliefs about hidden states) are themselves functions of other arguments (in this case, observed outcomes generated by hidden states).
2. Expected free energy can be read as risk plus ambiguity: risk is taken here to be the relative entropy (i.e., KL divergence) between predicted and preferred outcomes, while ambiguity is the conditional entropy (i.e., conditional uncertainty) about outcomes given their causes.
3. Exploration here has been associated with the resolution of ambiguity or uncertainty about hidden states, namely, the context in which the agent is operating (i.e., left or right arm payoff). More conventional formulations of exploration could remove the prior belief that the right and left arms have a complementary payoff structure, such that the agent has to learn the probabilities of winning and losing when selecting either arm. However, exactly the same principles apply: the right and left arms now acquire an epistemic affordance in virtue of resolving uncertainty about the contingencies that underlie payoffs as opposed to hidden states. We will see how this falls out of expected free energy minimization later.
4. Bayesian risk is taken to be negative expected utility, that is, expected loss under some predictive posterior beliefs (about hidden states).
5. Epistemic affordance is taken to be the information gain or relative entropy of predictive beliefs (about hidden states) before and after an action.
6. Note that both $F[Q(s,\pi)]$ and $F(\pi)$ depend on present and past observations. However, this dependence is typically left implicit, a convention we adhere to in this letter.
7. Generally, log evidence is accuracy minus complexity, where accuracy is the expected log likelihood and complexity is the KL divergence between posterior and prior beliefs.
8. Where $v$ can be thought of as transmembrane voltage or depolarization and $s$ corresponds to the average firing rate of a neuronal population (Da Costa, Parr, Sengupta, & Friston, 2020).
9. Surprisal is the self-information or negative log probability of outcomes (Tribus, 1961).
10. The empirical free energy is usually based on inferences at a higher level in a hierarchical generative model. For details on hierarchical generative models, see Friston, Rosch, Parr, Price, and Bowman (2017).
11. The appendix provides derivations of equation 2.7 based on the principles of optimal Bayesian design and an integral fluctuation theorem described in Friston (2019).
12. Because the expected evidence bound cannot be less than zero, the expected free energy of a policy is always greater than the (negative) expected extrinsic value (i.e., log evidence) plus the intrinsic value (i.e., information gain).
13. We have suppressed any tensor notation here by assuming there is only one outcome modality and one hidden factor. In practice, this assumption can be guaranteed by working with the Kronecker tensor product of hidden factors. This ensures exact Bayesian inference, because conditional dependencies among hidden factors are evaluated.
14. Note that in order to accumulate beliefs about the context from trial to trial, it is necessary to carry over posterior beliefs about context from one trial as prior beliefs for the next (in the form of Dirichlet concentration parameters). For consistency with earlier formulations of this paradigm, we carry over the beliefs about the initial state on the previous trial that are evaluated using a conventional backwards pass—namely, the normalized likelihood of any given initial state, given subsequent observations—and probability transitions based on realized action.
15. Because costs are specified in terms of self-information or surprisal, they have meaningful and quantitative units. For example, a differential cost of three natural units corresponds to a log odds ratio of 1:20 and reflects a strong preference for one state or outcome over another. This is the same interpretation of Bayes factors in statistics (Kass & Raftery, 1995). Here, the difference between reward and punishment was four natural units.
16. Equation A.1 follows from 3.1 when treating $F(\pi)$ and $E(\pi)$ as constants, that is, ignoring past observations and empirical priors over policies.
17. Note that a divergence such as information gain is not a measure of distance. The information distance (a.k.a. information length) can be regarded as the accumulated divergences along a path on a statistical manifold from the initial location to the final location.
## Acknowledgments
K.J.F. was funded by the Wellcome Trust (088130/Z/09/Z). L.D. is supported by the Fonds National de la Recherche, Luxembourg (13568875). C.H. was funded by a Research Talent Grant (406.18.535) of the Netherlands Organisation for Scientific Research. We have no disclosures or conflicts of interest.
## References
Amari, S. (1998). Natural gradient works efficiently in learning. Neural Computation, 10(2), 251–276. doi:10.1162/089976698300017746
Åström, K. J. (1965). Optimal control of Markov processes with incomplete state information. Journal of Mathematical Analysis and Applications, 10(1), 174–205.
Attias, H. (2003). Planning by probabilistic inference. In Proceedings of the 9th Int. Workshop on Artificial Intelligence and Statistics. New York: ACM.
Ay, N. (2015). Information geometry on complexity and stochastic interaction. Entropy, 17(4), 2432.
Barlow, H. (1961). Possible principles underlying the transformations of sensory messages. In W. Rosenblith (Ed.), Sensory communication (pp. 217–234). Cambridge, MA: MIT Press.
Barlow, H. B. (1974). Inductive inference, coding, perception, and language. Perception, 3, 123–134.
Barto, A., Mirolli, M., & Baldassarre, G. (2013). Novelty or surprise? Frontiers in Psychology, 4. doi:10.3389/fpsyg.2013.00907
Beal, M. J. (2003). Variational algorithms for approximate Bayesian inference. PhD diss., University College London.
Bellman, R. (1952). On the theory of dynamic programming. 38, 716–719.
Berger, J. O. (2011). Statistical decision theory and Bayesian analysis. New York: Springer.
Bialek, W., Nemenman, I., & Tishby, N. (2001). Predictability, complexity, and learning. Neural Comput., 13(11), 2409–2463.
Botvinick, M., & Toussaint, M. (2012). Planning as inference. Trends Cogn. Sci., 16(10), 485–488.
Camerer, C. F., Ho, T.-H., & Chong, J.-K. (2004). A cognitive hierarchy model of games. Quarterly Journal of Economics, 119(3), 861–898. doi:10.1162/0033553041502225
Catal, O., Nauta, J., Verbelen, T., Simoens, P., & Dhoedt, B. (2019). Bayesian policy selection using active inference. https://arxiv.org/pdf/1904.08149.pdf
Çatal, O., Verbelen, T., Nauta, J., Boom, C. D., & Dhoedt, B. (2020). Learning perception and planning with deep active inference. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway, NJ: IEEE.
Çatal, O., Wauthier, S., Verbelen, T., De Boom, C., & Dhoedt, B. (2020). Deep active inference for autonomous robot navigation. arXiv:2003.03220.
Caticha, A. (2015). The basics of information geometry. In AIP Conference Proceedings (pp. 15–26). College Park, MD: American Institute of Physics. doi:10.1063/1.4905960
Cohen, J. D., McClure, S. M., & Yu, A. J. (2007). Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Philos. Trans. R. Soc. Lond. B Biol. Sci., 362(1481), 933–942.
Costa-Gomes, M., Crawford, V. P., & Broseta, B. (2001). Cognition and behavior in normal-form games: An experimental study. Econometrica, 69(5), 1193–1235. doi:10.1111/1468-0262.00239
Crooks, G. E. (2007). Measuring thermodynamic length. Phys. Rev. Lett., 99(10), 100602. doi:10.1103/PhysRevLett.99.100602
Da Costa, L., Parr, T., Sajid, N., Veselic, S., Neacsu, V., & Friston, K. (2020). Active inference on discrete state-spaces: A synthesis. arXiv:2001.07203.
Da Costa, L., Parr, T., Sengupta, B., & Friston, K. (2020). arXiv:2001.08028.
Da Costa, L., Sajid, N., Parr, T., Friston, K. J., & Smith, R. (2020). The relationship between dynamic programming and active inference: The discrete, finite-horizon case. arXiv:2009.08111.
Dauwels, J. (2007). On variational message passing on factor graphs. In Proceedings of the 2007 IEEE International Symposium on Information Theory. Piscataway, NJ: IEEE.
Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., & Dolan, R. J. (2011). Model-based influences on humans' choices and striatal prediction errors. Neuron, 69(6), 1204–1215.
Dayan, P. (1993). Improving generalization for temporal difference learning: The successor representation. Neural Comput., 5(4), 613–624. doi:10.1162/neco.1993.5.4.613
Dayan, P., Hinton, G. E., Neal, R. M., & Zemel, R. S. (1995). The Helmholtz machine. Neural Comput., 7(5), 889–904.
Devaine, M., Hollard, G., & Daunizeau, J. (2014). Theory of mind: Did evolution fool us? PLOS One, 9(2), e87619. doi:10.1371/journal.pone.0087619
Doya, K. (1999). What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Netw., 12(7–8), 961–974. doi:10.1016/s0893-6080(99)00046-5
Duff, M. O. (2002). Optimal learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD diss., University of Massachusetts.
Fleming, W. H., & Sheu, S. J. (2002). Risk-sensitive control and an optimal investment model II. Ann. Appl. Probab., 12(2), 730–767. doi:10.1214/aoap/1026915623
Friston, K. (2013). Life as we know it. J. R. Soc. Interface, 10(86), 20130475.
Friston, K. (2019). A free energy principle for a particular physics. arXiv:1906.10184.
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., O'Doherty, J., & Pezzulo, G. (2016). Active inference and learning. Neurosci. Biobehav. Rev., 68, 862–879. doi:10.1016/j.neubiorev.2016.06.022
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Comput., 29(1), 1–49. doi:10.1162/NECO_a_00912
Friston, K. J., Lin, M., Frith, C. D., Pezzulo, G., Hobson, J. A., & Ondobaka, S. (2017). Active inference, curiosity and insight. Neural Comput., 29(10), 2633–2683. doi:10.1162/neco_a_00999
Friston, K. J., Parr, T., & de Vries, B. (2017). The graphical brain: Belief propagation and active inference. Netw. Neurosci., 1(4), 381–414. doi:10.1162/NETN_a_00018
Friston, K., Rigoli, F., Ognibene, D., Mathys, C., Fitzgerald, T., & Pezzulo, G. (2015). Active inference and epistemic value. Cogn. Neurosci., 6(4), 187–214. doi:10.1080/17588928.2015.1020053
Friston, K. J., Rosch, R., Parr, T., Price, C., & Bowman, H. (2017). Deep temporal models and active inference. Neurosci. Biobehav. Rev., 77, 388–402. doi:10.1016/j.neubiorev.2017.04.009
George, D., & Hawkins, J. (2009). Towards a mathematical theory of cortical micro-circuits. PLOS Comput. Biol., 5(10), e1000532. doi:10.1371/journal.pcbi.1000532
Gershman, S. J. (2017). Predicting the past, remembering the future. Curr. Opin. Behav. Sci., 17, 7–13. doi:10.1016/j.cobeha.2017.05.025
Ghahramani, Z., & Jordan, M. I. (1997). Factorial hidden Markov models. Machine Learning, 29(2–3), 245–273. doi:10.1023/a:1007425814087
Ghavamzadeh, M., Mannor, S., Pineau, J., & Tamar, A. (2016). Bayesian reinforcement learning: A survey. arXiv:1609.04436.
Hafez-Kolahi, H., & Kasaei, S. (2019). Information bottleneck and its applications in deep learning. CoRR, abs/1904.03743.
Hesp, C., Ramstead, M., Constant, A., Badcock, P., Kirchhoff, M., & Friston, K. (2019). A multi-scale view of the emergent complexity of life: A free-energy proposal. In G. Georgiev, J. Smart, C. Flores Martinez, & M. Price (Eds.), Evolution, development and complexity (pp. 195–227). Cham: Springer.
Hesp, C., Smith, R., Parr, T., Allen, M., Friston, K., & Ramstead, M. (2019). Deeply felt affect: The emergence of valence in deep active inference. PsyArXiv. doi:10.31234/osf.io/62pfd
Hohwy, J. (2016). The self-evidencing brain. Noûs, 50(2), 259–285. doi:10.1111/nous.12062
Howard, R. (1966). Information value theory. IEEE Transactions on Systems Science and Cybernetics, SSC-2(1), 22–26.
Ikeda, S., Tanaka, T., & Amari, S.-I. (2004). Stochastic reasoning, free energy, and information geometry. Neural Computation, 16, 1779–1810. doi:10.1162/0899766041336477
Itti, L., & Baldi, P. (2009). Bayesian surprise attracts human attention. Vision Res., 49(10), 1295–1306.
Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review Series II, 106(4), 620–630.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kaplan, R., & Friston, K. J. (2018). Planning and navigation as active inference. Biol. Cybern., 112(4), 323–343. doi:10.1007/s00422-018-0753-2
Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430), 773–795. doi:10.1080/01621459.1995.10476572
Kauder, E. (1953). Genesis of the marginal utility theory: From Aristotle to the end of the eighteenth century. Economic Journal, 63(251), 638–650.
Keramati, M., Smittenaar, P., Dolan, R. J., & Dayan, P. (2016). Adaptive integration of habits into depth-limited planning defines a habitual-goal-directed spectrum. Proceedings of the National Academy of Sciences, 113(45), 12868–12873. doi:10.1073/pnas.1609094113
Kim, E.-j. (2018). Investigating information geometry in classical and quantum systems through information length. Entropy, 20(8), 574. doi:10.3390/e20080574
Klyubin, A. S., Polani, D., & Nehaniv, C. L. (2005). Empowerment: A universal agent-centric measure of control. In Proceedings of the IEEE Congress on Evolutionary Computation (1:128–135). Piscataway, NJ: IEEE.
Lee, J. J., & Keramati, M. (2017). Flexibility to contingency changes distinguishes habitual and goal-directed strategies in humans. PLOS Comput. Biol., 13(9), e1005753. doi:10.1371/journal.pcbi.1005753
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., … Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv:1509.02971.
Lindley, D. V. (1956). On a measure of the information provided by an experiment. Ann. Math. Statist., 27(4), 986–1005. doi:10.1214/aoms/1177728069
Linsker, R. (1990). Perceptual neural organization: Some approaches based on network models and information theory. Annu. Rev. Neurosci., 13, 257–281.
MacKay, D. J. C. (1992). Information-based objective functions for active data selection. Neural Computation, 4(4), 590–604. doi:10.1162/neco.1992.4.4.590
MacKay, D. J. C. (2003). Information theory, inference and learning algorithms. Cambridge: Cambridge University Press.
Millidge, B. (2019). Deep active inference as variational policy gradients. arXiv:1907.03876.
Momennejad, I., Russek, E. M., Cheong, J. H., Botvinick, M. M., Daw, N. D., & Gershman, S. J. (2017). The successor representation in human reinforcement learning. Nature Human Behavior, 1(9), 680–692. doi:10.1038/s41562-017-0180-8
Optican, L., & Richmond, B. J. (1987). Temporal encoding of two-dimensional patterns by single units in primate inferior cortex. II: Information theoretic analysis. J. Neurophysiol., 57, 132–146.
Ortega, P. A., & Braun, D. A. (2010). A minimum relative entropy principle for learning and acting. Journal of Artificial Intelligence Research, 38, 475–511.
Osband, I., Van Roy, B., Russo, D. J., & Wen, Z. (2019). Deep exploration via randomized value functions. Journal of Machine Learning Research, 20(124), 1–62.
Oudeyer, P.-Y., & Kaplan, F. (2007). What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1, 6.
Parr, T., Da Costa, L., & Friston, K. (2020). Markov blankets, information geometry and stochastic thermodynamics. Philosophical Transactions of the Royal Society A, 378(2164).
Parr, T., & Friston, K. J. (2017). Working memory, attention, and salience in active inference. Sci. Rep., 7(1), 14678. doi:10.1038/s41598-017-15249-0
Parr, T., & Friston, K. J. (2019a). Attention or salience? Current Opinion in Psychology, 29, 1–5. doi:10.1016/j.copsyc.2018.10.006
Parr, T., & Friston, K. J. (2019b). Generalized free energy and active inference. Biol. Cybern., 113(5–6), 495–513. doi:10.1007/s00422-019-00805-w
Parr, T., Markovic, D., Kiebel, S. J., & Friston, K. J. (2019). Neuronal message passing using mean-field, Bethe, and marginal approximations. Sci. Rep., 9(1), 1889. doi:10.1038/s41598-018-38246-3
Ramnani, N. (2014). Automatic and controlled processing in the corticocerebellar system. Prog. Brain Res., 210, 255–285. doi:10.1016/b978-0-444-63356-9.00010-8
Rikhye, R. V., Guntupalli, J. S., Gothoskar, N., Lázaro-Gredilla, M., & George, D. (2019). Memorize-generalize: An online algorithm for learning higher-order sequential structure with cloned hidden Markov models. bioRxiv:764456. doi:10.1101/764456
Ross, S., Chaib-draa, B., & Pineau, J. (2008). In J. C. Platt, D. Koller, Y. Singer, & S. T. Roweis (Eds.), Advances in neural information processing systems, 20. Cambridge, MA: MIT Press.
Russek, E. M., Momennejad, I., Botvinick, M. M., Gershman, S. J., & Daw, N. D. (2017). Predictive representations can link model-based reinforcement learning to model-free mechanisms. PLOS Comput. Biol., 13(9), e1005768. doi:10.1371/journal.pcbi.1005768
Ryan, R., & Deci, E. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum.
Schmidhuber, J. (1991). Curious model-building control systems. In Proceedings of the 1991 IEEE International Joint Conference on Neural Networks (pp. 1458–1463). Piscataway, NJ: IEEE. doi:10.1109/IJCNN.1991.170605
Schmidhuber, J. (2006). Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 18(2), 173–187. doi:10.1080/09540090600768658
Schmidhuber, J. (2010). Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3), 230–247. doi:10.1109/tamd.2010.2056368
Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., … Silver, D. (2019). Mastering Atari, go, chess and shogi by planning with a learned model. arXiv:1911.08265.
Schwartenbeck, P., Fitzgerald, T., Dolan, R. J., & Friston, K. (2013). Exploration, novelty, surprise, and free energy minimization. Front. Psychol., 4, 710. doi:10.3389/fpsyg.2013.00710
Schwartenbeck, P., Passecker, J., Hauser, T. U., FitzGerald, T. H. B., Kronbichler, M., & Friston, K. J. (2019). Computational mechanisms of curiosity and goal-directed exploration. eLife, 8, e41703. doi:10.7554/eLife.41703
Sengupta, B., & Friston, K. (2018). How robust are deep neural networks? arXiv:1804.11313.
Solway, A., & Botvinick, M. M. (2015). Evidence integration in model-based tree search. Proceedings of the National Academy of Sciences, 112(37), 11708–11713. doi:10.1073/pnas.1505483112
Still, S., & Precup, D. (2012). An information-theoretic approach to curiosity-driven reinforcement learning. Theory Biosci., 131(3), 139–148. doi:10.1007/s12064-011-0142-z
Still, S., Sivak, D. A., Bell, A. J., & Crooks, G. E. (2012). Thermodynamics of prediction. Phys. Rev. Lett., 109(12), 120604. doi:10.1103/PhysRevLett.109.120604
Suh, S., Chae, D. H., Kang, H. G., & Choi, S. (2016). Echo-state conditional variational autoencoder for anomaly detection. In Proceedings of the 2016 International Joint Conference on Neural Networks (pp. 1015–1022). Berlin: Springer.
Sun, Y., Gomez, F., & Schmidhuber, J. (2011). Planning to be surprised: Optimal Bayesian exploration in dynamic environments. In J. Schmidhuber, K. R. Thórisson, & M. Looks (Eds.), Proceedings of the Artificial General Intelligence 4th International Conference (pp. 41–51). Berlin: Springer.
Sutton, R. S., & Barto, A. G. (1981). Toward a modern theory of adaptive networks: Expectation and prediction. Psychol. Rev., 88(2), 135–170.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.
Thompson, W. R. (1933). On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3–4), 285–294. doi:10.1093/biomet/25.3-4.285
Tishby, N., Pereira, F. C., & Bialek, W. (1999). The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control, and Computing (pp. 368–377). Champaign: University of Illinois.
Tishby, N., & Polani, D. (2010). Information theory of decisions and actions. In V. Cutsuridis, A. Hussain, & J. Taylor (Eds.), Perception-reason-action cycle: Models, algorithms and systems. Berlin: Springer.
Tishby, N., & Zaslavsky, N. (2015). Deep learning and the information bottleneck principle. arXiv:1503.02406.
Todorov, E. (2008). General duality between optimal control and estimation. In Proceedings of the IEEE Conference on Decision and Control. Piscataway, NJ: IEEE.
Toussaint, M., & Storkey, A. (2006). Probabilistic inference for solving discrete and continuous state Markov decision processes. In Proceedings of the 23rd Int. Conf. on Machine Learning. New York: ACM.
Tribus, M. (1961). Thermodynamics and thermostatics: An introduction to energy, information and states of matter, with engineering applications. New York: Van Nostrand.
Tschantz, A., Baltieri, M., Seth, A. K., & Buckley, C. L. (2019). Scaling active inference. arXiv:1911.10601.
Ueltzhöffer, K. (2018). Deep active inference. Biol. Cybern., 112(6), 547–573. doi:10.1007/s00422-018-0785-7
van den Broek, J. L., Wiegerinck, W. A. J. J., & Kappen, H. J. (2010). Risk-sensitive path integral control. Uncertainty in Artificial Intelligence, 6, 1–8.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton: Princeton University Press.
Winn, J., & Bishop, C. M. (2005). Variational message passing. Journal of Machine Learning Research, 6, 661–694.
Yedidia, J. S., Freeman, W. T., & Weiss, Y. (2005). Constructing free energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51, 2282–2312.
https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-10-infinite-sequences-and-series-section-10-5-absolute-convergence-the-ratio-and-root-tests-exercises-10-5-page-597/8 | ## Thomas' Calculus 13th Edition
Published by Pearson
# Chapter 10: Infinite Sequences and Series - Section 10.5 - Absolute Convergence; The Ratio and Root Tests - Exercises 10.5 - Page 597: 8
Diverges
#### Work Step by Step
Consider $a_n=\dfrac{n5^n}{ (2n+3) \ln (n+1)}$. To test the given series we apply the Ratio Test, which states that when the limit $L \lt 1$ the series converges, and when $L \gt 1$ the series diverges.
$L=\lim\limits_{n \to \infty} \left|\dfrac{a_{n+1}}{a_{n}}\right| =\lim\limits_{n \to \infty}\left|\dfrac{\dfrac{(n+1)5^{n+1}}{ (2(n+1)+3) \ln (n+2)}}{\dfrac{n5^n}{ (2n+3) \ln (n+1)}}\right| =\lim\limits_{n \to \infty}\left|\dfrac{5(n+1)(2n+3) \ln (n+1)}{n (2n+5) \ln (n+2)}\right|$
$=\left[\lim\limits_{n \to \infty}\left|\dfrac{5(n+1)(2n+3)}{n (2n+5)}\right|\right]\left[\lim\limits_{n \to \infty}\left|\dfrac{ \ln (n+1)}{ \ln (n+2) }\right|\right] = [5] \times [1] = 5 \gt 1$
Thus, the series diverges by the Ratio Test.
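The limit can also be checked symbolically; the following SymPy snippet is an optional sanity check, not part of the textbook solution.

```python
# A quick symbolic check of the Ratio Test limit (illustrative, using SymPy).
import sympy as sp

n = sp.symbols('n', positive=True)
a = n * 5**n / ((2*n + 3) * sp.log(n + 1))
L = sp.limit(a.subs(n, n + 1) / a, n, sp.oo)
print(L)  # 5, so L > 1 and the series diverges
```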
https://artofproblemsolving.com/wiki/index.php?title=1964_AHSME_Problems/Problem_33&oldid=107913 | # 1964 AHSME Problems/Problem 33
## Problem
$P$ is a point interior to rectangle $ABCD$ and such that $PA=3$ inches, $PD=4$ inches, and $PC=5$ inches. Then $PB$, in inches, equals:
$\textbf{(A) }2\sqrt{3}\qquad\textbf{(B) }3\sqrt{2}\qquad\textbf{(C) }3\sqrt{3}\qquad\textbf{(D) }4\sqrt{2}\qquad \textbf{(E) }2$
$[asy] draw((0,0)--(6.5,0)--(6.5,4.5)--(0,4.5)--cycle); draw((2.5,1.5)--(0,0)); draw((2.5,1.5)--(0,4.5)); draw((2.5,1.5)--(6.5,4.5)); draw((2.5,1.5)--(6.5,0),linetype("8 8")); label("A",(0,0),dir(-135)); label("B",(6.5,0),dir(-45)); label("C",(6.5,4.5),dir(45)); label("D",(0,4.5),dir(135)); label("P",(2.5,1.5),dir(-90)); label("3",(1.25,0.75),dir(120)); label("4",(1.25,3),dir(35)); label("5",(4.5,3),dir(120)); [/asy]$
## Solution
From point $P$, create perpendiculars to all four sides, labeling them $a, b, c, d$ starting from going north and continuing clockwise. Label the length $PB$ as $x$.
We have $a^2 + b^2 = 5^2$ and $c^2 + d^2 = 3^2$, leading to $a^2 + b^2 + c^2 + d^2 = 34$.
We also have $a^2 + d^2 = 4^2$ and $b^2 + c^2 = x^2$, leading to $a^2 + b^2 + c^2 + d^2 = 16 + x^2$.
Thus, $34 = 16 + x^2$, or $x = \sqrt{18} = 3\sqrt{2}$, which is option $\boxed{\textbf{(B)}}$.
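As an optional numerical check (the coordinates below are chosen here, not given in the problem), any rectangle and interior point consistent with $PA=3$, $PD=4$, $PC=5$ gives $PB=3\sqrt{2}$:

```python
# A numeric sanity check with one assumed configuration consistent with
# PA = 3, PD = 4, PC = 5.
import math

x, y = 2.0, math.sqrt(5)            # choose P = (x, y) with x^2 + y^2 = 9 (PA = 3)
h = y + math.sqrt(16 - x**2)        # pick height so that PD = 4
w = x + math.sqrt(25 - (h - y)**2)  # pick width so that PC = 5

A, B, C, D, P = (0, 0), (w, 0), (w, h), (0, h), (x, y)
dist = lambda U, V: math.hypot(U[0] - V[0], U[1] - V[1])

print(dist(P, A), dist(P, D), dist(P, C))   # 3.0, 4.0, 5.0
print(dist(P, B), 3 * math.sqrt(2))         # both 4.2426..., i.e. PB = 3*sqrt(2)
```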
http://forums.fast.ai/t/deeplearning-lecnotes3/7866 | DeepLearning-LecNotes3
(Tim Lee) #1
All,
Apologize for the delay, here’s lecture 3’s notes.
• Tim
Unofficial Deep Learning Lecture 3 Notes
Where do we go from here?
1. CNN Image intro <- we are here
2. Structured neural net intro
3. Language RNN intro
4. Collaborative filtering intro
5. Collaborative filtering in-depth
6. Structured neural net in-depth
7. CNN image in depth
8. Language RNN in depth
Talking about the Kaggle command line
The unofficial Kaggle CLI tool keeps changing though. So, be careful with different versions.
Note: that the specific name of a Kaggle challenge is listed as follows:
Specific name: planet-understanding-the-amazon-from-space
%reload_ext autoreload
%matplotlib inline
import sys
sys.path.append('/home/paperspace/repos/fastai')
import torch
import fastai
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
1. Fastai Library Comparison: Short explanation on a quick and dirty Cats vs. Dogs.
Need the following folders:
• Train - with a folder for different
• Valid
• Test
from fastai.conv_learner import *
PATH = 'data/dogscats/'
Set image size and batch size
sz = 224; bs = 64
Training a model -> straight up
Note: this command will download the ResNet model, which may take a few minutes. We use ResNet50 to compare with Keras; training afterwards takes about 10 minutes.
By default all the layers are frozen except the last few. Note that we need to pass the test_name parameter to ImageClassifierData for future predictions.
tfms = tfms_from_model(resnet50, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms= tfms, bs=bs, test_name='test1')
learn = ConvLearner.pretrained(resnet50, data )
% time learn.fit( 1e-2, 3, cycle_len=1)
# deeper model like resnet 50
A Jupyter Widget
[ 0. 0.04488 0.02685 0.99072]
[ 1. 0.03443 0.02572 0.99023]
[ 2. 0.04223 0.02662 0.99121]
CPU times: user 4min 16s, sys: 1min 43s, total: 5min 59s
Wall time: 6min 14s
Note: ‘precompute = True’ caches some of the intermediate steps which we do not need to recalculate every time. It uses cached non-augmented activations, which is why data augmentation doesn’t work with precompute. Having precompute speeds up our work; Jeremy explains this during lecture 3.
Unfreeze the layers, apply a learning rate
bn_freeze - if you are using a deep network on a dataset very similar to your target (ours is dogs and cats), it causes the batch normalization statistics not to be updated.
Note: If Images are of size between 200-500px and arch > 34 e.g. resnet50 then add bn_freeze(True)
learn.unfreeze()
learn.bn_freeze(True)
%time learn.fit([1e-5, 1e-4,1e-2], 1, cycle_len=1)
A Jupyter Widget
[ 0. 0.02088 0.02454 0.99072]
CPU times: user 4min 1s, sys: 1min 5s, total: 5min 7s
Wall time: 5min 12s
Get the Predictions and score the model
%time log_preds, y = learn.TTA()
metrics.log_loss(y, np.exp(log_preds)), accuracy(log_preds,y)
CPU times: user 31.9 s, sys: 14 s, total: 45.9 s
Wall time: 56.2 s
(0.016504555816930676, 0.995)
2. Fastai Library Comparison: Keras Sample
Example of running on TensorFlow back-end
To install:
pip install tensorflow-gpu keras
%reload_ext autoreload
%matplotlib inline
import numpy as np
from keras.preprocessing import image
PATH = "data/dogscats/"
sz=224
batch_size=64
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.layers import Dropout, Flatten, Dense
from keras.applications import ResNet50
from keras.models import Model, Sequential
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
Set paths
train_data_dir = f'{PATH}train'
validation_data_dir = f'{PATH}valid'
batch_size = 64
1. Define a data generator(s)
• data augmentation do you want to do
• what kind of normalization do we want to do
• create images from directly looking at it
• create a generator - then generate images from a directory
• tell it what image size, whats the mini-batch size you want
• do the same thing for the validation_generator, do it without shuffling, because then you can’t track how well you are doing
train_datagen = ImageDataGenerator(rescale=1. / 255,
shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(train_data_dir,
target_size=(sz, sz),
batch_size=batch_size, class_mode='binary')
# validation set
validation_generator = test_datagen.flow_from_directory(validation_data_dir,
shuffle=False,
target_size=(sz, sz),
batch_size=batch_size, class_mode='binary')
Note: class_mode=‘categorical’ for multi-class classification
2. Make the Keras model
• ResNet50 was used because Keras didn’t have ResNet34. This is for comparing apples to apples.
• Make base model.
• Make the layers manually which ones you want.
base_model = ResNet50(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
3. Loop through and freeze the layers you want
• You need to compile the model.
• Pass the type of optimizer, loss, and metrics.
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers: layer.trainable = False
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
4. Fit
• Keras expects the size per epoch
• How many workers
• Batchsize
%%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=3, workers=4,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
6. We decide to retrain some of the layers,
• loop through and manually set layers to true or false.
split_at = 140
for layer in model.layers[:split_at]: layer.trainable = False
for layer in model.layers[split_at:]: layer.trainable = True
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
PyTorch - a little early for mobile deployment.
TensorFlow - do more work with Keras, but can deploy out to other platforms, though you need to do a lot of work to get there.
3. Reviewing Dog breeds as an example to submit to Kaggle
how to make predictions - will use dogs / cats for simplicity. Jeremy uses Dog breeds for walkthrough.
By default, PyTorch gives back the log probability.
log_preds,y = learn.TTA(is_test=True)
probs = np.exp(log_preds)
Note: is_test = True gives predictions on test set rather than validation set.
df = pd.DataFrame(probs)
df.columns = data.classes
df.insert(0,'id', [o[5:-4] for o in data.test_ds.fnames])
Explanation: Insert a new column at position zero named ‘id’. subset and remove first 5 and last 4 letters since we just need ids.
df.head()
id cats dogs
0 /828 0.000005 0.999994
1 /10093 0.979626 0.013680
2 /2205 0.999987 0.000010
3 /11812 0.000032 0.999559
4 /4042 0.000090 0.999901
with large files compression is important to speedup work
SUBM = f'{PATH}sub/'
os.makedirs(SUBM, exist_ok=True)
df.to_csv(f'{SUBM}subm.gz', compression='gzip', index=False)
Gives you back a URL that you can use to download onto your computer. For submissions, or file checking etc.
FileLink(f'{SUBM}subm.gz')
4. What about a single prediction?
assign a single picture
fn = data.val_ds.fnames[0]
fn
'valid/cats/cat.9000.jpg'
can always view the photo
Image.open('data/dogscats/'+fn)
Shortest way to do a single prediction
Make sure you transform the image before submitting to the learn.
im = val_tfms(open_image(PATH+fn))
learn.predict_array(im[None])
(Note the use of open_image instead of Image.open above - this divides by 255 and converts to np.array as is done during training)
Everything passed to or returned from models is assumed to be a mini-batch of "tensors", so it should be a 4-d tensor (batch size, channels, height, width). This is why we add another dimension via im[None].
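A quick way to see this shape bookkeeping (with made-up array sizes):

```python
# Illustration of adding a leading batch dimension with indexing by None
# (same idea as im[None] above); shapes here are made up.
import numpy as np

im = np.zeros((3, 224, 224))      # a single transformed image: channels x h x w
batch = im[None]                  # add axis 0 -> shape (1, 3, 224, 224)
print(im.shape, batch.shape)
```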
trn_tfms, val_tfms = tfms_from_model(resnet50,sz)
Predict dog or cat!
im = val_tfms(open_image('data/dogscats/'+fn))
preds = learn.predict_array(im[None])
np.argmax(preds) # 0 is cat
0
5. Convolution: Whats happening behind the scenes?
Otavio Good’s Video
The theory behind Convolutional Networks, and Otavio Good demo of Word Lens, now part of Google Translate.
The video shows the illustration of the image recognition of a letter A (for classification). Some highlights:
• Positives
• Negatives
• Max Pools
• Another Max Pools
• Finally, we compare it to a template of A, B, C, D, E, then we get a % probability.
• Illustrating a pretrained model.
Definitions
term definitions
Activations Input numbers x kernel matrix = numbers
Relu MAX(0, calculated number)
Filter / Kernel refers to the same thing, the 3x3 slice of a tensor
tensor array with more dimensions. In this case, all these filters can be stacked into a multi-dimensional matrix.
Hidden Layers intermediate calculation, not the input, and not the last layer, so called a hidden layer
Architecture how big is your kernel and how many of them do you have ?
Name your layers typically people will name their layers as they create it Conv1, Conv2
Max pooling a (2,2) max pooling will half the resolution in both height and width, as seen in the excel
Fully Connected Layer give every single activation and give them a weight. Then get a sum product of weights times activations. Really big weight matrix (sized as big as the entire import)
Note: We do fully connected layer on old architecture or structured data. These days we do can many things after Max pooling. One of them is taking max of Max pooling grid. Architecture that make heavy use of fully connected layers are prone to overfitting and are slower. ResNet, ResNext doesn’t use very large fully connected layers.
activation function is a function applied to activations. Max ( ) is an example
Layers
• Input
• Conv1
• Conv2
• Maxpool
• Denseweights
• Dense activation
Example of Max pooling
Refer to entropy_example.xlsx.
Now, if we were to predict numbers (0-9) or categorical data… we’ll have that many output by fully connected layer. There is no ReLU after fully connected so we can have negative numbers. We want to convert these numbers into probabilities which are between 0-1 and add to 1. Softmax is an activation function which helps here. An activation function is a function which we apply to activations. We were using ReLU i.e. max(0,x) until now which is also activation function. Such functions are for non-linearity. An activation function takes a number and spits out a single number.
Example of a softmax layer
Only ever occurs in the final layer. Always spits out numbers between 0 and 1. And the numbers added together gives us a total of 1. This isn’t necessary, we COULD tell them to learn a kernel to give probabilities. But if you design your architecture properly, you will build a better model. If you build the model that way, and it iterates with the proper expected output you will save some time.
output exp softmax
cat 4.84 126.44 0.40
dog 3.98 53.60 0.17
plane 4.89 132.48 0.42
fish -2.80 0.06 0.00
building -1.96 0.14 0.00
Total 312.72 1.00
1. Get rid of negatives
( Exponential column ) - It also accentuates the number and helps us because at the end we want one them with high probability. Softmax picks one of the output with strong probability.
Some basic properties:
$$ln(xy) = ln(x) +ln(y)$$
$$ln(\frac{x}{y}) = ln(x) - ln(y)$$
$$ln(x) = y , e^y = x$$
2. then do the % proportion
$$\frac{e^{x_i}}{\sum_j e^{x_j}} = \text{probability}_i$$
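Here is the same calculation in NumPy, using the logits from the table above (rounding explains any tiny differences):

```python
# Reproducing the softmax table above with NumPy.
import numpy as np

logits = np.array([4.84, 3.98, 4.89, -2.80, -1.96])   # cat, dog, plane, fish, building
exp = np.exp(logits)                                   # step 1: get rid of negatives
softmax = exp / exp.sum()                              # step 2: proportion of the total
print(softmax.round(2))   # [0.4  0.17 0.42 0.   0.  ] and the entries sum to 1
```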
Image models (how do we recognize multiple items?)
import sys
sys.path.append('/home/paperspace/repos/fastai')
import torch
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
PATH = '/home/paperspace/Desktop/data/Planet: Understanding the Amazon from Space/'
list_paths = [f"{PATH}train-jpg/train_0.jpg", f"{PATH}train-jpg/train_1.jpg"]
titles=["haze primary", "agriculture clear primary water"]
#plots_from_files(list_paths, titles=titles, maintitle="Multi-label classification")
f2 is f_beta with beta = 2, which weights false negatives much more heavily than false positives
def f2(preds, targs, start=0.17, end=0.24, step=0.01):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return max([fbeta_score(targs, (preds>th), 2, average='samples')
for th in np.arange(start,end,step)])
#from planet import f2
metrics=[f2]
Write any metric you like
Custom metrics from the planet.py file
from fastai.imports import *
from fastai.transforms import *
from fastai.dataset import *
from sklearn.metrics import fbeta_score
import warnings
def f2(preds, targs, start=0.17, end=0.24, step=0.01):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return max([fbeta_score(targs, (preds>th), 2, average='samples')
for th in np.arange(start,end,step)])
def opt_th(preds, targs, start=0.17, end=0.24, step=0.01):
ths = np.arange(start,end,step)
idx = np.argmax([fbeta_score(targs, (preds>th), 2, average='samples')
for th in ths])
return ths[idx]
def get_data(path, tfms,bs, n, cv_idx):
val_idxs = get_cv_idxs(n, cv_idx)
return ImageClassifierData.from_csv(path, 'train-jpg', f'{path}train_v2.csv', bs, tfms,
suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')
def get_data_zoom(f_model, path, sz, bs, n, cv_idx):
tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
return get_data(path, tfms, bs, n, cv_idx)
def get_data_pad(f_model, path, sz, bs, n, cv_idx):
transforms_pt = [RandomRotateZoom(9, 0.18, 0.1), RandomLighting(0.05, 0.1), RandomDihedral()]
tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_pt, pad=sz//12)
return get_data(path, tfms, bs, n, cv_idx)
f_model = resnet34
label_csv = f'{PATH}train_v2.csv'
n = len(list(open(label_csv)))-1
val_idxs = get_cv_idxs(n)
We use a different set of data augmentations for this dataset - we also allow vertical flips, since we don’t expect vertical orientation of satellite images to change our classifications.
Here we’ll have 8 flips. 90, 180, 270 and 0 degree. and same for the side. We’ll also have some rotation, zooming, contrast and brightness adjustments.
data.val_ds returns a single item/image, e.g. data.val_ds[0].
data.val_dl returns a generator, which yields mini-batches of items/images. We always get the next mini-batch.
def get_data(sz):
tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
return ImageClassifierData.from_csv(PATH, 'train-jpg', label_csv, tfms=tfms,
suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')
PATH = '/home/paperspace/Desktop/data/Planet: Understanding the Amazon from Space/'
os.makedirs('data/planet/models', exist_ok=True)
os.makedirs('cache/planet/tmp', exist_ok=True)
label_csv = f'{PATH}train_v2.csv'
data = get_data(256)
x,y = next(iter(data.val_dl))
y
1 0 0 ... 0 1 1
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
... ⋱ ...
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
1 0 0 ... 0 0 0
[torch.FloatTensor of size 64x17]
list(zip(data.classes, y[0]))
[('agriculture', 1.0),
('artisinal_mine', 0.0),
('bare_ground', 0.0),
('blooming', 0.0),
('blow_down', 0.0),
('clear', 1.0),
('cloudy', 0.0),
('conventional_mine', 0.0),
('cultivation', 0.0),
('habitation', 0.0),
('haze', 0.0),
('partly_cloudy', 0.0),
('primary', 1.0),
('selective_logging', 0.0),
('slash_burn', 1.0),
('water', 1.0)]
One Hot Encoding:
Classification softmax dog (one-hot) Index sigmoid
cat 0 0 0 0.01
dog 0.92 1 1 0.98
plane 0 0 2 0.01
fish 0 0 3 0.0
building 0.08 0 4 0.07
Sigmoid function
$$= \frac{e^\alpha}{1+e^\alpha}$$
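A small NumPy sketch of sigmoid applied per label (the logits are made up); unlike softmax, several labels can be "on" at once:

```python
# Sigmoid squashes each output independently to (0, 1), which is what we want
# for multi-label problems. Made-up logits for illustration.
import numpy as np

def sigmoid(x):
    return np.exp(x) / (1 + np.exp(x))

logits = np.array([2.1, -1.3, 0.4, -3.0])      # one score per label
probs = sigmoid(logits)
print(probs.round(2))                           # e.g. [0.89 0.21 0.6  0.05]
labels = probs > 0.5                            # simple threshold per label
print(labels)                                   # [ True False  True False]
```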
plt.imshow(data.val_ds.denorm(to_np(x))[0]*1.4);
How do we use this?
resize the data from 256 down to 64 x 64.
Wouldn’t do this for cats and dogs, because it starts off nearly perfect. If we resized, we destroy the model. Most ImageNet models are designed around 224 which was close to the normal. In this case, since this is landscape, there isn’t that much of ImageNet that is useful for satellite.
So we will start small
sz=64
data = get_data(sz)
What does resize do?
data.resize effectively says: I will not use images larger than sz x 1.3, so go ahead and make new JPEGs where the smallest edge is that size. This saves a lot of processing time. In general, the image resize will take a center crop.
data = data.resize(int(sz*1.3), 'tmp')
Train our model
Note: Training means improving the filters/kernels and the weights in fully connected layers; activations, on the other hand, are simply calculated from them.
learn = ConvLearner.pretrained(f_model, data, metrics=metrics)
To view the model + the layers (only looking at 5)
list(learn.summary().items())[:5]
[('Conv2d-1',
OrderedDict([('input_shape', [-1, 3, 64, 64]),
('output_shape', [-1, 64, 32, 32]),
('trainable', False),
('nb_params', 9408)])),
('BatchNorm2d-2',
OrderedDict([('input_shape', [-1, 64, 32, 32]),
('output_shape', [-1, 64, 32, 32]),
('trainable', False),
('nb_params', 128)])),
('ReLU-3',
OrderedDict([('input_shape', [-1, 64, 32, 32]),
('output_shape', [-1, 64, 32, 32]),
('nb_params', 0)])),
('MaxPool2d-4',
OrderedDict([('input_shape', [-1, 64, 32, 32]),
('output_shape', [-1, 64, 16, 16]),
('nb_params', 0)])),
('Conv2d-5',
OrderedDict([('input_shape', [-1, 64, 16, 16]),
('output_shape', [-1, 64, 16, 16]),
('trainable', False),
('nb_params', 36864)]))]
Search for Learning Rate
lrf=learn.lr_find()
learn.sched.plot()
lr = 0.2
Refit the model
Follow the last few steps on the bottom of the Jupyter notebook.
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
How are the learning rates spread per layer?
[split halfway, split halfway, always last layer only]
lrs = np.array([lr/9,lr/3,lr])
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
learn.sched.plot_loss()
Structured Data
Related Kaggle competition: https://www.kaggle.com/c/favorita-grocery-sales-forecasting
There’s really two types of data. Unstructured and structured data. Structured data - columnar data, columns, etc… Structured data is important in the world, but often ignored by academic people. Will look at the Rossmann stores data.
%matplotlib inline
from fastai.imports import *
from fastai.torch_imports import *
from fastai.structured import *
from fastai.dataset import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import LabelEncoder, Imputer, StandardScaler
import operator
PATH='/home/paperspace/Desktop/data/rossman/'
test = pd.read_csv(f'{PATH}test.csv', parse_dates=['Date'])
def concat_csvs(dirname):
    path = f'{PATH}{dirname}'
    filenames = glob.glob(f"{path}/*.csv")
    wrote_header = False
    with open(f"{path}.csv", "w") as outputfile:
        for filename in filenames:
            name = filename.split(".")[0]
            with open(filename) as f:
                line = f.readline()            # header row of this csv
                if not wrote_header:
                    wrote_header = True
                    outputfile.write("file," + line)
                for line in f:                 # remaining rows, prefixed with file name
                    outputfile.write(name + "," + line)
            outputfile.write("\n")
Feature Space:
• train: Training set provided by competition
• store: List of stores
• store_states: mapping of store to the German state they are in
• List of German state names
• googletrend: trend of certain google keywords over time, found by users to correlate well with given data
• weather: weather
• test: testing set
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
We’ll be using the popular data manipulation framework pandas. Among other things, pandas allows you to manipulate tables/data frames in python as one would in a database.
We’re going to go ahead and load all of our CSV’s as data frames into the list tables.
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]
from IPython.display import HTML
We can use head() to get a quick look at the contents of each table:
• train: Contains store information on a daily basis, tracks things like sales, customers, whether that day was a holiday, etc.
• store: general info about the store including competition, etc.
• store_states: maps store to state it is in
• state_names: Maps state abbreviations to names
• googletrend: trend data for particular week/state
• weather: weather conditions for each state
• test: Same as training table, w/o sales and customers
This is very representative of a typical industry dataset.
The following returns summarized aggregate information to each table across each field.
Next Week - Data prep and transformations
(Kerem Turgutlu) #2
@timlee is like the one guy you always wait for his notes to come fresh from the oven, thanks
(Jeremy Howard) #3
Done! Thanks as always
(Vikrant Behal) #4
@timlee I’m rewatching lesson 3 and adding small notes or more explanations at certain sections which I believe which help beginners. Since I’m watching and updating in parallel, there are 12 versions so far.
Note: My knowledge is limited so feel free to review and update as needed.
(Allie Yang) #5
why we use size=64 in planet competition? is the planet notebook’s purpose to show larger size (256>128>64) performs better ?
(Saksham Malhotra) #6
I am trying to build the satellite model in Keras. I have used pretrained VGG19 and the model on 64x64 images. Now I want to train the same model with 128X128 images. How should I go about this?
(Matthew Winkler) #7
I’m having an issue running the .fit portion of the KerasModel section:
%%time model.fit_generator(train_generator, train_generator.n // batch_size, epochs=3, workers=4, validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
The notebook kernel aborts every time I try. Anyone else encountered / solved this or have any ideas as to why this might be happening?
https://bookdown.org/paul/computational_social_science/understanding-how-dl-works-2.html | ## 15.8 Understanding how DL works (2)
• Loss function (also called objective function)
• To control output of neural network, you need to be able to measure how far this output is from what you expected
• Loss function takes predictions of the network and the true target (what you wanted the network to output) and computes a distance score, capturing how well the network has done on this specific example (see Figure 15.5). | 2022-07-02 01:23:49 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.837285578250885, "perplexity": 945.4396963317791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103983398.56/warc/CC-MAIN-20220702010252-20220702040252-00074.warc.gz"} |
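A minimal illustration of the distance score described in the last bullet: a plain mean-squared-error sketch in NumPy (my example, not taken from the book):
import numpy as np

def mse_loss(predictions, targets):
    # Average squared distance between the network's predictions and the true targets.
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return np.mean((predictions - targets) ** 2)

print(mse_loss([0.9, 0.1, 0.4], [1.0, 0.0, 0.0]))  # 0.06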
https://zbmath.org/?q=an:1206.93062 | # zbMATH — the first resource for mathematics
An application of fuzzy random variables to control charts. (English) Zbl 1206.93062
Summary: The two most significant sources of uncertainty are randomness and incomplete information. In real systems, we wish to monitor processes in the presence of these two kinds of uncertainty. This paper aims to construct a fuzzy statistical control chart that can explain existing fuzziness in data while considering the essential variability between observations. The proposed control chart is an extension of Shewhart’s $$\overline X - S^2$$ control charts in fuzzy space. The proposed control chart avoids defuzzification methods such as fuzzy mean, fuzzy mode, fuzzy midrange, and fuzzy median. It is well known that using different representative values may cause different conclusions to be drawn about the process and vague observations to be reduced to exact numbers, thereby reducing the informational content of the original fuzzy sets. The out-of-control states are determined based on a fuzzy in-control region and a simple and precise graded exclusion measure that determines the degree to which fuzzy subgroups are excluded from the fuzzy in-control region. The proposed chart is illustrated with a numerical example.
##### MSC:
93C42 Fuzzy control/observation systems
93E03 Stochastic systems in control theory (general)
93B51 Design techniques (robust design, computer-aided design, etc.)
Full Text:
##### References:
[1] Asai, K., Fuzzy systems for management, (1995), IOS Press Amsterdam · Zbl 0842.90069 [2] Bandler, W.; Kohout, L.J., Fuzzy power sets and fuzzy implication operators, Fuzzy sets and systems, 4, 13-30, (1980) · Zbl 0433.03013 [3] Betta, G.; Capriglione, D.; Tomasso, G., Evaluation of the measurement uncertainties in the conducted emissions from adjustable speed electrical power drive systems, IEEE trans. instrum. meas., 53, 963-968, (2004) [4] Bosc, P.; Pivert, O., About approximate inclusion and its axiomatization, Fuzzy sets and systems, 157, 1438-1454, (2006) · Zbl 1104.03051 [5] Burillo, P.; Frago, N.; Fuentes, R., Inclusion grades and fuzzy implication operators, Fuzzy sets and systems, 114, 417-429, (2000) · Zbl 0962.03050 [6] Cheng, C.B., Fuzzy process control: construction of control charts with fuzzy numbers, Fuzzy sets and systems, 154, 287-303, (2005) [7] A. Colubi, Statistical inference about the means of fuzzy random variables: applications to the analysis of fuzzy- and real-valued data, Fuzzy Sets and Systems 160 (2009) 344-356, doi: 10.1016/j.fss.2007.12.019. · Zbl 1175.62021 [8] I. Couso, D. Dubois, S. Montes, L. Sanchez, On various definitions of the variance of a fuzzy random variable, in: 5th Internat. Sympos. on Imprecise Probabilities and their Applications, Prague, Czech Republic, 2007. [9] Couso, I.; Dubois, D., On the variability of the concept of variance for fuzzy random variables, IEEE trans. fuzzy syst., 17, 1070-1080, (2009) [10] Dubois, D.; Prade, H., Ranking fuzzy numbers in the setting of possibility theory, Inform. sci., 30, 183-224, (1983) · Zbl 0569.94031 [11] Evans, J.R.; Lindsay, W.M., The management and control of quality, (1999), South-Western College Publishing Cincinnati [12] Faraz, A.; Moghadam, M.B., Fuzzy control chart a better alternative for shewhart average chart, Qual. quantity, 41, 375-385, (2007) [13] A. Faraz, R.B. Kazemzadeh, M.B. Moghadam, A. Bazdar, Constructing a fuzzy Shewhart control chart for variables when uncertainty and randomness are combined, Qual. Quantity (2009), doi 10.1007/s11135-009-9244-9. [14] L. Finkelstein, R.Z. Morawski, L. Mari (Eds.), Logical and philosophical aspects of measurement, Measurement 38 (4) (2005) (special issue). [15] Gil, M.A.; López-Díaz, M.; Ralescu, D.A., Overview on the development of fuzzy random variables, Fuzzy sets and systems, 157, 2546-2557, (2006) · Zbl 1108.60006 [16] P. Grzegorzewski, Control charts for fuzzy data, in: Proc. Fifth European Congress on Intelligent Techniques and Soft Computing EUFIT’97, Aachen, 1997, pp. 1326-1330. [17] Grzegorzewski, P.; Hryniewicz, O., Soft methods in statistical quality control, Control cybernet, 29, 119-140, (2000) · Zbl 1030.90019 [18] Gulbay, M.; Kahraman, C., An alternative approach to fuzzy control charts: direct fuzzy approach, Inform. sci., 177, 463-1480, (2007) · Zbl 1120.93332 [19] Kanagawa, A.; Tamaki, F.; Ohta, H., Control charts for process average and variability based on linguistic data, Internat. J. production res., 31, 913-922, (1993) · Zbl 0769.62076 [20] Körner, R., On the variance of fuzzy random variables, Fuzzy sets and systems, 92, 83-93, (1997) · Zbl 0936.60017 [21] Laviolette, M.; Seaman, J.W.; Barrett, J.D.; Woodall, W.H., A probabilistic and statistical view of fuzzy methods, with discussion, Technometrics, 37, 249-292, (1995) · Zbl 0837.62081 [22] Puri, M.L.; Ralescu, D., Fuzzy random variables, J. math. anal. 
appl., 114, 409-422, (1986) · Zbl 0592.60004 [23] Raz, T.; Wang, J.-H., Probabilistic and membership approaches in the construction of control charts for linguistic data, Production plann. cont., 1, 147-157, (1990) [24] Shewhart, W.A., Economic control of quality of manufactured product, (1931), D. Van Nostrand Inc Princeton, NJ [25] S. Senturk, N. Erginel, Development of fuzzy Xbar-R and Xbar-S control charts using $$\alpha$$-cuts, Inform. Sci. (2008), doi: 10.1016/j.ins.2008.09.022. [26] Shapiro, A.F., Fuzzy random variables, Insurance math. econom., 44, 307-314, (2009) · Zbl 1166.91018 [27] Sinha, D.; Dougherty, E.R., Fuzzification of set inclusion: theory and applications, Fuzzy sets and systems, 55, 15-42, (1993) · Zbl 0788.04007 [28] Taleb, H.; Limam, M., On fuzzy and probabilistic control charts, Internat. J. production res., 40, 2849-2863, (2002) [29] Terán, P., Probabilistic foundations for measurement modeling with fuzzy random variables, Fuzzy sets and systems, 158, 973-986, (2007) · Zbl 1120.60032 [30] Wang, J.-H.; Raz, T., On the construction of control charts using linguistic variables, Internat. J. production res., 28, 477-487, (1990) [31] Woodall, W.; Tsui, K.-L.; Tucker, G.L., A review of statistical and fuzzy control charts based on categorical data, frontiers in statistical quality control, vol. 5, (1997), Physica-Verlag Heidelberg, Germany · Zbl 0900.62538 [32] Young, V., Fuzzy subsethood, Fuzzy sets and systems, 77, 371-384, (1996) · Zbl 0872.94062
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-03-01 07:04:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231740593910217, "perplexity": 10720.476413278211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362133.53/warc/CC-MAIN-20210301060310-20210301090310-00038.warc.gz"} |
http://crypto.stackexchange.com/questions/8925/inverse-problem-about-scalar-multiplication-on-elliptic-curve?answertab=votes | # inverse problem about scalar multiplication on elliptic curve
Let $E$ be an elliptic curve over a finite field $F_p$. Given a positive integer $n$ and a point $Q$ on $E$ with $Q=nP$, how can we find this $P$? We can assume that $n \mid p-1$. If $n$ is "small", I would imagine that it is possible using division polynomials. Is it a difficult problem if $n$ is large enough? How difficult is it?
You can recover $P$ by computing $(n^{-1} \bmod l)\cdot Q$, where $l$ is the order of $Q$. – Samuel Neves Jun 29 '13 at 2:57
Solving $Q=nP$ for $n$ is the discrete logarithm problem and expensive. Solving for $P$ is cheap (assuming the order of the curve is known). – CodesInChaos Jun 29 '13 at 11:20
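A toy end-to-end check of the recipe from the comments above, on the small curve $y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$ whose group of points has order 19 (a pure-Python sketch, Python 3.8+ for pow(x, -1, m); not from the original thread):
p, a = 17, 2                             # curve y^2 = x^3 + 2x + 2 over F_17

def point_add(P, Q):
    if P is None: return Q               # None plays the role of the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):                   # double-and-add
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P, k = point_add(P, P), k >> 1
    return R

P, l, n = (5, 1), 19, 7                  # l = order of the subgroup containing Q
Q = scalar_mult(n, P)                    # the "known" point Q = nP
recovered = scalar_mult(pow(n, -1, l), Q)
assert recovered == P                    # P is recovered from Q and n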
Why do you assume that $n$ divides $p-1$ ? Is there any specific reason for this condition, if so could you explain what it is ? – minar Jul 14 '13 at 17:56 | 2014-12-20 03:04:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9304466247558594, "perplexity": 152.26745176496414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769328.92/warc/CC-MAIN-20141217075249-00153-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://www.fqxi.org/community/forum/topic/1282 | FQXi FORUM
CATEGORY: FQXi Essay Contest - Spring, 2012 [back]
TOPIC: Does Milgrom's Acceleration Law Imply That the Equivalence Principle Is Wrong? by David Brown [refresh]
Author David Brown wrote on Jun. 15, 2012 @ 12:24 GMT
Essay Abstract
Theoretical physics is amazingly successful, but the cosmological puzzle of dark matter has not been satisfactorily explained. If there is something wrong with the foundations of theoretical physics then dark matter is a good starting point for challenging the foundations.
Author Bio
David Brown has an M.A. in mathematics from Princeton University and was for a number of years a computer programmer.
Armin Nikkhah Shirazi wrote on Jun. 16, 2012 @ 09:22 GMT
Hi David,
I just read your short essay. While I personally believe that the implications of Milgrom's results should be considered seriously, it is unfortunately a fact that the overwhelming majority of astrophysicists and astronomers dismiss them out of hand without even trying to understand them. Given the brevity of your essay, there would have been room to present the arguments which rebut common dismissive claims about MOND, such as those involving the bullet cluster observations, and an opportunity to educate the skeptics seems lost.
Also, in addition to the hypotheses you list in your paper, an additional one may be that at extremely large scales, nature could simply be different in some as yet to be understood way as compared to our everyday scale, just as it appears to be at extremely small scales.
Nonetheless, I appreciate your contribution to this contest and hope that it will get many people to think about the issues you raise.
Armin
Joe Fisher wrote on Jun. 19, 2012 @ 13:09 GMT
Surely, dark matter cannot be anything else other than the residue of long extinguished stars. The stars cannot possibly be distributed chronologically, or in any logical sequence at all. Each star is located at a different intervening distance apart from every other star. I think you will find that each indication of dark matter has to appear to be at a different intervening distance apart from all the other locations of dark matter. It is possible that each star came into existence at a different moment of time and in a different location than every other star did. It is more likely that the stars are eternal and that they have always been pretty much where they can be noticed presently. I think each star simply continuously heats up and expands then cools off and shrinks thus constantly altering its gravitational intensity and this has always been so.
Alan Lowey wrote on Jun. 21, 2012 @ 11:23 GMT
Dear David,
Congratulations on a fundamental line of thought with your essay. I agree that the spiral galaxy rotation curves are the key to understanding a new physics.
Neil Bates wrote on Jun. 28, 2012 @ 15:58 GMT
Years ago I wondered, a related issue for comparison: if an electric charge oscillated up and down (in SHM) through a tunnel making a diameter of the Earth, would it radiate? The electromagnetic field around the charge is changing so it should radiate, but since the charge is in free fall, it should act like there's no radiative reaction force from the charge's change of acceleration (Abraham-Lorentz law.) If so, that would violate conservation of energy, since the radiation would not have to be "worked for" by pushing the charge against the RRF (as we must do in an antenna, etc.) If the EP is wrong, then maybe that would mean the charge doesn't have to act like it's in free fall (per "Einstein's elevator".)
I've heard similar before here and there. Supposedly, the explanation is: the radiation itself curves back around to push against the moving charge. I find that hard to believe intuitively, is it credible?
PS I previously had an entry, for the Contest "Is Reality Digital or Analog?" I again thank FQXi for giving me a forum.
Dirk Pons wrote on Aug. 8, 2012 @ 08:22 GMT
David
Would indeed be nice to see dark energy and dark matter resolved with one mechanism.
Thank you
Dirk
Anonymous wrote on Sep. 2, 2012 @ 18:10 GMT
Your essay is very focused. I gave it a fairly high rating because of this.
Member Benjamin F. Dribus wrote on Sep. 19, 2012 @ 04:37 GMT
Dear David,
I think you're correct that this is an important problem and that more effort should be devoted to examining the validity of the equivalence principle, rather than simply assuming the existence of dark matter to explain the anomalous rotation curves. Personally, I think we should think very carefully about the effect of scale. Different types of interactions dominate at different scales: the strong/weak interactions on the nuclear level, the electromagnetic interaction up to ordinary scales, then gravity, "dark matter," and finally "dark energy" on the largest scales. It might be that "gravitational mass" loses its relevance for objects separated by great distances, because the interaction between them is of a different nature than usual gravitation. Anyway, I enjoyed reading your essay. Take care,
Ben Dribus
hoang cao hai wrote on Sep. 19, 2012 @ 15:24 GMT
Dear
Very interesting to see your essay.
Perhaps each of us is convinced that our own choice is right! That, of course, is reasonable.
So maybe we should work together to clearly define the theoretical foundations, as the most intellectually challenging task for all of us.
Why do we not try to start with a real challenge that is very close and at the focus of interest of human science: the matter of mass and the Higgs boson of the standard model?
With your knowledge and reasoning, will you express an opinion on this matter?
You may think that mass is the expression of an impact force on matter, so that with no impact force we do not feel the Higgs boson, similar to the case of having no weight outside the Earth's atmosphere.
Does there need to be a particle with mass for everything to have volume? If so, then why does the mass of everything change when moving from the Earth to the Moon? Is the Higgs boson lighter because the Moon's gravity is weaker than the Earth's?
The LHC particle accelerator is used to "smash" particles until a Higgs boson is "ejected", but why can we see it only when smashing, and not otherwise?
Can Higgs particles be "locked up"? If so, when they are "released" and we do not act on them with any force, how do we know whether they are "out" or not?
You should boldly give a definition of weight that you think is right for us to consider, or oppose my opinion.
Because in the process of research, the value of "failure" is similar to that of "success" in science. Must a correct theory be without any wrong point?
Glad to see comments from you soon, because there are still too many of the same problems.
Regards !
Hải.Caohoàng of THE INCORRECT ASSUMPTIONS AND A CORRECT THEORY
August 23, 2012 - 11:51 GMT on this essay contest.
Juan Ramón González Álvarez wrote on Sep. 23, 2012 @ 11:44 GMT
Dear David Brown,
Effectively, Milgrom equation implies that the equivalence principle has only a limited validity. The section 8 of my essay is devoted to the myth of dark matter (DM) and to the explanation of how MOND can be obtained from a more general theory [11].
Effectively the general theory explains what is the limit of validity of the equivalence principle. You correctly notice that the equivalence principle has been tested "to many decimal places", but only in situations where the corrections to GR vanish or are too small to be measured. Precisely there are many galactic phenomena where the equivalence principle fails and it is precisely there where GR (even assuming a hypothetical distribution of DM) cannot explain the observed phenomena. Due to geometrical constraints (inherited from the equivalence principle), dark matter theorists cannot distribute the hypothetical dark matter in arbitrary ways, and thus their constrained distributions cannot explain the fine-tuning details that are, however, explained by MOND and similar theories.
Regards.
Sergey G Fedosin wrote on Oct. 4, 2012 @ 09:44 GMT
In case you do not understand why your rating dropped: as far as I can tell, ratings in the contest are calculated in the following way. Suppose your rating is $R_1$ and $N_1$ is the number of people who gave you ratings. Then you have $S_1 = R_1 N_1$ points. If someone then gives you $dS$ points, you have $S_2 = S_1 + dS$ points, and $N_2 = N_1 + 1$ is the total number of people who rated you, so that $S_2 = R_2 N_2$. From here, if you want $R_2 > R_1$, there must be
$S_2 / N_2 > S_1 / N_1$
or
$(S_1 + dS) / (N_1 + 1) > S_1 / N_1$
or
$dS > S_1 / N_1 = R_1$
In other words, if you want to increase anyone's rating you must give him more points $dS$ than the participant's rating $R_1$ was at the moment you rated him. From this it is clear that the contest has special rules for ratings, and from this comes the misunderstanding of some participants about what happened to their ratings. Moreover, since community ratings are hidden, some participants are not sure how to increase the ratings of others and simply give them the maximum 10 points. In that case the scale from 1 to 10 points does not work as intended, and some essays are overestimated while others drop down. In my opinion this is a bad problem with this contest's rating process. I hope the FQXi community will change the rating process.
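A quick numeric check of that rule, with hypothetical numbers just to see the inequality in action:
R1, N1 = 6.0, 4                # current rating from 4 votes
S1 = R1 * N1                   # 24 accumulated points
for dS in (5, 6, 7):           # a new vote below, at, and above R1
    R2 = (S1 + dS) / (N1 + 1)
    print(dS, round(R2, 2))    # 5 -> 5.8, 6 -> 6.0, 7 -> 6.2; the rating rises only when dS > R1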
Sergey Fedosin
insert equation into post at cursor | 2013-05-23 16:32:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 12, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4956643581390381, "perplexity": 1645.2839105528521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703592489/warc/CC-MAIN-20130516112632-00032-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://alexknvl.com/posts/free-theorems.html | Free theorems.
F: I would say that a detailed understanding of parametricity is an intermediate topic. Are you studying it out of interest, or because you think it is necessary to continue with your FP learning?
A:
def foo[A](a: A): A
How many possible functions foo are there assuming that foo must be total, not throw exceptions, and not do I/O (including reflection + methods like getClass)?
K: F, Do I have to understand in detail for FP?
F: The approach A is taking is what we primarily use it for in everyday programs. Go with his explanation and you’ll know what you need to know for everyday use I would say, no need to go to the paper at this point.
K: foo is the identity function.
A: Right.
def bar[A, B](a: A, b: B): A
Same question.
K: What do you mean possible functions?
A: How many different functions can you implement with such signature?
K: Just two because it has to return one of the inputs.
A: Yes.
def baz[A](a: List[A]): List[A]
Suppose I run baz(List(1, 2, 3)). Can I get a list with a 4 back?
K: No.
A: What kind of things can baz do to my list? Can it, for instance, sort the list? Reverse it? Concatenate two copies? Return the first element if any or Nil?
R: Sure thing you can
A: No you can not. We are still assuming that functions must be total, not throw exceptions, and not do I/O (including reflection + methods like getClass)?
R: Sorry, you’re right, can’t get List(4) back. But you can still have something like:
def baz[A](a: List[A]): List[A] = Nil
I.e. it’s not identity only. Can do anything specific to list, but not to its elements.
A: Right. You can’t sort or find the least element, but you can for instance do:
def baz[A](a: List[A]): List[A] = a ++ a
def baz[A](a: List[A]): List[A] = a.reverse
def baz[A](a: List[A]): List[A] = a match {
case Nil => Nil
case x :: xs => List(x)
}
You can’t even map in a meaningful way, because the only function you have is id: A => A.
K: What does A => A mean? A function?
A: A type of function.
A:
def baz[A](a: List[A]): List[A] = a.map(f)
f can only be an identity function here, because it has a signature A => A (for any A).
A: Theorems For Free paper shows a way to derive all such “laws” for polymorphic functions. There are more advanced things you can say, for instance:
// For any a: List[A],
// f: A => B, and
// baz: [A] List[A] => List[A]
baz(a).map(f) == baz(a.map(f))
Since FP languages (or FP discipline) constrains possible programs you can write, reasoning about them becomes easier. Polymorphic functions are easier to reason about because there is not much they can possibly do. Freedom to do reflection or I/O in every single function takes that reasoning away.
K: What do you mean by “For any a: List[A], f: A => B, and baz: [A] List[A] => List[A]?
A: It is straightforward to prove (using that paper) that whatever types A and B you choose, whatever a, f of types List[A], A => B you come up with, baz(a).map(f) == baz(a.map(f)) will be true. baz can not observe any properties of the elements of a, so it can’t modify them, only reorder / trim / reverse.
K: So if I would like the reorder the list in baz, how would the signature look like?
R: I guess
def baz[A: Ord](a: List[A]): List[A]
A: If you want to be able to compare elements, that is correct.
def baz[A](l: List[A]): List[A]
already allows you to reorder them, but not based on the elements themselves.
R: Well, I meant some meaningful reordering, besides reverse :smile:.
A: Is List(1, 2, 3, 4) => List(2, 1, 4, 3), swapping adjacent pairs of elements meaningful?
R: Well, meaning is a fuzzy concept.
M: You can also write a def reorder[A](a: List[A]): List[A], but the way you reorder them will be fixed, that is to say, if you pass a List[Int], it will be re-ordered in the exact same way as if you pass a List[String], because you can’t get to the Int or the String.
A: Here is another interesting example of the free theorems in action:
trait Eq[A, B] {
def subst[F[_]](fa: F[A]): F[B]
}
How many possible Eq[A, B] are there (assuming you can’t add extra defs, vals, vars, pattern match on an open trait, etc)? It’s a trick question.
R: Zero?
A: It depends on whether A is the same as B. There is exactly one Eq[A, A] for any A, and exactly zero Eq[A, B] for different A and B. So Eq represents type equality.
R: But the signature is silent about whether A = B, So the only safe assumption is that they’re different.
M: That’s the only safe false assumption :wink:.
A:
def refl[A]: Eq[A, A] = new Eq[A, A] {
def subst[F[_]](fa: F[A]): F[A] = fa
}
M: If you have an instance of Eq[A, B], you know for sure that A = B, because otherwise you couldn’t have one, right?
A: Yes. This amazing data-type is so powerful, that in Idris you can use it to prove theorems, same way you can with built-in =. And you can use Eq to implement dynamic typing in functional languages.
R: But what’s the point in defining it with two different type parameters, if you know for sure you can only have it with one? Sorry for stupid question.
M: There are no stupid questions.
A:
def foo[A, B](eq: Eq[A, B], a: A): B
Because you can have two different types in one context that are equal in another. The caller of foo knows that A = B, but inside foo they look like different types. Here is another, more realistic example:
sealed abstract class F[A, B] {
def isFoo: Option[Eq[A, B]]
}
final case class Foo[A]() extends F[A, A] {
def isFoo: Option[Eq[A, A]] = Some(refl[A])
}
You can usually just pattern match case Foo() => to recover A = B, and it will just work. However, GADTs in Scala are utterly broken, so it’s sometimes not the case.
sealed abstract class F[A, B]
final case class Foo[A]() extends F[A, A]
def f[A, B](fab: F[A, B], a: A): B = fab match {
case Foo() => a
}
doesn’t work in ScalaFiddle (2.11?).
Y: I was eavesdropping and the concept of free theorems sounds fascinating. Thanks!
A: https://alexknvl.com/cgi-bin/free-theorems-webui.cgi - there is an automated version of the paper. When I enter [a] -> [a] (Haskell’s syntax for [A] List[A] => List[A]), I get
map g (f x) = f (map g x)
Or in Scala:
f(x).map(g) == f(x.map(g))
A: For f: [A] A -> A it produces forall g: A => A . g(f(x)) = f(g(x)), which means that f is an identity. Notice that g above is not necessarily polymorphic in A, which is precisely why the law implies that f is an identity.
Now let’s look at:
def f[A]: A = ???
f must return a value of type A for any type A, is it possible to fill in the ??? to make that work? What if I run f[MySecretType]:
final class MySecretType private ()
val x: MySecretType = f[MySecretType]
Clearly, unless we go outside Scalazzi language subset, there are no possible implementations of f.
Let’s look at a -> b -> c, or in Scala’s notation [A, B, C](a: A, b: B): C. How many functions of this type are there? Consider partially applying it to its arguments a and b and only then specifying C:
val f : [A, B](a, b)[C]: C
This function returns [C] C, which as we already know, has no possible instances.
Y: Could you help me out with the analysis of (a -> b -> c) -> [a] -> [b] -> [c]?
A: Let’s first rewrite it in Scala.
def f[A, B, C](f: (A, B) => C, la: List[A], lb: List[B]): List[C]
The caller knows A, B, C, so they can supply an f. compare it to the above discussion: There is no polymorphic [A, B, C] (A, B) => C, but for concrete A, B, and C there could be tons of (A, B) => C. An important distinction.
Y: Right.
A: Here is the free theorem for f :: (a -> b -> c) -> [a] -> [b] -> [c], def f[A, B, C](p: (A, B) => C, la: List[A], lb: List[B]): List[C] according to the generator:
forall t1,t2 in TYPES, g :: t1 -> t2.
forall t3,t4 in TYPES, h :: t3 -> t4.
forall t5,t6 in TYPES, f1 :: t5 -> t6.
forall p :: t1 -> t3 -> t5.
forall q :: t2 -> t4 -> t6.
(forall x :: t1. forall y :: t3. f1 (p x y) = q (g x) (h y))
==> (forall z :: [t1].
forall v :: [t3]. map f1 (f p z v) = f q (map g z) (map h v))
Y: Yeah kind of blown away.
A: Well, first we can intuit that f can’t look inside A, B, and C, and can’t produce C out of thin air, so to return List[C], it must call p.
Y: Yes.
A: It can’t produce A or B out of thin air either, so it must use elements of la and lb to call p.
Y: Sure.
A: f can be zipWith, or it can apply any sort of [A] List[A] => List[A] on la and lb and then zipWith, intuitively that it is all it can do.
Y: zipWith is definitely what I intended. But I am curious how zipWith relates to the theorem.
A: The theorem says that if gh (p x y) = q (g x) (h y), then map gh (f p la lb) = f q (map g la) (map h lb). So it basically says that if you first map the two lists, it’s the same as mapping the result.
K: What does forall stands for?
A: When it says forall t :: TYPES it means roughly the same as [T] in Scala, if it is forall a :: t where t is a type, then it means “whatever the value of a”.
A: I’ll rewrite everything in Scala in a sec, that will make it much clearer I think.
if gh(p(x, y)) = q(g(x), h(y)) then
f(p, la, lb).map(gh) = f(q, la.map(g), lb.map(h))
Now, we know from our discussion before that to produce elements of List[C], f must use p, so every element of List[C] was obtained by calling p. So we can move gh inside:
if (gh compose p)(x, y) = q(g(x), h(y)) then
f(gh compose p, la, lb) = f(q, la.map(g), lb.map(h))
I think this is pretty clear.
Y: I think I’m not fully there yet… What is the literary meaning you want to pull out?
A: Let’s simplify a bit further, define p' to be gh compose p:
if p'(x, y) = q(g(x), h(y)) then
f(p', la, lb) = f(q, la.map(g), lb.map(h))
See how it makes sense?
Y: Ahah. Yes, this is amazing.
Y: Haven’t read through the paper yet, so I’m only giving wild guesses. My guess is that a theorem derived from the type of a function is a key indicator of the properties of the function? So it would seem natural to think that a function’s type already speaks much about the semantics of a function!
A: If you are disciplined with your code, then yes. On JVM you can break all sorts of rules.
Y: Right… I’ve always had an intuition that a function’s type already speaks much about what it does, and that’s why I’ve been looking into languages like Scala in the first place. I guess Free Theorems is a solid foundation for my religious beliefs.
A: One of the reasons FP people advocate so strongly for:
• no null
• real parametricity
• tail-call elimination
• no I/O unless in a monad
• no exceptions
• no partial functions
is because all of these (or lack thereof) break this reasoning in one way or another.
P: :+1: The idea is just to build a bubble of determinism in a world of randomness and unpredictability, in which you can reason sanely… It doesn’t prevent IO or mutations but it does it at the boundaries of the bubble, not inside.
Y: Definitely. A, thanks for the info. Really helped broaden my insights.
R: Thanks A.
A: Paul Philips has rightly noticed that the three principles of INGSOC apply nicely to FP. The last two for sure:
• War Is Peace
• Freedom Is Slavery - side-effects (freedom) enslave.
• Ignorance Is Strength - ignorance (parametricity) gives you strength (free theorems). | 2019-05-24 04:59:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5180683732032776, "perplexity": 3317.028786000977}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257514.68/warc/CC-MAIN-20190524044320-20190524070320-00178.warc.gz"} |
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1100/2/b/d/ | # Properties
Label: 1100.2.b.d
Level: $1100$
Weight: $2$
Character orbit: 1100.b
Analytic conductor: $8.784$
Analytic rank: $0$
Dimension: $4$
CM: no
Inner twists: $2$
# Related objects
## Newspace parameters
Level: $$N = 1100 = 2^{2} \cdot 5^{2} \cdot 11$$
Weight: $$k = 2$$
Character orbit: $$[\chi] =$$ 1100.b (of order $$2$$, degree $$1$$, not minimal)
## Newform invariants
Self dual: no
Analytic conductor: $$8.78354422234$$
Analytic rank: $$0$$
Dimension: $$4$$
Coefficient field: $$\Q(i, \sqrt{21})$$
Defining polynomial: $$x^{4} + 11x^{2} + 25$$
Coefficient ring: $$\Z[a_1, \ldots, a_{7}]$$
Coefficient ring index: $$1$$
Twist minimal: yes
Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + \beta_1 q^{3} + (3 \beta_{2} + \beta_1) q^{7} + (\beta_{3} - 3) q^{9}+O(q^{10})$$ q + b1 * q^3 + (3*b2 + b1) * q^7 + (b3 - 3) * q^9 $$q + \beta_1 q^{3} + (3 \beta_{2} + \beta_1) q^{7} + (\beta_{3} - 3) q^{9} - q^{11} - \beta_{2} q^{13} + (2 \beta_{2} + \beta_1) q^{17} + (2 \beta_{3} - 3) q^{19} + ( - 2 \beta_{3} - 3) q^{21} + ( - \beta_{2} + \beta_1) q^{23} + 5 \beta_{2} q^{27} + (\beta_{3} - 5) q^{29} + (2 \beta_{3} - 5) q^{31} - \beta_1 q^{33} + ( - 3 \beta_{2} - 2 \beta_1) q^{37} + (\beta_{3} - 1) q^{39} + ( - 2 \beta_{3} - 5) q^{41} - 10 \beta_{2} q^{43} + (7 \beta_{2} + 2 \beta_1) q^{47} + ( - 5 \beta_{3} - 2) q^{49} + ( - \beta_{3} - 4) q^{51} + (3 \beta_{2} - 3 \beta_1) q^{53} + (10 \beta_{2} - 3 \beta_1) q^{57} + ( - 2 \beta_{3} + 7) q^{59} + (\beta_{3} + 6) q^{61} - \beta_{2} q^{63} + 4 \beta_{2} q^{67} + (2 \beta_{3} - 7) q^{69} + 6 \beta_{3} q^{71} + ( - 5 \beta_{2} + \beta_1) q^{73} + ( - 3 \beta_{2} - \beta_1) q^{77} + ( - 7 \beta_{3} + 3) q^{79} + ( - 2 \beta_{3} - 4) q^{81} + (4 \beta_{2} + 5 \beta_1) q^{83} + (5 \beta_{2} - 5 \beta_1) q^{87} + ( - \beta_{3} - 1) q^{89} + (\beta_{3} + 2) q^{91} + (10 \beta_{2} - 5 \beta_1) q^{93} + (9 \beta_{2} + \beta_1) q^{97} + ( - \beta_{3} + 3) q^{99}+O(q^{100})$$ q + b1 * q^3 + (3*b2 + b1) * q^7 + (b3 - 3) * q^9 - q^11 - b2 * q^13 + (2*b2 + b1) * q^17 + (2*b3 - 3) * q^19 + (-2*b3 - 3) * q^21 + (-b2 + b1) * q^23 + 5*b2 * q^27 + (b3 - 5) * q^29 + (2*b3 - 5) * q^31 - b1 * q^33 + (-3*b2 - 2*b1) * q^37 + (b3 - 1) * q^39 + (-2*b3 - 5) * q^41 - 10*b2 * q^43 + (7*b2 + 2*b1) * q^47 + (-5*b3 - 2) * q^49 + (-b3 - 4) * q^51 + (3*b2 - 3*b1) * q^53 + (10*b2 - 3*b1) * q^57 + (-2*b3 + 7) * q^59 + (b3 + 6) * q^61 - b2 * q^63 + 4*b2 * q^67 + (2*b3 - 7) * q^69 + 6*b3 * q^71 + (-5*b2 + b1) * q^73 + (-3*b2 - b1) * q^77 + (-7*b3 + 3) * q^79 + (-2*b3 - 4) * q^81 + (4*b2 + 5*b1) * q^83 + (5*b2 - 5*b1) * q^87 + (-b3 - 1) * q^89 + (b3 + 2) * q^91 + (10*b2 - 5*b1) * q^93 + (9*b2 + b1) * q^97 + (-b3 + 3) * q^99 $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4 q - 10 q^{9}+O(q^{10})$$ 4 * q - 10 * q^9 $$4 q - 10 q^{9} - 4 q^{11} - 8 q^{19} - 16 q^{21} - 18 q^{29} - 16 q^{31} - 2 q^{39} - 24 q^{41} - 18 q^{49} - 18 q^{51} + 24 q^{59} + 26 q^{61} - 24 q^{69} + 12 q^{71} - 2 q^{79} - 20 q^{81} - 6 q^{89} + 10 q^{91} + 10 q^{99}+O(q^{100})$$ 4 * q - 10 * q^9 - 4 * q^11 - 8 * q^19 - 16 * q^21 - 18 * q^29 - 16 * q^31 - 2 * q^39 - 24 * q^41 - 18 * q^49 - 18 * q^51 + 24 * q^59 + 26 * q^61 - 24 * q^69 + 12 * q^71 - 2 * q^79 - 20 * q^81 - 6 * q^89 + 10 * q^91 + 10 * q^99
Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} + 11x^{2} + 25$$ :
$$\beta_{1}$$ $$=$$ $$\nu$$ v $$\beta_{2}$$ $$=$$ $$( \nu^{3} + 6\nu ) / 5$$ (v^3 + 6*v) / 5 $$\beta_{3}$$ $$=$$ $$\nu^{2} + 6$$ v^2 + 6
$$\nu$$ $$=$$ $$\beta_1$$ b1 $$\nu^{2}$$ $$=$$ $$\beta_{3} - 6$$ b3 - 6 $$\nu^{3}$$ $$=$$ $$5\beta_{2} - 6\beta_1$$ 5*b2 - 6*b1
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1100\mathbb{Z}\right)^\times$$.
$$n$$ $$101$$ $$177$$ $$551$$ $$\chi(n)$$ $$1$$ $$-1$$ $$1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
749.1
− 2.79129i − 1.79129i 1.79129i 2.79129i
0 2.79129i 0 0 0 0.208712i 0 −4.79129 0
749.2 0 1.79129i 0 0 0 4.79129i 0 −0.208712 0
749.3 0 1.79129i 0 0 0 4.79129i 0 −0.208712 0
749.4 0 2.79129i 0 0 0 0.208712i 0 −4.79129 0
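The embedded values can be sanity-checked numerically: the roots of the defining polynomial $$x^{4} + 11x^{2} + 25$$ should match the listed values of $$\nu$$. A small sketch (not part of the LMFDB page; assumes NumPy is available):
import numpy as np

roots = np.roots([1, 0, 11, 0, 25])   # x^4 + 11x^2 + 25
print(np.sort_complex(roots))
# approximately [-2.79129j, -1.79129j, 1.79129j, 2.79129j], matching the four values of nu above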
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
5.b even 2 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 1100.2.b.d 4
3.b odd 2 1 9900.2.c.x 4
4.b odd 2 1 4400.2.b.s 4
5.b even 2 1 inner 1100.2.b.d 4
5.c odd 4 1 1100.2.a.g 2
5.c odd 4 1 1100.2.a.h yes 2
15.d odd 2 1 9900.2.c.x 4
15.e even 4 1 9900.2.a.bh 2
15.e even 4 1 9900.2.a.bz 2
20.d odd 2 1 4400.2.b.s 4
20.e even 4 1 4400.2.a.bi 2
20.e even 4 1 4400.2.a.bu 2
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
1100.2.a.g 2 5.c odd 4 1
1100.2.a.h yes 2 5.c odd 4 1
1100.2.b.d 4 1.a even 1 1 trivial
1100.2.b.d 4 5.b even 2 1 inner
4400.2.a.bi 2 20.e even 4 1
4400.2.a.bu 2 20.e even 4 1
4400.2.b.s 4 4.b odd 2 1
4400.2.b.s 4 20.d odd 2 1
9900.2.a.bh 2 15.e even 4 1
9900.2.a.bz 2 15.e even 4 1
9900.2.c.x 4 3.b odd 2 1
9900.2.c.x 4 15.d odd 2 1
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(1100, [\chi])$$:
$$T_{3}^{4} + 11T_{3}^{2} + 25$$ T3^4 + 11*T3^2 + 25 $$T_{7}^{4} + 23T_{7}^{2} + 1$$ T7^4 + 23*T7^2 + 1
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T^{4}$$
$3$ $$T^{4} + 11T^{2} + 25$$
$5$ $$T^{4}$$
$7$ $$T^{4} + 23T^{2} + 1$$
$11$ $$(T + 1)^{4}$$
$13$ $$(T^{2} + 1)^{2}$$
$17$ $$T^{4} + 15T^{2} + 9$$
$19$ $$(T^{2} + 4 T - 17)^{2}$$
$23$ $$T^{4} + 15T^{2} + 9$$
$29$ $$(T^{2} + 9 T + 15)^{2}$$
$31$ $$(T^{2} + 8 T - 5)^{2}$$
$37$ $$T^{4} + 50T^{2} + 289$$
$41$ $$(T^{2} + 12 T + 15)^{2}$$
$43$ $$(T^{2} + 100)^{2}$$
$47$ $$T^{4} + 114T^{2} + 225$$
$53$ $$T^{4} + 135T^{2} + 729$$
$59$ $$(T^{2} - 12 T + 15)^{2}$$
$61$ $$(T^{2} - 13 T + 37)^{2}$$
$67$ $$(T^{2} + 16)^{2}$$
$71$ $$(T^{2} - 6 T - 180)^{2}$$
$73$ $$T^{4} + 71T^{2} + 625$$
$79$ $$(T^{2} + T - 257)^{2}$$
$83$ $$T^{4} + 267 T^{2} + 16641$$
$89$ $$(T^{2} + 3 T - 3)^{2}$$
$97$ $$T^{4} + 155T^{2} + 4489$$ | 2022-05-24 00:05:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9911667108535767, "perplexity": 4749.027309733583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00719.warc.gz"} |
https://greprepclub.com/forum/the-operation-is-de-ned-for-all-integers-x-a-1884.html |
# The operation [symbol] is defined for all integers x a
Author Message
TAGS:
Founder
Joined: 18 Apr 2015
Posts: 12080
Followers: 256
Kudos [?]: 3014 [0], given: 11279
The operation [symbol] is defined for all integers x a [#permalink] 24 Jan 2016, 15:08
Expert's post
00:00
Question Stats:
54% (01:26) correct 45% (01:05) wrong based on 91 sessions
The operation $$\otimes$$ is defined for all integers x and y as $$x \otimes y = xy - y$$. If x and y are positive integers, which of the following CANNOT be zero?
A) $$x \otimes y$$
B) $$y \otimes x$$
C) $$(x-1)\otimes y$$
D) $$(x+1) \otimes y$$
E) $$x \otimes (y-1)$$
Practice Questions
Question: 22
Page: 463
Difficulty: medium
[Reveal] Spoiler: OA
_________________
Need Practice? 20 Free GRE Quant Tests available for free with 20 Kudos
GRE Prep Club Members of the Month: Each member of the month will get three months free access of GRE Prep Club tests.
Last edited by Carcass on 29 Jan 2020, 18:13, edited 2 times in total.
Updated
Founder
Joined: 18 Apr 2015
Posts: 12080
Followers: 256
Kudos [?]: 3014 [2] , given: 11279
Re: The operation * is defined for all integers x and y as x * [#permalink] 24 Jan 2016, 15:17
2
KUDOS
Expert's post
Solution
Note: our symbol can be $$*$$ or $$\otimes$$; it does not matter, it is just a placeholder.
The question stem tells us that x and y are positive integers and that the symbol is defined for all integers, so the best way to tackle the question is picking numbers, say
X=1 and Y=2
Scanning and substituting into all the answer choices, you can reach the correct answer. For D: $$(x+1) \otimes y = (x+1)y - y = xy$$, the plain product, so with our numbers $$(1+1) \cdot 2 - 2 = 2$$, and a product of two positive integers can never be zero.
The correct answer is $$D$$
PS: substitute values into the other answer choices and you will see that each of them can be zero for suitable positive x and y. We are searching for the answer that CANNOT be zero.
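A brute-force check of all five choices over small positive integers (my sketch, not from the original post):
def op(x, y):                  # the defined operation: x⊗y = x*y - y
    return x * y - y

choices = {
    "A": lambda x, y: op(x, y),
    "B": lambda x, y: op(y, x),
    "C": lambda x, y: op(x - 1, y),
    "D": lambda x, y: op(x + 1, y),
    "E": lambda x, y: op(x, y - 1),
}
rng = range(1, 20)
for name, f in choices.items():
    can_be_zero = any(f(x, y) == 0 for x in rng for y in rng)
    print(name, "can be zero" if can_be_zero else "can NEVER be zero")
# Only D never hits zero.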
GRE Instructor
Joined: 10 Apr 2015
Posts: 3535
Followers: 133
Kudos [?]: 4009 [5] , given: 65
Re: The operation * is defined for all integers x and y as x * [#permalink] 14 Jul 2016, 14:15
5
KUDOS
Expert's post
Carcass wrote:
The operation * is defined for all integers x and y as x * y = xy − y. If x and y are positive integers, which of the following CANNOT be zero?
A) X*Y
B) Y*X
C) (X-1)*Y
D) (X+1)*Y
E) X*(Y-1)
Let's take the formula x * y = xy − y, and rewrite is as x * y = y(x − 1)
Now let's check each answer choice (BEGINNING WITH E, since the test-makers like to place the correct answer
for these questions near the end, since most test-takers will check the answers from A to E.)
E) X*(Y-1)
Apply the formula to get: (Y-1)(X-1)
Can this expression ever equal 0?
Sure, if Y = 1 and X = 1, then (Y-1)(X-1) = (1-1)(1-1) = 0
ELIMINATE E
D) (X+1)*Y
Apply the formula to get: (Y)(X+1-1)
Simplify to get (Y)(X)
Can this expression ever equal 0?
NO.
If X and Y are positive integers, then (Y)(X) can NEVER equal zero
[Reveal] Spoiler:
D
_________________
Brent Hanneson – Creator of greenlighttestprep.com
Intern
Joined: 20 Sep 2018
Posts: 14
Followers: 0
Kudos [?]: 1 [0], given: 0
Re: The operation * is defined for all integers x and y as x * [#permalink] 26 Oct 2018, 10:52
Supreme Moderator
Joined: 01 Nov 2017
Posts: 371
Followers: 10
Kudos [?]: 166 [1] , given: 4
Re: The operation * is defined for all integers x and y as x * [#permalink] 27 Oct 2018, 02:49
1
KUDOS
Expert's post
The operation * is defined for all integers x and y as $$x * y = xy - y$$. If x and y are positive integers, which of the following CANNOT be zero?
A) $$X*Y$$......XY-Y=0.....Y(X-1)=0.....$$Y\neq{0}$$ but X can be 1... possible
B) $$Y*X$$......XY-X=0.....X(Y-1)=0.....$$X\neq{0}$$ but Y can be 1... possible
C) $$(X-1)*Y$$......(X-1)Y-Y=Y(X-1-1)=Y(X-2)=0.....$$Y\neq{0}$$ but X can be 2... possible
D) $$(X+1)*Y$$......(X+1)Y-Y=Y(X+1-1)=XY=0.....both X and Y are positive,so $$XY\neq{0}$$.... Not possible
E) $$X*(Y-1)$$......X(Y-1)-(Y-1)=(Y-1)(X-1)=0.....any one or both of Y and X can be 1... possible
D
_________________
Some useful Theory.
1. Arithmetic and Geometric progressions : https://greprepclub.com/forum/progressions-arithmetic-geometric-and-harmonic-11574.html#p27048
2. Effect of Arithmetic Operations on fraction : https://greprepclub.com/forum/effects-of-arithmetic-operations-on-fractions-11573.html?sid=d570445335a783891cd4d48a17db9825
3. Remainders : https://greprepclub.com/forum/remainders-what-you-should-know-11524.html
4. Number properties : https://greprepclub.com/forum/number-property-all-you-require-11518.html
5. Absolute Modulus and Inequalities : https://greprepclub.com/forum/absolute-modulus-a-better-understanding-11281.html
Supreme Moderator
Joined: 01 Nov 2017
Posts: 371
Followers: 10
Kudos [?]: 166 [0], given: 4
Re: The operation * is defined for all integers x and y as x * [#permalink] 27 Oct 2018, 03:00
Expert's post
Reetika1990 wrote:
So x*(y-1)=x(y-1)-(y-1)..
Now let x =2 and y =1..2(1-1)-(1-1)=0-0=0
Or x can be anything positove and y =1 ans is 0
Also when y is anything positive and x=1..
x(y-1)-(y-1)=1*(y-1)-(y-1)=(y-1)-(y-1)=0..
So E is possible
Re: The operation * is defined for all integers x and y as x * [#permalink] 27 Oct 2018, 03:00
Display posts from previous: Sort by | 2020-07-13 16:59:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4921092987060547, "perplexity": 3672.256494624016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146247.90/warc/CC-MAIN-20200713162746-20200713192746-00557.warc.gz"} |
http://electrochemical.asmedigitalcollection.asme.org/article.aspx?articleid=2623349 |
# Gas Diffusion Electrode With Large Amounts of Gas Diffusion Channel Using Hydrophobic Carbon Fiber: For Oxygen Reduction Reaction at Gas/Liquid Interfaces
Author and Article Information
Department of Chemical System Engineering,
The University of Tokyo,
7-3-1 Hongo,
Bunkyo-ku, Tokyo 113-8656, Japan;
Department of Materials and Life Science,
Faculty of Science and Technology,
Seikei University,
3-3-1 Kichijoji-kitamachi,
Musashino-shi, Tokyo 180-8633, Japan
Pantira Privatananupunt
Department of Chemical System Engineering,
The University of Tokyo,
7-3-1 Hongo,
Bunkyo-ku, Tokyo 113-8656, Japan
e-mail: myosinsama@gmail.com
Toshiyuki Iwasaki
Department of Chemical System Engineering,
The University of Tokyo,
7-3-1 Hongo,
Bunkyo-ku, Tokyo 113-8656, Japan
e-mail: awk104@gmail.com
Ryuji Kikuchi
Department of Chemical System Engineering,
The University of Tokyo,
7-3-1 Hongo,
Bunkyo-ku, Tokyo 113-8656, Japan
e-mail: rkikuchi@chemsys.t.u-tokyo.ac.jp
1Corresponding author.
Manuscript received January 27, 2017; final manuscript received April 14, 2017; published online May 2, 2017. Assoc. Editor: Dirk Henkensmeier.
J. Electrochem. En. Conv. Stor. 14(2), 020903 (May 02, 2017) (9 pages) Paper No: JEECS-17-1014; doi: 10.1115/1.4036507 History: Received January 27, 2017; Revised April 14, 2017
## Abstract
For a gas diffusion cathode for the oxygen reduction reaction (ORR) in aqueous alkaline electrolyte, it is important to create networks for O2 gas diffusion, electronic conduction, and liquid-phase OH− transport in the cathode at once. In this study, we succeeded in fabricating a promising cathode using hydrophobic vapor grown carbon fibers (VGCF-Xs), instead of hydrophobic carbon blacks (CBs), as additives to its active layer (AL). Mercury porosimetry, as well as electrochemical impedance spectroscopy, showed that the porosity of the cathode gradually increased with an increasing amount of the carbon fibers. In other words, adding a larger amount of the carbon fibers creates better O2 gas diffusion channels. Also, the activation polarization resistance for the ORR increased as the carbon fibers' amount rose from 0 to 0.03–0.04 g and then dropped. In consequence, the cathode with 0.03 g of the carbon fibers exhibited the highest ORR performance among the prepared cathodes.
## Figures
Fig. 1
Schematic image of (a) gas diffusion electrode structure, (b) procedure for gas diffusion electrode fabrication, (c) apparatus for electrochemical measurement, (d) preparation of working electrode, and (e) apparatus for cyclic voltammetry measurement
Fig. 2
Equivalent circuit used in this study; Rs: ohmic resistance, R1: activation resistance, R2: diffusion resistance, and CPE: constant phase element
Fig. 3
Transmission electron microscopy image of Pt/CB
Fig. 4
Scanning electron microscopy images of AL surface of (a) CF3 and (b) CF6. Scale: (from left to right) ×30 k and ×150 k.
Fig. 5
Pore size distribution of several types of GDEs obtained by Hg porosimetry: (a) NC, (b) CF1, (c) CF3, (d) CF6, (e) CB6, and (f) No AL. Pressure: 0–4000 atm.
Fig. 6
Influence of VGCF-X or hydrophobic CB presence in AL on (a) specific volume of secondary pores and (b) porosity of AL. Detail information is in Table 2. “No AL” sample is a GDE without AL.
Fig. 7
I–V curves of NC, CF6, and CB6. All experiments were conducted under O2 partial pressure = 1.0 atm at 70 °C. Sweep rate: 5 mV s−1.
Fig. 8
Nyquist plots of (a) NC, (b) CF6, and (c) CB6. All experiments were conducted under O2 partial pressure = 1.0 atm at 70 °C.
Fig. 10
I–V curves of (a) NC, (b) CF1, (c) CF3, and (d) CF6. All experiments were conducted under O2 partial pressure = 1.0, 0.6, and 0.2 atm at 70 °C. Sweep rate: 5 mV s−1.
Fig. 11
Nyquist plots of CF6. All experiments were conducted under O2 partial pressure = (a) and (d) 1.0, (b) and (e) 0.6, and (c) 0.2 atm at 70 °C.
Fig. 9
I–V curves of CF1, CF3, CF4, and CF6. All experiments were conducted under O2 partial pressure = 1.0 atm at 70 °C. Sweep rate: 5 mV s−1.
Fig. 12
Effect of VGCF-X amount in the GDEs on (•) current density (at ΔE=−0.5 V), (◻) activation resistance, and (△) diffusion resistance. All experiments were conducted under O2 partial pressure = (a) 0.6 atm and (b) 1.0 atm at 70 °C. The raw data were summarized in Table 3.
Fig. 13
Cyclic voltammetry for NC, CF3, CF6, and CB6
Topic Collections | 2017-06-23 05:06:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30931156873703003, "perplexity": 12812.857017989807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320003.94/warc/CC-MAIN-20170623045423-20170623065423-00166.warc.gz"} |
https://community.flexerasoftware.com/archive/index.php?t-143062.html&s=488aae5b9ec9f2a5499d93b8224d2155 |
View Full Version : INSTALLDIR and Patch
RGoncalves
01-12-2005, 12:14 PM
Hi, I have a setup that doesn't put the InstallLocation property in the Registry, and when I run a QuickPatch over that setup it gives me an error running a script, because the INSTALLDIR property isn't the same as where the setup was installed.
In the log file from the Setup, INSTALLDIR = c:\Program Files\Mil\mil\ (where the setup was installed).
In the log file from the QuickPatch, INSTALLDIR = c:\Programas\Mil\mil\
PS: when I ran the setup I chose Custom and changed the INSTALLDIR from c:\Programas\Mil\mil\ to c:\Program Files\Mil\mil\
Any Solution to get the real InstallDir when running the quickpatch?? | 2019-03-22 23:12:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9272152185440063, "perplexity": 12038.438074431146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202698.22/warc/CC-MAIN-20190322220357-20190323002357-00058.warc.gz"} |