| url (stringlengths 14–2.42k) | text (stringlengths 100–1.02M) | date (stringlengths 19–19) | metadata (stringlengths 1.06k–1.1k) |
|---|---|---|---|
https://blender.stackexchange.com/questions/270119/how-can-i-model-this-distorted-torus
|
# How can I model this distorted torus?
I'm trying to model a ring that is styled following this reference:
I'd like to make the distortions using geometry nodes and/or other modifiers such that they can be animated. I'm thinking the animation would probably be done by transforming the coordinate space of a texture, but I'm not sure what kind of modeling technique to follow for making the distorted hooky bits.
I wondered about doing this with shaders, i.e. a solid torus geometry with parts knocked off with transparency to achieve the effect. I feel it's probably doable with actual geometry though.
Edit: trying to improve on the vagueness, the specific thing that I'm stuck on is how to model the features circled below.
They look a bit like what the snake hook sculpting brush might be good for, but I'd like to achieve them with nodes and/or modifiers.
• Your description of what you want to achieve is rather vague... But what I can already tell you is that transparency isn't gonna work. The reason is that it will just cause holes in the mesh and not parts knocked off as you want. Jul 23 at 17:44
If I understood the question correctly, you just want to move certain vertices of a ring outward and offset them radially, right?
...At least it looks like that on the picture. If I am wrong, please be so kind and clarify the question.
If I am correct, you could solve it as follows:
Use Curve to Mesh to create a ring.
Randomly select some vertices of the mesh with the node Random Value (set to Boolean).
Move them outwards along their normals, and rotate this vector a bit around the center.
I've made it a little more extreme than necessary, just to illustrate it better.
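For anyone who wants to poke at the displacement math outside of Blender, here is a rough NumPy sketch of the same idea (plain Python, not actual Geometry Nodes; all names and constants are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Points on a unit ring in the XY plane (stand-in for the ring mesh).
n = 64
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Analogue of the Random Value (Boolean) node: select ~15% of the points.
selected = rng.random(n) < 0.15

# Push the selected points outward along their (radial) normals, with the
# offset direction rotated slightly about the center for the hooky look.
offset, twist = 0.4, 0.3
c, s = np.cos(twist), np.sin(twist)
rot = np.array([[c, -s], [s, c]])
ring[selected] += (ring[selected] @ rot.T) * offset
```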
• This is great considering how vague my question was :-}. It might be tricky to animate, but it definitely gives me a good base to start playing with! Jul 30 at 16:32
|
2022-09-27 11:03:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38209787011146545, "perplexity": 720.6464763512074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00019.warc.gz"}
|
http://www.zentralblatt-math.org/ioport/en/?q=rv:Qaiser%20Mushtaq
|
Matrix generators for the orthogonal groups. (English)
J. Symb. Comput. 25, No.3, 351-360 (1998).
Generators for the groups $\text{SL}(l,q)$, $\text{Sp}(2m,q)$, $U(l,q)$ and $\text{Sz}(q)$ have been available in computer algebra systems for some time. But until recently it has been impractical to work with these groups for large dimensions and fields. Since the orthogonal groups up to dimension 6 are isomorphic to other linear groups, only small orthogonal groups are available in this way. Hence there has been a need for matrix generators for the orthogonal groups of dimensions beyond 6. In 1962, Steinberg gave pairs of generators for all finite simple groups of Lie type. These generators are given in terms of root elements and generators for the Weyl group. In the paper under review, the authors describe the corresponding generators for the finite orthogonal groups $\Omega(l,q)$. In order to provide explicit constructions for the orthogonal groups which can be used within computer algebra packages, these generators are presented as matrices. These generators are equal to Steinberg's generators modulo the centre of the group. Their methods are easily adaptable for finding generators for $\text{SO}(l,q)$ and $O(l,q)$.
|
2013-05-21 16:41:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9046421647071838, "perplexity": 225.26957441686986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700212265/warc/CC-MAIN-20130516103012-00035-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://math.answers.com/other-math/If_you_add_a_rational_and_irrational_number_what_is_the_sum
|
# If you add a rational and irrational number what is the sum?
Wiki User
2017-11-22 12:54:00
It will always be an irrational number.

PROOF: Let x be any rational number and y be any irrational number, and assume that their sum z = x + y is rational. If x is a rational number, then (-x) is also a rational number. Then y = z + (-x) is a sum of two rational numbers, which would make y rational. But here is the contradiction: we assumed y to be irrational. Hence our assumption is wrong, and x + y is not rational. HENCE PROVED
|
2022-05-18 10:58:16
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8880876302719116, "perplexity": 428.69276843127267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521883.7/warc/CC-MAIN-20220518083841-20220518113841-00173.warc.gz"}
|
http://www.igi.tugraz.at/lehre/CI/SS08/tutorials/AdaptiveFilter/node3.html
|
# The LMS Algorithm
The LMS (least mean squares) algorithm is an approximation of the steepest descent algorithm which uses an instantaneous estimate of the gradient vector of a cost function. The estimate of the gradient is based on sample values of the tap-input vector and an error signal. The algorithm iterates over each coefficient in the filter, moving it in the direction of the approximated gradient [1].
For the LMS algorithm it is necessary to have a reference signal $d[n]$ representing the desired filter output. The difference between the reference signal and the actual output of the transversal filter (eq. 2) is the error signal

$$e[n] = d[n] - y[n] = d[n] - \hat{\mathbf{w}}^T[n]\,\mathbf{x}[n] \qquad (3)$$
A schematic of the learning setup is depicted in fig. 2.
The task of the LMS algorithm is to find a set of filter coefficients that minimize the expected value of the quadratic error signal, i.e., to achieve the least mean squared error (thus the name). The squared error and its expected value are (for simplicity of notation and perception we drop the dependence of all variables on time in eqs. 4 to 7)

$$e^2 = \left(d - \mathbf{w}^T\mathbf{x}\right)^2 = d^2 - 2d\,\mathbf{w}^T\mathbf{x} + \mathbf{w}^T\mathbf{x}\mathbf{x}^T\mathbf{w} \qquad (4)$$

$$E\{e^2\} = E\{d^2\} - 2\,\mathbf{w}^T E\{d\,\mathbf{x}\} + \mathbf{w}^T E\{\mathbf{x}\mathbf{x}^T\}\,\mathbf{w} \qquad (5)$$
Note that the squared error is a quadratic function of the coefficient vector $\mathbf{w}$, and thus has only one (global) minimum (and no other (local) minima), which theoretically could be found if the correct expected values in eq. 5 were known.
The gradient descent approach demands that the position on the error surface according to the current coefficients should be moved in the direction of the 'steepest descent', i.e., in the direction of the negative gradient of the cost function with respect to the coefficient vector:

$$\nabla_{\mathbf{w}} E\{e^2\} = -2\,E\{d\,\mathbf{x}\} + 2\,E\{\mathbf{x}\mathbf{x}^T\}\,\mathbf{w} = -2\,\mathbf{p} + 2\,\mathbf{R}\,\mathbf{w} \qquad (6)$$

The expected values in this equation, $\mathbf{p} = E\{d\,\mathbf{x}\}$, the cross-correlation vector between the desired output signal and the tap-input vector, and $\mathbf{R} = E\{\mathbf{x}\mathbf{x}^T\}$, the auto-correlation matrix of the tap-input vector, would usually be estimated using a large number of samples from $d$ and $\mathbf{x}$. In the LMS algorithm, however, a very short-term estimate is used by only taking into account the current samples: $\hat{\mathbf{p}} = d\,\mathbf{x}$ and $\hat{\mathbf{R}} = \mathbf{x}\mathbf{x}^T$, leading to an update equation for the filter coefficients

$$\mathbf{w}_{\text{new}} = \mathbf{w} + \mu\left(\mathbf{x}\,d - \mathbf{x}\mathbf{x}^T\mathbf{w}\right) = \mathbf{w} + \mu\,\mathbf{x}\,e \qquad (7)$$
Here, we introduced the 'step-size' parameter $\mu$, which controls the distance we move along the error surface. In the LMS algorithm the update of the coefficients, eq. 7, is performed at every time instant $n$:

$$\hat{\mathbf{w}}[n+1] = \hat{\mathbf{w}}[n] + \mu\,e[n]\,\mathbf{x}[n] \qquad (8)$$
## Choice of step-size
The 'step-size' parameter $\mu$ introduced in eqs. 7 and 8 controls how far we move along the error function surface at each update step. A positive $\mu > 0$ certainly has to be chosen (otherwise we would move the coefficient vector in a direction towards larger squared error). Also, $\mu$ should not be too large, since in the LMS algorithm we use a local approximation of $\mathbf{p}$ and $\mathbf{R}$ in the computation of the gradient of the cost function, and thus the cost function at each time instant may differ from an accurate global cost function.
Furthermore, too large a step-size causes the LMS algorithm to be unstable, i.e., the coefficients do not converge to fixed values but oscillate. Closer analysis [1] reveals that the upper bound on $\mu$ for stable behavior of the LMS algorithm depends on the largest eigenvalue $\lambda_{\max}$ of the tap-input auto-correlation matrix $\mathbf{R}$ and thus on the input signal. For stable adaptation behavior the step-size has to be

$$0 < \mu < \frac{2}{\lambda_{\max}} \qquad (9)$$

Since we still do not want to compute an estimate of $\mathbf{R}$ and its eigenvalues, we first approximate $\lambda_{\max} \le \mathrm{tr}(\mathbf{R})$ (where $\mathrm{tr}(\mathbf{R})$ is the trace of matrix $\mathbf{R}$, i.e., the sum of the elements on its diagonal), and then, in the same way as we approximated the expected values in the cost function, $\mathrm{tr}(\mathbf{R}) \approx \mathbf{x}^T[n]\,\mathbf{x}[n]$, the tap-input power at the current time $n$. Hence, the upper bound on $\mu$ for stable behavior depends on the signal power:

$$0 < \mu < \frac{2}{\mathbf{x}^T[n]\,\mathbf{x}[n]}$$
## Summary of the LMS algorithm
1. Filter operation: $y[n] = \hat{\mathbf{w}}^T[n]\,\mathbf{x}[n]$
2. Error calculation: $e[n] = d[n] - y[n]$, where $d[n]$ is the desired output
3. Coefficient adaptation: $\hat{\mathbf{w}}[n+1] = \hat{\mathbf{w}}[n] + \mu\,e[n]\,\mathbf{x}[n]$
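A minimal NumPy sketch of this three-step loop (the function and variable names are my own; the step-size mu must respect the stability bound of eq. 9):

```python
import numpy as np

def lms_filter(x, d, num_taps, mu):
    """Adapt a transversal filter to input x and reference signal d via LMS."""
    w = np.zeros(num_taps)                     # coefficient vector w-hat
    y = np.zeros(len(x))                       # filter output
    e = np.zeros(len(x))                       # error signal
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]  # tap-input vector
        y[n] = w @ u                           # 1. filter operation (eq. 2)
        e[n] = d[n] - y[n]                     # 2. error calculation (eq. 3)
        w = w + mu * e[n] * u                  # 3. coefficient adaptation (eq. 8)
    return y, e, w

# Example: identify an unknown 4-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])            # the unknown system
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
_, _, w = lms_filter(x, d, num_taps=4, mu=0.05)
print(np.round(w, 2))                          # converges close to h
```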
|
2017-11-20 09:33:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9029382467269897, "perplexity": 525.5067020617097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805977.0/warc/CC-MAIN-20171120090419-20171120110419-00518.warc.gz"}
|
https://doc.sagemath.org/html/en/reference/modules/sage/modules/quotient_module.html
|
# Quotients of free modules#
AUTHORS:
• William Stein (2009): initial version
• Kwankyu Lee (2022-05): added quotient module over domain
class sage.modules.quotient_module.FreeModule_ambient_field_quotient(domain, sub, quotient_matrix, lift_matrix, inner_product_matrix=None)#
A quotient $$V/W$$ of two vector spaces as a vector space.
To obtain $$V$$ or $$W$$ use self.V() and self.W().
EXAMPLES:
sage: k.<i> = QuadraticField(-1)
sage: A = k^3; V = A.span([[1,0,i], [2,i,0]])
sage: W = A.span([[3,i,i]])
sage: U = V/W; U
Vector space quotient V/W of dimension 1 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I where
V: Vector space of degree 3 and dimension 2 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I
Basis matrix:
[ 1 0 i]
[ 0 1 -2]
W: Vector space of degree 3 and dimension 1 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I
Basis matrix:
[ 1 1/3*i 1/3*i]
sage: U.V()
Vector space of degree 3 and dimension 2 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I
Basis matrix:
[ 1 0 i]
[ 0 1 -2]
sage: U.W()
Vector space of degree 3 and dimension 1 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I
Basis matrix:
[ 1 1/3*i 1/3*i]
sage: U.quotient_map()
Vector space morphism represented by the matrix:
[ 1]
[3*i]
Domain: Vector space of degree 3 and dimension 2 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I
Basis matrix:
[ 1 0 i]
[ 0 1 -2]
Codomain: Vector space quotient V/W of dimension 1 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I where
V: Vector space of degree 3 and dimension 2 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I
Basis matrix:
[ 1 0 i]
[ 0 1 -2]
W: Vector space of degree 3 and dimension 1 over Number Field in i with defining polynomial x^2 + 1 with i = 1*I
Basis matrix:
[ 1 1/3*i 1/3*i]
sage: Z = V.quotient(W)
sage: Z == U
True
We create three quotient spaces and compare them:
sage: A = QQ^2
sage: V = A.span_of_basis([[1,0], [1,1]])
sage: W0 = V.span([V.1, V.0])
sage: W1 = V.span([V.1])
sage: W2 = V.span([V.1])
sage: Q0 = V/W0
sage: Q1 = V/W1
sage: Q2 = V/W2
sage: Q0 == Q1
False
sage: Q1 == Q2
True
V()#
Given this quotient space $$Q = V/W$$, return $$V$$.
EXAMPLES:
sage: M = QQ^10 / [list(range(10)), list(range(2,12))]
sage: M.cover()
Vector space of dimension 10 over Rational Field
W()#
Given this quotient space $$Q = V/W$$, return $$W$$.
EXAMPLES:
sage: M = QQ^10 / [list(range(10)), list(range(2,12))]
sage: M.relations()
Vector space of degree 10 and dimension 2 over Rational Field
Basis matrix:
[ 1 0 -1 -2 -3 -4 -5 -6 -7 -8]
[ 0 1 2 3 4 5 6 7 8 9]
cover()#
Given this quotient space $$Q = V/W$$, return $$V$$.
EXAMPLES:
sage: M = QQ^10 / [list(range(10)), list(range(2,12))]
sage: M.cover()
Vector space of dimension 10 over Rational Field
lift(x)#
Lift element of this quotient $$V / W$$ to $$V$$ by applying the fixed lift homomorphism.
The lift is a fixed homomorphism.
EXAMPLES:
sage: M = QQ^3 / [[1,2,3]]
sage: M.lift(M.0)
(1, 0, 0)
sage: M.lift(M.1)
(0, 1, 0)
sage: M.lift(M.0 - 2*M.1)
(1, -2, 0)
lift_map()#
Given this quotient space $$Q = V / W$$, return a fixed choice of linear homomorphism (a section) from $$Q$$ to $$V$$.
EXAMPLES:
sage: M = QQ^3 / [[1,2,3]]
sage: M.lift_map()
Vector space morphism represented by the matrix:
[1 0 0]
[0 1 0]
Domain: Vector space quotient V/W of dimension 2 over Rational Field where
V: Vector space of dimension 3 over Rational Field
W: Vector space of degree 3 and dimension 1 over Rational Field
Basis matrix:
[1 2 3]
Codomain: Vector space of dimension 3 over Rational Field
quotient_map()#
Given this quotient space $$Q = V / W$$, return the natural quotient map from $$V$$ to $$Q$$.
EXAMPLES:
sage: M = QQ^3 / [[1,2,3]]
sage: M.quotient_map()
Vector space morphism represented by the matrix:
[ 1 0]
[ 0 1]
[-1/3 -2/3]
Domain: Vector space of dimension 3 over Rational Field
Codomain: Vector space quotient V/W of dimension 2 over Rational Field where
V: Vector space of dimension 3 over Rational Field
W: Vector space of degree 3 and dimension 1 over Rational Field
Basis matrix:
[1 2 3]
sage: M.quotient_map()( (QQ^3)([1,2,3]) )
(0, 0)
relations()#
Given this quotient space $$Q = V/W$$, return $$W$$.
EXAMPLES:
sage: M = QQ^10 / [list(range(10)), list(range(2,12))]
sage: M.relations()
Vector space of degree 10 and dimension 2 over Rational Field
Basis matrix:
[ 1 0 -1 -2 -3 -4 -5 -6 -7 -8]
[ 0 1 2 3 4 5 6 7 8 9]
class sage.modules.quotient_module.QuotientModule_free_ambient(module, sub)#
Quotients of ambient free modules by a submodule.
INPUT:
• module – an ambient free module
• sub – a submodule of module
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: M.quotient_module(N)
Quotient module by Submodule of Ambient free module of rank 2 over
the integral domain Multivariate Polynomial Ring in x, y, z over Rational Field
Generated by the rows of the matrix:
[x - y z]
[ y*z x*z]
V()#
Given this quotient space $$Q = V/W$$, return $$V$$.
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M.quotient_module(N)
sage: Q.cover() is M
True
W()#
Given this quotient space $$Q = V/W$$, return $$W$$.
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M.quotient_module(N)
sage: Q.relations() is N
True
ambient_module()#
Return self, since self is ambient.
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M.quotient_module(N)
sage: Q.ambient_module() is Q
True
cover()#
Given this quotient space $$Q = V/W$$, return $$V$$.
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M.quotient_module(N)
sage: Q.cover() is M
True
free_cover()#
Given this quotient space $$Q = V/W$$, return the free module that covers $$V$$.
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M / N
sage: NQ = Q.submodule([Q([1,x])])
sage: QNQ = Q / NQ
sage: QNQ.free_cover() is Q.free_cover() is M
True
Note that this is different than the immediate cover:
sage: QNQ.cover() is Q
True
sage: QNQ.cover() is QNQ.free_cover()
False
free_relations()#
Given this quotient space $$Q = V/W$$, return the submodule that generates all relations of $$Q$$.
When $$V$$ is a free module, then this returns $$W$$. Otherwise this returns the union of $$W$$ lifted to the cover of $$V$$ and the relations of $$V$$ (repeated until $$W$$ is a submodule of a free module).
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M / N
sage: NQ = Q.submodule([Q([1, x])])
sage: QNQ = Q / NQ
sage: QNQ.free_relations()
Submodule of Ambient free module of rank 2 over the integral domain Multivariate Polynomial Ring in x, y, z over Rational Field
Generated by the rows of the matrix:
[ 1 x]
[x - y z]
[ y*z x*z]
Note that this is different than the defining relations:
sage: QNQ.relations() is NQ
True
sage: QNQ.relations() == QNQ.free_relations()
False
gen(i=0)#
Return the $$i$$-th generator of this module.
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M.quotient_module(N)
sage: Q.gen(0)
(1, 0)
gens()#
Return the generators of this module.
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M.quotient_module(N)
sage: Q.gens()
((1, 0), (0, 1))
relations()#
Given this quotient space $$Q = V/W$$, return $$W$$.
EXAMPLES:
sage: S.<x,y,z> = PolynomialRing(QQ)
sage: M = S**2
sage: N = M.submodule([vector([x - y, z]), vector([y*z, x*z])])
sage: Q = M.quotient_module(N)
sage: Q.relations() is N
True
|
2022-12-07 12:53:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7233559489250183, "perplexity": 4369.019277712459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00743.warc.gz"}
|
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tvp&paperid=15&option_lang=eng
|
Teor. Veroyatnost. i Primenen., 2007, Volume 52, Issue 1, Pages 190–199 (Mi tvp15)
This article is cited in 6 scientific papers (total in 6 papers)
Short Communications
On the continuity of weak solutions of backward stochastic differential equations
R. Buckdahna, H.-J. Engelbertb
a Laboratoire des Mathématiques, Université de Bretagne Occidentale, Brest, France
b Institut für Stochastik, Friedrich Schiller-Universität, Jena, Germany
Abstract: In the present paper, the notion of a weak solution of a general backward stochastic differential equation (BSDE), which was introduced by the authors and A. Rǎşcanu in [Theory Probab. Appl., 49 (2005), pp. 16–50], will be discussed. The relationship between continuity of solutions, pathwise uniqueness, uniqueness in law, and existence of a pathwise unique strong solution is investigated. The main result asserts that if all weak solutions of a BSDE are continuous, then the solution is pathwise unique. One should notice that this is a specific result for BSDEs and there is of course no counterpart for (forward) stochastic differential equations (SDEs). As a consequence, if a weak solution exists and all solutions are continuous, then there exists a pathwise unique solution and this solution is strong. Moreover, if the driving process is a continuous local martingale satisfying the previsible representation property, then the converse is also true. In other words, the existence of discontinuous solutions to a BSDE is a natural phenomenon, whenever pathwise uniqueness or, in particular, uniqueness in law is not satisfied. Examples of discontinuous solutions of a certain BSDE were already given in [R. Buckdahn and H.-J. Engelbert, Proceedings of the Fourth Colloquium on Backward Stochastic Differential Equations and Their Applications, to appear]. This was the motivation for the present paper which is aimed at exploring the general situation.
Keywords: backward stochastic differential equations, weak solutions, strong solutions, uniqueness in law, pathwise uniqueness, continuity of solutions, discontinuity of solutions.
DOI: https://doi.org/10.4213/tvp15
English version:
Theory of Probability and its Applications, 2008, 52:1, 152–160
Received: 07.09.2006
Citation: R. Buckdahn, H.-J. Engelbert, “On the continuity of weak solutions of backward stochastic differential equations”, Teor. Veroyatnost. i Primenen., 52:1 (2007), 190–199; Theory Probab. Appl., 52:1 (2008), 152–160
Citation in format AMSBIB
\Bibitem{BucEng07} \by R.~Buckdahn, H.-J.~Engelbert \paper On the continuity of weak solutions of backward stochastic differential equations \jour Teor. Veroyatnost. i Primenen. \yr 2007 \vol 52 \issue 1 \pages 190--199 \mathnet{http://mi.mathnet.ru/tvp15} \crossref{https://doi.org/10.4213/tvp15} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=2354579} \zmath{https://zbmath.org/?q=an:1153.60032} \elib{http://elibrary.ru/item.asp?id=9466888} \transl \jour Theory Probab. Appl. \yr 2008 \vol 52 \issue 1 \pages 152--160 \crossref{https://doi.org/10.1137/S0040585X9798292X} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000254828600012} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-42549161055}
This publication is cited in the following articles:
1. Yannacopoulos A.N., Frangos N.E., Karatzas I., “Wiener chaos solutions for linear backward stochastic evolution equations”, SIAM J. Math. Anal., 43:1 (2011), 68–113
2. Ma J., Zhang J., “On weak solutions of forward-backward SDEs”, Probab. Theory Related Fields, 151:3-4 (2011), 475–507
3. Liang G., Lyons T., Qian Zh., “Backward stochastic dynamics on a filtered probability space”, Ann. Probab., 39:4 (2011), 1422–1448
4. Bouchemella N. de Fitte P.R., “Weak Solutions of Backward Stochastic Differential Equations with Continuous Generator”, Stoch. Process. Their Appl., 124:1 (2014), 927–960
5. Carmona R. Delarue F., “Probabilistic Theory of Mean Field Games With Applications i: Mean Field Fbsdes, Control, and Games”, Probabilistic Theory of Mean Field Games With Applications i: Mean Field Fbsdes, Control, and Games, Probability Theory and Stochastic Modelling, 83, Springer International Publishing Ag, 2018, 1–713
6. Carmona R. Delarue F., “Probabilistic Theory of Mean Field Games With Applications II: Mean Field Games With Common Noise and Master Equations”, Probabilistic Theory of Mean Field Games With Applications II: Mean Field Games With Common Noise and Master Equations, Probability Theory and Stochastic Modelling, 84, Springer International Publishing Ag, 2018, 1–697
|
2020-01-17 15:28:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25222495198249817, "perplexity": 3988.9530501874365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589861.0/warc/CC-MAIN-20200117152059-20200117180059-00086.warc.gz"}
|
https://www.physicsforums.com/insights/tag/relativity/
|
# Relativity Articles
In plain English, Special Relativity says:
1. The laws of the universe in a non-accelerating reference frame are the same everywhere.
2. The speed of light is constant.
## Articles for: relativity
### Geodesic Congruences in FRW, Schwarzschild and Kerr Spacetimes
Introduction The theory of geodesic congruences is extensively covered in many textbooks (see References); what follows in the introduction is a brief…
### Tolman Law in a Nutshell
The Tolman law describes how the temperature in a fixed gravitational field depends on the position (see https://arxiv.org/abs/1803.04106 for a pedagogic…
### The Electric Field Seen by an Observer: A Relativistic Calculation with Tensors
This Insight was inspired by the discussion in "electric field seen by an observer in motion", which tries to understand the relation between two expressions:…
### Is Pressure A Source Of Gravity?
In a previous series of articles, I posed the question "Does Gravity Gravitate?" and explained how, depending on how you interpreted the terms "gravity"…
### Slowly Lowering an Object in a Static, Spherically Symmetric Spacetime
In the first two articles in this series, we looked at the Einstein Field Equation and Maxwell's Equations in a static, spherically symmetric spacetime.…
### Maxwell’s Equations in a Static, Spherically Symmetric Spacetime
In the first article in this series, we looked at the Einstein Field Equations in a static, spherically symmetric spacetime. In this article, we are going…
### The Einstein Field Equation in a Static, Spherically Symmetric Spacetime
This will be the first of several articles which will provide, for reference, useful equations for static, spherically symmetric spacetimes. This is a…
### Learn Relativity Using the Bondi K-calculus
Although Special Relativity was formulated by Einstein (1905), and given a spacetime interpretation by Minkowski (1908) [which helped make special relativity…
### Relativity Variables: Velocity, Doppler-Bondi k, and Rapidity
Traditional presentations of special relativity place emphasis on "velocity", which of course has an important physical interpretation... carried over…
### Struggles with the Continuum: Spacetime Conclusion
We've been looking at how the continuum nature of spacetime poses problems for our favorite theories of physics --- problems with infinities.…
### Struggles with the Continuum: General Relativity
Combining electromagnetism with relativity and quantum mechanics led to QED. Last time we saw the immense struggles with the continuum this…
### Learn Orbital Precession in the Schwarzschild and Kerr Metrics
The Schwarzschild Metric A Lagrangian that can be used to describe geodesics is $F = g_{\mu\nu}v^\mu v^\nu$, where $v^\mu = dx^\mu/ds$…
A spacetime is often described in terms of a tetrad field, that is, by giving a set of basis vectors at each point. Let the vectors of the tetrad be denoted…
### Learn About Relativity on Rotated Graph Paper
This Insight is a follow-up to my earlier tutorial Insight (Spacetime Diagrams of Light Clocks). I gave it a different name because I am placing more…
### Learn About Spacetime Diagrams of Light Clocks
We demonstrate a method for constructing spacetime diagrams for special relativity on graph paper that has been rotated by 45 degrees. Many quantitative…
### Struggles with the Continuum – Relativity and Quantum
In this series, we're looking at mathematical problems that arise in physics due to treating spacetime as a continuum---basically, problems…
### Why Is the Speed of Light the Same in All Frames of Reference?
The first thing to worry about here is that when you ask someone for a satisfying answer to a "why" question, you have to define what you think would be…
### A Geometrical View of Time Dilation and the Twin Paradox
Based on the number of questions we receive on the topic at Physics Forums, there is a lot of confusion in the general public about how time dilation works…
### Does Gravity Gravitate: The Wave
In the first two posts in this series, we looked at different ways of interpreting the question "does gravity gravitate?" We left off at…
### Can I Send a Signal Faster than Light by Pushing a Rigid Rod?
One common proposal for achieving faster than light communication is to use a long perfectly rigid object and mechanically send signals to the other end…
### PF’s policy on Lorentz Ether Theory and Block Universe
What is the PF's policy on Lorentz Ether Theory and Block Universe? Debates about the superiority or "truth" of modern Lorentz Ether Theory (LET) and…
### Struggles With the Continuum: Point Particles and the Electromagnetic Field
In these posts, we're seeing how our favorite theories of physics deal with the idea that space and time are a continuum, with points described…
### Why Does C Have a Particular Value, and Can It Change?
Short answer: Because c (speed of light) has units, its value is what it is only because of our choice of units, and there is no meaningful way to test…
### How Fast Do Changes in the Gravitational Field Propagate?
General relativity predicts that disturbances in the gravitational field propagate as gravitational waves, and that low-amplitude gravitational waves travel…
### Do Photons have Mass?
Do photons have mass? The quick answer: NO. However, this is where it gets a bit confusing for most people. This is because in physics, there are several…
### Learn the Relativistic Work-Kinetic Energy Theorem
I was bothered for a long time by the reasons for the relativistic validity of the work-kinetic energy relation ##\Delta E=Fd##, which holds without any…
### Learn A Short Proof of Birkhoff’s Theorem
Birkhoff's theorem is a very useful result in General Relativity, and pretty much any textbook has a proof of it. The one I first read was in Misner, Thorne,…
|
2022-10-06 08:14:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6156628727912903, "perplexity": 879.816109811029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00443.warc.gz"}
|
https://economics.stackexchange.com/questions/21551/what-are-the-immediate-effects-of-changing-a-goods-price-on-consumer-and-prod
|
# What are the (immediate) effects of changing a good's price on consumer and producer surplus?
Basically, I'm trying to understand why the total surplus is maximized at the equilibrium and what happens if the price isn't at the equilibrium.
Say the price of a good is the equilibrium price. Now make it more expensive. I know that after a while the price will adjust to the equilibrium again, but what will happen before that? I'm trying to use an example but there are things I can't fill in:
The equilibrium price of the good is \$20 and the equilibrium quantity is 20. Given straight-line demand and supply curves and a maximum buyer valuation of \$40, the consumer surplus is $\frac{1}{2}(40-20)\cdot20=200$ and the producer surplus is $\frac{1}{2}(20-0)\cdot20=200$.
Now make the good \$30. The quantity demanded falls to 15 and the quantity supplied rises to 25. Now the consumer surplus is the triangle I marked in blue, $\frac{1}{2}(40-30)\cdot 15=75$ (because only 15 items will be sold), but what about the producer surplus? Is it the triangle I marked in red? I think not, because that seems to imply that the price is only \$10. But if not, what is the producer surplus, and why?
Thanks in advance and my apologies for the possibly confusing phrasing.
• This question boils down to "what is producer's surplus?" which, without the homework element is a much better question in my opinion. – Giskard Apr 18 '18 at 9:38
• Maybe you could think about who ends up with the rectangular chunk between p=10 and p=30. – Dan Apr 18 '18 at 11:12
• @Dan I don't really know. If the price rise were due to a tax, that area would be the tax revenue, but I don't know what the area means when there is just a price change not due to taxes. – Sudera Apr 18 '18 at 11:20
• @Sudera It seems like you're close. If the price rise were due to a tax, that chunk of money would go to the government. If it's just because the producer put the price up, where does that money go to? – Dan Apr 18 '18 at 11:26
• @Dan Oh I guess it just goes to the producer then. Thanks for your help! – Sudera Apr 18 '18 at 11:45
This four page reference shows how producer surplus is defined and calculated using integral calculus:
https://www.math.ubc.ca/~malabika/teaching/ubc/spring11/math105/surplus.pdf
As quantity increases from q = 0 to q = qe (the equilibrium quantity), the price rises at each point on the supply curve S(q).
The equilibrium point (pe, qe) is found independently as the intersection of the supply curve S(q) and the demand curve D(q).
The producer surplus is defined as the area of the rectangle pe × qe minus the area under the rising supply curve S(q) as quantity goes up from q = 0 to q = qe. The calculation stops when you reach qe, so data points to the right of the equilibrium point have no meaning with respect to the definition of producer surplus.
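To make the definition concrete, here is a small numerical sketch using the numbers from the question (the linear curves D(q) = 40 - q and S(q) = q are my own assumption, chosen to reproduce those figures):

```python
from scipy.integrate import quad

# Linear demand and supply intersecting at the equilibrium (qe, pe) = (20, 20).
D = lambda q: 40 - q
S = lambda q: q
qe, pe = 20, 20

consumer_surplus, _ = quad(lambda q: D(q) - pe, 0, qe)  # area below D, above pe
producer_surplus, _ = quad(lambda q: pe - S(q), 0, qe)  # area above S, below pe
print(consumer_surplus, producer_surplus)               # 200.0 200.0
```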
I don't know why the terms consumer surplus and producer surplus are used in this context. In a counterfactual world the limits of the calculation might be less than or greater than the equilibrium point, and then consumers would spend less or more, and producers would sell less or more; but in those counterfactual models one must move the supply and demand curves to a different point of intersection to recognize sales at the same supply and demand price.
In reality, goods sell at price times quantity. If the producer drives down the price sufficiently, consumers tend to buy in greater quantity. Early adopters of high-definition TV or electric cars are willing to pay more, so sales occur at higher prices and lower quantities. Producers use the learning curve and invest in fixed costs to drive down prices and increase quantities produced, and demand goes up as prices come down over time. All during this process the supply and demand curves are moving around and do not represent a static calculus solution.
|
2020-10-22 09:34:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5480778813362122, "perplexity": 734.9505612254934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879362.3/warc/CC-MAIN-20201022082653-20201022112653-00316.warc.gz"}
|
https://openor.blog/tags/statistics
|
# Statistics
## Analysis on a plane
I’m on a plane! No-frills airlines don’t have wifi, so I’ve downloaded a bunch of datasets and a bunch of libraries and seen if I can do a bit of analysis while in the air. Yes, yes, I can.
## May Day special - union membership UK
From before I hit publish on part 2, I had people telling me that the graphs were cool and they wanted their version. I briefly considered making a page with 404 graphs on it: 2 graphs × 2 sexes × 101 age groups. Then I remembered that this is a webpage, not a static document, and Javascript is a thing. I discovered the rather useful htmlwidget plotly, which adds interactivity and drops some of the painful aspects of ggplot.
|
2019-08-24 08:43:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3093404769897461, "perplexity": 2059.7348485226375}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320156.86/warc/CC-MAIN-20190824084149-20190824110149-00381.warc.gz"}
|
https://ask.sagemath.org/questions/26270/revisions/
|
# Revision history
### labeling 3d plots
I want to add text to my 3d plots. The first 'show' below prints the equation correctly in a 2d plot. The second 'show' does not print the equation correctly in a 3d plot.
Is there a fix to make it work for the 3d plot?
Thanks, George Craig
t = text("${M}_x = {0}_x$", (1,1))
show(t)
t = text3d("${M}_x = {0}_x$", (1,1,1))
show(t)
|
2023-01-27 04:06:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24771708250045776, "perplexity": 6877.3977468617595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00257.warc.gz"}
|
https://amsi.org.au/ESA_middle_years/Year6/Year6_1cT/Year6_1cT_R3_pg1.html
|
## Content description
Multiply and divide decimals by powers of 10 (ACMNA130)
Source: Australian Curriculum, Assessment and Reporting Authority (ACARA)
### Multiply and divide decimals by powers of 10
When a number is multiplied by a power of 10, each digit is multiplied by the same power of 10:
\begin{align*} 25.89 \times 10 &= 20 \times 10 + 5 \times 10 + 0.8 \times 10 + 0.09 \times 10 \\ &= 200 + 50 + 8 + 0.9 \\ &= 258.9\end{align*}
When a number is divided by a power of 10, each digit is divided by the same power of 10:
\begin{align*} 25.89 \div 10 &= 20 \div 10 + 5 \div 10 + \frac{8}{10} \div 10 + \frac{9}{100} \div 10 \\ &= 2 + \frac{5}{10} + \frac{8}{100} + \frac{9}{1000} \\ &= 2.589 \end{align*}
#### Multiplying whole numbers by 10, 100 and 1000
When 14 is multiplied by 10, the 1 in the tens place is multiplied by 10, and the 4 in the ones place is multiplied by 10.
10 × 10 = 100, and 4 × 10 = 40, and we place a zero in the ones place to say that there are 'no ones'.
14 × 10 = 140
If we multiply 14 by 100:
14 × 100 = 1400
We can extend the pattern:
14 × 10 = 140
14 × 100 = 1400
14 × 1000 = 14 000
14 × 10 000 = 140 000
and so on.
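The pattern is easy to check with a short script; this sketch uses Python's Decimal type simply to avoid binary floating-point artifacts:

```python
from decimal import Decimal

n = Decimal("25.89")
for power in (10, 100, 1000):
    print(n * power, n / power)
# 258.90 2.589
# 2589.00 0.2589
# 25890.00 0.02589
```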
|
2023-02-03 14:33:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000077486038208, "perplexity": 466.60440985831144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500056.55/warc/CC-MAIN-20230203122526-20230203152526-00511.warc.gz"}
|
https://www.mathwarehouse.com/quadratic/the-quadratic-formula.php
|
# The Quadratic Formula
What it is, what it does, and how to use it
#### What is the Quadratic Formula?
The quadratic formula is:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
#### What does this formula tell us?
The quadratic formula calculates the solutions of any quadratic equation.
#### What is a quadratic equation?
A quadratic equation is an equation that can be written as ax² + bx + c = 0, where a ≠ 0. In other words, a quadratic equation must have a squared term as its highest power.
#### Examples of quadratic equations
$$y = 5x^2 + 2x + 5 \\ y = 11x^2 + 22 \\ y = x^2 - 4x + 5 \\ y = -x^2 + 5$$
#### Non Examples
$$y = 11x + 22 \\ y = x^3 -x^2 +5x +5 \\ y = 2x^3 -4x^2 \\ y = -x^4 + 5$$
#### Ok, but what is a 'solution'?
Well, a solution can be thought of in two ways:
Algebra: For any quadratic equation of the form f(x) = ax² + bx + c, a solution is a value of x for which f(x) = 0. Geometry: A solution is where the graph of the quadratic equation (a parabola) intersects the x-axis. This, of course, only applies to real solutions.
### Example of the quadratic formula to solve an equation
Use the formula to solve the quadratic equation: $$y = x^2 + 2x + 1$$.
Just substitute a, b, and c into the general formula:
$$a = 1 \\ b = 2 \\ c = 1$$
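Substituting these values gives

$$x = \frac{-2 \pm \sqrt{2^2 - 4 \cdot 1 \cdot 1}}{2 \cdot 1} = \frac{-2 \pm 0}{2} = -1$$

so the parabola touches the x-axis at the single (repeated) solution x = -1.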
Below is a picture representing the graph of y = x² + 2x + 1 and its solution.
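If you would like to check solutions by computer, here is a small Python sketch of the formula (the function name is arbitrary; cmath is used so that complex solutions are handled too):

```python
import cmath

def solve_quadratic(a, b, c):
    """Return the two roots of a*x**2 + b*x + c = 0."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(solve_quadratic(1, 2, 1))   # (-1+0j, -1+0j): the example above
print(solve_quadratic(1, -2, 1))  # practice 1: the repeated root x = 1
```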
### Quadratic Formula Song
A catchy way to remember the quadratic formula is this song (pop goes the weasel).
### Practice Problems
##### Practice 1
In this quadratic equation, y = x² − 2x + 1 and its solution:
• a = 1
• b = − 2
• c = 1
##### Practice 2
In this quadratic equation,y = x² − x − 2 and its solution:
• a = 1
• b = − 1
• c = − 2
##### Practice 3
In this quadratic equation, y = x² − 1 and its solution:
• a = 1
• b = 0
• c = −1
##### Practice 4
In this quadratic equation, y = x² + 2x − 3 and its solution:
• a = 1
• b = 2
• c = −3
Below is a picture of the graph of the quadratic equation and its two solutions.
##### Practice 5
In this quadratic equation, y = x² + 4x − 5 and its solution:
• a = 1
• b = 4
• c = −5
##### Practice 6
In this quadratic equation,y = x² − 4x + 5 and its solution:
• a = 1
• b = −4
• c = 5
Below is a picture of this quadratic's graph.
|
2021-09-27 21:24:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7124936580657959, "perplexity": 1504.0787479324665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00480.warc.gz"}
|
https://eccc.weizmann.ac.il/report/2020/126/
|
### Paper:
TR20-126 | 19th August 2020 03:54
#### Indistinguishability Obfuscation from Well-Founded Assumptions
Authors: Aayush Jain, Huijia Lin, Amit Sahai
Publication: 19th August 2020 14:37
Abstract:
In this work, we show how to construct indistinguishability obfuscation from subexponential hardness of four well-founded assumptions. We prove:
Let $\tau \in (0,\infty), \delta \in (0,1), \epsilon \in (0,1)$ be arbitrary constants. Assume sub-exponential security of the following assumptions, where $\lambda$ is a security parameter, and the parameters $\ell,k,n$ below are large enough polynomials in $\lambda$:
- The SXDH assumption on asymmetric bilinear groups of a prime order $p = O(2^\lambda)$,
- The LWE assumption over $\mathbb{Z}_{p}$ with subexponential modulus-to-noise ratio $2^{k^\epsilon}$, where $k$ is the dimension of the LWE secret,
- The LPN assumption over $\mathbb{Z}_p$ with polynomially many LPN samples and error rate $1/\ell^\delta$, where $\ell$ is the dimension of the LPN secret,
- The existence of a Boolean PRG in $\mathsf{NC}^0$ with stretch $n^{1+\tau}$,
Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists.
|
2022-05-16 08:29:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9591394066810608, "perplexity": 3629.713427767834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00717.warc.gz"}
|
https://www.physicsforums.com/threads/solutions-to-newtonian-gravity.205469/
|
# Solutions to newtonian gravity
1. Dec 20, 2007
### llamascience
I've only just completed high school, but I have a decent grasp on basic undergrad maths/physics. After hours and hours of paper and mental calculations, I still can't see a way of solving Newton's law of gravitation for the path of a body given initial position/velocity, i.e. the second order non-linear differential equation r'' = -GM/(r^2).
Can somebody please put my mind at ease by telling me that it cannot be done, or showing me how? 'Twould be much appreciated, as I am getting sick of pages that go from Newton's law to conic sections in one jump.
2. Dec 20, 2007
### Parlyne
It can be done, but it's rather long. The first thing you need to keep in mind is that Newton's 2nd law is a vector equation. So, what you should have there is $\frac{d^2\vec{r}}{dt^2} = \frac{-GM}{r^2}\hat{r}$, where $\vec{r} = r\hat{r}$. Then, remember that since the radial direction changes with angular coordinates, $\hat{r}$ has a non-zero time derivative. From these considerations and a good choice of angular coordinates, you should be able to separate this down to two equations in r and $\phi$ and a constraint that all motion will be in a plane. From the two equations, you should be able to get a differential equation for r and one that lets you determine $\phi$ once you know r. The r equation can be solved by changing variables to $u = \frac{1}{r}$. Hope this helps.
3. Dec 20, 2007
### nicksauce
Try an energy approach (for the 1 dimensional case)
$$(\frac{dx}{dt})^2 = \frac{C}{x}$$
Take the square root of both sides, separate the variables, and it should be solvable.
4. Dec 20, 2007
### llamascience
Wait up there, is that KE = PE??
5. Dec 20, 2007
### nicksauce
Essentially. More correct would be KE + PE = constant, but solving for the C in the expression I gave shouldn't be hard.
6. Dec 20, 2007
### llamascience
Maybe it's 4 in the morning, or maybe I'm just missing the point entirely, but is C supposed to be a constant? Please explain in more detail if possible.
7. Dec 20, 2007
### nicksauce
Yes it is supposed to be a constant.
Suppose your particle starts from rest, at a distance x_0. Then you have
$$\frac{1}{2}mv^2 - \frac{GM_1M_2}{x} = -\frac{GM_1M_2}{x_0}$$
You can then reduce it into the form
$$v^2 = A + \frac{B}{x}$$
Where A and B are constants, and then solve by separating x and t.
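A quick numerical sanity check of that energy relation (a sketch with assumed units GM = 1, starting from rest at x0 = 1; none of these numbers come from the thread):

```python
import numpy as np
from scipy.integrate import solve_ivp

GM, x0 = 1.0, 1.0
A, B = -2 * GM / x0, 2 * GM          # from (1/2)v^2 - GM/x = -GM/x0

def rhs(t, y):
    x, v = y
    return [v, -GM / x**2]           # 1-D Newtonian gravity: x'' = -GM/x^2

sol = solve_ivp(rhs, [0, 0.4], [x0, 0.0], rtol=1e-10)
x, v = sol.y[:, -1]
print(v**2, A + B / x)               # the two sides agree along the fall
```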
8. Dec 21, 2007
### Shooting Star
I cannot do both, since it can be done, and right now I can't show you how. I may be able to do it, say, tomorrow. Would you like to know the most general approach? Are you comfortable with vectors, just the basic differentiation and stuff like that?
9. Dec 22, 2007
### llamascience
I'd hope so, or I did not deserve a 97% in my calc course :P
If you don't mind doing so, I'd like a detailed explanation of how to go about this. It's been haunting my mind for the past year, and all the replies I've been given have been helpful, but unsatisfactory.
10. Dec 22, 2007
### Shooting Star
I'll start with very basic stuff, since I don't know what you know and what approach you want. We'll proceed in small steps.
The general eqn of motion would be mr'' = -GMmr/r^3. Can you prove from this that the motion is in a plane, and also that there is a conserved vector, that is, a vector whose magnitude and direction is constant?
For this, you don't require the inverse square law, but any central force, that is, a force of the form f(r)r. Note that r is the magnitude of r.
[Hint (may be ignored): d/dt(rXr') = rXr''.]
11. Dec 23, 2007
### llamascience
Yeah, the vector product of the position vector and the velocity is conserved, and as this is always perpendicular to the velocity, the motion must be in a plane.
12. Dec 23, 2007
### Shooting Star
(EDIT: I would like a one-line justification for your above statement. That is to say, why is the product of the position vector and the velocity conserved?)
Excellent. I'll just elaborate on this a bit. To review what you've said, we got,
d/dt(rXmv) = 0 => rXp = L, where p = mv is the linear momentum, and L is the constant vector. L is called the angular momentum about the origin.
Since r.L = 0, r must be always perp to L, and so r must lie in a plane.
(I've got to run now, but we'll finish off by next one or two posts.)
Last edited: Dec 23, 2007
13. Dec 24, 2007
### llamascience
aahhh, gotcha. I knew I'd seen a similar form before, just couldn't put my finger on it
we never covered angular mechanics to any significant detail in TEE physics and for some reason vectors were practically left out of the course. Still, I understand the basics from outside reading
JUSTIFICATION: so as you pointed out $$\frac{\partial}{\partial t}(\vec{r}\times\dot{\vec{r}}) = \vec{r}\times\ddot{\vec{r}}$$
so $$\frac{\partial}{\partial t}(\vec{r}\times\dot{\vec{r}}) = \vec{r}\times f(r)\hat{\vec{r}} = f(r)\vec{0} = \vec{0}$$
thus the vector $$\vec{r}\times\dot{\vec{r}}$$ is time-invariant. Then, like you say, the position and velocity are perp. to this, so the motion is planar
14. Dec 24, 2007
### D H
Staff Emeritus
One minor correction here: Those are total derivatives wrt time, not partial derivatives.
Note that angular momentum L is necessarily conserved whenever the magnitude of the force vector is a function of the magnitude of the distance vector only and the force vector is directed along or against the distance vector. This is called a central force problem.
It is better to work in the center of mass frame rather than a frame with the Sun at the origin. Total energy (kinetic + potential), angular momentum, and linear momentum are all conserved in this frame. While linear momentum is tautologically zero in the center of mass frame, that total energy and angular momentum are constants of motion is the key to solving the problem.
15. Dec 24, 2007
### Shooting Star
Hi D H,
Thanks for correcting llamascience on the use of the partial derivatives. This student has just finished high school and I'm trying to satisfy his/her curiosity in the simplest possible way. Right now, I don't think introduction of reduced mass will be very fruitful. Anyway, he has to solve the same equations for that. So, for the present, we'll take the origin at the Sun and the earth to be very light compared to Sun.
Have a look at post #10 for central forces.
(Hi llamascience, be back with you.)
16. Dec 24, 2007
### sadhu
I hope you know that angular momentum remains conserved during any motion in a gravitational field, thus
r*m*v = constant = initial value given to it
0.5*m*v*v - G*(mass of earth)*(mass of body)/r = constant = initial one....
see if this can help
17. Dec 24, 2007
### llamascience
actually, for the sake of my picking up something new here, could both of you explain your methods simultaneously?
don't worry about my being just out of high school (Australia btw, just to avoid confusion). I don't mean to sound over-confident, but like I say, I've read a lot of university level maths and understand a great deal of it, especially in physics related areas like vector calculus, differential equations etc.
still, continue with your method :)
18. Dec 24, 2007
### Parlyne
This is not quite right. The magnitude of angular momentum is $mrv\sin\theta$, where $\theta$ is the angle between $\vec{r}$ and $\vec{v}$. In the most general case, there is no reason to think that $\vec{r}$ and $\vec{v}$ are always perpendicular.
19. Dec 24, 2007
### llamascience
also, please explain why the partial derivative and total derivative with respect to time should be any different when r is dependent only on the one variable: t.
i know that in the general case this could create confusion with solving a PDE, but how will the choice of differential affect the result in this trivial case?
20. Dec 25, 2007
### sadhu
well, I never said that v, r were to be treated as scalars just because I didn't put an arrow on them
what I meant was the cross product of the velocity and position vectors
21. Dec 25, 2007
### Shooting Star
Contd. From post #12.
To simplify, let h = L/m, so that h = rXv and r= re, where e is the unit vector in the direction of r. I will leave certain simplifications to you.
Note that e.de/dt = 0 . (Can you say why? Hint: magnitude of e is constant.)
r = re => v = dr/dt = rde/dt + dr/dt e =>
h = rXv = r^2 eXde/dt. --- (1) (I leave the simplification to you.)
d/dt(vXh) = (dv/dt)Xh = -(GM/r^2)eXh = -GMe X (e X de/dt) = GMde/dt. (using (1).)
So, now we have got an integrable eqn:
d/dt(vXh) = GMde/dt =>
vXh = GMe + A, where A is a constant vector. Taking the scalar product of r with both sides, we get,
r.(vXh) = GM(r.e) + r.A. But r.(vXh) = h^2. (Prove it.) So,
h^2 = GMr + rAcos(theta) =>
r = (h^2/GM)/[1 + (A/GM)cos(theta)], which is the standard form for a conic section:
r = a/(1 + ecos(theta)). (This e denotes the eccentricity.)
I have used only elementary properties of vectors to find the path. The orbit around an inverse square central force is in general a conic section. You can choose the x-axis along A.
What you had wanted is to be able to solve for the path given ri and vi. Well, riXvi = h, and A = viXh - GMei, so both constant vectors are fixed by the initial data; from ri you can also find ri and thetai. So, you can know the value of r at any theta.
Getting the r and theta given the t only requires a more advanced treatment. I can give you the formulae, but you'll be doing it soon, I guess.
22. Dec 25, 2007
### llamascience
ill just go through each bit you said to do myself and show you my working. if you see anywhere i can improve in efficiency, please tell :)
the magnitude of e is constant so:
$$0 = \frac{d}{dt}\left|\vec{e}\right| = \frac{d}{dt}\sqrt{\vec{e}\cdot\vec{e}} = \frac{2\vec{e}\cdot\frac{d\vec{e}}{dt}}{2\sqrt{\vec{e}\cdot\vec{e}}}$$
therefore:
$$\vec{e}\cdot\frac{d\vec{e}}{dt} = 0$$
then
$$\vec{h} = \vec{r}\times\vec{v} = r\vec{e}\times\left(r\frac{d\vec{e}}{dt}+\frac{dr}{dt}\vec{e}\right) = r\vec{e}\times r\frac{d\vec{e}}{dt}+r\vec{e}\times\frac{dr}{dt}\vec{e} = r^{2}\vec{e}\times\frac{d\vec{e}}{dt}+r\frac{dr}{dt}\vec{e}\times\vec{e} = r^{2}\vec{e}\times\frac{d\vec{e}}{dt}$$
and
using the scalar triple product:
$$\vec{r}\cdot\left(\vec{v}\times\vec{h}\right) = \left(\vec{r}\times\vec{v}\right)\cdot\vec{h} = \vec{h}\cdot\vec{h} = h^{2}$$
finally, thank you so much for explaining this step-by-step. this method made so much more sense than what i was attempting, using changes of variable, separation, etc
23. Dec 25, 2007
### Shooting Star
I am impressed in the way you have walked in step with me. Your derivations are all correct -- the ones I'd left for you to prove.
A note of caution, though. This method of getting an integrable vector eqn and getting the path directly worked only because it was an inverse square law. For example, I don't think you'll get such an integrable vector eqn for an inverse cube law. This is because in an inverse square law, in addition to energy and angular momentum being conserved, there is a third vector which is conserved. It's called the Runge-Lenz vector, or simply the Lenz vector, even though it was originally discovered by Laplace.
Care to venture your opinion as to what that vector is?
24. Dec 26, 2007
### llamascience
well, it must have something to do with $$\vec{r}$$, being conserved only for inverse square fields
it might come to me after a while, but please enlighten me
25. Dec 26, 2007
### llamascience
actually, is it the constant vector of integration we produced?
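A numerical footnote: the conic-section result from post 21 is easy to sanity-check. The Python sketch below (illustrative only) assumes units with GM = 1 and arbitrary bound initial conditions; it integrates the equation of motion with a Runge-Kutta step and checks at every step that the orbit obeys r = (h^2/GM)/(1 + e cos(theta - theta0)), with e and theta0 read off the conserved vector A = vXh - GMe.

import numpy as np

GM = 1.0

def accel(r):
    # inverse-square attraction toward the origin
    return -GM * r / np.linalg.norm(r)**3

def rk4_step(r, v, dt):
    # one classical fourth-order Runge-Kutta step for the coupled system (r, v)
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5*dt*k1v, accel(r + 0.5*dt*k1r)
    k3r, k3v = v + 0.5*dt*k2v, accel(r + 0.5*dt*k2r)
    k4r, k4v = v + dt*k3v, accel(r + dt*k3r)
    return (r + dt*(k1r + 2*k2r + 2*k3r + k4r)/6,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6)

r = np.array([1.0, 0.0])                  # arbitrary initial position
v = np.array([0.0, 1.2])                  # arbitrary initial velocity (bound orbit)
h = r[0]*v[1] - r[1]*v[0]                 # z-component of rXv, conserved
A = np.array([v[1]*h, -v[0]*h]) - GM*r/np.linalg.norm(r)   # Runge-Lenz vector
e, theta0 = np.linalg.norm(A)/GM, np.arctan2(A[1], A[0])

for _ in range(20000):
    r, v = rk4_step(r, v, 1e-3)
    theta = np.arctan2(r[1], r[0])
    predicted = (h*h/GM) / (1 + e*np.cos(theta - theta0))
    assert abs(np.linalg.norm(r) - predicted) < 1e-6   # conic-section formula holds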
# (317) Homework 9 (Due 12/1)
6.1: 2, 6, 9(b), 14, 17
4.1: 4, 6, 8, 9, 10, 12
4.2: 3,5
# (317) Homework 8 (Due 11/17)
Here are the problems due on Thursday, 11/17. A quick reminder that we have our second quiz on 11/17, Thursday. It should cover the proof strategies.
Problems:
3.4: 23
3.5: 1,2,10,26,28,30
3.6: 1,2
# (317) Homework 7 (Due 11/10)
Section 3.3: 2,6,9,14,19,21
Section 3.4: 9,11,12,13
# (317) Homework 6
Hi all,
3.1: 2,3,8,15,16
3.2: 2,3,4,6,12
# (317) Homework 5 (Due 10/13)
Hi all,
Midterm is coming up soon! Review by looking at homework problems, activity sheets and reading reviews.
Here are the problems due on Thursday
2.2: 4,8,11
2.3: 1,3,5,6,11
# (317) Statements with/out variables
Here we will sum up statements with or without variables.
If $P$ is a statement without variables, it has truth values. There are only two truth values: true or false.
On the other hand, $P(x)$ is a statement with variable $x$. It has a truth set, where each $x_0$ in the set is a value that makes $P(x_0)$ true. It is possible that every $x$ in the universe makes $P(x)$ true, or that only some values make it true.
For example:
$P$: ” 3 is a prime number.” is a statement, and there is no variable. The truth value of $P$ depends on itself and nothing else.
$P(x)$: “$x$ is a prime number.” is a statement, and $x$ is the variable. The truth set of $P(x)$ is the set of all prime numbers, which is a subset of $\mathbb Z$.
Quantifiers bind variables, and they only make sense for statements with variables. Both $\forall xP(x)$ and $\exists xP(x)$ have no free variable, and therefore become statements. They now have only truth values true and false.
As $\forall xP(x)$ and $\exists xP(x)$ are really the shorthand notation for $\forall x\in U\,P(x)$ and $\exists x\in U\,P(x)$, the truth value would depend on both $P(x)$ and the universe $U$.
In our previous example of $P(x)$, adding the universal quantifier makes the statement into $\forall xP(x)$, which means "all $x$ are prime." This statement is either true or false. It is true if the universe we choose is the set of all prime numbers, and false if the universe is all integers $\mathbb Z$.
Similarly, the existential quantifier makes the statement into $\exists xP(x)$, which means "there is some $x$ that is prime." This statement is either true or false. If the intersection of the universe and the set of prime numbers is non-empty, for example $U=\{4.5, 5, \pi, e,18\}$, then the statement is true. If the universe doesn't contain any prime numbers, for example $U=\{4,6,8,18\}$, then the statement is false.
# (317) Homework 4 (Due 10/6)
2.1: 9
2.2: 2,5,6,7,9,11,12
# (317) Homework 3 (Due 9/29)
Hi all,
Homework due:
1.5: 2,3,5,9
2.1: 2,3,5,7,8
One more reminder that we are having a quiz next Tuesday!
# (317) Homework 2 (Due 9/22)
Section 1.3: 2,4,6,8
Section 1.4: 6,9,11,13,14
If you'd like to know more about Venn diagrams for more than three sets, check out this article.
# (317) Sum up of 1.1 and 1.2
Hi all,
This week we will talk about sections 1.3-1.5, and we will use everything we have learned last week. Here is a summary:
Proofs (or arguments) are about statements, composed of statements joined by logical connectives. We hope to make arguments/write down proofs that are valid, so that whenever all the premises are true, the conclusion must also be true. Truth tables are nice ways of organizing information, and you can use truth tables to list all logical possibilities.
Statements: sentences that can only be either true or false. You might need more information to know the truth value of it. We use capital letters to represent statements.
To connect these statements, we use logical connectives. We have three so far: $\wedge$, $\lor$, $\neg$, and one more will be introduced in section 1.5. These connect the statements you make.
In an argument, you might see words like: therefore/then/hence/thus/we conclude, etc. Statements before these words are called premises, and statements after these words are called conclusions. An argument is invalid if it is possible for all the premises to be true while the conclusion is false. So if you want to judge the validity of an argument, try to see if you can come up with something that satisfies all the premises but does not satisfy the conclusion. Such examples are called counterexamples.
Keep in mind that logical forms (statements with logical connectives) are not unique, as you can see in the equivalent formulas on pages 21 and 23.
Finally, a truth table is a great tool for organizing information. The equivalent formulas above can be discovered via such tables.
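For example, here is the table verifying one of De Morgan's laws, namely that $\neg(P\wedge Q)$ is equivalent to $\neg P\lor\neg Q$ (the same check works for any of the equivalences on pages 21 and 23):
$$\begin{array}{cc|cc} P & Q & \neg(P\wedge Q) & \neg P\lor\neg Q\\ \hline T & T & F & F\\ T & F & T & T\\ F & T & T & T\\ F & F & T & T \end{array}$$
The last two columns agree in every row, that is, in every logical possibility, so the two formulas are equivalent.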
# Tagged with #VQSR
Created 2014-06-05 16:10:25 | Updated 2014-06-05 17:55:23 | Tags: vqsr bqsr phred quality-scores
You may have noticed that a lot of the scores that are output by the GATK are in Phred scale. The Phred scale was originally used to represent base quality scores emitted by the Phred program in the early days of the Human Genome Project (see this Wikipedia article for more historical background). Now they are widely used to represent probabilities and confidence scores in other contexts of genome science.
### Phred scale in context
In the context of sequencing, Phred-scaled quality scores are used to represent how confident we are in the assignment of each base call by the sequencer.
In the context of variant calling, Phred-scaled quality scores can be used to represent many types of probabilities. The most commonly used in GATK is the QUAL score, or variant quality score. It is used in much the same way as the base quality score: the variant quality score is a Phred-scaled estimate of how confident we are that the variant caller correctly identified that a given genome position displays variation in at least one sample.
### Phred scale in practice
In today’s sequencing output, by convention, Phred-scaled base quality scores range from 2 to 63. However, Phred-scaled quality scores in general can range anywhere from 0 to infinity. A higher score indicates a higher probability that a particular decision is correct, while conversely, a lower score indicates a higher probability that the decision is incorrect.
The Phred quality score (Q) is logarithmically related to the error probability (E).
$$Q = -10 \log_{10} E$$
So we can interpret this score as an estimate of error, where the error is e.g. the probability that the base is called incorrectly by the sequencer, but we can also interpret it as an estimate of accuracy, where the accuracy is e.g. the probability that the base was identified correctly by the sequencer. Depending on how we decide to express it, we can make the following calculations:
If we want the probability of error (E), we take:
$$E = 10 ^{-\left(\frac{Q}{10}\right)}$$
And conversely, if we want to express this as the estimate of accuracy (A), we simply take
$$A = 1 - E = 1 - 10^{-\left(\frac{Q}{10}\right)}$$
Here is a table of how to interpret a range of Phred Quality Scores. It is largely adapted from the Wikipedia page for Phred Quality Score.
For many purposes, a Phred Score of 20 or above is acceptable, because this means that whatever it qualifies is 99% accurate, with a 1% chance of error.
| Phred Quality Score | Error | Accuracy (1 - Error) |
|---|---|---|
| 10 | 1/10 = 10% | 90% |
| 20 | 1/100 = 1% | 99% |
| 30 | 1/1000 = 0.1% | 99.9% |
| 40 | 1/10000 = 0.01% | 99.99% |
| 50 | 1/100000 = 0.001% | 99.999% |
| 60 | 1/1000000 = 0.0001% | 99.9999% |
And finally, here is a graphical representation of the Phred scores showing their relationship to accuracy and error probabilities.
The red line shows the error, and the blue line shows the accuracy. Of course, as error decreases, accuracy increases symmetrically.
Note: You can see that below Q20 (which is how we usually refer to a Phred score of 20), the curve is really steep, meaning that as the Phred score decreases, you lose confidence very rapidly. In contrast, above Q20, both of the graphs level out. This is why Q20 is a good cutoff score for many basic purposes.
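These conversions are easy to script. As a companion to the table above, here is a minimal Python sketch (purely illustrative):

import math

def phred_to_error(q):
    """Error probability E corresponding to a Phred-scaled quality score Q."""
    return 10 ** (-q / 10.0)

def error_to_phred(e):
    """Phred-scaled quality score Q corresponding to an error probability E."""
    return -10.0 * math.log10(e)

for q in (10, 20, 30, 40, 50, 60):
    e = phred_to_error(q)
    print(f"Q{q}: error = {e:.4g}, accuracy = {1 - e:.6%}")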
Created 2013-09-11 21:54:51 | Updated 2015-05-22 22:46:00 | Tags: vqsr callset filtering hard-filtering
### The problem:
Our preferred method for filtering variants after the calling step is to use VQSR, a.k.a. recalibration. However, it requires well-curated training/truth resources, which are typically not available for organisms other than humans, and it also requires a large amount of variant sites to operate properly, so it is not suitable for some small-scale experiments such as targeted gene panels or exome studies with fewer than 30 exomes. For the latter, it is sometimes possible to pad your cohort with exomes from another study (especially for humans -- use 1000 Genomes or ExAC!) but again for non-human organisms it is often not possible to do this.
### The solution: hard-filtering
So, if this is your case and you are sure that you cannot use VQSR, then you will need to use the VariantFiltration tool to manually filter your variants. To do this, you will need to compose filter expressions as explained here, here and here based on the recommendations detailed further below.
### But first, some caveats
Let's be painfully clear about this: there is no magic formula that will give you perfect results. Filtering variants manually, using thresholds on annotation values, is subject to all sorts of caveats. The appropriateness of both the annotations and the threshold values is very highly dependent on the specific callset, how it was called, what the data was like, what organism it belongs to, etc.
HOWEVER, because we want to help and people always say that something is better than nothing (not necessarily true, but let's go with that for now), we have formulated some generic recommendations that should at least provide a starting point for people to experiment with their data.
In case you didn't catch that bit in bold there, we're saying that you absolutely SHOULD NOT expect to run these commands and be done with your analysis. You absolutely SHOULD expect to have to evaluate your results critically and TRY AGAIN with some parameter adjustments until you find the settings that are right for your data.
In addition, please note that these recommendations are mainly designed for dealing with very small data sets (in terms of both number of samples or size of targeted regions). If you are not using VQSR because you do not have training/truth resources available for your organism, then you should expect to have to do even more tweaking on the filtering parameters.
### Filtering recommendations
Here are some recommended arguments to use with VariantFiltration when ALL other options are unavailable to you. Note that these JEXL expressions will tag as filtered any sites where the annotation value matches the expression. So if you use the expression QD < 2.0, any site with a QD lower than 2 will be tagged as failing that filter. A sketch of the corresponding command is given after the lists below.
#### For SNPs:
• QD < 2.0
• MQ < 40.0
• FS > 60.0
• SOR > 4.0
• HaplotypeScore > 13.0 only for variants output by UnifiedGenotyper; for HaplotypeCaller's output it is not informative
• MQRankSum < -12.5
• ReadPosRankSum < -8.0
#### For indels:
• QD < 2.0
• ReadPosRankSum < -20.0
• InbreedingCoeff < -0.8
• FS > 200.0
• SOR > 10.0
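To make this concrete, here is a sketch of how the SNP thresholds above could be passed to VariantFiltration (the file names are placeholders, and you should adapt the expression to the annotations actually present in your VCF):

java -jar GenomeAnalysisTK.jar \
-T VariantFiltration \
-R reference.fa \
-V raw_snps.vcf \
--filterExpression "QD < 2.0 || FS > 60.0 || MQ < 40.0 || MQRankSum < -12.5 || ReadPosRankSum < -8.0 || SOR > 4.0" \
--filterName "snp_hard_filter" \
-o filtered_snps.vcf

Using a separate --filterExpression/--filterName pair per criterion is often preferable, since the FILTER column will then tell you which specific test each site failed.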
### And now some more IMPORTANT caveats (don't skip this!)
• The InbreedingCoeff statistic is a population-level calculation that is only available with 10 or more samples. If you have fewer samples you will need to omit that particular filter statement.
• For shallow-coverage (<10x), it is virtually impossible to use manual filtering to reliably separate true positives from false positives. You really, really, really should use the protocol involving variant quality score recalibration. If you can't do that, maybe you need to take a long hard look at your experimental design. In any case you're probably in for a world of pain.
• The maximum DP (depth) filter only applies to whole genome data, where the probability of a site having exactly N reads given an average coverage of M is a well-behaved function. First principles suggest this should be a binomial sampling but in practice it is more a Gaussian distribution. Regardless, the DP threshold should be set at 5 or 6 sigma from the mean coverage across all samples, so that the DP > X threshold eliminates sites with excessive coverage caused by alignment artifacts. Note that for exomes, a straight DP filter shouldn't be used because the relationship between misalignments and depth isn't clear for capture data.
### Finally, a note of hope
Some bits of this article may seem harsh, or depressing. Sorry. We believe in giving you the cold hard truth.
HOWEVER, we do understand that this is one of the major points of pain that GATK users encounter -- along with understanding how VQSR works, so really, whichever option you go with, you're going to suffer.
Tell us what you'd like to see here, and we'll do our best to make it happen. (no unicorns though, we're out of stock)
We also welcome testimonials from you. We are one small team; you are a legion of analysts all trying different things. Please feel free to come forward and share your findings on what works particularly well in your hands.
Created 2013-06-17 22:26:13 | Updated 2014-12-17 17:04:18 | Tags: variantrecalibrator vqsr applyrecalibration
#### Objective
Recalibrate variant quality scores and produce a callset filtered for the desired levels of sensitivity and specificity.
#### Prerequisites
• TBD
#### Caveats
This document provides a typical usage example including parameter values. However, the values given may not be representative of the latest Best Practices recommendations. When in doubt, please consult the FAQ document on VQSR training sets and parameters, which overrides this document. See that document also for caveats regarding exome vs. whole genomes analysis design.
#### Steps
1. Prepare recalibration parameters for SNPs
a. Specify which call sets the program should use as resources to build the recalibration model
b. Specify which annotations the program should use to evaluate the likelihood of Indels being real
c. Specify the desired truth sensitivity threshold values that the program should use to generate tranches
2. Build the SNP recalibration model
3. Apply the desired level of recalibration to the SNPs in the call set
4. Prepare recalibration parameters for Indels
a. Specify which call sets the program should use as resources to build the recalibration model
b. Specify which annotations the program should use to evaluate the likelihood of Indels being real
c. Specify the desired truth sensitivity threshold values that the program should use to generate tranches
d. Determine additional model parameters
5. Build the Indel recalibration model
6. Apply the desired level of recalibration to the Indels in the call set
### 1. Prepare recalibration parameters for SNPs
#### a. Specify which call sets the program should use as resources to build the recalibration model
For each training set, we use key-value tags to qualify whether the set contains known sites, training sites, and/or truth sites. We also use a tag to specify the prior likelihood that those sites are true (using the Phred scale).
• True sites training resource: HapMap
This resource is a SNP call set that has been validated to a very high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). We will also use these sites later on to choose a threshold for filtering variants based on sensitivity to truth sites. The prior likelihood we assign to these variants is Q15 (96.84%).
• True sites training resource: Omni
This resource is a set of polymorphic SNP sites produced by the Omni genotyping array. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).
• Non-true sites training resource: 1000G
This resource is a set of high-confidence SNP sites produced by the 1000 Genomes Project. The program will consider that the variants in this resource may contain true variants as well as false positives (truth=false), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q10 (90%).
• Known sites resource, not used in training: dbSNP
This resource is a SNP call set that has not been validated to a high degree of confidence (truth=false). The program will not use the variants in this resource to train the recalibration model (training=false). However, the program will use these to stratify output metrics such as Ti/Tv ratio by whether variants are present in dbsnp or not (known=true). The prior likelihood we assign to these variants is Q2 (36.90%).
The default prior likelihood assigned to all other variants is Q2 (36.90%). This low value reflects the fact that the philosophy of the GATK callers is to produce a large, highly sensitive callset that needs to be heavily refined through additional filtering.
#### b. Specify which annotations the program should use to evaluate the likelihood of SNPs being real
These annotations are included in the information generated for each variant call by the caller. If an annotation is missing (typically because it was omitted from the calling command) it can be added using the VariantAnnotator tool.
• Coverage (DP)
Total (unfiltered) depth of coverage. Note that this statistic should not be used with exome datasets; see caveat detailed in the VQSR arguments FAQ doc.
• QualByDepth (QD)
Variant confidence (from the QUAL field) / unfiltered depth of non-reference samples.
• FisherStrand (FS)
Measure of strand bias (the variation being seen on only the forward or only the reverse strand). More bias is indicative of false positive calls. This complements the StrandOddsRatio (SOR) annotation.
• StrandOddsRatio (SOR)
Measure of strand bias (the variation being seen on only the forward or only the reverse strand). More bias is indicative of false positive calls. This complements the FisherStrand (FS) annotation.
• MappingQualityRankSumTest (MQRankSum)
The rank sum test for mapping qualities. Note that the mapping quality rank sum test can not be calculated for sites without a mixture of reads showing both the reference and alternate alleles.
• ReadPosRankSumTest (ReadPosRankSum)
The rank sum test for the distance from the end of the reads. If the alternate allele is only seen near the ends of reads, this is indicative of error. Note that the read position rank sum test can not be calculated for sites without a mixture of reads showing both the reference and alternate alleles.
• RMSMappingQuality (MQ)
Estimation of the overall mapping quality of reads supporting a variant call.
• InbreedingCoeff
Evidence of inbreeding in a population. See caveats regarding population size and composition detailed in the VQSR arguments FAQ doc.
#### c. Specify the desired truth sensitivity threshold values that the program should use to generate tranches
• First tranche threshold 100.0
• Second tranche threshold 99.9
• Third tranche threshold 99.0
• Fourth tranche threshold 90.0
Tranches are essentially slices of variants, ranked by VQSLOD, bounded by the threshold values specified in this step. The threshold values themselves refer to the sensitivity we can obtain when we apply them to the call sets that the program uses to train the model. The idea is that the lowest tranche is highly specific but less sensitive (there are very few false positives but potentially many false negatives, i.e. missing calls), and each subsequent tranche in turn introduces additional true positive calls along with a growing number of false positive calls. This allows us to filter variants based on how sensitive we want the call set to be, rather than applying hard filters and then only evaluating how sensitive the call set is using post hoc methods.
### 2. Build the SNP recalibration model
#### Action
Run the following GATK command:
java -jar GenomeAnalysisTK.jar \
-T VariantRecalibrator \
-R reference.fa \
-input raw_variants.vcf \
-resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap.vcf \
-resource:omni,known=false,training=true,truth=true,prior=12.0 omni.vcf \
-resource:1000G,known=false,training=true,truth=false,prior=10.0 1000G.vcf \
-resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.vcf \
-an DP \
-an QD \
-an FS \
-an SOR \
-an MQ \
-an MQRankSum \
-an ReadPosRankSum \
-an InbreedingCoeff \
-mode SNP \
-tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
-recalFile recalibrate_SNP.recal \
-tranchesFile recalibrate_SNP.tranches \
-rscriptFile recalibrate_SNP_plots.R
#### Expected Result
This creates several files. The most important file is the recalibration report, called recalibrate_SNP.recal, which contains the recalibration data. This is what the program will use in the next step to generate a VCF file in which the variants are annotated with their recalibrated quality scores. There is also a file called recalibrate_SNP.tranches, which contains the quality score thresholds corresponding to the tranches specified in the original command. Finally, if your installation of R and the other required libraries was done correctly, you will also find some PDF files containing plots. These plots illustrate the distribution of variants according to certain dimensions of the model.
For detailed instructions on how to interpret these plots, please refer to the VQSR method documentation and presentation videos.
### 3. Apply the desired level of recalibration to the SNPs in the call set
#### Action
Run the following GATK command:
java -jar GenomeAnalysisTK.jar \
-T ApplyRecalibration \
-R reference.fa \
-input raw_variants.vcf \
-mode SNP \
--ts_filter_level 99.0 \
-recalFile recalibrate_SNP.recal \
-tranchesFile recalibrate_SNP.tranches \
-o recalibrated_snps_raw_indels.vcf
#### Expected Result
This creates a new VCF file, called recalibrated_snps_raw_indels.vcf, which contains all the original variants from the original raw_variants.vcf file, but now the SNPs are annotated with their recalibrated quality scores (VQSLOD) and either PASS or FILTER depending on whether or not they are included in the selected tranche.
Here we are taking the second lowest of the tranches specified in the original recalibration command. This means that we are applying to our data set the level of sensitivity that would allow us to retrieve 99% of true variants from the truth training sets of HapMap and Omni SNPs. If we wanted to be more specific (and therefore have less risk of including false positives, at the risk of missing real sites) we could take the very lowest tranche, which would only retrieve 90% of the truth training sites. If we wanted to be more sensitive (and therefore less specific, at the risk of including more false positives) we could take the higher tranches. In our Best Practices documentation, we recommend taking the second highest tranche (99.9%) which provides the highest sensitivity you can get while still being acceptably specific.
### 4. Prepare recalibration parameters for Indels
#### a. Specify which call sets the program should use as resources to build the recalibration model
For each training set, we use key-value tags to qualify whether the set contains known sites, training sites, and/or truth sites. We also use a tag to specify the prior likelihood that those sites are true (using the Phred scale).
• Known and true sites training resource: Mills
This resource is an Indel call set that has been validated to a high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).
The default prior likelihood assigned to all other variants is Q2 (36.90%). This low value reflects the fact that the philosophy of the GATK callers is to produce a large, highly sensitive callset that needs to be heavily refined through additional filtering.
#### b. Specify which annotations the program should use to evaluate the likelihood of Indels being real
These annotations are included in the information generated for each variant call by the caller. If an annotation is missing (typically because it was omitted from the calling command) it can be added using the VariantAnnotator tool.
• Coverage (DP)
Total (unfiltered) depth of coverage. Note that this statistic should not be used with exome datasets; see caveat detailed in the VQSR arguments FAQ doc.
• QualByDepth (QD)
Variant confidence (from the QUAL field) / unfiltered depth of non-reference samples.
• FisherStrand (FS)
Measure of strand bias (the variation being seen on only the forward or only the reverse strand). More bias is indicative of false positive calls. This complements the StrandOddsRatio (SOR) annotation.
• StrandOddsRatio (SOR)
Measure of strand bias (the variation being seen on only the forward or only the reverse strand). More bias is indicative of false positive calls. This complements the FisherStrand (FS) annotation.
• MappingQualityRankSumTest (MQRankSum)
The rank sum test for mapping qualities. Note that the mapping quality rank sum test can not be calculated for sites without a mixture of reads showing both the reference and alternate alleles.
• ReadPosRankSumTest (ReadPosRankSum)
The rank sum test for the distance from the end of the reads. If the alternate allele is only seen near the ends of reads, this is indicative of error. Note that the read position rank sum test can not be calculated for sites without a mixture of reads showing both the reference and alternate alleles.
• InbreedingCoeff
Evidence of inbreeding in a population. See caveats regarding population size and composition detailed in the VQSR arguments FAQ doc.
#### c. Specify the desired truth sensitivity threshold values that the program should use to generate tranches
• First tranche threshold 100.0
• Second tranche threshold 99.9
• Third tranche threshold 99.0
• Fourth tranche threshold 90.0
Tranches are essentially slices of variants, ranked by VQSLOD, bounded by the threshold values specified in this step. The threshold values themselves refer to the sensitivity we can obtain when we apply them to the call sets that the program uses to train the model. The idea is that the lowest tranche is highly specific but less sensitive (there are very few false positives but potentially many false negatives, i.e. missing calls), and each subsequent tranche in turn introduces additional true positive calls along with a growing number of false positive calls. This allows us to filter variants based on how sensitive we want the call set to be, rather than applying hard filters and then only evaluating how sensitive the call set is using post hoc methods.
#### d. Determine additional model parameters
• Maximum number of Gaussians (-maxGaussians) 4
This is the maximum number of Gaussians (i.e. clusters of variants that have similar properties) that the program should try to identify when it runs the variational Bayes algorithm that underlies the machine learning method. In essence, this limits the number of different "profiles" of variants that the program will try to identify. This number should only be increased for datasets that include very many variants.
### 5. Build the Indel recalibration model
#### Action
Run the following GATK command:
java -jar GenomeAnalysisTK.jar \
-T VariantRecalibrator \
-R reference.fa \
-input recalibrated_snps_raw_indels.vcf \
-resource:mills,known=true,training=true,truth=true,prior=12.0 mills.vcf \
-an QD \
-an DP \
-an FS \
-an SOR \
-an MQRankSum \
-an ReadPosRankSum \
-an InbreedingCoeff \
-mode INDEL \
-tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
--maxGaussians 4 \
-recalFile recalibrate_INDEL.recal \
-tranchesFile recalibrate_INDEL.tranches \
-rscriptFile recalibrate_INDEL_plots.R
#### Expected Result
This creates several files. The most important file is the recalibration report, called recalibrate_INDEL.recal, which contains the recalibration data. This is what the program will use in the next step to generate a VCF file in which the variants are annotated with their recalibrated quality scores. There is also a file called recalibrate_INDEL.tranches, which contains the quality score thresholds corresponding to the tranches specified in the original command. Finally, if your installation of R and the other required libraries was done correctly, you will also find some PDF files containing plots. These plots illustrate the distribution of variants according to certain dimensions of the model.
For detailed instructions on how to interpret these plots, please refer to the online GATK documentation.
### 6. Apply the desired level of recalibration to the Indels in the call set
#### Action
Run the following GATK command:
java -jar GenomeAnalysisTK.jar \
-T ApplyRecalibration \
-R reference.fa \
-input recalibrated_snps_raw_indels.vcf \
-mode INDEL \
--ts_filter_level 99.0 \
-recalFile recalibrate_INDEL.recal \
-tranchesFile recalibrate_INDEL.tranches \
-o recalibrated_variants.vcf
#### Expected Result
This creates a new VCF file, called recalibrated_variants.vcf, which contains all the original variants from the original recalibrated_snps_raw_indels.vcf file, but now the Indels are also annotated with their recalibrated quality scores (VQSLOD) and either PASS or FILTER depending on whether or not they are included in the selected tranche.
Here we are taking the second lowest of the tranches specified in the original recalibration command. This means that we are applying to our data set the level of sensitivity that would allow us to retrieve 99% of true variants from the Mills truth training set of indels. If we wanted to be more specific (and therefore have less risk of including false positives, at the risk of missing real sites) we could take the very lowest tranche, which would only retrieve 90% of the truth training sites. If we wanted to be more sensitive (and therefore less specific, at the risk of including more false positives) we could take the higher tranches. In our Best Practices documentation, we recommend taking the second highest tranche (99.9%) which provides the highest sensitivity you can get while still being acceptably specific.
Created 2012-08-02 14:05:29 | Updated 2014-12-17 17:05:58 | Tags: variantrecalibrator bundle vqsr applyrecalibration faq
This document describes the resource datasets and arguments that we recommend for use in the two steps of VQSR (i.e. the successive application of VariantRecalibrator and ApplyRecalibration), based on our work with human genomes, to comply with the GATK Best Practices. The recommendations detailed in this document take precedence over any others you may see elsewhere in our documentation (e.g. in Tutorial articles, which are only meant to illustrate usage, or in past presentations, which may be out of date).
The document covers:
• Explanation of resource datasets
• Important notes about exome experiments
• Argument recommendations for VariantRecalibrator
• Argument recommendations for ApplyRecalibration
These recommendations are valid for use with calls generated by both the UnifiedGenotyper and HaplotypeCaller. In the past we made a distinction in how we processed the calls from these two callers, but now we treat them the same way. These recommendations will probably not work properly on calls generated by other (non-GATK) callers.
Note that VQSR must be run twice in succession in order to build a separate error model for SNPs and INDELs (see the VQSR documentation for more details).
### Explanation of resource datasets
The human genome training, truth and known resource datasets mentioned in this document are all available from our resource bundle.
If you are working with non-human genomes, you will need to find or generate at least truth and training resource datasets with properties corresponding to those described below. To generate your own resource set, one idea is to first do an initial round of SNP calling and only use those SNPs which have the highest quality scores. These sites which have the most confidence are probably real and could be used as truth data to help disambiguate the rest of the variants in the call set. Another idea is to try using several SNP callers in addition to the UnifiedGenotyper or HaplotypeCaller, and use those sites which are concordant between the different methods as truth data. In either case, you'll need to assign your set a prior likelihood that reflects your confidence in how reliable it is as a truth set. We recommend Q10 as a starting value, which you can then experiment with to find the most appropriate value empirically. There are many possible avenues of research here. Hopefully the model reporting plots that are generated by the recalibration tools will help facilitate this experimentation.
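As one concrete (and deliberately crude) sketch of the first idea, you could pull the highest-confidence sites out of an initial callset with SelectVariants; the file names and the QUAL threshold below are placeholders that you would need to tune for your own data:

java -jar GenomeAnalysisTK.jar \
-T SelectVariants \
-R reference.fa \
-V initial_calls.vcf \
-select "QUAL > 100.0" \
-o training_candidates.vcf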
#### Resources for SNPs
• True sites training resource: HapMap
This resource is a SNP call set that has been validated to a very high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). We will also use these sites later on to choose a threshold for filtering variants based on sensitivity to truth sites. The prior likelihood we assign to these variants is Q15 (96.84%).
• True sites training resource: Omni
This resource is a set of polymorphic SNP sites produced by the Omni genotyping array. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).
• Non-true sites training resource: 1000G
This resource is a set of high-confidence SNP sites produced by the 1000 Genomes Project. The program will consider that the variants in this resource may contain true variants as well as false positives (truth=false), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q10 (90%).
• Known sites resource, not used in training: dbSNP
This resource is a call set that has not been validated to a high degree of confidence (truth=false). The program will not use the variants in this resource to train the recalibration model (training=false). However, the program will use these to stratify output metrics such as Ti/Tv ratio by whether variants are present in dbsnp or not (known=true). The prior likelihood we assign to these variants is Q2 (36.90%).
#### Resources for Indels
• Known and true sites training resource: Mills
This resource is an Indel call set that has been validated to a high degree of confidence. The program will consider that the variants in this resource are representative of true sites (truth=true), and will use them to train the recalibration model (training=true). The prior likelihood we assign to these variants is Q12 (93.69%).
Some of the annotations included in the recommendations given below might not be the best for your particular dataset. In particular, the following caveats apply:
• Depth of coverage (the DP annotation invoked by Coverage) should not be used when working with exome datasets since there is extreme variation in the depth to which targets are captured! In whole genome experiments this variation is indicative of error but that is not the case in capture experiments.
• You may have seen HaplotypeScore mentioned in older documents. That is a statistic produced by UnifiedGenotyper that should only be used if you called your variants with UG. This statistic isn't produced by the HaplotypeCaller because that mathematics is already built into the likelihood function itself when calling full haplotypes with HC.
• The InbreedingCoeff is a population level statistic that requires at least 10 samples in order to be computed. For projects with fewer samples, or that includes many closely related samples (such as a family) please omit this annotation from the command line.
### Important notes for exome capture experiments
In our testing we've found that in order to achieve the best exome results one needs to use an exome SNP and/or indel callset with at least 30 samples. For users with experiments containing fewer exome samples there are several options to explore:
• Add additional samples for variant calling, either by sequencing additional samples or using publicly available exome bams from the 1000 Genomes Project (this option is used by the Broad exome production pipeline). Be aware that you cannot simply add VCFs from the 1000 Genomes Project. You must either call variants from the original BAMs jointly with your own samples, or (better) use the reference model workflow to generate GVCFs from the original BAMs, and perform joint genotyping on those GVCFs along with your own samples' GVCFs with GenotypeGVCFs.
• You can also try using the VQSR with the smaller variant callset, but experiment with argument settings (try adding --maxGaussians 4 to your command line, for example). You should only do this if you are working with a non-model organism for which there are no available genomes or exomes that you can use to supplement your own cohort.
### Argument recommendations for VariantRecalibrator
The variant quality score recalibrator builds an adaptive error model using known variant sites and then applies this model to estimate the probability that each variant is a true genetic variant or a machine artifact. One major improvement from previous recommended protocols is that hand filters do not need to be applied at any point in the process now. All filtering criteria are learned from the data itself.
#### Common, base command line
This is the first part of the VariantRecalibrator command line, to which you need to add either the SNP-specific recommendations or the indel-specific recommendations given further below.
java -Xmx4g -jar GenomeAnalysisTK.jar \
-T VariantRecalibrator \
-R path/to/reference/human_g1k_v37.fasta \
-input raw.input.vcf \
-recalFile path/to/output.recal \
-tranchesFile path/to/output.tranches \
-nt 4 \
[SPECIFY TRUTH AND TRAINING SETS] \
[SPECIFY WHICH ANNOTATIONS TO USE IN MODELING] \
[SPECIFY WHICH CLASS OF VARIATION TO MODEL] \
#### SNP specific recommendations
For SNPs we use both HapMap v3.3 and the Omni chip array from the 1000 Genomes Project as training data. In addition we take the highest confidence SNPs from the project's callset. These datasets are available in the GATK resource bundle.
-resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.sites.vcf \
-resource:omni,known=false,training=true,truth=true,prior=12.0 1000G_omni2.5.b37.sites.vcf \
-resource:1000G,known=false,training=true,truth=false,prior=10.0 1000G_phase1.snps.high_confidence.vcf \
-resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.b37.vcf \
-an QD -an MQ -an MQRankSum -an ReadPosRankSum -an FS -an SOR -an DP -an InbreedingCoeff \
-mode SNP \
Please note that these recommendations are formulated for whole-genome datasets. For exomes, we do not recommend using DP for variant recalibration (see below for details of why).
Note also that, for the above to work, the input vcf needs to be annotated with the corresponding values (QD, FS, DP, etc.). If any of these values are somehow missing, then VariantAnnotator needs to be run first so that VariantRecalibration can run properly.
Also, using the provided sites-only truth data files is important here as parsing the genotypes for VCF files with many samples increases the runtime of the tool significantly.
You may notice that these recommendations no longer include the --numBadVariants argument. That is because we have removed this argument from the tool, as the VariantRecalibrator now determines the number of variants to use for modeling "bad" variants internally based on the data.
#### Indel specific recommendations
When modeling indels with the VQSR we use a training dataset that was created at the Broad by strictly curating the (Mills, Devine, Genome Research, 2011) dataset as well as adding in very high confidence indels from the 1000 Genomes Project. This dataset is available in the GATK resource bundle.
--maxGaussians 4 \
-resource:mills,known=false,training=true,truth=true,prior=12.0 Mills_and_1000G_gold_standard.indels.b37.sites.vcf \
-resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.b37.vcf \
-an QD -an DP -an FS -an SOR -an ReadPosRankSum -an MQRankSum -an InbreedingCoeff \
-mode INDEL \
Note that indels use a different set of annotations than SNPs. Most annotations related to mapping quality have been removed since there is a conflation with the length of an indel in a read and the degradation in mapping quality that is assigned to the read by the aligner. This covariation is not necessarily indicative of being an error in the same way that it is for SNPs.
You may notice that these recommendations no longer include the --numBadVariants argument. That is because we have removed this argument from the tool, as the VariantRecalibrator now determines the number of variants to use for modeling "bad" variants internally based on the data.
### Argument recommendations for ApplyRecalibration
The power of the VQSR is that it assigns a calibrated probability to every putative mutation in the callset. The user is then able to decide at what point on the theoretical ROC curve their project wants to live. Some projects, for example, are interested in finding every possible mutation and can tolerate a higher false positive rate. On the other hand, some projects want to generate a ranked list of mutations that they are very certain are real and well supported by the underlying data. The VQSR provides the necessary statistical machinery to effectively apply this sensitivity/specificity tradeoff.
#### Common, base command line
This is the first part of the ApplyRecalibration command line, to which you need to add either the SNP-specific recommendations or the indel-specific recommendations given further below.
java -Xmx3g -jar GenomeAnalysisTK.jar \
-T ApplyRecalibration \
-R reference/human_g1k_v37.fasta \
-input raw.input.vcf \
-tranchesFile path/to/input.tranches \
-recalFile path/to/input.recal \
-o path/to/output.recalibrated.filtered.vcf \
[SPECIFY THE DESIRED LEVEL OF SENSITIVITY TO TRUTH SITES] \
[SPECIFY WHICH CLASS OF VARIATION WAS MODELED] \
#### SNP specific recommendations
For SNPs we used HapMap 3.3 and the Omni 2.5M chip as our truth set. We typically seek to achieve 99.5% sensitivity to the accessible truth sites, but this is by no means universally applicable: you will need to experiment to find out what tranche cutoff is right for your data. Generally speaking, projects involving a higher degree of diversity in terms of world populations can expect to achieve a higher truth sensitivity than projects with a smaller scope.
--ts_filter_level 99.5 \
-mode SNP \
#### Indel specific recommendations
For indels we use the Mills / 1000 Genomes indel truth set described above. We typically seek to achieve 99.0% sensitivity to the accessible truth sites, but this is by no means universally applicable: you will need to experiment to find out what tranche cutoff is right for your data. Generally speaking, projects involving a higher degree of diversity in terms of world populations can expect to achieve a higher truth sensitivity than projects with a smaller scope.
--ts_filter_level 99.0 \
-mode INDEL \
Created 2012-07-23 16:49:34 | Updated 2015-06-03 14:42:06 | Tags: variantrecalibrator vqsr applyrecalibration vcf callset variantrecalibration
This document describes what Variant Quality Score Recalibration (VQSR) is designed to do, and outlines how it works under the hood. For command-line examples and recommendations on what specific resource datasets and arguments to use for VQSR, please see this FAQ article.
As a complement to this document, we encourage you to watch the workshop videos available on our Events webpage.
Slides that explain the VQSR methodology in more detail as well as the individual component variant annotations can be found here in the GSA Public Drop Box.
Detailed information about command line options for VariantRecalibrator can be found here.
Detailed information about command line options for ApplyRecalibration can be found here.
### Introduction
The purpose of variant recalibration is to assign a well-calibrated probability to each variant call in a call set. This enables you to generate highly accurate call sets by filtering based on this single estimate for the accuracy of each call.
The approach taken by variant quality score recalibration is to develop a continuous, covarying estimate of the relationship between SNP call annotations (QD, SB, HaplotypeScore, HRun, for example) and the probability that a SNP is a true genetic variant versus a sequencing or data processing artifact. This model is determined adaptively based on "true sites" provided as input (typically HapMap 3 sites and those sites found to be polymorphic on the Omni 2.5M SNP chip array, for humans). This adaptive error model can then be applied to both known and novel variation discovered in the call set of interest to evaluate the probability that each call is real. The score that gets added to the INFO field of each variant is called the VQSLOD. It is the log odds ratio of being a true variant versus being false under the trained Gaussian mixture model.
The variant recalibrator contrastively evaluates variants in a two step process, each performed by a distinct tool:
• VariantRecalibrator
Create a Gaussian mixture model by looking at the annotations values over a high quality subset of the input call set and then evaluate all input variants. This step produces a recalibration file.
• ApplyRecalibration
Apply the model parameters to each variant in input VCF files producing a recalibrated VCF file in which each variant is annotated with its VQSLOD value. In addition, this step will filter the calls based on this new lod score by adding lines to the FILTER column for variants that don't meet the specified lod threshold.
Please see the VQSR tutorial for step-by-step instructions on running these tools.
#### How VariantRecalibrator works in a nutshell
The tool takes the overlap of the training/truth resource sets and of your callset. It models the distribution of these variants relative to the annotations you specified, and attempts to group them into clusters. Then it uses the clustering to assign VQSLOD scores to all variants. Variants that are closer to the heart of a cluster will get a higher score than variants that are outliers.
#### How ApplyRecalibration works in a nutshell
During the first part of the recalibration process, variants in your callset were given a score called VQSLOD. At the same time, variants in your training sets were also ranked by VQSLOD. When you specify a tranche sensitivity threshold with ApplyRecalibration, expressed as a percentage (e.g. 99.9%), what happens is that the program looks at what is the VQSLOD value above which 99.9% of the variants in the training callset are included. It then takes that value of VQSLOD and uses it as a threshold to filter your variants. Variants that are above the threshold pass the filter, so the FILTER field will contain PASS. Variants that are below the threshold will be filtered out; they will be written to the output file, but in the FILTER field they will have the name of the tranche they belonged to. So VQSRTrancheSNP99.90to100.00 means that the variant was in the range of VQSLODs corresponding to the remaining 0.1% of the training set, which are basically considered false positives.
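To make these two descriptions concrete, here is a toy sketch in Python. This is an illustration of the idea only, not GATK's actual implementation: the annotation values, the cluster counts, and the construction of the "bad" model are all invented for the example.

import numpy as np
from sklearn.mixture import GaussianMixture

# Step 1 (VariantRecalibrator, in caricature): fit a "good" mixture model on
# annotation vectors of training-set variants and a "bad" model on
# outlier-like sites, then score every call with a VQSLOD-like log odds ratio.
rng = np.random.default_rng(0)
training = rng.normal([20.0, 60.0], [3.0, 2.0], size=(5000, 2))  # e.g. (QD, MQ) at truth sites
outliers = rng.normal([5.0, 35.0], [4.0, 6.0], size=(1000, 2))
callset = np.vstack([rng.normal([18.0, 58.0], [4.0, 4.0], size=(300, 2)),
                     rng.normal([4.0, 30.0], [4.0, 6.0], size=(300, 2))])

good = GaussianMixture(n_components=4, random_state=0).fit(training)
bad = GaussianMixture(n_components=2, random_state=0).fit(outliers)
vqslod = good.score_samples(callset) - bad.score_samples(callset)

# Step 2 (ApplyRecalibration, in caricature): turn a tranche sensitivity of
# 99.9% into a VQSLOD cutoff, i.e. the score above which 99.9% of the
# training variants fall, and filter the callset at that value.
training_vqslod = good.score_samples(training) - bad.score_samples(training)
cutoff = np.percentile(training_vqslod, 100 - 99.9)
print(f"cutoff = {cutoff:.2f}; PASS fraction = {(vqslod >= cutoff).mean():.1%}")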
### Interpretation of the Gaussian mixture model plots
The variant recalibration step fits a Gaussian mixture model to the contextual annotations given to each variant. By fitting this probability model to the training variants (variants considered to be true-positives), a probability can be assigned to the putative novel variants (some of which will be true-positives, some of which will be false-positives). It is useful for users to see how the probability model was fit to their data. Therefore a modeling report is automatically generated each time VariantRecalibrator is run (in the above command line the report will appear as path/to/output.plots.R.pdf). For every pair-wise combination of annotations used in modeling, a 2D projection of the Gaussian mixture model is shown.
The figure shows one page of an example Gaussian mixture model report that is automatically generated by the VQSR from the example HiSeq call set. This page shows the 2D projection of mapping quality rank sum test versus Haplotype score by marginalizing over the other annotation dimensions in the model.
In each page there are four panels which show different ways of looking at the 2D projection of the model. The upper left panel shows the probability density function that was fit to the data. The 2D projection was created by marginalizing over the other annotation dimensions in the model via random sampling. Green areas show locations in the space that are indicative of being high quality while red areas show the lowest probability areas. In general putative SNPs that fall in the red regions will be filtered out of the recalibrated call set.
The remaining three panels give scatter plots in which each SNP is plotted in the two annotation dimensions as points in a point cloud. The scale for each dimension is in normalized units. The data for the three panels is the same but the points are colored in different ways to highlight different aspects of the data. In the upper right panel SNPs are colored black and red to show which SNPs are retained and filtered, respectively, by applying the VQSR procedure. The red SNPs didn't meet the given truth sensitivity threshold and so are filtered out of the call set. The lower left panel colors SNPs green, grey, and purple to give a sense of the distribution of the variants used to train the model. The green SNPs are those which were found in the training sets passed into the VariantRecalibrator step, while the purple SNPs are those which were found to be furthest away from the learned Gaussians and thus given the lowest probability of being true. Finally, the lower right panel colors each SNP by their known/novel status with blue being the known SNPs and red being the novel SNPs. Here the idea is to see if the annotation dimensions provide a clear separation between the known SNPs (most of which are true) and the novel SNPs (most of which are false).
An example of good clustering for SNP calls from the tutorial dataset is shown to the right. The plot shows that the training data forms a distinct cluster at low values for each of the two statistics shown (haplotype score and mapping quality bias). As the SNPs fall off the distribution in either one or both of the dimensions they are assigned a lower probability (that is, move into the red region of the model's PDF) and are filtered out. This makes sense as not only do higher values of HaplotypeScore indicate a lower chance of the data being explained by only two haplotypes but also higher values for mapping quality bias indicate more evidence of bias between the reference bases and the alternative bases. The model has captured our intuition that this area of the distribution is highly enriched for machine artifacts and putative variants here should be filtered out!
### Tranches and the tranche plot
The recalibrated variant quality score provides a continuous estimate of the probability that each variant is true, allowing one to partition the call sets into quality tranches. The main purpose of the tranches is to establish thresholds within your data that correspond to certain levels of sensitivity relative to the truth sets. The idea is that with well calibrated variant quality scores, you can generate call sets in which each variant doesn't have to have a hard answer as to whether it is in or out of the set. If a very high accuracy call set is desired then one can use the highest tranche, but if a larger, more complete call set is a higher priority then one can dip down into lower and lower tranches. These tranches are applied to the output VCF file using the FILTER field. In this way you can choose to use some of the filtered records or only use the PASSing records.
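For example, if you decide to use only the PASSing records, one way to extract them is SelectVariants with its --excludeFiltered flag, which drops any record whose FILTER field holds something other than PASS (a sketch with placeholder file names):

java -jar GenomeAnalysisTK.jar -T SelectVariants \
 -R reference.fasta \
 -V recalibrated.vcf \
 --excludeFiltered \
 -o pass_only.vcf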
The first tranche (from the bottom, with the highest value of truth sensitivity but usually the lowest values of novel Ti/Tv) is exceedingly specific but less sensitive, and each subsequent tranche in turn introduces additional true positive calls along with a growing number of false positive calls. Downstream applications can select more specific or more sensitive call sets in a principled way, or incorporate the recalibrated quality scores directly, weighting individual variant calls by their probability of being real instead of analyzing only a fixed subset of calls. An example tranche plot, automatically generated by the VariantRecalibrator walker, is shown below.
This is an example of a tranches plot generated for a HiSeq call set. The x-axis gives the number of novel variants called while the y-axis shows two quality metrics -- novel transition to transversion ratio and the overall truth sensitivity.
Note that the tranches plot is not applicable for indels and will not be generated when the tool is run in INDEL mode.
### Ti/Tv-free recalibration
We use a Ti/Tv-free approach to variant quality score recalibration. This approach requires an additional truth data set, and cuts the VQSLOD at given sensitivities to the truth set. It has several advantages over the Ti/Tv-targeted approach:
• The truth sensitivity (TS) approach gives you back the novel Ti/Tv as a QC metric
• The truth sensitivity (TS) approach is conceptually cleaner than deciding on a novel Ti/Tv target for your dataset
• The TS approach is easier to explain and defend, as saying "I took called variants until I found 99% of my known variable sites" is easier than "I took variants until I dropped my novel Ti/Tv ratio to 2.07"
We have used HapMap 3.3 sites as the truth set (genotypes_r27_nr.b37_fwd.vcf), but other high-quality sets of sites (~99% truly variable in the population) should work just as well. In our experience with HapMap, 99% is a good threshold, as the remaining 1% of sites often exhibit unusual features, like being close to indels or actually being MNPs, and so receive a low VQSLOD score.
Note that the expected Ti/Tv is still an available argument but it is only used for display purposes.
### Finally, a couple of Frequently Asked Questions
#### - Can I use the variant quality score recalibrator with my small sequencing experiment?
This tool is expecting thousands of variant sites in order to achieve decent modeling with the Gaussian mixture model. Whole exome call sets work well, but anything smaller than that scale might run into difficulties.
One piece of advice is to turn down the number of Gaussians used during training. This can be accomplished by adding --maxGaussians 4 to your command line.
maxGaussians is the maximum number of different "clusters" (=Gaussians) of variants the program is "allowed" to try to identify. Lowering this number forces the program to group variants into a smaller number of clusters, which means there will be more variants in each cluster -- hopefully enough to satisfy the statistical requirements. Of course, this decreases the level of discrimination that you can achieve between variant profiles/error modes. It's all about trade-offs; and unfortunately if you don't have a lot of variants you can't afford to be very demanding in terms of resolution.
#### - Why don't all the plots get generated for me?
The most common problem related to this is not having Rscript accessible in your environment path. Rscript is the command line version of R that gets installed right alongside R itself. We also make use of the ggplot2 library, so please be sure to install that package as well. See the Common Problems section of the Guide for more details.
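A quick way to check both prerequisites from the shell is sketched below (the CRAN mirror URL is just an example):

# Check that Rscript is visible on the PATH
which Rscript
# Install ggplot2 non-interactively if it is missing
Rscript -e 'if (!requireNamespace("ggplot2", quietly=TRUE)) install.packages("ggplot2", repos="https://cloud.r-project.org")'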
Created 2014-12-16 21:48:12 | Updated 2014-12-17 17:06:58 | Tags: vqsr best-practices
The Best Practices recommendations for Variant Quality Score Recalibration have been slightly updated to use the new(ish) StrandOddsRatio (SOR) annotation, which complements FisherStrand (FS) as indicator of strand bias (only available in GATK version 3.3-0 and above).
While we were at it we also reconciled some inconsistencies between the tutorial and the FAQ document. As a reminder, if you ever find differences between parameters given in the VQSR docs, let us know, but FYI that the FAQ is the ultimate source of truth=true. Note also that the command line example given in VariantRecalibrator tool doc tends to be out of date because it can only be updated with the next release (due to a limitation of the tool doc generation system) and, well, we often forget to do it in time -- so it should never be used as a reference for Best Practice parameter values, as indicated in the caveat right underneath it which no one ever reads.
Speaking of caveats, there's no such thing as too much repetition of the fact that whole genomes and exomes have subtle differences that require some tweaks to your command lines. In the case of VQSR, that means dropping Coverage (DP) from your VQSR command lines if you're working with exomes.
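As a concrete illustration, here is one hedged way to keep the two annotation lists side by side in a driver script; the exact annotation set should follow the current FAQ, and the point is simply that DP is omitted for exomes:

# VQSR annotation sets (sketch; adjust to your data and to the FAQ)
WGS_VQSR_ANNOTATIONS="-an QD -an MQ -an MQRankSum -an ReadPosRankSum -an FS -an SOR -an DP"
EXOME_VQSR_ANNOTATIONS="-an QD -an MQ -an MQRankSum -an ReadPosRankSum -an FS -an SOR"   # no Coverage (DP)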
Finally, keep in mind that the values we recommend for tranches are really just examples; if there's one setting you should freely experiment with, that's the one. You can specify as many tranche cuts as you want to get really fine resolution.
Created 2015-06-19 02:29:09 | Updated | Tags: vqsr
Hi,
I would like to apply VQSR to an exome dataset. I wonder whether "-L" should be put in the command line too, and why?
Thank you very much!
Emma
Created 2015-05-26 01:55:14 | Updated | Tags: vqsr gatk-vqsr-exome
Hello,
I want to run VQSR on my exome data. I have finished data pre-processing and joint genotyping. Now I want to move to the next step, which is VQSR (starting with the first step, VariantRecalibrator). I noticed that I don't have the training sets needed as inputs to the tool's resource parameters. Where can I download the VCF files that I need to run this command? This is what I've tried:
Hapmap (Link) I tried to open the HapMap website and found an allocated SNPs download link. Is this the file I need? The files are in XML format, split per chromosome, and based on build Hg35. Do I need to join these XML files and then convert them to VCF using GATK? Will the different genome build cause problems (I use HG38)?
1000genome (Link) I found the VCF files, also split per chromosome. I think I just need to join them, don't I? Could you give some suggestions on how to join them properly?
Omni I don't know where I can get this file.
dbSNP I think I already have this file. I have use it during the GATK pre-processing step. It is the same file, right?
Created 2015-05-22 18:56:41 | Updated | Tags: vqsr
Hi,
I encounter an error when I try to run the variant recalibration step over 50 WGS samples. The error message comes up some time after the program starts running, but every time it stops at a different chromosomal location. It is really confusing, since the same command worked fine for me previously.
Hope to get a clue from you.
Following is the error message.
##### ERROR stack trace
java.lang.RuntimeException: java.io.IOException: Transport endpoint is not connected
    at htsjdk.tribble.readers.LineReaderUtil$2.readLine(LineReaderUtil.java:79)
    at htsjdk.tribble.readers.LineIteratorImpl.advance(LineIteratorImpl.java:23)
    at htsjdk.tribble.readers.LineIteratorImpl.advance(LineIteratorImpl.java:10)
    at htsjdk.samtools.util.AbstractIterator.next(AbstractIterator.java:57)
    at htsjdk.tribble.AsciiFeatureCodec.decode(AsciiFeatureCodec.java:79)
    at htsjdk.tribble.AsciiFeatureCodec.decode(AsciiFeatureCodec.java:41)
    at htsjdk.tribble.TribbleIndexedFeatureReader$QueryIterator.readNextRecord(TribbleIndexedFeatureReader.java:449)
    at htsjdk.tribble.TribbleIndexedFeatureReader$QueryIterator.next(TribbleIndexedFeatureReader.java:405)
    at htsjdk.tribble.TribbleIndexedFeatureReader$QueryIterator.next(TribbleIndexedFeatureReader.java:373)
    at org.broadinstitute.gatk.utils.refdata.utils.FeatureToGATKFeatureIterator.next(FeatureToGATKFeatureIterator.java:60)
    at org.broadinstitute.gatk.utils.refdata.utils.FeatureToGATKFeatureIterator.next(FeatureToGATKFeatureIterator.java:42)
    at org.broadinstitute.gatk.utils.iterators.PushbackIterator.next(PushbackIterator.java:65)
    at org.broadinstitute.gatk.utils.iterators.PushbackIterator.element(PushbackIterator.java:51)
    at org.broadinstitute.gatk.utils.refdata.SeekableRODIterator.next(SeekableRODIterator.java:223)
    at org.broadinstitute.gatk.utils.refdata.SeekableRODIterator.next(SeekableRODIterator.java:66)
    at org.broadinstitute.gatk.utils.collections.RODMergingIterator$Element.next(RODMergingIterator.java:72)
    at org.broadinstitute.gatk.utils.collections.RODMergingIterator.next(RODMergingIterator.java:111)
    at org.broadinstitute.gatk.utils.collections.RODMergingIterator.allElementsLTE(RODMergingIterator.java:145)
    at org.broadinstitute.gatk.utils.collections.RODMergingIterator.allElementsLTE(RODMergingIterator.java:129)
    at org.broadinstitute.gatk.engine.datasources.providers.RodLocusView.getSpanningTracks(RodLocusView.java:140)
    at org.broadinstitute.gatk.engine.datasources.providers.RodLocusView.next(RodLocusView.java:127)
    at org.broadinstitute.gatk.engine.traversals.TraverseLociNano$MapDataIterator.next(TraverseLociNano.java:172)
    at org.broadinstitute.gatk.engine.traversals.TraverseLociNano$MapDataIterator.next(TraverseLociNano.java:153)
    at org.broadinstitute.gatk.utils.nanoScheduler.NanoScheduler.executeSingleThreaded(NanoScheduler.java:271)
    at org.broadinstitute.gatk.utils.nanoScheduler.NanoScheduler.execute(NanoScheduler.java:245)
    at org.broadinstitute.gatk.engine.traversals.TraverseLociNano.traverse(TraverseLociNano.java:144)
    at org.broadinstitute.gatk.engine.traversals.TraverseLociNano.traverse(TraverseLociNano.java:92)
    at org.broadinstitute.gatk.engine.traversals.TraverseLociNano.traverse(TraverseLociNano.java:48)
    at org.broadinstitute.gatk.engine.executive.ShardTraverser.call(ShardTraverser.java:98)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: Transport endpoint is not connected
    at java.io.RandomAccessFile.readBytes0(Native Method)
    at java.io.RandomAccessFile.readBytes(RandomAccessFile.java:350)
    at java.io.RandomAccessFile.read(RandomAccessFile.java:385)
    at htsjdk.samtools.seekablestream.SeekableFileStream.read(SeekableFileStream.java:80)
    at htsjdk.tribble.TribbleIndexedFeatureReader$BlockStreamWrapper.read(TribbleIndexedFeatureReader.java:539)
    at java.io.InputStream.read(InputStream.java:101)
    at htsjdk.tribble.readers.PositionalBufferedStream.fill(PositionalBufferedStream.java:127)
    at htsjdk.tribble.readers.PositionalBufferedStream.read(PositionalBufferedStream.java:79)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
    at java.io.InputStreamReader.read(InputStreamReader.java:184)
    at htsjdk.tribble.readers.LongLineBufferedReader.fill(LongLineBufferedReader.java:140)
    at htsjdk.tribble.readers.LongLineBufferedReader.readLine(LongLineBufferedReader.java:298)
    at htsjdk.tribble.readers.LongLineBufferedReader.readLine(LongLineBufferedReader.java:354)
    at htsjdk.tribble.readers.LineReaderUtil$2.readLine(LineReaderUtil.java:77)
    ... 32 more
##### ERROR ------------------------------------------------------------------------------------------
Created 2015-05-17 19:30:08 | Updated 2015-05-17 20:28:52 | Tags: vqsr runtime-error
I'm currently doing a comparison between 100 Greek samples downsampled to 30x and 15x to explore the effects this has on our various tools. I'm only evaluating chromosome 6 for now, as I need the initial comparison results soon, and something went boom. Curiously enough it only affects the 15x version of the data and not the 30x. I suspect it might be something threading related? I'm going to retry with fewer threads and with no threads. Confirmed: same error with 31 threads; now testing in single-threaded mode.
INFO 17:29:10,799 HelpFormatter - The Genome Analysis Toolkit (GATK) v3.4-0-g7e26428, Compiled 2015/05/15 03:25:41
INFO 17:29:10,800 HelpFormatter - For support and documentation go to http://www.broadinstitute.org/gatk
INFO 17:29:10,806 HelpFormatter - Program Args: -T VariantRecalibrator -nt 32 -R /lustre/scratch113/resources/ref/Homo_sapiens/1000Genomes_hs37d5/hs37d5.fa -input greek_bams/15x/15x_annot.vcf.gz --recal_file greek_bams/15x_vqsr_snp_recal.vcf.gz --tranches_file greek_bams/15x_vqsr_snp_recal.tranches -mode SNP -rscriptFile greek_bams/15x.snp.plot -L 6 -l INFO -resource:hapmap,known=false,training=true,truth=true,prior=15.0 /lustre/scratch111/resources/variation/Homo_sapiens/grch37/gatk-bundle/2.5/hapmap_3.3.b37.vcf -resource:omni,known=false,training=true,truth=true,prior=12.0 /lustre/scratch111/resources/variation/Homo_sapiens/grch37/gatk-bundle/2.5/1000G_omni2.5.b37.vcf -resource:1000g,known=false,training=true,truth=false,prior=10.0 /lustre/scratch111/resources/variation/Homo_sapiens/grch37/gatk-bundle/2.5/1000G_phase1.snps.high_confidence.b37.vcf -resource:dbsnp,known=true,training=false,truth=false,prior=2.0 /lustre/scratch111/resources/variation/Homo_sapiens/grch37/gatk-bundle/2.8/b37//dbsnp_138.b37.vcf --target_titv 2.15 -an QD -an MQRankSum -an ReadPosRankSum -an FS -an InbreedingCoeff -an DP -an MQ -an SOR
INFO 17:29:10,811 HelpFormatter - Executing as mercury@hgs4b on Linux 3.8.0-44-generic amd64; Java HotSpot(TM) 64-Bit Server VM 1.7.0_25-b15.
INFO 17:29:10,811 HelpFormatter - Date/Time: 2015/05/17 17:29:10
INFO 17:29:10,812 HelpFormatter - --------------------------------------------------------------------------------
INFO 17:29:10,812 HelpFormatter - --------------------------------------------------------------------------------
INFO 17:29:11,493 GenomeAnalysisEngine - Strictness is SILENT
INFO 17:29:12,058 GenomeAnalysisEngine - Downsampling Settings: Method: BY_SAMPLE, Target Coverage: 1000
INFO 17:29:13,568 IntervalUtils - Processing 171115067 bp from intervals
WARN 17:29:13,570 IndexDictionaryUtils - Track input doesn't have a sequence dictionary built in, skipping dictionary validation
INFO 17:29:13,620 MicroScheduler - Running the GATK in parallel mode with 32 total threads, 1 CPU thread(s) for each of 32 data thread(s), of 32 processors available on this machine
INFO 17:29:13,758 GenomeAnalysisEngine - Preparing for traversal
INFO 17:29:13,765 GenomeAnalysisEngine - Done preparing for traversal
INFO 17:29:13,766 ProgressMeter - [INITIALIZATION COMPLETE; STARTING PROCESSING]
INFO 17:29:13,767 ProgressMeter - | processed | time | per 1M | | total | remaining
INFO 17:29:13,768 ProgressMeter - Location | sites | elapsed | sites | completed | runtime | runtime
INFO 17:29:13,878 TrainingSet - Found hapmap track: Known = false Training = true Truth = true Prior = Q15.0
INFO 17:29:13,880 TrainingSet - Found omni track: Known = false Training = true Truth = true Prior = Q12.0
INFO 17:29:13,882 TrainingSet - Found 1000g track: Known = false Training = true Truth = false Prior = Q10.0
INFO 17:29:13,884 TrainingSet - Found dbsnp track: Known = true Training = false Truth = false Prior = Q2.0
INFO 17:30:00,834 ProgressMeter - 6:45767321 437766.0 47.0 s 107.0 s 26.7% 2.9 m 2.1 m
INFO 17:30:17,026 VariantDataManager - QD: mean = 19.92 standard deviation = 5.86
INFO 17:30:17,226 VariantDataManager - MQRankSum: mean = 0.06 standard deviation = 0.52
INFO 17:30:17,410 VariantDataManager - ReadPosRankSum: mean = 0.25 standard deviation = 0.52
INFO 17:30:17,601 VariantDataManager - FS: mean = 2.65 standard deviation = 3.93
INFO 17:30:17,790 VariantDataManager - InbreedingCoeff: mean = -0.00 standard deviation = 0.19
INFO 17:30:17,985 VariantDataManager - DP: mean = 1432.94 standard deviation = 185.41
INFO 17:30:18,179 VariantDataManager - MQ: mean = 59.94 standard deviation = 0.72
INFO 17:30:18,374 VariantDataManager - SOR: mean = 0.78 standard deviation = 0.40
INFO 17:30:19,569 VariantDataManager - Annotations are now ordered by their information content: [DP, MQ, QD, FS, ReadPosRankSum, MQRankSum, SOR, InbreedingCoeff]
INFO 17:30:19,642 VariantDataManager - Training with 611167 variants after standard deviation thresholding.
INFO 17:30:19,648 GaussianMixtureModel - Initializing model with 100 k-means iterations...
INFO 17:30:30,839 ProgressMeter - 6:171052865 4569513.0 77.0 s 16.0 s 100.0% 77.0 s 0.0 s
INFO 17:31:00,843 ProgressMeter - 6:171052865 4569513.0 107.0 s 23.0 s 100.0% 107.0 s 0.0 s
INFO 17:31:30,847 ProgressMeter - 6:171052865 4569513.0 2.3 m 29.0 s 100.0% 2.3 m 0.0 s
INFO 17:31:35,265 VariantRecalibratorEngine - Finished iteration 0.
INFO 17:32:00,850 ProgressMeter - 6:171052865 4569513.0 2.8 m 36.0 s 100.0% 2.8 m 0.0 s
INFO 17:32:25,369 VariantRecalibratorEngine - Finished iteration 5. Current change in mixture coefficients = 1.82124
...
INFO 17:45:45,833 VariantRecalibratorEngine - Finished iteration 95. Current change in mixture coefficients = 0.00236
INFO 17:46:00,990 ProgressMeter - 6:171052865 4569513.0 16.8 m 3.7 m 100.0% 16.8 m 0.0 s
INFO 17:46:12,074 VariantRecalibratorEngine - Convergence after 98 iterations!
INFO 17:46:17,393 VariantRecalibratorEngine - Evaluating full set of 985716 variants...
INFO 17:46:17,455 VariantDataManager - Training with worst 0 scoring variants --> variants with LOD <= -5.0000.
INFO 17:46:27,147 GATKRunReport - Uploaded run statistics report to AWS S3
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR stack trace
Caused by: java.lang.IllegalArgumentException: No data found.
... 5 more
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A GATK RUNTIME ERROR has occurred (version 3.4-0-g7e26428):
##### ERROR
##### ERROR This might be a bug. Please check the documentation guide to see if this is a known problem.
##### ERROR If not, please post the error message, with stack trace, to the GATK forum.
##### ERROR
##### ERROR MESSAGE: Unable to retrieve result
##### ERROR ------------------------------------------------------------------------------------------
Created 2015-05-08 08:02:41 | Updated | Tags: vqsr
Hi, I have several variant files that were generated by other calling tools, with some annotations not defined by GATK. I wonder if I can also apply VQSR to these datasets, restricting the model to those tool-specific annotations (using "-an"). Are there any drawbacks to this approach?
Besides, I saw somewhere a suggestion to fit only a single Gaussian distribution to each annotation. Is this a proper way to perform variant recalibration?
Created 2015-05-06 07:26:13 | Updated | Tags: vqsr
Hi,
I saw "LowQual" in the QUAL field of some records in VCF files which have been through VQSR. I wonder whether these records were ever used in the VQSR. If they were used in that step but are not tagged with something like "VQSRTrancheSNP99.90to100.00", does that mean they are somehow OK in terms of VQSR?
Thank you!
Emma
Created 2015-04-20 23:41:13 | Updated | Tags: vqsr vcf gatk
Hi,
After using VQSR, I have a vcf output that contains sites labeled "." in the FILTER field. When I look at the vcf documentation (1000 genomes), it says that those are sites where filters have not been applied. Is this correct? I would like to know more about what these sites mean, exactly.
An example of such a site in my data is:
1 10439 . AC A 4816.02 . AC=10;AF=0.185;AN=54;BaseQRankSum=-4.200e-01;ClippingRankSum=-2.700e-01;DP=1690;FS=6.585;GQ_MEAN=111.04;GQ_STDDEV=147.63;InbreedingCoeff=-0.4596;MLEAC=17;MLEAF=0.315;MQ=36.85;MQ0=0;MQRankSum=-8.340e-01;NCC=0;QD=11.39;ReadPosRankSum=-8.690e-01;SOR=1.226 GT:AD:DP:GQ:PGT:PID:PL 0/1:22,14:36:99:0|1:10439_AC_A:200,0,825 0/0:49,0:49:0:.:.:0,0,533 0/0:92,0:92:0:.:.:0,0,2037 0/1:20,29:49:99:.:.:634,0,340 0/0:11,0:16:32:.:.:0,32,379 0/1:21,17:38:99:.:.:273,0,616 0/0:57,0:57:0:.:.:0,0,1028 0/0:58,0:58:0:.:.:0,0,1204 0/0:52,0:52:0:.:.:0,0,474 0/0:86,0:86:27:.:.:0,27,2537 0/1:13,24:37:99:.:.:596,0,220 0/1:14,34:48:99:.:.:814,0,263 0/0:86,0:86:0:.:.:0,0,865 0/0:61,0:61:0:.:.:0,0,973 0/0:50,0:50:0:.:.:0,0,648 0/0:40,0:40:0:.:.:0,0,666 0/0:79,0:79:0:.:.:0,0,935 0/0:84,0:84:0:.:.:0,0,1252 0/1:22,27:49:99:.:.:618,0,453 0/0:39,0:39:0:.:.:0,0,749 0/0:74,0:74:0:.:.:0,0,1312 0/1:13,18:31:99:.:.:402,0,281 0/0:41,0:44:99:.:.:0,115,1412 0/1:30,9:39:99:.:.:176,0,475 0/1:26,23:49:99:.:.:433,0,550 0/1:13,34:47:99:.:.:736,0,185 0/0:44,0:44:0:.:.:0,0,966
Thanks, Alva
Created 2015-04-01 14:17:52 | Updated | Tags: vqsr haplotypecaller best-practices gvcf
I am currently processing ~100 exomes and following the Best Practice recommendations for Pre-processing and Variant Discovery. However, there are a couple of gaps in the documentation, as far as I can tell, regarding exactly how to proceed with VQSR with exome data. I would be grateful for some feedback, particularly regarding VQSR. The issues are similar to those discussed on this thread: http://gatkforums.broadinstitute.org/discussion/4798/vqsr-using-capture-and-padding but my questions aren't fully-addressed there (or elsewhere on the Forum as far as I can see).
Prior Steps:
1) All samples processed with same protocol (~60Mb capture kit) - coverage ~50X-100X
2) Alignment with BWA-MEM (to whole genome)
3) Remove duplicates, indel-realignment, bqsr
4) HC to produce gVCFs (-ERC)
5) Genotype gVCFs
This week I have been investigating VQSR, which has generated some questions.
Q1) Which regions should I use from my data for building the VQSR model?
Here I have tried 3 different input datasets:
a) All my variant positions (11 million positions)
b) Variant positions that are in the capture kit (~326k positions) - i.e. used bedtools intersect to only extract variants from (a)
c) Variant positions that are in the capture kit with padding of 100nt either side (~568k positions) - as above but the bed has +/-100 on regions, plus uniq to remove duplicate variants that are now in more than one bed region

For each of the above, I have produced "sensitive" and "specific" datasets:
"Specific": --ts_filter_level 90.0 \ for both SNPs and INDELs
"Sensitive": --ts_filter_level 99.5 \ for SNPs, and --ts_filter_level 99.0 \ for INDELs (as suggested in the definitive FAQ https://www.broadinstitute.org/gatk/guide/article?id=1259)
I also wanted to see what effect, if any, the "-tranche" argument has - i.e. does it just allow for ease of filtering, or does it affect the model generated? It was not clear to me. I applied either 5 tranches or 6:
5-tranche: -tranche 100.0 -tranche 99.9 -tranche 99.5 -tranche 99.0 -tranche 90.0 \ for both SNPs and INDELs
6-tranche: -tranche 100.0 -tranche 99.9 -tranche 99.5 -tranche 99.0 -tranche 95.0 -tranche 90.0 \ for both SNPs and INDELs
To compare the results I then used bed intersect to get back to the variants that are within the capture kit (~326k, as before). The output is shown in the spreadsheet image below.
http://i58.tinypic.com/25gc4k7.png
What the table appears to show me, is that at the "sensitive" settings (orange background), the results are largely the same - the difference between "PASS" in the set at the bottom where all variants were being used, and the others is mostly accounted for by variants being pushed into the 99.9-100 tranche.
However, when trying to be specific (blue background), the difference between using all variants, or just the capture region/capture+100, is marked. Also surprising (at least for me) is the huge difference in "PASS" in cells E15 and E16, where the only difference was the number of tranches given to the model (note that there is very little difference in the analogous cells in Rows 5/6 and Rows 10/11).
Q2) Can somebody explain why there is such a difference in "PASS" rows between All-SPEC and the Capture(s)-Spec?
Q3) Can somebody explain why 6 tranches resulted in ~23k more PASSes than 5 tranches for the All-SPEC?
Q4) What does "PASS" mean in this context - a score =100? Is it an observation of a variant position in my data that has been observed in the "truth" set? It isn't actually described in the header of the VCF, though presumably the following corresponds: FILTER=<ID=VQSRTrancheSNP99.90to100.00+,Description="Truth sensitivity tranche level for SNP model at VQS Lod < -38682.8777">
Q5) Similarly, why do no variants fall below my lower tranche threshold of 90? Is it because they are all reliable at least to this level?
Q6) Am I just really confused? :-(
Created 2015-03-11 20:38:44 | Updated | Tags: vqsr best-practices variantfiltration
Hi all - I'm stumped and need your help. I'm following the GATK best practices for calling variants with HaplotypeCaller in GVCF mode. One of my samples is NA12878, among 119 others samples in my cohort. For some reason GATK is missing a bunch of variants in this sample that I can clearly see in IGV but are not listed in the VCF. I discovered that the variant is being filtered out..reason being VQSRTranchesSNP99.00to99.90. The genotype is homozygous variant, DP is 243, Qual is 524742.54 and its known in dbSNP. I suspect this is happening to other variants.
How do I adjust VQSR or how tranches are used and variants get placed in them? I suppose I need to fine tune my parameters... but I would think something as obvious as this variant would pass filtering.
Created 2015-02-25 02:12:39 | Updated | Tags: vqsr baserecalibrator haplotypecaller knownsites resources variant-recalibration
Hi, I have a general question about the importance of known VCFs (for BQSR and HC) and resource files (for VQSR). I am working on rice, for which the only known sites are the dbSNP VCF files, which are built on a genome version older than the reference genomic fasta file I am using. How does this affect the quality/accuracy of variants? How important is it to have the exact same build of the genome as the one on which the known VCF is based? Is it better to leave out the known sites for some of the steps than to use a version built on a different version of the genome for the same species? In other words, which steps (BQSR, HC, VQSR etc.) can be performed without the known sites/resource file? If the answers to the above questions are too detailed, can you please point me to any document, if available, that addresses this issue?
Thanks, NB
Created 2015-02-12 16:44:10 | Updated | Tags: vqsr
when I finished VQSR, I got a vcf file "recalibrated_variants.vcf",
[wubin]$ awk -F"\t" 'NR>161{print$7}' recalibrated_variants.vcf|sort|uniq -c
  65902 LowQual
3163999 PASS
 122377 VQSRTrancheINDEL90.00to99.00
  53509 VQSRTrancheINDEL99.00to99.90
   4589 VQSRTrancheINDEL99.90to100.00
 742359 VQSRTrancheSNP90.00to99.00
 368105 VQSRTrancheSNP99.00to99.90
 184493 VQSRTrancheSNP99.90to100.00
If I want 99% truth sites sensitivity, I can discard sites of
VQSRTrancheINDEL99.00to99.90
VQSRTrancheINDEL99.90to100.00
VQSRTrancheSNP99.00to99.90
VQSRTrancheSNP99.90to100.00
LowQual
and retain sites of
PASS
VQSRTrancheINDEL90.00to99.00
Am I right ?
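If that is the selection you settle on, one way to express the retain list exactly as written above is a small awk sketch; it keeps the header lines plus records whose FILTER value is in the list:

awk -F"\t" 'BEGIN{keep["PASS"]=1; keep["VQSRTrancheINDEL90.00to99.00"]=1} /^#/ || ($7 in keep)' recalibrated_variants.vcf > retained.vcf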
Created 2015-02-11 22:56:25 | Updated | Tags: vqsr variantannotator vqsr-indel
Hi,
In the best practices for vqsr in indel mode it is recommended to use the annotation SOR. However, when I try to add this annotation using VariantAnnotator it only adds it to the SNP calls not the indel calls. Does this mean SOR should not be used for vqsr in indel mode?
Thanks,
Kath
Created 2015-02-04 05:14:44 | Updated | Tags: variantrecalibrator vqsr vcf gatk
Hi,
I have generated vcf files using GenotypeGVCFs; each file contains variants corresponding to a different chromosome. I would like to use VQSR to perform the recalibration on all these data combined (for maximum power), but it seems that VQSR only takes a single vcf file, so I would have to combine my vcf files using CombineVariants. Looking at the documentation for CombineVariants, it seems that this tool always produces a union of vcfs. Since each vcf file is chromosome-specific, there are no identical sites across files. My questions are: Is CombineVariants indeed the appropriate tool for me to merge chromosome-specific vcf files, and is there any additional information that I should specify in the command-line when doing this? Do I need to run VariantAnnotator afterwards (I would assume not, since these vcfs were generated using GenotypeGVCFs and the best practices workflow more generally)? I just want to be completely sure that I am proceeding correctly.
Thank you very much in advance, Alva
Created 2015-02-02 21:24:31 | Updated | Tags: vqsr dbsnp vqslod genotypegvcfs gvcf gq-pl
From my whole-genome (human) BAM files, I want to obtain: For each variant in dbSNP, the GQ and VQSLOD associated with seeing that variant in my data.
Here's my situation using HaplotypeCaller -ERC GVCF followed by GenotypeGVCFs:

CHROM POS ID REF ALT
chr1 1 . A <NON_REF> # my data
chr1 1 . A T # dbSNP

I would like to know the confidence (in terms of GQ and/or PL) of calling A/A, A/T, or T/T. The <NON_REF> call isn't useful to me for the reason explained below.
How can I get something like this to work? Besides needing a GATK-style GVCF file for dbSNP, I'm not sure how GenotypeGVCFs behaves if "tricked" with a fake GVCF not from HaplotypeCaller.
My detailed reason for needing this is below:
For positions of known variation (those in dbSNP), the reference base is arbitrary. For these positions, I need to distinguish between three cases:
1. We have sufficient evidence to call position n as the variant genotype 0/1 (or 1/1) with confidence scores GQ=x1 and VQSLOD=y1.
2. We have sufficient evidence to call position n as homozygous reference (0/0) with confidence scores GQ=x2 and VQSLOD=y2.
3. We do not have sufficient evidence to make any call for position n.
I was planning to use VQSR because the annotations it uses seem useful to distinguish between case 3 and either of 1 and 2. For example, excessive depth suggests a bad alignment, which decreases our confidence in making any call, homozygous reference or not.
Following the best practices pipeline using HaplotypeCaller -ERC GVCF, I get ALTs with associated GQs and PLs, and GT=./.. However, GenotypeGVCFs removes all of these, meaning that whenever the call by HaplotypeCaller was ./. (due to lack of evidence for variation), it isn't carried forward for use in VQSR.
Consequently, this seems to distinguish only between these two cases:
1. We have sufficient evidence to call position n as the variant genotype 0/1 (or 1/1) with confidence scores GQ=x1 and VQSLOD=y1.
2. We do not have sufficient evidence to call position n as a variant (it's either 0/0 or unknown).
This isn't sufficient for my application, because we care deeply about the difference between "definitely homozygous reference" and "we don't know".
Douglas
Created 2015-01-27 20:25:42 | Updated | Tags: variantrecalibrator vqsr vcf gatk
Hi,
I ran VariantRecalibrator and ApplyRecalibration, and everything seems to have worked fine. I just have one question: if there are no reference alleles besides "N" in my recalibrate_SNP.recal and recalibrate_INDEL.recal files, and the "alt" field simply displays <VQSR>, does that mean that none of my variants were recalibrated? Just wanted to be completely sure. My original file (after running GenotypeGVCFs) has the same number of variants as the recalibrated vcf's.
Thanks, Alva
Created 2015-01-23 16:55:57 | Updated | Tags: vqsr haplotypecaller bam gatk genotypegvcfs
Hi,
I have recal.bam files for all the individuals in my study (these constitute 4 families), and each bam file contains information for one chromosome for one individual. I was wondering if it is best for me to pass all the files for a single individual together when running HaplotypeCaller, if it will increase the accuracy of the calling, or if I can just run HaplotypeCaller on each individual bam file separately.
Also, I was wondering at which step I should be using CalculateGenotypePosteriors, and if it will clean up the calls substantially. VQSR already filters the calls, but I was reading that CalculateGenotypePosteriors actually takes pedigree files, which would be useful in my case. Should I try to use CalculateGenotypePosteriors after VQSR? Are there other relevant filtering or clean-up tools that I should be aware of?
Alva
Created 2015-01-10 08:13:41 | Updated | Tags: unifiedgenotyper variantrecalibrator vqsr haplotypescore annotation
The documentation on the HaplotypeScore annotation reads:
HaplotypeCaller does not output this annotation because it already evaluates haplotype segregation internally. This annotation is only informative (and available) for variants called by Unified Genotyper.
The annotation used to be part of the best practices:
http://gatkforums.broadinstitute.org/discussion/15/best-practice-variant-detection-with-the-gatk-v1-x-retired
I will include it in the VQSR model for UG calls from low coverage data. Is this an unwise decision? I guess this is for myself to evaluate. I thought I would ask, in case I have missed something obvious.
Created 2014-12-02 23:21:07 | Updated | Tags: vqsr known-vcf
Hello, I am working on dog targeted sequencing data. In the VQSR step, I got the error below. For the record, I use canFam3.fa (from UCSC) as the reference and Canis_familiaris.newchr.vcf (Ensembl) as the resource file; the two files didn't cause errors in previous steps. Has anyone had a similar problem? Thanks for any tips!
INFO 18:07:03,352 HelpFormatter - --------------------------------------------------------------------------------
INFO 18:07:03,355 HelpFormatter - The Genome Analysis Toolkit (GATK) v3.3-0-g37228af, Compiled 2014/10/24 01:07:22
INFO 18:07:03,355 HelpFormatter - Copyright (c) 2010 The Broad Institute
INFO 18:07:03,355 HelpFormatter - For support and documentation go to http://www.broadinstitute.org/gatk
INFO 18:07:03,359 HelpFormatter - Program Args: -T VariantRecalibrator -R canFam3.fa -input ./variant_calling/FGC0805.target.raw.snps.indels.vcf -resource:dbsnp,known=false,training=true,truth=false,prior=12.0 Canis_familiaris.newchr.vcf -an DP -an QD -an FS -an MQRankSum -an ReadPosRankSum -mode SNP -tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 -recalFile ./variant_calling/FGC0805.target.recalibrate.SNP.recal -tranchesFile ./variant_calling/FGC0805.target.recalibrate.SNP.tranches -rscriptFile ./variant_calling/FGC0805.target.recalibrate.SNP.plots.R
INFO 18:07:03,363 HelpFormatter - Executing as wangfan1@bioapps on Linux 2.6.32-358.14.1.el6.x86_64 amd64; Java HotSpot(TM) 64-Bit Server VM 1.7.0_65-b17.
INFO 18:07:03,364 HelpFormatter - Date/Time: 2014/12/02 18:07:03
INFO 18:07:03,364 HelpFormatter - --------------------------------------------------------------------------------
INFO 18:07:03,364 HelpFormatter - --------------------------------------------------------------------------------
INFO 18:07:04,430 GenomeAnalysisEngine - Strictness is SILENT
INFO 18:07:05,183 GenomeAnalysisEngine - Downsampling Settings: Method: BY_SAMPLE, Target Coverage: 1000
INFO 18:07:06,187 GenomeAnalysisEngine - Preparing for traversal
INFO 18:07:06,217 GenomeAnalysisEngine - Done preparing for traversal
INFO 18:07:06,218 ProgressMeter - [INITIALIZATION COMPLETE; STARTING PROCESSING]
INFO 18:07:06,219 ProgressMeter - | processed | time | per 1M | | total | remaining
INFO 18:07:06,219 ProgressMeter - Location | sites | elapsed | sites | completed | runtime | runtime
INFO 18:07:06,231 TrainingSet - Found dbsnp track: Known = false Training = true Truth = false Prior = Q12.0
INFO 18:07:36,226 ProgressMeter - Starting 0.0 30.0 s 49.6 w 100.0% 30.0 s 0.0 s
##### ERROR ------------------------------------------------------------------------------------------
Created 2014-10-29 20:01:04 | Updated | Tags: vqsr random-forest
Hi there,
I hope I'm not being too forward here, but I was wondering if your group is still looking into implementing an RF (random forest) model for VQSR (in particular, I was hoping it would help with smaller datasets, in terms of the count of variant sites, for smaller-than-exome captures), or if you have abandoned it?
Best Regards,
Kurt
Created 2014-10-29 02:38:38 | Updated | Tags: vqsr
I'm trying to run VQSR on a vcf I just called with HaplotypeCaller. Here is my command:
java -Xmx32g -jar /Commands/GATK/GenomeAnalysisTK.jar \
 -T VariantRecalibrator \
 -R /Reference/ucsc.hg19.fasta \
 -input H3H5.HTC.raw.vcf \
 -resource:hapmap,known=false,training=true,truth=true,prior=15.0 /Reference/hapmap3.3.hg19.vcf \
 -resource:omni,known=false,training=true,truth=false,prior=12.0 /Reference/1000G.omni2.5.hg19.vcf \
 -resource:1000G,known=false,training=true,truth=false,prior=10.0 /Reference/1000G.ph1.SNP.HC.hg19.vcf \
 -resource:dbsnp,known=true,training=true,truth=false,prior=6.0 /Reference/dbsnp138.hg19.vcf \
 -an QD -an MQRankSum -an ReadPosRankSum -an FS \
 -mode SNP \
 -tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0 \
 -recalFile VQSR/H3H5.SNP.VQSR \
 -tranchesFile VQSR/H3H5.SNP.Tranches \
 -rscriptFile VQSR/H3H5.SNP.VQSR.R \
 -nt 16
Each time I try to run VQSR it gives me this error:
INFO 12:23:34,122 VariantRecalibratorEngine - Finished iteration 95. Current change in mixture coefficients = 0.00198
INFO 12:23:34,122 VariantRecalibratorEngine - Convergence after 95 iterations!
INFO 12:23:34,461 VariantRecalibratorEngine - Evaluating full set of 251205 variants...
INFO 12:23:34,476 VariantDataManager - Training with worst 0 scoring variants --> variants with LOD <= -5.0000.
INFO 12:23:39,194 GATKRunReport - Uploaded run statistics report to AWS S3
##### ERROR ------------------------------------------------------------------------------------------
In discussions of similar errors, I've seen that too little data or the MQ annotation can cause similar problems, but neither applies here: my dataset isn't small and I didn't use MQ.
I'm going to guess that it is something simple, but any help would be appreciated.
Chris
Created 2014-09-29 11:03:13 | Updated | Tags: variantrecalibrator vqsr
Hi GATK team,
our lab has a never ending discussion about running VQSR on related samples or having to exclude them. And i guess we need your help to settle this.
We have a multisample call (UG) run on ~1,500 samples, which contains all sorts of unrelated samples, trios and small families. Our statistician is trying to convince us to exclude all related samples, because they might skew the VQSR model. The biologists don't follow this argument, but we are unable to convince each other. Do related samples disturb the VQSR?
Even more specific - if we run VQSR on tumor/normal pairs - should we expect surprising behaviour of the model or can we just run the recalibration without worries?
Created 2014-09-18 12:12:57 | Updated | Tags: vqsr
Hello Geraldine,
First, thank you a lot for your amazing work on this forum. My project deals with discovering rare population-specific variants in human exomes, and I would like to know how the VQSR step would affect the discovery of these variants. Is it better to perform VQSR on all the populations together (420 individuals, but with a risk of cleaning out "true" rare population-specific variants) or to run it by population (between 30 and 100 individuals each, but I read that VQSR loses power with a reduced number of samples)?
Thank you for your help, Best Marie
Created 2014-09-11 15:52:18 | Updated | Tags: vqsr haplotypecaller qualbydepth genotypegvcfs
Hey there,
How is it possible that some of my SNP or indel calls are missing the QD tag? I'm following the recommended workflow and I've tested it for both RNAseq (hard filter complaints are how I noticed the tags were missing) and exome sequencing (VQSR). How can a hard filter on QD, applied to a line that doesn't actually have that tag, be considered to pass that threshold? I'm seeing a lot more indels in RNAseq where this kind of scenario happens as well.
Here's the command lines that I used :
# VQSR
java -Djava.io.tmpdir=$LSCRATCH -Xmx10g -jar /home/apps/Logiciels/GATK/3.2-2/GenomeAnalysisTK.jar -l INFO -T VariantRecalibrator \
 -an QD -an MQRankSum -an ReadPosRankSum -an FS -an MQ -mode SNP \
 -resource:1000G,known=false,training=true,truth=false,prior=10.0 ~/References/hg19/VQSR/1000G_phase1.snps.high_confidence.b37.noGL.converted.vcf \
 -resource:hapmap,known=false,training=true,truth=true,prior=15.0 ~/References/hg19/VQSR/hapmap_3.3.b37.noGL.nochrM.converted.vcf \
 -resource:omni,known=false,training=true,truth=false,prior=12.0 ~/References/hg19/VQSR/1000G_omni2.5.b37.noGL.nochrM.converted.vcf \
 -resource:dbsnp,known=true,training=false,truth=false,prior=6.0 ~/References/hg19/VQSR/dbsnp_138.b37.excluding_sites_after_129.noGL.nochrM.converted.vcf \
 -input snv.vcf -recalFile 96Exomes.HC.TruSeq.snv.RECALIBRATED -tranchesFile 96Exomes.HC.TruSeq.snv.tranches \
 -rscriptFile 96Exomes.HC.TruSeq.snv.plots.R -R ~/References/hg19/hg19.fasta --maxGaussians 4

java -Djava.io.tmpdir=$LSCRATCH -Xmx10g -jar /home/apps/Logiciels/GATK/3.2-2/GenomeAnalysisTK.jar -l INFO -T ApplyRecalibration \
 -ts_filter_level 99.0 -mode SNP -input snv.vcf \
 -recalFile 96Exomes.HC.TruSeq.snv.RECALIBRATED -tranchesFile 96Exomes.HC.TruSeq.snv.tranches \
 -o 96Exomes.HC.TruSeq.snv.recal.vcf -R ~/References/hg19/hg19.fasta
# HARD FILTER (RNASeq)
java -Djava.io.tmpdir=$LSCRATCH -Xmx2g -jar /home/apps/Logiciels/GATK/3.1-1/GenomeAnalysisTK.jar -l INFO -T VariantFiltration -R ~/References/hg19/hg19.fasta -V 96RNAseq.STAR.q1.vcf -window 35 -cluster 3 -filterName FS -filter "FS > 30.0" -filterName QD -filter "QD < 2.0" -o 96RNAseq.STAR.q1.FS30.QD2.vcf

Here are some examples for RNAseq:

chr1 6711349 . G A 79.10 PASS BaseQRankSum=-1.369e+00;ClippingRankSum=1.00;DP=635;FS=1.871;MLEAC=1;MLEAF=5.495e-03;MQ=60.00;MQ0=0;MQRankSum=-4.560e-01;ReadPosRankSum=-1.187e+00 GT:AD:DP:GQ:PL 0/0:8,0:8:24:0,24,280 ./.:0,0:0 0/0:9,0:9:21:0,21,248 0/0:7,0:7:21:0,21,196 0/0:7,0:7:21:0,21,226 0/0:8,0:8:21:0,21,227 0/0:8,0:8:21:0,21,253 0/0:7,0:7:21:0,21,218 0/0:9,0:9:27:0,27,282 1/1:0,0:5:15:137,15,0 0/0:2,0:2:6:0,6,47 0/0:28,0:28:78:0,78,860 0/0:7,0:7:21:0,21,252 0/0:2,0:2:6:0,6,49 0/0:5,0:5:12:0,12,152 0/0:3,0:3:6:0,6,90 0/0:4,0:4:12:0,12,126 0/0:9,0:9:21:0,21,315 0/0:7,0:7:21:0,21,256 0/0:7,0:7:21:0,21,160 0/0:8,0:8:21:0,21,298 0/0:20,0:20:60:0,60,605 0/0:2,0:2:6:0,6,49 0/0:2,0:2:6:0,6,67 0/0:2,0:2:6:0,6,71 0/0:14,0:14:20:0,20,390 0/0:7,0:7:21:0,21,223 0/0:7,0:7:21:0,21,221 0/0:4,0:4:12:0,12,134 0/0:2,0:2:6:0,6,54 ./.:0,0:0 0/0:4,0:4:9:0,9,118 0/0:8,0:8:21:0,21,243 0/0:6,0:6:15:0,15,143 0/0:8,0:8:21:0,21,244 0/0:7,0:7:21:0,21,192 0/0:2,0:2:6:0,6,54 0/0:13,0:13:27:0,27,359 0/0:8,0:8:21:0,21,245 0/0:7,0:7:21:0,21,218 0/0:12,0:12:36:0,36,354 0/0:8,0:8:21:0,21,315 0/0:7,0:7:21:0,21,215 0/0:2,0:2:6:0,6,49 0/0:10,0:10:24:0,24,301 0/0:7,0:7:21:0,21,208 0/0:7,0:7:21:0,21,199 0/0:2,0:2:6:0,6,47 0/0:3,0:3:9:0,9,87 0/0:2,0:2:6:0,6,73 0/0:7,0:7:21:0,21,210 0/0:8,0:8:22:0,22,268 0/0:7,0:7:21:0,21,184 0/0:7,0:7:21:0,21,213 0/0:5,0:5:9:0,9,135 0/0:7,0:7:21:0,21,200 0/0:4,0:4:12:0,12,118 0/0:7,0:7:21:0,21,232 0/0:7,0:7:21:0,21,232 0/0:7,0:7:21:0,21,217 0/0:8,0:8:21:0,21,255 0/0:9,0:9:24:0,24,314 0/0:8,0:8:21:0,21,221 0/0:9,0:9:24:0,24,276 0/0:9,0:9:21:0,21,285 0/0:3,0:3:6:0,6,90 0/0:2,0:2:6:0,6,57 0/0:13,0:13:20:0,20,385 0/0:2,0:2:6:0,6,48 0/0:11,0:11:27:0,27,317 0/0:8,0:8:21:0,21,315 0/0:9,0:9:24:0,24,284 0/0:7,0:7:21:0,21,228 0/0:14,0:14:33:0,33,446 0/0:2,0:2:6:0,6,64 0/0:2,0:2:6:0,6,72 0/0:7,0:7:21:0,21,258 0/0:10,0:10:27:0,27,348 0/0:7,0:7:21:0,21,219 0/0:9,0:9:21:0,21,289 0/0:20,0:20:57:0,57,855 0/0:4,0:4:12:0,12,146 0/0:7,0:7:21:0,21,205 0/0:12,0:14:36:0,36,1030 0/0:3,0:3:6:0,6,87 0/0:2,0:2:6:0,6,60 0/0:7,0:7:21:0,21,226 0/0:7,0:7:21:0,21,229 0/0:8,0:8:21:0,21,265 0/0:4,0:4:6:0,6,90 ./.:0,0:0 0/0:7,0:7:21:0,21,229 0/0:2,0:2:6:0,6,59 0/0:2,0:2:6:0,6,56

chr1 7992047 . T C 45.83 SnpCluster BaseQRankSum=1.03;ClippingRankSum=0.00;DP=98;FS=0.000;MLEAC=1;MLEAF=0.014;MQ=60.00;MQ0=0;MQRankSum=-1.026e+00;ReadPosRankSum=-1.026e+00 GT:AD:DP:GQ:PL ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,70 0/0:2,0:2:6:0,6,45 0/0:3,0:3:6:0,6,87 0/0:2,0:2:6:0,6,52 ./.:0,0:0 ./.:0,0:0 ./.:1,0:1 ./.:0,0:0 0/0:2,0:2:6:0,6,55 0/0:2,0:2:6:0,6,49 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,61 0/0:2,0:2:6:0,6,49 ./.:0,0:0 ./.:0,0:0 0/0:3,0:3:6:0,6,90 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,52 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,49 0/0:2,0:2:6:0,6,69 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,49 0/0:2,0:2:6:0,6,64 ./.:0,0:0 0/0:2,0:2:6:0,6,37 ./.:0,0:0 0/0:2,0:2:6:0,6,67 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,49 0/0:2,0:2:6:0,6,68 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,49 0/0:11,0:11:24:0,24,360 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,49 0/0:2,0:2:6:0,6,68 0/0:2,0:2:6:0,6,50 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,50 0/0:3,0:3:6:0,6,90 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 0/0:2,0:4:6:0,6,50 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 0/0:7,0:7:21:0,21,231 0/0:2,0:2:6:0,6,64 ./.:0,0:0 0/0:2,0:2:6:0,6,63 0/0:2,0:2:6:0,6,70 ./.:0,0:0 0/0:6,0:6:15:0,15,148 ./.:0,0:0 ./.:0,0:0 1/1:0,0:2:6:90,6,0 ./.:0,0:0 0/0:2,0:2:6:0,6,63 0/0:2,0:2:6:0,6,74 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,58 0/0:2,0:2:6:0,6,71 ./.:0,0:0 ./.:0,0:0 0/0:2,0:2:6:0,6,49

For Exome Seq now:

chr2 111878571 . C T 93.21 PASS DP=634;FS=0.000;MLEAC=1;MLEAF=5.319e-03;MQ=60.00;MQ0=0;VQSLOD=14.19;culprit=MQ GT:AD:DP:GQ:PL 0/0:8,0:8:24:0,24,243 0/0:4,0:4:9:0,9,135 0/0:7,0:7:18:0,18,270 0/0:7,0:7:21:0,21,230 0/0:16,0:16:48:0,48,542 0/0:8,0:8:21:0,21,315 0/0:6,0:6:18:0,18,186 0/0:5,0:5:15:0,15,168 0/0:6,0:6:15:0,15,225 0/0:10,0:10:30:0,30,333 0/0:7,0:7:21:0,21,239 0/0:6,0:6:18:0,18,202 0/0:6,0:6:15:0,15,225 0/0:7,0:7:21:0,21,225 0/0:8,0:8:24:0,24,272 0/0:5,0:5:15:0,15,168 1/1:0,0:13:13:147,13,0 0/0:2,0:2:6:0,6,73 0/0:8,0:8:24:0,24,256 0/0:14,0:14:4:0,4,437 0/0:3,0:3:9:0,9,85 0/0:4,0:4:12:0,12,159 0/0:7,0:7:21:0,21,238 0/0:5,0:5:15:0,15,195 0/0:7,0:7:15:0,15,225 0/0:12,0:12:36:0,36,414 0/0:4,0:4:12:0,12,156 0/0:7,0:7:0:0,0,190 0/0:2,0:2:6:0,6,64 0/0:7,0:7:21:0,21,242 0/0:7,0:7:21:0,21,234 0/0:8,0:8:24:0,24,267 0/0:7,0:7:21:0,21,245 0/0:7,0:7:21:0,21,261 0/0:6,0:6:18:0,18,204 0/0:8,0:8:24:0,24,302 0/0:5,0:5:15:0,15,172 0/0:9,0:9:24:0,24,360 0/0:18,0:18:51:0,51,649 0/0:5,0:5:15:0,15,176 0/0:2,0:2:6:0,6,70 0/0:14,0:14:33:0,33,495 0/0:4,0:4:9:0,9,135 0/0:8,0:8:21:0,21,315 0/0:4,0:4:12:0,12,149 0/0:4,0:4:6:0,6,90 0/0:10,0:10:27:0,27,405 0/0:3,0:3:6:0,6,90 0/0:4,0:4:12:0,12,133 0/0:14,0:14:6:0,6,431 0/0:4,0:4:12:0,12,151 0/0:5,0:5:15:0,15,163 0/0:3,0:3:9:0,9,106 0/0:7,0:7:21:0,21,237 0/0:7,0:7:21:0,21,268 0/0:8,0:8:21:0,21,315 0/0:2,0:2:6:0,6,68 ./.:0,0:0 0/0:3,0:3:9:0,9,103 0/0:7,0:7:21:0,21,230 0/0:3,0:3:6:0,6,90 0/0:9,0:9:26:0,26,277 0/0:7,0:7:21:0,21,236 0/0:5,0:5:15:0,15,170 ./.:1,0:1 0/0:15,0:15:45:0,45,653 0/0:8,0:8:24:0,24,304 0/0:6,0:6:15:0,15,225 0/0:3,0:3:9:0,9,103 0/0:2,0:2:6:0,6,79 0/0:7,0:7:21:0,21,241 0/0:4,0:4:12:0,12,134 0/0:3,0:3:6:0,6,90 0/0:5,0:5:15:0,15,159 0/0:4,0:4:12:0,12,136 0/0:5,0:5:12:0,12,180 0/0:11,0:11:21:0,21,315 0/0:13,0:13:39:0,39,501 0/0:3,0:3:9:0,9,103 0/0:8,0:8:24:0,24,257 0/0:2,0:2:6:0,6,73 0/0:8,0:8:24:0,24,280 0/0:4,0:4:12:0,12,144 0/0:4,0:4:9:0,9,135 0/0:8,0:8:24:0,24,298 0/0:4,0:4:12:0,12,129 0/0:5,0:5:15:0,15,184 0/0:2,0:2:6:0,6,62 0/0:2,0:2:6:0,6,65 0/0:9,0:9:27:0,27,337 0/0:7,0:7:21:0,21,230 0/0:7,0:7:21:0,21,239 0/0:5,0:5:0:0,0,113 0/0:11,0:11:33:0,33,369 0/0:7,0:7:21:0,21,248 0/0:10,0:10:30:0,30,395

Thanks for your help.

Created 2014-09-10 08:21:59 | Updated 2014-09-10 08:28:15 | Tags: vqsr gatk-vqsr-exome

Hi, I am working on non-human species data and I have used VQSR in the analysis pipeline as shown below. If VQSR is performed, should we still consider filtering the variants on base quality and mapping quality?

Created 2014-09-02 12:38:46 | Updated 2014-09-02 12:48:19 | Tags: vqsr bqsr gatk de-novo-mutation

Hi there, I have been using GATK to identify variants recently. I saw that BQSR is highly recommended, but I don't know whether it is still needed for de novo mutation calling. For example, I want to identify de novo mutations generated in the progeny by single seed descent methods in plants. As in the paper "The rate and molecular spectrum of spontaneous mutations in Arabidopsis thaliana", these spontaneously arising mutations may not be included in the known sites of variants. Based on documentation posted on the GATK website, BQSR assumes that all reference mismatches we see are errors and indicative of poor base quality. Under this assumption, these de novo mutations may be missed in the variant calling step. So in this situation, what should I do? Should I skip the BQSR step? Also, what should I do when I reach the VQSR step? I hope some GATK developers can help me with this. Thanks.

Created 2014-07-09 15:31:45 | Updated | Tags: vqsr

Hi, I have exome sequencing data on 90 samples, and my lab uses the VQSR filter to remove low quality variants. I was wondering if I should also perform a genotype-level filter by DP/GQ after this VQSR filtering step. Is there a protocol that is recommended, or some metrics I can look at to determine if such a step is required?

Thanks, Shweta

Created 2014-07-03 14:29:27 | Updated | Tags: vqsr r

java -jar -Djava.io.tmpdir=temp/ -Xmx4g GenomeAnalysisTK-2.8-1-g932cd3a/GenomeAnalysisTK.jar -T VariantRecalibrator -R hg19.fa -input NA19240.raw.SNPs.vcf -resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.sites.refmt.vcf -resource:omni,known=false,training=true,truth=false,prior=12.0 1000G_omni2.5.hg19.vcf -resource:dbsnp,known=true,training=false,truth=false,prior=6.0 dbsnp_138.b37.refmt.vcf -an QD -an MQ -an MQRankSum -an ReadPosRankSum -an FS -an DP -mode SNP -recalFile NA19240.raw.SNPs.recal -tranchesFile NA19240.raw.SNPs.tranches -rscriptFile NA19240.snp.plots.R

However, there is no NA19240.snp.plots.R.pdf generated, and I didn't find any error. When I try to run NA19240.snp.plots.R in R with source('NA19240.snp.plots.R'), there is an error:

Error: Use 'theme' instead. (Defunct; last used in version 0.9.1)

How can I fix it? Thanks!!

Created 2014-06-30 07:11:50 | Updated | Tags: vqsr exome

Hello, I've asked this question at the workshop in Brussels, and I would like to post it here: I'm working on an exome analysis of a trio. I would like to run VQSR filtering on the data. Since this is an exome project, there are not a lot of variants, and therefore, as I understand it, VQSR is not accurate. You suggest adding more data from 1000 Genomes or other published data. The families that I'm working on belong to a very small and specific population, and I'm afraid that adding published data will add a lot of noise. What do you think: should I add more published data? Change parameters such as maxGaussians? Do hard filtering?

Thanks, Maya

Created 2014-06-10 18:52:15 | Updated | Tags: vqsr boom

Hi there, So for the SNV model in VariantRecalibrator, I was using QD, MQRankSum, ReadPosRankSum, FS for a little while and then decided to add MQ back in since I saw that BP was updated recently and that was back in for BP. However, when I added MQ back in and it went to train the negative model, it said it was training with 0 variants (the same data set without MQ in the model yielded ~30,000 variants to be used in the negative training model). I have attached a text file that has the base command line, followed by the log from the unsuccessful run and then the successful run log. The version is 3.1-1 and there are approx 700 exomes.

Kurt

Created 2014-04-23 13:35:51 | Updated | Tags: vqsr

Hi, I am working on the VQSR step (using GATK 2.8.1) on variants called by UG from ~500 whole genomes of cattle. I run VariantRecalibrator as follows:

${JAVA} ${GATK}/GenomeAnalysisTK.jar -T VariantRecalibrator \
 -R ${REF} -input ${OUTPUT}/GATK-502-sorted.full.vcf.gz \
 -resource:HD,known=false,training=true,truth=true,prior=15.0 HD_bosTau6.vcf \
 -resource:JH_F1,known=false,training=true,truth=false,prior=10.0 F1_uni_idra_pp_trusted_only_LMQFS_bosTau6.vcf \
 -resource:dbsnp,known=true,training=false,truth=false,prior=6.0 BosTau6_dbSNP138_NCBI.vcf \
 -an QD -an MQRankSum -an ReadPosRankSum -an FS -an MQ -an DP -an HaplotypeScore \
 -mode SNP \
 -recalFile ${OUTPUT}/gatk_502_sorted_fixed.recal \
 -tranchesFile ${OUTPUT}/gatk_502_sorted_fixed.tranches \
 -rscriptFile ${OUTPUT}/gatk_502_sorted_fixed.plots.R
HD_bosTau6.vcf : ~770k markers on Illumina bovine high-density chip array
F1_uni_idra_pp_trusted_only_LMQFS_bosTau6.vcf : ~5.4M SNPs
The tranches pdf I got looks really weird, please check the attached file.
Then I tried varying the 'prior' score of the training VCFs, and also supplied an additional VCF file from another project as a training dataset, but I still got a similar tranches graph, e.g.:
-resource:HD,known=false,training=true,truth=true,prior=15.0 HD_bosTau6.vcf
-resource:JH_F1,known=false,training=true,truth=false,prior=12.0 F1_uni_idra_pp_trusted_only_LMQFS_bosTau6.vcf
-resource:DN,known=false,training=true,truth=false,prior=12.0 HC-Plat-FB.3in3.vcf.gz
-resource:dbsnp,known=true,training=false,truth=false,prior=6.0 BosTau6_dbSNP138_NCBI.vcf
HC-Plat-FB.3in3.vcf.gz : ~ 14M markers
It is worth mentioning that I ran the VariantRecalibrator step with the same parameters and training sets on another 50 whole genomes very recently, and it worked fine. Actually, I had run VariantRecalibrator on the 500 animals before, when I accidentally used an unfiltered VCF called by UG as a training set. Surprisingly, I got a good tranches graph that time, similar to the graph posted in the GATK Best Practices. Do you have any suggestions for me?
Thanks,
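A minimal sketch of one variation often tried when the tranche plot looks degenerate, reusing the shell variables from the post above; dropping the coverage-sensitive annotations (DP, HaplotypeScore) and capping the number of Gaussians are assumptions to experiment with, not official guidance:
# Hedged rerun: fewer annotations, simpler mixture model (output names are placeholders)
${JAVA} ${GATK}/GenomeAnalysisTK.jar -T VariantRecalibrator \
-R ${REF} -input ${OUTPUT}/GATK-502-sorted.full.vcf.gz \
-resource:HD,known=false,training=true,truth=true,prior=15.0 HD_bosTau6.vcf \
-resource:dbsnp,known=true,training=false,truth=false,prior=6.0 BosTau6_dbSNP138_NCBI.vcf \
-an QD -an MQRankSum -an ReadPosRankSum -an FS -an MQ \
--maxGaussians 4 \
-mode SNP \
-recalFile ${OUTPUT}/gatk_502_retry.recal \
-tranchesFile ${OUTPUT}/gatk_502_retry.tranches \
-rscriptFile ${OUTPUT}/gatk_502_retry.plots.R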
Created 2014-04-15 21:37:57 | Updated | Tags: variantrecalibrator vqsr
Hi,
Sorry to bother you guys. Just a few quick questions:
1) I'm attempting to download the bundles for VQSR and I noticed that they are for b37 or hg19. If I performed my initial assemblies and later SNP calls with hg38, will this cause an issue? Should I restart the process using either b37 or hg19?
2) I'm still a bit lost on what is considered "too few variants" for VQSR. As VQSR works best when there are thousands of variants, is this recommendation on a per-sample basis or for an entire project? I'm presently working with sequences from 80 unique samples for a single gene (~100kbp), and HaplotypeCaller detects on average ~300 raw SNPs. Would you recommend I hard-filter instead in my case?
Thanks,
Dave
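For a call set this small, a minimal hard-filtering sketch along the lines of the generic SNP thresholds in the GATK documentation may be the safer route; the file names are placeholders, and the thresholds are starting points to check against your own annotation distributions, not tuned values:
# Hedged sketch: tag SNPs failing any of the generic hard-filter thresholds
java -Xmx2g -jar GenomeAnalysisTK.jar \
-T VariantFiltration \
-R ref.fasta \
--variant raw_snps.vcf \
--filterExpression "QD < 2.0 || FS > 60.0 || MQ < 40.0 || MQRankSum < -12.5 || ReadPosRankSum < -8.0" \
--filterName "hard_snp_filter" \
-o filtered_snps.vcf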
Created 2014-04-10 17:59:18 | Updated | Tags: vqsr
hi, Geraldine, Thanks for the webinar! You mentioned that VQSR isn't necessary for a single exome. But would there be any drawback to running it on a single exome? I see that it helps to set up the PASS filter.
Created 2014-03-26 19:55:01 | Updated | Tags: vqsr indels resource-bundle
Hi all --
This should be a simple problem -- I cannot find a valid version of the Mills indel reference in the resource bundle, or anywhere else online!
All versions of the reference VCF are stripped of genotypes and do not contain a FORMAT column or any additional annotations.
I am accessing the Broad's public FTP, and none of the Mills VCF files in bundle folders 2.5 or 2.8 contain a full VCF. I understand that there are "sites only" VCF, but I can't seem to find anything else.
Can anyone link me to a version that contains the recommended annotations for indel VQSR, or that can be annotated?
Created 2014-02-26 15:33:35 | Updated 2014-02-26 16:17:03 | Tags: vqsr nan-lod
INFO 17:05:50,124 GenomeAnalysisEngine - Preparing for traversal
INFO 17:05:50,144 GenomeAnalysisEngine - Done preparing for traversal
INFO 17:05:50,144 ProgressMeter - [INITIALIZATION COMPLETE; STARTING PROCESSING]
INFO 17:05:50,145 ProgressMeter - Location processed.sites runtime per.1M.sites completed total.runtime remaining
INFO 17:05:50,166 TrainingSet - Found hapmap track: Known = false Training = true Truth = true Prior = Q15.0
INFO 17:05:50,166 TrainingSet - Found omni track: Known = false Training = true Truth = false Prior = Q12.0
INFO 17:05:50,167 TrainingSet - Found dbsnp track: Known = true Training = false Truth = false Prior = Q6.0
INFO 17:06:20,149 ProgressMeter - 1:216404576 2.04e+06 30.0 s 14.0 s 7.0% 7.2 m 6.7 m
INFO 17:06:50,151 ProgressMeter - 2:223579089 4.70e+06 60.0 s 12.0 s 15.2% 6.6 m 5.6 m
INFO 17:07:20,159 ProgressMeter - 4:33091662 7.43e+06 90.0 s 12.0 s 23.3% 6.4 m 4.9 m
INFO 17:07:50,161 ProgressMeter - 5:92527959 1.00e+07 120.0 s 11.0 s 31.4% 6.4 m 4.4 m
INFO 17:08:20,162 ProgressMeter - 7:1649969 1.30e+07 2.5 m 11.0 s 39.8% 6.3 m 3.8 m
INFO 17:08:50,168 ProgressMeter - 8:106975025 1.58e+07 3.0 m 11.0 s 48.4% 6.2 m 3.2 m
INFO 17:09:20,169 ProgressMeter - 10:101433561 1.87e+07 3.5 m 11.0 s 57.4% 6.1 m 2.6 m
INFO 17:09:50,170 ProgressMeter - 12:99334147 2.16e+07 4.0 m 11.0 s 66.1% 6.1 m 2.1 m
INFO 17:10:20,171 ProgressMeter - 15:30577012 2.41e+07 4.5 m 11.0 s 75.4% 6.0 m 88.0 s
INFO 17:10:52,409 ProgressMeter - 18:8763648 2.68e+07 5.0 m 11.0 s 83.5% 6.0 m 59.0 s
INFO 17:11:22,410 ProgressMeter - 22:31598896 2.97e+07 5.5 m 11.0 s 92.2% 6.0 m 27.0 s
INFO 17:11:33,135 VariantDataManager - QD: mean = 17.48 standard deviation = 9.03
INFO 17:11:33,516 VariantDataManager - HaplotypeScore: mean = 3.03 standard deviation = 2.62
INFO 17:11:33,882 VariantDataManager - MQ: mean = 52.40 standard deviation = 2.98
INFO 17:11:34,253 VariantDataManager - MQRankSum: mean = 0.31 standard deviation = 1.02
INFO 17:11:37,973 VariantDataManager - Training with 1024360 variants after standard deviation thresholding.
INFO 17:11:37,977 GaussianMixtureModel - Initializing model with 30 k-means iterations...
INFO 17:11:53,065 ProgressMeter - GL000202.1:10465 3.08e+07 6.0 m 11.0 s 99.8% 6.0 m 0.0 s
INFO 17:12:09,041 VariantRecalibratorEngine - Finished iteration 0.
INFO 17:12:23,066 ProgressMeter - GL000202.1:10465 3.08e+07 6.5 m 12.0 s 99.8% 6.5 m 0.0 s
INFO 17:12:30,492 VariantRecalibratorEngine - Finished iteration 5. Current change in mixture coefficients = 0.08178
INFO 17:12:51,054 VariantRecalibratorEngine - Finished iteration 10. Current change in mixture coefficients = 0.05869
INFO 17:12:53,072 ProgressMeter - GL000202.1:10465 3.08e+07 7.0 m 13.0 s 99.8% 7.0 m 0.0 s
INFO 17:13:11,207 VariantRecalibratorEngine - Finished iteration 15. Current change in mixture coefficients = 0.15237
INFO 17:13:23,073 ProgressMeter - GL000202.1:10465 3.08e+07 7.5 m 14.0 s 99.8% 7.5 m 0.0 s
INFO 17:13:31,503 VariantRecalibratorEngine - Finished iteration 20. Current change in mixture coefficients = 0.13505
INFO 17:13:51,768 VariantRecalibratorEngine - Finished iteration 25. Current change in mixture coefficients = 0.05729
INFO 17:13:53,080 ProgressMeter - GL000202.1:10465 3.08e+07 8.0 m 15.0 s 99.8% 8.0 m 0.0 s
INFO 17:14:11,372 VariantRecalibratorEngine - Finished iteration 30. Current change in mixture coefficients = 0.02607
INFO 17:14:23,081 ProgressMeter - GL000202.1:10465 3.08e+07 8.5 m 16.0 s 99.8% 8.5 m 0.0 s
INFO 17:14:24,730 VariantRecalibratorEngine - Convergence after 33 iterations!
INFO 17:14:27,037 VariantRecalibratorEngine - Evaluating full set of 3860460 variants...
INFO 17:14:51,111 VariantDataManager - Found 0 variants overlapping bad sites training tracks.
INFO 17:14:55,071 VariantDataManager - Additionally training with worst 1000 scoring variants --> 1000 variants with LOD <= -30.5662.
INFO 17:14:55,071 GaussianMixtureModel - Initializing model with 30 k-means iterations...
INFO 17:14:55,082 VariantRecalibratorEngine - Finished iteration 0.
INFO 17:14:55,095 VariantRecalibratorEngine - Convergence after 4 iterations!
INFO 17:14:55,096 VariantRecalibratorEngine - Evaluating full set of 3860460 variants...
INFO 17:15:02,071 GATKRunReport - Uploaded run statistics report to AWS S3
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A USER ERROR has occurred (version 2.7-2-g6bda569):
##### ERROR
##### ERROR This means that one or more arguments or inputs in your command are incorrect.
##### ERROR The error message below tells you what is the problem.
##### ERROR
##### ERROR If the problem is an invalid argument, please check the online documentation guide
##### ERROR (or rerun your command with --help) to view allowable command-line arguments for this tool.
##### ERROR
##### ERROR
##### ERROR Please do NOT post this error to the GATK forum unless you have really tried to fix it yourself.
##### ERROR
##### ERROR MESSAGE: NaN LOD value assigned. Clustering with this few variants and these annotations is unsafe. Please consider raising the number of variants used to train the negative model (via --numBad 3000, for example).
##### ERROR ------------------------------------------------------------------------------------------
My command is :
java -jar -Xmx4g GenomeAnalysisTK-2.7-2-g6bda569/GenomeAnalysisTK.jar -T VariantRecalibrator -R human_g1k_v37.fasta -input NA12878_snp.vcf -resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.sites.vcf -resource:omni,known=false,training=true,truth=false,prior=12.0 1000G_omni2.5.b37.sites.vcf -resource:dbsnp,known=true,training=false,truth=false,prior=6.0 dbsnp_132.b37.vcf -an QD -an HaplotypeScore -an MQ -an MQRankSum --maxGaussians 4 -mode SNP -recalFile NA12878_recal.vcf -tranchesFile NA12878_tranches -rscriptFile NA12878.plots.R
Before, I didn't use -maxGaussians 4; once an error suggested it, I tried it but still got this error message... And I think that numBad is already deprecated. I don't understand why this error happens. I'm running GATK UnifiedGenotyper on a 1000 Genomes high-coverage BAM file and then using VQSR to filter the SNPs.
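For reference, the error message's own suggestion can be applied directly. A hedged sketch of the same command with the negative-model training set enlarged (--numBad is taken verbatim from the error text above; whether it is the right knob in your exact GATK build is worth double-checking):
java -jar -Xmx4g GenomeAnalysisTK-2.7-2-g6bda569/GenomeAnalysisTK.jar -T VariantRecalibrator \
-R human_g1k_v37.fasta -input NA12878_snp.vcf \
-resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap_3.3.b37.sites.vcf \
-resource:omni,known=false,training=true,truth=false,prior=12.0 1000G_omni2.5.b37.sites.vcf \
-resource:dbsnp,known=true,training=false,truth=false,prior=6.0 dbsnp_132.b37.vcf \
-an QD -an HaplotypeScore -an MQ -an MQRankSum \
--maxGaussians 4 --numBad 3000 \
-mode SNP \
-recalFile NA12878_recal.vcf -tranchesFile NA12878_tranches -rscriptFile NA12878.plots.R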
Created 2014-02-26 15:13:15 | Updated 2014-02-26 15:35:08 | Tags: vqsr
Hi, I ran VQSR on the VCF file generated by UnifiedGenotyper (run with the -glm BOTH option, so the file contains both SNPs and indels), and 63412 out of 86840 variants were marked PASS. I have two questions:
1) The number of PASS SNPs differs when I count them in two ways: first on the original output of UG, and second after separating SNPs and indels into two files with an awk script:
grep -v "#" sample1_recalibrated_snps_PASS.vcf | grep -c "PASS"
63412
grep -v "#" sample1_merged_recalibrated_snps_raw_indels.vcf| grep -c "LowQual“
18725
Statistics for the separate SNP file (SNPs and indels were split into two files with an awk script):
The rest is fine; the only problem is that the PASS SNP counts differ, and I can't see why.
grep -v "^#" sample1_snp.vcf| grep -c "PASS"
63402
grep -v "^#" sample1_snp.vcf| grep -c "LowQual“
18725
2) I ran VQSR on the SNPs generated by UnifiedGenotyper and have a question about the VQSR tranche plot for SNPs. In my case the tranche plot shows no false positive calls at all (see the attached plot). How should I interpret having no FPs? It seems surprising.
When I tried to run VQSR on the indels in the same file, it didn't work; I had only 884 indels, which, from the VQSR documentation and other people's questions, I gather is too few.
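One likely source of the discrepancy above: grep -c "PASS" counts every line containing the substring PASS anywhere, not records whose FILTER column is exactly PASS. A hedged sketch of an exact-match count on the FILTER column (field 7 of a VCF), using the file name from the post:
# Records whose FILTER column is exactly PASS
awk -F'\t' '!/^#/ && $7 == "PASS"' sample1_merged_recalibrated_snps_raw_indels.vcf | wc -l
# Same, restricted to simple biallelic SNPs (single-base REF and ALT)
awk -F'\t' '!/^#/ && $7 == "PASS" && length($4) == 1 && length($5) == 1' sample1_merged_recalibrated_snps_raw_indels.vcf | wc -l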
Created 2014-02-24 22:22:31 | Updated | Tags: vqsr filter gatk
In my Picard/GATK pipeline, I already include the 1000G_gold_standard and dbsnp files in my VQSR step, and I am wondering if I should further filter the final VCF files. The two files I use are Mills_and_1000G_gold_standard.indels.hg19.vcf and dbsnp_137.hg19.vcf, downloaded from the GATK resource bundle.
I recently came across the NHLBI exome seq data http://evs.gs.washington.edu/EVS/#tabs-7, and the more complete 1000G variants ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release/20101123/interim_phase1_release/
These made me wonder if I should use these available VCFs to further filter my VCF files to remove the common SNPs. If so, can I use the "--mask" parameter in GATK's VariantFiltration to do the filtration? Examples below are copied from the documentation page:
java -Xmx2g -jar GenomeAnalysisTK.jar \
-R ref.fasta \
-T VariantFiltration \
-o output.vcf \
--variant input.vcf \
--filterExpression "AB < 0.2 || MQ0 > 50" \
--filterName "Nov09filters" \
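For what it's worth, VariantFiltration's --mask/--maskName arguments are the usual mechanism for labeling records that overlap another call set; a minimal sketch with placeholder file names (note that matching records are tagged with the filter name rather than removed, so downstream tools must still honor the FILTER column):
java -Xmx2g -jar GenomeAnalysisTK.jar \
-R ref.fasta \
-T VariantFiltration \
--variant input.vcf \
--mask known_common_sites.vcf \
--maskName "CommonVariant" \
-o output.vcf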
Created 2014-02-24 16:12:14 | Updated | Tags: variantrecalibrator vqsr applyrecalibration indels
Hi,
Given that there's no tranche plot generated for indels by VariantRecalibrator, how do we assess which tranche to pick for the next step, ApplyRecalibration? In SNP mode, I use tranche plots to evaluate the tradeoff between true and false positive rates at various tranche levels, but that's not possible with indels.
Thanks!
Grace
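One common workaround, sketched below: apply a deliberately conservative indel tranche and read the per-tranche counts directly from the plain-text .tranches file instead of a plot. The file names are placeholders and the 99.0 level is an assumption, not a recommendation:
java -Xmx4g -jar GenomeAnalysisTK.jar \
-T ApplyRecalibration \
-R ref.fasta \
-input raw_variants.vcf \
-recalFile indels.recal \
-tranchesFile indels.tranches \
--ts_filter_level 99.0 \
-mode INDEL \
-o recalibrated_indels.vcf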
Created 2014-02-20 15:30:32 | Updated | Tags: vqsr snpcalling
Hi - I have a question on how best to do VQSR on my samples. One of the readgroups for each individual is from genomic DNA and has very even coverage (around 10x), while the remaining 4-5 readgroups per individual are from Whole Genome Amplified (WGA) DNA. The WGA readgroups have very uneven coverage, ranging from 0 to over 1000 with a mean of around 30x (see attached image: blue is WGA and turquoise is genomic; the y-axis is depth and the x-axis is sliding windows along a chromosome). So I have WGA and genomic libs for each individual, and their coverage distributions are very different.
We tested different SNP calling (UnifiedGenotyper) and VQSR strategies, and at the moment we think a strategy where we call and VQSR the genomic and WGA libs separately and then combine them at the end works best. However, I am interested in what the GATK team would have done in such a case. The reason we do it separately is that we think VQSR on the combined libs would not be wise, since there is such a difference in depth (and strand bias) between the WGA and genomic readgroups. If there were a way in the VQSR step to incorporate read group differences into the algorithm, it could maybe solve such a problem - but as far as I can see there is no such thing (we used the ReadGroupblacklist option when calling the RGs separately); for VQSR there is no "include read group effects" kind of option. Or does it intrinsically include read group information in the machine learning step? By the way, we did BQSR, so the qualities would have been adjusted according to readgroup effects. But there still seems to be a noticeable difference between the VQSR results we get from WGA vs genomic read groups (for instance, WGA readgroup calls have consistently lower heterozygosity than genomic readgroup calls, which we think is due to strand bias). From the VQSR plots it is clear that many SNPs are excluded in the WGA RGs due to strand bias and DP; however, the bias is still visible after VQSR.
Sorry for the elaborate explanation - however - my question is how the GATK team would have handled SNPcalling and VQSR if the RG depth vary that much as in the attached image case.
Created 2014-01-21 13:32:12 | Updated | Tags: vqsr selectvariants vcf
I just wanted to select variants from a VCF with 42 samples. After 3 hours I got the following error. I had the same problem when I used VQSR. How can I fix this? Please advise. Thanks.
INFO 20:28:17,247 HelpFormatter - --------------------------------------------------------------------------------
INFO 20:28:17,250 HelpFormatter - The Genome Analysis Toolkit (GATK) v2.7-4-g6f46d11, Compiled 2013/10/10 17:27:51
INFO 20:28:17,250 HelpFormatter - Copyright (c) 2010 The Broad Institute
INFO 20:28:17,251 HelpFormatter - For support and documentation go to http://www.broadinstitute.org/gatk
INFO 20:28:17,255 HelpFormatter - Program Args: -T SelectVariants -rf BadCigar -R /groups/body/JDM_RNA_Seq-2012/GATK/bundle-2.3/ucsc.hg19/ucsc.hg19.fasta -V /hms/scratch1/mahyar/Danny/data/Overal-RGSM-42prebamfiles-allsites.vcf -L chr1 -L chr2 -L chr3 -selectType SNP -o /hms/scratch1/mahyar/Danny/data/Filter/extract_SNP_only3chr.vcf
INFO 20:28:17,256 HelpFormatter - Date/Time: 2014/01/20 20:28:17
INFO 20:28:17,256 HelpFormatter - --------------------------------------------------------------------------------
INFO 20:28:17,256 HelpFormatter - --------------------------------------------------------------------------------
INFO 20:28:17,305 ArgumentTypeDescriptor - Dynamically determined type of /hms/scratch1/mahyar/Danny/data/Overal-RGSM-42prebamfiles-allsites.vcf to be VCF
INFO 20:28:18,053 GenomeAnalysisEngine - Strictness is SILENT
INFO 20:28:18,167 GenomeAnalysisEngine - Downsampling Settings: Method: BY_SAMPLE, Target Coverage: 1000
INFO 20:28:18,188 RMDTrackBuilder - Creating Tribble index in memory for file /hms/scratch1/mahyar/Danny/data/Overal-RGSM-42prebamfiles-allsites.vcf
INFO 23:15:08,278 GATKRunReport - Uploaded run statistics report to AWS S3
##### ERROR stack trace
java.lang.NegativeArraySizeException
at org.broad.tribble.readers.AsciiLineReader.readLine(AsciiLineReader.java:97)
at org.broad.tribble.readers.AsciiLineReader.readLine(AsciiLineReader.java:116)
at org.broad.tribble.readers.AsciiLineReaderIterator$TupleIterator.advance(AsciiLineReaderIterator.java:84)
at org.broad.tribble.readers.AsciiLineReaderIterator$TupleIterator.advance(AsciiLineReaderIterator.java:73)
at net.sf.samtools.util.AbstractIterator.next(AbstractIterator.java:57)
at org.broad.tribble.readers.AsciiLineReaderIterator.next(AsciiLineReaderIterator.java:46)
at org.broad.tribble.readers.AsciiLineReaderIterator.next(AsciiLineReaderIterator.java:24)
at org.broad.tribble.AsciiFeatureCodec.decode(AsciiFeatureCodec.java:73)
at org.broad.tribble.AsciiFeatureCodec.decode(AsciiFeatureCodec.java:35)
at org.broad.tribble.AbstractFeatureCodec.decodeLoc(AbstractFeatureCodec.java:40)
at org.broad.tribble.index.IndexFactory$FeatureIterator.readNextFeature(IndexFactory.java:428)
at org.broad.tribble.index.IndexFactory$FeatureIterator.next(IndexFactory.java:390)
at org.broad.tribble.index.IndexFactory.createIndex(IndexFactory.java:288)
at org.broad.tribble.index.IndexFactory.createDynamicIndex(IndexFactory.java:278)
at org.broadinstitute.sting.gatk.refdata.tracks.RMDTrackBuilder.createIndexInMemory(RMDTrackBuilder.java:388)
at org.broadinstitute.sting.gatk.refdata.tracks.RMDTrackBuilder.loadIndex(RMDTrackBuilder.java:274)
at org.broadinstitute.sting.gatk.refdata.tracks.RMDTrackBuilder.getFeatureSource(RMDTrackBuilder.java:211)
at org.broadinstitute.sting.gatk.refdata.tracks.RMDTrackBuilder.createInstanceOfTrack(RMDTrackBuilder.java:140)
at org.broadinstitute.sting.gatk.datasources.rmd.ReferenceOrderedQueryDataPool.<init>(ReferenceOrderedDataSource.java:208)
at org.broadinstitute.sting.gatk.datasources.rmd.ReferenceOrderedDataSource.<init>(ReferenceOrderedDataSource.java:88)
at org.broadinstitute.sting.gatk.GenomeAnalysisEngine.getReferenceOrderedDataSources(GenomeAnalysisEngine.java:964)
at org.broadinstitute.sting.gatk.GenomeAnalysisEngine.initializeDataSources(GenomeAnalysisEngine.java:758)
at org.broadinstitute.sting.gatk.GenomeAnalysisEngine.execute(GenomeAnalysisEngine.java:284)
at org.broadinstitute.sting.gatk.CommandLineExecutable.execute(CommandLineExecutable.java:113)
at org.broadinstitute.sting.commandline.CommandLineProgram.start(CommandLineProgram.java:245)
at org.broadinstitute.sting.commandline.CommandLineProgram.start(CommandLineProgram.java:152)
at org.broadinstitute.sting.gatk.CommandLineGATK.main(CommandLineGATK.java:91)
##### ERROR ------------------------------------------------------------------------------------------
Created 2014-01-13 11:55:08 | Updated | Tags: vqsr haplotypecaller exome
We are running GATK HaplotypeCaller on ~50 whole exome samples. We are interested in rare variants, so we ran GATK in single-sample mode instead of the multi-sample mode you recommend; however, we would like to take advantage of VQSR. What would you recommend? Can we run VQSR on the output of GATK single-sample calling?
Additionally, we are likely to run extra batches of new exome samples. Should we wait until we have them all before running them through the GATK pipeline?
Created 2013-12-31 02:11:46 | Updated 2013-12-31 02:12:36 | Tags: vqsr best-practices non-human
Hello there! Thanks as always for the lovely tools, I continue to live in them.
• Been wondering how best to interpret my VQSLOD plots/tranches and subsequent VQSLOD scores. Attached are those plots, and a histogram of my VQSLOD scores as they are found across my replicate samples.
Methods Thus Far
We have HiSeq reads of "mutant" and wt fish, three replicates of each. The sequences were captured by a size-selected digest, so some have amazing coverage but not all. The mutant fish should contain de novo variants of an almost cancer-like variety (Ti/Tv independent).
As per my interpretation of the best practices, I did an initial calling of the variants (HaplotypeCaller) and filtered them very heavily, keeping only those that could be replicated across all samples. Then I reprocessed and called variants again with that first set as a truth set. I also used the zebrafish dbSNP as "known", though I lowered the Bayesian priors of each from the suggested human ones. The rest of my pipeline follows the best practices fairly closely, GATK version was 2.7-2, and my mapping was with BWA MEM.
My semi-educated guess..
The spike in VQSLOD I see for variants found across all six replicates is simply the rediscovery of those in my truth set, plus those with amazing coverage, which is probably fine/good. The part that worries me is the plots and tranches. The plots don't ever really show a section where the "known" set clusters with one set of obviously good variants but not with another. Is that OK, or does that, together with my inflated VQSLOD values, ring of poor practice?
Created 2013-11-14 17:19:47 | Updated | Tags: variantrecalibrator vqsr
I'm somewhat struggling with the new negative training model in 2.7. Specifically, this paragraph in the FAQ causes me trouble:
Finally, please be advised that while the default recommendation for --numBadVariants is 1000, this value is geared for smaller datasets. This is the number of the worst scoring variants to use when building the model of bad variants. If you have a dataset that's on the large side, you may need to increase this value considerably, especially for SNPs.
And so I keep thinking about how to scale it with my dataset, and I keep wanting to just make it a percentage of the total variants - which is of course the behavior that was removed! In the Version History for 2.7, you say
Because of how relative amounts of good and bad variants tend to scale differently with call set size, we also realized it was a bad idea to have the selection of bad variants be based on a percentage (as it has been until now) and instead switched it to a hard number
Can you comment a little further about how it scales? I'm assuming it's non-linear, and my intuition would be that smaller sets have proportionally more bad variants. Is that what you've seen? Do you have any other observations that could help guide selection of that parameter?
Created 2013-11-01 20:03:17 | Updated 2013-11-01 20:04:57 | Tags: vqsr gatk
I have the following entries in my VCF files output by VQSR. What does the "VQSRTrancheINDEL99.00to99.90" string mean? Did these variants fail the recalibration?
PASS
VQSRTrancheINDEL99.00to99.90
VQSRTrancheINDEL99.00to99.90
VQSRTrancheINDEL99.00to99.90
PASS
VQSRTrancheINDEL99.00to99.90
PASS
PASS
VQSRTrancheINDEL99.90to100.00
VQSRTrancheINDEL99.90to100.00
VQSRTrancheINDEL99.90to100.00
PASS
VQSRTrancheINDEL99.00to99.90
VQSRTrancheINDEL99.00to99.90
Below is the command I used:
##### ERROR ------------------------------------------------------------------------------------------
Exact command:
/usr/java/latest/bin/java -Xmx6g -XX:-UseGCOverheadLimit -Xms512m -jar /projects/apps/alignment/GenomeAnalysisTK/latest/GenomeAnalysisTK.jar -R /data2/reference/sequence/human/ncbi/37.1/allchr.fa -et NO_ET -K /projects/apps/alignment/GenomeAnalysisTK/latest/Hossain.Asif_mayo.edu.key -mode INDEL -T ApplyRecalibration -nt 4 -input /data2/secondary/multisample/Merged.variant.INDEL.vcf.temp -recalFile /data2/secondary/multisample/temp/Merged.variant.INDEL.recal -tranchesFile /data2/secondary/multisample/temp/Merged.variant.INDEL.tranches -o /data2/secondary/multisample/Merged.variant.filter.INDEL_2.vcf
Version of GATK : 1.7 and 1.6.7
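Records labeled VQSRTrancheINDEL99.00to99.90 did not pass: the label means the call's VQSLOD falls in the band between the 99.0 and 99.9 sensitivity tranche cutoffs, so it was filtered at the sensitivity level that was applied and would only be rescued by raising the level to 99.9. A hedged sketch for keeping just the PASS records (the output name is a placeholder):
java -Xmx2g -jar GenomeAnalysisTK.jar \
-T SelectVariants \
-R allchr.fa \
-V Merged.variant.filter.INDEL_2.vcf \
--excludeFiltered \
-o Merged.variant.PASS.INDEL.vcf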
Created 2012-10-11 18:12:51 | Updated 2013-01-07 19:13:46 | Tags: variantrecalibrator vqsr tranches
Hello,
I am running Variant Quality Score Recalibration on indels with the following command.
java -Xmx8g -jar /raid/software/src/GenomeAnalysisTK-1.6-9-g47df7bb/GenomeAnalysisTK.jar \
-T VariantRecalibrator \
-R /raid/references-and-indexes/hg19/bwa/hg19_lite.fa \
-input indel_output_all_chroms_combined.vcf \
--maxGaussians 4 -std 10.0 -percentBad 0.12 \
-resource:mills,known=true,training=true,truth=true,prior=12.0 /raid/Merlot/exome_pipeline_v1/ref/Mills_and_1000G_gold_standard.indels.hg19.sites.vcf \
-an QD -an FS -an HaplotypeScore -an ReadPosRankSum \
--ts_filter_level 95.0 \
-mode INDEL \
-recalFile /raid2/projects/STFD/indel_output_7.recal \
-tranchesFile /raid2/projects/STFD/indel_output_7.tranches \
-rscriptFile /raid2/projects/STFD/indel_output_7.plots.R
My tranches file reports only false positives for all tranches. When I run VQSR on SNPs, the tranches have many true positives and look similar to other tranche files reported on this site. I am wondering if anyone has similar experiences or suggestions?
Thanks
Created 2012-10-10 16:18:42 | Updated 2013-01-07 20:29:25 | Tags: vqsr inbreedingcoeff annotation exome community
I'm curious about the experience of the community at large with VQSR, and specifically with which sets of annotations people have found to work well. The GATK team's recommendations are valuable, but my impression is that they have fairly homogenous data types - I'd like to know if anyone has found it useful to deviate from their recommendations.
For instance, I no longer include InbreedingCoefficient with my exome runs. This was spurred by a case where previously validated variants were getting discarded by VQSR. It turned out that these particular variants were homozygous alternate in the diseased samples and homozygous reference in the controls, yielding an InbreedingCoefficient very close to 1. We decided that the all-homozygous case was far more likely to be genuinely interesting than a sequencing/variant calling artifact, so we removed the annotation from VQSR. In order to catch the all-heterozygous case (which is more likely to be an error), we add a VariantFiltration pass for 'InbreedingCoefficient < -0.8' following ApplyRecalibration.
In my case, I think InbreedingCoefficient isn't as useful because my UG/VQSR cohorts tend to be smaller and less diverse than what the GATK team typically runs (and to be honest, I'm still not sure we're doing the best thing). Has anyone else found it useful to modify these annotations? It would be helpful if we could build a more complete picture of these metrics in a diverse set of experiments.
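A minimal sketch of the follow-up VariantFiltration pass described above (file names are placeholders; note that the INFO key GATK writes is spelled InbreedingCoeff, so match whatever spelling your VCF header actually uses):
java -Xmx2g -jar GenomeAnalysisTK.jar \
-T VariantFiltration \
-R ref.fasta \
--variant recalibrated.vcf \
--filterExpression "InbreedingCoeff < -0.8" \
--filterName "AllHetFilter" \
-o recalibrated.ic_filtered.vcf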
Created 2012-09-27 09:20:35 | Updated 2012-10-02 16:33:21 | Tags: variantrecalibrator vqsr non-human dbsnp
Hello, I have a newly sequenced genome with some samples for this species. I would like to follow the Best Practices, but I don't have a dbSNP or anything similar. Could I use the variants from the samples as a dbSNP? For example, take the variants that coincide in all my samples and use them as a dbSNP?
Thanks!
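What is being described is essentially the bootstrap approach: make a confident first-pass call set (e.g. by hard filtering and intersecting across samples), then feed it back in as the training/truth resource. A hedged sketch in which every file name, prior, and annotation is a placeholder to adapt, not a recommendation:
java -jar GenomeAnalysisTK.jar -T VariantRecalibrator \
-R ref.fasta -input raw_calls.vcf \
-resource:firstpass,known=false,training=true,truth=true,prior=10.0 confident_shared_sites.vcf \
-an QD -an FS -an MQ -an ReadPosRankSum \
-mode SNP \
-recalFile boot.recal -tranchesFile boot.tranches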
Created 2012-09-04 04:53:54 | Updated 2012-09-06 14:28:57 | Tags: vqsr
We have data from targeted sequencing of genes (only two genes were targeted). We analyzed the data with the GATK pipeline. Since the data set is too small, we tried hard filtration on both SNPs and indels. At the same time, we sequenced the same sample by whole exome sequencing and filtered SNPs by VQSR. The quality of the VQSR results is much better than that of the hard-filtration results. For economic reasons, we need to develop an analysis pipeline for targeted sequencing; is it OK to incorporate the targeted sequencing data into an exome sequencing data set (merge the VCF files) and do VQSR? I'm just worried that the true sites in the targeted sequencing data have different features compared to the true sites in the whole exome sequencing data.
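If the two call sets were to be merged for a joint VQSR run, CombineVariants is the usual tool. A minimal sketch with placeholder names (UNIQUIFY keeps genotypes distinct if the same sample appears in both inputs):
java -jar GenomeAnalysisTK.jar \
-T CombineVariants \
-R ref.fasta \
--variant targeted.vcf \
--variant exome.vcf \
-genotypeMergeOptions UNIQUIFY \
-o merged_for_vqsr.vcf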
|
2015-06-29 23:11:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3966454267501831, "perplexity": 4669.490263357812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375090887.26/warc/CC-MAIN-20150627031810-00080-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://phys.libretexts.org/Bookshelves/Quantum_Mechanics/Introductory_Quantum_Mechanics_(Fitzpatrick)/03%3A_Fundamentals_of_Quantum_Mechanics/3.11%3A_Exercises
|
# 3.11: Exercises
1. Monochromatic light with a wavelength of 6000 Å passes through a fast shutter that opens for $$10^{-9}$$ sec. What is the subsequent spread in wavelengths of the no longer monochromatic light?
2. Calculate $$\langle x\rangle$$, $$\langle x^{\,2}\rangle$$, and $$\sigma_x$$, as well as $$\langle p\rangle$$, $$\langle p^{\,2}\rangle$$, and $$\sigma_p$$, for the normalized wavefunction $\psi(x) = \sqrt{\frac{2\,a^{\,3}}{\pi}}\,\frac{1}{x^{\,2}+a^{\,2}}.$ Use these to find $$\sigma_x\,\sigma_p$$. Note that $$\int_{-\infty}^{\infty} dx/(x^{\,2}+a^{\,2}) = \pi/a$$.
3. Classically, if a particle is not observed then the probability of finding it in a one-dimensional box of length $$L$$, which extends from $$x=0$$ to $$x=L$$, is a constant $$1/L$$ per unit length. Show that the classical expectation value of $$x$$ is $$L/2$$, the expectation value of $$x^{\,2}$$ is $$L^2/3$$, and the standard deviation of $$x$$ is $$L/\sqrt{12}$$.
4. Demonstrate that if a particle in a one-dimensional stationary state is bound then the expectation value of its momentum must be zero.
5. Suppose that $$V(x)$$ is complex. Obtain an expression for $$\partial P(x,t)/\partial t$$ and $$d/dt \int P(x,t)\,dx$$ from Schrödinger’s equation. What does this tell us about a complex $$V(x)$$?
6. $$\psi_1(x)$$ and $$\psi_2(x)$$ are normalized eigenfunctions corresponding to the same eigenvalue. If $\int_{-\infty}^\infty \psi_1^\ast\,\psi_2\,dx = c,$ where $$c$$ is real, find normalized linear combinations of $$\psi_1$$ and $$\psi_2$$ that are orthogonal to (a) $$\psi_1$$, (b) $$\psi_1+\psi_2$$.
7. Demonstrate that $$p=-{\rm i}\,\hbar\,\partial/\partial x$$ is an Hermitian operator. Find the Hermitian conjugate of $$a = x + {\rm i}\,p$$.
8. An operator $$A$$, corresponding to a physical quantity $$\alpha$$, has two normalized eigenfunctions $$\psi_1(x)$$ and $$\psi_2(x)$$, with eigenvalues $$a_1$$ and $$a_2$$. An operator $$B$$, corresponding to another physical quantity $$\beta$$, has normalized eigenfunctions $$\phi_1(x)$$ and $$\phi_2(x)$$, with eigenvalues $$b_1$$ and $$b_2$$. The eigenfunctions are related via \begin{aligned} \psi_1 &= (2\,\phi_1+3\,\phi_2) \left/ \sqrt{13},\right.\nonumber\\[0.5ex] \psi_2 &= (3\,\phi_1-2\,\phi_2) \left/ \sqrt{13}.\right.\nonumber\end{aligned} $$\alpha$$ is measured and the value $$a_1$$ is obtained. If $$\beta$$ is then measured and then $$\alpha$$ again, show that the probability of obtaining $$a_1$$ a second time is $$97/169$$.
9. Demonstrate that an operator that commutes with the Hamiltonian, and contains no explicit time dependence, has an expectation value that is constant in time.
10. For a certain system, the operator corresponding to the physical quantity $$A$$ does not commute with the Hamiltonian. It has eigenvalues $$a_1$$ and $$a_2$$, corresponding to properly normalized eigenfunctions \begin{aligned} \phi_1 &= (u_1+u_2)\left/\sqrt{2},\right.\nonumber\\[0.5ex] \phi_2 &= (u_1-u_2)\left/\sqrt{2},\right.\nonumber\end{aligned} where $$u_1$$ and $$u_2$$ are properly normalized eigenfunctions of the Hamiltonian with eigenvalues $$E_1$$ and $$E_2$$. If the system is in the state $$\psi=\phi_1$$ at time $$t=0$$, show that the expectation value of $$A$$ at time $$t$$ is $\langle A\rangle = \left(\frac{a_1+a_2}{2}\right) + \left(\frac{a_1-a_2}{2}\right)\cos\left(\frac{[E_1-E_2]\,t}{\hbar}\right).$
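As a quick check of two of these results, under the stated assumptions (the uniform classical density $$1/L$$ in Exercise 3, and the given change of basis in Exercise 8): $\langle x\rangle = \int_0^L \frac{x}{L}\,dx = \frac{L}{2},\qquad \langle x^{\,2}\rangle = \int_0^L \frac{x^{\,2}}{L}\,dx = \frac{L^{2}}{3},\qquad \sigma_x = \sqrt{\frac{L^{2}}{3}-\frac{L^{2}}{4}} = \frac{L}{\sqrt{12}}.$ For Exercise 8, measuring $$\beta$$ in the state $$\psi_1$$ yields $$\phi_1$$ with probability $$(2/\sqrt{13})^{2} = 4/13$$ and $$\phi_2$$ with probability $$9/13$$; re-measuring $$\alpha$$ then returns $$a_1$$ with probability $$4/13$$ or $$9/13$$ respectively, so $P(a_1) = \left(\frac{4}{13}\right)^{2} + \left(\frac{9}{13}\right)^{2} = \frac{16+81}{169} = \frac{97}{169}.$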
## Contributors and Attributions
• Richard Fitzpatrick (Professor of Physics, The University of Texas at Austin)
|
2022-01-27 16:49:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956623911857605, "perplexity": 141.4892519349406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305277.88/warc/CC-MAIN-20220127163150-20220127193150-00251.warc.gz"}
|
http://languagelog.ldc.upenn.edu/nll/?cat=91
|
## Philly accent
"An earful of that unmistakable Philly accent", CBS This Morning 7/26/2016:
Featuring Meredith Tamminga!
## Scotty: Sexist or just Scottish?
Wells Hansen writes:
I recently heard some grumbling at the local pub over the new Star Trek's "Scotty" referring to Lt Uhura as "lass" or "lassy". Have the writers of the most recent iteration of the ST franchise created a sexist or dismissive Scotty …or just a Scottish one?
I haven't seen the movie, and am not competent in contemporary Scottish sociolinguistics, much less those of the 23rd century. So I'll leave this one for the commenters.
## Whodunit sociolinguistics
In order to pass the time on the long flight back from Paris, I downloaded a set of classic Margery Allingham mysteries. And in reading them, I was struck now and again by interesting and unexpected linguistic trivia. Thus in Look to the Lady (1931) [emphasis added]:
Mr Campion was introduced, and there was a momentary awkward pause. A quick comprehending glance passed between him and the elder girl, a silent flicker of recognition, but neither spoke. Penny sensed the general embarrassment and came to the rescue, chattering on breathlessly with youthful exuberance.
'I forgot you didn't know Beth,' she said. 'She came just after you left. She and her people have taken Tye Hall. They're American, you know. It's glorious having neighbours again – or it would be if Aunt Di hadn't behaved so disgustingly. My dear, if Beth and I hadn't conducted ourselves like respectable human beings there'd be a feud.'
Beth laughed. 'Lady Pethwick doesn't like strangers,' she said, revealing a soft unexpectedly deep voice with just a trace of a wholly delightful New England accent.
Read the rest of this entry »
## "Linguistics has evolved"
From alice-is-thinking on tumblr, three weeks ago, forwarded by a 20-year-old correspondent:
http://alice-is-thinking.tumblr.com/post/145533947099/me-10-years-ago-i-never-use-online-abbreviations
The accompanying note:
this seems to be a rly common phenomenon among millennials who are especially active on social media – myself included
Read the rest of this entry »
## Language and identity
Rebecca Tan, "Accent Adaptation (On sincerity, spontaneity, and the distance between Singlish and English", The Pennsylvania Gazette 2/18/2016:
The most difficult thing about speaking in a foreign country isn’t adopting a new currency of speech, but using it as though it’s your own—not just memorizing your lines, but taking center stage and looking your audience in the eye. It is one thing to pronounce can’t so that it rhymes with ant instead of aunt, but a whole other order to do that without feeling like a fraud. […]
Lately I’ve been wondering if I’ve taken this whole language situation a tad too personally. Till now, I have kept my Singaporean inflection close at hand, for fear that attempts at Americanisms will be wrong—or, worse, permanent. Yet I am beginning to feel myself grow tired of this stage fright, tired of this senseless preoccupation with the packaging of ideas rather than the ideas themselves. Away from all these theatrics, the simple facts are that I am 9,500 miles away from home, and will be for four more years. I came here looking for change, and the words forming in my mouth to accommodate that change are not jokes, lies, or betrayals. They are real, not strange, and they are mine.
## Stigmatized varieties of Gaelic
St. Patrick's Day was last Thursday, but this afternoon I saw someone wandering around in a sparkly green top hat. In that spirit, I offer a post about perhaps-fictional attitudes towards a variety of Scottish Gaelic.
The content comes from Ken MacLeod's novella The Human Front, which the publisher's blurb calls "a comedic and biting commentary on capitalism and an exploration of technological singularity in a posthuman civilization". We learn that "the story follows John Matheson, an idealistic teenage Scottish guerilla warrior who must change his tactics and alliances with the arrival of an alien species". The protagonist tells us that
My mother, Morag, was a Glaswegian of Highland extraction, who had met and married my father after the end of the Second World War and before the beginning of the Third. She, somewhat contrarily, taught herself the Gaelic and used it in all her dealings with the locals, though they always thought her dialect and her accent stuck-up and affected. The thought of her speaking a pure and correct Gaelic in a Glasgow accent is amusing; her neighbours' attitude towards her well-meant efforts less so, being an example of the characteristic Highland inferiority complex so often mistaken for class or national consciousness. The Lewis accent itself is one of the ugliest under heaven, a perpetual weary resentful whine — the Scottish equivalent of Cockney — and the dialect thickly corrupted with English words Gaelicized by the simple expedient of mispronouncing them in the aforementioned accent.
Read the rest of this entry »
## Pygmalion updated
Peter Serafinowicz has updated George Bernard Shaw's dictum that "It is impossible for an Englishman to open his mouth without making some other Englishman hate or despise him", by re-voicing Donald Trump to demonstrate that emotional reactions to British accents are easily evoked in Americans as well. There's "Sophisticated Trump", posted on YouTube 12/17/2015:
Read the rest of this entry »
## Jeopardy gossip
The internet has been working hard at providing Deborah Cameron with material for a book she might write on attitudes towards women's voices. (Background: "Un justified", 7/8/2015; "Cameron v. Wolf" 7/27/2015.)
To see what I mean, sample the tweets for #JeopardyLaura, or read some of the old-media coverage, like "Is this woman the most annoying 'Jeopardy!' contestant ever?", Fox News 11/24/2015:
"Jeopardy!" contestant Laura Ashby is causing quite a stir on social media. The Marietta, Georgia, native isn't getting attention for her two-day winning streak but instead the tone of her voice.
Ashby first appeared on the competition show on Nov. 6 and when she returned this week the Internet went crazy over her voice.
Several tweeters went out of their way to exemplify Cameron's observation that "This endless policing of women’s language—their voices, their intonation patterns, the words they use, their syntax—is uncomfortably similar to the way our culture polices women’s bodily appearance":
Read the rest of this entry »
## UM/UH accommodation
Over the years, we've presented some surprisingly consistent evidence about age and gender differences in the rates of use of different hesitation markers in various Germanic languages and dialects. See the end of this post for a list; or see Martijn Wieling et al., "Variation and change in the use of hesitation markers in Germanic languages", forthcoming:
In this study, we investigate cross-linguistic patterns in the alternation between UM, a hesitation marker consisting of a neutral vowel followed by a final labial nasal, and UH, a hesitation marker consisting of a neutral vowel in an open syllable. Based on a quantitative analysis of a range of spoken and written corpora, we identify clear and consistent patterns of change in the use of these forms in various Germanic languages (English, Dutch, German, Norwegian, Danish, Faroese) and dialects (American English, British English), with the use of UM increasing over time relative to the use of UH. We also find that this pattern of change is generally led by women and more educated speakers.
For other reasons, I've done careful transcriptions (including disfluencies) of several radio and television interview programs, and it occurred to me to wonder whether such interviews show accommodation effects in UM/UH usage. As a first exploration of the question, I took a quick look at four interviews by Terry Gross of the NPR radio show Fresh Air: with Willie Nelson, Stephen King, Jill Soloway, and Lena Dunham.
Read the rest of this entry »
## What does "vocal fry" mean?
Julianne Escobedo Shepherd, "LOL Vocal Fry Rules U R All Dumb", Jezebel 7/30/2015:
This week, in shit-hot stuff happening on the internet, once-great feminist pundit Naomi Wolf wrote a column about how vocal fry is Keeping Women Down, and then other women across the internet rebutted her, rightly positing that when your dads bitch about the way you talk it’s because they’re just trying to not listen to you talk, period, so fuck your dads. […]
Vocal fry, as interpreted via California’s finest Calabasians, is a weapon of the young, disaffected woman, not a way to connote that they don’t care about anything, per se—just that specifically, they do not care about you. It is the speaking equivalent of “you ain’t shit,” an affectation of the perpetually unbothered. It’s a protective force between the pejorative You—dads, Sales types, bosses, basically anyone who represents the establishment—and the collective Us, which is to say, a misunderstood generation that inherited a whole landscape of bullshit because y’all didn’t fix it when you had the goddamn chance. It’s a way of communicating to you “We have this handled,” and also “Get off my dick.” It’s a proscenium of absolute dismissal and it is one of the most beautiful mannerisms millennials possess.
Read the rest of this entry »
## The great creak-off of 1969
In a comment on yesterday's post about Noam Chomsky's use of creaky voice ("And we have a winner…", 7/26/2015), Tara wrote
At the risk of sounding like I missed the joke: creakiness in a speaker Chomsky's age is much more likely to be physiological in origin than stylistic. I checked older footage of Chomsky, and he does seem to have been quite a bit less creaky in the 60s than today. But more importantly, listen to William F. Buckley in the same recording! I suspect that Noam has been out-creaked.
Read the rest of this entry »
## Open Letter to Terry Gross
Sameer ud Dowla Khan, a phonetician at Reed College, has written an open letter to Terry Gross, which starts like this:
While I am a loyal fan of your program, I’m very disappointed in your interview of David Thorpe and Susan Sankin from 7 July 2015. As both a phonetician who specializes in intonation, stress patterns, and voice quality, as well as a gay man, I found the opinions expressed in the interview to be not only inaccurate, but also offensive and damaging.
You can listen to that interview, and read the transcript, on the Fresh Air web site — "Filmmaker And Speech Pathologist Weigh In On What It Means To 'Sound Gay'":
Is there such a thing as a "gay voice"? For gay filmmaker David Thorpe, the answer to that question is complicated. "There is no such thing as a fundamentally gay voice," Thorpe tells Fresh Air's Terry Gross. But, he adds, "there is a stereotype and there are men, to a greater or lesser extent, who embody that stereotype."
In his new film, Do I Sound Gay?, Thorpe searches for the origin of that stereotype and documents his own attempts to sound "less gay" by working with speech pathologist Susan Sankin.
Read the rest of this entry »
## NYC rhoticity
"Can you spot wealthy New Yorkers by their ‘R” sounds?", Improbable Blog 6/19/2015:
Is it possible to gauge how wealthy a New Yorker might be just by the way they pronounce their /r/ s? A new paper in the Journal of English Linguistics investigates whether variations of rhoticity [viz. the prevalence, or lack of, the /r/ sound in speech] in wedding-consultants' speech could be correlated with the amount of money a bride states she is willing to spend on her wedding dress. That is to say, the amount of money she has at her disposal, used as a measure of her (perceived) social status. The paper, in the Journal of English Linguistics, June 2015, 43: 118-142, can be downloaded here for US $30: Maeve Eberhardt & Corinne Downs, "'(r) You Saying Yes to the Dress?' Rhoticity on a Bridal Reality Television Show", Journal of English Linguistics 2015. Or you can deprive Sage Publications of their $30, and get a report about the same research for free in an earlier version: Maeve Eberhardt & Corinne Downs, "A Department Store Study for the 21st Century: /r/ vocalization on TLC's Say Yes to the Dress", NWAV 2013.
Read the rest of this entry »
|
2016-07-27 07:43:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26647454500198364, "perplexity": 7052.670841243913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826736.89/warc/CC-MAIN-20160723071026-00021-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://mathhelpboards.com/threads/problem-of-the-week-242-jan-24-2017.20594/
|
# Problem of the Week #242 - Jan 24, 2017
Status
Not open for further replies.
#### Euge
##### MHB Global Moderator
Staff member
Here is this week's POTW:
-----
Suppose $\Gamma$ is a finite group of homeomorphisms of a Hausdorff space $M$ such that every non-identity element of $\Gamma$ is fixed point free. Show that $\Gamma$ acts on $M$ properly discontinuously.
-----
Remember to read the POTW submission guidelines to find out how to submit your answers!
#### Euge
##### MHB Global Moderator
Staff member
No one solved this week's problem. You can read my solution below.
Fix $p\in M$. For each $g\in \Gamma\setminus\{1\}$, $p \neq gp$. Since $M$ is Hausdorff, for every $g\in \Gamma\setminus\{1\}$, there are disjoint open sets $U_g \ni p$ and $V_g \ni gp$. Set $$W = \bigcap_{g\in \Gamma\setminus\{1\}} (U_g \cap g^{-1}(V_g))$$ Since each of the sets $U_g \cap g^{-1}(V_g)$ is an open neighborhood of $p$ and $\Gamma\setminus\{1\}$ is finite, $W$ is an open neighborhood of $p$. Given $g\neq 1$, $gW\cap W = \emptyset$. Indeed, if $gW\cap W \neq \emptyset$, then there are $w,w'\in W$ for which $gw = w'$. As $w'\in U_g$ and $w\in g^{-1}(V_g)$, we have $w' = gw \in U_g \cap V_g$, a contradiction. Consequently, $\Gamma$ acts on $M$ properly discontinuously.
Status
Not open for further replies.
|
2020-07-16 12:43:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579376339912415, "perplexity": 237.06918352884006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657169226.65/warc/CC-MAIN-20200716122414-20200716152414-00320.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-1-section-1-3-exponents-order-of-operations-and-variable-expressions-exercise-set-page-29/98
|
# Chapter 1 - Section 1.3 - Exponents, Order of Operations, and Variable Expressions - Exercise Set: 98
Yes, because in the order of operations, the addition comes after multiplication. Without the parentheses, you will solve 32 $\times$ 5 first, but they want you to solve 12 + 32 first, so they use the parentheses.
#### Work Step by Step
Explain why parentheses are necessary in the expression (12 + 32) $\times$ 5.
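Concretely, the two readings give different values: $(12 + 32) \times 5 = 44 \times 5 = 220$, while $12 + 32 \times 5 = 12 + 160 = 172$.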
|
2018-04-24 13:10:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6337236166000366, "perplexity": 1037.3939339922374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946688.88/warc/CC-MAIN-20180424115900-20180424135900-00211.warc.gz"}
|
http://mathhelpforum.com/discrete-math/26523-beginning-discrete-math-help-plz.html
|
# Math Help - Beginning Discrete math, help plz.
1. ## Beginning Discrete math, help plz.
I have a couple of questions, for one of which I more or less know what the proof should be; I just don't really know how to construct it.
1. My first question is how would I go about proving that m^2 = n^2
if and only if m = n or m = -n.
I know I have to prove it both ways, but am not quite sure how to construct it.
2. My next problem is proving that either 2x10^500 + 15 or 2x10^500 + 16
is not a perfect square.
Now I am fairly certain that perfect squares do not appear as consecutive integers, which would prove that one of them is not a square, but I am not sure exactly how to go about it.
The first question should have a fairly simple proof; these questions both come from chapters 1.6 and 1.7 of my book, so they shouldn't be too difficult.
Help appreciated. Extra points to those who can help me understand a little bit more.
2. Hello, p00ndawg!
Here is half of #2 . . .
$\text{2. Prove that neither }\,2\cdot10^{500} + 15\,\text{ nor }\,2\cdot10^{500} + 16\,\text{ is a perfect square.}$
We observe the following . . .
$\text{If }m\text{ is an even number, its square is a multiple of 4.}$
. . $m \,= \,2k\quad\Rightarrow\quad m^2 \,=\,4k^2$
$\text{If }n\text{ is odd, its square is one more than a multiple of 4.}$
. . $n^2 \:= \:(2k+1)^2\:=\:4k^2 + 4k + 1 \:=\:4k(k+1) + 1$
We have: . $M \;=\;2\cdot10^{500} + 15 \;=\;2\cdot10\!\cdot\!10^{499} + 12 + 3 \;=\;2\!\cdot\!2\cdot5\!\cdot\!10^{499} + 4\cdot 3 + 3$
. . Hence: . $M \;=\;4\left(5\!\cdot\!10^{499} +3\right) + 3$
Therefore, $M$ is three more than a mutiple of 4 . . . It cannot be a square.
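For the other half, the intuition from the original post can be made precise: two integers greater than $1$ that differ by $1$ cannot both be perfect squares, for if $n^2$ and $m^2$ were consecutive with $m > n \ge 1$, then
$1 \;=\; m^2 - n^2 \;=\; (m-n)(m+n) \;\ge\; m+n \;\ge\; 3,$
a contradiction. Hence at least one of $2\cdot10^{500}+15$ and $2\cdot10^{500}+16$ fails to be a perfect square, which is exactly what the problem asks.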
3. Originally Posted by p00ndawg
I have a couple of questions, for one of which I more or less know what the proof should be; I just don't really know how to construct it.
1. My first question is how would I go about proving that m^2 = n^2
if and only if m = n or m = -n.
show that $m^2 = n^2 \implies m = n \mbox{ or } m = -n$. we can get this by taking square roots on both sides of the equation: $\sqrt{m^2} = |m|$, so we get $|m| = |n|$, which means $m = \pm n$
then show that $m = n \mbox { or } m = -n \implies m^2 = n^2$. we have to deal with each case separately. we get the desired result if we square both sides of the equation
4. thank you guys for your help. What I am wondering, though, is how in the HECK you come up with some of these things?
Because I just started this class, should it be this awkward for me to do, or come up with, some of these proofs?
or rather, will it get easier as I practice?
Because I must tell you, some of the things that come up in these proofs are kind of ridiculous, and while they make PERFECT sense, I am just not quite sure how I could come up with them myself.
And to tell you the truth, I am by no means bad at math in any way, but this type of thinking really makes my head hurt.
5. Originally Posted by p00ndawg
And to tell you the truth, I am by no means bad at math in any way, but this type of thinking really makes my head hurt.
You may be dismayed to learn that “this type of thinking” is really what mathematics is all about. What you may have thought was mathematics was just preliminary to the real heart of mathematics.
6. Originally Posted by Plato
You may be dismayed to learn that “this type of thinking” is really what mathematics is all about. What you may have thought was mathematics was just preliminary to the real heart of mathematics.
Yea, I totally understand what you're saying. To clarify, what I mean to say is that I am rather good at standard mathematics.
Of course, I only just started learning this, so one would assume I won't be very good at it yet; I'm pretty sure it's also safe to assume that not many people were very good at the start of this type of mathematics either.
of course, I have been wrong before.
7. (1)
$\begin{array}{lrcl}
{} & m^2 &=& n^2\\
\Leftrightarrow & m^2-n^2 &=& 0\\
\Leftrightarrow & (m-n)(m+n) &=& 0\\
\Leftrightarrow & m-n=0 &\mbox{or}& m+n=0\\
\Leftrightarrow & m=n &\mbox{or}& m=-n
\end{array}$
8. thank you for the reply.
|
2015-04-19 22:11:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7891807556152344, "perplexity": 301.2678309004965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639482.79/warc/CC-MAIN-20150417045719-00152-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/124320-tangent-line.html
|
# Thread: tangent line
1. ## tangent line
need to find the slope of the tangent line to the curve y=lnx^8 at the point where x=e^2
2. Originally Posted by pebblesbambam
need to find the slope of the tangent line to the curve y=lnx^8 at the point where x=e^2
$\displaystyle y = \ln{x^8}$
$\displaystyle y = 8\ln{x}$
$\displaystyle y' =$ ?
once you determine the derivative, evaluate at $\displaystyle x = e^2$
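For completeness, carrying out the step left as an exercise:
$\displaystyle y' = \frac{8}{x} \quad \Rightarrow \quad y'\bigg|_{x=e^2} = \frac{8}{e^2} = 8e^{-2}$
which is the requested slope of the tangent line.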
|
2018-03-24 21:51:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8524935245513916, "perplexity": 779.5505164842889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651007.67/warc/CC-MAIN-20180324210433-20180324230433-00382.warc.gz"}
|
https://mattermodeling.stackexchange.com/questions/6538/molecular-orbitals-and-active-space
|
# Molecular orbitals and active space
During the same minute as asking this question, I also asked this at Quantum Computing SE.
In Qiskit, each qubit corresponds to one spin orbital. For example, the $$\ce{N2}$$ molecule would have 10 molecular orbitals, which correspond to 20 spin orbitals (alpha and beta spins) in the sto-3g basis set. In my opinion, this is the case because, for each atom contained in the molecule, we need to account for the orbitals included, which are 1s, 2s, and 2p for the nitrogen atom. However, in this case, how should one deal with the molecular orbitals in a situation like choosing the active space for the molecule?
• +1 especially for a question that attracted such a thorough and detailed answer! However, what does this question have to do with PySCF? And what exactly are you asking? What precisely do you want to know when you ask: "how could one deal with the molecular orbitals when in the situation like choosing the active space for the molecule?" and what does that have to do with STO-3G or Qiskit? Aug 9 at 15:54
• Thanks for the advice. I have already modified the question. Aug 13 at 14:18
• At this point I would recommend you ask a new question. Aug 13 at 14:37
## 1 Answer
As you said above, nitrogen has two s orbitals and one set of three p orbitals. However, one would typically freeze the chemically inactive 1s orbital, leaving you 5 electrons in 4 orbitals per atom, or 10 electrons in 8 orbitals for N$$_2$$; this is denoted $$(10e,8o)$$.
Going beyond the minimal STO-3G basis, in addition to the occupied molecular orbitals (which are poorly described by STO-3G), you will have a large number of unoccupied molecular orbitals that are necessary to describe electron correlation.
Choosing the active space for complete-active-space (CAS) self-consistent field (SCF) calculations is traditionally quite difficult, see e.g. Int. J. Quantum Chem. 111, 3329 (2011). The traditional CAS-SCF approach is equivalent to exact diagonalization of the real two-electron Hamiltonian in the space of electron configurations, i.e. the full configuration interaction (FCI) method. Since the cost of the FCI solution scales exponentially, the problem size is limited in practice to $$\lesssim(20e,20o)$$. Because of this limitation, it is not possible to include all valence orbitals in the CASSCF calculation already for diatomic molecules; however, the results one gets with various choices for the active orbitals may differ.
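For a sense of scale (a standard counting argument, added here for illustration): a CAS with $$m$$ spatial orbitals holding $$n_\alpha$$ spin-up and $$n_\beta$$ spin-down electrons spans $$\binom{m}{n_\alpha}\binom{m}{n_\beta}$$ determinants, so a half-filled $$(20e,20o)$$ space already contains $$\binom{20}{10}^2 = 184756^2 \approx 3.4\times10^{10}$$ of them.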
There are a number of alternative approaches to the FCI problem, such as the density matrix renormalization group (DMRG) method or various selected CI methods, which are able to push back the scaling wall and which have been found to be immensely useful for chemical applications. As a rule of thumb, these methods can handle strongly correlated electrons in three dimensions up to problem sizes of roughly (50e,50o). Future large-scale quantum computers are hailed as a panacea for chemistry, because they might be able to push back the problem size limit further to hundreds of electrons in hundreds of orbitals, at which point full valence orbital active spaces become feasible.
Because of the importance of the choice of the active orbitals in traditional CAS-SCF calculations, a multitude of ways in which to choose the active orbitals has been suggested. For general 3D molecular systems, a variety of strategies can be chosen. Using the canonical orbitals is not advised, because the unoccupied orbitals are not good estimates of excited states. Instead, common ways to pick the natural orbitals include employing natural orbitals from a lower level of theory; for instance, unrestricted Hartree-Fock natural orbitals (UNO) have been found to yield good orbitals for CAS calculations; MP2 and CISD are other commonly-used alternatives. Another established alternative are the so-called improved virtual orbitals, which are a way to improve the choice for the unoccupied orbitals to include in the active space. It is also possible to define the active space directly in terms of a reference set of gas-phase atomic orbitals, see J. Chem. Theory Comput. 2017, 13, 9, 4063–4078
In diatomic molecules like N$$_2$$, the choice can be done simply by symmetry: as the Hamiltonian does not depend on the angle measured around the bond, the orbitals are periodic with respect to this angle, $$\exp(i m \phi)$$ (see e.g. Int J. Quantum Chem. 119, e25968 (2019)), and the orbitals can be classified into $$\sigma$$ ($$m=0$$), $$\pi$$ ($$m=\pm 1$$), $$\delta$$ ($$m=\pm 2$$), $$\varphi$$ ($$m=\pm 3$$), etc orbitals. In N$$_2$$, the two s orbitals both yield a $$\sigma$$ orbital and the two p orbitals yield 2 $$\pi$$ and 2 $$\sigma$$ orbitals, yielding an active space of 2 $$\pi$$ and 4 $$\sigma$$ orbitals (a $$\sigma$$ orbital fits 2 electrons, while the $$\pi$$ and higher orbitals each fit 4 electrons).
However, the choice of the initial orbitals should matter less and less when the size of the active space is increased. If all the valence orbitals fit into the active space, the orbital optimization in CASSCF should converge quite rapidly even when begun from the canonical orbitals.
• +1 for another very thorough answer! I agree with your rule of thumb about being able to handle strongly correlated electrons up to roughly (50e,50o). We did (54e,54o) in this paper: aip.scitation.org/doi/10.1063/1.5063376, but it was not an easy calculation (and this particular active space was not the worst in terms of strong correlation). Aug 9 at 15:49
• Thanks for the thorough answer! That really helps a lot. I still have one technical question. Take the H2O molecule as an example: according to the MO diagram here commons.wikimedia.org/wiki/File:H2O-MO-Diagram.svg, I would like to choose the 3a1 & 1b1 orbitals to be not in the active space. In this case, does the PySCF package arrange the orbitals according to the MO diagram, so that I can just specify a list like [0,1,4,5] to be the active MO space, assuming that the ordering is from bottom to top in PySCF? Aug 13 at 14:02
• @ironmanaudi You can turn on symmetry in PySCF; some of the MO integrals will be zero. The active space can then be defined directly in terms of the number of orbitals in each symmetry block. This should be illustrated in the PySCF examples. Aug 13 at 16:40
|
2021-10-27 04:31:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6781749725341797, "perplexity": 714.9537790594818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588053.38/warc/CC-MAIN-20211027022823-20211027052823-00227.warc.gz"}
|
https://physics.stackexchange.com/questions/523736/does-the-no-hair-theorem-imply-that-dark-matter-cannot-have-any-charge/523738
|
# Does the no-hair theorem imply that dark matter cannot have any “charge”?
By "charge" I mean some kind of unique conserved property, similar to electric charge, color charge, baryon number, etc.
The no-hair theorem states that black holes can only have three macroscopic properties: mass, angular momentum, and electric charge. This implies that if dark matter had some unique "charge", we could throw all that dark matter into the black hole and the "charge" would disappear. Therefore, "charge" cannot be conserved, and this conclusion is independent of everything and anything that might describe "charge". For that matter, the no-hair theorem means that there can be no other "charge" for any kind of physical object, not just dark matter.
For example, suppose I postulate a fifth force between dark matter particles that obeys the inverse square law:
$$F_{new} = k \frac{c_1c_2}{r^2}$$
where $$c_1$$ and $$c_2$$ are the "charges" of the two objects. Then I must have that $$c_1$$ and $$c_2$$ cannot be conserved (in contrast to Newton's force law and Coulomb's law, where they are), or the theory is dead before it even begins.
Is this correct? If so, it sounds like a very powerful result, affecting not just the known but also the unknown.
• You can have additional charges, that are conserved even in the presence of black holes. Just add more forces, and you'll have analogues of electric charge. Also, even for charges whose conservation is violated by black holes, that isn't really an important effect in most models. – knzhou Jan 8 '20 at 0:40
|
2021-05-19 02:38:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7854416966438293, "perplexity": 367.1703680132415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991562.85/warc/CC-MAIN-20210519012635-20210519042635-00477.warc.gz"}
|
http://w3-org.9356.n7.nabble.com/Re-Status-of-RFC-1738-ftp-URI-scheme-td170988.html
|
# Re: Status of RFC 1738 -- 'ftp' URI scheme
## Re: Status of RFC 1738 -- 'ftp' URI scheme
On Thu, 04 Feb 2010 15:53:50 -0000, Alfred Hönes <[hidden email]> wrote:

> Folks,
> work already is in progress for an updated specification of
> the 'ftp' URI scheme, as one step in the process to let the
> obsoleted RFC 1738 actually become gravestone dead.
> Upon indication of support for the idea by an AD, I took over
> the pen from Paul Hoffman in December -- he was the last one
> to start a similar effort several years ago, but did not have
> the time to pursue it.

Perhaps this is a good time to point out (which I omitted to do earlier) that the news and nntp schemes have now been published as RFC 5538, so when this ftp scheme is finished perhaps we can finally put RFC 1738 to bed.

--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131   Web: http://www.cs.man.ac.uk/~chl   Email: [hidden email]
Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9   Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
## Re: Status of RFC 1738 -- 'ftp' URI scheme
Charles Lindsey scripsit:

> Perhaps this is a good time to point out (which I omitted to do earlier)
> that the news and nntp schemes have now been published as RFC 5538, so
> when this ftp scheme is finished perhaps we can finally put RFC 1738 to bed.

Has the file scheme become a separate RFC at last?

--
A: "Spiro conjectures Ex-Lax."                      John Cowan
Q: "What does Pat Nixon frost her cakes with?"      [hidden email]
  --"Jeopardy" for generative semanticists          http://www.ccil.org/~cowan
## Re: Status of RFC 1738 -- 'ftp' URI scheme
On Wed, Dec 29, 2010 at 8:18 AM, John Cowan wrote:

> Charles Lindsey scripsit:
>> Perhaps this is a good time to point out (which I omitted to do earlier)
>> that the news and nntp schemes have now been published as RFC 5538, so
>> when this ftp scheme is finished perhaps we can finally put RFC 1738 to bed.
>
> Has the file scheme become a separate RFC at last?

Unfortunately no - the IANA URI Schemes registry still says RFC 1738, and this I-D expired 5 years ago. Warts and all, an RFC for the file URI is sorely needed IMHO.

Cheers,
- Ira
## Re: Status of RFC 1738 -- 'ftp' URI scheme
On Wed, 29 Dec 2010 13:18:53 -0000, John Cowan <[hidden email]> wrote:

> Charles Lindsey scripsit:
>> Perhaps this is a good time to point out (which I omitted to do earlier)
>> that the news and nntp schemes have now been published as RFC 5538, so
>> when this ftp scheme is finished perhaps we can finally put RFC 1738 to bed.
>
> Has the file scheme become a separate RFC at last?

No, I suppose that needs to be fixed before this matter can finally be closed.

--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131   Web: http://www.cs.man.ac.uk/~chl   Email: [hidden email]
Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9   Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
## Re: Status of RFC 1738 -- 'ftp' URI scheme
29.12.2010 19:10, Ira McDonald wrote:

> On Wed, Dec 29, 2010 at 8:18 AM, John Cowan wrote:
>> Has the file scheme become a separate RFC at last?
>
> Unfortunately no - the IANA URI Schemes registry still says RFC 1738, and
> this I-D expired 5 years ago. Warts and all, an RFC for the file URI is
> sorely needed IMHO.
>
> Cheers,
> - Ira

Moreover, I've found the 'afs' URI scheme in RFC 1738 (and in the Provisional registry), which either (1) needs to be defined or (2) should be moved to the Historic registry. What the RFC says is:

   The following schemes have been proposed at various times, but this
   document does not define their syntax or use at this time. It is
   suggested that IANA reserve their scheme names for future definition:

   afs          Andrew File System global file names.
   mid          Message identifiers for electronic mail.
   cid          Content identifiers for MIME body parts.
   nfs          Network File System (NFS) file names.
   tn3270       Interactive 3270 emulation sessions.
   mailserver   Access to data available from mail servers.
   z39.50       Access to ANSI Z39.50 services.

Currently, all of them (except afs, mailserver and tn3270) have been specified or moved to Historic. There is a draft moving 'mailserver' to Historic too (https://datatracker.ietf.org/doc/draft-melnikov-mailserver-uri-to-historic/) and a draft specifying the tn3270 scheme (https://datatracker.ietf.org/doc/draft-yevstifeyev-tn3270-uri/). Only afs remains indefinite. So I think now it's time to discuss whether it should be moved to Historic as well. Option (2) seems more acceptable to me. Has anyone seen the Andrew File System widespread on the Internet? As far as I know, it was an experimental effort of Carnegie Mellon University, and I know of neither any publicly available resources that can be accessed by such a scheme nor any clients for it. You may find some discussions on this topic on the uri-review mailing list in November, as I remember (during the discussion about draft-melnikov-mailserver-uri-to-historic).

I would also like to know what is up with the 'modem' scheme. It is in the permanent registry, but has a note of being Historic. Was there something wrong with the defining docs, so that IANA did not have a clear definition of its actions for this scheme, or something else?

As for the 'file' scheme, I just wonder why this document (I mean Paul's draft) has not become an RFC like, e.g., his docs for the telnet and prospero schemes. Nevertheless, IMO we need to align it with the most current URI defining practices and make it an RFC too.

Finally, as for the 'ftp' scheme, I am strongly convinced there must be a clear definition of this widespread scheme - it is really needed.

So, taking everything into account, only if we resolve *all* these problems can we say that RFC 1738 is really obsolete, IMO.

Happy New Year to everybody,
Mykyta Yevstifeyev
## Re: Status of RFC 1738 -- 'ftp' URI scheme
Mykyta Yevstifeyev wrote:

> As for the 'file' scheme, I just wonder why this document (I mean Paul's
> draft) has not become an RFC

New RFCs tend to be treated as the latest & greatest guidance (or proposals for such) from the IETF, so there was a question of whether it made sense to create a new RFC that reproduces a relatively useless section of an otherwise obsolete RFC, just for the purpose of retiring the larger document.

If the section were fit to publish as-is, it would be OK, but it really needs a lot of work before it will be of much use to a present-day implementer who wants to know how to produce or utilize 'file:' URIs. When I was working on such a project, I was stymied by all kinds of issues, some of which I posted about: http://lists.w3.org/Archives/Public/uri/2004Jul/0013 (Discussion picked up again in May and August 2005; check the archives.)

Even if you take the "just document what works, don't fix what's broken" approach, having an RFC that's just a survey of soon-to-be-outdated implementations doesn't seem like the greatest plan.

Paul Hoffman washed his hands of the whole thing after being overwhelmed by the comments and disagreement over how prescriptive a new RFC should be. Larry Masinter offered to take it over, then he and I and Graham Klyne discussed using a wiki to manage the initial stages of a new draft. The idea was to let interested parties make edits directly until a reasonable degree of consensus/stability was reached, and then an editor (Larry probably) would take it over and submit a clean version as an Internet-Draft or RFC. The wiki is still up on my server, but no progress was made; Larry and I are the only ones who ever did anything with it, and we both lost interest pretty quickly.

http://offset.skew.org/wiki/URI/File_scheme/Plan_of_action
http://offset.skew.org/wiki/URI/File_scheme

'file' URIs being tied to OSes as they are (not so much a question of interoperability on the Internet), I'm not convinced there are enough people interested in the problem or who are having trouble with implementations to really justify a project to update the 'file:' URI spec.

I think it makes more sense to just leave the issue unresolved (pardon the pun). That means either leaving RFC 1738 alive, or just retiring the 'file:' URI spec altogether. That wouldn't preclude picking it up again in the future.

Mike
## Re: Status of RFC 1738 -- 'ftp' URI scheme
On Mon, 03 Jan 2011 04:32:11 -0000, Mike Brown <[hidden email]> wrote:

> http://offset.skew.org/wiki/URI/File_scheme/Plan_of_action
> http://offset.skew.org/wiki/URI/File_scheme
>
> 'file' URIs being tied to OSes as they are (not so much a question of
> interoperability on the Internet), I'm not convinced there are enough
> people interested in the problem or who are having trouble with
> implementations to really justify a project to update the 'file:' URI spec.
>
> I think it makes more sense to just leave the issue unresolved (pardon the
> pun). That means either leaving RFC 1738 alive, or just retiring the
> 'file:' URI spec altogether. That wouldn't preclude picking it up again in
> the future.

Having taken a quick look at those links, I think people were trying to make it more complicated than needs be. Essentially, a file URI is supposed to be meaningful within some limited namespace. Therefore, any convention which works within that namespace should work within the URI. If you are not familiar with the namespace in question, then don't use file URIs.

Put more simply, if you write file://host/blah-blah-blah, then if you write a POSIX open call open("blah-blah-blah", ...) then it ought to work, and you define the format of blah-blah-blah so that it is so. All you have to worry about then is where percent-coding is needed, and perhaps whether some relative URI is possible.

--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131   Web: http://www.cs.man.ac.uk/~chl   Email: [hidden email]
Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9   Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
## RE: Status of RFC 1738 -- 'ftp' URI scheme
> From: [hidden email] [mailto:[hidden email]] On Behalf Of Mykyta Yevstifeyev
> Sent: Friday, 31 December, 2010 00:17
> To: Ira McDonald
> Cc: John Cowan; Charles Lindsey; URI
>
> Currently, all of them (except afs, mailserver and tn3270) have been
> specified or moved to Historic. ... So I think now it's time to discuss
> whether it should be moved to Historic as well.
>
> Option (2) seems more acceptable to me. Has anyone seen the Andrew File
> System widespread on the Internet? As far as I know, it was an
> experimental effort of Carnegie Mellon University, and I know of neither
> any publicly available resources that can be accessed by such a scheme
> nor any clients for it.

AFS is not just an experimental system, and not used just at CMU. It was used at a number of universities and businesses in production, and sold as a product by Transarc (now part of IBM). When I worked at IBM in the late '80s and early '90s we used it, for example.

MIT's Project Athena ran a good-sized AFS cell. As far as I know it's still in use; MIT is still hosting pages about it.[1]

OpenAFS [2] is available for several platforms, and is active. There were 35 messages on its announce list last year.

Some casual searching suggests there are at least a few public AFS resources.

I don't know whether there are any clients that support afs-scheme URLs.

[1] See for example http://ist.mit.edu/services/web/afs/about.
[2] http://openafs.org/

--
Michael Wojcik
Principal Software Systems Developer, Micro Focus
## Re: Status of RFC 1738 -- 'ftp' URI scheme
In reply to this post by Charles Lindsey Charles Lindsey scripsit: > Put more simply, if you write file://host/blah-blah-blah, then if you > write a POSIX open call open("blah-blah-blah", ...) then it ought to > work, and you define the format of blah-blah-blah so that it is so. All > you ahve to worry about then is where percent-coding is needed, and > perhaps whether some relative URI is possible. So far so good. The messy part is what the authority means when it is neither empty nor "localhost", and clients differ widely in this respect. -- And it was said that ever after, if any John Cowan man looked in that Stone, unless he had a [hidden email] great strength of will to turn it to other http://ccil.org/~cowanpurpose, he saw only two aged hands withering in flame. --"The Pyre of Denethor"
## Re: Status of RFC 1738 -- 'ftp' URI scheme
Michael,

See my comment below.

03.01.2011 16:02, Michael Wojcik wrote:

> AFS is not just an experimental system, and not used just at CMU. It was
> used at a number of universities and businesses in production, and sold
> as a product by Transarc (now part of IBM). When I worked at IBM in the
> late '80s and early '90s we used it, for example.
>
> MIT's Project Athena ran a good-sized AFS cell. As far as I know it's
> still in use; MIT is still hosting pages about it.[1]
>
> OpenAFS [2] is available for several platforms, and is active. There
> were 35 messages on its announce list last year.
>
> Some casual searching suggests there are at least a few public AFS
> resources.
>
> I don't know whether there are any clients that support afs-scheme URLs.

That is the most important point. If there is no interest from implementors, do we need such a scheme? Moreover, if the scheme is Historic, that does not mean that it is *restricted* from being used. Historic status means that something is *not intended* to be used, rather than forbidden. So there is more sense in moving this scheme to Historic rather than specifying it.

Mykyta.

> [1] See for example http://ist.mit.edu/services/web/afs/about.
> [2] http://openafs.org/
## RE: Status of RFC 1738 -- 'ftp' URI scheme
> From: Mykyta Yevstifeyev [mailto:[hidden email]]
> Sent: Tuesday, 04 January, 2011 01:09
>
> 03.01.2011 16:02, Michael Wojcik wrote:
>>
>> I don't know whether there are any clients that support afs-scheme URLs.
>
> That is the most important point. If there is no interest from
> implementors, do we need such a scheme? Moreover, if the scheme is
> Historic, that does not mean that it is *restricted* from being used.
> Historic status means that something is *not intended* to be used,
> rather than forbidden. So there is more sense in moving this scheme to
> Historic rather than specifying it.

Agreed. Unless someone can identify a client that uses afs-scheme URLs, relegating it to Historic status appears to be appropriate.

--
Michael Wojcik
Principal Software Systems Developer, Micro Focus
## Re: Status of RFC 1738 -- 'ftp' URI scheme
On Mon, 03 Jan 2011 14:38:05 -0000, John Cowan <[hidden email]> wrote:

> Charles Lindsey scripsit:
>
>> Put more simply, if you write file://host/blah-blah-blah, then if you
>> write a POSIX open call open("blah-blah-blah", ...) then it ought to
>> work, and you define the format of blah-blah-blah so that it is so. All
>> you have to worry about then is where percent-coding is needed, and
>> perhaps whether some relative URI is possible.
>
> So far so good. The messy part is what the authority means when it is
> neither empty nor "localhost", and clients differ widely in this respect.

If the authority identifies a host (e.g. a domain name with an A record, or some local name known from /etc/hosts), then the question is whether the open command in that host understands "blah-blah-blah". I think any standard should forbid anything other than such domain/host names. Anything else would be regarded as a 'local convention' outwith the standard - and good luck to you if it happens to work in your environment.

--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131   Web: http://www.cs.man.ac.uk/~chl   Email: [hidden email]
Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9   Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
## Re: Status of RFC 1738 -- 'ftp' URI scheme
Charles Lindsey scripsit:

> If the authority identifies a host (e.g. a domain name with an A record,
> or some local name known from /etc/hosts)

Well, Internet Explorer interprets file://foo/bar/baz as the UNC name \\foo\bar\baz, which strikes me as extremely sensible, and I wish every browser on Windows did it. (Chrome does, Firefox doesn't.) Technically "foo" is not a hostname but the published name of an externally exposed portion of a file tree.

> then the question is whether the open command in that host understands
> "blah-blah-blah".

No doubt, but how does one find out? Lynx uses anonymous FTP if the hostname is not empty or "localhost"; is that conformant, given that anonymous FTP typically has its own root?

--
John Cowan    [hidden email]    http://ccil.org/~cowan
The penguin geeks is happy / As under the waves they lark
The closed-source geeks ain't happy / They sad cause they in the dark
But geeks in the dark is lucky / They in for a worser treat
One day when the Borg go belly-up / Guess who wind up on the street.
## Re: Status of RFC 1738 -- 'ftp' URI scheme
On Thu, 06 Jan 2011 16:36:23 -0000, John Cowan <[hidden email]> wrote:

> Charles Lindsey scripsit:
>
>> If the authority identifies a host (e.g. a domain name with an A record,
>> or some local name known from /etc/hosts)
>
> Well, Internet Explorer interprets file://foo/bar/baz as the UNC name
> \\foo\bar\baz, which strikes me as extremely sensible, and I wish every
> browser on Windows did it. (Chrome does, Firefox doesn't.) Technically
> "foo" is not a hostname but the published name of an externally exposed
> portion of a file tree.

That looks like a typical Microsoft non-standard invention. It is certainly not in the spirit of the main URI standard, and it was not the intention of RFC 1738. And how do you indicate that 'foo' really IS a host name, as intended by 1738? It seems like an aberration we should not give any official support to.

>> then the question is whether the open command in that host understands
>> "blah-blah-blah".
>
> No doubt, but how does one find out? Lynx uses anonymous FTP if the
> hostname is not empty or "localhost"; is that conformant, given that
> anonymous FTP typically has its own root?

Well, the file scheme is not supposed to be an alternative to the ftp scheme. Given that 1738 was written with local networks in mind rather than the global internet, I think file://host/filename... should normally be seen as an invitation to mount that file from that host using NFS.

--
Charles H. Lindsey ---------At Home, doing my own thing------------------------
Tel: +44 161 436 6131   Web: http://www.cs.man.ac.uk/~chl   Email: [hidden email]
Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9   Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
## Re: Status of RFC 1738 -- 'ftp' URI scheme
Charles Lindsey scripsit:

>> Well, Internet Explorer interprets file://foo/bar/baz as the UNC name
>> \\foo\bar\baz, which strikes me as extremely sensible, and I wish every
>> browser on Windows did it. (Chrome does, Firefox doesn't.) Technically
>> "foo" is not a hostname but the published name of an externally exposed
>> portion of a file tree.
>
> That looks like a typical Microsoft non-standard invention. It is
> certainly not in the spirit of the main URI standard, and it was not the
> intention of RFC 1738. And how do you indicate that 'foo' really IS a
> host name, as intended by 1738? It seems like an aberration we should not
> give any official support to.

It seems to me to fit perfectly with the notion of a "reg-name" in RFC 3986 Section 3.2.2. Relevant snippets:

"In other cases, the data within the host component identifies a registered name that has nothing to do with an Internet host. We use the name 'host' for the ABNF rule because that is its most common purpose, not its only purpose."

"A host identified by a registered name is a sequence of characters usually intended for lookup within a locally defined host or service name registry, though the URI's scheme-specific semantics may require that a specific registry (or fixed name table) be used instead."

Since the whole "file" scheme is OS-specific anyway, I see no problem with saying that the specific registry for the "file" scheme on Windows hosts is WINS first and then DNS, since WINS client support is universally available on Windows and NFS (or AFS or whatever) is quite rare. In addition, the normal pattern for distributed file systems other than SMB is to mount remote hosts in the local file system, not to reference arbitrary hosts by their DNS names. (There is already a separate scheme for CIFS, the successor to SMB, where arbitrary references are more common.)

--
John Cowan    http://ccil.org/~cowan    [hidden email]
The Penguin shall hunt and devour all that is crufty, gnarly and
bogacious; all code which wriggles like spaghetti, or is infested with
blighting creatures, or is bound by grave and perilous Licences shall it
capture. And in capturing shall it replicate, and in replicating shall
it document, and in documentation shall it bring freedom, serenity and
most cool froodiness to the earth and all who code therein.
    --Gospel of Tux
## RE: Status of RFC 1738 -- 'ftp' URI scheme
> From: [hidden email] [mailto:[hidden email]] On Behalf Of Charles Lindsey
> Sent: Friday, 07 January, 2011 08:48
>
> On Thu, 06 Jan 2011 16:36:23 -0000, John Cowan <[hidden email]> wrote:
>
>> Charles Lindsey scripsit:
>>
>>> If the authority identifies a host (e.g. a domain name with an A
>>> record, or some local name known from /etc/hosts)
>>
>> Well, Internet Explorer interprets file://foo/bar/baz as the UNC name
>> \\foo\bar\baz, which strikes me as extremely sensible

It strikes me as a lousy idea, so there's another data point.

> That looks like a typical Microsoft non-standard invention. It is
> certainly not in the spirit of the main URI standard, and it was not
> the intention of RFC 1738.

And a security risk, since it trivially lets malicious sites probe SMB connections using a combination of <img> elements and other auto-loaded resources and scripting.

> Well, the file scheme is not supposed to be an alternative to the ftp
> scheme. Given that 1738 was written with local networks in mind rather
> than the global internet, I think file://host/filename... should
> normally be seen as an invitation to mount that file from that host
> using NFS.

I don't think the file scheme should try to do anything at all, beyond attempting to open the named resource using the normal OS mechanism for opening a file. If the OS decides to retrieve a network resource based on that request, fine; but it shouldn't be an explicit feature of the file scheme.

If people want URIs that refer to SMB-hosted resources, let them write a new I-D for the "smb" scheme and push it through the RFC process. There are existing implementations.[1]

On another point, I'd substitute "normal OS mechanism for opening a file" for Charles' reference to "POSIX" upthread. There are non-POSIX OSes in use, and non-POSIX filesystems even on OSes that also support POSIX. On IBM's OS/400 aka iSeries aka System i aka whatever they're calling it today, for example, people ought to be able to use file-scheme URIs both for resources in the POSIX-compatible Hierarchical File System, and for the non-hierarchical Integrated File System. (There's still a reasonable, though constrained, interpretation for the abs_path portion of a file-scheme URI under IFS.)

[1] See . Apparently there was an I-D for the smb scheme at one point.

--
Michael Wojcik
Principal Software Systems Developer, Micro Focus
## Re: Status of RFC 1738 -- 'ftp' URI scheme
Michael Wojcik scripsit:

> I don't think the file scheme should try to do anything at all, beyond
> attempting to open the named resource using the normal OS mechanism for
> opening a file.

But that's precisely my point: UNC names *are* obtained by opening the named resource using the normal OS mechanism, given the obvious mapping of / to \ (which is not actually required by the Windows kernel).

--
"After all, would you consider a man without honor wealthy, even if his
Dinar laid end to end would reach from here to the Temple of Toplat?"
"No, I wouldn't", the beggar replied. "Why is that?" the Master asked.
"A Dinar doesn't go very far these days, Master.
Besides, the Temple of Toplat is across the street."
    --Kehlog Albran, The Profit
## RE: Status of RFC 1738 -- 'ftp' URI scheme
> From: John Cowan [mailto:[hidden email]] On Behalf Of John Cowan
> Sent: Friday, 07 January, 2011 10:16
>
> Michael Wojcik scripsit:
>
>> I don't think the file scheme should try to do anything at all, beyond
>> attempting to open the named resource using the normal OS mechanism
>> for opening a file.
>
> But that's precisely my point: UNC names *are* obtained by opening the
> named resource using the normal OS mechanism, given the obvious mapping
> of / to \ (which is not actually required by the Windows kernel).

Hmm. Yes. Good point. That's what I get for posting while distracted.

However, it's only true if the authority portion of the file-scheme URI is passed to the file-opening mechanism, and it's not clear that is either the original intent or the desirable behavior of the file scheme. Certainly *I* would prefer that file-scheme handlers on Windows do something like the following:

- Convert the authority and path to canonical form if necessary
- Verify that the authority portion is empty or a reference to the local host
- Pass the path portion to the standard OS file-opening mechanism (CreateFile), requesting read access to the data

I explicitly don't want them to treat a file-scheme URI as a UNC name. But obviously some users (possibly the majority) *would* want a file-scheme URI treated as a UNC name. And while I might argue that the principle of least surprise, the principle of least privilege, and a bias toward security all argue against that, those are arguments that historically have not had a lot of traction among casual users or implementors of the software they use.

I suppose what I'm saying, then, is that I don't believe it's clear that the UNC-mapping behavior is necessarily desirable, and more discussion on the topic might be called for, if we're trying to standardize the file scheme.

--
Michael Wojcik
Principal Software Systems Developer, Micro Focus
|
2020-09-25 20:49:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5703591108322144, "perplexity": 3959.2177487974977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228707.44/warc/CC-MAIN-20200925182046-20200925212046-00673.warc.gz"}
|
https://www.r-bloggers.com/2020/03/quantmod_0-4-16-on-cran-2/
|
A new version of quantmod is on CRAN! One really cool thing about this release is that almost all the changes are contributions from the community.
Ethan Smith made more excellent contributions to getQuote() in this release. It no longer throws an error if one or more symbols are missing. And it handles multiple symbols in a semicolon-delimited string, just like getSymbols(). For example, you can get quotes for multiple symbols by calling getQuote("SPY;AAPL").
@jrburl made a great enhancement to getOptionChain(). Now, instead of throwing an error, it sets volume and open interest to NA if those columns are missing from the Yahoo Finance data. They also submitted a pull request to handle cases where Bid and/or Ask data are missing too. Unfortunately, that pull request came after I had already pushed to CRAN.
Unfortunately, Yahoo! Finance continues to make changes to how they return data. Thankfully, quantmod users are diligent and catch these changes. @helgasoft noticed the split ratio delimiter changed from / to :. So, for example, a 2-for-1 split was 1/2 but is now 2:1.
@helgasoft also noticed that Alpha Vantage discontinued their “batch quote” functionality, which broke getQuote(). Thankfully, they provided a patch that used the single-quote request, so getQuote() works with Alpha Vantage again!
@matiasandina noticed that I had incorrectly labelled the dividend pay date as the ex-dividend date in the data getQuote() returned from Yahoo Finance. Whoops!
See the news file for the other bug fixes. Thanks for using quantmod!
|
2021-10-26 23:44:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19952046871185303, "perplexity": 3862.513945203374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587963.12/warc/CC-MAIN-20211026231833-20211027021833-00596.warc.gz"}
|
https://blog.csdn.net/lllxy/article/details/1791602
|
# The OCI Functions for C
This chapter describes each function in the OCI library for the OCI C programmer. The description of many of the functions includes an example that shows how an OCI program uses the function. Examples are not provided for the simpler functions. The description of each function has the following parts:
Purpose
What the function does.
Syntax
The function call with its parameter list.
Description
A detailed description of the function, including examples.
Parameters
A detailed description of each parameter.
See Also
A list of other functions that affect or are used with this function. Not included if not applicable.
Be sure to read "Calling OCI Routines". It contains important information about data structures, datatypes, and parameter passing conventions for the OCI functions.
## Calling OCI Routines
This section describes data structures and coding rules that are specific to applications written in the C language. Refer to this section for information about data structures, datatypes, and parameter passing conventions in OCI C programs.
### Datatypes
The datatypes used in the C examples in this guide are defined in the file oratypes.h. This file is port specific. The types defined, such as sb2 and ub4, can take different C types on different systems. An example oratypes.h file for UNIX C compilers is listed in Appendix A.
Different OCI platforms may have different datatype definitions. The online location of the oratypes.h file can also be system-specific. On Unix systems, it can be found at $ORACLE_HOME/rdbms/demo/oratypes.h. See your Oracle system-specific documentation for the location of oratypes.h on your system.
### Data Structures
To use the OCI functions, you must define data structures for one or more LDAs and CDAs. The internal structure of these data areas is discussed in the section "OCI Data Structures" . The LDA structure is the same size as the CDA structure, and the same structure declaration can be used for both structures.
The only field an OCI application normally accesses in the LDA is the return code field. In the example code in this section, the datatypes Lda_Def and Cda_Def, as defined in the header file ocidfn.h, are used to define the LDAs and CDAs. This file is listed in Appendix A and is also available online; see the Oracle system-specific documentation for the online location of this file.
### Parameter Names
The prototype parameter names in the function descriptions are six or less characters in length and do not contain non-alphanumeric characters. This maintains common names across all languages supported by the OCI. In your OCI C program, you can of course use longer, more descriptive names for the parameters.
### Parameter Types
The OCI functions take the following types of parameters:
• integers (sword, eword, sb4, ub4)
• short integers (sb2, ub2)
• character variables (ub1, sb1)
• addresses of program variables (pointers)
• memory pointers (dvoid)
The following two sections discuss special considerations to remember when passing parameters to the OCI functions.
#### Integers
When passing integer literals to an OCI function, you should cast the literal to the type of the parameter. For example, the oparse() function has the following prototype, using ANSI C notation:
oparse(Cda_Def *cursor, text *sqlstm, sb4 sqllen,
sword defflg, ub4 lngflg);
If you call oparse() as
oparse(&cda, (text *) "select sysdate from dual", -1, 1, 2);
it will usually work on most 32-bit systems, although the C compiler might issue warning messages to the effect that the type conversions are non-portable. So, you should call oparse() as
oparse(&cda, (text *) "select sysdate from dual", (sb4) -1,
(sword) 1, (ub4) 2);
Always be careful to distinguish signed and unsigned short integers (sb2 and ub2) from integers (sword), and signed and unsigned long integers (sb4 and ub4).
#### Pointers
Be careful to pass all pointer parameters as valid addresses. When passing the null pointer (0), Oracle recommends that you cast it to the appropriate type. When you pass a pointer as a parameter, your OCI program must allocate the storage for the object to which it points. OCI routines never allocate storage for program objects. String literals can be passed to an OCI function where the parameter type is text *, as the oparse() example in the previous section demonstrates.
### Parameter Classification
There are three kinds of parameters:
• required parameters
• optional parameters
• unused parameters
#### Required Parameters
Required parameters are used by Oracle, and the OCI program must supply valid values for them.
#### Optional Parameters
The use of optional parameters depends on the requirements of your program. The Syntax section for each routine in this chapter indicates optional parameters using square brackets ([ ]).
In most cases, an unused optional parameter is passed as -1 if it is an integer. It is passed as the null pointer (0) if it is an address parameter. For example, your program might not need to supply an indicator variable on a bind call, in which case all input values in the program could be non-null. The indp parameter in the bind functions obindps(), obndra(), obndrv(), and obndrn() is optional. This parameter is a pointer, so it is passed as a null pointer ((sb2 *) 0) when it is not used.
Note: A value of -1 should not be passed for unused optional parameters in the new obindps() and odefinps() calls. Unused parameters in these calls must be passed a zero or NULL. See the descriptions of individual calls for more details about specific parameters.
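To make this concrete, here is a minimal sketch of a bind call that omits the optional indicator parameter by passing a typed null pointer. It assumes the usual Oracle7 obndrv() prototype; the placeholder name and program variable are hypothetical.

/* Bind the program variable empno to the placeholder :EMPNO. The
   optional indicator parameter indp is omitted by passing a typed
   null pointer, (sb2 *) 0; the unused fmt, fmtl, and fmtt parameters
   are passed as a null pointer and -1. */
sword empno = 7499;                       /* hypothetical input value */
if (obndrv(&cda, (text *) ":EMPNO", (sword) -1,
           (ub1 *) &empno, (sword) sizeof(empno),
           (sword) 3 /* SQLT_INT */, (sword) -1,
           (sb2 *) 0, (text *) 0, (sword) -1, (sword) -1))
    ;                 /* on failure, check the return code field of the CDA */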
#### Unused Parameters
Unused parameters are not used by Oracle, at least for the language being described. For example, for cross-language compatibility, some OCI functions have the parameters fmt, fmtl, and fmtt. These are the format string specifier for the packed decimal external datatype, and the string length and type parameters when this type is being bound. COBOL uses the packed decimal type, so these parameters are unused in C.
However, you always pass unused parameters. In C, pass these in the same way as omitted optional parameters. In most cases, this means passing -1 if it is an integer parameter, or 0 if it is a pointer. See the syntax and examples for the odefin() function to see how to pass omitted optional and unused parameters.
Note: As with optional parameters, a value of -1 should not be passed for unused parameters in the new obindps() and odefinps() calls. Unused parameters in these calls must be passed a zero or NULL. See the descriptions of individual calls for more details about specific parameters.
The Syntax section (in the description of each function) uses angle brackets (< >) to indicate unused parameters.
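For instance, a sketch in the style of the odefin() examples referred to above (the variable names are hypothetical, and the prototype is assumed to be the usual Oracle7 one): a define call for an integer select-list item passes the unused scale, fmt, fmtl, and fmtt parameters as -1 or null pointers.

/* Define select-list position 1 into the C integer deptno. scale,
   fmt, fmtl, and fmtt are unused for this datatype, so they are
   passed as -1 and null pointers; the optional indicator,
   returned-length, and return-code parameters are omitted as 0. */
sword deptno;                            /* hypothetical output variable */
if (odefin(&cda, (sword) 1, (ub1 *) &deptno, (sword) sizeof(deptno),
           (sword) 3 /* SQLT_INT */, (sword) -1, (sb2 *) 0,
           (text *) 0, (sword) -1, (sword) -1,
           (ub2 *) 0, (ub2 *) 0))
    ;                 /* on failure, check the return code field of the CDA */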
### Parameter Descriptions
Parameters for the OCI functions are described in terms of their type and their mode. When a parameter is a CDA or an LDA, the type is a Cda_Def * or a Lda_Def *. That is, a pointer to a Cda_Def or Lda_Def structure as defined in ocidfn.h (see Appendix A).
Note: The OCI program must allocate these structures, not just the pointers.
#### Parameter Modes
When a parameter is a generalized pointer (that is, it can be a pointer to any variable or array of variables, depending on the requirements of your program), its type is listed as a ub1 pointer. The mode of a parameter has three possible values:
IN
A parameter that passes data to Oracle.
OUT
A parameter that receives data from Oracle on this or a subsequent call.
IN/OUT
A parameter that passes data on the call and receives data on the return from this or a subsequent call.
### Function Return Values
When called from a C program, OCI functions return an integer value. The return value is 0 if the function completed without error. If a non-zero value is returned, an error occurred. In that event, you should check the return code field in the CDA to get the error number. The example programs in Appendix A demonstrate this.
The shorter code fragments in this chapter do not always check for errors.
Note: oerhms(), sqlld2(), and sqllda() are exceptions to this rule. oerhms() returns the length of the message. sqlld2() and sqllda() are void functions that return error indications in the LDA parameter.
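A minimal error-handling sketch, assuming the usual Oracle7 prototypes for oexec() and oerhms() (the buffer size here is arbitrary):

/* Execute a parsed statement; on error, fetch and print the Oracle
   message text. After a failed call, cda.rc holds the Oracle error
   code, and oerhms() translates it into a message string. */
text msg[512];
if (oexec(&cda))
{
    oerhms(&lda, cda.rc, msg, (sword) sizeof(msg));
    fprintf(stderr, "OCI error: %s\n", msg);
}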
### Variable Location
When you bind and define program variables using obindps(), obndra(), obndrv(), obndrn(), odefinps() and odefin(), they are known to Oracle by their addresses. The address supplied when the variable is bound must remain valid when the statement is executed.
If you pass LDA and CDA structures to other non-OCI functions in your program, always pass them as pointers, not by value. Oracle updates the fields in these structures after OCI calls. You also lose important information if your program uses copies of these structures, which will not be updated by OCI calls.
Caution: A change in the location of local variables may also cause errors in an OCI program. When the address of a local variable used in a subsequent call is passed to Oracle as a parameter in a bind or define call, you must be certain that the addressed variable is actually at the specified location when it is used in the subsequent execute or fetch call.
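To illustrate the caution, here is a hypothetical sketch of the failure mode (not from the guide): the bind address becomes invalid before the statement is executed.

/* WRONG: binds the address of an automatic variable whose storage
   disappears when this function returns. */
void bind_it(Cda_Def *cda)
{
    sword empno = 7499;              /* automatic (stack) variable */
    obndrv(cda, (text *) ":EMPNO", (sword) -1,
           (ub1 *) &empno, (sword) sizeof(empno),
           (sword) 3, (sword) -1, (sb2 *) 0,
           (text *) 0, (sword) -1, (sword) -1);
}   /* empno's storage is released here... */

/* ...so a later oexec() on this cursor reads a dangling address.
   The bound variable must be static, global, or otherwise remain
   live until the statement has been executed for the last time. */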
#### obindps
Purpose
obindps() associates the address of a program variable with a placeholder in a SQL or PL/SQL statement. Unlike older OCI bind calls, obindps() can be used to bind placeholders to be used in piecewise operations, or operations involving arrays of structures.
Syntax
obindps(Cda_Def *cursor, ub1 opcode, text *sqlvar,
[sb4 sqlvl], ub1 *pvctx, sb4 progvl,
sword ftype, <sword scale>, [sb2 *indp],
[ub2 *alenp], [ub2 *rcodep], sb4 pv_skip,
sb4 ind_skip, sb4 alen_skip, sb4 rc_skip,
[ub4 maxsiz], [ub4 *cursiz], <text *fmt>,
<sb4 fmtl>, <sword fmtt>);
obindps() is used to associate the address of a program variable with a placeholder in a SQL or PL/SQL statement. Additionally, it can indicate that an application will be providing inserted or updated data incrementally at runtime. This piecewise insert is designated in the opcode parameter. obindps() is also used when an application will be inserting data stored in an array of structures.
Note: This function is only compatible with Oracle Server release 7.3 or later. If a release 7.3 application attempts to use this function against a release 7.2 or earlier server, an error message is likely to be generated. At that point you must restart execution.
With the introduction of obindps() there are now four fully-supported calls for binding input parameters, the other three being the older obndra(), obndrn() and obndrv(). Application developers should consider the following points when determining which bind call to use:
• obindps() is supported only when a program is linked in deferred mode. If it is necessary to link in non-deferred mode, another bind routine must be used. In this case, the ability to handle piecewise operations and arrays of structures is not supported.
• obindps() is more complex than the older bind calls. Users who are not performing piecewise operations and are not using arrays of structures may choose to use one of the older routines.
• obindps() does not support the ability to do a positional bind. If this functionality is needed, the bind should be performed using obndrn().
Unlike older OCI calls, obindps() does not accept -1 for any optional or unused parameters. When it is necessary to pass a value to these parameters NULL or 0 should be used instead. The only exception to this rule is that a -1 length is acceptable for sqlvl if sqlvar is null-terminated.
See the sections "Piecewise Insert, Update and Fetch," and "Arrays of Structures" for more information about piecewise operations, arrays of structures, skip parameters and the obindps() call.
The following sample code demonstrates the use of obindps() in an OCI program which performs an insert from an array of structures. This code is provided for demonstration purposes only, and does not constitute a complete program. Most of the work in the program is done within the insert_records() function.
For sample code demonstrating an array fetch, see the description of the odefinps() routine later in this chapter. For sample code demonstrating the use of obindps() for a piecewise insert, see the description of the ogetpi() routine later in this chapter.
... /* OCI #include statements */
#define DEFER_PARSE 1 /* oparse flags */
#define NATIVE 1
#define VERSION_7 2
#define ARRAY_SIZE 10
#define OCI_EXIT_FAILURE 1 /* exit flags */
#define OCI_EXIT_SUCCESS 0
void insert_records();
struct emp_record /* employee data record */
{ int empno;
char ename[11];
char job[11];
int mgr;
char hiredate[10];
float sal;
float comm;
int deptno;
};
typedef struct emp_record emp_record;
struct emp_record_indicators
{ short empno; /* indicator variable record */
short ename;
short job;
short mgr;
short hiredate;
short sal;
short comm;
short deptno;
};
typedef struct emp_record_indicators emp_record_indicators;
Lda_Def lda; /* login area */
ub1 hda[256]; /* host area */
Cda_Def cda; /* cursor area */
main()
{
emp_record emp_records[ARRAY_SIZE];
emp_record_indicators emp_rec_inds[ARRAY_SIZE];
int i=0;
char yn[4];
... /* log on to the database */
for (i=0;i<ARRAY_SIZE;i++)
{
... /* prompt user for data necessary */
... /* to fill the emp_records and */
... /* emp_rec_inds arrays */
}
insert_records(i, emp_records, emp_rec_inds);
... /* log off from the database */
}
/* Function insert_records(): This function inserts the array */
/* of records passed to it. */
void insert_records(n, emp_records, emp_rec_inds)
int n;
emp_record emp_records[];
emp_record_indicators emp_rec_inds[];
{
text *sqlstmt = (text *) "INSERT INTO EMP (empno, ename, deptno) \
VALUES (:empno, :ename, :deptno)";
if (oopen(&cda, &lda, (text *)0, -1, -1, (text *)0, -1))
exit(OCI_EXIT_FAILURE);
if (oparse(&cda, sqlstmt, (sb4)-1, 0, (ub4)VERSION_7))
exit(OCI_EXIT_FAILURE);
if (obindps(&cda, 1, (text *)":empno",
strlen(":empno"), (ub1 *)&emp_records[0].empno,
sizeof(emp_records[0].empno),
SQLT_INT, (sword)0, (sb2 *) &emp_rec_inds[0].empno,
(ub2 *)0, (ub2 *)0, (sb4) sizeof(emp_record),
(sb4) sizeof(emp_record_indicators), 0, 0,
0, (ub4 *)0, (text *)0, 0, 0))
exit(OCI_EXIT_FAILURE);
if (obindps(&cda, 1, (text *)":ename",
strlen(":ename"), (ub1 *)emp_records[0].ename,
sizeof(emp_records[0].ename),
SQLT_STR, (sword)0, (sb2 *) &emp_rec_inds[0].ename,
(ub2 *)0, (ub2 *)0, (sb4) sizeof(emp_record),
(sb4) sizeof(emp_record_indicators), 0, 0,
0, (ub4 *)0, (text *)0, 0, 0))
exit(OCI_EXIT_FAILURE);
if (obindps(&cda, 1, (text *)":deptno",
strlen(":deptno"), (ub1 *)&emp_records[0].deptno,
sizeof(emp_records[0].deptno),
SQLT_INT, (sword)0, (sb2 *) &emp_rec_inds[0].deptno,
(ub2 *)0, (ub2 *)0, (sb4) sizeof(emp_record),
(sb4) sizeof(emp_record_indicators),
0, 0, 0, (ub4 *)0, (text *)0, 0, 0))
exit(OCI_EXIT_FAILURE);
if (oexn(&cda,n,0))
exit(OCI_EXIT_FAILURE);
ocom(&lda); /* commit the insert */
if (oclose(&cda)) /* close cursor */
exit(OCI_EXIT_FAILURE);
}
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| opcode | ub1 | IN |
| sqlvar | text * | IN |
| sqlvl | sb4 | IN |
| pvctx | ub1 * | IN |
| progvl | sb4 | IN |
| ftype | sword | IN |
| scale | sword | IN |
| indp | sb2 * | IN/OUT |
| alenp | ub2 * | IN |
| rcodep | ub2 * | OUT |
| pv_skip | sb4 | IN |
| ind_skip | sb4 | IN |
| alen_skip | sb4 | IN |
| rc_skip | sb4 | IN |
| maxsiz | ub4 | IN |
| cursiz | ub4 * | IN/OUT |
| fmt | text * | IN |
| fmtl | sb4 | IN |
| fmtt | sword | IN |
Note: Since the obindps() call can be used in a variety of different circumstances, some items in the following list of parameter descriptions may include different explanations for how the parameter is used for piecewise operations, arrays of structures and standard scalar or array binds.
Standard scalar and array binds are those binds which were previously possible using other OCI bind calls (obndra(), obndrv(), and obndrn()).
cursor A pointer to the CDA associated with the SQL statement or PL/SQL block being processed.
opcode Piecewise bind: pass as 0.
Arrays of structures or standard bind: pass as 1.
sqlvar Specifies the address of a character string holding the name of a placeholder (including the preceding colon, e.g., ":varname") in the SQL statement being processed.
sqlvl The length of the character string in sqlvar, including the preceding colon. For example, the placeholder ":employee" has a length of nine. If the string is null terminated, this parameter can be specified as -1.
pvctx Piecewise bind: A pointer to a context block entirely private to the application. This should be used by the application to store any information about the column being bound. One possible use would be to store a pointer to a file which will be referenced later. Each bind variable can then have its own separate file pointer. This pointer can be retrieved during a call to ogetpi().
Arrays of structures or standard bind: A pointer to a program variable or array of program variables from which input data will be retrieved when the SQL statement is executed. For arrays of structures this should point to the first scalar element in the array of structures being bound. This parameter is equivalent to the progv parameter from the older OCI bind calls.
progvl Piecewise bind: This should be passed in as the maximum possible size of the data element of type ftype.
Arrays of structures or standard bind: This should be passed as the length in bytes of the datatype of the program variable, array element or the field in a structure which is being bound.
ftype The external datatype code of the program variable being bound. Oracle converts the program variable from external to internal format before it is bound to the SQL statement. See the section "External Datatypes" for a list of datatype codes, and the listings of ocidem.h and ocidfn.h in Appendix A for lists of constant definitions corresponding to datatype codes.
For piecewise operations, the valid datatype codes are 1 (VARCHAR2), 5 (STRING), 8 (LONG) and 24 (LONG RAW).
indp Pointer to an indicator variable or array of indicator variables. For arrays of structures this may be an interleaved array of column-level indicator variables. See page 2 - 29 for more information about indicator variables.
alenp Piecewise bind: pass as (ub2 *)0.
Arrays of structures or standard bind: A pointer to a variable or array containing the length of data elements being bound. For arrays of structures, this may be an interleaved array of column-level length variables. The maximum usable size of the array is determined by the maxsiz parameter.
rcodep Pointer to a variable or array of variables where column-level error codes are returned after a SQL statement is executed. For arrays of structures, this may be an interleaved array of column-level return code variables.
Typical error codes would indicate that data in progv has been truncated (ORA-01406) or that a null occurred on a SELECT or PL/SQL FETCH (ORA-01405).
pv_skip Piecewise bind or standard scalar bind: pass as zero or NULL.
Arrays of structures or standard array bind: This is the skip parameter for an array of structures holding program variables being bound. In general, this value will be sizeof(structure). If a standard array bind is being performed, this value should equal the size of one element of the array being bound.
ind_skip Piecewise bind or standard scalar bind: pass as zero or NULL.
Arrays of structures or standard array bind: This is the skip parameter for an array of indicator variables associated with an array holding program data to be inserted. This parameter will either equal the size of one indicator parameter structure (for arrays of structures) or the size of one indicator variable (for standard array bind).
alen_skip Piecewise bind or standard scalar bind: pass as zero or NULL.
Arrays of structures or standard array bind: This is the skip parameter for an array of data lengths associated with an array holding program data to be inserted. This parameter will either equal the size of one length variable structure (for arrays of structures) or the size of one length variable (for standard array bind).
rc_skip Piecewise bind or standard scalar bind: pass as zero or NULL.
Arrays of structures or standard array bind: This is the skip parameter for an array used to store returned column-level error codes associated with the execution of a SQL statement. This parameter will either equal the size of one return code structure (for arrays of structures) or the size of one return code variable (for standard array bind).
maxsiz The maximum size of an array being bound to a PL/SQL table. Values range from 1 to 32512, but the maximum size of the array depends on the datatype. The maximum array size is 32512 divided by the internal size of the datatype. For example, an Oracle NUMBER has an internal size of 22 bytes, so the maximum array size for NUMBER elements is 32512/22, or 1477 elements.
This parameter is only relevant when binding to PL/SQL tables. Set this parameter to ((ub4)0) for SQL scalar or array binds.
cursiz A pointer to the actual number of elements in the array being bound to a PL/SQL table.
If progv is an IN parameter, set the cursiz parameter to the size of the array being bound. If progv is an OUT parameter, the number of valid elements being returned in the progv array is returned after PL/SQL block is executed.
This parameter is only relevant when binding to PL/SQL tables. Set this parameter to ((ub4 *) 0) for SQL scalar or array binds.
#### obndra
Purpose
obndra() associates the address of a program variable or array with a placeholder in a SQL statement or PL/SQL block.
Syntax
obndra(Cda_Def *cursor, text *sqlvar, [sword sqlvl],
ub1 *progv, sword progvl, sword ftype,
<sword scale>, [sb2 *indp], [ub2 *alen],
[ub2 *arcode], [ub4 maxsiz], [ub4 *cursiz],
<text *fmt>, <sword fmtl>, <sword fmtt>);
You can use obndra() to bind scalar variables or arrays in your program to placeholders in a SQL statement or a PL/SQL block. The alen parameter of the obndra() function allows you to change the size of the bound variable without actually rebinding the variable.
Note:
If cursor is a cursor variable that has been OPENed FOR in a PL/SQL block, then obndra() returns an error, unless a new SQL statement or PL/SQL block has been parsed on it.
When you bind arrays in your program to PL/SQL tables, you must use obndra(), because this function provides additional parameters that allow you to control the maximum size of the table and to retrieve the current table size after the block executes.
Note: Applications running against a release 7.3 or later server that need to perform piecewise operations or utilize arrays of structures must use the newer obindps() routine instead of obndra().
The obndra() function must be called after you call oparse() to parse the statement containing the PL/SQL block and before calling oexn() or oexec() to execute it.
Once you have bound a program variable, you can change the value in the variable (progv) and length of the variable (progvl) and re-execute the block without rebinding.
However, if you must change the type of the variable, you must reparse the statement or block and rebind the variable before re-executing.
The following short, but complete, example program shows how to use obndra() to bind arrays in a C program to tables in PL/SQL procedures.
#include <stdio.h>
#include <oratypes.h>
#include <ocidfn.h>
#include <ocidem.h>
Cda_Def cda;
Lda_Def lda;
/* set up the table */
text *dt = (text *) "DROP TABLE part_nos";
text *ct = (text *) "CREATE TABLE part_nos (partno NUMBER, description\
 VARCHAR2(20))";
text *cp = (text *) "\
CREATE OR REPLACE PACKAGE update_parts AS\n\
  TYPE part_number IS TABLE OF part_nos.partno%TYPE\n\
    INDEX BY BINARY_INTEGER;\n\
  TYPE part_description IS TABLE OF part_nos.description%TYPE\n\
    INDEX BY BINARY_INTEGER;\n\
  PROCEDURE add_parts (n IN INTEGER,\n\
                       descrip IN part_description,\n\
                       partno IN part_number);\n\
END update_parts;";
text *cb = (text *) "\
CREATE OR REPLACE PACKAGE BODY update_parts AS\n\
  PROCEDURE add_parts (n IN INTEGER,\n\
                       descrip IN part_description,\n\
                       partno IN part_number) IS\n\
  BEGIN\n\
    FOR i IN 1..n LOOP\n\
      INSERT INTO part_nos\n\
        VALUES (partno(i), descrip(i));\n\
    END LOOP;\n\
  END add_parts;\n\
END update_parts;";
#define DESC_LEN 20
#define MAX_TABLE_SIZE 1200
text *pl_sql_block = (text *) "\
BEGIN\n\
  update_parts.add_parts(3, :description, :partno);\n\
END;";
text descrip[3][20] = {"Frammis", "Widget", "Thingie"};
sword numbers[] = {12125, 23169, 12126};
ub2 descrip_alen[3] = {DESC_LEN, DESC_LEN, DESC_LEN};
ub2 descrip_rc[3];
ub4 descrip_cs = (ub4) 3;
sb2 descrip_indp[3];
ub2 num_alen[3] = {
(ub2) sizeof (sword),
(ub2) sizeof (sword),
(ub2) sizeof (sword) };
ub2 num_rc[3];
ub4 num_cs = (ub4) 3;
sb2 num_indp[3];
ub1 hda[256];
main()
{
printf("Connecting to Oracle...");
if (olog(&lda, hda, "scott/tiger", -1, 0, -1, 0, -1,
OCI_LM_DEF)) {
printf("Cannot logon as scott/tiger. Exiting.../n");
exit(1);
}
if (oopen(&cda, &lda, NULL, -1, -1, NULL, -1)) {
printf("Cannot open cursor, exiting.../n");
exit(1);
}
/* Drop the table. */
printf("/nDropping table...");
if (oparse(&cda, dt, -1, 0, 2))
if (cda.rc != 942)
oci_error();
printf("/nCreating table...");
if (oparse(&cda, ct, -1, 0, 2))
oci_error();
/* Parse and execute the create package statement. */
printf("/nCreating package...");
if (oparse(&cda, cp, -1, 0, 2))
oci_error();
if (oexec(&cda))
oci_error();
/* Parse and execute the create package body statement. */
printf("/nCreating package body...");
if (oparse(&cda, cb, -1, 0, 2))
oci_error();
if (oexec(&cda))
oci_error();
/* Parse the anonymous PL/SQL block that calls the
stored procedure. */
printf("/nParsing PL/SQL block...");
if (oparse(&cda, pl_sql_block, -1, 0, 2))
oci_error();
/* Bind the C arrays to the PL/SQL tables. */
printf("/nBinding arrays...");
if (obndra(&cda, (text *) ":description", -1, (ub1 *) descrip,
DESC_LEN, VARCHAR2_TYPE, -1, descrip_indp, descrip_alen,
descrip_rc, (ub4) MAX_TABLE_SIZE, &descrip_cs, (text *) 0,
-1, -1))
oci_error();
if (obndra(&cda, (text *) ":partno", -1, (ub1 *) numbers,
(sword) sizeof (sword), INT_TYPE, -1, num_indp,
num_alen, num_rc, (ub4) MAX_TABLE_SIZE, &num_cs,
(text *) 0, -1, -1))
oci_error();
printf("/nExecuting block...");
if (oexec(&cda)) oci_error();
printf("/n");
if (oclose(&cda)) {
printf("Error closing cursor!/n");
return -1;
}
if (ologof(&lda)) {
printf("Error logging off!/n");
return -1;
}
exit(1);
}
oci_error()
{
text msg[600];
sword rv;
rv = oerhms(&lda, cda.rc, msg, 600);
printf("/n/n%.*s", rv, msg);
printf("Processing OCI function %s/n", oci_func_tab[cda.fc]);
if (oclose(&cda))
printf("Error closing cursor!/n");
if (ologof(&lda))
printf("Error logging off!/n");
exit(1);
}
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| sqlvar | text * | IN |
| sqlvl | sword | IN |
| progv (2) | ub1 * (1) | IN/OUT (3) |
| progvl | sword | IN |
| ftype | sword | IN |
| scale | sword | IN |
| indp (2) | sb2 * | IN/OUT (3) |
| alen (2) | ub2 * | IN/OUT |
| arcode (2) | ub2 * | OUT (4) |
| maxsiz | ub4 | IN |
| cursiz | ub4 * | IN/OUT (3) |
| fmt | text * | IN |
| fmtl | sword | IN |
| fmtt | sword | IN |
Note 1. progv is a pointer to the data buffer.
Note 2. If maxsiz > 1, must be an array with cardinality at least as great as maxsiz.
Note 3. IN/OUT parameter used or returned on the execute or fetch call.
Note 4. OUT parameter returned on the fetch call.
cursor A pointer to the CDA associated with the SQL statement by the oparse() call.
sqlvar Specifies the address of a character string containing the name of a placeholder (including the preceding colon) in the SQL statement.
sqlvl The length of the character string sqlvar, including the preceding colon. For example, the placeholder :EMPLOYEE has a length of nine. If the placeholder name is a null-terminated character string (as in the example in this section), this parameter can be omitted (passed as -1).
progv A pointer to a program variable or array of program variables from which input data will be retrieved or into which output data will be placed when oexec(), oexn(), or oexfet() is executed.
progvl The length in bytes of the program variable or array element. Because obndra() might be called only once for many different progv values on successive execute calls, progvl must contain the maximum length of progv.
Note: The datatype of progvl is sword. On some systems, this type might be only two bytes. When binding LONG VARCHAR and LONG VARRAW buffers, this limits the maximum length of the buffer to 64K bytes. So, to bind a longer buffer for these datatypes, set progvl to -1, and pass the actual data area length (total buffer length - sizeof (sb4)) in the first four bytes of progv. Set this value before calling obndra(). (A sketch of this technique appears after the parameter descriptions below.)
ftype The external datatype of the program variable in the user program. Oracle converts the program variable from external to internal format before it is bound to the SQL statement. There is a list of external datatypes and type codes in the section "External Datatypes" .
scale Only used for PACKED DECIMAL variables, which are not normally used in C. Set this parameter to -1. See the description of the OBNDRV routine for information about this parameter.
indp A pointer to an indicator variable, or array of indicator variables if progv is an array. As an array, indp must contain at least the same number of elements as progv.
alen A pointer to an array of elements containing the length of the data. This is the effective length of the bind variable element, not the size of the array. For example, if the progv parameter is an array declared as
text arr[5][20];
then alen should point to an array of at least five elements. The maximum usable size of the array is determined by the maxsiz parameter.
If arr in the above example is an IN parameter, each element in the array pointed to by alen should be set to the length of the data in the corresponding element in the arr array (<=20 in this example) before the execute call.
If arr in the above example is an OUT parameter, the length of the returned data appears in the array pointed to by alen after the PL/SQL block is executed.
Once the bind is done using obndra(), you can change the data length of the bind variable without rebinding. However, the length cannot be greater than that specified in alen.
arcode An array containing the column-level error return codes. This parameter points to an array that will contain the error code for the bind variable after the execute call. The error codes that can be returned in arcode are those that indicate that data in progv has been truncated or that a null occurred on a SELECT or PL/SQL FETCH, for example, ORA-01405 or ORA-01406.
If obndra() binds an array of elements (that is, maxsiz is greater than one), then arcode must also point to an array of at least equal size.
maxsiz The maximum size of an array being bound to a PL/SQL table. Values range from 1 to 32512, but the maximum size of the array depends on the datatype. The maximum array size is 32512 divided by the internal size of the datatype.
This parameter is only relevant when binding to PL/SQL tables. Set this parameter to ((ub4)0) for SQL scalar or array binds.
cursiz A pointer to the actual number of elements in the array being bound to a PL/SQL table.
If progv is an IN parameter, set the cursiz parameter to the size of the array being bound. If progv is an OUT parameter, the number of valid elements being returned in the progv array is returned after PL/SQL block is executed.
This parameter is only relevant when binding to PL/SQL tables. Set this parameter to ((ub4 *) 0) for SQL scalar or array binds.
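As a hedged sketch of the long-buffer technique noted under progvl above (the buffer and placeholder names are hypothetical), the data area length is stored in the first four bytes of the buffer and progvl is passed as -1:

#define DOC_BUF_LEN (100000 + sizeof (sb4))
ub1 *doc_buf = (ub1 *) malloc(DOC_BUF_LEN);

/* The first four bytes carry the data area length. */
*(sb4 *) doc_buf = (sb4) (DOC_BUF_LEN - sizeof (sb4));
obndra(&cda, (text *) ":doc", -1, doc_buf,
       (sword) -1,           /* progvl = -1 selects this mode */
       SQLT_LVC, -1,         /* LONG VARCHAR external type */
       (sb2 *) 0, (ub2 *) 0, (ub2 *) 0,
       (ub4) 0, (ub4 *) 0, (text *) 0, -1, -1);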
#### obndrn, obndrv
Purpose
obndrn() and obndrv() associate the address of a program variable with the specified placeholder in the SQL statement. The placeholder is identified by name for the obndrv() function, and by number for obndrn().
Syntax
obndrn(Cda_Def *cursor, sword sqlvn,
ub1 *progv, sword progvl, sword ftype,
<sword scale>, [sb2 *indp], <text *fmt>,
<sword fmtl>, <sword fmtt>);
obndrv(Cda_Def *cursor, text *sqlvar,
[sword sqlvl], ub1 *progv, sword progvl,
sword ftype, <sword scale>, [sb2 *indp],
<text *fmt>, <sword fmtl>, <sword fmtt>);
You can call either obndrv() or obndrn() to bind the address of a variable in your program to a placeholder in the SQL statement being processed. If your application needs to perform piecewise operations or utilize arrays of structures, you must bind your variables using obindps() instead.
Note:
If cursor is a cursor variable that has been OPENed FOR in a PL/SQL block, then obndrn() or obndrv() returns an error, unless a new SQL statement or PL/SQL block has been parsed on it.
If you use obndrv(), the placeholder in the SQL statement consists of a colon (:) followed by a SQL identifier. The placeholder is not a program variable. For example, the SQL statement
SELECT ename,sal,comm FROM emp WHERE deptno = :Dept AND
comm > :Min_com
has two placeholders, :Dept and :Min_com.
If you use obndrn(), the placeholders in the SQL statement consist of a colon followed by a literal integer in the range 1 to 255. The SQL statement
SELECT ename,sal,comm FROM emp WHERE deptno = :2 AND comm > :1
has two placeholders, :1 and :2.
An obndrv() call that binds the :Dept placeholder in the first SQL statement above to the program variable dept_num is
#define INT 3 /* external datatype code for integer */
Cda_Def cursor;
sword dept_num, minimum_comm;
...
obndrv(&cursor, ":Dept", -1, (ub1 *) &dept_num,
(sword) sizeof(sword), INT, -1, (sb2*) 0, (text *) 0, -1, -1);
Because the literal ":Dept" is a null-terminated string, the sqlvl parameter is not needed; you pass it as -1. Some of the remaining parameters are optional. For example, indp, the pointer to an indicator variable, is optional and not used in this example. It is passed as 0 cast to an sb2 pointer. fmt is not used, because the datatype is not packed decimal or display signed leading separate. Its absence is indicated by passing a null pointer.
If you use obndrn(), the parameter sqlvn identifies the placeholder by number. If sqlvn is set to 1, the program variable is bound to the placeholder :1. For example, obndrn() is called to bind the program variable dept_num to the placeholder :2 in the second SQL statement above as follows:
obndrn(&cursor, 2, (ub1 *) &dept_num, (sword) sizeof(sword),
INT, -1, (sb2 *) 0, (text *) 0, -1, -1);
where the placeholder :2 is indicated in the sqlvn parameter by passing the value 2. The sqlvn parameter can be either a variable or a literal.
You cannot use obndrn() in a PL/SQL block to bind program variables to placeholders, because PL/SQL does not recognize numbered placeholders. Always use obndra() (or obndrv()) and named placeholders within PL/SQL blocks.
The obndrv() or obndrn() function must be called after you call oparse() to parse the SQL statement and before calling oexn(), oexec(), or oexfet() to execute it. Once you have bound a program variable, you can change the value in the variable and re-execute the SQL statement without rebinding.
For example, if you have bound the address of dept_num to the placeholder ":Dept", and you now want to use new_dept_num (of the same datatype) when executing the SQL statement, you must call obndrv() again to bind the new program variable to the placeholder.
However, if you need to change the type or length of the variable, you must reparse and rebind before re-executing.
You should not use obndrv() and obndrn() after an odescr() call. If you do, you must first reparse and then rebind all variables.
At the time of the bind, Oracle stores the address of the program variable. If the same placeholder occurs more than once in the SQL statement, a single call to obndrv() or obndrn() binds all occurrences of the placeholder to the bind variable.
Note: You can bind an array using obndrv() or obndrn(), but you must then specify the number of rows with either oexn(), oexfet(), or ofen(). This is the Oracle array interface.
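A brief, hedged sketch of this array interface (the array contents are illustrative):

sword deptnos[3] = {10, 20, 30};

/* Bind the entire array to the numbered placeholder :1 ... */
obndrn(&cda, 1, (ub1 *) deptnos, (sword) sizeof (deptnos[0]),
       SQLT_INT, -1, (sb2 *) 0, (text *) 0, -1, -1);
/* ... then specify the number of elements on the execute call. */
oexn(&cda, 3, 0);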
The completion status of the bind is returned in the return code field of the CDA. A return code of zero indicates successful completion.
If your program is linked using the deferred mode option, bind errors that would be returned immediately in non-deferred mode are not detected until the bind operation is actually performed. This happens on the first describe (odescr()) or execute (oexec(), oexn(), or oexfet()) call after the bind.
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| sqlvar | text * | IN |
| sqlvl | sword | IN |
| sqlvn | sword | IN |
| progv | ub1 * | IN/OUT (1) |
| progvl | sword | IN |
| ftype | sword | IN |
| scale | sword | IN |
| indp | sb2 * | IN/OUT (1, 2) |
| fmt | text * | IN |
| fmtl | sword | IN |
| fmtt | sword | IN |
Note 1. Values are IN or IN/OUT parameters for oexec(), oexn(), or oexfet().
Note 2. Can have the mode OUT when bound in a PL/SQL statement.
cursor A pointer to the CDA associated with the SQL statement by the oparse() call.
sqlvar Used only with obndrv(), this parameter specifies the address of a character string containing the name of a placeholder (including the preceding colon) in the SQL statement.
sqlvl Used only with obndrv(), the sqlvl parameter is the length of the character string sqlvar, including the preceding colon. For example, the placeholder :Employee has a length of nine. If the placeholder name is a null-terminated character string, this parameter can be omitted (passed as -1).
sqlvn Used only with obndrn(), this parameter specifies a placeholder in the SQL statement referenced by the cursor by number. For example, if sqlvn is an integer literal or a variable equal to 2, it refers to all placeholders identified by :2 within the SQL statement.
progv A pointer to a program variable or an array of program variables. Values are input to Oracle when either oexec() or oexn() is executed. Data is retrieved when either oexfet(), ofen(), or ofetch() is performed.
progvl The length in bytes of the program variable or array element. Since obndrv() or obndrn() might be called only once for many different progv values on successive execute or fetch calls, progvl must contain the maximum length of progv.
Note: The datatype of progvl is sword. On some systems, this type might be only two bytes. When binding LONG VARCHAR and LONG VARRAW buffers, this limits the maximum length of the buffer to 64K bytes. To bind a longer buffer for these datatypes, set progvl to -1 and pass the actual data area length (total buffer length - sizeof (sb4)) in the first four bytes of progv. Set this value before calling obndrn() or obndrv().
ftype The Oracle external datatype of the program variable. Oracle converts the program variable between external and internal formats when the data is input to or retrieved from Oracle. See page 3 - 8 for a list of external datatypes.
scale The scale parameter is valid only for PACKED DECIMAL variables, which are not normally used in C applications. Set this parameter to -1 to indicate that it is unused. See the description of the OBNDRV routine for information about this parameter.
indp A pointer to a short integer (or array of short integers) that serves as indicator variables.
On Input
If the indicator variable contains a negative value when the statement is executed, the corresponding column is set to null; otherwise, it is set to the value pointed to by progv.
On Output
If the indicator variable contains a negative value after the fetch, the corresponding column contained a null.
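For example, a hedged sketch of sending a NULL through an indicator variable (the placeholder and variable names are illustrative):

sword comm_val = 0;
sb2 comm_ind = -1;   /* negative indicator: send a NULL */

obndrv(&cda, (text *) ":comm", -1, (ub1 *) &comm_val,
       (sword) sizeof (comm_val), SQLT_INT, -1,
       &comm_ind, (text *) 0, -1, -1);
/* After the execute, the bound value is NULL; set comm_ind
   to zero before a later execute to send comm_val instead. */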
#### obreak
Purpose
obreak() performs an immediate (asynchronous) abort of any currently executing OCI function that is associated with the specified LDA. It is normally used to stop a long-running execute or fetch that has not completed.
Syntax
obreak(Lda_Def *lda);
If no OCI function is active when obreak() is called, obreak() will be ignored unless the next OCI function called is a fetch. In this case, the subsequent fetch call will be aborted.
obreak() is the only OCI function that you can call when another OCI function is in progress. It should not be used when a connect operation (olog()) is in progress, because the LDA is in an indeterminate state. obreak() cannot return a reliable error status to the LDA, because it might be called when the Oracle internal status structures are in an inconsistent state.
Note: obreak() aborts the currently executing OCI function not the connection.
obreak() is not guaranteed to work on all operating systems and does not work on all protocols. In some cases, obreak() may work with one protocol on an operating system, but may not work with other protocols on the same operating system.
Working with the OCI in non-blocking mode can provide a more consistent way of interrupting a SQL statement. See the section "Non-Blocking Mode" for more information.
The following example shows how to use obreak() in an OCI program to interrupt a query if it does not complete in six seconds. This example works under many UNIX operating systems. The example must be linked two-task to work correctly.
#include <stdio.h>
#include <signal.h>
#include <ocidfn.h>
#include <ocidem.h>
Lda_Def lda;
Cda_Def cda;
ub1 hda[256];
/* Define a new alarm function, to replace the standard
alarm handler. */
sighandler()
{
sword rv;
fprintf(stderr, "Alarm signal has been caught/n");
/* Call obreak() to interrupt the SQL statement in progress. */
if (rv = obreak(&lda))
fprintf(stderr, "Error %d on obreak/n", rv);
else
fprintf(stderr, "obreak performed/n");
}
err()
{
text errmsg[512];
sword n;
n = oerhms(&lda, cda.rc, errmsg, sizeof (errmsg));
fprintf(stderr, "/n-Oracle error-/n%.*s", n, errmsg);
fprintf(stderr, "while processing OCI function %s/n",
oci_func_tab[cda.fc]);
oclose(&cda);
ologof(&lda);
exit(1);
}
main(argc, argv)
int argc;
char *argv[];
{
void *old_sig;
text name[10];
/* This example must be linked two-task, so connect using SQL*Net. */
if (olog(&lda, hda, argv[1], -1, argv[2], -1,
(text *) 0, -1, OCI_LM_DEF)) {
printf("cannot connect as %s/n", argv[1]);
exit(1);
}
if (oopen(&cda, &lda, 0, -1, -1, 0, -1)) {
printf("cannot open cursor data area/n");
exit(1);
}
signal(SIGALRM, sighandler);
/* Parse a query statement. */
if (oparse(&cda, "select ename from emp", -1, 0, 2))
err();
if (odefin(&cda, 1, name, sizeof (name), 1,
-1, (sb2 *) 0, (text *) 0, 0, -1,
(ub2 *) 0, (ub2 *) 0))
err();
if (oexec(&cda))
err();
/* Set the timeout to six seconds. */
alarm(6);
/* Begin the query. */
for (;;) {
if (ofetch(&cda)) {
/* Break if no data found (should never happen,
unless the alarm fails, or the emp table has
less than 6 or so rows). */
if (cda.rc == 1403) break;
/* When the alarm is caught and obreak is performed,
a 1013 error should be detected at this point. */
err();
}
printf("%10.10s/n", name);
/* Slow the query for the timeout. */
sigpause();
}
fprintf(stderr, "Unexpected termination./n");
err();
}
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
#### ocan
Purpose
ocan() cancels a query after the desired number of rows have been fetched.
Syntax
ocan(Cda_Def *cursor);
ocan() informs Oracle that the operation in progress for the specified cursor is complete. The ocan() function thus frees any resources associated with the specified cursor, but keeps the cursor associated with its parsed representation in the shared SQL area.
For example, if you require only the first row of a multi-row query, you can call ocan() after the first ofetch() operation to inform Oracle that your program will not perform additional fetches.
If you use the oexfet() function to fetch your data, specifying a non-zero value for the oexfet() cancel parameter has the same effect as calling ocan() after the fetch completes.
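A minimal sketch of the single-row case (assuming a parsed and executed query on cda):

if (ofetch(&cda) == 0) {
    /* Only the first row is needed: free the query resources,
       but keep the cursor and its parsed statement. */
    ocan(&cda);
}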
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
cursor A pointer to the cursor data area specified in the oparse() call associated with the query.
#### oclose
Purpose
oclose() disconnects a cursor from the data areas in the Oracle Server with which it is associated.
Syntax
oclose(Cda_Def *cursor);
The oclose() function frees all resources obtained by the oopen(), parse, execute, and fetch operations using the cursor. If oclose() fails, the return code field of the CDA contains the error code.
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
cursor A pointer to the CDA specified in the associated oopen() call.
#### ocof
Purpose
ocof() disables autocommit, that is, automatic commit of every SQL data manipulation statement.
Syntax
ocof(Lda_Def *lda);
By default, autocommit is already disabled at the start of an OCI program. Turning on autocommit can have a serious impact on performance. So, if the ocon() (autocommit on) function enables autocommit for some special circumstance, use ocof() to disable autocommit as soon as it is practical.
If ocof() fails, the return code field of the LDA indicates the reason.
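A minimal sketch of the recommended pattern (assuming a connected lda):

ocon(&lda);     /* autocommit on, for a special case only */
/* ... execute the statements that must autocommit ... */
ocof(&lda);     /* autocommit off again, for performance */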
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
See also: ocom(), ocon(), olog().
#### ocom
Purpose
ocom() commits the current transaction.
Syntax
ocom(Lda_Def *lda);
The current transaction starts from the olog() call or the last orol() or ocom() call, and lasts until an ocom(), orol(), or ologof() call is issued.
If ocom() fails, the return code field of the LDA indicates the reason.
Do not confuse the ocom() call (COMMIT) with the ocon() call (turn autocommit on).
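For example, a hedged sketch of an explicit transaction boundary (assuming a parsed DML statement on cda):

if (oexec(&cda))     /* execute the INSERT, UPDATE, or DELETE */
    exit(1);         /* or report the error via oerhms() */
if (ocom(&lda))      /* commit the current transaction */
    exit(1);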
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
See also: ocon(), olog(), ologof(), orol().
#### ocon
Purpose
ocon() enables autocommit, that is, automatic commit of every SQL data manipulation statement.
Syntax
ocon(Lda_Def *lda);
By default, autocommit is disabled at the start of an OCI program. This is because it is more expensive and less flexible than placing ocom() calls after each logical transaction. When autocommit is on, a zero in the return code field after executing the SQL statement indicates that the transaction has been committed.
If ocon() fails, the return code field of the LDA indicates the reason.
If it becomes necessary to turn autocommit on for some special circumstance, it is advisable to follow that with a call to ocof() to disable autocommit as soon as it is practical in order to maximize performance.
Do not confuse the ocon() function with the ocom() (COMMIT) function.
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
See also: ocof(), ocom(), olog().
#### odefin
Purpose
odefin() defines an output variable for a specified select-list item of a SQL query.
Syntax
odefin(Cda_Def *cursor, sword pos, ub1 *buf,
sword bufl, sword ftype, <sword scale>,
[sb2 *indp], <text *fmt>, <sword fmtl>,
<sword fmtt>, [ub2 *rlen],
[ub2 *rcode]);
An OCI program must call odefin() once for each select-list item in a SQL statement. Each call to odefin() associates an output variable in your program with a select-list item of the query. odefin() can define scalar or string program variables which are compatible with the external datatype (ftype). See Table 3-2 for a list of datatypes and compatible variables. The output variable may also be the address of an array of scalars or strings for use with the oexfet() and ofen() functions.
Note: Applications running against a release 7.3 or later server that need to perform piecewise operations or utilize arrays of structures must use the newer odefinps() routine instead of odefin().
Oracle places data in the output variables when the program calls ofetch(), ofen(), or oexfet().
If you do not know the number of select-list items in the SQL statement, or the lengths and internal datatypes of the items, you can obtain this information at runtime using the odescr() function.
You can call odefin() only after you call oparse() to parse the SQL statement. You must also call odefin() before fetching the data.
odefin() associates output variables with select-list items using the position index of the select-list item in the SQL statement. Position indices start at 1 for the first (or leftmost) select-list item. For example, in the SQL statement
SELECT ename, empno, sal FROM emp WHERE sal > :min_sal
the select-list item SAL is in position 3, EMPNO is in position 2, and ENAME is in position 1.
If the type or length of bound variables changes between queries, you must reparse and rebind before re-executing.
You call odefin() to associate output buffers with the select-list items in the above statement as follows:
#define ENAME_LEN 20
Cda_Def cursor; /* allocate a cursor */
text employee_name[ENAME_LEN];
sword employee_number;
float salary;
sb2 ind_ename, ind_empno, ind_sal;
ub2 retc_ename, retc_empno, retc_sal;
ub2 retl_ename, retl_empno, retl_sal;
...
odefin(&cursor, 1, employee_name, ENAME_LEN, SQLT_STR,
-1, &ind_ename, 0, -1, -1, &retl_ename, &retc_ename);
odefin(&cursor, 2, &employee_number, (int) sizeof(int), SQLT_INT,
-1, &ind_empno, 0, -1, -1, &retl_empno, &retc_empno);
odefin(&cursor, 3, &salary, (int) sizeof(float), SQLT_FLT,
-1, &ind_sal, 0, -1, -1, &retl_sal, &retc_sal);
Oracle provides return code information at the row level using the return code field in the CDA. If you require return code information at the column level, you must include the optional rcode parameter, as in the examples above. During each fetch, Oracle sets rcode for the select-list item processed. This return parameter contains Oracle error codes, and indicates either successful completion (zero) or an exceptional condition, such as "null item fetched", "item fetched was truncated", or other non-fatal column errors. The following codes are some of those that can be returned in the rcode parameter:
| Code | Meaning |
| --- | --- |
| 0 | Success. |
| 1405 | A null was fetched. |
| 1406 | ASCII or string buffer data was truncated. The converted data from the database did not fit into the buffer. Check the value in indp, if specified, or rlen to determine the original length of the data. |
| 1454 | Invalid conversion specified: integers not of length 1, 2, or 4; reals not of length 4 or 8; invalid packed decimal conversions; packed decimal with more than 38 digits specified. |
| 1456 | Real overflow. Conversion of a database column or expression would overflow a floating-point number on this machine. |
| 3115 | Unsupported datatype. |
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| pos | sword | IN |
| buf | ub1 * | IN (1) |
| bufl | sword | IN |
| ftype | sword | IN |
| scale | sword | IN |
| indp | sb2 * | IN (1) |
| fmt | text * | IN |
| fmtl | sword | IN |
| fmtt | sword | IN |
| rlen | ub2 * | IN (1) |
| rcode | ub2 * | IN (1) |
Note 1. The buffer, indp, retl, and rcode parameters are OUT parameters for the ofetch(), ofen(), and oexfet() functions.
cursor A pointer to the CDA specified in the associated oparse() call. This may be either a regular cursor or a cursor variable.
pos An index for a select-list item in the query. Position indices start at 1 for the first (or leftmost) select-list item. The odefin() function uses the position index to associate output variables with a given select-list item. If you specify a position index greater than the number of items in the select-list, or less than 1, the behavior of odefin() is undefined.
If you do not know the number of items in the select-list, use the odescr() routine to determine it. See the second sample program in Appendix A for an example that does this.
buf A pointer to the variable in the user program that receives the data when ofetch(), ofen(), or oexfet() executes. The variable can be of any type into which an Oracle column or expression result can be converted. See Chapter 3 for more information on datatype conversions.
Note: If odefin() is being called to set up an array fetch operation using the ofen() or oexfet() functions, then the buf parameter must be the address of an array large enough to hold the set of items to be fetched.
bufl The length in bytes of the variable being defined. If buf is an array, this is the size in bytes of one element of the array.
Note: The datatype of bufl is sword. On some systems, this type might be only two bytes. When defining LONG VARCHAR and LONG VARRAW buffers, this appears to limit the maximum length of the buffer to 64K bytes. To define a longer buffer for these datatypes, set bufl to -1 and pass the actual data area length (total buffer length - sizeof (sb4)) in the first four bytes of buf. Set this value before calling odefin().
ftype The external datatype to which the select-list item is to be converted before it is moved to the output variable. A list of the external datatypes and datatype codes can be found in the "External Datatypes" section.
scale The scale of a packed decimal number. Not normally used in C.
indp The indp value, after the fetch, indicates whether the select-list item fetched was null, truncated, or returned intact. See "Indicator Values" for additional details.
If the output buffer size was too small to hold all of the data, the output was truncated. You can obtain the length of the data in the column using the expression
*(ub2 *) indp
If oparse() parses the SQL statement, and you do not define an indicator parameter for a column, a "fetched column value was truncated" error is returned for truncated select-list items.
Note: If odefin() is being called to set up an array fetch operation using the ofen() or oexfet() functions, then the indp parameter must be the address of an array large enough to hold indicator variables for all the items that will be fetched.
The indp parameter offers only a subset of the functionality provided by the rlen and rcode parameters.
fmt Not normally used in C. See the description of the ODEFIN routine for more information about packed decimal format specifiers.
fmtl Not normally used in C. See the description of the ODEFIN routine for more information about packed decimal format specifiers.
fmtt Not normally used in C. See the description of the ODEFIN routine for more information about packed decimal format specifiers.
rlen A pointer to a ub2 into which Oracle places the length of the data (plus length bytes, in the case of variable-length datatypes) after the fetch operation completes. If odefin() is being used to associate an array with a select-list item, the rlen parameter must also be an array of ub2s of the same size. Return lengths are valid after the ofetch(), ofen(), or oexfet() operation.
rcode A pointer to an unsigned short integer that receives the column return code after the fetch. The error codes that can be returned in rcode are those that indicate that data in the column has been truncated or that a null occurred, for example, ORA-01405 or ORA-01406.
If odefin() is being used to associate an array with a select-list item, the rcode parameter must also be an array of ub2s of the same size.
#### odefinps
Purpose
odefinps() defines an output variable for a specified select-list item in a SQL query. This call can also specify if an operation will be performed piecewise or with arrays of structures.
Syntax
odefinps(Cda_Def *cursor, ub1 opcode, sword pos,
ub1 *bufctx, sb4 bufl, sword ftype, <sword scale>,
[sb2 *indp],<text *fmt>, <sb4 fmtl>, <sword fmtt>,
[ub2 *rlenp], [ub2 *rcodep], sb4 buf_skip,
sb4 ind_skip, sb4 len_skip, sb4 rc_skip);
odefinps() is used to define an output variable for a specified select-list item in a SQL query. Additionally, it can indicate that an application will be fetching data incrementally at runtime. This piecewise fetch is designated in the opcode parameter. odefinps() is also used when an application will be fetching data into an array of structures.
Note: This function is only compatible with Oracle server release 7.3 or later. If a release 7.3 application attempts to use this function against a release 7.2 or earlier server, an error message is likely to be generated. At that point you must restart execution.
With the introduction of odefinps() there are now two fully-supported calls for defining output variables, the other being the older odefin(). Application developers should consider the following points when determining which define call to use:
• odefinps() is supported only when a program is linked in deferred mode. If it is necessary to link in non-deferred mode, odefin() must be used. In this case, the ability to handle piecewise operations and arrays of structures is not supported.
• odefinps() is more complex than the older define call. Users who are not performing piecewise operations and are not using arrays of structures may choose to use odefin().
Unlike older OCI calls, odefinps() does not accept -1 for any optional or unused parameters. When it is necessary to pass a value to these parameters NULL or 0 should be used instead.
See the sections "Piecewise Insert, Update and Fetch," and "Arrays of Structures" for more information about piecewise operations, arrays of structures, skip parameters and the odefinps() call.
The following sample code demonstrates the use of odefinps() in an OCI program which performs a fetch into an array of structures. This code is provided for demonstration purposes only, and does not constitute a complete program. Most of the work in this program is done in the array_fetch() routine.
For sample code demonstrating an array insert, see the description of the obindps() routine earlier in this chapter. For sample code demonstrating the use of odefinps() for a piecewise fetch, see the description of the osetpi() routine later in this chapter.
... /* OCI #include statements */
#define DEFER_PARSE 1 /* oparse flags */
#define NATIVE 1
#define VERSION_7 2
#define NO_MORE_DATA 1403
#define ARRAY_SIZE 10
#define OCI_EXIT_FAILURE 1 /* exit flags */
#define OCI_EXIT_SUCCESS 0
void array_fetch();
void print_results();
struct emp_record
{ int empno;
char ename[11];
char job[11];
int mgr;
char hiredate[10];
float sal;
float comm;
int deptno;
};
typedef struct emp_record emp_record;
struct emp_record_indicators
{ short empno;
short ename;
short job;
short mgr;
short hiredate;
short sal;
short comm;
short deptno;
};
typedef struct emp_record_indicators emp_record_indicators;
Lda_Def lda; /* login area */
ub1 hda[256]; /* host area */
Cda_Def cda; /* cursor area */
main()
{
... /* log on to oracle */
array_fetch();
... /* log off of oracle */
}
/* Function array_fetch(): This function retrieves EMP data */
/* into an array of structs and prints them. */
void array_fetch()
{
emp_record emp_records[20];
emp_record_indicators emp_records_inds[20];
int printed=0;
int cont=1;
int ret_val;
text *sqlstmt = (text *) "SELECT empno,ename,deptno /
FROM emp";
if (oopen(&cda, &lda, (text *)0, -1, -1, (text *)0, -1))
exit(OCI_EXIT_FAILURE);
if (oparse(&cda, sqlstmt, (sb4)-1, 0, (ub4)VERSION_7))
exit(OCI_EXIT_FAILURE);
if (odefinps(&cda, 1, 1, (ub1 *) &emp_records[0].empno,
(ub4) sizeof(emp_records[0].empno), SQLT_INT, 0,
(sb2 *) &emp_records_inds[0].empno, (text *)0, 0, 0,(ub2 *) 0, (ub2 *) 0,
(sb4) sizeof(emp_record), (sb4)
sizeof(emp_record_indicators), 0, 0))
exit(OCI_EXIT_FAILURE);
if (odefinps(&cda, 1, 2, (ub1 *) emp_records[0].ename,
(ub4) sizeof(emp_records[0].ename), SQLT_STR, 0,
(sb2 *) &emp_records_inds[0].ename, (text *)0, 0, 0,(ub2 *) 0, (ub2 *) 0,
(sb4) sizeof(emp_record), (sb4)
sizeof(emp_record_indicators), 0, 0))
exit(OCI_EXIT_FAILURE);
if (odefinps(&cda, 1, 3, (ub1 *) &emp_records[0].deptno,
(ub4) sizeof(emp_records[0].deptno), SQLT_INT, 0,
(sb2 *) &emp_records_inds[0].deptno, (text *)0, 0,
0, (ub2 *) 0, (ub2 *) 0,
(sb4) sizeof(emp_record), (sb4)
sizeof(emp_record_indicators), 0, 0))
exit(OCI_EXIT_FAILURE);
oexec(&cda)
while (cont)
{
printf(" Empno/tEname /t Deptno/n");
printf("----------/t----------/t----------/n");
ret_val=ofen(&cda,(sword) ARRAY_SIZE);
switch (cda->rc) /* switch on return value */
{
case 0:
print_results(emp_records,emp_records_inds,cda->rpc -
printed);
printed=cda->rpc;
break;
case NO_MORE_DATA:
/* print last batch? */
if (cda->rpc > printed)
{
print_results(emp_records,emp_records_inds,cda->rpc -
printed);
printed=cda->rpc;
}
cont=0;
break;
default:
exit(OCI_EXIT_FAILURE);
}
}
if (oclose(&cda))
exit(OCI_EXIT_FAILURE);
}
void print_results(emp_records,emp_records_inds,n)
emp_record emp_records[];
emp_record_indicators emp_records_inds[];
int n;
{
int i;
for (i=0;i<n;i++)
{
if (emp_records_inds[i].empno == -1)
printf("%10.s\t", "");
else
printf("%10.d\t", emp_records[i].empno);
printf("%-10.10s\t", (emp_records_inds[i].ename == -1 ? "" :
emp_records[i].ename));
if (emp_records_inds[i].deptno == -1)
printf("%10.s\n", "");
else
printf("%10.d\n", emp_records[i].deptno);
}
}
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| opcode | ub1 | IN |
| pos | sword | IN |
| bufctx | ub1 * | IN |
| bufl | sb4 | IN |
| ftype | sword | IN |
| scale | sword | IN |
| indp | sb2 * | IN |
| fmt | text * | IN |
| fmtl | sb4 | IN |
| fmtt | sword | IN |
| rlenp | ub2 * | OUT |
| rcodep | ub2 * | IN |
| buf_skip | sb4 | IN |
| ind_skip | sb4 | IN |
| len_skip | sb4 | IN |
| rc_skip | sb4 | IN |
Note: Since the odefinps() call can be used in a variety of different circumstances, some items in the following list of parameter descriptions include different explanations for how the parameter is used for piecewise operations, arrays of structures and standard scalar or array binds.
Standard scalar and array defines are those defines which were previously possible using odefin().
cursor A pointer to the CDA associated with the SELECT statement being processed.
opcode Piecewise define: pass as 0. Arrays of structures or standard define: pass as 1.
pos An index for the select-list column which needs to be defined. Position indices start from 1 for the first, or left-most, item of the query. The odefinps() function uses the position index to associate output variables with a given select-list item. If you specify a position index greater than the number of items in the select-list, or less than 1, the behavior of odefinps() is undefined.
If you do not know the number of items in the select list, use the odescr() routine to determine it. See the second sample program in Appendix A for an example that does this.
bufctx Piecewise define: A pointer to a context block entirely private to the application. This should be used by the application to store any information about the column being defined. One possible use would be to store a pointer to a file which will be referenced later. Each output variable can then have its own separate file pointer. The pointer can be retrieved by the application during a call to ogetpi().
Array of structures or standard define: This specifies a pointer to the program variable or the beginning of an array of program variables or structures into which the column being defined will be placed when the fetch is performed. This parameter is equivalent to the buf parameter of the odefin() call.
bufl Piecewise define: The maximum possible size of the column being defined.
Array of structures or standard define: The length (in bytes) of the variable pointed to by bufctx into which the column being defined will be placed when a fetch is performed. For an array define, this should be the length of the first scalar element of the array of variables or structures pointed to by bufctx.
ftype The external datatype to which the select-list item is to be converted before it is moved to the output variable. A list of the external datatypes and datatype codes can be found in the "External Datatypes" section.
For piecewise operations, the valid datatype codes are 1 (VARCHAR2), 5 (STRING), 8 (LONG) and 24 (LONG RAW).
indp A pointer to an indicator variable or an array of indicator variables. If arrays of structures are used, this points to a possibly interleaved array of indicator variable structures.
rlenp A pointer to an element or array of elements which will hold the length of a column or columns after a fetch is done. If arrays of structures are used, this points to a possibly interleaved array of length variable structures.
rcodep A pointer to an element or array of elements which will hold column-level error codes which are returned by a fetch. If arrays of structures are used, this points to a possibly interleaved array of return code variable structures.
buf_skip Piecewise define or standard scalar define: pass as 0.
Array of structures or standard array define: this is the skip parameter which specifies the number of bytes to be skipped in order to get to the next program variable element in the array being defined. In general, this will be the size of one program variable for a standard array define, or the size of one structure for an array of structures.
ind_skip Piecewise define or standard scalar define: pass as 0.
Array of structures or standard array define: this is the skip parameter which specifies the number of bytes which must be skipped to get to the next indicator variable in the possibly interleaved array of indicator variables pointed to by indp. In general, this will be the size of one indicator variable for a standard array define, and the size of one indicator variable structure for arrays of structures.
len_skip Piecewise define or standard define: pass as 0.
Array of structures: this is the skip parameter which specifies the number of bytes which must be skipped to get to the next column length in the possibly interleaved array of column lengths pointed to by rlenp. In general, this will be the size of one length variable for a standard array define, and the size of one length variable structure for arrays of structures.
rc_skip Piecewise define or standard define: pass as 0.
Array of structures: this is the skip parameter which specifies the number of bytes which must be skipped to get to the next return code structure in the possibly interleaved array of return codes pointed to by rcodep. In general, this will be the size of one return code variable for a standard array define, and the size of one length variable structure for arrays of structures.
#### odescr
Purpose
odescr() describes select-list items for SQL queries. The odescr() function returns internal datatype and size information for a specified select-list item.
Syntax
odescr(Cda_Def *cursor, sword pos,
sb4 *dbsize, [sb2 *dbtype],
[sb1 *cbuf], [sb4 *cbufl], [sb4 *dsize],
[sb2 *prec], [sb2 *scale],
[sb2 *nullok]);
The odescr() function replaces the older odsc(). You call odescr() after you have parsed the SQL statement (using oparse()) and after binding all input variables. odescr() obtains the following information about select-list items in a query:
• maximum size (dbsize)
• internal datatype code (dbtype)
• column name (cbuf)
• length of the column name (cbufl)
• maximum display size (dsize)
• precision of numeric items (prec)
• scale of numerics (scale)
• whether null values are permitted in the column (nullok)
A dependency exists between the results returned by a describe operation (odescr()) and a bind operation (obindps(), obndra(), obndrn() or obndrv()). Because a select-list item might contain bind variables, the type returned by odescr() can vary depending on the results of bind operations.
So, if you have placeholders for bind variables in a SELECT statement and you will use odescr() to obtain the size or datatype of select-list items, you should do the bind operation before the describe. If you need to rebind any input variables after performing a describe, you must reparse the SQL statement before rebinding.
Note: The rebind operation might change the results returned for a select-list item.
The odescr() function is particularly useful for dynamic SQL queries. That is, queries in which the number of select-list items, and their datatypes and sizes might not be known until runtime.
The return code field of the CDA indicates success (zero) or failure (non-zero) of the odescr() call.
The odescr() function uses a position index to refer to select-list items in the SQL query statement. For example, the SQL statement
SELECT ename, sal FROM emp WHERE sal > :Min_sal
contains two select-list items: ENAME and SAL. The position index of SAL is 2, and ENAME's index is 1.
The example program below is a complete C program that shows how you can describe select-list items. The program allows the user to enter SQL query statements at runtime, and prints out the name of each select-list item, the length of the name, and the datatype. See also the sample program cdemo2.c for additional information on describing select lists.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <oratypes.h>
#include <ocidfn.h>
#include <ocidem.h>
#define NPOS 13
Cda_Def cda;
Lda_Def lda;
ub1 hda[256];
main()
{
text sql_statement[256];
sword i, pos;
text cbuf[NPOS][20];
sb4 dbsize[NPOS], cbufl[NPOS], dsize[NPOS];
sb2 dbtype[NPOS], prec[NPOS], scale[NPOS], nullok[NPOS];
if (olog(&lda, hda, "scott", -1, "tiger", -1, 0, -1,
OCI_LM_DEF)) {
printf("Cannot connect as scott. Exiting.../n");
exit(1);
}
if (oopen(&cda, &lda, 0, -1, -1, 0, -1)) {
oci_error();
exit(1);
}
for (;;) {
printf("/nEnter a query or /"exit/"> ");
gets(sql_statement);
if (strncmp(sql_statement, "exit", 4) == 0) break;
/* parse the statement */
if (oparse(&cda, sql_statement, -1, 0, 0)) {
oci_error();
continue;
}
for (pos = 1; pos < NPOS; pos++) {
cbufl[pos] = sizeof cbuf[pos];
if (odescr(&cda, pos, &dbsize[pos], &dbtype[pos],
&cbuf[pos], &cbufl[pos], &dsize[pos],
&prec[pos], &scale[pos], &nullok[pos])) {
if (cda.rc == 1007)
break;
oci_error();
continue;
}
}
/* print out the total count and the names
of the select-list items, column sizes, and datatype codes */
pos--;
printf("/nThere were %d select-list items./n", pos);
printf("Item name Length Datatype/n");
printf("/n");
for (i = 1; i <= pos; i++) {
printf("%*.*s", cbufl[i], cbufl[i], cbuf[i]);
printf("%*c", 25 - cbufl[i], ' ');
printf("%6d %8d/n", cbufl[i], dbtype[i]);
}
}
oclose(&cda);
ologof(&lda);
exit(0);
}
oci_error()
{
text msg[512];
printf("/nOracle ERROR/n");
oerhms(&lda, cda.rc, msg, (int) sizeof msg);
printf("%s", msg);
if (cda.fc != 0)
printf("processing OCI function %s/n",
oci_func_tab[cda.fc]);
}
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| pos | sword | IN |
| dbsize | sb4 * | OUT |
| dbtype | sb2 * | OUT |
| cbuf | sb1 * | OUT |
| cbufl | sb4 * | IN/OUT |
| dsize | sb4 * | OUT |
| prec | sb2 * | OUT |
| scale | sb2 * | OUT |
| nullok | sb2 * | OUT |
cursor A pointer to a CDA in the program. The odescr() function uses the cursor address to reference a specific SQL query statement that has been passed to Oracle by a prior oparse() call. This may be either a regular cursor or a cursor variable.
pos The position index of the select-list item in the SQL query. Each item is referenced by position index, starting at one for the first (or leftmost) item. If you specify a position index greater than the number of items in the select-list or less than one, odescr() returns a "variable not in select-list" error in the return code field of the CDA.
dbsize A pointer to a signed long that receives the maximum size of the column, as stored in the Oracle data dictionary. Values returned in dbsize are
| Oracle Column Type | Value |
| --- | --- |
| CHAR, VARCHAR2, RAW | length of the column in the table |
| NUMBER | 22 (the internal length) |
| DATE | 7 (the internal length) |
| LONG, LONG RAW | 0 |
| ROWID | (system dependent) |
| functions returning datatype 1 (such as TO_CHAR()) | same as the dsize parameter |
dbtype Receives the internal datatype code of the select-list item. See Table 3 - 1 for a list of Oracle internal datatype codes. The datatype code returned for CHAR items (including literal strings in a select-list) can depend on how you parsed the SQL statement. If you used oparse() with the lngflg parameter set to 0, or oparse() with the lngflg parameter set to 1 when connected to a Version 6 database, CHAR items return the datatype code 1. Otherwise, dbtype returns 96.
The USER function in a select-list always returns the datatype code 1.
cbuf Receives the name of the select-list item, that is, the name of the column or wording of the expression. The program must allocate a string long enough to receive the item name.
cbufl Contains the length in bytes of cbuf. This parameter must be set before calling odescr(). If cbufl is not specified (that is, passed as 0), then the select-list item name is not returned. The name is truncated if it is longer than cbufl.
On return from odescr(), cbufl contains the length of the returned string in bytes.
dsize Receives the maximum display size of the select-list item if the select-list item is returned as a character string. The dsize parameter is especially useful when functions, such as SUBSTR or TO_CHAR, are used to modify the representation of a column.
prec Returns the precision of numeric select-list items. Precision is the total number of digits of a number. See "Internal Datatypes" for additional information about precision and scale.
Pass this parameter as zero if you do not require the precision value.
scale A pointer to a short that returns the scale of numeric select-list items. Pass this parameter as zero if you do not require the scale value.
For Version 6 of the RDBMS, odescr() returns the correct scale and precision of fixed-point numbers and returns precision and scale of zero for floating-point, as shown below:
| SQL Datatype | Precision | Scale |
| --- | --- | --- |
| NUMBER(P) | P | 0 |
| NUMBER(P,S) | P | S |
| NUMBER | 0 | 0 |
| FLOAT(N) | 0 | 0 |
For Oracle7, the SQL types REAL, DOUBLE PRECISION, FLOAT, and FLOAT(N) return the correct precision and a scale of -127.
nullok A pointer to a short that returns zero if null values are not permitted for the column, and non-zero if nulls are permitted.
Pass this parameter as zero if you do not require the null status of the select-list item.
#### odessp
Purpose
odessp() is used to describe the parameters of a PL/SQL procedure or function stored in an Oracle database.
Syntax
odessp(Lda_Def *lda, text *objnam,
size_t onlen, ub1 *rsv1, size_t rsv1ln,
ub1 *rsv2, size_t rsv2ln, ub2 *ovrld, ub2 *pos,
ub2 *level, text **argnm, ub2 *arnlen,
ub2 *dtype, ub1 *defsup, ub1 *mode, ub4 *dtsiz,
sb2 *prec, sb2 *scale, ub1 *radix, ub4 *spare,
ub4 *arrsiz);
You call odessp() to get the properties of a stored procedure (or function) and the properties of its parameters. When you call odessp(), pass to it:
• A valid LDA for a connection that has execute privileges on the procedure.
• The name of the procedure, optionally including the package name. The package body does not have to exist, as long as the procedure is specified in the package.
• The total length of the procedure name, or -1 if it is null terminated.
If the procedure exists and the connection specified in the lda parameter has permission to execute the procedure, odessp() returns information about each parameter of the procedure in a set of array parameters. It also returns information about the return type if it is a function.
odessp() returns the same information for a parameter of type cursor variable as for a regular cursor.
Your OCI program must allocate the arrays for all parameters of odessp(), and you must pass a parameter (arrsiz) that indicates the size of the arrays (or the size of the smallest array if they are not equal). The arrsiz parameter returns the number of elements of each array that was returned by odessp().
odessp() returns a non-zero value if an error occurred. The error number is in the return code field of the LDA. The following errors can be returned there:
-20000
The object named in the objnam parameter is a package, not a procedure or function.
-20001
The procedure or function named in objnam does not exist in the named package.
-20002
A database link was specified in objnam, either explicitly or by means of a synonym.
ORA-0xxxx
An Oracle code, usually indicating a syntax error in the procedure specification in objnam.
When odessp() returns successfully, the OUT array parameters contain the descriptive information about the procedure or function parameters, and the return type for a function. As an example, consider a package EMP_RECS in the SCOTT schema. The package contains two stored procedures and a stored function, all named GET_SAL_INFO. Here is the package specification:
create or replace package EMP_RECS as
procedure get_sal_info (
name in emp.ename%type,
salary out emp.sal%type);
procedure get_sal_info (
ID_num in emp.empno%type,
salary out emp.sal%type);
function get_sal_info (
name in emp.ename%type) return emp.sal%type;
end EMP_RECS;
A code fragment to describe these procedures and functions follows:
#include <stdio.h>
#include <ocidfn.h>
#include <ocidem.h>
#define ASIZE 50
Lda_Def lda;
Cda_Def cda;
ub1 hda[256];
text *objnam = (text *) "scott.emp_recs.get_sal_info";
ub2 ovrld[ASIZE];
ub2 pos[ASIZE];
ub2 level[ASIZE];
text argnm[ASIZE][30];
ub2 arnlen[ASIZE];
ub2 dtype[ASIZE];
ub1 defsup[ASIZE];
ub1 mode[ASIZE];
ub4 dtsize[ASIZE];
sb2 prec[ASIZE];
sb2 scale[ASIZE];
ub1 radix[ASIZE];
ub4 spare[ASIZE];
ub4 arrsiz = (ub4) ASIZE;
main() {
int i, rv;
if (olog(&lda, hda, (text *) "scott", -1, (text *) "tiger", -1,
0, -1, OCI_LM_DEF)) {
printf("cannot connect as scott/n");
exit(1);
}
printf("connected/n");
/* call the describe function */
rv = odessp(&lda, objnam, -1, (ub1 *) 0, 0, (ub1 *) 0, 0,
ovrld, pos, level, argnm, arnlen, dtype,
defsup, mode, dtsize, prec, scale, radix,
spare, &arrsiz);
if (rv != 0)
{
printf("error in odessp %d/n", lda.rc);
}
/* print out the returned values */
printf("/nArrsiz = %ld/n", arrsiz);
if (arrsiz > ASIZE)
arrsiz = ASIZE;
printf(" Mode Dtsize Prec Scale Radix/n");
printf("----------------------------------");
printf("-----------------------------/n");
for (i = 0; i < arrsiz; i++)
{
printf("%8.8s %6d %5d %3d %8d %4d %6d %4d %5d %5d/n",
argnm[i], ovrld[i], level[i], pos[i],
dtype[i], mode[i], dtsize[i], prec[i], scale[i],
}
exit(0);
}
When this call to odessp() completes, the return parameter arrays are filled in as shown in Table 4 - 1. The arrsiz parameter returns 6, as there were a total of 5 parameters and one function return type described.
The columns give the values returned in array elements 0 through 5:

| Parameter | 0 | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- | --- |
| ovrld | 1 | 1 | 2 | 2 | 3 | 3 |
| pos | 1 | 2 | 1 | 2 | 0 | 1 |
| level | 1 | 1 | 1 | 1 | 1 | 1 |
| argnm | name | salary | ID_num | salary | NULL | name |
| arnlen | 4 | 6 | 6 | 6 | 0 | 4 |
| dtype | 1 | 2 | 2 | 2 | 2 | 1 |
| defsup | 0 | 0 | 0 | 0 | 0 | 0 |
| mode | 0 | 1 | 0 | 1 | 1 | 0 |
| dtsize | 10 | 22 | 22 | 22 | 22 | 10 |
| prec | | 7 | 4 | 7 | 7 | |
| scale | | 2 | 0 | 2 | 2 | |
| radix | | 10 | 10 | 10 | 10 | |
| spare (1) | n/a | n/a | n/a | n/a | n/a | n/a |

Note 1: Reserved by Oracle for future use.
Table 4 - 1. Return Values from odessp() Call
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
| objnam | text * | IN |
| onlen | size_t | IN |
| rsv1 | ub1 * | IN |
| rsv1ln | size_t | IN |
| rsv2 | ub1 * | IN |
| rsv2ln | size_t | IN |
| ovrld | ub2 * | OUT |
| pos | ub2 * | OUT |
| level | ub2 * | OUT |
| argnm | text ** | OUT |
| arnlen | ub2 * | OUT |
| dtype | ub2 * | OUT |
| defsup | ub1 * | OUT |
| mode | ub1 * | OUT |
| dtsiz | ub4 * | OUT |
| prec | sb2 * | OUT |
| scale | sb2 * | OUT |
| radix | ub1 * | OUT |
| spare | ub4 * | OUT |
| arrsiz | ub4 * | IN/OUT |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
objnam The name of the procedure or function, including optional schema and package name. Quoted names are accepted. Synonyms are also accepted and are translated. Multi-byte characters can be used. The string can be null terminated. If it is not, the actual length in bytes must be passed in the onlen parameter.
onlen The length in bytes of the objnam parameter. If objnam is a null-terminated string, pass onlen as -1. Otherwise, pass the exact length.
rsv1 Reserved by Oracle for future use.
rsv1ln Reserved by Oracle for future use.
rsv2 Reserved by Oracle for future use.
rsv2ln Reserved by Oracle for future use.
ovrld An array indicating whether the procedure is overloaded. If the procedure (or function) is not overloaded, 0 is returned. Overloaded procedures return 1...n for n overloadings of the name.
pos An array returning the parameter positions in the parameter list of the procedure. The first, or left-most, parameter in the list is position 1. When pos returns a 0, this indicates that a function return type is being described.
level For scalar parameters, level returns 0. For a record parameter, 0 is returned for the record itself, then for each parameter in the record the parameter's level in the record is indicated, starting from 1, in successive elements of the returned value of level.
For array parameters, 0 is returned for the array itself. The next element in the return array is at level 1 and describes the element type of the array.
For example, for a procedure that contains three scalar parameters, an array of ten elements, and one record containing three scalar parameters at the same level, you need to pass odessp() arrays with a minimum dimension of nine: three elements for the scalars, two for the array, and four for the record parameter.
argnm A pointer to an array of strings that returns the name of each parameter in the procedure or function. The strings are not null terminated. Each string in the array must be exactly 30 characters long.
arnlen The length in bytes of each corresponding parameter name in argnm.
dtype The Oracle datatype code for each parameter. See the PL/SQL User's Guide and Reference for a list of the PL/SQL datatypes. Numeric types, such as FLOAT, INTEGER, and REAL return a code of 2. VARCHAR2 returns 1. CHAR returns 96. Other datatype codes are shown in Table 3 - 5.
Note: A dtype value of 0 indicates that the procedure being described has no parameters.
defsup This parameter indicates whether the corresponding parameter has a default value. Zero returned indicates no default. One indicates that a default value was supplied in the procedure or function specification.
mode This parameter indicates the mode of the corresponding parameter. Zero indicates an IN parameter, one an OUT parameter, and two an IN/OUT parameter.
dtsiz The size of the datatype in bytes. Character datatypes return the size of the parameter. For example, the EMP table contains a column ENAME. If a parameter in a procedure is of the type EMP.ENAME%TYPE, the value 10 is returned for this parameter, because that is the length of the ENAME column in a single-byte character set.
For number types, 22 is returned. See the description of the dbsize parameter under odescr() for more information.
prec This parameter indicates the precision of the corresponding parameter if the parameter is numeric; otherwise, it returns zero.
scale This parameter indicates the scale of the corresponding parameter if the parameter is numeric.
radix This parameter indicates the radix of the corresponding parameter if it is numeric.
spare Reserved by Oracle for future use.
arrsiz When you call odessp(), pass the length of the arrays of the OUT parameters. If the arrays are not of equal length, you must pass the length of the shortest array. When odessp() returns, arrsiz returns the number of array elements filled in.
#### oerhms
Purpose
oerhms() returns the text of an Oracle error message, given the error code rcode.
Syntax
oerhms(Lda_Def *lda, sb2 rcode,
text *buf, sword bufsiz);
When you call oerhms(), pass the address of the LDA for the active connection as the first parameter. This is required to retrieve error messages that are correct for the database version being used on that connection.
The oerhms() function does not return zero when it completes successfully. It returns the number of characters in buf. The error message text in buf is null terminated.
When using oerhms() to return error messages from PL/SQL blocks (where the error code is between 6550 and 6599), be sure to allocate a large buf, because several messages can be returned. The maximum length of an Oracle error message is 512 characters.
For more information about the causes of Oracle errors and possible solutions, see the Oracle7 Server Messages manual.
The following example shows how to obtain an error message from a specific Oracle instance:
Lda_Def lda[2]; /* two separate connections in effect */
Cda_Def cda;
sword n_chars;
text msgbuf[512];
...
/* when an error occurs on the second connection */
n_chars = oerhms(&lda[1], cda.rc, msgbuf, (int) sizeof(msgbuf));
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
| rcode | sb2 | IN |
| buf | text * | OUT |
| bufsiz | sword | IN |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
rcode The LDA or CDA return code containing an Oracle error number.
buf A pointer to a buffer that receives the error message text. The message text is null terminated.
bufsiz The size of the buffer in bytes. The maximum size of the buffer is essentially unlimited. However, values larger than 1000 bytes are not normally needed.
#### oexec
Purpose
oexec() executes the SQL statement associated with a cursor.
Syntax
oexec(Cda_Def *cursor);
Before calling oexec(), you must call oparse() to parse the SQL statement, and this call must complete successfully. If the SQL statement contains placeholders for bind variables, you must call obndrv(), obindps(), obndra() or obndrn() to bind each placeholder to the address of a program variable before calling oexec().
For queries, after oexec() is called, the program must explicitly request rows of the result set using ofen() or ofetch().
For UPDATE, DELETE, and INSERT statements, oexec() executes the entire SQL statement and sets the return code field and the rows processed count field in the CDA. Note that an UPDATE or DELETE that does not affect any rows (no rows match the WHERE clause) returns success in the return code field and zero in the rows processed count field.
Note:
If cursor is a cursor variable that has been OPENed FOR in a PL/SQL block, then oexec() returns an error unless oparse() has been called for cursor with another SQL statement or PL/SQL block.
DML statements (e.g., UPDATE, INSERT) are executed when a call is made to oexec(), oexn() or oexfet(). DDL statements (e.g., CREATE TABLE, REVOKE) are executed on the parse if you have linked in non-deferred mode, or if you have linked with the deferred option and the defflg parameter of oparse() is zero. If you have linked in deferred mode and the defflg parameter is non-zero, you must call oexn() or oexec() to execute the statement.
Oracle recommends that you use the deferred parse capability whenever possible. This results in increased performance, especially in a networked environment. Note, however, that errors in the SQL statement that would be detected when oparse() is called in non-deferred mode are not detected until the first non-deferred call is made (usually an execute or describe call).
Note: It is possible to use the oexn() function in place of oexec() by binding scalar variables, not arrays, and setting the count parameter to 1. For queries, use oexfet() in preference to oexec() followed by ofen().
See the description of the ofetch() routine for an example that shows how to use oexec().
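As a minimal sketch (not one of the manual's examples), the usual parse-bind-execute sequence for a non-query statement might look as follows, assuming a cursor cda already opened with oopen(); the table, the placeholder, and the errrpt() error routine are assumptions:

sword min_sal = 5000;
text *stmt = (text *) "DELETE FROM emp WHERE sal < :MIN_SAL";
/* parse the statement (Oracle7 semantics, non-deferred) */
if (oparse(&cda, stmt, -1, 0, 2))
    errrpt(&cda);           /* hypothetical error routine */
/* bind the placeholder to a program variable */
if (obndrv(&cda, (text *) ":MIN_SAL", -1, (ub1 *) &min_sal,
           (int) sizeof (min_sal), SQLT_INT, -1, (sb2 *) 0,
           (ub1 *) 0, -1, -1))
    errrpt(&cda);
/* execute; the rows processed count is then in cda.rpc */
if (oexec(&cda))
    errrpt(&cda);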
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
cursor A pointer to the CDA specified in the associated oparse() call.
#### oexfet
Purpose
oexfet() executes the SQL statement associated with a cursor, then fetches one or more rows. oexfet() can also perform a cancel of the cursor (the same as an ocan() call).
Syntax
oexfet(Cda_Def *cursor, ub4 nrows,
sword cancel, sword exact);
Before calling oexfet(), the OCI program must first call oparse() to parse the SQL statement, call obndra(), obndrn(), obndrv() or obindps() (if necessary) to bind input variables, then call odefin() or odefinps() to define output variables.
If the OCI program was linked using the deferred mode link option, the bind and define steps are deferred until oexfet() is called. If oparse() was called with the deferred parse flag (defflg) parameter non-zero, the parse step is also delayed until oexfet() is called. This means that your program can complete the processing of a SQL statement using a minimum of message round-trips between the client running the OCI program and the database server.
If you call oexfet() for a DML statement that is not a query, Oracle issues the error
ORA-01002: fetch out of sequence
and the execute operation fails.
Note: Using the deferred parse, bind, and define capabilities when processing a SQL statement requires more memory on the client system than the non-deferred sequence. So, you gain execution speed at the cost of some additional space.
When running against an Oracle7 database, where the SQL statement was parsed using oparse() with the lngflg parameter set to 1 or 2, a character string that is too large for its associated buffer is truncated. The column return code (rcode) is set to the error
ORA-01406: fetched column value was truncated
and the indicator parameter is set to the original length of the item. However, the oexfet() call does not return an error indication. If a null is encountered for a select-list item, the associated column return code (rcode) for that column is set to the error
ORA-01405: fetched column value is NULL
and the indicator parameter is set to -1. The oexfet() call does not return an error.
However, if no indicator parameter is defined and the program is running against an Oracle7 database, oexfet() does return an ORA-01405 error. It is always an error if a null is selected and no indicator parameter is defined, even if column return codes and return lengths are defined.
oexfet() both executes the statement and fetches the row or rows that satisfy the query. If you need to fetch additional rows after oexfet() completes, use the ofen() function. The following example shows how you can use deferred parse, bind, and define operations together with oexfet() to process a SQL statement:
Cda_Def cda;
Lda_Def lda;
text *sql_statement =
"SELECT ename, sal FROM emp WHERE deptno = :1";
float salaries[12000];
text names[12000][20];
sb2 sal_ind[12000], name_ind[12000];
char dept_number_stg[10];
sword dept_number;
...
/* after connecting to Oracle ... */
oopen(&cda, &lda, 0, -1, -1, 0, -1);
oparse(&cda, sql_statement, -1, 1, 1); /* deferred parse*/
printf("Enter department number: ");
gets(dept_number_stg);
dept_number = (sword) atoi(dept_number_stg);
...
obndrn(&cda, 1, &dept_number, (int) sizeof (int), 3, -1,
0, 0, -1, -1);
odefin(&cda, 2, salaries, (int) sizeof (float),
4, -1, sal_ind, 0, -1, -1); /* datatype FLOAT is 4 */
odefin(&cda, 1, names, 20, 1, -1, name_ind, 0, -1, -1);
/* retrieve 12000 or fewer salaries */
oexfet(&cda, 12000, 0, 0); /* cancel and exact not set */
....
The number of rows that were fetched is returned in the rows processed count field of the CDA.
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| nrows | ub4 | IN |
| cancel | sword | IN |
| exact | sword | IN |
cursor A pointer to a CDA specified in the associated oparse() call.
nrows The number of rows to fetch. If nrows is greater than 1, you must define arrays to receive select-list values, as well as any indicator variables. See the description of odefin() for more information.
If nrows is greater than the number of rows that satisfy the query, the rows processed count field in the CDA is set to the number of rows returned, and Oracle returns the error
ORA-01403: no data found
Note: Even though the error is returned, the rows that were found are actually fetched.
cancel If this parameter is non-zero when oexfet() is called, the cursor is canceled after the fetch completes. This has exactly the effect of issuing an ocan() call, but does not require the additional call overhead.
exact If this parameter is non-zero when oexfet() is called, oexfet() returns an error if the number of rows that satisfy the query is not exactly the same as the number specified in the nrows parameter. Nevertheless, the rows are returned.
If the number of rows returned by the query is less than the number specified in the nrows parameter, Oracle returns the error
ORA-01403: no data found
If the number of rows returned by the query is greater than the number specified in the nrows parameter, Oracle returns the error
ORA-01422: Exact fetch returns more than requested number of rows
Note: If exact is non-zero, a cancel of the cursor is always performed, regardless of the setting of the cancel parameter.
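For example (a sketch under the assumption that a singleton SELECT of one numeric column has already been parsed and any inputs bound), an exact single-row fetch could be written as follows; errrpt() is a hypothetical error routine:

sword sal;
odefin(&cda, 1, (ub1 *) &sal, (int) sizeof (sal), 3, -1,
       (sb2 *) 0, 0, -1, -1, (ub2 *) 0, (ub2 *) 0);
/* expect exactly one row; a cancel is implied because exact is set */
if (oexfet(&cda, (ub4) 1, 0, 1))
    errrpt(&cda);   /* rc is 1403 or 1422 if the query did not
                       return exactly one row */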
#### oexn
Purpose
oexn() executes a SQL statement. Array variables can be used to input multiple rows of data in one call.
Syntax
oexn(Cda_Def *cursor, sword iters, sword rowoff);
oexn() is similar to the older oexec(), but it allows you to take advantage of the Oracle array interface. oexn() allows operations using arrays of bind variables. oexn() is generally much faster than successive calls to oexec(), especially in a networked client-server environment.
Note:
If cursor is a cursor variable that has been OPENed FOR in a PL/SQL block, then oexn() returns an error, unless oparse() has been called on cursor with another SQL statement or PL/SQL block.
Variables are bound to placeholders in the SQL statement using obndra(), obndrv(), obndrn() or obindps(). A pointer to the scalar or array is passed to the binding function. Data must be present in bind variables before you call oexn(). For example:
Cda_Def cursor;
text names[10][20]; /* 2-dimensional array of char */
sword emp_nos[10]; /* an array of 10 integers */
sb2 ind_params[10]; /* array of indicator parameters */
text *sql_stmt = "INSERT INTO emp(ename, empno) VALUES /
(:N, :E)";
...
/* parse the statement */
oparse(&cursor, sql_stmt, -1, 1, 1); /* deferred parse */
/* bind the arrays to the placeholders */
obndrv(&cursor, ":N", -1, names, 20, SQLT_CHR,
-1, ind_params, 0, -1, -1);
/* empno is non-null, so indicator parameters are not used */
obndrv(&cursor, ":E", -1, emp_nos, (int) sizeof(int),
SQLT_INT, -1, 0, 0, -1, -1);
/* fill in the data and indicator parameters, then
execute the statement, inserting the array values */
oexn(&cursor, 10, 0);
This example declares three arrays, one of ten integers, one of ten indicators, and one of ten 20-character strings. It also defines a SQL statement that inserts multiple rows into the database. After binding the arrays, the program must place data for the first INSERT in names[0] and emp_nos[0], for the second INSERT in names[1] and emp_nos[1], and so forth. (This step is not shown in the example.) Then oexn() is called to insert the data in the arrays into the EMP table.
The completion status of oexn() is indicated in the return code field of the CDA. The rows processed count in the CDA indicates the number of rows successfully processed. If the rows processed count is not equal to iters, the operation failed on array element rows processed count + 1.
You can continue to process the rest of the array even after a failure on one of the array elements as long as a rollback did not occur (obtained from the flags1 field in the CDA). You do this by using rowoff to start operations at an array element other than the first.
In the above example, if the rows processed count was 5 at completion of oexn(), then row six was rejected. In this event, to continue the operation at row seven, call oexn() again as follows:
oexn(&cursor, 10, 6);
Note: The maximum number of elements in an array is 32767.
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| iters | sword | IN |
| rowoff | sword | IN |
cursor A pointer to the CDA specified in the associated oparse() call.
iters The total size of the array of bind variables to be inserted. The size cannot be greater than 32767 items.
rowoff The zero-based offset within the bind variable array at which to begin operations. oexn() processes (iters - rowoff) array elements if no error occurs.
#### ofen
Purpose
ofen() fetches one or multiple rows into arrays of variables, taking advantage of the Oracle array interface.
Syntax
ofen(Cda_Def *cursor, sword nrows);
ofen() is similar to ofetch(); however, ofen() can fetch multiple rows into an array of variables with a single call. A pointer to the array is bound to a select-list item in the SQL query statement using odefin().
When running against an Oracle7 database, where the SQL statement was parsed using oparse() with the lngflg parameter set to 1 or 2, a character string that is too large for its associated buffer is truncated, the column return code (rcode) is set to the error
ORA-01406: fetched column value was truncated
and the indicator parameter is set to the original length of the item. However, the ofen() call does not return an error indication. If a null is encountered for a select-list item, the associated column return code (rcode) for that column is set to the error
ORA-01405: fetched column value is NULL
and the indicator parameter is set to -1. The ofen() call does not return an error.
However, if no indicator parameter is defined and the program is running against an Oracle7 database, ofen() does return the 1405 error. It is always an error if a null is selected and no indicator parameter is defined, even if column return codes and return lengths are defined.
Even when fetching a single row, Oracle recommends that Oracle7 OCI programs use oexfet(), with the nrows parameter set to 1, instead of the combination of oexec() and ofen(). Use ofen() after oexfet() to fetch additional rows when you do not know in advance the exact number of rows that a query returns.
The following example is a complete OCI program that shows how ofen() can be used to extract multiple rows using the array interface.
#include <stdio.h>
#include <oratypes.h>
#include <ocidfn.h>
#include <ocidem.h>
#define MAX_NAME_LENGTH 30
Lda_Def lda;
ub1 hda[256];
Cda_Def cda;
main()
{
static sb2 ind_a[10];
static sword empno[10];
static text names[10][MAX_NAME_LENGTH];
ub2 rl[10], rc[10];
sword i, n, rows_done;
/* connect to Oracle */
if (olog(&lda, hda, "scott/tiger",
-1, 0, -1, 0, -1, 0, OCI_LM_DEF)) {
printf("cannot connect to Oracle as scott/tiger/n");
exit(1);
}
/* open one cursor */
if (oopen(&cda, &lda, 0, -1, -1, 0, -1)) {
printf("cannot open the cursor/n");
ologof(&lda);
exit(1);
}
/* parse a query */
if (oparse(&cda, "select ename, empno from emp", -1, 1, 2)) {
oci_error(&cda);
exit(1);
}
/* define the output variables */
if (odefin(&cda, 1, names, MAX_NAME_LENGTH, 5, -1,
ind_a, 0, -1, -1, rl, rc)) {
oci_error(&cda);
exit(1);
}
if (odefin(&cda, 2, empno, (int) sizeof (int), 3, -1,
0, 0, -1, -1, 0, 0)) {
oci_error(&cda);
exit(1);
}
/* execute the SQL statement */
if (oexec(&cda)) {
oci_error(&cda);
exit(1);
}
/* use ofen to fetch the rows, 10 at a time,
and then display the results */
for (rows_done = 0;;) {
if (ofen(&cda, 10))
if (cda.rc != 1403) {
oci_error(&cda); /* some error */
exit(1);
}
/* the rpc is cumulative, so find out how many
rows to display this time (always <= 10) */
n = cda.rpc - rows_done;
rows_done += n;
for (i = 0; i < n; i++) {
if (ind_a[i])
printf("%s ", "(null)");
else
printf("%s%*c", names[i],
MAX_NAME_LENGTH - rl[i], ' ');
printf("%10d/n", empno[i]);
}
if (cda.rc == 1403) break; /* no more rows */
}
printf("%d rows returned/n", cda.rpc);
if (oclose(&cda))
exit(1);
if (ologof(&lda))
exit(1);
exit(0);
}
oci_error(cda)
Cda_Def *cda;
{
static text msg[512];
sword len;
len = oerhms(&lda, cda->rc, msg, (int) sizeof (msg));
printf("/nOracle ERROR/n");
printf("%.*s/n", len, msg);
printf("Processing OCI function %s/n",
oci_func_tab[cda->rc]);
return 0;
}
The return code field of the CDA indicates the completion status of ofen(). The rows processed count field in the CDA indicates the cumulative number of rows successfully fetched. If the rows processed count increases by nrows, ofen() may be called again to get the next batch of rows. If the rows processed count does not increase by nrows, then an error, such as "no data found", has occurred.
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| nrows | sword | IN |
cursor A pointer to the CDA associated with the SQL statement by the oparse() call used to parse it.
nrows The size of the defined variable array on which to operate. The size cannot be greater than 32767 items. If the size is one, then ofen() acts effectively just like ofetch().
#### ofetch
Purpose
ofetch() returns rows of a query to the program, one row at a time.
Syntax
ofetch(Cda_Def *cursor);
Each select-list item of the query is placed into a buffer identified by a previous odefin() call. When running against Oracle7, where the SQL statement was parsed using oparse() with the lngflg parameter set to 1 or 2, a character string that is too large for its associated buffer is truncated, the column return code (rcode) is set to the error
ORA-01406: fetched column value was truncated
and the indicator parameter is set to the original length of the item. However, the ofetch() call does not return an error indication. If a null is encountered for a select-list item, the associated column return code (rcode) for that column is set to the error
ORA-01405: fetched column value is NULL
and the indicator parameter is set to -1. The ofetch() call does not return an error.
However, if no indicator parameter is defined and the program is running against an Oracle7 database, ofetch() does return the 1405 error. It is always an error if a null is selected and no indicator parameter is defined, even if column return codes and return lengths are defined.
Even when fetching a single row, Oracle recommends that Oracle7 OCI programs use oexfet(), with the nrows parameter set to 1, instead of the combination of oexec() and ofetch().
The following example shows how you can obtain data from Oracle using ofetch() on a query statement. This example continues the one given in the description of the odefin() function earlier in this chapter. In that example, the select-list items in the SQL statement
SELECT ENAME, EMPNO, SAL FROM EMP WHERE
SAL > :MIN_SAL
were associated with output buffers, and the addresses of column return lengths and return codes were bound. The example continues:
int rv;
oexec(&cursor); /* execute the statement */
/* fetch each row of the query */
for (;;) {
if (rv = ofetch(&cursor)) /* break on row-level error */
break;
if (ret_codes[0] == 0)
printf("%*.*s/t", retl[0], retl[0], employee_name);
else if (ret_codes[0] == 1405)
printf("%*.*s/t", retl[0], retl[0], "Null");
else
break;
if (ret_codes[1] == 0)
printf("%d/t", employee_number);
/* process remaining items */
...
printf("\n");
}
/* check rv for abnormal termination or just end-of-fetch */
if (rv != 1403)
errrpt(&cursor);
...
Each ofetch() call returns the next row from the set of rows that satisfies a query. After each ofetch() call, the rows processed count in the CDA is incremented.
You cannot refetch rows previously fetched except by re-executing the oexec() call and moving forward through the active set again. After the last row has been returned, the next fetch returns a "no data found" return code. When this happens, the rows processed count contains the total number of rows returned by the query.
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
cursor A pointer to the CDA associated with the SQL statement.
#### oflng
Purpose
oflng() fetches a portion of a LONG or LONG RAW column.
Syntax
oflng(Cda_Def *cursor, sword pos,
ub1 *buf, sb4 bufl, sword dtype,
ub4 *retl, sb4 offset);
LONG and LONG RAW columns can hold up to 2 gigabytes of data. The oflng() function allows you to fetch up to 64K bytes, starting at any offset, from the LONG or LONG RAW column of the current row. There can be only one LONG or LONG RAW column in a table; however, a query that includes a join operation can include in its select list several LONG-type items. The pos parameter specifies the LONG-type column that the oflng() call uses.
Note: Although the datatype of bufl is sb4, oflng() can only retrieve up to 64K at a time. If an attempt is made to retrieve more than 64K, the returned data will not be complete. The use of sb4 in the interface is for future enhancements.
Before calling oflng() to retrieve the portion of the LONG-type column, you must do one or more fetches to position the cursor at the desired row.
oflng() is useful in cases where unstructured LONG or LONG RAW column data cannot be manipulated as a single block; for example, a voicemail application that uses sampled speech, stored as one byte per sample, at perhaps 10000 samples per second. If the voice message is to be played out using a buffered digital-to-analog converter, and the buffer takes 64 Kbytes of samples at a time, you can use oflng() to extract the message in chunks of this size, sending them to the converter buffer. See the cdemo3.c program in Appendix A for an example that demonstrates this technique.
When calling oflng() to retrieve multiple segments from a LONG-type column, it is much more efficient to retrieve sequentially from low to high offsets, rather than from high to low, or randomly.
Note: With release 7.3, it may be possible to perform piecewise operations more efficiently using the new obindps(), odefinps(), ogetpi(), and osetpi() calls. See the section "Piecewise Insert, Update and Fetch" for more information.
The program fragment below shows how to retrieve 64 Kbytes, starting at offset 70000, from a LONG column. There are two columns in the table; the LONG data is in column two.
#define DB_SIZE 65536
#define FALSE 0
#define TRUE 1
ub1 *data_area;
sb4 offset;
sb2 da_indp, id_no;
ub4 ret_len;
Cda_Def cda;
...
data_area = (ub1 *) malloc(DB_SIZE);
...
oparse(&cda, "SELECT id_no, data FROM data_table
WHERE id_no = 100", -1, TRUE, 1);/* deferred parse */
/* define the first column - id_no, with no indicator parameter */
odefin(&cda, 1, &id_no, (int) sizeof (int), 3, -1, 0, 0, 0,
-1, 0, 0);
/* define the 2nd column - data, with indicator parameter */
odefin(&cda, 2, data_area, DB_SIZE, 1, -1, &da_indp,
0, 0, -1, 0, 0);
oexfet(&cda, 1, FALSE, FALSE); /* cursor is now at the row */
oflng(&cda, 2, data_area, DB_SIZE, 1, &ret_len, (sb4) 70000);
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| pos | sword | IN |
| buf | ub1 * | OUT |
| bufl | sb4 | IN |
| dtype | sword | IN |
| retl | ub4 * | OUT |
| offset | sb4 | IN |
cursor A pointer to the CDA specified in the associated oparse() call.
pos The index position of the LONG-type column in the row. The first column is position one. If the column at the index position is not a LONG type, a "column does not have LONG datatype" error is returned. If you do not know the position, you can use odescr() to index through the select-list. When a LONG datatype code (8 or 24) is returned in the dtype parameter, the value of the loop index variable (that started at 1) is the position of the LONG-type column.
buf A pointer to the buffer that receives the portion of the LONG-type column data.
bufl The length of buf in bytes.
dtype The datatype code corresponding to the type of buf. See the "External Datatypes" section for a list of the datatype codes.
retl The number of bytes returned. If more than 65535 bytes were requested and returned, the maximum value returned in this parameter is still 65535.
offset The zero-based offset of the first byte in the LONG-type column to be fetched.
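A hedged sketch of sequential chunked retrieval follows, assuming the cursor has already been positioned on the desired row as in the example above; the chunk size and the process() consumer are hypothetical:

#define CHUNK 32768
ub4 got;
sb4 off = 0;
do {
    if (oflng(&cda, 2, data_area, (sb4) CHUNK, 1, &got, off))
        break;                  /* error code is in cda.rc */
    process(data_area, got);    /* hypothetical consumer */
    off += (sb4) got;           /* low-to-high offsets are fastest */
} while (got == CHUNK);         /* a short chunk means end of data */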
#### ogetpi
Purpose
ogetpi() returns information about the next chunk of data to be processed as part of a piecewise insert, update or fetch.
Syntax
ogetpi(Cda_Def *cursor, sb1 *piecep, dvoid **ctxpp,
eword *iterp, eword *indexp);
ogetpi() is used (in conjunction with osetpi()) in an OCI application to determine whether more pieces exist to be either inserted, updated, or fetched as part of a piecewise operation.
Note: This function is only compatible with Oracle server release 7.3 or later. If a release 7.3 application attempts to use this function against a release 7.2 or earlier server, an error message is likely to be generated. At that point you must restart execution.
See the section "Piecewise Insert, Update and Fetch" in Chatper 2 for more information about piecewise operations and the ogetpi() call.
The following sample code demonstrates the use of ogetpi() in an OCI program which performs a piecewise insert. This code is provided for demonstration purposes only, and does not constitute a complete program.
For sample code demonstrating the use of ogetpi() for a piecewise fetch, see the description of the osetpi() routine later in this chapter.
This sample program performs a piecewise insert into a LONG RAW column in an Oracle table. The program is invoked with arguments specifying the name of the file to be inserted and the size of the piece to be used for the insert. It then inserts that file and its name into the database. Most of the data processing is done in the insert_file() routine.
/* The table FILES is used for this example:
*
* SQL> describe FILES
* Name Null? Type
* FILENAME NOT NULL VARCHAR2(255)
* FILECONTENT LONG RAW
*/
... /* OCI #include statements */
#define DEFER_PARSE 1 /* oparse flags */
#define NATIVE 1
#define VERSION_7 2
#define OCI_MORE_INSERT_PIECES -3129
#define OCI_EXIT_FAILURE 1 /* exit flags */
#define OCI_EXIT_SUCCESS 0
void insert_file();
/* Usage : piecewise_insert filename piecesize */
main(argc, argv)
int argc;
char *argv[];
{
Lda_Def lda; /* login area */
ub1 hda[256]; /* host area */
Cda_Def cda; /* cursor area */
... /* log on to Oracle */
insert_file(&lda, &cda, argv[1], atol(argv[2]));
... /*log off of Oracle */
}
/* Function insert_file(): This function loads long raw data */
/* into a memory buffer from a source file and then */
/* inserts it piecewise into the database */
/* */
/* Note: If necessary, the context pointer could be used to */
/* point to a file being used in a piecewise operation.*/
/* It is not necessary in this example program, so a */
/* dummy value is passed instead. */
void insert_file(lda, cda, filename, piecesize)
Lda_Def *lda;
Cda_Def *cda;
text *filename;
ub4 piecesize;
{
text *longbuf; /* buffer to hold long column on insert */
ub4 len_longbuf; /* length of longbuf */
ub2 col_rcode; /* Column return code */
text errmsg[2000];
int fd;
char *context = "context pointer";
ub1 piece;
ub4 iteration;
ub4 plsqltable;
ub1 cont = (ub1)1;
text *sqlstmt = (text *)
"INSERT INTO FILES (filename, filecontent) /
VALUES (:filename, :filecontent)";
if (oopen(cda, lda, (text *)0, -1, -1, (text *)0, -1))
exit(OCI_EXIT_FAILURE);
printf("/nOpening source file %s/n", filename);
if (!(fd = open((char *)filename, O_RDONLY)))
exit(1);
/* Allocate memory for storage of one piece */
len_longbuf = piecesize;
longbuf = (text *)malloc(len_longbuf);
if (longbuf == (text *)NULL)
exit(1);
if (oparse(cda, sqlstmt, (sb4)-1, 0, (ub4)VERSION_7))
exit(OCI_EXIT_FAILURE);
if (obndrv(cda, (text *)":filename", -1, filename, -1,
SQLT_STR, -1, (sb2 *)0, (ub1 *)0, -1, -1))
exit(OCI_EXIT_FAILURE);
if (obindps(cda, 0, (text *)":filecontent",
strlen(":filecontent"), (ub1 *)context, len_longbuf,
SQLT_LBI, (sword)0, (sb2 *)0,
(ub2 *)0, &col_rcode, 0, 0, 0, 0,
0, (ub4 *)0, (text *)0, 0, 0))
exit(OCI_EXIT_FAILURE);
while (cont)
{
oexec(cda);
switch (cda->rc)
{
case 0: /* operation is finished */
cont = 0;
break;
case OCI_MORE_INSERT_PIECES: /* ORA-03129 was returned */
if ((len_longbuf = read(fd, longbuf, len_longbuf)) == -1)
exit(OCI_EXIT_FAILURE);
ogetpi(cda, &piece, (dvoid **)&context, &iteration,
&plsqltable);
if (len_longbuf < piecesize) /* last piece? */
piece = OCI_LAST_PIECE;
osetpi(cda, piece, longbuf, &len_longbuf);
break;
default:
err_report(lda, cda);
exit(OCI_EXIT_FAILURE);
}
}
ocom(lda); /* Commit the insert */
if (close(fd)) /* close file */
exit(OCI_EXIT_FAILURE);
if (oclose(cda)) /* close cursor */
exit(OCI_EXIT_FAILURE);
}
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| cursor | Cda_Def * | IN/OUT |
| piecep | sb1 * | OUT |
| ctxpp | dvoid ** | OUT |
| iterp | eword * | OUT |
| indexp | eword * | OUT |
cursor A pointer to the CDA associated with the SQL or PL/SQL statement being processed.
piecep Specifies whether the next piece to be fetched or inserted is the first piece, an intermediate piece or the last piece. Possible values are OCI_FIRST_PIECE (one) or OCI_NEXT_PIECE (two).
ctxpp A pointer to the user-defined context pointer, which is optionally passed as part of an obindps() or odefinps() call. This pointer is returned to the application during the ogetpi() call. If ctxpp is passed as NULL, the parameter is ignored. The application may already know which buffer it needs to pass in osetpi() at run time.
iterp Pointer to the current iteration. During an array insert it will tell you which row you are working with. Starts from 0.
indexp Pointer to the current index of an array mapped to a PL/SQL table, if an array is bound for an insert. The value of indexp varies between zero and the value set in the cursiz parameter of the obindps() call.
#### olog
Purpose
olog() establishes a connection between an OCI program and an Oracle database.
Syntax
olog(Lda_Def *lda, ub1 *hda, text *uid, sword uidl,
text *pswd, sword pswdl, text *conn, sword connl,
ub4 mode)
An OCI program can connect to one or more Oracle instances multiple times. Communication takes place using the LDA and the HDA defined within the program. It is the olog() function that connects the LDA to Oracle.
The HDA is a program-allocated data area associated with each olog() logon call. Its contents are entirely private to Oracle, but the HDA must be allocated by the OCI program. Each concurrent connection requires one LDA-HDA pair.
Note: The HDA must be initialized to all zeros (binary zeros, not the "0" character) before the call to olog(), or runtime errors will occur. In C, this means that the HDA must be declared as global or static, rather than as a local or automatic character array. See the sample code below for typical declarations and initializations for a call to olog().
The HDA has a size of 256 bytes on 32-bit systems, and 512 bytes on 64-bit systems. If memory permits, it is possible to allocate a 512-byte HDA on a 32-bit system to increase portability of applications.
After the olog() call, the HDA and the LDA must remain at the same program address they occupied at the time olog() was called.
For example:
#include <stdlib.h>
/* global variable declarations */
Lda_Def lda[2]; /* establish two LDAs */
ub1 hda[2][512]; /* and two HDA's */
text *uid1 = "SCOTT/TIGER";
text *uid2 = "SYSTEM";
text *pwd = "MANAGER";
...
/* first connect as scott */
if (olog(&lda[0], &hda[0], uid1, -1, (text *) 0, -1,
(text *) 0, -1, OCI_LM_DEF))
{
error_handler(&lda[0]);
exit(EXIT_FAILURE);
}
...
/* and later as the system manager */
if (olog(&lda[1], &hda[1], uid2, -1, pwd, -1,
(text *) 0, -1, OCI_LM_DEF))
{
error_handler(&lda[1]);
exit(EXIT_FAILURE);
}
When an OCI program has issued an olog() call, a subsequent ologof() call using the same LDA commits all outstanding transactions for that connection. If a program fails to disconnect or terminates abnormally, then all outstanding transactions are rolled back.
The LDA return code field indicates the result of the olog() call. A zero return code indicates a successful connection.
The mode parameter specifies whether the connection is in blocking or non-blocking mode. For more information on connection modes, see "Non-Blocking Mode" . For a short example program, see the onbset() description .
You should also refer to the section on SQL*Net in your Oracle system-specific documentation for any particular notes or restrictions that apply to your operating system.
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
| hda | ub1 * | OUT |
| uid | text * | IN |
| uidl | sword | IN |
| pswd | text * | IN |
| pswdl | sword | IN |
| conn | text * | IN |
| connl | sword | IN |
| mode | ub4 | IN |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
hda A pointer to a host data area struct. See Chapter 2 for more information on host data areas.
uid Specifies a string containing the username, an optional password, and an optional host machine identifier. If you include the password as part of the uid parameter, put it immediately after the username and separate it from the username with a '/'. Put the host machine identifier after the username or the password, preceded by the '@' sign.
If the password is not included in this parameter, it must be in the pswd parameter. Examples of valid uid parameters are
name
name/password
name@service_name
name/password@service_name
The following is not a valid form for uid:
name@service_name/password
uidl The length of the string pointed to by uid. If the string pointed to by uid is null terminated, this parameter should be passed as -1.
pswd A pointer to a string containing the password. If the password is specified as part of the string pointed to by uid, this parameter should be passed as (text *) 0.
pswdl The length of the string pointed to by pswd. If the string pointed to by pswd is null terminated, this parameter should be passed as -1.
conn Specifies a string containing a SQL*Net V2 connect descriptor to connect to a database. If the connect string is passed in as part of the uid, this parameter should be passed as (text *) 0.
connl The length of the string pointed to by conn. If the string pointed to by conn is null terminated, this parameter can be passed as -1.
mode Specifies whether the connection is in blocking or non-blocking mode. Possible values are OCI_LM_DEF (for blocking) or OCI_LM_NBL (for non-blocking).
#### ologof
Purpose
ologof() disconnects an LDA from the Oracle program global area and frees all Oracle resources owned by the Oracle user process.
Syntax
ologof(Lda_Def *lda);
A COMMIT is automatically issued on a successful ologof() call; all currently open cursors are closed. If a program logs off unsuccessfully or terminates abnormally, all outstanding transactions are rolled back.
If the program has multiple active connections, a separate ologof() must be performed for each active LDA.
If ologof() fails, the reason is indicated in the return code field of the LDA.
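For example (a minimal sketch reusing the two-connection declarations shown under olog() above), a program with multiple connections might log off each one in turn:

sword i;
for (i = 0; i < 2; i++) {
    if (ologof(&lda[i]))
        printf("logoff of connection %d failed, error %d\n",
               i, lda[i].rc);
}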
Parameter
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
#### onbclr
Purpose
onbclr() places a database connection in blocking mode.
Syntax
onbclr(Lda_Def *lda);
If there is a pending call on a non-blocking connection when onbclr() is called, the pending call, when resumed, will block.
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
#### onbset
Purpose
onbset() places a database connection in non-blocking mode for all subsequent OCI calls on this connection.
Syntax
onbset(Lda_Def *lda);
onbset() will succeed only if the library is linked in deferred mode and if the network driver supports non-blocking operations.
Note: onbset() requires SQL*Net Release 2.1 or higher. It cannot be used with a single-task driver.
The following example code demonstrates the use of the non-blocking calls. Before running the OCI program, you must execute this SQL script.
set echo on;
connect system/manager;
create user ocitest identified by ocitest;
grant connect,resource to ocitest;
connect ocitest/ocitest;
create table oci21tab (col1 varchar2(30));
insert into oci21tab values ('A');
insert into oci21tab values ('AB');
insert into oci21tab values ('ABC');
insert into oci21tab values ('ABCD');
insert into oci21tab values ('ABCDE');
insert into oci21tab values ('ABCDEF');
insert into oci21tab values ('ABCDEFG');
insert into oci21tab values ('ABCDEFGH');
insert into oci21tab values ('ABCDEFGHI');
insert into oci21tab values ('ABCDEFGHIJ');
insert into oci21tab values ('ABCDEFGHIJK');
insert into oci21tab values ('ABCDEFGHIJKL');
commit;
This program performs a long-running, hardcoded insert and demonstrates the use of non-blocking calls. This example is included online as oci21.c and oci21.sql. See your Oracle system-specific documentation for the location of these files.
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <string.h>
#include <oratypes.h>
/* LDA and CDA struct declarations */
#include <ocidfn.h>
#ifdef __STDC__
#include <ociapr.h>
#else
#include <ocikpr.h>
#endif
/* demo constants and structs */
#include <ocidem.h>
/* oparse flags */
#define DEFER_PARSE 1
#define NATIVE 1
#define VERSION_7 2
/* exit flags */
#define OCI_EXIT_FAILURE 1
#define OCI_EXIT_SUCCESS 0
#define BLOCKED -3123 /* ORA-03123 */
#define SUCCESS 0
Lda_Def lda; /* login area */
ub1 hda[HDA_SIZE]; /* host area */
Cda_Def cda; /* cursor area */
/* Function prototypes */
void log_on ();
void log_off ();
void setup();
void err_report();
void insert_data();
void do_exit();
/* SQL statement used in this program */
text *sqlstmt = (text *)"INSERT INTO oci21tab (col1)/
SELECT a.col1 /
FROM oci21tab a, oci21tab b";
main(argc, argv)
eword argc;
text **argv;
{
log_on(); /* logon to Oracle database */
setup(); /* prepare sql statement */
insert_data();
log_off(); /* logoff Oracle database */
do_exit(OCI_EXIT_SUCCESS);
}
/* Function: setup
* Description: This routine does the necessary setup
* to execute the SQL statement. Specifically, it does
* the open, parse, bind and define phases as needed. */
void setup()
{
if (onbset(&lda)) /* make the connection non-blocking */
{
err_report((Cda_Def *)&lda);
do_exit(OCI_EXIT_FAILURE);
}
if (onbtst(&lda)) /* verify that it is non-blocking */
{
printf("connection is still blocking!!!/n");
do_exit(OCI_EXIT_FAILURE);
}
if (oopen(&cda, &lda, (text *) 0, -1, -1,
(text *) 0, -1)) /* open */
{
err_report(&cda);
do_exit(OCI_EXIT_FAILURE);
}
if (oparse(&cda, sqlstmt, (sb4) -1, DEFER_PARSE,
(ub4) VERSION_7))
{
err_report(&cda);
do_exit(OCI_EXIT_FAILURE);
}
}
/* Function: insert_data
* Description: This routine inserts the data into the table */
void insert_data()
{
ub1 done = 0;
/* number of times statement blocked */
static ub1 blocked_cnt = 0;
while (!done)
{
switch(oexec(&cda))
{
case BLOCKED: /* will come through here multiple
* times, but print msg once */
blocked_cnt++;
break;
case SUCCESS:
done = 1;
break;
default:
err_report(&cda);
/* get out of application */
do_exit(OCI_EXIT_FAILURE);
}
}
printf("/n Execute call blocked %ld times/n", blocked_cnt);
if (onbclr(&lda)) /* clear the non-blocking status of the
* connection */
{
err_report((Cda_Def *)&lda);
do_exit(OCI_EXIT_FAILURE);
}
}
/* Function: err_report
* Description: This routine prints out the most recent
* OCI error */
void err_report(cursor)
Cda_Def *cursor;
{
sword n;
text msg[512]; /* message buffer to hold error text */
if (cursor->fc > 0)
printf("/n-- ORACLE error when processing /
OCI function %s /n/n",
oci_func_tab[cursor->fc]);
else
printf("/n-- ORACLE error/n");
n = (sword)oerhms(&lda, cursor->rc, msg, (sword) sizeof msg);
printf("%s/n", msg);
}
/* Function: do_exit
* Description: This routine exits with a status */
void do_exit(status)
eword status;
{
if (status == OCI_EXIT_FAILURE)
printf("/n Exiting with FAILURE status %d/n", status);
else
printf("/n Exiting with SUCCESS status %d/n", status);
exit(status);
}
/* Function: log_on
* Description: This routine logs onto the database as
* OCITEST/OCITEST. */
void log_on()
{
if (olog(&lda, hda, (text *)"OCITEST", -1, (text *)"OCITEST",
-1, (text*)"inst1_nonblock" , -1, OCI_LM_DEF))
{
err_report((Cda_Def *)&lda);
exit(OCI_EXIT_FAILURE);
}
printf("/n Connected to Oracle as ocitest/n");
}
/* Function: log_off
* Description: This routine closes out any cursors and logs
* off the database */
void log_off()
{
if (oclose(&cda)) /* close cursor */
{
printf("Error closing cursor 1./n");
do_exit(OCI_EXIT_FAILURE);
}
if (ologof(&lda)) /* log off the database */
{
printf("Error on disconnect./n");
do_exit(OCI_EXIT_FAILURE);
}
}
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
#### onbtst
Purpose
onbtst() tests whether a database connection is in non-blocking mode.
Syntax
onbtst(Lda_Def *lda);
If the connection is in the non-blocking mode, onbtst() returns 0. Otherwise, it returns ORA-03128 in the return code field.
Note: If the connection is in blocking mode, the user may call onbset() to place the channel in non-blocking mode, if allowed by the network driver.
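A small sketch of this test-then-set pattern follows; err_report() and do_exit() are the routines from the onbset() example above:

/* make sure the connection is non-blocking, switching only
   if necessary */
if (onbtst(&lda)) {          /* non-zero: still blocking */
    if (onbset(&lda)) {
        err_report((Cda_Def *) &lda);
        do_exit(OCI_EXIT_FAILURE);
    }
}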
Parameters
| Parameter Name | Type | Mode |
| --- | --- | --- |
| lda | Lda_Def * | IN/OUT |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
#### oopen
Purpose
oopen() opens the specified cursor.
Syntax
oopen(Cda_Def *cursor, Lda_Def *lda,
<text *dbn>, <sword dbnl>, <sword arsize>,
<text *uid>, <sword uidl>);
oopen() associates a cursor data area in the program with a data area in the Oracle Server. Oracle uses these data areas to maintain state information about the processing of a SQL statement. Status concerning error and warning conditions, and other information, such as function codes, is returned to the CDA in your program, as Oracle processes the SQL statement.
A program can have many cursors active at the same time.
The oparse() function parses a SQL statement and associates it with a cursor. In the OCI functions, SQL statements are always referenced using a cursor as the handle.
It is possible to issue an oopen() call on a cursor that is already open. This has no effect on the cursor, but it does affect the value in the Oracle OPEN_CURSORS counter. Repeatedly reopening an open cursor may result in an ORA-01000 error ('maximum open cursors exceeded'). Refer to the Oracle7 Server Messages manual for information about what to do if this happens.
The return code field of the CDA indicates the result of the oopen(). A return code value of zero indicates a successful oopen() call.
See the description of the obndra() function earlier in this chapter for an example program demonstrating the use of oopen().
Parameters
| Parameter Name | Type | Mode |
|---|---|---|
| cursor | Cda_Def * | OUT |
| lda | Lda_Def * | IN/OUT |
| dbn | text * | IN |
| dbnl | sword | IN |
| arsize | sword | IN |
| uid | text * | IN |
| uidl | sword | IN |
cursor A pointer to a cursor data area associated with the program.
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
dbn This parameter is included only for Oracle Version 2 compatibility. It should be passed as 0 in later versions.
dbnl This parameter is included only for Oracle Version 2 compatibility. It should be passed as -1 in later versions.
arsize Oracle7 does not use the areasize parameter. The data areas in the Oracle Server used by cursors are automatically resized as required.
uid A pointer to a character string containing the userid and the password. The password must be separated from the userid by a '/'.
uidl The length of the string pointed to by uid. If uid points to a null-terminated string, this parameter can be omitted.
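A minimal sketch of the usual open/close pairing (reusing lda, cda and err_report() from the earlier example program; the Version 2 compatibility parameters are passed as 0 and -1):
/* Sketch only: open a cursor, use it, then close it. */
if (oopen(&cda, &lda, (text *) 0, -1, -1, (text *) 0, -1))
  err_report(&cda);
/* ... parse and execute SQL statements on cda here ... */
if (oclose(&cda))
  err_report(&cda);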
#### oopt
Purpose
oopt() sets rollback options for non-fatal Oracle errors involving multi-row INSERT and UPDATE SQL statements. It also sets wait options in cases where requested resources are not available; for example, whether to wait for locks.
Syntax
oopt(Cda_Def *cursor, sword rbopt, sword waitopt);
The rbopt parameter is not supported in Oracle Server Version 6 or later.
Parameters
| Parameter Name | Type | Mode |
|---|---|---|
| cursor | Cda_Def * | IN/OUT |
| rbopt | sword | IN |
| waitopt | sword | IN |
cursor A pointer to the CDA used in the associated oopen() call.
rbopt The action to be taken when a non-fatal Oracle error occurs. If this option is set to zero, all errors, even non-fatal errors, cause the current transaction to be rolled back. If this option is set to 2, only the failing row will be rolled back during a non-fatal row-level error. This is the default setting.
waitopt Specifies whether to wait for resources or return with an error if they are currently not available. If this option is set to zero, the program waits indefinitely if resources are not available. This is the default action. If this option is set to 4, the program will receive an error return code whenever a resource is requested but is unavailable. Use of waitopt set to 4 can cause many error return codes while waiting for internal resources that are locked for short durations. The only resource errors received are for resources requested by the calling process.
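For example, a minimal sketch (reusing cda and err_report() from the earlier example program) that keeps the default rollback behavior but asks for an immediate error when a requested resource is locked:
/* Sketch only: rbopt = 2 is the default row-level rollback;
 * waitopt = 4 returns an error instead of waiting on locks. */
if (oopt(&cda, 2, 4))
  err_report(&cda);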
#### oparse
Purpose
oparse() parses a SQL statement or a PL/SQL block and associates it with a cursor. The parse can optionally be deferred.
Syntax
oparse(Cda_Def *cursor, text *sqlstm,
[sb4 sqll], sword defflg,
ub4 lngflg);
oparse() passes the SQL statement to Oracle for parsing. If the defflg parameter is non-zero, the parse is deferred until the statement is executed, or until odescr() is called to describe the statement. Once the parse is performed, the parsed representation of the SQL statement is stored in the Oracle shared SQL cache. Subsequent OCI calls reference the SQL statement using the cursor name.
An open cursor can be reused by subsequent oparse() calls within a program, or the program can define multiple concurrent cursors when it is necessary to maintain multiple active SQL statements.
Note: When oparse() is called with the defflg parameter set to a non-zero value, you cannot receive most error indications until the parse is actually performed. The parse is performed at the first call to odescr(), oexec(), oexn(), or oexfet(). However, the SQL statement string is scanned on the client system, and some errors, such as "missing double quote in identifier", can be returned immediately.
The statement can be any valid SQL statement or PL/SQL anonymous block. Oracle parses the statement and selects an optimal access path to perform the requested function.
Data Definition Language statements are executed on the parse if you have linked in non-deferred mode or if you have linked with the deferred option and the defflg parameter of oparse() is zero. If you have linked in deferred mode and the defflg parameter is non-zero, you must call oexn() or oexec() to execute the statement.
Oracle recommends that you use the deferred parse capability whenever possible. This results in increased performance, especially in a networked environment. Note, however, that errors in the SQL statement that would be detected when oparse() is called in non-deferred mode are not detected until the first non-deferred call is made (usually an execute or describe call).
The example below opens a cursor and parses a SQL statement. The oparse() call associates the SQL statement with the cursor.
Lda_Def lda;
Cda_Def cursor;
text *sql_stmt =
"DELETE FROM emp WHERE empno = :Employee_number";
...
oopen(&cursor, &lda, (text *)0, -1, -1, (text *)0, -1);
oparse(&cursor, sql_stmt, -1, 1, 2);
SQL syntax error codes are returned in the CDA's return code field. If the statement cannot be parsed, the parse error offset field indicates the location of the error in the SQL statement text. See the section "Cursor Data Area" for a list of the information fields available in the CDA after an oparse() call.
Parameters
| Parameter Name | Type | Mode |
|---|---|---|
| cursor | Cda_Def * | IN/OUT |
| sqlstm | text * | IN |
| sqll | sb4 | IN |
| defflg | sword | IN |
| lngflg | ub4 | IN |
cursor A pointer to the CDA specified in the oopen() call.
sqlstm A pointer to a string containing the SQL statement.
sqll Specifies the length of the SQL statement. If the SQL statement string pointed to by sqlstm is null terminated, this parameter can be omitted.
defflg If non-zero and the application was linked in deferred mode, the parse of the SQL statement is deferred until an odescr(), oexec(), oexn(), or oexfet() call is made.
Note: Bind and define operations are also deferred until the execute or describe step if the program was linked using the deferred mode link option.
lngflg The lngflg parameter determines how Oracle handles the SQL statement or PL/SQL anonymous block. To ensure strict ANSI conformance, Oracle7 defines several datatypes and operations in a slightly different way than Oracle Version 6. The table below shows the differences between Version 6 and Oracle7 that are affected by the lngflg parameter.
| Behavior | V6 | V7 |
|---|---|---|
| CHAR columns are fixed length (including those created by a CREATE TABLE statement). | NO | YES |
| An error is issued if an attempt is made to fetch a null into an output variable that has no associated indicator variable. | NO | YES |
| An error is issued if a fetched value is truncated and there is no associated indicator variable. | YES | NO |
| Describe (odescr()) returns internal datatype 1 for fixed-length strings. | YES | NO |
| Describe (odescr()) returns datatype 96 for fixed-length strings. | n/a | YES |
The lngflg parameter has three possible settings:
- 0: Specifies Version 6 behavior (the database you are connected to can be any Version 6 or later database).
- 1: Specifies the normal behavior for the database version to which the program is connected (either Version 6 or Oracle7).
- 2: Specifies Oracle7 behavior. If you use this value for the parameter and you are not connected to an Oracle7 database, Oracle issues the error ORA-01011: Cannot use this language type when talking to V6 database.
See also: odescr(), oexec(), oexfet(), oexn(), oopen().
#### opinit
Purpose
opinit() initializes the OCI process environment. This includes specifying whether the application is single- or multi-threaded.
Syntax
opinit (ub4 mode);
The mode parameter of the opinit() call indicates whether the application making the call is running a single- or multi-threaded environment. See the section "Thread Safety" for more information about using thread-safe calls in an OCI program.
If mode is set to OCI_EV_DEF or a call to opinit() is skipped altogether, for backward compatibility a single-threaded environment is assumed. Using thread safety adds a very small amount of overhead to the program, and this can be avoided by running in single-threaded mode.
Note: Even when running a single-threaded application it is advisable to make the call to opinit(), with mode set to OCI_EV_DEF, rather than skipping it. In addition to setting the environment, the call to opinit() provides documentation that the application is not thread-safe.
If mode is set to OCI_EV_TSF, then the OCI application can make OCI calls from multiple threads of execution within a single program.
The following examples demonstrate how the same task might be accomplished in a multi-connection environment and a single-connection, multi-threaded environment. The task is to process a series of bank account transactions.
This code is provided for demonstration purposes only, and does not constitute a complete program.
The first example demonstrates how the transactions could be processed in a multi-threaded environment with a single connection. It is assumed that a user's program could call its own functions for the OS-specific thread package. Function calls could include calls to thread-safe packages from DCE or Posix for thread and semaphore management.
This program creates one session and multiple threads. Each thread executes zero or more transactions. The transactions are specified in a transient structure called "records." The transactions consist of moving a specified amount of money from one account to another. The example assumes that accounts 10001 through 10007 are set up in the database.
/* The table ACCOUNTS is used for this example:
*
* SQL> describe ACCOUNTS
* Name Null? Type
* ACCOUNT NUMBER(36)
* BALANCE NUMBER(36,2)
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <ocidfn.h>
#include <ocikpr.h>
void do_transaction();
void get_transaction();
#define THREADS 3
#define DEFERRED 1
#define ORACLE7_BEHAVIOR 2
struct parameters
{ Lda_Def * lda;
  ub4 thread_id;   /* thread number, used in progress messages */
};
typedef struct parameters parameters;
struct record_log
{ char action;
unsigned int from_account;
unsigned int to_account;
float amount;
};
typedef struct record_log record_log;
record_log records[]= { { 'M', 10001, 10002, 12.50 },
{ 'M', 10001, 10003, 25.00 },
{ 'M', 10001, 10003, 123.00 },
{ 'M', 10001, 10003, 125.00 },
{ 'M', 10002, 10006, 12.23 },
{ 'M', 10007, 10008, 225.23 },
{ 'M', 10002, 10008, 0.70 },
{ 'M', 10001, 10003, 11.30 },
{ 'M', 10003, 10002, 47.50 },
{ 'M', 10002, 10006, 125.00 },
{ 'M', 10007, 10008, 225.00 }};
static unsigned int trx_nr=0;
pthread_mutex_t mutex;
ub1 hda[256];
main()
{
  Lda_Def lda;
  pthread_t threads[THREADS];
  parameters params[THREADS];
  int i;
  opinit(OCI_EV_TSF);
  /* log on to Oracle w/a single session */
  if (olog(&lda, hda, (text *) "SCOTT/TIGER", -1, (text *) 0, -1,
      (text *) 0, -1, OCI_LM_DEF))
    exit(OCI_EXIT_FAILURE);
  /* Create mutex for transaction retrieval */
  if (pthread_mutex_init(&mutex, NULL))
  {
    printf("Can't initialize mutex\n");
    exit(OCI_EXIT_FAILURE);
  }
  /* Spawn threads */
  for (i = 0; i < THREADS; i++)
  {
    params[i].lda = &lda;
    params[i].thread_id = i;
    printf("Thread %d... ", i);
    if (pthread_create(&threads[i], NULL,
        (void *(*)(void *)) do_transaction, (void *) &params[i]))
      printf("Can't create thread %d\n", i);
    else
      printf("Created\n");
  }
  /* Logoff session.... */
  for (i = 0; i < THREADS; i++)
  { /* wait for thread to end */
    if (pthread_join(threads[i], NULL))
      printf("Error when waiting for thread %d to terminate\n", i);
    else
      printf("stopped\n");
  }
  printf("Stop Session....");
  ologof(&lda);
  /* Destroy mutex for transaction retrieval */
  if (pthread_mutex_destroy(&mutex))
  {
    printf("Can't destroy mutex\n");
    exit(1);
  }
}
/* Function do_transaction(): This functions executes one */
/* transaction out of the record array. The record */
/* array is 'managed' by get_transaction(). */
void do_transaction(params)
parameters *params;
{
Lda_Def * lda=params->lda;
Cda_Def cda;
record_log *trx;
text * pls_trans = (text *)
"BEGIN \
UPDATE ACCOUNTS \
SET BALANCE=BALANCE+:amount \
WHERE ACCOUNT=:to_account; \
UPDATE ACCOUNTS \
SET BALANCE=BALANCE-:amount \
WHERE ACCOUNT=:from_account; \
COMMIT; \
END;";
/* NOTE use of mutex for OCI calls */
if (pthread_mutex_lock(&mutex))
  printf("Can't lock mutex\n");
if (oopen(&cda, lda, 0, -1, 0, (text *)0, -1))
  err_report(&cda);
if (oparse(&cda, pls_trans, -1, DEFERRED, ORACLE7_BEHAVIOR))
  err_report(&cda);
if (pthread_mutex_unlock(&mutex))
  printf("Can't unlock mutex\n");
/* Done all transactions ? */
while (trx_nr < (sizeof(records)/sizeof(record_log)))
{
get_transaction(&trx);
printf("Thread %d executing transaction/n",params->thread_id);
switch(trx->action)
{
case 'M':
/* NOTE use of mutex for OCI calls */
if (pthread_mutex_lock(&mutex))
  printf("Can't lock mutex\n");
obndrv(&cda, ":amount", -1, (ub1 *)
&trx->amount, sizeof(float), SQLT_FLT, -1,
(sb2 *) 0, (text *) 0, -1, -1);
obndrv(&cda, ":to_account", -1, (ub1 *)
&trx->to_account, sizeof(int), SQLT_INT, -1,
(sb2 *) 0, (text *) 0, -1, -1);
obndrv(&cda,":from_account", -1, (ub1 *)
&trx->from_account, sizeof(int), SQLT_INT, -1, (sb2 *) 0, (text *) 0, -1, -1);
oexec(&cda);
if (pthread_mutex_unlock(&mutex))
printf("Can't unlock mutex/n");
break;
default: break;
}
/* Give other threads a chance.. */
}
/* NOTE use of mutex for OCI calls */
if (pthread_mutex_lock(&mutex))
  printf("Can't lock mutex\n");
if (oclose(&cda))
  err_report(&cda);
if (pthread_mutex_unlock(&mutex))
  printf("Can't unlock mutex\n");
}
/* Function get_transaction: This routine returns the next */
/* transaction to process */
void get_transaction(trx)
record_log ** trx;
{
printf("Can't lock mutex/n");
*trx=&records[trx_nr];
trx_nr++;
if (pthread_mutex_unlock(&mutex))
printf("Can't unlock mutex/n");
}
The second example demonstrates how the transactions could be processed in an environment with multiple connections.
This program creates as many sessions as there are threads. Each thread executes zero or more transactions. The transactions are specified in a transient structure called "records." The transactions consist of moving a specified amount of money from one account to another. The example assumes that accounts 10001 through 10007 are set up in the database.
/* The table ACCOUNTS is used for this example:
*
* SQL> describe ACCOUNTS
* Name Null? Type
* ACCOUNT NUMBER(36)
* BALANCE NUMBER(36,2)
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <ocidfn.h>
#include <ocikpr.h>
void do_transaction();
void get_transaction();
#define CONNINFO "scott/tiger"
#define THREADS 3
#define DEFERRED 1
#define ORACLE7_BEHAVIOR 2
struct parameters
{ Lda_Def * lda;
  ub4 thread_id;   /* thread number, used in progress messages */
};
typedef struct parameters parameters;
struct record_log
{ char action;
unsigned int from_account;
unsigned int to_account;
float amount;
};
typedef struct record_log record_log;
record_log records[]= { { 'M', 10001, 10002, 12.50 },
{ 'M', 10001, 10003, 25.00 },
{ 'M', 10001, 10003, 123.00 },
{ 'M', 10001, 10003, 125.00 },
{ 'M', 10002, 10006, 12.23 },
{ 'M', 10007, 10008, 225.23 },
{ 'M', 10002, 10008, 0.70 },
{ 'M', 10001, 10003, 11.30 },
{ 'M', 10003, 10002, 47.50 },
{ 'M', 10002, 10006, 125.00 },
{ 'M', 10007, 10008, 225.00 }};
static unsigned int trx_nr=0;
pthread_mutex_t mutex;
Lda_Def lda[THREADS];
ub1 hda[THREADS][256];
main()
{
  pthread_t threads[THREADS];
  parameters params[THREADS];
  int i;
  opinit(OCI_EV_TSF);
  /* log on to Oracle w/multiple connections */
  for (i = 0; i < THREADS; i++)
  {
    if (olog(&lda[i], hda[i], (text *) "SCOTT/TIGER", -1, (text *) 0,
        -1, (text *) 0, -1, OCI_LM_DEF))
      exit(OCI_EXIT_FAILURE);
  }
  /* create mutex for transaction retrieval */
  if (pthread_mutex_init(&mutex, NULL))
  {
    printf("Can't initialize mutex\n");
    exit(1);
  }
  /* spawn threads */
  for (i = 0; i < THREADS; i++)
  {
    params[i].lda = &lda[i];
    params[i].thread_id = i;
    printf("Thread %d... ", i);
    if (pthread_create(&threads[i], NULL,
        (void *(*)(void *)) do_transaction, (void *) &params[i]))
      printf("Can't create thread %d\n", i);
    else
      printf("Created\n");
  }
... /* logoff sessions....*/
... /*destroy mutex for transaction retrieval */
}
/* Function do_transaction(): executes one transaction out of */
/* the records array. The records array is */
/* 'managed' by the get_transaction function. */
void do_transaction(params)
parameters *params;
{
Lda_Def * lda=params->lda;
Cda_Def cda;
record_log *trx;
text * pls_trans = (text *)
"BEGIN \
UPDATE ACCOUNTS \
SET BALANCE=BALANCE+:amount \
WHERE ACCOUNT=:to_account; \
UPDATE ACCOUNTS \
SET BALANCE=BALANCE-:amount \
WHERE ACCOUNT=:from_account; \
COMMIT; \
END;";
/* NOTE lack of mutex for OCI calls */
oopen(&cda, lda, 0, -1, 0, (text *)0, -1);
oparse(&cda, pls_trans, -1, DEFERRED, ORACLE7_BEHAVIOR);
/* Done all transactions ? */
while (trx_nr < (sizeof(records)/sizeof(record_log)))
{
get_transaction(&trx);
printf("Thread %d executing transaction/n",params->thread_id);
switch(trx->action)
{
case 'M':
/* NOTE lack of mutex for OCI calls */
obndrv(&cda, ":amount", -1, (ub1 *) &trx->amount,
sizeof(float), SQLT_FLT, -1, (sb2 *) 0,
(text *) 0, -1, -1)
obndrv(&cda, ":to_account", -1, (ub1 *) &trx->to_account, sizeof(int), SQLT_INT, -1, (sb2 *) 0,
(text *) 0, -1, -1)
obndrv(&cda,":from_account", -1,
(ub1 *) &trx->from_account, sizeof(int),
SQLT_INT, -1, (sb2 *) 0, (text *) 0, -1, -1)
oexec(&cda)
break;
default: break;
}
}
/* NOTE lack of mutex for OCI calls */
if (oclose(&cda))
err_report(&cda);
}
/* Function get_transaction(): gets next transaction to process */
void get_transaction(trx)
record_log ** trx;
{
printf("Can't lock mutex/n");
*trx=&records[trx_nr];
trx_nr++;
if (pthread_mutex_unlock(&mutex))
printf("Can't unlock mutex/n");
}
Parameter
| Parameter Name | Type | Mode |
|---|---|---|
| mode | ub4 | IN |
mode There are two values for the mode parameter: OCI_EV_DEF (zero), for single-threaded environments, and OCI_EV_TSF (one), for multi-threaded environments.
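A minimal sketch of the recommended calling sequence (lda and hda as in the examples above; the logon string is a placeholder):
/* Sketch only: declare the threading model before the first logon.
 * Use OCI_EV_TSF instead if the program runs multiple threads. */
opinit(OCI_EV_DEF);
if (olog(&lda, hda, (text *) "SCOTT/TIGER", -1, (text *) 0, -1,
    (text *) 0, -1, OCI_LM_DEF))
  exit(OCI_EXIT_FAILURE);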
#### orol
Purpose
orol() rolls back the current transaction.
Syntax
orol(Lda_Def *lda);
The current transaction is defined as the set of SQL statements executed since the olog() call or the last ocom() or orol() call. If orol() fails, the reason is indicated in the return code field of the LDA.
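A minimal sketch of a typical use, rolling back when an execute fails (cda, lda and err_report() as in the earlier example programs):
/* Sketch only: undo the current transaction if the execute fails,
 * rather than committing it with ocom(). */
if (oexec(&cda))
{
  err_report(&cda);
  if (orol(&lda))
    err_report((Cda_Def *)&lda);
}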
Parameter
| Parameter Name | Type | Mode |
|---|---|---|
| lda | Lda_Def * | IN/OUT |
lda A pointer to the LDA specified in the olog() call that was used to make this connection to Oracle.
See also: ocom(), olog().
#### osetpi
Purpose
osetpi() sets information about the next chunk of data to be processed as part of a piecewise insert, update or fetch.
Syntax
osetpi (Cda_Def *cursor, sb1 piece,
dvoid *bufp, ub4 *lenp);
An OCI application uses osetpi() to set the information about the next piecewise insert, update, or fetch. The bufp parameter is a pointer to either the buffer containing the next piece to be inserted, or to the buffer where the next fetched piece will be stored.
Note: This function is only compatible with Oracle server release 7.3 or later. If a release 7.3 application attempts to use this function against a release 7.2 or earlier server, an error message is likely to be generated. At that point you must restart execution.
See the section "Piecewise Insert, Update and Fetch" in Chatper 2 for more information about piecewise operations and the osetpi() call.
The following sample code demonstrates the use of osetpi() to perform a piecewise fetch. This code is provided for demonstration purposes only, and does not represent a complete program.
For sample code demonstrating the use of osetpi() for a piecewise insert, see the description of the osetpi() routine later in this chapter.
This sample program performs a piecewise fetch from a LONG RAW column in an Oracle table. The program extracts the data from FILECONTENT and reconstitutes it in a file called FILENAME. The program is invoked with arguments specifying the file name and the piece size to be used for the fetch. Most of the data processing is done in the fetch_file() routine.
/* The table FILES is used for this example:
*
* SQL> describe FILES
* Name Null? Type
* FILENAME NOT NULL VARCHAR2(255)
* FILECONTENT LONG RAW
*/
... /* OCI #include statements */
#define DEFER_PARSE 1 /* oparse flags */
#define NATIVE 1
#define VERSION_7 2
#define OCI_MORE_FETCH_PIECES -3130
#define MAX_COL_SIZE 2147483648 /* 2 gigabytes */
#define OCI_EXIT_FAILURE 1 /* exit flags */
#define OCI_EXIT_SUCCESS 0
void fetch_file();
Lda_Def lda; /* login area */
ub1 hda[256]; /* host area */
Cda_Def cda; /* cursor area */
/* Usage: piecewise_fetch filename piecesize */
main(argc, argv)
int argc;
char *argv[];
{
... /* log on to Oracle */
fetch_file(argv[1], atol(argv[2]));
... /* log off of Oracle */
}
/* Function fetch_file(): retrieves contents of 'filename' */
/* from database and (re)stores it back to disk */
/* */
/* Note: If necessary, the context pointer could be used to */
/* point to a file being used in a piecewise operation.*/
/* It is not necessary in this example program, so a */
/* dummy value is passed instead. */
void fetch_file(filename, piecesize)
text *filename;
ub4 piecesize;
{
text *longbuf; /* buffer to hold long column on insert */
ub4 len_longbuf; /* length of buffer */
text errmsg[2000];
int fd;
char *context = "context pointer";
ub1 piece;
eword iteration = 0;
eword plsqltable;
ub1 cont = 1;
ub2 col_rcode;
ub2 col_len;
text *sqlstmt = (text *) "SELECT filecontent /
FROM FILES /
WHERE filename=:filename";
if (oopen(&cda, &lda, (text *)0, -1, -1, (text *)0, -1))
exit(OCI_EXIT_FAILURE);
if (!(fd = open((char *)filename, O_WRONLY | O_CREAT, 511)))
exit(OCI_EXIT_FAILURE);
/* Allocate memory for storage of one piece */
len_longbuf = piecesize;
longbuf = (text *)malloc(len_longbuf);
if (longbuf == (text *)NULL)
exit(OCI_EXIT_FAILURE);
if (oparse(&cda, sqlstmt, (sb4)-1, 0, (ub4)VERSION_7))
exit(OCI_EXIT_FAILURE);
if (obndrv(&cda, (text *)":filename", -1, filename, -1,
SQLT_STR, -1, (sb2 *)0, (text *)0, -1, -1))
exit(OCI_EXIT_FAILURE);
if (odefinps(&cda, 0, 1, (ub1 *)&context, (ub4) MAX_COL_SIZE,
SQLT_LBI, 0, (sb2 *)0, (text *)0, 0, 0,
(ub2 *)&col_len, (ub2 *)&col_rcode, 0, 0, 0, 0))
exit(OCI_EXIT_FAILURE);
if (oexec(&cda)) /* execute SQL statement */
exit(OCI_EXIT_FAILURE);
while (cont) /* while pieces remain to be fetched */
{
ofetch(&cda); /* do fetch */
switch (cda.rc) /* switch on return value */
{
case 0: /* write last piece to buffer */
if (len_longbuf != write(fd, longbuf, len_longbuf))
exit(OCI_EXIT_FAILURE);
cont = 0;
break;
case OCI_MORE_FETCH_PIECES: /* ORA-03130 was returned */
ogetpi(&cda, &piece, &context, &iteration, &plsqltable);
if (piece!=OCI_FIRST_PIECE) /* can't write on first fetch */
if (len_longbuf != write(fd, longbuf, len_longbuf))
exit(OCI_EXIT_FAILURE);
osetpi(&cda, piece, longbuf, &len_longbuf);
break;
default:
exit(OCI_EXIT_FAILURE); /* other value indicates error */
}
}
if (close(fd)) /* close file */
exit(OCI_EXIT_FAILURE);
if (oclose(&cda)) /* close cursor */
exit(OCI_EXIT_FAILURE);
}
Parameters
| Parameter Name | Type | Mode |
|---|---|---|
| cursor | Cda_Def * | IN |
| piece | sb1 | IN |
| bufp | dvoid * | IN |
| lenp | ub4 * | IN/OUT |
cursor A pointer to the cursor data area associated with the SQL or PL/SQL statement.
piece Specifies the piece being provided or fetched. Possible values are OCI_FIRST_PIECE (one), OCI_NEXT_PIECE (two) and OCI_LAST_PIECE (three). Relevant when the buffer is being set after error ORA-03129 was returned by a call to oexec().
bufp A pointer to a data buffer. If osetpi() is called as part of a piecewise insert, this pointer must point to the next piece of the data to be transmitted. If osetpi() is called as part of a piecewise fetch, this is a pointer to a buffer to hold the next piece to be retrieved.
lenp A pointer to the length in bytes of the current piece. If a piece is provided, the value is unchanged on return. If the buffer is filled up and part of the data is truncated, lenp is modified to reflect the length of the piece in the buffer.
#### sqlld2
Purpose
The sqlld2() routine is provided for OCI programs that operate as application servers in an X/Open distributed transaction processing environment. sqlld2() fills in fields in the LDA parameter according to the connection information passed to it.
Syntax
dvoid sqlld2(Lda_Def *lda, text *cname, sb4 *cnlen);
OCI programs that operate in conjunction with a transaction manager do not manage their own connections. However, all OCI programs require a valid LDA. You use sqlld2() to obtain the LDA.
sqlld2() fills in the LDA using the connection name passed in the cname parameter. The cname parameter must match the db_name alias parameter of the XA info string of the xa_open() call. If this parameter is a null pointer or if the cnlen parameter is set to zero, an LDA for the default connection is returned. Your program must allocate the LDA, then pass the pointer to it in the lda parameter.
sqlld2() does not return a value directly. If you call sqlld2() and there is no valid connection, the error
ORA-01012: not logged on
is returned in the return code field of the lda parameter.
sqlld2() must be invoked whenever there is an active XA transaction. This means it must be invoked after xa_open() and xa_start(), and before xa_end(). Otherwise an ORA-01012 error will result.
sqlld2() is part of SQLLIB, the Oracle Precompiler library. SQLLIB must be linked into all programs that call sqlld2(). See your Oracle system-specific documentation for information about linking SQLLIB.
The example code on the following page demonstrates how you can use sqlld2() to obtain a valid LDA for a specific connection.
#include "ocidfn.h"
#include "ociapr.h"
/* define two LDAs */
Lda_Def lda1;
Lda_Def lda2;
sb4 clen1 = -1L;
sb4 clen2 = -1L;
...
/* get the first LDA for OCI use */
sqlld2(&lda1, "NYdbname", &clen1);
if (lda1.rc != 0)
handle_error();
/* get the second LDA for OCI use */
sqlld2(&lda2, "LAdbname", &clen2);
if (lda2.rc != 0)
handle_error();
Parameters
| Parameter Name | Type | Mode |
|---|---|---|
| lda | Lda_Def * | OUT |
| cname | text * | IN |
| cnlen | sb4 * | IN |
lda A pointer to a local data area struct. You must allocate this data area before calling sqlld2().
cname A pointer to the name of the database connection. If the name is a null-terminated string, you can pass the cnlen parameter as -1L. If the name is not null terminated, pass the exact length of the string in cnlen.
If the name consists of all blanks, sqlld2() returns the LDA for the default connection.
The cname parameter must match the db_name alias parameter of the XA info string of the xa_open() call.
cnlen A pointer to the length of the cname parameter. You can pass this parameter as -1 if cname is a null-terminated string. If cnlen is passed as zero, sqlld2() returns the LDA for the default connection, regardless of the contents of cname.
#### sqllda
Purpose
The sqllda() routine is for programs that mix precompiler code and OCI calls. A pointer to an LDA is passed to sqllda(). On return from sqllda(), the required fields in the LDA are filled in.
Syntax
dvoid sqllda(Lda_Def *lda);
If your program contains both precompiler statements and calls to OCI functions, you cannot use olog() (or orlon() or olon()) to log on to Oracle. You must use the embedded SQL command
EXEC SQL CONNECT ...
to log on. However, many OCI functions require a valid LDA. The sqllda() function obtains the LDA. sqllda() is part of SQLLIB, the precompiler library.
sqllda() fills in the LDA using the connect information from the most recently executed SQL statement. So, the safest practice is to call sqllda() immediately after doing the EXEC SQL CONNECT ... statement.
sqllda() does not return a value directly. If you call sqllda() and there is no valid connection, the error
ORA-01012: not logged on
is returned in the return code field of the lda parameter.
The example below demonstrates how you can do multiple remote connections in a mixed Precompiler-OCI program. See Chapter 3 in the Programmer's Guide to the Oracle Precompilers for additional information about multiple remote connections.
EXEC SQL BEGIN DECLARE SECTION;
text user_id[20], passwd[20], db_string1[20], db_string2[20];
text dbn1[20], dbn2[20];
EXEC SQL END DECLARE SECTION;
...
/* host program declarations */
Lda_Def lda1; /* declare two LDAs */
Lda_Def lda2;
dvoid sqllda(Lda_Def *); /* declare the sqllda function */
...
/* set up strings */
strcpy(user_id, "scott");
strcpy(passwd, "tiger");
strcpy(db_string1, "newyork");
strcpy(db_string2, "losangeles");
strcpy(dbn1, "NY");
strcpy(dbn2, "LA");
...
/* do the connections */
EXEC SQL CONNECT :user_id IDENTIFIED BY :passwd
AT :dbn1 USING :db_string1;
/* get the first LDA for OCI use */
sqllda(&lda1);
EXEC SQL CONNECT :user_id IDENTIFIED BY :passwd
AT :dbn2 USING :db_string2;
/* get the second LDA for OCI use */
sqllda(&lda2);
Parameter
| Parameter Name | Type | Mode |
|---|---|---|
| lda | Lda_Def * | OUT |
lda A pointer to a local data area struct. You must allocate this data area before calling sqllda().
http://buzzard.ups.edu/potlatch/2014/potlatch2014.html
Combinatorial Potlatch 2014
Western Washington University
Saturday, November 22, 2014
The Combinatorial Potlatch is an irregularly scheduled, floating, one-day conference. It has been held for many years at various locations around Puget Sound and southern British Columbia, and is an opportunity for combinatorialists in the region to gather informally for a day of invited talks and conversation. While most who attend work in, or near, the Puget Sound basin, all are welcome. Typically there are three talks given by speakers who are visiting or new to the area, along with breaks for coffee and lunch. Many participants remain for dinner at a local restaurant or pub.
The American Heritage Dictionary defines "potlatch" as: A ceremonial feast among certain Native American peoples of the northwest Pacific coast, as in celebration of a marriage or an accession, at which the host distributes gifts according to each guest's rank or status. Between rival groups the potlatch could involve extravagant or competitive giving and destruction by the host of valued items as a display of superior wealth. [Chinook Jargon, from Nootka p'achitl, to make a potlatch gift.]
This fall's Potlatch is being hosted by the Department of Mathematics at Western Washington University at their campus in Bellingham, Washington on Saturday, November 22, 2014.
Significant funding is being provided by the Western Washington University Department of Mathematics. Their support is gratefully acknowledged.
### Schedule
All talks will be held in the Biology Building, Room 234 (BI 234), with registration and breaks nearby. See the Getting There section for exact locations and directions.
The day's schedule is:
• 10:00 AM Registration, Bagels and Coffee
• 11:00 AM Jane Butterfield, Line-of-Sight Pursuit in Sweepable Polygons
• 12:00 PM Lunch @ The Soy House Restaurant
• 2:30 PM Steven Klee, Face Enumeration on Simplicial Complexes
• 3:30 PM Cookies, Coffee and Cokes
• 4:00 PM Richard Anstee, Forbidden Configurations
• 5:30 PM Happy Hour, Dinner @ The Copper Hog
### Line-of-Sight Pursuit in Sweepable Polygons
We examine a turn-based pursuit-evasion game in a simply connected polygonal environment. One pursuer chases one evader, each of whom takes turns moving in straight lines a distance of at most one. The pursuer wins if she is within unit distance of the evader; the evader wins by eluding capture forever. Although both players have complete information about the polygonal environment, the pursuer has only line-of-sight visibility of the evader. We provide a winning strategy for the pursuer in monotone polygons and sweepable polygons. Our algorithm uses a capture strategy that is new to the pursuit-evasion field, which we call the 'rook strategy'. This strategy is similar in spirit to the well-known 'lion strategy', which does not seem to be suitable for sweepable polygons.
### Face Enumeration on Simplicial Complexes
A simplicial complex is a combinatorial object that can be represented as a topological space. Just as a graph is made up of vertices and edges, a simplicial complex is made up of vertices, edges, triangles, tetrahedra, and higher-dimensional simplices. The most natural combinatorial statistics to collect on a simplicial complex are its face numbers, which count the number of vertices, edges, and higher-dimensional faces in the complex.
This talk will give a survey on face numbers of simplicial complexes, beginning with planar graphs and extending to spheres and manifolds of higher dimensions. We will undertake two main questions in this talk: First, what is the relationship between the face numbers of a simplicial complex and its topological invariants? Second, how can we infer extra combinatorial information from properties of the underlying graph of a simplicial complex, such as graph connectivity and graph colorability?
### Forbidden Configurations
I have been exploring a problem in extremal combinatorics for many years. I will highlight a few recent results. One (joint with Lincoln Lu) relates to Ramsey Theory and improves a bound of Balogh and Bollobás from a double exponential to a single exponential. Another (joint with Attila Sali) uses the remarkable recent Block Design results of Keevash. I will mention VC-dimension which has proved to have many applications. And always the three most important proof techniques are induction, induction, and induction.
Some precise definitions: We say a matrix $F$ is a configuration in $A$ , written $F\prec A$, if there is a submatrix of $A$ which is a row and column permutation of $F$. We say a matrix is simple if it is a $(0,1)$-matrix with no repeated columns (a simple matrix corresponds to a set system). Let ${\cal F}$ be a set of matrices. We write Avoid$(m,{\cal F})$ for the set of $m$-rowed simple matrices $A$ with $F\not\prec A$ for each $F\in{\cal F}$. Let $\|A\|$ denote the number of columns of $A$. Our extremal problem is to compute forb$(m,{\cal F})=\max\{\|A\|\,:\,A\in\text{Avoid}(m,{\cal F})\}$.
### Registration
The Combinatorial Potlatch has no permanent organization and no budget. And we like it that way. Consequently, there are no registration fees because we wouldn't know what to do with them. You are on your own for meals and lodging, and the sponsoring institutions provide facilities, food for the breaks and some support for speakers' travel. So expressions of appreciation to the speakers and the hosts are preferred and especially encouraged. Thanks.
### Getting There
All talks will be held in the Biology Building, Room 234 (BI 234), with registration and breaks nearby. The building is in the south-central portion of the map below, marked "BI" or use the pull-down "Building" menu. Parking will be free in lot 12A, at the south end of campus, just south of West College Way. (Baby blue area on map linked below).
Campus Map
### Lodging
We have rooms reserved at the Best Western Hotel on Lakeway.
1. Discounted rate: $99/$109 depending on room type.
2. Rooms are held until November 15.
3. Book with the hotel directly at 1-360-671-1011, 1-800-671-1011.
Nearby Hotels (from WWU, organized by price).
You might wish to avoid the hotels in the Devil's Triangle.
### Dining and Happy Hour
We have reservations for no-host lunch and dinner at two local restaurants. We hope you can join other participants, and your guests are welcome to join us also.
Lunch: The Soy House Restaurant, 400 W Holly St, 360.393.4857
Happy Hour, Dinner: The Copper Hog, 1327 N. State St, 360.927.7888
### Organizers
• Rob Beezer, University of Puget Sound, beezer (at) ups (dot) edu, Communications Chair
• Nancy Ann Neudauer, Pacific University, nancy (at) pacificu (dot) edu, Program Chair
• Amites Sarkar, Western Washington University, amites.sarkar (a) wwu (dot) edu, Local Arrangements Chair
Last updated: November 18, 2014, http://buzzard.ups.edu/potlatch/2014/potlatch2014.html
http://physics.stackexchange.com/questions/53041/predict-final-temperature-by-taking-temperature-samples/53064
# Predict final temperature by taking temperature samples?
Is it possible to predict what the final temperature will be by taking temperature samples. For example, an object is 0ºC and moved to a room above 0ºC. I'm taking temperature of the object using a thermometer every second. Can I predict (approximation) on what the final temperature would be after a few samples? I guess the more samples the more accurate it would be. Can I calculate when the final temperature might occur based the rate of the temperature change?
Is there any formulas for these kind of calculations?
Seems certain that a 0 degree body moved into a 25 degree environment will ultimately reach a final temperature of 25 degrees. – Michael Luciuk Feb 4 '13 at 16:49
The temperature is unknown by the thermometer. – willi Feb 4 '13 at 16:55
This is Newton's law of cooling: the temperature difference between the room and the object decays exponentially,
$$T_{room} - T_{object} = (T_{room} - T_{0})e^{-kt}$$
where $T_{0}$ is the initial temperature of your object and $k$ is some constant.
You can take the temperature of the object as a function of time and then fit the expression above, but the problem is that this type of fit gives rather large errors in the final temperature $T_{room}$ unless you measure for long enough that you've almost reached the final temperature. You may also find Newton's law of cooling breaks down when the temperature difference is very small, because there is no longer effective convection.
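In fact, if $T_{room}$ is treated as unknown, three samples taken at equal time intervals $\Delta t$ already give a closed-form estimate. Writing $T_1, T_2, T_3$ for the readings at times $t$, $t+\Delta t$, $t+2\Delta t$, the law above gives $\frac{T_{room}-T_2}{T_{room}-T_1} = \frac{T_{room}-T_3}{T_{room}-T_2} = e^{-k\Delta t}$. Cross-multiplying and solving,
$$T_{room} = \frac{T_1 T_3 - T_2^2}{T_1 + T_3 - 2T_2}, \qquad k = \frac{1}{\Delta t}\ln\frac{T_{room}-T_1}{T_{room}-T_2}.$$
As a check, the samples $0$, $12.5$, $18.75$ give $T_{room} = \frac{0 \cdot 18.75 - 12.5^2}{0 + 18.75 - 25} = 25$. In practice you would average this estimate over many sample triples, since measurement noise is strongly amplified when the denominator $T_1 + T_3 - 2T_2$ is small.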
http://www.chegg.com/homework-help/questions-and-answers/multiple-concept-example-7-explores-approach-taken-problemssuch-one-blades-ceiling-fan-rad-q715359
Multiple-Concept Example 7 explores the approach taken in problems such as this one. The blades of a ceiling fan have a radius of 0.411 m and are rotating about a fixed axis with an angular velocity of +1.66 rad/s. When the switch on the fan is turned to a higher speed, the blades acquire an angular acceleration of 1.98 rad/s2. After 0.495 s have elapsed since the switch was reset, what are the following? (See figure below.)
(a) the total acceleration (in m/s2) of a point on the tip of a blade
(b) the angle θ between the total acceleration and the centripetal acceleration ac
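For reference, a worked sketch using the standard rotational kinematics relations (values rounded to three significant figures):
$$\omega = \omega_0 + \alpha t = 1.66 + (1.98)(0.495) \approx 2.64 \text{ rad/s}$$
$$a_c = \omega^2 r \approx (2.64)^2(0.411) \approx 2.86 \text{ m/s}^2, \qquad a_t = \alpha r = (1.98)(0.411) \approx 0.814 \text{ m/s}^2$$
$$a = \sqrt{a_c^2 + a_t^2} \approx 2.98 \text{ m/s}^2, \qquad \theta = \tan^{-1}(a_t/a_c) \approx 15.9^\circ$$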
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1155.90023
Zbl 1155.90023
Shen, Chungen; Xue, Wenjuan; Pu, Dingguo
Global convergence of a tri-dimensional filter SQP algorithm based on the line search method.
(English)
[J] Appl. Numer. Math. 59, No. 2, 235-250 (2009). ISSN 0168-9274
Authors' abstract: We propose a new filter line search successive quadratic programming (SQP) method in which the violations of equality and inequality constraints are considered separately. Thus the filter in our algorithm is composed of three components: objective function value, equality constraint violation, and inequality constraint violation. A filter with three components accepts reasonable steps more flexibly than one with two components. The new filter shares some features with the approach of Ch.-M. Chin and R. Fletcher [Math. Program. 96, No. 1 (A), 161-177 (2003; Zbl 1023.90060)], namely the "slanting envelope" and the "inclusion property". Under mild conditions, the filter line search SQP method is proven to be globally convergent. Numerical experiments also show the efficiency of our method.
[Klaus Schittkowski (Bayreuth)]
MSC 2000:
*90C55 Methods of successive quadratic programming type
90C30 Nonlinear programming
Keywords: line search method; backtracking; constrained optimization problem; SQP; filter
Citations: Zbl 1023.90060
http://www.crcg.de/wiki/DistinguishedLectureSeries
# Distinguished Lecture Series of the Courant Research Centre, Göttingen, November 4-6, 2008
## Russell Lyons (Indiana University, Bloomington): Determinants: Probability, Combinatorics, Topology.
Everybody is invited to attend the first Distinguished Lecture Series of the Courant Research Centre in Göttingen. For any further information about the lecture series please contact Andreas Thom (thom@uni-math.gwdg.de).
## Schedule
### 1st Talk (Tuesday, November 4, 16.15 - 17.15)
Asymptotic Enumeration of Spanning Trees via Fuglede-Kadison Determinants
ABSTRACT: Methods of enumeration of spanning trees in a finite graph and relations to various areas of mathematics and physics have been investigated for more than 150 years. We will review the history and applications. Then we will give new formulas for the asymptotics of the number of spanning trees of a graph. A special case answers a question of McKay (1983) for regular graphs. The general answer involves a quantity for infinite graphs that we call "tree entropy", which we show is a logarithm of a Fuglede-Kadison determinant of the graph Laplacian for infinite graphs. Proofs involve new traces and the theory of random walks.
### 2nd Talk (Wednesday, November 5, 16.15 - 17.15)
Stationary Determinantal Processes (Fermionic Lattice Gases)
ABSTRACT: Given a measurable function f on the d-dimensional torus with values in the unit interval, there is a 2-state stationary random field on the d-dimensional integer lattice that is defined via minors of the d-dimensional Toeplitz matrix of the function f. The variety of such systems includes certain combinatorial models, certain finitely dependent models, and certain renewal processes in one dimension. Among the interesting properties of these processes, we focus mainly on whether they have a phase transition analogous to that which occurs in statistical mechanics. We describe necessary and sufficient conditions on f for the existence of such a phase transition and give several examples to illustrate the theorem. We also give some idea of the proofs, which are based on harmonic analysis, functional analysis, real analysis, and complex analysis. This is joint work with Jeff Steif.
### 3rd Talk (Thursday, November 6, 15.30 - 16.30)
Random Complexes via Topologically-Inspired Determinants
ABSTRACT: Uniform spanning trees on finite graphs and their analogues on infinite graphs are a well-studied area. We present the basic elements of a higher-dimensional analogue on finite and infinite CW-complexes. On finite complexes, they relate to (co)homology, while on infinite complexes, they relate to $\ell^2$-Betti numbers. One use is to get uniform isoperimetric inequalities.
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DBSHBB_2003_v40n3_391
ROTATIONALLY INVARIANT COMPLEX MANIFOLDS
Isaev, A.V.;
Abstract
In this paper we discuss complex manifolds of dimension $n \ge 2$ that admit effective actions of either $U_n$ or $SU_n$ by biholomorphic transformations.
Keywords
complex manifolds; group actions
Language
English
https://socratic.org/questions/how-do-you-solve-for-u-in-y-u-1-u-2
# How do you solve for u in y= (u+1)/(u+2)?
Mar 18, 2016
$u = \frac{1 - 2y}{y - 1}$
#### Explanation:
$y = \frac{u + 1}{u + 2}$
$y \cdot (u + 2) = u + 1$
$y \cdot u + y \cdot 2 = u + 1$
$yu + 2y = u + 1$
Isolating $u$ on the L.H.S.:
$yu - u = 1 - 2y$
$u$ is common to both terms of the L.H.S.:
$u(y - 1) = 1 - 2y$
$u = \frac{1 - 2y}{y - 1}$
Mar 18, 2016
$u = \frac{- 2 y + 1}{y - 1}$
#### Explanation:
Another (more difficult) method:
Rewrite $u + 1$ as $u + 2 - 1$.
$y = \frac{u + 2 - 1}{u + 2}$
Split up the numerator.
$y = \frac{u + 2}{u + 2} - \frac{1}{u + 2}$
$y = 1 - \frac{1}{u + 2}$
Rearrange the terms.
$y - 1 = - \frac{1}{u + 2}$
Multiply both sides by $\left(u + 2\right)$.
$\left(y - 1\right) \left(u + 2\right) = - 1$
Distribute the $\left(y - 1\right)$ into $u$ and $2$:
$u \left(y - 1\right) + 2 \left(y - 1\right) = - 1$
$u \left(y - 1\right) + 2 y - 2 = - 1$
$u \left(y - 1\right) = - 2 y + 1$
After this algebraic rearrangement, divide both sides by $\left(y - 1\right)$ to isolate $u$.
$u = \frac{- 2 y + 1}{y - 1}$
https://academy.vertabelo.com/course/window-functions/window-functions-evaluation-order/introduction/subqueries-having
Instruction
Great. Just as we expected, no window functions are allowed in HAVING either. Okay, you know that the remedy is to use a subquery. Try to correct the query on your own. Don't worry if you can't, the hint will be waiting for you in case you need it.
Exercise
Again, we would like to show those countries (country name and average final price) that have the average final price higher than the average price from all over the world. Correct the query by using a subquery.
Stuck? Here's a hint!
Instead of AVG(final_price) OVER() at the end, put a small subquery in there. You don't even need a window function in there, simply calculate the average from all rows.
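For reference, a sketch of the corrected shape of such a query; the table and column names here (orders, country, final_price) are illustrative, not necessarily the course's actual schema:
-- Sketch only: compare each country's average final price with the
-- global average, computed by a scalar subquery instead of OVER().
SELECT country, AVG(final_price) AS avg_final_price
FROM orders
GROUP BY country
HAVING AVG(final_price) > (SELECT AVG(final_price) FROM orders);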
https://fantasticanachronism.com/page/3/
# Urne-Buriall in Tlön and the Sedulous Ape
At the end of Jorge Luis Borges's Tlön, Uqbar, Orbis Tertius, the Tlönian conspiracy is taking over the world and the narrator retreats into esoteric literary pursuits. The final paragraph reads:
Then English and French and mere Spanish will disappear from the globe. The world will be Tlön. I pay no attention to all this and go on revising, in the still days at the Adrogue hotel, an uncertain Quevedian translation (which I do not intend to publish) of Browne's Urn Burial.
Who's Browne? What is Urn Burial? And what does it have to do with the story?
#### Browne & Urne-Buriall
Sir Thomas Browne was born in 1605, died in 1682, and was trained as a doctor but is remembered mostly as a writer. He coined a huge number of words including "cryptography", "electricity", "holocaust", "suicide", and "ultimate". He personally embodied the transition from the Renaissance to the Enlightenment—a skeptical polymath with an interest in science, he published Pseudodoxia Epidemica, a book that refuted all sorts of popular superstitions. At the same time he believed in witches, alchemy, and astrology.
Hydriotaphia, Urne-Buriall, or, a Discourse of the Sepulchrall Urnes lately found in Norfolk is his most influential work. On the occasion of the discovery of some ancient burial urns, he launches into an examination of funerary customs across the world and ages, various cultures' ideas about death and the afterlife, and the ephemerality of posthumous fame. But the essay is mainly famous for its style rather than its content: endless sentences, over-the-top abuses of Latin, striking metaphors and imagery. It "smells in every word of the sepulchre", wrote Emerson. Browne begins in a dry, almost anthropological mode:
In a Field of old Walsingham, not many moneths past, were digged up between fourty and fifty Urnes, deposited in a dry and sandy soile, not a yard deep, nor farre from one another: Not all strictly of one figure, but most answering these described: Some containing two pounds of bones, distinguishable in skulls, ribs, jawes, thigh-bones, and teeth, with fresh impressions of their combustion.
The essay then slowly escalates...
To be gnaw’d out of our graves, to have our sculs made drinking-bowls, and our bones turned into Pipes, to delight and sport our Enemies, are Tragicall abominations, escaped in burning Burials.
...and reaches a crescendo in the final chapter:
And therefore restlesse inquietude for the diuturnity of our memories unto present considerations seems a vanity almost out of date, and superanuated peece of folly. We cannot hope to live so long in our names as some have done in their persons, one face of Janus holds no proportion unto the other. ’Tis too late to be ambitious. The great mutations of the world are acted, our time may be too short for our designes.
There is no antidote against the Opium of time, which temporally considereth all things; Our Fathers finde their graves in our short memories, and sadly tell us how we may be buried in our Survivors. Grave-stones tell truth scarce fourty years: Generations passe while some trees stand, and old Families last not three Oaks. To be read by bare Inscriptions like many in Gruter, to hope for Eternity by Ænigmaticall Epithetes, or first letters of our names, to be studied by Antiquaries, who we were, and have new Names given us like many of the Mummies, are cold consolations unto the Students of perpetuity, even by everlasting Languages. [...] In vain we compute our felicities by the advantage of our good names, since bad have equall durations; and Thersites is like to live as long as Agamemnon. Who knows whether the best of men be known? or whether there be not more remarkable persons forgot, than any that stand remembred in the known account of time?
#### Borges & Browne
The ending of Tlön refers to a real event. When Borges and Bioy-Casares were young, they really did produce a Quevedian translation of Browne's Urn Burial which they did not publish. Browne was a significant stylistic influence on Borges, especially his "baroque" and "labyrinthine" sentences. That conspicuously unliterary word found in the famous first sentence of Tlön, "conjunction", almost certainly comes from Browne. Borges explains in an interview:
When I was a young man, I played the sedulous ape to Sir Thomas Browne. I tried to do so in Spanish. Then Adolfo Bioy-Casares and I translated the last chapter of Urn Burial into seventeenth-century Spanish—Quevedo. And it went quite well in seventeenth-century Spanish.... We did our best to be seventeenth-century, we went in for Latinisms the way that Sir Thomas Browne did. [...] I was doing my best to write Latin in Spanish.
That phrase "sedulous ape", what does that mean? Well, it's a reference to Robert Louis Stevenson (Borges was a big fan) who also tried to imitate Browne! In his 1887 essay collection Memories and Portraits, Stevenson writes:
I have thus played the sedulous ape to Hazlitt, to Lamb, to Wordsworth, to Sir Thomas Browne, to Defoe, to Hawthorne, to Montaigne, to Baudelaire and to Obermann. I remember one of these monkey tricks, which was called The Vanity of Morals: it was to have had a second part, The Vanity of Knowledge; and as I had neither morality nor scholarship, the names were apt; but the second part was never attempted, and the first part was written (which is my reason for recalling it, ghostlike, from its ashes) no less than three times: first in the manner of Hazlitt, second in the manner of Ruskin, who had cast on me a passing spell, and third, in a laborious pasticcio of Sir Thomas Browne.
In fact Browne's essay was admired by virtually all of Borges's Anglo predecessors: De Quincey, Coleridge, Dr. Johnson, Emerson, Poe. The epigraph of The Murders in the Rue Morgue is taken from Urn Burial: "What song the Syrens sang, or what name Achilles assumed when he hid himself among women, although puzzling questions are not beyond all conjecture."
#### Back to Tlön
Let us return to the origin of this post: what does Urn Burial have to do with Tlön? Why does Borges make the reference? We can safely assume that he had a reason in mind, but the connection is not immediately obvious: Browne's essay is about funerary customs, death, and fame—there's nothing in it about idealism, alternate worlds, or the social construction of reality.
I believe the key is that Browne misattributed the urns to the Romans (they were really Anglo-Saxon and dated to ~500 AD). Just as the invented world of Tlön invades reality in the Borges story, so Browne's invented Roman urns (and the connection they imply between Rome and England) impose themselves on our reality. Or perhaps the connection is to the Tlönian hrönir, objects that (in that idealist universe) appear when you start looking for them. Browne looked for Roman urns and therefore found them. This perspective also recalls Borges's inventions of fictional writers. A third and more doubtful interpretation involves Browne's mention of Tiberius, who (like Borges-the-narrator) rejected the world and withdrew to Capri, where he wrote anachronistic Greek verse and had oneiric visions inspired by ancient mythology. Or perhaps Borges simply wanted to pay tribute to his influences.
1. 1.NYRB puts out a cute little tome, pairing it with his other famous essay, Religio Medici, a kind of Montaignian self-examination.
2. 2.Francisco de Quevedo (1580-1645) was a nobleman and prolific writer. I'm not sure if this is a little joke from Borges? I haven't read Quevedo yet, but the internet tells me he was known for his conceptismo style which favored rapidity, directness, and simple vocabulary (and was opposed to the ostentatious culteranismo style). This would suggest that the very notion of a "Quevedian translation" of Browne is a joke in itself, as Browne was famous for his ornate and extravagant style. But when Borges later talks about the "Quevedian translation" he refers to it as stylistically baroque and Latinate, so maybe not?
3. 3.Sedulous, adj. Showing dedication and diligence.
4. 4.Johnson wrote a short biography of Browne. "In defence of his uncommon words and expressions, we must consider that he had uncommon sentiments."
5. 5.Did Borges know about the urns not being Roman? The Norfolk Heritage Explorer(!) has a very handy page on the burial site, which includes a ton of references. The Archaeology of the Anglo-Saxon Settlements (1913) clearly attributes the urns to "Anglians" and calls out Browne for misattributing them. So it's entirely plausible that Borges would know about Browne's error.
6. 6."The methodical fabrication of hrönir [...] has made possible the interrogation and even the modification of the past, which is now no less plastic and docile than the future."
# The Best
Lucio Russo, The Forgotten Revolution
Russo argues that Hellenistic science was significantly more developed than generally believed, and that it was killed by the Romans. This is one of those books that you have to approach with aggressive skepticism: Russo makes the case for a Hellenistic hypothetico-deductive scientific method, and some of the deductions he makes from second-hand readings certainly seem a bit much. But even if you don't buy its thesis in the end I think it's worth reading. A fascinating collection of stories about Hellenistic science and how the Romans viewed it. Doesn't really address the question of whether there could have been a scientific and industrial revolution in ancient times (the answer is almost certainly No, but I'd still like to see it argued out).
Lots of surprising facts, but perhaps most surprising were the things that remained backward for a long time: did you know that the first complete translation of Euclid's Elements into Latin was done in the year 1120, by an Englishman translating from the Arabic?
Hipparchus compiled his catalog of stars precisely so that later generations might deduce from it the displacements of stars and the possible appearance of novae. Clearly, Hipparchus too did not believe in a material sphere in which the stars are set. His catalog achieved its aim in full: the stellar coordinates listed therein were incorporated into Ptolemy’s work and so handed down until such a time when a change in the positions of the “fixed” stars could be detected. Changes were first noticed in 1718 A.D. by Halley, who, probably without realizing that he was completing an experiment consciously started two thousand years earlier, recorded that his measured coordinates for Sirius, Arcturus and Aldebaran diverged noticeably from those given by Ptolemy.
Lawrence Durrell, The Alexandria Quartet
Four interconnected novels set in sensuous and decadent Alexandria in the days before World War II, focused on a loose group of foreigners drawn to and trapped by the city's pleasures. Each novel begins with an epigraph from the Marquis de Sade. Some have derided the prose as purple, but it works well given the setting and intended effect. Begins in a highly experimental, lyrical, wistful, impressionistic mode but mellows out into a normal novel later on.
There's a Rashomon element to it: with each book you learn more, and as the "circle of knowledge" expands, events and characters are recontextualized, and shifting perspectives transform mysterious romances or shadowy conspiracies into something completely different. Tragic love; Cavafy; personal relations vs historic forces; Anglo vs Med culture; colonialism and its failures; everyone is the protagonist in their own story. Really makes you want to have a doomed love affair in a degenerate expat shithole for a while.
I had drifted into sleep again; and when I woke with a start the bed was empty and the candle had guttered away and gone out. She was standing at the drawn curtains to watch the dawn break over the tumbled roofs of the Arab town, naked and slender as an Easter lily. In the spring sunrise, with its dense dew, sketched upon the silence which engulfs a whole city before the birds awaken it, I caught the sweet voice of the blind muezzin from the mosque reciting the Ebed — a voice hanging like a hair in the palm-cooled upper airs of Alexandria. ‘I praise the perfection of God, the Forever existing; the perfection of God, the Desired, the Existing, the Single, the Supreme; the Perfection of God, the One, the Sole’ … The great prayer wound itself in shining coils across the city as I watched the grave and passionate intensity of her turned head where she stood to observe the climbing sun touch the minarets and palms with light: rapt and awake. And listening I smelt the warm odour of her hair upon the pillow beside me.
Edward Gibbon, The History of the Decline and Fall of the Roman Empire
I read about 10 pages of Gibbon every night over the last year. Just super comfy and endlessly entertaining, I would happily keep going if there was more. The scale. The ambition. The style. Is it outdated in some respects? Sure. But Gibbon is not as impressionable as you might think, and his anti-Christian bias has been vastly exaggerated in the popular consciousness.
Full review forthcoming.
The subjects of the Byzantine empire, who assume and dishonour the names both of Greeks and Romans, present a dead uniformity of abject vices, which are neither softened by the weakness of humanity nor animated by the vigour of memorable crimes.
Álvaro Mutis, The Adventures and Misadventures of Maqroll
Seven interconnected picaresque novellas that revolve around the titular Maqroll, a vagabond of the seas. Wanderlust, melancholy, friendship, alcohol, and heartbreak as we follow his adventures and ill-fated "business ventures" at the margins of civilization. It's very good at generating a vicarious pleasure in the "nomadic mania" of the main character and the parallel world of tramp steamers, seamen, and port city whores. Stories are told through second-hand recountings, miraculously recovered documents, or distant rumours found in exotic locales. Underlying it all there's a feeling of a fundamental dissatisfaction with what life has to offer, and Maqroll's adventures are an attempt to overcome that feeling. Stylistically rich and sumptuous, reminiscent of Conrad's maritime adventures, Herzog's diaries, and even Borges.
My favorite of the novellas involves a decaying tramp steamer and a parallel love affair (between the ship's captain and its owner) which is taken up whenever and wherever the ship goes to port, then put on hold while at sea.
The tramp steamer entered my field of vision as slowly as a wounded saurian. I could not believe my eyes. With the wondrous splendor of Saint Petersburg in the background, the poor ship intruded on the scene, its sides covered with dirty streaks of rust and refuse that reached all the way to the waterline. The captain's bridge, and the row of cabins on the deck for crew members and occasional passengers, had been painted white a long time before. Now a coat of grime, oil, and urine gave them an indefinite color, the color of misery, of irreparable decadence, of desperate, incessant use. The chimerical freighter slipped through the water to the agonized gasp of its machinery and the irregular rhythm of driving rods that threatened at any moment to fall silent forever. Now it occupied the foreground of the serene, dreamlike spectacle that had held all my attention, and my astonished wonder turned into something extremely difficult to define. This nomadic piece of sea trash bore a kind of witness to our destiny on earth, a pulvis eris that seemed truer and more eloquent in these polished metal waters with the gold and white vision of the capital of the last czars behind them. The sleek outline of the buildings and wharves on the Finnish coast rose at my side. At that moment I felt the stirrings of a warm solidarity for the tramp steamer, as if it were an unfortunate brother, a victim of human neglect and greed to which it responded with a stubborn determination to keep tracing the dreary wake of its miseries on all the world's seas. I watched it move toward the interior of the bay, searching for some discreet dock where it could anchor without too many maneuvers and, perhaps, for as little money as possible. The Honduran flag hung at the stern. The final letters of the name that had almost been erased by the waves were barely visible: ...cyon. In what seemed too mocking an irony, the name of this old freighter was probably the Halcyon.
Stuart Ritchie, Science Fictions
An excellent introduction to the replication crisis. Covers both outright fraud and the grey areas of questionable research practices and hype. Highly accessible, you can give this to normal people and they will get a decent grasp of what's going on while being entertained by the amusing and/or terrifying anecdotes. I had a few quibbles, but overall it's very good.
The weird thing, though, is that scientists who already have tenure, and who already run well-funded labs, continue regularly to engage in the kinds of bad practices described in this book. The perverse incentives have become so deeply embedded that they’ve created a system that’s self-sustaining. Years of implicit and explicit training to chase publications and citations at any cost leave their mark on trainee scientists, forming new norms, habits, and ways of thinking that are hard to break even once a stable job has been secured. And as we discussed in the previous chapter, the system creates a selection pressure where the only academics who survive are the ones who are naturally good at playing the game.
Susanna Clarke, Piranesi
16 years after Jonathan Strange & Mr Norrell, a new novel from Susanna Clarke. It's short and not particularly ambitious, but I enjoyed it a lot. A tight fantastical mystery that starts out similar to The Library of Babel but then goes off in a different direction. I loved the setting (which is where the title comes from): a strange alternate dimension in the form of a great house filled with staircases and marble statues, with clouds at the upper levels and tides coming up from below.
Once, men and women were able to turn themselves into eagles and fly immense distances. They communed with rivers and mountains and received wisdom from them. They felt the turning of the stars inside their own minds. My contemporaries did not understand this. They were all enamoured with the idea of progress and believed that whatever was new must be superior to what was old. As if merit was a function of chronology! But it seemed to me that the wisdom of the ancients could not have simply vanished. Nothing simply vanishes. It’s not actually possible.
Francis Bacon, Novum Organum
The first part deals with science and empiricism and induction from an abstract perspective and it feels almost contemporary, like it was written by a time traveling 19th century scientist or something like that. The quarrel between the ancients and the moderns is already in full swing here, Bacon dunks on the Greeks constantly and upbraids people for blindly listening to Aristotle. He points to inventions like gunpowder and the compass and printing and paper and says that surely these indicate that there's a ton of undiscovered ideas out there, we should go looking for them. He talks about perceptual biases and scientific progress. Bacon's ambition feels limitless.
But any man whose care and concern is not merely to be content with what has been discovered and make use of it, but to penetrate further; and not to defeat an opponent in argument but to conquer nature by action; and not to have nice, plausible opinions about things but sure, demonstrable knowledge; let such men (if they please), as true sons of the sciences, join with me, so that we may pass the antechambers of nature which innumerable others have trod, and eventually open up access to the inner rooms.
Then you get to the second part and the Middle Ages hit you like a freight train, you suddenly realize this is no contemporary man at all and his conception of how the world works is completely alien. Ideas that to us seem bizarre and just intuitively nonsensical (about gravity, heat, light, biology, etc.) are only common sense to him. He repeats absurdities about light objects being pulled to the heavens while heavy objects are subject to the gravity of the earth, and so on. It's fascinating that both sides could exist in the same person. You won't learn anything new from Bacon, but it's a fascinating historical document.
Of twenty-five centuries in which human memory and learning is more or less in evidence, scarcely six can be picked out and isolated as fertile in sciences or favourable to their progress. There are deserts and wastes of time no less than of regions.
Signs should also be gathered from the growth and progress of philosophies and sciences. Those that are founded in nature grow and increase; those founded in opinion change but do not grow.
Thomas Pynchon, Bleeding Edge
Imagine going to a three Michelin star restaurant and being served a delicious burger and fries. No matter how good the burger, at some level you will feel disappointed. When I eat at Mr. Pynchon's restaurant I want a 20-dish tasting menu using unheard-of ingredients and requiring the development of entirely new types of kitchen machinery. Bleeding Edge is a burger. That said, it's funny, and readable, and stylish, and manages to evoke a great nostalgia for the early days of the internet—a 20th century version of the end of the Wild West, the railroad of centralized corporate interests conquering everything, while individualist early internet pioneers are shoved aside.
9/11, the deep web, intelligence agencies, power and sex, technology, family. Also just the idea of a 75-year-old geezer writing about Hideo Kojima and MILF-night at the "Joie de Beavre" is hilarious in itself.
“Our Meat Facial today, Ms. Loeffler?”
“Uhm, how’s that.”
“You didn’t get our offer in the mail? on special all this week, works miracles for the complexion—freshly killed, of course, before those enzymes’ve had a chance to break down, how about it?”
“Well, I don’t...”
“Wonderful! Morris, kill… the chicken!”
From the back room comes horrible panicked squawking, then silence. Maxine meantime is tilted back, eyelids aflutter, when— “Now we’ll just apply some of this,” wham! “...meat here, directly onto this lovely yet depleted face...”
“Mmff...”
“Pardon? (Easy, Morris!)”
“Why is it... uh, moving around like that? Wait! is that a— are you guys putting a real dead chicken in my— aaahhh!”
“Not quite dead yet!” Morris jovially informs the thrashing Maxine as blood and feathers fly everywhere.
Samuel R. Delany, Dhalgren
An amnesiac young man walks into a burned-out city, a localized post-apocalypse left behind by the rest of the United States. He meets the locals, gets into trouble, publishes some poetry, and ends up leading a gang. Perhaps more impressive and memorable than good, I would rather praise it than recommend it. It is a slog, it is puerile, the endless sex scenes are pointless at best, characters rather uninteresting, barely any story, the 70s counterculture stuff is comical, stylistically it's not up there with the stuff it's aping. I understand why some people hate it.
But there's something there, underneath all the grime. It reaches that size where quantity becomes a quality of its own. The combination of autobiography, pomo metafictional fuckery, magical realism, and science fiction is unique. Some of its scenes are certainly unforgettable. And it just has an alluring mystical aura, a compelling strangeness that I have a hard time putting into words...
He pictured great maps of darkness torn down before more. After today, he thought idly, there is no more reason for the sun to rise. Insanity? To live in any state other than terror! He held the books tightly. Are these poems mine? Or will I discover that they are improper descriptions by someone else of things I might have once been near; the map erased, aliases substituted for each location? Someone, then others, were laughing.
# The Worst
Octavia E. Butler, Lilith's Brood
They say never judge a book by its cover, but in this case you'd be spot on. 800 pages of non-stop alien-on-human rape. Tentacle rape, roofie rape, mind control rape, impregnation rape, this book has it all. Very much a fetish thing. One of the main plot lines is about an alien that is so horny that it will literally die if it doesn't get to rape any humans. It's also bad in more conventional ways - weak characters, weak plotting, all sorts of holes and inconsistencies, not to mention extremely shallow treatment of the ideas about genetics, hierarchy, transhumanism, etc. In retrospect I have no idea why I kept reading to the end.
“You said I could choose. I’ve made my choice!”
“Your body said one thing. Your words said another.” It moved a sensory arm to the back of his neck, looping one coil loosely around his neck. “This is the position,” it said.
Ralph Waldo Emerson, Essays: First Series
Good God, what did Nietzsche see in this stuff? Farming raised to metaphysical principle? There's a bit of Prince Myshkin in Emerson, a bit of Monsieur Teste, but it's all played completely straight. A reliable sleep-inducer if there ever was one.
The fallacy lay in the immense concession, that the bad are successful; that justice is not done now. The blindness of the preacher consisted in deferring to the base estimate of the market of what constitutes a manly success, instead of confronting and convicting the world from the truth; announcing the presence of the soul; the omnipotence of the will: and so establishing the standard of good and ill, of success and falsehood.
# Unjustified True Disbelief
Yesterday I wrote about fake experts and credentialism, but left open the question of how to react. The philosopher Edmund Gettier is famous for presenting a series of problems (known as Gettier cases) designed to undermine the justified true belief account of knowledge. I'm interested in a more pragmatic issue: people rejecting bad science when they don't have the abilities necessary to make such a judgment, or in other words, unjustified true disbelief.
There are tons of articles suggesting that Americans have recently come to mistrust science: the Boston Review wants to explain How Americans Came to Distrust Science, Scientific American discusses the "crumbling of trust in science and academia", National Geographic asks Why Do Many Reasonable People Doubt Science?, aeon tells us "there is a crisis of trust in science", while the Christian Science Monitor tells us about the "roots of distrust" of the "anti-science wave".
I got into a friendly argument on twitter about the first article in that list, in which Pascal-Emmanuel Gobry wrote that "normal people do know that "peer-reviewed studies" are largely and increasingly BS". I see two issues with this: 1) I don't think it's accurate, and 2) even if it were accurate, I don't think normal people can separate the wheat from the chaff.
#### Actual Trust in Science
Surveys find that trust in scientists has remained fairly stable for half a century.
A look at recent data shows an increase in "great deal or fair amount of confidence" in scientists from 76% in 2016 to 86% in 2019. People have not come to distrust science...to which one might reasonably ask, why not? We've been going on about the replication crisis for a decade now, how could this possibly not affect people's trust in science? And the answer to that is that normal people don't care about the replication crisis and don't have the tools needed to understand it even if they did.
#### Choosing Disbelief
Let's forget that and say, hypothetically, that normal people have come to understand that "peer-reviewed studies are largely and increasingly BS". What alternatives does the normal person have? The way I see it one can choose among three options:
1. Distrust everything and become a forest hobo.
2. Trust everything anyway.
3. Pick and choose by judging things on your own.
Let's ignore the first one and focus on the choice between 2 and 3. It boils down to this: the judgment is inevitably going to be imperfect, so does the gain from doubting false science outweigh the loss from doubting true science? That depends on how good people are at doubting the right things.
There's some evidence that laypeople can distinguish which studies will replicate and which won't, but this ability is limited and in the end relies on intuition rather than an understanding and analysis of the work. Statistical evidence is hard to evaluate: even academic psychologists are pretty bad at the basics. The reasons why vaccines are probably safe and nutrition science is probably crap, the reasons why prospect theory is probably real and social priming is probably fake are complicated! If it was easy to make the right judgment, actual scientists wouldn't be screwing up all the time. Thus any disbelief laypeople end up with will probably be unjustified. And the worst thing about unjustified true disbeliefs is that they also carry unjustified false disbeliefs with them.
Another problem with unjustified disbelief is that it fails to alter incentives in a useful way. Feedback loops are only virtuous if the feedback is epistemically reliable. (A point relevant to experts as well.)
And what exactly are the benefits from knowing about the replication crisis? So you think implicit bias is fake, what are you going to do with that information? Bring it up at your company's next diversity seminar? For the vast majority of people, beyond the intrinsic value of believing true things there is not much practical value in knowing about weaknesses in science.
#### Credentials Exist for a Reason
When they work properly, institutional credentials serve an extremely useful purpose: most laymen have no ability to evaluate the credibility of experts (and this is only getting worse due to increasing specialization). Instead, they offload this evaluation to a trusted institutional mechanism and then reap the rewards as experts uncover the secret mechanisms of nature and design better microwave ovens. There is an army of charlatans and mountebanks ready to pounce on anyone straying from the institutionally-approved orthodoxy—just look at penny stock promoters or "alternative" medicine.
Current strands of popular scientific skepticism offer a hint of what we can expect if there was more of it. Is this skepticism directed at methodological weaknesses in social science? Perhaps some valid questions about preregistration and outcome switching in medical trials? Elaborate calculations of the expected value of first doses first vs the risk of fading immunity? No, popular skepticism is aimed at very real and very useful things like vaccination, evolution, genetics, and nuclear energy. Most countries in the EU have banned genetically modified crops, for example—a moronic policy that damages not just Europeans, but overflows onto the people who need GMOs the most, African farmers. At one point Zambia refused food aid in the middle of a famine because the president thought it was "poison". In the past, shared cultural beliefs were tightly protected; today a kind of cheap skepticism filters down to people who don't know what to do with it and just end up in a horrible mess.
Realistically the alternative to blind trust of the establishment is not some enlightened utopia where we believe the true science and reject the fake experts; the alternative is a wide-open space for bullshit-artists to waltz in and take advantage of people. The practical reality of scientific skepticism is memetic and political, and completely unjustified from an epistemic perspective. Gobry himself once wrote a twitter thread about homeopathy and conspiratorial thinking in France: "the country is positively enamored with pseudo-science." He's right, it is. And that's exactly why we can't trust normal people to make the judgment that "studies are largely and increasingly BS".
The way I see it, the science that really matters also tends to be the most solid. The damage caused by "BS studies" seems relatively limited in comparison.
This line of reasoning also applies to yesterday's post on fake experts: for the average person, the choice between trusting a pseudonymous blogger versus trusting an army of university professors with long publication records in prestigious journals is pretty clear. From the inside view I'm pretty sure I'm right, but from the outside view the base rate of correctness among heterodox pseudonymous bloggers isn't very good. I wouldn't trust the damned blogger either! The only way to have even a modicum of confidence is personal verification, and unless you're part of the tiny minority with the requisite abilities, you should have no such confidence. So what are we left with? Helplessness and confusion. "Epistemic hell" doesn't even begin to cover it.
It is true that if you know where to look on the internet, you can find groups of intelligent and insightful generalists who outdo many credentialed experts. For example, many of the people I follow on twitter were (and in some respects still are) literally months ahead of the authorities on Covid-19. @LandsharkRides was only slightly exaggerating when he wrote that "here, in this incredibly small corner of twitter, we have cultivated a community of such incredibly determined autists, that we somehow know more than the experts in literally every single sphere". But while some internet groups of high-GRE generalists tend to be right, if you don't have the ability yourself it's hard to tell them apart from the charlatans.
#### But what about justified true disbelief?
Kahneman and Tversky came up with the idea of the inside view vs the outside view. The "inside view" is how we perceive our own personal situation, relying on our personal experiences and with the courage of our convictions. The "outside view" instead focuses on common elements, treating our situation as a single observation in a large statistical class.
From the inside view, skeptics of all stripes believe they are epistemically justified and possess superior knowledge. The people who think vaccines will poison their children believe it, the people who think the earth is flat believe it, and the people who doubt social science p=0.05 papers believe it. But the base rate of correctness among them is low, and the errors they make are dangerous. How do you know if you're one of the actually competent skeptics with genuinely justified true disbelief? From the inside, you can't tell. And if you can't tell, you're better off just believing everything.
Some forms of disbelief are less dangerous than others. For example if epidemiologists tell you you don't need to wear a mask, but you choose to wear them anyway, there's very little downside if your skepticism is misguided. The reverse (not wearing masks when they tell you to) has a vastly larger downside. But again this relies on an ability to predict and weigh risks, etc.
The one thing we can appeal to is data from the outside: objectively graded forecasts, betting, market trading. And while these tools could quiet your own doubts, realistically the vast majority of people are not going to bother with these sorts of things. (And are you sure you are capable of judging people's objective track records?) Michael A. Bishop argues against fixed notions of epistemic responsibility in In Praise of Epistemic Irresponsibility: How Lazy and Ignorant Can You Be?, instead favoring an environment where "to a very rough first approximation, being epistemically responsible would involve nothing other than employing reliable belief-forming procedures." I'm certainly in favor of that.
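To make "objectively graded forecasts" concrete, here is a minimal sketch of Brier scoring, one standard rule for grading probabilistic track records (the forecasts and outcomes below are made-up illustration data, not anyone's actual record):

```python
# Minimal sketch of grading a forecasting track record with Brier scores.
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes; lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up illustration data: probability assigned to each event vs. outcome.
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ~0.047: sharp and mostly right
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.250: an uninformative coin-flip
```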
So if you're reading this and feel confident in your stats abilities, your generalist knowledge, your intelligence, the quality of your intuitions, and can back those up via an objective track record, then go ahead and disbelieve all you want. But spreading that disbelief to others seems irresponsible to me. Perhaps even telling lay audiences about the replication crisis is an error. Maybe Ritchie should buy up every copy of his book and burn them, for the common good.
#### That's it?
Yup. Plenty of experts are fake but people should trust them anyway. Thousands of "studies" are just nonsense but people should trust them anyway. On net, less disbelief would improve people's lives. And unless someone has an objective track record showing they know better (and you have the ability to verify and compare it), you should probably ignore the skeptics. Noble lies are usually challenging ethical dilemmas, but this one strikes me as a pretty easy case.
I can't believe I ended up as an establishment shill after all the shit I've seen, but there you have it. I'm open to suggestions if you have a superior solution that would allow me to maintain an edgy contrarian persona.
1. 1.I am indebted to David A. Oliver for the phrase. As far as I can tell he was the first person to ever use "unjustified true disbelief", on Christmas Eve 2020.
2. 2.One might question what these polls are actually measuring. Perhaps they're really measuring if the respondents simply think of themselves (or would like to present themselves) as the type of person who believes in science, regardless of whether they do or not. Perhaps the question of how much regular people "trust scientists" is not meaningful?
3. 3.You might be thinking that it's actually not that difficult, but beware the typical mind fallacy. You're reading a blog filled with long rants on obscure metascientific topics, and that's a pretty strong selection filter. You are not representative of the average person. What seem like clear and obvious judgments to you is more or less magic to others.
4. 4.It should be noted that when it comes to anti-vax, institutional credentialed people are not exactly blameless. The Lancet published Wakefield and didn't retract his fraudulent MMR-autism paper for 12 years.
5. 5.The fear of GMOs is particularly absurd in the light of the alternatives. The "traditional" techniques are based on inducing random mutations through radiation and hoping some of them are good. While mutagenesis is considered perfectly safe and appropriate, targeted changes in plant genomes are terribly risky.
6. 6."But, we can educate..." I doubt it.
7. 7.Epidemiologists (not to mention bioethicists) during covid-19 provide one of the most significant examples of simultaneously being important and weak. But that one item can't outweigh all the stuff on the other side of the scale.
8. 8.Insert Palpatine "ironic" gif here.
# Are Experts Real?
I vacillate between two modes: sometimes I think every scientific and professional field is genuinely complex, requiring years if not decades of specialization to truly understand even a small sliver of it, and the experts at the apex of these fields have deep insights about their subject matter. The evidence in favor of this view seems pretty good, a quick look at the technology, health, and wealth around us ought to convince anyone.
But sometimes one of these masters at the top of the mountain will say something so obviously incorrect, something even an amateur can see is false, that the only possible explanation is that they understand very little about their field. Sometimes vaguely smart generalists with some basic stats knowledge objectively outperform these experts. And if the masters at the top of the mountain aren't real, then that undermines the entire hierarchy of expertise.
# Real Expertise
Some hierarchies are undeniably legitimate. Chess, for example, has the top players constantly battling each other, new players trying to break in (and sometimes succeeding), and it's all tracked by a transparent rating algorithm that is constantly updated. Even at the far right tails of these rankings, there are significant and undeniable skill gaps. There is simply no way Magnus Carlsen is secretly bad at chess.
Science would seem like another such hierarchy. The people at the top have passed through a long series of tests designed to evaluate their skills and knowledge, winnow out the undeserving, and provide specialized training: undergrad, PhD, tenure track position, with an armada of publications and citations along the way.
Anyone who has survived the torments of tertiary education will have had the experience of getting a broad look at a field in a 101 class, then drilling deeper into specific subfields in more advanced classes, and then into yet more specific sub-subfields in yet more advanced classes, until eventually you're stuck at home on a Saturday night reading an article in an obscure Belgian journal titled "Iron Content in Antwerp Horseshoes, 1791-1794: Trade and Equestrian Culture Under the Habsburgs", and the list of references carries the threatening implication of an entire literature on the equestrian metallurgy of the Low Countries, with academics split into factions justifying or expostulating the irreconcilable implications of rival theories. And then you realize that there's an equally obscure literature about every single subfield-of-a-subfield-of-a-subfield. You realize that you will never be a polymath and that simply catching up with the state of the art in one tiny corner of knowledge is a daunting proposition. The thought of exiting this ridiculous sham we call life flashes in your mind, but you dismiss it and heroically persist in your quest to understand those horseshoes instead.
It is absurd to think that after such lengthy studies and deep specialization the experts could be secret frauds. As absurd as the idea that Magnus Carlsen secretly can't play Chess. Right?
# Fake Expertise?
Imagine if tomorrow it was revealed that Magnus Carlsen actually doesn't know how to play chess. You can't then just turn to the #2 and go "oh well, Carlsen was fake but at least we have Fabiano Caruana, he's the real deal"—if Carlsen is fake that also implicates every player who has played against him, every tournament organizer, and so on. The entire hierarchy comes into question. Even worse, imagine if it was revealed that Carlsen was a fake, but he still continued to be ranked #1 afterwards. So when I observe extreme credential-competence disparities in science or government bureaucracies, I begin to suspect the entire system. Let's take a look at some examples.
#### N=59
In 2015, Viechtbauer et al. published A simple formula for the calculation of sample size in pilot studies, in which they describe a simple method for calculating the required N for an x% chance of detecting a certain effect based on the proportion of participants who exhibit the effect. In the paper, they give an example of such a calculation, writing that if 5% of participants exhibit a problem, the study needs N=59 for a 95% probability of detecting the problem. The actual required N will, of course, vary depending on the prevalence of the effect being studied.
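For concreteness, the rule as described reduces to one line of arithmetic: the chance of seeing at least one affected participant among n is 1 - (1 - p)^n, and you solve for the smallest n reaching the target confidence. A sketch reconstructed from that description (my code, not the authors'):

```python
# Sketch of the pilot-study sample-size rule, reconstructed from the text
# above (not the authors' code). We want the chance of seeing at least one
# affected participant, 1 - (1 - p)**n, to reach the target confidence.
import math

def pilot_n(p, confidence=0.95):
    """Smallest n giving at least `confidence` probability of >= 1 case."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

print(pilot_n(0.05))  # 59, the example from the paper
print(pilot_n(0.20))  # 14: the required N depends on the prevalence
```

The second call is the whole point: 59 is only correct for the 5% example, so copying it into a study of a different problem is a non sequitur.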
If you look at the papers citing Viechtbauer et al., you will find dozens of them simply using N=59, regardless of the problem they're studying, and explaining that they're using that sample size because of the Viechtbauer paper! The authors of these studies are professors at real universities, working in disciplines based almost entirely on statistical analyses. The papers passed through editors and peer reviewers. In my piece on the replication crisis, I wrote that I find it difficult to believe that social scientists don't know what they're doing when they publish weak studies; one of the most common responses from scientists was "no, they genuinely don't understand elementary statistics". It still seems absurd (just count the years from undergrad to PhD, how do you fail to pick this stuff up just by osmosis?) but it also appears to be true. How does this happen? Can you imagine a physicist who doesn't understand basic calculus? And if this is the level of competence among tenured professors, what is going on among the people below them in the hierarchy of expertise?
Epidemiologists have beclowned themselves in all sorts of ways over the last year, but this is one of my favorites. Michelle Odden, professor of epidemiology and population health at Stanford (to be fair she does focus on cardiovascular rather than infectious disease, but then perhaps she shouldn't appeal to her credentials):
CalPERS is the largest pension fund in the United States, managing about 400 billion dollars. Here is a video of a meeting of the CalPERS investment committee, in which you will hear Chief Investment Officer Yu Meng say two incredible things:

1. That he can pick active managers who will generate alpha, and this decreases portfolio risk.
2. That infrequent model-based valuation of investments makes them less risky compared to those traded on a market, due to "time diversification".

This is utter nonsense, of course. When someone questions him, he retorts with "I might have to go back to school to get another PhD". The appeal to credentials is typical when fake expertise is questioned. Can you imagine Magnus Carlsen appealing to a piece of paper saying he has a PhD in chessology to explain why he's good?

# Cybernetics

#### Feedback Loops

It all comes down to feedback loops. The optimal environment for developing and recognizing expertise is one which allows for clear predictions and provides timely, objective feedback along with a system that promotes the use of that feedback for future improvement. Capitalism and evolution work so well because of their merciless feedback mechanisms. The basic sciences have a great environment for such feedback loops: if physics was fake, CPUs wouldn't work, rockets wouldn't go to the moon, and so on. But if social priming is fake, well...?

There are other factors at play, too, some of them rather nebulous: there's more to science than just running experiments and publishing stuff. Mertonian, distributed, informal community norms play a significant role in aligning prestige with real expertise, and these broader social mechanisms are the keystone that holds everything together. But such things are hard to measure and impossible to engineer. And can such an honor system persist indefinitely or will it eventually be subverted by bad actors?

What does feedback look like in the social sciences? The norms don't seem to be operating very well. There's a small probability that your work will be replicated at some point, but really the main feedback mechanism is "impact" (in other words, citations). Since citations in the social sciences are not related to truth, this is useless at best. Can you imagine if fake crank theories in physics got as many citations as the papers from CERN? Notorious fraud Brian Wansink racked up 2500 citations in 2020, two years after he was forced to resign. There's your social science feedback loop!

The feedback loops of the academy are also predicated on the current credentialed insiders actually being experts. But if the N=59 crew are making such ridiculous errors in their own papers, they obviously don't have the ability to judge other people's papers either, and neither do the editors and reviewers who allow such things to be published—for the feedback loop to work properly you need both the cultural norms and genuinely competent individuals to apply them.

The candidate gene literature (of which 5-HTTLPR was a subset) is an example of feedback and successful course correction: many years and thousands of papers were wasted on something that ended up being completely untrue, and while a few of these papers still trickle in, these days the approach has been essentially abandoned and replaced by genome-wide association studies. Scott Alexander is rather sanguine about the ability of scientific feedback mechanisms to eventually reach a true consensus, but I'm more skeptical.
In psychology, the methodological deficiencies of today are more or less the same ones as those of the 1950s, with no hope for change. Sometimes we can't wait a decade or two for the feedback to work, the current pandemic being a good example. Being right about masks eventually isn't good enough. The loop there needs to be tight, feedback immediate. Blindly relying on the predictions of this or that model from this or that university is absurd. You need systems with built-in, automatic mechanisms for feedback and course correction, subsidized markets being the very best one. Tom Liptay (a superforecaster) has been scoring his predictions against those of experts; naturally he's winning. And that's just one person, imagine hundreds of them combined, with a monetary incentive on top.

#### Uniformity

There's a superficial uniformity in the academy. If you visit the physics department and the psychology department of a university they will appear very similar: the people working there have the same titles, they instruct students in the same degrees, and publish similar-looking papers in similar-looking journals. The N=59 crew display the exact same shibboleths as the real scientists. This similarity provides cover so that hacks can attain the prestige, without the competence, of academic credentials.

Despite vastly different levels of rigor, different fields are treated with the ~same seriousness. Electrical engineering is definitely real, and the government bases all sorts of policies on the knowledge of electrical engineers. On the other hand nutrition is pretty much completely fake, yet the government dutifully informs you that you should eat tons of cereal and a couple loaves of bread every day. A USDA bureaucrat can hardly override the Scientists (and really, would you want them to?). This also applies to people within the system itself: as I have written previously the funding agencies seem to think social science is as reliable as physics and have done virtually nothing in response to the "replication crisis" (or perhaps they are simply uninterested in fixing it).

In a recent episode of Two Psychologists Four Beers, Yoel Inbar was talking about the politically-motivated retraction of a paper on mentorship, asking "Who's gonna trust the field that makes their decisions about whether a paper is scientifically valid contingent on whether it defends the moral sensibilities of vocal critics?" I think the answer to that question is: pretty much everyone is gonna trust that field. Status is sticky. The moat of academic prestige is unassailable and has little to do with the quality of the research. “Anything that would go against World Health Organization recommendations would be a violation of our policy," says youtube—the question of whether the WHO is actually reliable being completely irrelevant. Trust in the scientific community has remained stable for 50+ years. Replication crisis? What's that? A niche concern for a handful of geeks, more or less. As Patrick Collison put it, "This year, we’ve come to better appreciate the fallibility and shortcomings of numerous well-established institutions (“masks don’t work”)… while simultaneously entrenching more heavily mechanisms that assume their correctness (“removing COVID misinformation”)." A twitter moderator can hardly override the Scientists (and really, would you want them to?).

This is all exacerbated by the huge increase in specialization.
Undoubtedly it has benefits, as scientists digging deeper and deeper into smaller and smaller niches allows us to make progress that otherwise would not happen. But specialization also has its downsides: the greater the specialization, the smaller the circle of people who can judge a result. And the rest of us have to take their word for it, hoping that the feedback mechanisms are working and nobody abuses this trust.

It wasn't always like this. A century ago, a physicist could have a decent grasp of pretty much all of physics. Hungarian Nobelist Eugene Wigner expressed his frustration with the changing landscape of specialization:

By 1950, even a conscientious physicist had trouble following more than one sixth of all the work in the field. Physics had become a discipline, practiced within narrow constraints. I tried to study other disciplines. I read Reviews Of Modern Physics to keep abreast of fields like radio-astronomy, earth magnetism theory, and magneto-hydrodynamics. I even tried to write articles about them for general readers. But a growing number of the published papers in physics I could not follow, and I realized that fact with some bitterness.

It is difficult (and probably wrong) for the magneto-hydrodynamics expert to barge into radio-astronomy and tell the specialists in that field that they're all wrong. After all, you haven't spent a decade specializing in their field, and they have. It is a genuinely good argument that you should shut up and listen to the experts.

#### Bringing Chess to the Academy

There are some chess-like feedback mechanisms even in social science, for example the DARPA SCORE Replication Markets project: it's a transparent system of predictions which are then objectively tested against reality and transparently scored. But how is this mechanism integrated with the rest of the social science ecosystem? We are still waiting for the replication outcomes, but regardless of what happens I don't believe they will make a difference. Suppose the results come out tomorrow and they show that my ability to judge the truth or falsity of social science research is vastly superior to that of the people writing and publishing this stuff. Do you think I'll start getting emails from journal editors asking me to evaluate papers before they publish them? Of course not, the idea is ludicrous. That's not how the system works. "Ah, you have exposed our nonsense, and we would have gotten away with it too if it weren't for you meddling kids! Here, you can take control of everything now." The truth is that the system will not even have to defend itself, it will just ignore this stuff and keep on truckin' like nothing happened. Perhaps constructing a new, parallel scientific ecosystem is the only way out.

On the other hand, an objective track record of successful (or unsuccessful) predictions can be useful to skeptics outside the system. In the epistemic maelstrom that we're stuck in, forecasts at least provide a steady point to hold on to. And perhaps it could be useful for generating reform from above. When the scientific feedback systems aren't working properly, it is possible for help to come from outside.

So what do you do? Stop trusting the experts? Make your own judgments? Trust some blogger who tells you what to believe? Can you actually tell apart real results from fake ones, reliable generalists from exploitative charlatans? Do you have it in you to ignore 800 epidemiologists, and is it actually a good idea? More on this tomorrow.
1. 1.I'm not talking about pundits or amorphous "elites", but people actually doing stuff.
2. 2.Sometimes the explanation is rather mundane: institutional incentives or preference falsification make these people deliberately say something they know to be false. But there are cases where that excuse does not apply.
3. 3.Ever notice how people with "Dr." in their twitter username tend to post a lot of stupid stuff?
4. 4.A Lysenkoist farmer in a capitalist country is an unprofitable farmer and therefore soon not a farmer at all.
5. 5.I also imagine it's easier to maintain an honor system with the clear-cut clarity (and rather apolitical nature) of the basic natural sciences compared to the social sciences.
6. 6.The same features can even be found in the humanities which is just tragicomic.
7. 7.The stability of academic prestige in the face of fake expertise makes it a very attractive target for political pressure. If you can take over a field, you can start saying anything you want and people will treat you seriously.
8. 8.Medicine was probably a net negative until the end of the 19th century, but doctors never went away. People continued to visit doctors while they bled them to death and smeared pigeon poop on their feet. Is it that some social status hierarchies are completely impregnable regardless of their results? A case of memetic defoundation perhaps? Were people simply fooled by regression to the mean?
9. 9.The very fact that such a prediction market could be useful is a sign of weakness. Physicists don't need prediction markets to decide whether the results from CERN are real.
10. 10.The nice thing about objective track records is that they protect against both credentialed bullshitters and uncredentialed charlatans looking to exploit people's doubts.
11. 11.Of course it's no panacea, especially in the face of countervailing incentives.

# Links & What I've Been Reading Q4 2020

#### Forecasting

Arpit Gupta on prediction markets vs 538 in the 2020 election: "Betting markets are pretty well calibrated—state markets that have an estimate of 50% are, in fact, tossups in the election. 538 is at least 20 points off—if 538 says that a state has a ~74% chance of going for Democrats, it really is a tossup." Also, In Defense of Polling: How I earned $50,000 on election night using polling data and some Python code. Here is a giant spreadsheet that scores 538/Economist vs markets. And here is a literal banana arguing against the very idea of polls.
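The calibration claim above is easy to check mechanically. A hedged sketch (the bucketing scheme and the data are my own illustration, not Gupta's actual analysis): group forecasts by predicted probability and compare each bucket's realized frequency.

```python
# Illustration of a market-calibration check; the data below are made up.
def calibration_table(forecasts, outcomes):
    buckets = {}
    for f, o in zip(forecasts, outcomes):
        key = int(f * 10) / 10  # lower edge of a 10%-wide bucket
        buckets.setdefault(key, []).append(o)
    for key in sorted(buckets):
        hits = buckets[key]
        print(f"predicted {key:.0%}-{key + 0.1:.0%}: "
              f"happened {sum(hits) / len(hits):.0%} (n={len(hits)})")

# A well-calibrated 50% bucket resolves about half the time.
calibration_table([0.50, 0.52, 0.55, 0.58, 0.90, 0.92], [1, 0, 0, 1, 1, 1])
```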
Markets vs. polls as election predictors: An historical assessment (2012). Election prediction markets stretch back to the 19th century, and they used to be heavily traded and remarkably accurate despite the lack of any systematic polling information. Once polling was invented, volumes dropped and prediction markets lost their edge. Perhaps things are swinging in the other direction again?
Metaculus is organizing a "large-scale, comprehensive forecasting tournament dedicated to predicting advances in artificial intelligence" with $50k in prize money.

Covid

Philippe Lemoine critiques Flaxman et al.'s "Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe" in Nature:

However, as far as I can tell, Flaxman et al. don’t say what the country-specific effect was for Sweden either in the paper or in the supplementary materials. This immediately triggered my bullshit detector, so I went and downloaded the code of their paper to take a closer look at the results and, lo and behold, my suspicion was confirmed. In this chart, I have plotted the country-specific effect of the last intervention in each country:

Flaxman responds on Andrew Gelman's blog. Lemoine responds to the response.

Alex Tabarrok has been beating the drum for delaying the second dose and vaccinating more people with a single dose instead.

Twitter thread on the new, potentially more infectious B.1.1.7 variant.

Sound pollution decreased due to COVID-19, and "birds responded by producing higher performance songs at lower amplitudes, effectively maximizing communication distance and salience".

Scent dog identification of samples from COVID-19 patients: "The dogs were able to discriminate between samples of infected (positive) and non-infected (negative) individuals with average diagnostic sensitivity of 82.63% and specificity of 96.35%." [N=1012] Unfortunately they didn't try it on asymptomatic/pre-symptomatic cases.

And a massive update on everything Covid from Zvi Mowshowitz, much of it infuriating. Do approach with caution though, the auction argument in particular seems questionable.

Innovations and Innovation

Peer Rejection in Science: a collection of "key discoveries [that] have been at some point rejected, mocked, or ignored by leading scientists and expert commissions."

Somewhat related: if GAI is close, why aren't large companies investing in it? NunoSempere comments with some interesting historical examples of breakthrough technologies that received very little investment or were believed to be impossible before they were realized.

Deepmind solves protein folding. And a couple of great blog posts by Mohammed AlQuraishi, where he talks about why AlphaFold is important, why this innovation didn't come from pharmaceutical companies or the academy, and more:

• First, from 2018, AlphaFold @ CASP13: “What just happened?”: I don’t think we would do ourselves a service by not recognizing that what just happened presents a serious indictment of academic science. [...] What is worse than academic groups getting scooped by DeepMind? The fact that the collective powers of Novartis, Pfizer, etc, with their hundreds of thousands (~million?) of employees, let an industrial lab that is a complete outsider to the field, with virtually no prior molecular sciences experience, come in and thoroughly beat them on a problem that is, quite frankly, of far greater importance to pharmaceuticals than it is to Alphabet.

• Once a problem is solved in any way, it becomes hard to justify solving it another way, especially from a publication standpoint.

"These improvements drop the turn-around time from days to twelve hours and the cost for whole genome sequencing (WGS) from about $1000 to $15, as well as increase data production by several orders of magnitude."
If this is real (and keep in mind$15 is not the actual price end-users would pay) we can expect universal whole-genome sequencing, vast improvements in PGSs, and pervasive usage of genetics in medicine in the near future.
Extrapolating GPT-N performance: "Close-to-optimal performance on these benchmarks seems like it’s at least ~3 orders of magnitude compute away [...] Taking into account both software improvements and potential bottlenecks like data, I’d be inclined to update that downwards, maybe an order of magnitude or so (for a total cost of ~$10-100B). Given hardware improvements in the next 5-10 years, I would expect that to fall further to ~$1-10B."
Fund people, not projects I: The HHMI and the NIH Director's Pioneer Award. "Ultimately it's hard to disagree with Azoulay & Li (2020), we need a better science of science! The scientific method needs to examine the social practice of science as well, and this should involve funders doing more experiments to see what works. Rather than doing whatever it is that they are doing now, funders should introduce an element of explicit randomization into their process."
It will take more than a few high-profile innovations to end the great stagnation. "And if you sincerely believe that we are in a new era of progress, then argue for it rigorously! Show it in the data. Revisit the papers that were so convincing to you a year ago, and go refute them directly."
The Rest
Why Are Some Bilingual People Dyslexic in English but Not Their Other Language? I'm not entirely sure about the explanations proposed in the article, but it's fascinating nonetheless.
The Centre for Applied Eschatology: "CAE is an interdisciplinary research center dedicated to practical solutions for existential or global catastrophe. We partner with government, private enterprise, and academia to leverage knowledge, resources, and diverse interests in creative fusion to bring enduring and universal transformation. We unite our age’s greatest expertise to accomplish history’s greatest task."
Labor share has been decreasing over the past decades, but without a corresponding increase in the capital share of income. Where does the money go? This paper suggests: housing costs. Home ownership as investment may have seemed like a great idea in the past, but now we're stuck in this terrible equilibrium where spiraling housing costs are causing huge problems but it would be political suicide to do anything about it. It's easy to say "LVT now!" but good luck formulating a real plan to make it reality.
1/4 of animals used in research are included in published papers. Someone told me this figure is surprisingly high. Unfortunately there's no data in the paper breaking down unpublished null results vs bad data/failed experiments/etc.
@Evolving_Moloch reviews Rutger Bregman's Humankind. "Bregman presents hunter-gatherer societies as being inherently peaceful, antiwar, equal, and feminist likely because these are commonly expressed social values among educated people in his own society today. This is not history but mythology."
@ArtirKel reviews Vinay Prasad's Malignant, with some comments on progress in cancer therapy and the design of clinical trials. "The whole system is permeated by industry-money, with the concomitant perverse incentives that generates."
@Cerebralab2 reviews Nick Lane's Power, Sex, Suicide: Mitochondria and the meaning of life. "The eukaryotic cell appeared much later (according to the mainstream view) and in the space of just a few hundred million years—a fraction of the time available to bacteria—gave rise to the great fountain of life we see all around us."
Is the great filter behind us? The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare. "Together with the dispersed timing of key evolutionary transitions and plausible priors, one can conclude that the expected transition times likely exceed the lifetime of Earth, perhaps by many orders of magnitude. In turn, this suggests that intelligent life is likely to be exceptionally rare." (Highly speculative, and there are some assumptions one might reasonably disagree with.)
How to Talk When a Machine is Listening: Corporate Disclosure in the Age of AI. "Companies [...] manage the sentiment and tone of their disclosures to induce algorithmic readers to draw favorable conclusions about the content."
On the Lambda School stats. "If their outcomes are actually good, why do they have to constantly lie?"
On the rationalism-to-trad pipeline. (Does such a pipeline actually exist?) "That "choice" as a guiding principle is suspect in itself. It's downstream from hundreds of factors that have nothing to do with reason. Anything from a leg cramp to an insult at work can alter the "rational" substrate significantly. Building civilizations on the quicksand of human whim is hubris defined."
There is a wikipedia article titled List of nicknames used by Donald Trump.
Why Is There a Full-Scale Replica of the Parthenon in Nashville, Tennessee?
We Are What We Watch: Movie Plots Predict the Personalities of Those who “Like” Them. An amusing confirmation of stereotypes: low extraversion people are anime fanatics, low agreeableness people like Hannibal, and low openness people just have terrible taste (under a more benevolent regime they might perhaps be prohibited from consuming media).
A short film based on Blindsight. Won't make any sense if you haven't read the book, but it looks great.
And here is a Japanese idol shoegaze group. They're called ・・・・・・・・・ and their debut album is 「」.
• The History of the Decline and Fall of the Roman Empire by Edward Gibbon. Fantastic. Consistently entertaining over almost 4k pages. Gibbon's style is perfect. I took it slow, reading it over 364 days...and I would gladly keep going for another year. Full review forthcoming.
• The Adventures and Misadventures of Maqroll by Álvaro Mutis. A lovely collection of 7 picaresque novellas that revolve around Maqroll, a cosmopolitan vagabond of the seas. Stylistically rich and sumptuous. It's set in a kind of parallel maritime world, the world of traders and seamen and port city whores. Very melancholy, with doomed business ventures at the edge of civilization, doomed loves, doomed lives, and so on. While the reader takes vicarious pleasure in the "nomadic mania" of Maqroll, the underlying feeling is one of fundamental dissatisfaction with what life has to offer—ultimately the book is about our attempts to overcome it. Reminiscent of Conrad, but also Herzog's diaries plus a bit of Borges in the style.
• Pandora’s Box: A History of the First World War by Jörn Leonhard. A comprehensive, single-volume history of WWI from a German author. It goes far beyond military history: besides the battles and armaments it covers geopolitics and diplomacy, national politics, economics, public opinion, morale. All fronts and combatants are explored, and all this squeezed into just 900 pages (some things are inevitably left out - for example no mention is made of Hoover's famine relief efforts). Its approach is rather abstract, so if you're looking for a visceral description of the trenches this isn't the book for you. The translation isn't great, and it can get a bit dry and repetitive, but overall it's a very impressive tome. n.b. the hardcover edition from HUP is astonishingly bad and started falling apart immediately. (Slightly longer review on goodreads.)
• To Hold Up the Sky by Liu Cixin. A new short story collection. Not quite at the same level as the Three Body Trilogy, but there are some good pieces. I particularly enjoyed two stories about strange and destructive alien artists: Sea of Dreams (in which an alien steals all the water on earth for an orbital artwork), and Cloud of Poems (in which a poetry contest ultimately destroys the solar system, sort of a humanistic scifi take on The Library of Babel).
• Creating Future People: The Ethics of Genetic Enhancement by Jonathan Anomaly. A concise work on the ethical dilemmas posed by genetic enhancement technology. It's written by a philosopher, but uses a lot of ideas from game theory and economics to work out the implications of genetic enhancement. Despite its short length, it goes into remarkable practical detail on things like how oxytocin affects behavior, the causes of global wealth inequality, and the potential of genetic editing to decrease the demand for plastic surgery. On the other hand, I did find it somewhat lacking (if not evasive) in its treatment of more general and abstract philosophical questions, such as: under what conditions is it acceptable to hurt people today in order to help future people?
• The Life and Opinions of Tristram Shandy, Gentleman by Laurence Sterne. Famously the "first postmodern novel", this fictional biography from the 1760s is inventive, bawdy, and generally really strange and crazy. Heavily influenced by Don Quixote, parodies various famous writers of the time. Schopenhauer loved it and a young Karl Marx drew inspiration from it when writing Scorpion and Felix! I admire its ethos, and it's sometimes very funny. But ironic shitposting is still shitposting, and 700 pages of shitposting is a bit...pleonastic. At one point the narrator explains his digressions in the form of a line, one for each volume:
• Illuminations by Walter Benjamin. Doesn't really live up to the hype. The essays on Proust and Baudelaire are fine, the hagiography of Brecht feels extremely silly in retrospect. The myopia of extremist 1930s politics prevents him from seeing very far.
• Omensetter's Luck by William Gass. An experimental novel that does a great job of evoking 19th century rural America. Omensetter is a beguiling, larger-than-life figure, a kind of natural animal of a man. Nowhere near as good as The Tunnel and not much easier to read either. It's clearly an early piece, before Gass had fully developed his style.
• The Silence: A Novel by Don DeLillo. Not so much a novel as a sketch of one. Not even undercooked, completely raw. It's about a sudden shutdown of all technology. Stylistically uninteresting compared to his other work. Here's a good review.
• Little Science, Big Science by Derek John de Solla Price. Purports to be about the science of science, but really mostly an exploration of descriptive statistics over time - number of scientists, the distribution of their productivity and intelligence, distribution across countries, citations, and so on. Should have been a blog post. Nice charts, worth skimming just for them. (Very much out of print, but you can grab a pdf from the internet archive or libgen).
• Experiment and the Making of Meaning: Human Agency in Scientific Observation and Experiment by David Gooding. Skimmed it. Written in 1990 but feels very outdated, nobody cares about observation sentences any more and they didn't in 1990 either. Some interesting points about the importance of experiment (as opposed to theory) in scientific progress. On the other hand all the fluffy stuff about "meaning" left me completely cold.
• The Subjective Side of Science: A Philosophical Inquiry into the Psychology of the Apollo Moon Scientists by Ian Mitroff. Based on a series of structured interviews with geologists working on the Apollo project. Remarkably raw. Mitroff argues in favor of the subjective side, the biased side, of how scientists actually perform science in the real world. On personality types, relations between scientists, etc. There are some silly parts, like an attempt to tie Jungian psychology with the psychological clusters in science, and a very strange typology of scientific approaches toward the end, but overall it's above-average for the genre.
• On Writing: A Memoir of the Craft by Stephen King. A pleasant autobiography combined with some tips on writing. The two parts don't really fit together very well. This was my first King book, I imagine it's much better if you're a fan (he talks about his own novels quite a lot).
• The Lord Chandos Letter And Other Writings by Hugo von Hofmannsthal. A collection of short stories plus the titular essay. If David Lynch had been a symbolist writer, these are the kinds of stories he would have produced. Vague, mystical, dreamlike, impressionistic. I found them unsatisfying, and they never captured my interest enough to try to disentangle the symbols and allegories. The final essay about the limitations of language is worth reading, however.
• Ghost Soldiers: The Forgotten Epic Story of World War II's Most Dramatic Mission by Hampton Sides. An account of a daring mission to rescue POWs held by the Japanese in the Philippines. The mission itself is fascinating but fairly short, and the book is padded with a lot of background info that is nowhere near as interesting (though it does set the scene). Parts of it are brutal and revolting beyond belief.
• We Are Legion (We Are Bob) by Dennis Taylor. An interesting premise: a story told from the perspective of a sentient von Neumann probe. That premise is sort-of squandered by a juvenile approach filled to the brim with plot holes, and an inconclusive story arc: it's just setting up the sequels. Still, it's pretty entertaining. If you want a goofy space opera audiobook to listen to while doing other stuff, I'd recommend it.
• Rocket Men: The Daring Odyssey of Apollo 8 and the Astronauts Who Made Man's First Journey to the Moon by Robert Kurson. Focused on the personalities, personal lives, and families of the three astronauts on Apollo 8: Frank Borman, William Anders, and James Lovell, set against the tumultuous political situation of late 1960s America. Written in a cinematic style, there's little on the technical/organizational aspects of Apollo 8. Its treatment of the cold war is rather naïve. The book achieves its goals, but I was looking for something different. Ray Porter (who I usually like) screws up the narration of the audiobook with an over-emotive approach, often emphasizing the wrong words. Really strange.
# Book Review: The Idiot
In 1969, Alfred Appel declared that Ada or Ardor was "the last 19th-century Russian novel". Now we have in our hands a new last 19th-century Russian novel—perhaps even the final one. And while Nabokov selected the obvious and trivial task of combining the Russian and American novels, our "Dostoyevsky" (an obvious pseudonym) has given himself the unparalleled and interminably heroic mission of combining the Russian novel with the Mexican soap opera. I am pleased to report that he has succeeded in producing a daring postmodern pastiche that truly evokes the 19th century.
The basic premise of The Idiot is lifted straight from Nietzsche's Antichrist 29-31:
To make a hero of Jesus! And even more, what a misunderstanding is the word 'genius'! Our whole concept, our cultural concept, of 'spirit' has no meaning whatever in the world in which Jesus lives. Spoken with the precision of a physiologist, even an entirely different word would be yet more fitting here—the word idiot. [...] That strange and sick world to which the Gospels introduce us — a world like that of a Russian novel, in which refuse of society, neurosis and ‘childlike’ idiocy seem to make a rendezvous.
The novel opens with Prince Lev Nikolayevich Myshkin, the titular idiot, returning by train to Russia after many years in the hands of a Swiss psychiatrist. He is a Christlike figure, as Nietzsche puts it "a mixture of the sublime, the sickly, and the childlike", a naïf beset on all sides by the iniquities of the selfish and the corruptive influence of society. Being penniless, he seeks out a distant relative and quickly becomes entangled in St. Petersburg society.
If this were really a 19th century novel, it would follow a predictable course from this point: the "idiot" would turn out to be secretly wiser than everyone, the "holy fool" would speak truths inaccessible to normal people, his purity would highlight the corruption of the world around him, his naiveté would ultimately be a form of nobility, and so on.
Instead, Myshkin finds himself starring in a preposterous telenovela populated by a vast cast of absurdly melodramatic characters. He quickly receives an unexpected inheritance that makes him wealthy, and is then embroiled in a web of love and intrigue. As in any good soap opera, everything is raised to extremes in this book: there are no love triangles because three vertices would not be nearly enough; instead there are love polyhedrons, possibly in four or seven dimensions.
Myshkin's first love interest is the intimidating, dark, and self-destructive Nastasya Filippovna. An orphan exploited by her guardian, she is the talk of the town and chased by multiple suitors, including the violent Rogozhin and the greedy Ganya. Myshkin thinks she's insane but pities her so intensely that they have an endless and tempestuous on-again off-again relationship, which includes Nastasya skipping out on multiple weddings. In the construction of this character I believe I detect the subtle influence of the yandere archetype from Japanese manga.
The second woman in Myshkin's life is the young and wealthy Aglaya Ivanovna: proud, snobbish, and innocent, she cannot resist mocking Myshkin, but at the same time is deeply attracted to him. Whereas Nastasya loves Myshkin but thinks she's not good enough for him, Aglaya loves him but thinks she's too good for him.
The main cast is rounded off by a bunch of colorful characters, including the senile general Epanchin, various aristocrats, a boxer, a religious maniac, and Ippolit the nihilist who spends 600 pages in a permanent state of almost-dying (consumption, of course) and even gets a suicide fakeout scene that would make the producers of The Young and the Restless blush.
As Myshkin's relationships develop, he is always kind, non-judgmental, honest and open with his views. But this is not the story of a good man taken advantage of, but rather the story of a man who is simply incapable of living in the real world. Norm Macdonald, after seeing the musical Cats, exclaimed: "it's about actual cats!" The reader of The Idiot will inevitably experience the same shock of recognition, muttering "Mein Gott, it's about an actual idiot!" His behavior ends up hurting not only himself, but also the people around him, the people he loves. In the climax, Nastasya and Aglaya battle for Myshkin's heart, but it's a disaster as he makes all the wrong choices.
That's not to say that it's all serious; the drama is occasionally broken up by absurdist humor straight out of Monty Python:
Everyone realized that the resolution of all their bewilderment had begun.
Postmodern games permeate the entire novel: for example, what initially appears to be an omniscient narrator is revealed in the second half to simply be another character (a deeply unreliable one at that); one who sees himself as an objective reporter of the facts, but is really a gossip and rumourmonger. Toward the end he breaks the fourth wall and starts going on bizarre digressions that recall Tristram Shandy: at one point he excuses himself to the reader for digressing too far, then digresses even further to complain about the quality of the Russian civil service. The shifts in point of view become disorienting and call attention to the artificial nature of the novel. Critically, he never really warms up to Myshkin:
In presenting all these facts and refusing to explain them, we do not in the least mean to justify our hero in the eyes of our readers. More than that, we are quite prepared to share the indignation he aroused even in his friends.
Double Thoughts and Evolutionary Psychology
The entire novel revolves around the idea of the "double thought", an action with two motives: one pure and conscious, the other corrupt and hidden. Keller comes to Myshkin in order to confess his misdeeds, but also to use the opportunity to borrow money. Awareness of the base motive inevitably leads to guilt and in some cases self-destructive behavior. This is how Myshkin responds:
You have confused your motives and ideas, as I need scarcely say too often happens to myself. I can assure you, Keller, I reproach myself bitterly for it sometimes. When you were talking just now I seemed to be listening to something about myself. At times I have imagined that all men were the same,’ he continued earnestly, for he appeared to be much interested in the conversation, ‘and that consoled me in a certain degree, for a DOUBLE motive is a thing most difficult to fight against. I have tried, and I know. God knows whence they arise, these ideas that you speak of as base. I fear these double motives more than ever just now, but I am not your judge, and in my opinion it is going too far to give the name of baseness to it—what do you think? You were going to employ your tears as a ruse in order to borrow money, but you also say—in fact, you have sworn to the fact— that independently of this your confession was made with an honourable motive.
The "double thought" is an extension of the concept of self-deception invented by evolutionary psychologist Robert Trivers, and, simply put, this book could not have existed without his work. Trivers has been writing about self-deception since the 70s in academic journals and books (including his 2011 book The Folly of Fools). The basic idea is that people subconsciously deceive themselves about the true motives of their actions, because it's easier to convince others when you don't have to lie.
Dostoyevsky's innovation lies in examining what happens when someone becomes aware of their subconscious motives and inevitably feels guilty. There is empirical evidence that inhibition of guilt makes deception more effective, but this novel inverts that problem and asks the question: what happens when that inhibition fails and guilt takes over? The author's penetrating psychological analysis finds a perfect home in the soap opera setting, as the opposition of extreme emotions engendered by the double thought complements the melodrama. Dostoyevsky even goes a step further, and argues that self-consciousness of the double thought is a double thought in itself: "I couldn't help thinking ... that everyone is like that, so that I even began patting myself on the back". There is no escape from the signaling games we play. The complexity of unconscious motives is a recurring theme:
Don't let us forget that the causes of human actions are usually immeasurably more complex and varied than our subsequent explanations of them.
In a move of pure genius, Dostoyevsky plays with this idea on three levels in parallel: first, the internal contrast between pure and corrupt motives within each person; second, the external contrast between the pure Idiot Myshkin and the corrupt society around him; third, on the philosophical level of Dionysus versus The Crucified. And in the end he comes down squarely in the camp of Dionysus and against Myshkin. Just as the Idiot is not ultimately good, so consciousness and the innocent motivations are not good either: the novel decides the issue strongly in favor of the corrupt motive, in favor of instinct over ratiocination, in favor of Dionysus over Apollo, in favor of the earthly over Christianity. We must live our lives in this world and deal with it as it is.
Double Anachronism
In the brilliant essay The Argentine Writer and Tradition, Borges writes that "what is truly native can and often does dispense with local color". Unfortunately Mssr. Dostoyevsky overloads his novel with local color, which in the end only highlights its artificiality. The lengths to which he has gone to make this novel appear as if it were a real product of the 19th century are admirable, but by overextending himself, he undermines the convincing (though fantastic) anachronism; like a double thought, the underlying deception leaks out and ruins everything. In a transparent and desperate reach for verisimilitude, he has included a series of references to real crimes from the 1860s. One cannot help but imagine the author bent over some dusty newspaper archive in the bowels of the National Library on Nevsky Prospekt, mining for details of grisly murders and executions.
Unfortunately The Idiot is anachronistic in more ways than one: as the juvenile influence of Der Antichrist hints, Dostoyevsky is a fervent anti-Christian who epitomizes the worst excesses of early-2000s New Atheism. Trivers wrote the foreword to Dawkins's The Selfish Gene, so it is no surprise that Dostoyevsky would be part of that intellectual tradition. But the heavy-handed anti-religious moralizing lacks nuance and gets old fast. His judgment of Myshkin, the representative of ideal Christianity, is heavy, but on top of that he also rants about the Catholic church, the representative of practical Christianity. He leaves no wiggle room in his condemnations.
And the lesson he wants to impart is clear: that Christianity is not only impractical and hypocritical, but actively ruins the lives of the people it touches. But these views hardly pass the smell test. While Dostoyevsky has mastered evolutionary psychology, he seems to have ignored cultural evolution. As Joe Henrich lays out in his latest book, The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous, the real-world influence of Christianity as a pro-social institution is a matter that deserves a far more nuanced examination. So if I could give Mssr. Dostoyevsky one piece of advice it would be this: less Dawkins, more Henrich please. After all, a devotee of Nietzsche should have a more subtle view of these things.
1. 1.See also Daybreak 523: "With everything that a person allows to become visible one can ask: What is it supposed to hide?"
# Metascience and Philosophy
It has been said that philosophy of science is as useful to scientists as ornithology is to birds. But perhaps it can be useful to metascientists?
### State of Play
Philosophy
In the 20th century, philosophy of science attracted first-rate minds: scientists like Henri Poincaré, Pierre Duhem, and Michael Polanyi, as well as philosophers like Popper, Quine, Carnap, Kuhn, and Lakatos. Today the field is a backwater, lost in endless debates about scientific realism which evoke the malaise of medieval angelology. Despite being part of philosophy, however, the field made actual progress, abandoning simplistic early models for more sophisticated approaches with greater explanatory power. Ultimately, philosophers reached one of two endpoints: some went full relativist, while others (like Quine and Laudan) bit the bullet of naturalism and left the matter to metascientists and psychologists. "It is an empirical question which means promote which ends".
Metascience
Did the metascientists actually pick up the torch? Sort of. There is some overlap, but (with the exception of the great Paul Meehl) they tend to focus on different problems. The current crop of metascientists is drawn, like sharks to blood, to easily quantifiable questions about the recent past (with all those p-values sitting around, how could you resist analyzing them?). They focus on different fields, and therefore different problems. They seem hesitant to make normative claims. Less tractable questions about forms of progress, norms, theory selection, etc. have fallen by the wayside. Overall I think they underrate the problems posed by philosophers.
### Rational Reconstruction
In The History of Science and Its Rational Reconstructions Lakatos proposed that theories of scientific methodology function as historiographical theories and can be criticized or compared to each other by using the theories to create "rational historical reconstructions" of scientific progress. The idea is simple: if a theory fails to rationally explain the past successes of science, it's probably not a good theory, and we should not adopt its normative tenets. As Lakatos puts it, "if the rationality of science is inductive, actual science is not rational; if it is rational, it is not inductive." He applied this "Pyrrhonian machine de guerre" not only to inductivism and confirmationism, but also to Popper.
The main issue with falsification boils down to the problem of auxiliary hypotheses. On the one hand you have underdetermination (the Duhem-Quine thesis): testing hypotheses in isolation is not possible, so when a falsifying result comes out it's not clear where the modus tollens should be directed. On the other hand there is the possibility of introducing new auxiliary hypotheses to "protect" an existing theory from falsification. These are not merely abstract games for philosophers, but very real problems that scientists have to deal with. Let's take a look at a couple of historical examples from the perspective of naïve falsificationism.
First, Newton's laws. They were already falsified at the time of publication: they failed to correctly predict the motion of the moon. In the words of Newton, "the apse of the Moon is about twice as swift" as his predictions. Despite this falsification, the Principia attracted followers who worked to improve the theory. The moon was no small problem and took two decades to solve with the introduction of new auxiliary hypotheses.
A later episode involving Newton's laws illustrates how treacherous these auxiliary hypotheses can be. In 1846 Le Verrier (I have written about him before) solved an anomaly in the orbit of Uranus by hypothesizing the existence of a new planet. That planet was Neptune and its discovery was a wonderful confirmation of Newton's laws. A decade later Le Verrier tried to solve an anomaly in the orbit of Mercury using the same method. The hypothesized new planet was never found and Newton's laws remained at odds with the data for decades (yet nobody abandoned them). The solution was only found in 1915 with Einstein's general relativity: Newton should have been abandoned this time!
Second, Prout's hypothesis: in 1815 William Prout proposed that the atomic weights of all elements were multiples of the atomic weight of hydrogen. A decade later, chemists measured the atomic weight of chlorine at 35.45x that of hydrogen and Prout's hypothesis was clearly falsified. Except, a century after that, isotopes were discovered: variants of chemical elements with different neutron numbers. Turns out that natural chlorine is composed of 76% 35Cl and 24% 37Cl, hence the atomic weight of 35.45. Whoops! So here we have a case where falsification depends on an auxiliary hypothesis (no isotopes) which the experimenters have no way of knowing.
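(Quick arithmetic check: with the integer mass numbers, 0.76 × 35 + 0.24 × 37 ≈ 35.48; plug in the actual isotopic masses of roughly 34.97 and 36.97 and the weighted average lands right on 35.45.)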
Popper tried to rescue falsificationism through a series of unsatisfying ad-hoc fixes: exhorting scientists not to be naughty when introducing auxiliary hypotheses, and saying falsification only applies to "serious anomalies". When asked what a serious anomaly is, he replied: "if an object were to move around the Sun in a square"!
Problem, officer?
There are a few problems with rational reconstruction, and while I don't think any of them are fatal, they do mean we have to tread carefully.
External factors: no internal history of science can explain the popularity of Lysenkoism in the USSR—sometimes we have to appeal to external factors. But the line between internal and external history is unclear, and can even depend on your methodology of choice.
Meta-criterion choice: what criteria do you use to evaluate the quality of a rational reconstruction? Lakatos suggested using the criteria of each theory (eg use falsificationism to judge falsificationism) but he never makes a good case for that vs a standardized set of meta-criteria.
Case studies: philosophers tend to argue using case studies and it's easy to find one to support virtually any position, even if its normative suggestions are suboptimal. Lots of confirmation bias here. The illustrious Paul Meehl correctly argues for the use of "actuarial methods" instead. "Absent representative sampling, one lacks the database needed to best answer or resolve these types of inherently statistical questions." The metascientists obviously have a great methodological advantage here.
Fake history: the history of science as we read it today is sanitized if not fabricated. Successes are remembered and failures thrown aside; chaotic processes of discovery are cleaned up for presentation. As Peter Medawar noted in Is the scientific paper a fraud?, the "official record" of scientific progress contains few traces of the messy process that actually generated said progress. He further argues that there is a desire to conform to a particular ideal of induction which creates a biased picture of how scientific discovery works.
### Falsification in Metascience
Now, let's shift our gaze to metascience. There's a fascinating subgenre of psychology in which researchers create elaborate scientific simulations and observe subjects as they try to make "scientific discoveries". The results can help us understand how scientific reasoning actually happens, how people search for hypotheses, design experiments, create new concepts, and so on. My favorite of these is Dunbar (1993), which involved a bunch of undergraduate students trying to recreate a Nobel-winning discovery in biochemistry.
Reading these papers one gets the sense that there is a falsificationist background radiation permeating everything. When the subjects don't behave like falsificationists, it's simply treated as an error or a bias. Klahr & Dunbar scold their subjects: "our subjects frequently maintained their current hypotheses in the face of negative information". And within the tight confines of these experiments it's usually true that it is an error. But this reflects the design of the experiment rather than any inherent property of scientific reasoning or progress, and extrapolating these results to real-world science in general would be a mistake.
Sociology offers a cautionary tale about what happens when you take this kind of reasoning to an extreme: the strong programme people started with an idealistic (and wrong) philosophy of science, they then observed that real-world science does not actually operate like that, and concluded that it's all based on social forces and power relations, descending into an abyss of epistemological relativism. To reasonable people like you and me this looks like an excellent reductio ad absurdum, but sociologists are a special breed and one man’s modus ponens is another man’s modus tollens. The same applies to over-extensions of falsificationism. Lakatos:
...those trendy 'sociologists of knowledge' who try to explain the further (possibly unsuccessful) development of a theory 'falsified' by a 'crucial experiment' as the manifestation of the irrational, wicked, reactionary resistance by established authority to enlightened revolutionary innovation.
One could also argue that the current focus on replication is too narrow. The issue is obscured by the fact that in the current state of things the original studies tend to be very weak, the "theories" do not have track records of success, and the replications tend to be very strong, so the decision is fairly easy. But one can imagine a future scenario in which failed replications should be treated with far more skepticism.
There are also some empirical questions in this area that are ripe for the picking: at which point do scientists shift their beliefs to the replication over the original? What factors do they use? What do they view a falsification as actually refuting (ie where do they direct the modus tollens)? Longitudinal surveys, especially in the current climate of the social sciences, would be incredibly interesting.
### Unit of Progress
One of the things philosophers of science are in agreement about is that individual scientists cannot be expected to behave rationally. Recall the example of Prout and the atomic weight of chlorine above: Prout simply didn't accept the falsifying results, and having obtained a value of 35.83 by experiment, rounded it to 36. To work around this problem, philosophers instead treated wider social or conceptual structures as the relevant unit of progress: "thinking style groups" (Fleck), "paradigms" (Kuhn), "research programmes" (Lakatos), "research traditions" (Laudan), etc. When a theory is tested, the implications of the result depend on the broader structure that theory is embedded in. Lakatos:
We have to study not the mind of the individual scientist but the mind of the Scientific Community. [...] Kuhn certainly showed that psychology of science can reveal important - and indeed sad - truths. But psychology of science is not autonomous; for the - rationally reconstructed - growth of science takes place essentially in the world of ideas, in Plato's and Popper's 'third world'.
Psychologists are temperamentally attracted to the individual, and this is reflected in their metascientific research methods which tend to focus on individual scientists' thinking, or isolated papers. Meehl, for example, simply views this as an opportunity to optimize individuals' cognitive performance:
The thinking of scientists, especially during the controversy or theoretical crises preceding Kuhnian revolutions, is often not rigorous, deep, incisive, or even fair-minded; and it is not "objective" in the sense of interjudge reliability. Studies of resistance to scientific discovery, poor agreement in peer review, negligible impact of most published papers, retrospective interpretations of error and conflict all suggest suboptimal cognitive performance.
Given the importance of broader structures however, things that seem irrational from the individual perspective might make sense collectively. Institutional design is criminally under-explored, and the differences in attitudes both over time and over the cross section of scientists are underrated objects of study.
You might retort that this is a job for the sociologists, but look at what they have produced: on the one hand they gave us Robert Merton, and on the other hand the strong programme. They don't strike me as particularly reliable.
### Fields & Theories
Almost all the scientists doing philosophy of science were physicists or chemists, and the philosophers stuck to those disciplines in their analyses. Today's metascientists on the other hand mostly come from psychology and medicine. Not coincidentally, they tend to focus on psychology and medicine. These fields tend to have different kinds of challenges compared to the harder sciences: the relative lack of theory, for example, means that today's metascientists tend to ignore some of the most central parts of philosophy of science, such as questions about Lakatos's "positive heuristic" and how to judge auxiliary hypotheses, questions about whether the logical or empirical content of theories is preserved during progress, questions about how principles of theory evaluation change over time, and so on.
That's not to say no work at all has been done in this area, for example Paul Meehl tried to construct a quantitative index of a theory's track record that could then be used to determine how to respond to a falsifying result. There's also some similar work from a Bayesian POV. But much more could be done in this direction, and much of it depends on going beyond medicine and the social sciences. "But Alvaro, I barely understand p-values, I could never do the math needed to understand physics!" If the philosophers could do it then so can the psychologists. But perhaps these problems require broader interdisciplinary involvement: not only specialists from other fields, but also involvement from neuroscience, computational science, etc.
### What is progress?
One of the biggest questions the philosophers tried to answer was how progress is made, and how to even define it. Notions of progress as strictly cumulative (ie the new theory has to explain everything explained by the old one) inevitably lead to relativism, because theories are sometimes widely accepted at an "early" stage when they have limitations relative to established ones. But what is the actual process of consensus formation? What principles do scientists actually use? What principles should they use? Mertonian theories about agreement on standards/aims are clearly false, but we don't have anything better to replace them. This is another question that depends on looking beyond psychology, toward more theory-oriented fields.
Metascience can continue the work and actually solve important questions posed by philosophers:
• Is there a difference between mature and immature fields? Should there be?
• What guiding assumptions are used for theory choice? Do they change over time, and if so, how are they accepted/rejected? What is the best set of rules? Meehl's suggestions are a good starting point: "We can construct other indexes of qualitative diversity, formal simplicity, novel fact predictivity, deductive rigor, and so on. Multiple indexes of theoretical merit could then be plotted over time, intercorrelated, and related to the long-term fate of theories." (A toy sketch of such an index follows this list.)
• Can we tell, in real time, which fields are progressing and which are degenerating? If not, is this an opening for irrationalism? What factors should we use to decide whether to stick with a theory on shaky ground? What factors should we use to judge auxiliary hypotheses? Meehl started doing good work in this area, let's build on it.
• Does null hypothesis testing undermine progress in social sciences by focusing on stats rather than the building of solid theories as Meehl thought?
• Is it actually useful, as Mitroff suggests, to have a wide array of differently-biased scientists working on the same problems? (At least when there's lots of uncertainty?)
• Gholson & Barker 1985 applied Lakatos and Laudan's theories to progress in physics and psychology (arguing that some areas of psychology do have a strong theoretical grounding), but this should be taken beyond case studies: comparative approaches with normative conclusions. Do strong theories really help with progress in the social sciences? Protzko et al 2020 offer some great data with direct normative applications, much more could be done in this direction.
• And hell, while I'm writing this absurd Christmas list let me add a cherry on top: give me a good explanation of how abduction works!
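As promised a few bullets up, here is a toy sketch of what a quantitative track-record index might look like. To be clear: this is my own illustrative invention, not Meehl's actual formula, which is worked out far more carefully in his papers.

```python
def track_record_index(outcomes):
    """A toy track-record index for a theory. `outcomes` is a chronological
    list of (riskiness, confirmed) pairs, where riskiness in (0, 1] says how
    improbable the novel prediction was a priori, and confirmed is a bool.
    Returns the running index after each test: confirmed risky predictions
    count the most, failed ones drag the index down."""
    score, weight, history = 0.0, 0.0, []
    for riskiness, confirmed in outcomes:
        score += riskiness if confirmed else 0.0
        weight += riskiness
        history.append(score / weight)
    return history

# A theory that keeps passing risky tests trends toward 1; one that survives
# only by explaining away failures trends toward 0.
print(track_record_index([(0.9, True), (0.5, True), (0.8, False)]))
# -> [1.0, 1.0, 0.636...]
```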
• Imre Lakatos, The Methodology of Scientific Research Programmes [PDF] [Amazon]
1. 1.Scientific realism is the view that the entities described by successful scientific theories are real.
2. 2.Never go full relativist.
3. 3.Quine abandoned the entirety of epistemology, "as a chapter of psychology".
4. 4.Prout's hypothesis ultimately turned out to be wrong for other reasons, but it was much closer to the truth than initially suggested by chlorine.
5. 5.The end-point of this line is the naked appeal to authority for deciding what is a serious anomaly and what is not.
6. 6.Fictions like the idea that Newton's laws were derived from and compatible with Kepler's laws abound. Even in a popular contemporary textbook for undergrads you can find statements like "Newton demonstrated that [Kepler's] laws are a consequence of the gravitational force that exists between any two masses." But of course the planets do not follow perfect elliptical orbits in Newtonian physics, and empirical deviations from Kepler were already known in Newton's time.
7. 7.Fleck is also good on this point.
8. 8.Klahr & Dunbar (1988) and Mynatt, Doherty & Tweney (1978) are also worth checking out. Also, these experiments could definitely be taken further, as a way of rationally reconstructing past advances in the lab.
9. 9.Did I mention how great he is?
10. 10.Lakatos: "It is very difficult to decide, especially since one must not demand progress at each single step, when a research programme has degenerated hopelessly or when one of two rival programmes has achieved a decisive advantage over the other."
# The Riddle of Sweden's COVID-19 Numbers
Comparing Sweden's COVID-19 statistics to other European countries, two peculiar features emerge:
1. Despite very different policies, Sweden has a similar pattern of cases.
2. Despite a similar pattern of cases, Sweden has a very different pattern of deaths.
# Sweden's Strategy
What exactly has Sweden done (and not done) in response to COVID-19?
• The government has banned large public gatherings.
• The government has partially closed schools and universities: lower secondary schools remained open while older students stayed at home.
• The government recommends voluntary social distancing. High-risk groups are encouraged to isolate.
• Those with symptoms are encouraged to stay at home.
• The government does not recommend the use of masks, and surveys confirm that very few people use them (79% "not at all" vs 2% in France, 0% in Italy, 11% in the UK).
• There was a ban on visits to care homes which was lifted in September.
• There have been no lockdowns.
How has it worked? Well, Sweden is roughly at the same level as other western European countries in terms of per capita mortality, but it's also doing much worse than its Nordic neighbors. Early apocalyptic predictions have not materialized. Economically it doesn't seem to have gained much, as its Q2 GDP drop was more or less the same as that of Norway and Denmark.
# Case Counts
Sweden has followed a trajectory similar to other Western countries with the first wave in April, a pause during the summer (Sweden took longer to settle down, however), and now a second wave in autumn.
The fact that the summer drop-off in cases happened in Sweden without lockdowns and without masks suggests that perhaps those were not the determining factors? It doesn't necessarily mean that lockdowns are ineffective in general, just that in this particular case the no-lockdown counterfactual probably looks similar.
The similarity of the trajectories plus the timing points to a common factor: climate.
Seasonality?
This sure looks like a seasonal pattern, right? And there are good a priori reasons to think COVID-19 will be slow to spread in summer: the majority of respiratory diseases all but disappear during the warmer months. This chart from Li, Wang & Nair (2020) shows the monthly activity of various viruses sorted by latitude:
The exact reasons are unclear, but it's probably a mix of temperature, humidity, behavioral factors, UV radiation, and possibly vitamin D.
However, when it comes to COVID-19 specifically there are reasons to be skeptical. The US did not have a strong seasonal pattern:
And in the southern hemisphere, Australia's two waves don't really fit a clear seasonal pattern. [Edit: or perhaps it does fit? Their second wave was the winter wave; climate differences and lockdowns could explain the differences from the European pattern?]
The WHO (yes, yes, I know) says it's all one big wave and COVID-19 has no seasonal pattern like influenza. A report from the National Academy of Sciences is also very skeptical about seasonality, making comparisons to SARS and MERS which do not exhibit seasonal patterns.
A review of 122 papers on the seasonality of COVID-19 is mostly inconclusive, citing lack of data and problems with confounding from control measures, social, economic, and cultural conditions. The results in the papers themselves "offer mixed statistical support (none, weak, or strong relationships) for the influence of environmental drivers." Overall I don't think there's compelling evidence in favor of climatic variables explaining a large percentage of variation in COVID-19 deaths. So if we can't attribute the summer "pause" and autumn "second wave" in Europe to seasonality, what is the underlying cause?
Schools?
If not the climate, then I would suggest schools, but the evidence suggests they play a very small role. I like this study from Germany which uses variation in the timing of summer breaks across states, finding no evidence for an effect on new cases. This paper utilizes the partial school closures in Sweden and finds open schools had only "minor consequences". Looking at school closures during the SARS epidemic the results are similar. The ECDC is not particularly worried about schools, arguing that outbreaks in educational facilities are "exceptional events" that are "limited in number and size".
So what are we left with? Confusion.
# Deaths
This chart shows daily new cases and new deaths for all of Europe:
There's a clear relationship between cases & deaths, with a lag of a few weeks as you would expect (and a change in magnitude due to increased testing and decreasing mortality rates). Here's what Sweden's chart looks like:
What is going on here? Fatality rates have been dropping everywhere, but cases and deaths appear to be completely disconnected in Sweden. Even the first death peak doesn't coincide with the first case peak, but that's probably because of early spread in nursing homes.
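One way to move beyond eyeballing the charts: slide the death series backward and find the lag that maximizes its correlation with cases. A rough sketch, assuming `cases` and `deaths` are equal-length lists of daily counts (hypothetical data; you'd substitute the real series, e.g. from OWID):

```python
def best_lag(cases, deaths, max_lag=40):
    """Find the lag (in days) at which deaths correlate best with earlier
    cases. If no lag yields a decent correlation, the two series really
    are disconnected rather than merely shifted. Assumes the series are
    longer than max_lag."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0
    # deaths[t] is compared against cases[t - lag]
    scores = {lag: corr(cases[:-lag], deaths[lag:]) for lag in range(1, max_lag + 1)}
    return max(scores.items(), key=lambda kv: kv[1])  # (lag_days, correlation)
```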
Are they undercounting deaths? I don't think so, total deaths seem to be below normal levels (data from euromomo):
So how do we explain the lack of deaths in Sweden?
Age?
Could it be that only young people are catching it in Sweden? I haven't found any up-to-date, day-by-day breakdowns by age, but comparing broad statistics for Sweden and Europe as a whole, they look fairly similar. Even if age could explain it, why would that be the case in Sweden and not in other countries? Why aren't the young people transmitting it to vulnerable old people? Perhaps it's happening and the lag is long enough that it's just not reflected in the data yet?
[Edit: thanks to commenter Frank Suozzo for pointing out that cases are concentrated in lower ages. I have found data from July 31 on the internet archive; comparing it to the latest figures, it appears that old people have managed to avoid getting covid in Sweden! Here's the chart showing total case counts:]
Improved Treatment?
Mortality has declined everywhere, and part of that is probably down to improved treatment. But I don't see Sweden doing anything unique which could explain the wild discrepancy.
Again I'm left confused about these cross-country differences. If you have any good theories I would love to hear them. [Edit: looks like age is the answer.]
1. 1.I think the right way to look at this is to say that Sweden has underperformed given its cultural advantages. The differences between Italian-, French-, and German-speaking cantons in Switzerland suggest a large role for cultural factors. Sweden should've followed a trajectory similar to its neighbors rather than one similar to Central/Southern Europe. Of course it's hard to say how things will play out in the long run.
2. 2.Could this all be just because of increased testing? No. While testing has increased, the rate of positive tests has also risen dramatically. The second wave is not a statistical artifact.
3. 3.Humidity seems very important, at least when it comes to influenza. See eg Absolute Humidity and the Seasonal Onset of Influenza in the Continental United States and Absolute humidity modulates influenza survival, transmission, and seasonality. There's even experimental evidence. Some papers: High Humidity Leads to Loss of Infectious Influenza Virus from Simulated Coughs, Humidity as a non-pharmaceutical intervention for influenza A.
# When the Worst Man in the World Writes a Masterpiece
Boswell's Life of Johnson is not just one of my favorite books, it also engendered some of my favorite book reviews. While praise for the work is universal, the main question commentators try to answer is this: how did the worst man in the world manage to write the best biography?
# The Man
Who was James Boswell? He was a perpetual drunk, a degenerate gambler, a sex addict, whoremonger, exhibitionist, and rapist. He gave his wife an STD he caught from a prostitute.
Selfish, servile and self-indulgent, lazy and lecherous, vain, proud, obsessed with his aristocratic status, yet with no sense of propriety whatsoever, he frequently fantasized about the feudal affection of serfs for their lords. He loved to watch executions and was a proud supporter of slavery.
“Where ordinary bad taste leaves off,” John Wain comments, “Boswell began.” The Thrales were long-time friends and patrons of Johnson; a single day after Henry Thrale died, Boswell wrote a poem fantasizing about the elderly Johnson and the just-widowed Hester: "Convuls'd in love's tumultuous throws, / We feel the aphrodisian spasm". The rest of his verse is of a similar quality; naturally he considered himself a great poet.
Boswell combined his terrible behavior with a complete lack of shame, faithfully reporting every transgression, every moronic ejaculation, every faux pas. The first time he visited London he went to see a play and, as he happily tells us himself, he "entertained the audience prodigiously by imitating the lowing of a cow."
By all accounts, including his own, he was an idiot. On a tour of Europe, his tutor said to him: "of young men who have studied I have never found one who had so few ideas as you."
As a lawyer he was a perpetual failure, especially when he couldn't get Johnson to write his arguments for him. As a politician he didn't even get the chance to be a failure despite decades of trying.
His correspondence with Johnson mostly consists of Boswell whining pathetically and Johnson telling him to get his shit together.
He commissioned a portrait from his friend Joshua Reynolds and stiffed him on the payment. His descendants hid the portrait in the attic because they were ashamed of being related to him.
Desperate for fame, he kept trying to attach himself to important people, mostly through sycophancy. In Geneva he pestered Rousseau, leading to this conversation:
Rousseau: You are irksome to me. It’s my nature. I cannot help it.
Boswell: Do not stand on ceremony with me.
Rousseau: Go away.
Later, Boswell was given the task of escorting Rousseau's mistress Thérèse Le Vasseur to England—they had an affair on the way.
When Adam Smith and Edward Gibbon were elected to The Literary Club, Boswell considered leaving because he thought the club had now "lost its select merit"!
On the positive side, his humor and whimsy made for good conversation; he put people at ease; he gave his children all the love his own father had denied him; and, somehow, he wrote one of the great works of English literature.
# The Masterpiece
The Life of Samuel Johnson, LL.D. was an instant sensation. While the works of Johnson were quickly forgotten, his biography has never been out of print in the 229 years since its initial publication. It went through 41 editions just in the 19th century.
Burke told King George III that he had never read anything more entertaining. Coleridge said "it is impossible not to be amused with such a book." George Bernard Shaw compared Boswell's dramatization of Johnson to Plato's dramatization of Socrates, and placed old Bozzy in the middle of an "apostolic succession of dramatists" from the Greek tragedians through Shakespeare and ending, of course, with Shaw himself.
It is a strange work, an experimental collage of different modes: part traditional biography, part collection of letters, and part direct reports of Johnson's life as observed by Boswell. His inspiration came not from literature, but from the minute naturalistic detail of Flemish paintings. It is difficult to convey its greatness in compressed form: Boswell is not a great writer at the sentence level, and all the famous quotes are (hilarious) Johnsonian bon mots. The book succeeds through a cumulative effect.
Johnson was 54 years old when he first met Boswell, and most of his major accomplishments (the poetry, the dictionary, The Rambler) were behind him; his wife had already died; he was already the recipient of a £300 pension from the King; his edition of Shakespeare was almost complete. All in all they spent no more than 400 days together. Boswell had limited material to work with, but what he doesn't capture in fact, he captures in feeling: an entire life is contained in this book: love and friendship, taverns and work, the glory of success and recognition, the depressive bouts of failure and penury, the inevitable tortures of aging and death.
Out of a person, Boswell created a literary personality. His powers of characterization are positively Shakespearean, and his Johnson resembles none other than the bard's greatest creation: Sir John Falstaff. Big, brash, and deeply flawed, but also lovable. He would "laugh like a rhinoceros":
Johnson could not stop his merriment, but continued it all the way till he got without the Temple-gate. He then burst into such a fit of laughter that he appeared to be almost in a convulsion; and in order to support himself, laid hold of one of the posts at the side of the foot pavement, and sent forth peals so loud, that in the silence of the night his voice seemed to resound from Temple-bar to Fleet ditch.
And around Johnson he painted an entire dramatic cast, bringing 18th century London to life: Garrick the great actor, Reynolds the painter, Beauclerk with his banter, Goldsmith with his insecurities. Monboddo and Burke, Henry and Hester Thrale, the blind Mrs Williams and the Jamaican freedman Francis Barber.
Borges (who was also a big fan) finds his parallels not in Shakespeare and Falstaff, but in Cervantes and Don Quixote. He (rather implausibly) suggests that every Quixote needs his Sancho, and "Boswell appears as a despicable character" deliberately to create a contrast.
And in the 1830s, two brilliant and influential reviews were written by two polar opposites: arch-progressive Thomas Babington Macaulay and radical reactionary Thomas Carlyle. The first thing you'll notice is their sheer magnitude: Macaulay's is 55 pages long, while Carlyle's review in Fraser's Magazine reaches 74 pages! And while they both agree that it's a great book and that Boswell was a scoundrel, they have very different theories about what happened.
## Macaulay
Never in history, Macaulay says, has there been "so strange a phænomenon as this book". On the one hand he has effusive praise:
Homer is not more decidedly the first of heroic poets, Shakspeare is not more decidedly the first of dramatists, Demosthenes is not more decidedly the first of orators, than Boswell is the first of biographers. He has no second. He has distanced all his competitors so decidedly that it is not worth while to place them.
On the other hand, he spends several paragraphs laying into Boswell with gusto:
He was, if we are to give any credit to his own account or to the united testimony of all who knew him, a man of the meanest and feeblest intellect. [...] He was the laughing-stock of the whole of that brilliant society which has owed to him the greater part of its fame. He was always laying himself at the feet of some eminent man, and begging to be spit upon and trampled upon. [...] Servile and impertinent, shallow and pedantic, a bigot and a sot, bloated with family pride, and eternally blustering about the dignity of a born gentleman, yet stooping to be a talebearer, an eavesdropper, a common butt in the taverns of London.
Macaulay's theory is that while Homer and Shakespeare and all the other greats owe their eminence to their virtues, Boswell is unique in that he owes his success to his vices.
He was a slave, proud of his servitude, a Paul Pry, convinced that his own curiosity and garrulity were virtues, an unsafe companion who never scrupled to repay the most liberal hospitality by the basest violation of confidence, a man without delicacy, without shame, without sense enough to know when he was hurting the feelings of others or when he was exposing himself to derision; and because he was all this, he has, in an important department of literature, immeasurably surpassed such writers as Tacitus, Clarendon, Alfieri, and his own idol Johnson.
Of the talents which ordinarily raise men to eminence as writers, Boswell had absolutely none. There is not in all his books a single remark of his own on literature, politics, religion, or society, which is not either commonplace or absurd. [...] Logic, eloquence, wit, taste, all those things which are generally considered as making a book valuable, were utterly wanting to him. He had, indeed, a quick observation and a retentive memory. These qualities, if he had been a man of sense and virtue, would scarcely of themselves have sufficed to make him conspicuous; but, because he was a dunce, a parasite, and a coxcomb, they have made him immortal.
The work succeeds partly because of its subject: if Johnson had not been so extraordinary, then airing all his dirty laundry would have just made him look bad.
No man, surely, ever published such stories respecting persons whom he professed to love and revere. He would infallibly have made his hero as contemptible as he has made himself, had not his hero really possessed some moral and intellectual qualities of a very high order. The best proof that Johnson was really an extraordinary man is that his character, instead of being degraded, has, on the whole, been decidedly raised by a work in which all his vices and weaknesses are exposed.
And finally, Boswell provided Johnson with a curious form of literary fame:
The reputation of [Johnson's] writings, which he probably expected to be immortal, is every day fading; while those peculiarities of manner and that careless table-talk the memory of which, he probably thought, would die with him, are likely to be remembered as long as the English language is spoken in any quarter of the globe.
## Carlyle
Carlyle rates Johnson's biography as the greatest work of the 18th century. In a sublime passage that brings tears to my eyes, he credits the Life with the power of halting the inexorable passage of time:
Rough Samuel and sleek wheedling James were, and are not. [...] The Bottles they drank out of are all broken, the Chairs they sat on all rotted and burnt; the very Knives and Forks they ate with have rusted to the heart, and become brown oxide of iron, and mingled with the indiscriminate clay. All, all has vanished; in every deed and truth, like that baseless fabric of Prospero's air-vision. Of the Mitre Tavern nothing but the bare walls remain there: of London, of England, of the World, nothing but the bare walls remain; and these also decaying (were they of adamant), only slower. The mysterious River of Existence rushes on: a new Billow thereof has arrived, and lashes wildly as ever round the old embankments; but the former Billow with its loud, mad eddyings, where is it? Where! Now this Book of Boswell's, this is precisely a revocation of the edict of Destiny; so that Time shall not utterly, not so soon by several centuries, have dominion over us. A little row of Naphtha-lamps, with its line of Naphtha-light, burns clear and holy through the dead Night of the Past: they who are gone are still here; though hidden they are revealed, though dead they yet speak. There it shines, that little miraculously lamplit Pathway; shedding its feebler and feebler twilight into the boundless dark Oblivion, for all that our Johnson touched has become illuminated for us: on which miraculous little Pathway we can still travel, and see wonders.
Carlyle disagrees completely with Macaulay: it is not because of his vices that Boswell could write this book, but rather because he managed to overcome them. He sees in Boswell a hopeful symbol for humanity as a whole, a victory in the war between the base and the divine in our souls.
In fact, the so copious terrestrial dross that welters chaotically, as the outer sphere of this man's character, does but render for us more remarkable, more touching, the celestial spark of goodness, of light, and Reverence for Wisdom, which dwelt in the interior, and could struggle through such encumbrances, and in some degree illuminate and beautify them.
Boswell's shortcomings were visible: he was "vain, heedless, a babbler". But if that was the whole story, would he really have chosen Johnson? He could have picked more illustrious targets, richer ones, perhaps some powerful statesman or an aristocrat with a distinguished lineage. "Doubtless the man was laughed at, and often heard himself laughed at for his Johnsonism". Boswell must have been attracted to Johnson by nobler motives. And to do that he would have to "hurl mountains of impediment aside" in order to overcome his nature.
The plate-licker and wine-bibber dives into Bolt Court, to sip muddy coffee with a cynical old man, and a sour-tempered blind old woman (feeling the cups, whether they are full, with her finger); and patiently endures contradictions without end; too happy so he may but be allowed to listen and live.
The Life is not great because of Boswell's foolishness, but because of his love and his admiration, an admiration that Macaulay considered a disease. Boswell wrote that in Johnson's company he "felt elevated as if brought into another state of being".
His sneaking sycophancies, his greediness and forwardness, whatever was bestial and earthy in him, are so many blemishes in his Book, which still disturb us in its clearness; wholly hindrances, not helps. Towards Johnson, however, his feeling was not Sycophancy, which is the lowest, but Reverence, which is the highest of human feelings.
On Johnson's personality, Carlyle writes: "seldom, for any man, has the contrast between the ethereal heavenward side of things, and the dark sordid earthward, been more glaring". And this is what Johnson wrote about Falstaff in his Shakespeare commentary:
Falstaff is a character loaded with faults, and with those faults which naturally produce contempt. [...] the man thus corrupt, thus despicable, makes himself necessary to the prince that despises him, by the most pleasing of all qualities, perpetual gaiety, by an unfailing power of exciting laughter, which is the more freely indulged, as his wit is not of the splendid or ambitious kind, but consists in easy escapes and sallies of levity, which make sport but raise no envy.
Johnson obviously enjoyed the comparison to Falstaff, but would it be crazy to also see Boswell in there? The Johnson presented to us in the Life is a man who had to overcome poverty, disease, depression, and a constant fear of death, but never let those things poison his character. Perhaps Boswell crafted the character he wished he could become: Johnson was his Beatrice—a dream, an aspiration, an ideal outside his grasp that nonetheless thrust him toward greatness. Through a process of self-overcoming Boswell wrote a great book on self-overcoming.
# Mediocrities Everywhere...I Absolve You
The story of Boswell is basically the plot of Amadeus, with the role of Salieri being played by Macaulay, by Carlyle, by me, and—perhaps even by yourself, dear reader. The line between admiration, envy, and resentment is thin, and crossing it is easier when the subject is a scoundrel. But if Bozzy could set aside resentment for genuine reverence, perhaps there is hope for us all. And yet...it would be an error to see in Boswell the Platonic Form of Mankind.
Shaffer and Forman's film portrays Mozart as vulgar, arrogant, a womanizer, bad with money—but, like Bozzy, still somehow quite likable. In one of the best scenes of the film, we see Mozart transform the screeching of his mother-in-law into the Queen of the Night Aria; thus Boswell transformed his embarrassments into literary gold. He may be vulgar, but his productions are not. He may be vulgar, but he is not ordinary.
Perhaps it is in vain that we seek correlations among virtues and talents: perhaps genius is ineffable. Perhaps it's Ramanujans all the way down. You can't even say that genius goes with independence: there's nothing Boswell wanted more than social approval. I won't tire you with clichés about the Margulises and the Musks.
Would Johnson have guessed that he would be the mediocrity, and Bozzy the genius? Would he have felt envy and resentment? What would he say, had he been given the chance to read in Carlyle that Johnson's own writings "are becoming obsolete for this generation; and for some future generation may be valuable chiefly as Prolegomena and expository Scholia to this Johnsoniad of Boswell"?
If you want to read The Life of Johnson, I recommend a second-hand copy of the Everyman's Library edition: cheap, reasonably sized, and the paper & binding are great.
1. In the very first letter Boswell wrote to Rousseau, he described himself as "a man of singular merit".
2. They were "rediscovered" in the early 1900s.
3. While some are quick to dismiss the non-direct parts, I think they're necessary, especially the letters which illuminate a different side of Johnson's character.
4. Lecture #10 in Professor Borges: A Course on English Literature.
5. What happened to the novella-length book review? Anyway, many of those pages are taken up by criticism of John Wilson Croker's incompetent editorial efforts.
High Replicability of Newly-Discovered Social-behavioral Findings is Achievable: a replication of 16 papers that followed "optimal practices" finds a high rate of replicability and virtually identical effect sizes as the original studies.
How do you decide what to replicate? This paper attempts to build a model that can be used to pick studies to maximize utility gained from replications.
Guzey on that deworming study, tracks which variables are reported across 5 different drafts of the paper starting in 2011. "But then you find that these variables didn’t move in the right direction. What do you do? Do you have to show these variables? Or can you drop them?"
I've been enjoying the NunoSempre forecasting newsletter, a monthly collection of links on forecasting.
The 16th paragraph in this piece on the long-term effects of coronavirus mentions that 2 out of 3 people with "long-lasting" COVID-19 symptoms never had COVID to begin with.
Gwern's giant GPT-3 page. The Zizek Navy Seal Copypasta is incredible, as are the poetic imitations.
Ethereum is a Dark Forest. "In the Ethereum mempool, these apex predators take the form of “arbitrage bots.” Arbitrage bots monitor pending transactions and attempt to exploit profitable opportunities created by them."
Tyler Cowen in conversation with Nicholas Bloom, lots of fascinating stuff on innovation and progress. "Just in economics — when I first started in economics, it was standard to do a four-year PhD. It’s now a six-year PhD, plus many of the PhD students have done a pre-doc, so they’ve done an extra two years. We’re taking three or four years longer just to get to the research frontier." Immediately made me think of Scott Alexander's Ars Longa, Vita Brevis.
The Progress Studies for Young Scholars youtube channel has a bunch of interesting interviews, including Cowen, Collison, McCloskey, and Mokyr.
From the promising new Works in Progress magazine, Progress studies: the hard question.
I've written a parser for your Kindle's My Clippings.txt file. It removes duplicates, splits them up by book, and outputs them in convenient formats. Works cross-platform.
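The format is simple: entries are separated by lines of ten equals signs, with a title line, a metadata line, a blank line, and then the clipping text. A minimal sketch of the parsing step (this is not the actual tool's code):

```python
def parse_clippings(path):
    """Split My Clippings.txt into per-book highlight lists,
    dropping exact duplicates. Entry layout: title line, metadata
    line, blank line, clipping text, '==========' separator."""
    books = {}
    with open(path, encoding='utf-8-sig') as f:  # Kindle files carry a BOM
        entries = f.read().split('==========')
    for entry in entries:
        lines = [l.strip() for l in entry.strip().splitlines() if l.strip()]
        if len(lines) < 3:
            continue  # bookmarks and empty entries have no text
        title, text = lines[0], ' '.join(lines[2:])
        if text not in books.setdefault(title, []):
            books[title].append(text)  # de-duplicate within each book
    return books
```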
Generative bad handwriting in 280 characters. You can find a lot more of that sort of thing by searching for #つぶやきProcessing on twitter.
A new ZeroHPLovecraft short story, Key Performance Indicators. Black Mirror-esque.
A great skit about Ecclesiastes from Israeli sketch show The Jews Are Coming. Turn on the subs.
And here's some sweet Dutch prog-rock/jazz funk from the 70s.
• Piranesi by Susanna Clarke. 16 years after Jonathan Strange & Mr Norrell, a new novel from Susanna Clarke! It's short and not particularly ambitious, but I enjoyed it a lot. A tight fantastical mystery that starts out similar to The Library of Babel but then goes off in a different direction.
• The Poems of T. S. Eliot: the great ones are great, and there's a lot of mediocre stuff in between. Ultimately a bit too grey and resigned and pessimistic for my taste. I got the Faber & Faber hardcover edition and would not recommend it, it's unwieldy and the notes are mostly useless.
• Antkind by Charlie Kaufman. A typically Kaufmanesque work about a neurotic film critic and his discovery of an astonishing piece of outsider art. Memory, consciousness, time, doubles, etc. Extremely good and laugh-out-loud funny for the first half, but the final 300-400 pages were a boring, incoherent psychedelic smudge.
• Under the Volcano by Malcolm Lowry. Very similar to another book I read recently, Lawrence Durrell's Alexandria Quartet. I prefer Durrell. Lowry doesn't have the stylistic ability to make the endless internal monologues interesting (as eg Gass does in The Tunnel), and I find the central allegory deeply misguided. Also, it's the kind of book that has a "central allegory".
• Less than One by Joseph Brodsky. A collection of essays, mostly on Russian poetry. If I knew more about that subject I think I would have enjoyed the book more. The essays on his life in Soviet Russia are good.
• Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science by Stuart Ritchie. Very good, esp. if you are not familiar with the replication crisis. Some quibbles about the timing and causes of the problems. Full review here.
• The Idiot by "Dostoyevsky". Review forthcoming.
• Borges and His Successors: The Borgesian Impact on Literature and the Arts: a collection of fairly dull essays with little to no insight.
• Samuel Johnson: Literature, Religion and English Cultural Politics from the Restoration to Romanticism by J.C.D. Clark: a dry but well-researched study on an extraordinarily narrow slice of cultural politics. Not really aimed at a general audience.
• Dhalgren by Samuel R. Delany. A wild semi-autobiographical semi-post-apocalyptic semi-science fiction monster. It's a 900 page slog, it's puerile, the endless sex scenes (including with minors) are pointless at best, the characters are uninteresting, there's barely any plot, the 70s counterculture stuff is just comical, and stylistically it can't reach the works it's aping. So I can see why some people hate it. But I actually enjoyed it, it has a compelling strangeness to it that is difficult to put into words (or perhaps I was just taken in by all the unresolved plot points?). Its sheer size is a quality in itself, too. Was it worth the effort? Could I recommend it? Probably not.
• Novum Organum by Francis Bacon. While he did not actually invent the scientific method, his discussion of empiricism, experiments, and induction was clearly a step in that direction. The first part deals with science and empiricism and induction from an abstract perspective and it feels almost contemporary, like it was written by a time traveling 19th century scientist or something like that. The quarrel between the ancients and the moderns is already in full swing here, Bacon dunks on the Greeks constantly and upbraids people for blindly listening to Aristotle. Question received dogma and popular opinions, he says. He points to inventions like gunpowder and the compass and printing and paper and says that surely these indicate that there's a ton of undiscovered ideas out there, we should go looking for them. He talks about cognitive biases and scientific progress:
we are laying the foundations not of a sect or of a dogma, but of human progress and empowerment.
Then you get to the second part and the middle ages hit you like a freight train, you suddenly realize this is no contemporary man at all and his conception of how the world works is completely alien. Ideas that to us seem bizarre and just intuitively nonsensical (about gravity, heat, light, biology, etc.) are only common sense to him. He repeats absurdities about worms and flies arising spontaneously out of putrefaction, that light objects are pulled to the heavens while heavy objects are pulled to the earth, and so on. Not just surface-level opinions, but fundamental things that you wouldn't even think someone else could possibly perceive differently.
You won't learn anything new from Bacon, but it's a fascinating historical document.
• The Book of Marvels and Travels by John Mandeville. This medieval bestseller (published around 1360) combines elements of travelogue, ethnography, and fantasy. It's unclear how much of it people believed, but there was huge demand for information about far-off lands and marvelous stories. Mostly compiled from other works, it was incredibly popular for centuries. In the age of exploration (Columbus took it with him on his trip) people were shocked when some of the fantastical stories (eg about cannibals) actually turned out to be true. The tricks the author uses to generate verisimilitude are fascinating: he adds small personal touches about people he met, sometimes says that he doesn't know anything about a particular region because he hasn't been there, etc.
https://ai.stackexchange.com/questions/25183/can-someone-explain-me-what-does-this-loss-curve-says
# Can someone explain what this loss curve says?
I was training a CNN model on TensorFlow. After a while I came back and saw this loss curve:
The green curve is training loss and the gray one is validation loss. I know that before epoch 394 the model is heavily overfitting, but I have no idea what happened after that.
Also, here are the accuracy curves, if they help:
I'm using categorical cross-entropy and this is the model I am using:
and here is a link to PhysioNet's challenge, which I am working on: https://physionet.org/content/challenge-2017/1.0.0/
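(For context, not part of the original question: curves like these are typically produced from the history object Keras returns from fit. A minimal sketch, assuming a compiled model and train/validation arrays x_train, y_train, x_val, y_val:)

```python
import matplotlib.pyplot as plt

# Train and keep the per-epoch metrics Keras records in History
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=500)

# Diverging curves (validation loss rising while training loss falls)
# are the usual signature of overfitting
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.legend()
plt.show()
```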
• Maybe you should provide a little bit of context, i.e. which task you are trying to solve with CNNs (i.e. which dataset), which loss function you are using, maybe you should do plot_model(your_model) and report the architecture of your model. – nbro Dec 14 '20 at 10:22
• I'm trying to classify ECG signals in PhysioNet 2017 challenge. My output is four classes and I'm using categorical cross-entropy as loss function. – Sepehr Golestanian Dec 14 '20 at 10:35
• Please, edit your post to include these details (maybe also a link to the challenge). – nbro Dec 14 '20 at 10:39
• I've edited the question. – Sepehr Golestanian Dec 14 '20 at 10:59
• Why are you using a CNN to classify ECG signals? Aren't ECG signals just numerical time series? – nbro Dec 14 '20 at 11:08
https://juliadatascience.io/cairomakie
5.1 CairoMakie.jl
Let’s start with our first plot, some scatter points with lines between them:
using CairoMakie
CairoMakie.activate!()
fig = scatterlines(1:10, 1:10)
Note that the previous plot is the default output, which we probably need to tweak by using axis names and labels.
Also note that every plotting function like scatterlines creates and returns a new Figure, Axis and plot object in a collection called FigureAxisPlot. These are known as the non-mutating methods. On the other hand, the mutating methods (e.g. scatterlines!, note the !) just return a plot object which can be appended into a given axis or the current_figure().
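To make the distinction concrete, here is a small sketch (variable names like `ax` are just for illustration): the non-mutating call creates everything, while the mutating call appends to the axis it is given.

```julia
using CairoMakie
CairoMakie.activate!()

# Non-mutating: returns a FigureAxisPlot, which can be destructured
fig, ax, plt = scatterlines(1:10, 1:10)

# Mutating: appends a second plot to the existing axis
scatterlines!(ax, 1:10, (1:10) .^ 2)

fig
```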
The next question that one might have is: how do I change the color or the marker type? This can be done via attributes, which we do in the next section.
CC BY-NC-SA 4.0 Jose Storopoli, Rik Huijzer, Lazaro Alonso
http://mathhelpforum.com/algebra/222484-complex-numbers.html
# Math Help - complex numbers
## Re: complex numbers
Originally Posted by cxz7410123
Let $\rho = 4\exp \left( {\frac{{\pi i}}{3}} \right)$ and $\eta = \exp \left( {\frac{{2\pi i}}{3}} \right)$.
Your roots are $\rho\cdot \eta^k$ where $k=0,~1,~2$.
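The quoted question did not survive extraction, but the answer is consistent with solving $z^3 = -64$: indeed $\rho^3 = 64\,e^{\pi i} = -64$ and $\eta^3 = e^{2\pi i} = 1$, so $\left(\rho\,\eta^k\right)^3 = \rho^3\,\eta^{3k} = -64$ for each $k = 0, 1, 2$.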
https://newcleckitdominie.wordpress.com/2011/10/26/on-the-threshold-of-what/
## On the threshold of what?
A contact, who I’ll not identify beyond saying that he’s another HE teacher junior even to me, is currently enduring an education course as part of his probation. Such courses often seem to reflect the research interests of the course leaders rather than the needs of the probationers; the course leader in this case is a big fan of “threshold concepts” (TCs) and has constructed the course around them. Thanks to my contact, I’ve found myself for the first time acutely rather than vaguely aware of the idea and forced to give some thought to what can or can’t be got out of it.
I’m not going to describe the idea in detail here, because Mick Flanagan at UCL has done so very thoroughly on his Threshold Concepts webpage, and has provided lots of links to other articles. (I’d not previously realised how widely the term has caught on: as so often in education, theory-friendly Australia seems to have embraced it with more enthusiasm than the cynical British Isles, and it has remained marginal so far in the mathematical education literature; but this may change.) To quote Flanagan quoting the original gurus, Jan (Erik) Meyer and Ray Land,
‘Threshold Concepts’ may be considered to be “akin to passing through a portal” or “conceptual gateway” that opens up “previously inaccessible way[s] of thinking about something”.
TCs apparently have roughly eight “characteristics”, but the defining one is summarised by Flanagan as follows.
Examples of the threshold concept must be transformative and involve a traverse through a liminal space. They are likely to be characterised by many of, but not necessarily all of, the other features listed above.
Rough translation: TCs change the learner’s way of looking at things, and there’s a messy transition period when the learner is confused about what they understand. (It’s interesting to note that my contact was given as a starting point a paper by Meyer & Land (2003) in which this defining criterion is lacking, allowing the notion to burgeon significantly more than Flanagan’s summary would allow. I’ll return to this below.)
The metaphor of a threshold is natural enough. We are aware as teachers that certain topics cost our students more to master than others, and both as teachers and learners that when we look back at some topics they have done far more than others to open new territory to us — so the rather Narnian analogy of stepping through a narrow gateway into a new landscape can often seem appropriate. Neither do the characteristics outlined by Flanagan seem unfamiliar: crossing these thresholds may be “transformative”, “troublesome”, “irreversible” and so on. At this stage, though, in encountering a new theory, I find myself craving specific examples, and this is where my doubts begin.
Let’s start with a fairly convincing example. Easdown (2009) [Int J. Math. Ed. Sci. Tech. 40(7), 941-949] describes mathematical proof as a threshold concept. This paper emphasises the tension between “semantic” and “syntactic” reasoning, and illustrates the vulnerability of syntactic reasoning when a semantic understanding is lacking. This semantic/syntactic tension echoes the tension between “concept images” and “concept definitions” described in Alcock & Simpson’s Ideas from Mathematical Education, and is indeed discussed in their chapter 3; a shorter discussion can be found in chapter 2 of Cox (2011). It is easy to fall into the trap of seeing concept images or semantic reasoning as non-expert forms of thought, and concept definitions or syntactic reasoning as expert forms (for an example see Wormley’s abstract on p. 111 of this conference booklet). However, even expert mathematical reasoning involves a complex play between these formal and less formal modes of thought, as Alcock and Simpson make clear; Davis & Hersh’s Mathematical Experience is a famous exposition of the point.
This need to hold different modes of thought together is what makes proof so hard, especially for students accustomed to think of maths as entirely procedural manipulation or of reasoning as driven by plausibility rather than proof. When definitions become detached from images (or syntax from semantics), the reasoner becomes vulnerable to careless “formulaic” errors of the kind discussed by Easdown — such as his wonderful one-line “proof” of Cayley–Hamilton. When images become detached from definitions, the reasoner is easily bamboozled by irrelevant associations or other deficiencies in his/her images. (This is one reason why Halmos’s advice to stockpile examples for any concept is so sensible.) Learning to construct and manipulate proofs often does take the learner through a “liminal space”, i.e. a period when they are somewhat confused about what they’re doing, and during which their apparent ability may even decrease for a while. James Atherton calls this “learning as loss”, and he notes its overlap with TCs; there’s a whole topic here related to “self-theories” and vulnerability to failure that I hope to return to in a later post.
Unfortunately, Meyer & Land (2003) offer two examples of “threshold concepts” in mathematics that seem less convincing than Easdown’s. The first is the concept of complex numbers; the second, of the limit of a function. These are evidently core concepts, in the sense that they are foundational to large chunks of subsequent mathematics; they are also, speaking from experience, concepts that quite a few students run into problems with. (It must be remembered, though, that for many students very few concepts in mathematics are unproblematic.)
I’ve taught introductory complex numbers to many hundreds of students over the years, mostly on engineering rather than maths courses, and I’ve yet to see convincing evidence of a substantial liminal space. Most students grasp the arithmetic and algebra of complex numbers readily, and also adapt quite easily to the interpretation of complex numbers as points in a 2D plane. (This is naturally presented as an extension of the “number line” familiar from primary school, which is a useful demystifier.) Students do often struggle with the process of calculating arguments, but less from lack of understanding than because an entirely different problem — the non-equivalence of “$\tan(\theta) = a$” and “$\theta = \tan^{-1}(a)$” — rears its head. (My evidence for this is that students have relatively little difficulty sketching numbers anywhere in the complex plane or calculating arguments in the range $-\pi/2 < \mathrm{arg}(z) < \pi/2$; it is in the other quadrants that problems occur.) Many students also struggle with the exponential form, but my impression is that this reflects simply their lack of fluency with exponentials in general — a topic that seems to receive too little attention in school.
Complex numbers, in fact, may be a good example of a concept that was liminal in the historical development of mathematics — hence the mystifying terminology such as “imaginary” and “complex” itself — but that need not be liminal for students. (As a side point, Meyer and Land’s attempt to explain real numbers as “those we all deal with in the ‘real’ world; numbers we can for example count on our fingers” suggests that they too may have been confused by the terminology.) Those students with a philosophical bent are liable to find complex numbers problematic if it is the first time they have been forced to ask what a “number” is — and there were several shifts in this concept over the millennia to incorporate first irrational numbers, then negative numbers, before the arrival of imaginaries. Nevertheless, mathematics and the philosophy of mathematics are not coterminous, and few sane people these days would suggest teaching maths in a manner intrinsically front-loaded with philosophy any more than they’d suggest teaching it starting from foundational set theory. (What such “New Math” approaches failed to recognise was the phenomenon we now call “didactical inversion”: it’s often easier to learn even a logically structured subject in messy and indirect ways and to rebuild the formal structure once one has a rough map of the territory.)
The limit of a function is also an interesting example, though not (I think) of a threshold concept. Meyer & Land (2003) state that:
The limit as x tends to zero of the function f(x) = (sine x)/x is in fact one (1), which is counter intuitive. In the simple (say, geometric), imagining of this limit is the ratio of two entities (the sine of x, and x) both of which independently tend to zero as x tends to zero…
I feel that the authors’ evident nervousness about the notation, as well as their problems with grammar, are revealing. The basic concept image of a limit is not generally hard to grasp, although as Artigue, quoted by Meyer and Land, indicates, the everyday associations of the word may cause problems (as they do with “imaginary” numbers). This verbal pollution of concept images is a widespread problem in mathematics, as in other fields (e.g. quantum physics or philosophy) where ordinary language is being repurposed, and a teacher must be alert to such semantic leakage. But there’s no need to restrict ourselves to verbal approaches to concept images. Plot the graph of a function; trace the graph with your finger; where your finger goes as $x \to 0$ is the limit of the function as $x \to 0$. What was notoriously troublesome, again historically, was making this intuitive definition rigorous and mopping up the awkward cases, and the story of mathematics from Newton to Cauchy is somewhat bloodstained as a result. (Berkeley’s Discourse Addressed to an Infidel Mathematician is the classic contribution.) But the difficulty here is once more that of learning to work with a formal concept definition which at first seems disjoint from the intuitive concept image, and later forces the image to be refined. Formal epsilon-delta arguments are infamously problematic for undergraduates, but for many these arguments are the first they have been forced to conduct rigorously except as highly circumscribed ritual exercises.
A paper by Szydlik (2000) [J. Res. Math. Ed. 31(3): 258-276] is relevant here. Szydlik argues convincingly that the idea of a limit involves several concepts for which students may have inadequate or misleading concept images: real numbers, functions and the (potentially) infinite. Students’ ability to make use of the formal concept definitions then depends both on their existing concept images and on their epistemology — whether they see mathematics as an arbitrary construction imposed by authority figures or as a system with its own internal consistency. The difficulty crossing the apparent threshold of “limit” thus seems to be merely a manifestation of the more pervasive problem of reconciling formal and informal modes of thought.
A paper by Scheja & Pettersson (2010) [Higher Ed. 59(2): 221-241], which looks in detail at students’ understanding of the notion of a definite integral as a limit, appears at first to support Meyer and Land’s identification of “limit” as a threshold concept. In fact, they focus on a particular instance, which is the limit appearing in the definition of the definite integral. The definite integral emerges from this paper as a much better candidate for a TC, because — as Scheja and Pettersson note — through the Fundamental Theorem of Calculus students are forced to connect integration as an already familiar algorithmic procedure (anti-differentiation) with the previously separate geometrical notion of calculating areas. This is genuinely heady conceptual stuff, which is why the Fundamental Theorem is called a Fundamental Theorem…
What Meyer and Land’s mathematical examples suggest is that, with a little misunderstanding and possibly an approach more informed by the history or philosophy of maths than by the subject itself, numerous troublesome concepts can be presented as threshold concepts when in fact they merely reflect troubles that are rooted elsewhere. Such an over-enthusiasm to identify TCs is also encouraged when, as in Meyer & Land (2003), no minimal definition is given. But does this really do any harm?
In one sense, no it doesn’t: Meyer and Land have an idea to sell as widely as possible, and if they want to think of $\sqrt{-1}$ as a threshold concept, it doesn’t hurt me any more than if Lacan wants to think of it as equivalent to his willy. Similarly, by considering whether a threshold is a useful way to look at $\sqrt{-1}$, I may be able to sharpen up my own thinking on how to teach it and where the problems really lie. (Lacan’s approach is probably not helpful here.) As a proponent of the use of multiple mythologies to inform teaching practice I’m reluctant to turn down another contribution to the myth-kitty. Before accepting the idea, though, I’m going to look at it in terms of that rough-and-ready distinction: can threshold concepts usefully be considered as a metaphor, a myth or a state religion?
First, let’s examine the metaphor of the threshold a little more deeply. Like so many metaphors in academic and everyday communication, there’s a strong spatial element to it. A threshold is at best an extended line; more typically, a much more localised feature like the narrow portal that illustrates TCs on Flanagan’s site. To think in terms of such a threshold, then, is to think of a concept that is localised and specific rather than pervasive: it primes us to see the “concept” in question as being, say, “limit” rather than “tension between image and definition”. In some disciplines this may be helpful; in mathematics I fear it can become misleading. From different starting points both Szydlik (2000) and Scheja and Pettersson (2010) seem to reach similar conclusions: students’ problems with limits have a great deal to do with their existing beliefs and epistemologies, or how they place limits in the wider context of the discipline. The “liminal” issues in mathematics, I suspect, are more often broad and pervasive than localised, and applying the localised threshold metaphor may make it harder for us to see the connections between individual difficulties.
The other problem with talking of “threshold concepts” is that it tends to suggest a problem with a fixed location, tied to particular features in the mathematical landscape. This contrasts with a view that sees difficulties arising from conflicts between the students’ existing habits or images and the new ideas they are trying to assimilate. Complex numbers, for example, may be liminal for students who have already developed a rigid concept of “number”, but much less so for those who have previously defined a number simply as something with which one does sums — an intuitive definition that is strangely consistent with the more sophisticated idea of a number as any object that satisfies the axioms for numbers. The temptation inherent in the threshold metaphor is to ignore differences between students — always a temptation for any teacher or academic administrator, and always dangerous.
At least the status of TCs as a metaphor is fairly hard to dispute. Moving to the next level of my personal classification system, I find it harder to see how they could be regarded as a myth; that is, as a story that points imaginatively beyond itself. It seems telling that the papers I’ve seen that have sought to identify TCs in mathematics have appeared rather unsure what to do with them once they’ve found them. Easdown (2009) mentions TCs in his first paragraph but never again, and the reference serves only as a theoretically “respectable” way of saying that proof is important and troublesome in mathematics education — very true, but not novel. Scheja and Pettersson (2010) grapple to draw out some consequences of identifying limits and integrals as a TC, but seem to get no further than they did by identifying these concepts as troublesome and as offering the opportunity to transform students’ understanding. It’s unclear what has been gained by introducing the word “threshold” at all.
As ever, the dangers really develop when a myth starts to become a state religion. As far as I can tell, TCs haven’t yet reached this status — although this is no thanks to the course organisers who are inflicting them on my contact, and no thanks to comments like “they constitute an obvious, and perhaps neglected, focus for evaluating teaching strategies and learning outcomes” (Meyer & Land 2003). (In justice to Meyer and Land, they do note the danger that TCs could become merely a tool of power, though it’s not clear that they realise that the entire idea, not just specific instances, could function in this way.) Supposing, though, that TCs do become part of the orthodox discourse of education, endorsed and enforced by university management: what ills might result?
The first consequence I can see is that far too many thresholds will be identified, because this identification becomes necessary for any component of the curriculum to be taken seriously. (It may also occur because probationers are under an obligation to find threshold concepts in their discipline or fail their probation.) The effect on education may be akin to a process of medicalisation, whereby sources of difficulty for students are eagerly diagnosed as TCs and — in the absence of any genuine insight — this diagnosis becomes an excuse for the situation rather than the means to a cure. As, by some cynical accounts, the multiplication of diagnosable psychiatric conditions has mostly served to enrich psychiatrists, the multiplication of thresholds is likely to serve mostly to enrich those who set themselves up as experts on TCs. And, because the phrase “threshold concept” couples a strong visual and spatial metaphor with an almost universally applicable abstract noun, it has a rhetorical power that could make it very appealing to charlatans.
The other danger I can see from the official endorsement of TCs is that once a threshold is identified, and treated as such, crossing it becomes a kind of initiation ritual. The idea, as I understand it, is that this should happen naturally:
On mastering a threshold concept the learner begins to think as does a professional in that discipline and not simply as a student of that discipline.(Flanagan)
Under official sanction, though, a newly defined threshold could readily become a locked gate to that space — rather like an end-of-level baddie from an old-fashioned computer game. Watching from some thousands of kilometres away the North American disputes over calculus reform, I get the impression that this is what has happened there. Rather than being a collection of core concepts, at least some of which are fairly intuitive, calculus seems to have become established as the threshold separating “elementary” from “advanced” mathematics. Thus it has been rendered a great deal more intimidating to students, who are not slow to pick up on either obstacles or formal progress criteria; and thus it has attracted the politics that initiation rituals always seem to attract, with fervent battles over the right to define the threshold and the right to determine by what means it may be crossed. (I’m reminded, with great sadness, of the energy that so many Christian churches put into deciding who is properly baptised, who may be admitted to communion, who is properly a church member and so on.)
So where does this leave us? Some threshold concepts can perhaps helpfully be identified, even in maths, and I’m prepared to believe that searching for them, using the criterion of “liminality”, could be a useful exercise for a teacher. Such a search, though, has to be very conscious of the other peculiarities of mathematical learning: in particular, that mathematical “concepts” tend to have a dual existence as definitions and images; that the historical or even the formal development of mathematics is not always a good model for its didactic development; and that the difficulties encountered by students often originate from mundane issues, such as lack of practice manipulating trig functions, rather than from deep ontological problems. Unless these peculiarities are acknowledged, mapping out threshold concepts may just end up building a Balkanised map of mathematics, divided by innumerable and heavily policed frontiers that both students and teachers are apprehensive of crossing.
The metaphor of the threshold is not a vacuous one, and treated as a metaphor, “threshold concepts” may be useful. Nevertheless they carry the danger of directing attention towards local causes and local problems rather than towards pervasive problems such as difficulty with formal proof. Treated as a myth, they don’t seem to point to much beyond themselves — certainly not to much that isn’t already covered by existing notions such as “learning as loss”. Treated as a state religion, they could become a breeding ground for charlatans as well as displaying the unappealing features of a sect with a passionate desire to police the border between belonging and not-belonging.
Unless we are careful deploying it, a handy minor tool for educators risks becoming, through the zeal and eloquence of its inventors, something approaching a dangerous fad. It would be a shame if a notion that places so much stress on the “integrative” were to become instead a way of fracturing mathematics education — leaving behind it a divided landscape and damaged students, victims of the rampage of a rogue metaphor.
http://www.ni.com/documentation/en/labview-comms/2.0/node-ref/noise-generator-uniform/
# Noise Generator (Uniform) (G Dataflow)
Generates a signal containing a uniform white noise wave.
You can use uniform white noise as a stimulus to measure the frequency response of amplifiers and electronic filters.
## offset
DC offset of the signal.
Default: 0
## reset
A Boolean that controls the reseeding of the noise sample generator after the first execution of the node. By default, this node maintains the initial internal seed state.
- True: Accepts a new seed and begins producing noise samples based on the seed. If the given seed is less than or equal to 0, the node ignores a reset value of True and resumes producing noise samples as a continuation of the previous sequence.
- False: Resumes producing noise samples as a continuation of the previous noise sequence. The node ignores new seed inputs while reset is False.
Default: False
## amplitude
Maximum absolute value the signal can have.
Default: The default value of this input changes depending on how you configure this node. If you configure this node to return a waveform, the default is 0.5. If you configure this node to return an array of double-precision, floating point numbers, the default is 1.
## seed
A number that initializes the noise generator.
The value of seed cannot be a multiple of 16364. If reset is unwired, this node maintains the internal seed state.
- seed is greater than 0: Generates noise samples based on the given seed value. For multiple calls to the node, the node accepts or rejects new seed inputs based on the given reset value.
- seed is less than or equal to 0: Generates a random seed value and produces noise samples based on that seed value. For multiple calls to the node, if seed remains less than or equal to 0, the node ignores the reset input and produces noise samples as a continuation of the initial noise sequence.
Default: -1
## error in
Error conditions that occur before this node runs. The node responds to this input according to standard error behavior.
Default: No error
## sample rate
Sample rate in samples per second.
This input is available only if you configure this node to return a waveform.
Default: 1000
## samples
Number of samples in the signal.
Default: The default value of this input changes depending on how you configure this node. If you configure this node to return a waveform, the default is 1000. If you configure this node to return an array of double-precision, floating-point numbers, the default is 128.
## uniform white noise
Uniformly-distributed, pseudorandom pattern.
This output returns a waveform or an array of double-precision, floating point numbers.
## error out
Error information. The node produces this output according to standard error behavior.
## Algorithm for Generating the Uniform White Noise
This node generates the pseudorandom sequence using the Wichmann-Hill generator. The pseudorandom number generator implements a triple-seeded linear congruential algorithm. Given that the probability density function, f(x), of the uniformly distributed uniform white noise is
$f(x) = \begin{cases} \dfrac{1}{2a} & \text{if } -a \le x \le a \\ 0 & \text{elsewhere} \end{cases}$
where a is the absolute value of amplitude.
The following equations define the expected mean value $\mu$ and the expected standard deviation value $\sigma$ of the pseudorandom sequence:
$\mu =E\left\{x\right\}=0$
$\sigma ={\left[E\left\{{\left(x-\mu \right)}^{2}\right\}\right]}^{1/2}=\frac{a}{\sqrt{3}}\approx 0.57735a$
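Both values follow directly from the density above; in particular, for the standard deviation,
$\sigma^2=E\left\{x^2\right\}=\int_{-a}^{a}\frac{x^2}{2a}\,dx=\frac{a^2}{3},\qquad \sigma=\frac{a}{\sqrt{3}}$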
The pseudorandom sequence produces approximately $6.95 \times 10^{12}$ samples before the pattern repeats itself.
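For illustration, here is a minimal C sketch of the classic Wichmann-Hill (1982) generator. The constants follow the published algorithm; the node's internal seeding details are an assumption here, but the combined period, roughly 30269·30307·30323/4 ≈ 6.95 × 10^12, matches the figure quoted above.

```c
#include <stdio.h>

/* Sketch of the classic Wichmann-Hill (1982) triple-seeded linear
 * congruential generator; the node's exact internal state handling
 * is not published, so treat the seeding scheme as an assumption. */
static unsigned s1, s2, s3;

static void wh_seed(unsigned seed)
{
    /* spread one user seed across the three internal seeds (all nonzero) */
    s1 = seed % 30000u + 1u;
    s2 = (seed / 30000u) % 30000u + 1u;
    s3 = (seed / 900000000u) % 30000u + 1u;
}

static double wh_next(void)
{
    s1 = (171u * s1) % 30269u;
    s2 = (172u * s2) % 30307u;
    s3 = (170u * s3) % 30323u;
    double r = s1 / 30269.0 + s2 / 30307.0 + s3 / 30323.0;
    return r - (unsigned)r;          /* fractional part: uniform on [0, 1) */
}

int main(void)
{
    double amplitude = 0.5, offset = 0.0;   /* the node's waveform defaults */
    wh_seed(42u);
    for (int i = 0; i < 5; i++) {
        /* map [0, 1) onto [-amplitude, amplitude) and add the DC offset */
        printf("%f\n", offset + amplitude * (2.0 * wh_next() - 1.0));
    }
    return 0;
}
```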
Where This Node Can Run:
Desktop OS: Windows
FPGA: Not supported
https://www.esaral.com/q/evaluate-63138
Question:
Evaluate $\int \frac{x+1}{\sqrt{2x+3}}\, \mathrm{d}x$
Solution:
In questions like this, a little manipulation makes the integral easier to solve.
Multiplying and dividing the numerator by 2, we get
$\Rightarrow \frac{1}{2} \int \frac{2 x+2}{\sqrt{2 x+3}} d x$
Add and subtract 1 from the numerator
$\Rightarrow \frac{1}{2} \int \frac{2 x+2+1-1}{\sqrt{2 x+3}} d x$
$\Rightarrow \frac{1}{2} \int \frac{2 x+3-1}{\sqrt{2 x+3}} d x$
$\Rightarrow \frac{1}{2} \int \frac{2 x+3}{\sqrt{2 x+3}} d x-\frac{1}{2} \int \frac{1}{\sqrt{2 x+3}} d x$
$\Rightarrow \frac{1}{2}\left(\int \sqrt{2 x+3} d x-\int(2 x+3)^{\frac{-1}{2}} d x\right)$
$\Rightarrow \frac{1}{2} \times \frac{(2 x+3)^{\frac{3}{2}}}{2 \times \frac{3}{2}}-\frac{1}{2} \times \frac{(2 x+3)^{\frac{1}{2}}}{2 \times \frac{1}{2}}+c$
$\Rightarrow \frac{(2 x+3)^{\frac{3}{2}}}{6}-\frac{(2 x+3)^{\frac{1}{2}}}{2}+c$
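As a quick check (this combining step is not in the original solution), the two terms share a factor of $\sqrt{2x+3}$, and the result can be verified by differentiation:
$\frac{(2x+3)^{\frac{3}{2}}}{6}-\frac{(2x+3)^{\frac{1}{2}}}{2}+c=\frac{\sqrt{2x+3}}{6}\left[(2x+3)-3\right]+c=\frac{x\sqrt{2x+3}}{3}+c$
$\frac{d}{dx}\left[\frac{x\sqrt{2x+3}}{3}\right]=\frac{\sqrt{2x+3}}{3}+\frac{x}{3\sqrt{2x+3}}=\frac{(2x+3)+x}{3\sqrt{2x+3}}=\frac{x+1}{\sqrt{2x+3}}$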
http://wj32.org/wp/2013/10/25/pae-patch-updated-for-windows-8-1/
# PAE patch updated for Windows 8.1
Note: An updated version for Windows 10 is available.
This patch allows you to use more than 3/4GB of RAM on an x86 Windows system. Works on Windows Vista SP2, Windows 7 SP0, Windows 7 SP1, Windows 8 and Windows 8.1. Instructions and source code included.
Before using this patch, make sure you have fully removed any other “RAM patches” you may have used. This patch does NOT enable test signing mode and does NOT add any watermarks.
Note: I do not offer any support for this. If this did not work for you, either:
• You cannot follow instructions correctly, or
• Your system is simply not compatible with the patch.
## 293 responses
1. AGVirt says:
Thanks a lot!!!!!
2. zx says:
Thanks!!!
3. P Sv Rns Srikanth says:
Thank you so much for this updated patch for Windows 8.1. I had been eagerly waiting for it. It works 100% on Windows 8.1 (32-bit (x86)), all versions 🙂
4. alex says:
it won't boot after:
@echo off
TITLE …::: Win8 32-bit Unlocker Script :::… by: Wiwi-maX
color 30
echo Script use PatchPae v2.
cd %Windir%\system32
C:\PatchPae2\PatchPae2.exe -type kernel -o ntoskrnx.exe ntoskrnl.exe
echo.
bcdedit /copy {current} /d "Windows 8 (Unlocked)"
echo.
echo.
bcdedit /set {current} kernel ntoskrnx.exe
bcdedit /set {current} nointegritychecks 1
bcdedit /default {current}
bcdedit /set {bootmgr} timeout 2
shutdown.exe -r -f -t 5 -c "RAM 2GB Limit Removed."
• pao says:
Can you teach me step 5? What should I put after /set? Should I put "{current}"?
5. Ferdinand N says:
WOWWW…. thanks alot man…
6. Andre says:
Works great – thank you so much!
After getting the new boot entry GUID, like {12345678-abcd-1234-abcd-123456789abc}, substitute it for the placeholder in the remaining commands.
For example:
bcdedit /set {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} kernel ntoskrnx.exe
becomes
bcdedit /set {12345678-abcd-1234-abcd-123456789abc} kernel ntoskrnx.exe
Then use copy & paste to enter the commands 100% correctly into your Command Prompt window.
7. slalalavka says:
C:\Windows\System32>C:\WherePatchPaeIs\PatchPae2.exe -type kernel -o ntoskrnx.exe ntoskrnl.exe
Unsupported kernel version.
8. Jackson says:
I had the boot loop as well on 8.1. I just let it keep rebooting, eventually it worked after maybe 7 or 8 attempts. I made the mistake of having updates pending on the reboot. Thought that’s why it might have given problems. Seems fine now.
9. Stefan says:
It works. Thank you! 🙂
10. Nina Goa says:
It didn't work, thanks for this bullshit. Now I've had to reinstall my OS 🙁
11. Thank you very much.
But disabling the integrity checks is a stability and security risk.
Is there any way to patch the ntkrnlpa.exe/ntoskrnl.exe file so the digital signature would check out ok…?
Best regards, DavidB
12. PaVink says:
Thank you so much, Wen Jia!
Installation was simple and straightforward.
I had dug up a similar patch (albeit way more complicated than this one!) under Win 7, and I hated going back to 3-something GB under Win 8 when I switched over. I am so happy being back at 4 GB now. Great job – thank you for your efforts and proving (once again) that all the Microsoft language about the 4GB limitation is just bullshit.
13. Erik says:
Hi, I tried the script on 8.1 and everything ran perfectly, but on reboot the screen started flashing (like a driver or video resolution incompatibility) and the lovely blue screen of a memory dump appeared. Does anyone know of any fix to apply beforehand?
Regards,
Erik
14. Sepp Meier says:
d:\patchpae2\PatchPae2.exe -type kernel -o ntoskrnx.exe ntoskrnl.exe
Sorry, at the first command I get “Unable to copy file: access denied” (Win 8.1 pro)
Of course I started the cmd as administrator.
Only the TrustedInstaller has the right to write.
15. Thanks a million 🙂
i am testing that
16. Sepp Meier says:
The patch works 🙂
But not with an on-board graphic card! I see only pixels on my screen.
It would be better for testing to change the wait time in the boot menu to 30 seconds to have enough time to select the normal boot if something goes wrong.
17. I can patch, but it's not giving me the new boot code after I patch, so I do not know what to use for: bcdedit /set {12345678-abcd-1234-abcd-123456789abc} kernel ntoskrnx.exe
How would I get the new boot code if neither the kernel patch nor the winload.exe patch shows one?
Running Windows 7 Ultimate SP1 on an AMD FX-8350 8-core 4.00 GHz with 8 GB RAM and a 1 TB HD.
18. sergey says:
Thanks. Maybe add a GUI in the next version?
19. Shenzie says:
Patch works great… with one small problem: when the PC starts, sound almost always has various patterns of static and/or hash. (Rarely, sound starts without this.) Restarting the Windows Audio Service one or more times (not consistent) clears it; the hash pattern changes with every service reset. AudioSrv.dll is byte-for-byte identical to the correct MS version.
This is 8 GB on Win7 Home Premium—all other programs and functions work perfectly.
Any advice on how to solve the audio problem?
Shenzie
20. JonathanA says:
Thank you very much! Worked like a charm.
21. miken says:
I’m seeing some folks talked about how they got stuck in a loop when they rebooted their machine after successfully installing the patch. I had the same problem. Never got it to work. I would get the blue screen with an error saying KMODE_EXCEPTION_NOT_HANDLED. Eventually, I would get lucky and have a chance to choose to start the non patch windows. Anyone have luck resolving this?
22. Terry says:
It would seem that the patch can be affected by video drivers. It did not work using intel HD onboard graphics, neither did it work with a new Nvidia GEForce 610 card using the supplied video driver. It would work however using basic 640×480 VGA mode driver.
I got the patch to work perfectly by installing an older 2 year old GEForce 610 driver from another computer running Windows 7 for which the patch worked perfectly.
The recent update to Windows 8.1 (April 8) did not cause any problems.
23. PaVink says:
Unfortunately, for me the patch stopped working after installing the 8.1 Update. 🙁
I had disabled/removed the patch before applying the 8.1 upgrade (just to be sure), then applied it again after the upgrade. When booting, I get a windows error (Windows 8-style blue screen) related to the video driver and the system freezes at that point.
I then downloaded the latest video drivers (nVidia GeForce), but that that doesn’t make any difference.
Too bad – I did like my 1 GB of extra memory! Anybody seeing same/similar issues …?
24. abdulrahmanok says:
Please update it for Win 8.1 >> this version doesn't work.
25. Terry says:
Hi PaVink, it may be worthwhile finding and installing older nvidia drivers, and see if the patch then works.
The Nvidia website has some older drivers.
If the patch works in safe mode, or when the Nvidia drivers are uninstalled and you use a standard VGA display, then it confirms a driver problem.
If you go through the comments on the Windows 7 page, (wj32 gives a link at the top of this page), the common failure theme is video drivers.
26. Hi.
After applying this patch to the new kernel ntkrnlpa.exe version 6.1.7601.18247 (win7sp1_gdr.130828-1532), Windows 7, after booting with the modified kernel, goes into a pseudo reboot cycle; specifically, it continuously shows the shutting-down screen.
But this patch still works fine with the old kernel ntkrnlpa.exe version 6.1.7601.17514 (win7sp1_rtm.101119-1850) without any problems.
• Do you know how to change your kernel to an older version to get it to work?
• Yes, a link to the older patched kernel that worked successfully is attached above, but that kernel was used with the older Nvidia driver, so I believe the problem isn't in the kernel; it's more likely a bug in the latest GPU drivers such as Nvidia/Intel.
27. Gary says:
It worked! Thanks! 🙂
• Gary says:
Installed the patch onto the Windows 8.1 OS. So far it works great!
28. Jeff says:
Hi, can you please confirm this still works with Win 8.1 UPDATE 1? It was going fine for me until one day I got a BSOD, so I was wondering if it has to be updated to make sure nothing changed in UPDATE 1. I wonder if the kernel version changed?!
29. Terry says:
The patch works ok for me with Win8.1 + update 1 and “old” Nvidia drivers but if you have automatic updates set then your system may download and install updated video drivers which may be the cause of the problem.
• Jeff says:
thanks for the update, I only use AMD, nothing Nvidia in my system, but possibly updates overrode the patched kernel
30. Tpfumefx says:
Does this patch work only for ms windows, or along with practical apps like 3dsmax photoshop etc… I mean can other applications other than windows make the use of more than 4gb Like computer graphics apps for exemple?
• Not really; any given application can still only use a maximum of 4GB using this trick, because pointer addresses are still 4 bytes long (not 8 bytes as in 64-bit versions of Windows), which means a program like Adobe Photoshop can only access a maximum of about 4.3 billion memory addresses, even though there's more left unused that Windows can assign to other applications.
If you have 8GB and you use PAE then you can run Adobe Photoshop for example and use 4GB maximum and Adobe Illustrator and use the rest (roughly 3GB) at the same time.
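A quick way to see that per-process ceiling is to check the pointer width; here is a minimal C sketch (illustrative only, and the 4 GB figure only appears when compiled as a 32-bit target):

```c
#include <stdio.h>

int main(void)
{
    /* On a 32-bit Windows build, sizeof(void *) is 4, so one process can
     * form at most 2^32 distinct addresses (~4.29 billion bytes), no
     * matter how much physical RAM PAE exposes to the operating system. */
    unsigned bits = 8u * (unsigned)sizeof(void *);
    printf("pointer width : %u bits\n", bits);
    if (bits < 64)  /* avoid undefined behavior shifting by >= 64 */
        printf("max addresses : %llu\n", 1ULL << bits);
    return 0;
}
```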
31. I have applied the patch on a Windows 7 Ultimate SP1, but I cannot find a “PAE compatible” nVidia driver (I have a Quadro FX 580). Lots of artifacts and after a while BSODs appear (when using modified kernel), related to graphics drivers. I have read some were lucky with older nVidia drivers…so I would like to know which driver versions are confirmed to work perfectly while in PAE-enabled kernel mode. Thank you very much in advance, GABRIEL.
32. Terry says:
You will probably need to go to the Nvidia website, driver downloads page, click on ‘Beta and Older Drivers’ put your Quadro FX 580 details into the boxes and click Search.
They appear to have Beta drivers for your card going back as far as 2009.
You could try these and see if the patch works.
It will be very much trial and error.
In my case I found the driver for the GE Force GT610, dated 2/10/2012 version 9.18.13.697 which I already had for an older version of GT610, worked OK, whereas the newer version which came on the disk with a new video card was dated sometime in 2013 didn’t work.
33. Hello, the program looks interesting, but I don't know how to use it. I don't speak English, so I don't understand it. Could you please make a video guide for W7 and W8? I would really appreciate it; I need it urgently.
34. jacob says:
I can't get past step 3, typing the cmd code: C:\WherePatchPaeIs\PatchPae2.exe -type kernel -o ntoskrnx.exe ntoskrnl.exe. It keeps saying "The system cannot find the path specified". What am I doing wrong?
• Terry says:
You don’t actually type ‘WherePatchPaeIs’, instead of that you type the directory where you put PatchPae2.exe.
I assume that is why the system cannot find the path specified, but excuse me if I am wrong 🙂
In my case I put it in C:\Windows\System32, but you could put it anywhere, as long as it isn’t in a place where it could get deleted, such as a temporary folder.
35. PaVink says:
I previously reported that after upgrading to Win 8.1, the patch stopped working for me (pc would not boot). I finally got around to seek and download some older video drivers (nVidia, release 331.82 from November 2013), and now all is fine again. Windows 8.1 + PatchPae2 working great.
Again, thank you for making this available!
36. PaVink says:
If you don’t understand the instructions, you are probably not the right person to even attempt this… Better keep a good backup handy! 🙂
37. StGreg says:
After a recent update of Win7 SP1 this patch is no longer working. The system cannot boot with the old patched files due to "winloadp.exe is corrupted". Also, when I started the OS with the normal kernel & loader and tried to patch it again, I get "Failed" when patching the loader. ;/
• George says:
Yes, same problem here for my Windows 7 SP1. 🙁
• George says:
Yes, thanks for the update, after removing MS KB2949927 update, PAE is working again.
38. Al says:
The patch doesn't work with the latest Nvidia drivers; you get only a reboot loop or BSOD. You need an old Nvidia driver. However, this makes no sense: requiring older Nvidia drivers that are unfit for W 8.1 Update 1.
• “here is patched winload and kernel for win7sp1 october 15 update
Use this for March 2015 update. Got me back to a full 8GB of ram and full updates. I have an ATI HD5650.
39. gcalzo says:
MS has officially removed KB2949927!!!
https://technet.microsoft.com/en-us/library/security/2949927.aspx
Revisions
•V1.0 (October 14, 2014): Advisory published.
•V2.0 (October 17, 2014): Removed Download Center links for Microsoft security update 2949927. Microsoft recommends that customers experiencing issues uninstall this update. Microsoft is investigating behavior associated with this update, and will update the advisory when more information becomes available.
40. Does this work in 8.1 Update 1 ? Intel GMA X4500 graphics
41. Terry says:
The patch works with Win 8.1 update 1, and may well work with an Intel graphics card, but you would need to try installing the patch to find out if it does.
It appears to be Intel HD onboard graphics that causes conflicts in the memory above 4GB.
42. Thanks for replying Terry, I did try it but had no success with it. It may be because of the integrated graphics. I will try another driver, as now I use a modified version of the Intel driver.
I’ll post back if this works.
• I did try another Intel driver version but to no avail, now I’m using an old ATI card and it is working on the latest 8.1 release (downloaded from MS a few days ago).
There is one update for Windows kernel (KB3035131) that I did not install yet.
• DeMoN says:
(KB3035131) does break this patch on WIndows 7. unfortunately, Im on a domain with a very persistent WSUS server and matching GP. Does anyone know a way to block a specific update from the registry? the “hide update” is greyed out.
43. Well, I’ve tried another version (ie the Windows Update one, as I don’t have that many options to choose for the GMA X4500) and I cannot boot to desktop because Windows starts in autorecovery mode … I’ve disabled that and it seems that this driver doesn’t work because I get no video after the Windows logo … it stays at a blank screen (monitor turns off ) for a minute or so then reboots.
I’ll give it another go after I get a dedicated video card and I’ll post back 🙂
If you’re using Windows 8.1 update 1 and have this graphics card and the patched worked for you, please let me know by commenting here, thanks.
44. Terry says:
Dragos, you could always bite the bullet and get Windows 64 bit OS 🙂
45. Haha Terry, I doubt that would help my case. My system has only 4GB of RAM so this patch would have proved efficient to my setup.
If I switch to x64 it will be pointless because, from what I've gathered, the x64 version will actually use more RAM; for example, some folks were talking about x64 taking about 1.2 of 4 GB of RAM at idle, while x86 takes around 600–700 MB.
From my own conclusion I would suspect that to be true, because for an x86 app. to run on x64, there will be 2 instances in RAM? …
Anyway I will have to test this out for sure.
46. Pietia says:
Can anyone help me? I recently installed Win7 32 Professional and wanted to use this hack. I upgraded my system through Windows Update, so my winloadp is 6.1.7601.17514 and my ntkrnl is 6.1.7601.18409. I booted into the PAE-patched kernel but there were 2 BSODs. My laptop is an Acer V3 771G; here are the .dmp files:
http://www.filedropper.com/dumps
caused by ntfs.sys and ntkrnlpx.exe
Thank you very much 🙂
47. Terry says:
After installing the latest Microsoft updates for Windows 8.1 (being careful not to accept the Nvidia video update), the computer would not boot using the patched kernel, and got the message ‘critical process died’ so I deleted patchpae using the boot option in msconfig.
When I get time I’ll investigate further and report back if PatchPae is successfully re-installed.
48. Terry says:
I should have re-read the text file that came with the patch and re-run step 3 after Windows Update 🙂
Re-patching the kernel restored enabling more than 4GB ram.
49. Garbo says:
Works perfectly for me. Windows 7 Ultimate 32 bits up-to-date, KB2949927 removed, 16 GB RAM and ATI Radeon graphic card. Many thanks!!
50. CpServiceSpb says:
Hi anybody.
Can somebody post “live” links to “magic” Barton88 intel video drivers which should allow to use intel hd (1st/2nd generation) with Windows 7/8 x32 (32-bit) and with 4+ Gb RAM patch ?
51. Wow, this patch rocks. So simple to apply and it WORKS. (Just copy and paste from the “read me” text file so little room for error.) The only “problem”, if you can call it that, is one of the two patches kept prompting windows to repair itself. (Deleted that patch since repair wasn’t successful.) The only thing that could be improved is being able to copy and paste the long { } string. I’ll know how much more improvement I’ve gained once my 8GB ram chips arrive. But for now, things seem to run faster than before.
Keithhf
• Ah, thanks for the hint. The 8GB ram chips arrived. Windows 7 shows 8GB installed but only 3GB available. Regardless of what Windows 7 says, however, my laptop is running so much faster. So this may be a matter of not looking a gift horse in the mouth and just accepting the excellent results. Thanks again.
• Terry says:
If Properties page says that only 3GB is available, then it means the patch has not been applied. It should just say “Installed memory (RAM): 8.00 GB”
It may pay you to go through the procedure again.
• keith fukumoto says:
ok, thanks. should i remove the previous patch or just repeat the steps?
52. Terry says:
Keith, I would carefully go through the steps again for Windows 7, being sure you don’t inadvertently use any Windows 8 only instructions. I’d suggest you put PatchPae2.exe into C:\Windows\System32\, and that is what you type where it says “C:\WherePatchPaeis\”, ie the path to where you put PatchPae2.exe.
One or two people have thought that you type “WherePatchPaeis” 🙂
It is easy to make a mistake in typing out the commands, copying the string to the clipboard at step 5 is a great help as it has to be entered a few times.
If you get it right, then either the patch will work and you will get the full 8GB…hurrah…, or it wont 🙁 and your computer may hang or get a black screen.
If it hangs try restarting in safe mode and remove the patch via msconfig as it describes in the readme file.
If it doesn’t work then there is probably some hardware conflict or a device trying to write into the memory space above 4GB. Have a read through the comments.
• Thanks again, will repeat step. Patch is in Windows\system 32 already. For whatever reason, the addition of another 4GB of RAM and your patch does seem to make my laptop a lot snappier. Dell e4300, 64GB SATA 3, SSD and 300GB SATA 3, HDD. Windows 7 SP1, OEM version. No BSODs in any event.
53. Can you update for windows10 32bit?
54. Garbo says:
Unfortunately, the last Microsoft updates (March 2015) broke this excellent patch 🙁 It's not possible to apply it anymore; it fails when applying the loader part… Too bad!
• Garbo says:
Well, removing the update KB3033929 does the trick for me (this patch updates the kernel for SHA-2 code signing support).
• cappoccia says:
How did you remove it? Because I can't… on reboot Windows installs it again!
• If Windows reinstalls the update automatically, you need to set Windows Update to notify you before installing updates instead of automatically installing them.
To do this, follow these steps:
Control Panel > System and Security > Windows Update : Change settings (upper left), then select either:
Never check for updates (and you will have to do this manually once in a while, like once or twice a month)
or
Check for updates but let me choose whether to download and install them.
• arny says:
Fixed it for me, thanks for that. Just hope I don’t regret not having the update 🙂
• Garbo says:
But how do you replace the winloadp and ntkrnl files, since Windows protects these files?
• This patch doesn't replace the files; instead it copies them and patches the copies, then adds a new boot entry pointing to the patched files.
You should re-read what this patch does and try to understand how it works before continuing.
• Garbo says:
I suggest you read my previous contributions before replying to my posts…
I know what this patch does and I applied it successfully several times. My last question wasn’t about the patch itself but the patched versions of the kernel files (see above : http://rghost.net/58547303).
• Achmed says:
I wouldn’t use kernel files of unknown origin. Or do you like NSA, hackers and this kind of people ?
55. Does anyone have a batch file to automate this patch ?
Thanks
I’ve tried to use this:
TITLE RAM Patch for Windows 8.1 32-bit (PatchPAE2)
color E0
copy "%~dp0PatchPae2.exe" "%Windir%\system32\patchpae2.exe"
echo.
%systemdrive%
cd\
cd %Windir%\system32
PatchPae2.exe -type kernel -o ntoskrnx.exe ntoskrnl.exe
echo.
bcdedit /copy {current} /d "Windows 8.1 (PAE Patch)"
echo.
bcdedit /set description "Windows 8.1 (PAE Patched)"
bcdedit /default {current}
bcdedit /set {current} kernel ntoskrnx.exe
bcdedit /set {current} nointegritychecks 1
bcdedit /set {bootmgr} timeout 10
del %Windir%\system32\patchpae2.exe
echo.
echo Done
echo.
echo RESTART PC!
pause > nul
But I get :
"An error occurred while attempting to reference the specified entry"
This refers to the {current} part of the code…
• Nevermind, I figured it out, so for Windows 8.1 this batch works:
@echo off
TITLE RAM Patch for Windows 8 32-bit (PatchPAE2)
color A0
ECHO.
ECHO.PatchPae (v2) by wj32.
ECHO.Tested on: Windows Vista SP2, 7 SP0, 7 SP1, 8, 8.1
ECHO.
copy PatchPae2.exe "%Windir%\system32\patchpae2.exe"
%systemdrive%
cd\
cd %Windir%\system32
ECHO.
PatchPae2.exe -type kernel -o ntoskrnx.exe ntoskrnl.exe
ECHO.
ECHO.
bcdedit /copy {current} /d "Windows 8.1 BAK"
ECHO.
bcdedit /set {current} kernel ntoskrnx.exe
ECHO.
ECHO.
bcdedit /set {current} nointegritychecks 1
bcdedit /set {bootmgr} default {current}
REM bcdedit /set {bootmgr} timeout 1
pause > nul
Make sure you have PatchPae2.exe in the same directory as this script.
56. From @john_doe
“here is patched winload and kernel for win7sp1 october 15 update
http://rghost.net/58547303”
Use this for March 2015 update. Got me back to a full 8GB of ram and full updates. I have an ATI HD5650.
Use the patch above and replace the files either using the Recovery Console, a Windows Recovery Disk, or booting into the non-patched Windows Kernel (you should have two boot entries: one with your non-patched kernel and one with the patched one).
• cappoccia says:
I have done everything as you described, but I still have 2.96 GB visible/usable by Windows.
I have all the patches installed by Windows Update because I can't uninstall them; I also tried everything in http://support.microsoft.com/it-it/kb/949358/en-us but it still doesn't uninstall.
57. Jhon says:
We removed the KB3033929 update and patched the system successfully, but with no effect 🙁
Usable memory out of 6 GB is still 2.64 GB.
58. Alexander S. says:
This winloadp.exe did indeed work out for my Win7 SP1 with KB3033929 installed:
http://rghost.net/58547303
I first had to boot the unpatched Windows a few times, until the update was installed. Then i replaced the old winloadp.exe with the version above. And then patched ntkrnlpa.exe a second time (dont know if this is really necessary)
59. Boris says:
Winload 6.1.7601.18649 failed to patch. I patched it manually, with help from PatchPae2 source and pdb symbols:
00000130: 44 B2
000282AE: 7D EB
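For the curious, a minimal C sketch of applying those two byte patches to a copy of the loader saved as winloadp.exe, assuming they are plain file offsets (illustrative only, not the PatchPae2 code; it verifies the original byte before writing):

```c
#include <stdio.h>

/* Applies the two single-byte patches listed above to a copy of the
 * loader saved as winloadp.exe. Offsets are assumed to be plain file
 * offsets; the original byte is verified before anything is written. */
int main(void)
{
    struct { long off; int expect, patched; } p[] = {
        { 0x00000130L, 0x44, 0xB2 },
        { 0x000282AEL, 0x7D, 0xEB },
    };
    FILE *f = fopen("winloadp.exe", "r+b");
    if (!f) { perror("winloadp.exe"); return 1; }
    for (int i = 0; i < 2; i++) {
        fseek(f, p[i].off, SEEK_SET);
        int c = fgetc(f);
        if (c != p[i].expect) {
            fprintf(stderr, "offset 0x%lX: expected %02X, found %02X\n",
                    p[i].off, p[i].expect, c);
            fclose(f);
            return 1;
        }
        fseek(f, p[i].off, SEEK_SET);   /* step back over the byte just read */
        fputc(p[i].patched, f);
    }
    fclose(f);
    puts("both patches applied");
    return 0;
}
```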
• Al says:
Hi Boris,
Please kindly share the instructions on how to patch it manually.
• Yury says:
Please don't forget to save the patched winload.exe as winloadp.exe in the "%Windir%\system32\" directory; everything else will be done by the PatchPae2 patcher. I hope its author releases a PatchPae3 soon, with minor changes for winload.exe patching and some automation (like the batch file for Windows 7 32-bit in this archive: http://rghost.ru/6HZbtkfj8)
60. istin says:
winload.exe ver 6.1.7601.18649 failed to patch, windows7 could not boot (patched) after a recent windows update (I do not know which one)
• Mike says:
KB3045685 and KB3045999, both published on 4/14/2015 as part of MS15-038 broke the existing PAE patch (done manually a few years ago) on my Windows 7 computer resulting in BSOD errors on boot. Uninstalling these hotfixes fixed the PAE problem. Now I just have to add this to the Nvidia driver as something I can’t update for the foreseeable future.
61. Tung Bach says:
winload.exe ver 6.3.9600.17238 for Windows 8.1 could not be patched. Windows update 15/04/2015. THANKS
62. PaVink says:
This patch has always worked and is still working on my fully updated 8.1 system.
Just follow the instructions provided. Of course, if you have no idea what is happening here, you probably should not even attempt this!
63. DeMoN says:
(KB3035131) breaks this patch, and trying to re-patch fails at the winload step. This is with current Nvidia drivers, if it matters. If you are having issues, try to uninstall this recent update (I think it said it was published 4/10/2015, so it is not very old). If anyone finds a workaround besides uninstalling this update, please share. I'm on a corporate domain with a very persistent WSUS server and I have no way to permanently keep this update away…
• DeMoN says:
Doing it the old-fashioned way with a hex editor also does not end well. The 7C11 8B45 & 7C10 8B45 byte patterns are still there, but the rest of the string is significantly different. It still signs, and it all looks well and good until you get to Windows. Windows boots slowly, shows even less available RAM, and does not "have enough free memory to open" ANYthing… even Notepad…
64. tom says:
Win 7 Ultimate 32 Bit
65. alexio says:
I used this for the Win8 install of my dual-boot Win8/Vista, which Win8 manages with its boot loader. It got stuck in a repair loop; I fixed the MBR with a live CD. I do have Nvidia drivers for a 5-year-old Nvidia card. Can someone with a working install provide a list of updates I have to remove?
66. The third step fails..
Some Windows updates got installed overnight..
I cannot boot into the patched version anymore.
I have tried patching it again..
but it fails at the step below.
An error is displayed as "failed"..
Thanks,
Harry
• Works like a charm, I missed to check that.
Thank you gcalzo
67. Dear Mr. Liu,
First, Thank You for providing the patch; it works perfectly on our 7 and 8x systems.
We are planning to upgrade to windows 10 in the near future and we assume the new OS will change/upgrade the kernel. Is that correct?
If yes, can we use the patch on the windows 10 OS?
If not, will you be releasing a windows 10 version?
68. hi, does the patch support windows 10 32bit? can you update the patch to support windows 10 32bit? my machine has problems with 64bit so i have to use 32bit windows 10. but i have more than 3GB Ram.
Thank you
69. A big request to you! Please make the PAE patch for Windows 10 🙂 I really need it; I'm using Windows 10 Pro.
70. NikZ says:
Really need this patch for windows 10:)
71. liyox says:
Please make it Win 10! 🙂
72. Ferdinand says:
pretty please for Windows 10 Support ^_^
73. jeffery says:
74. evgeny says:
patchpae2 modified xttp://rutracker.org/forum/viewtopic.php?t=4694409
for Windows XP, 2003, Vista, 2008, 7 (supports KB3035131), 8/8.1, 10
with GUI (russian only, sorry), balalaika, vodka and bear.
• evgeny says:
It's an internal pre-beta for testing only.
You can download the full version 0.39 from the mirror http://rghost.ru/78cjgLmCb or the rutracker.org forum.
75. Trouble with bcdedit /copy {current} /d "Windows7Patched"???
A DESCRIPTION FOR THE NEW ENTRY MUST BE SPECIFIED….
The fix: use real double quotes in ASCII, not some weird variant you got by cutting and pasting from a blog page.
Copy the lines into Notepad, go over the quotes with fresh ones, and try again.
I think some web publishing blog engines try to pretty up the double quote by turning it
into leading and trailing curly ones in pairs.
After they were excised and replaced with real ones, it worked like a charm! Many thanks
Pavilion DV2412ca 4gb ram installed Windows 7 ultimate N, SP1 clean install.
76. Gabriel says:
The patches above for Windows 10 are infected, says Windows Defender. I really hope an official one release soon.
• evgeny says:
Yes. Windows Defender already know that fix128 patched windows kernel and loader. I really hope you understand micro$oft politic for software activated or patched Windows. Windows Defender must delete it, of course. NOD32, Kaspersky, Avast, Avira, DrWeb, McAfee no says infected. fix128 is 7zip-sfx, it packed with upack. there are patchpae2 modified to support new windows and vbs-script for GUI in sfx archive. • evgeny says: However, Micro$oft also says that 32-bit windows can’t support more than 4 Gb of memory. Hah!
77. evgeny says:
The deleted patch is the beta version of fix128. You could read the thread before writing.
If f*ck_ng Kaspersky blocked the file, you can add the item to its exception list.
I can release a new version of fix128, but within days some antivirus will delete it. Sooner or later. Welcome to the Windows 10 era!
If you want to use fix128, you will find a method to download it. If you don't, you can keep tormenting my mind and your mind and all minds around.
Opening the ZIP in Explorer on my Windows 8.1 does not start the program.
CLI version only: http://rghost.ru/8KFT4tSJw
• evgeny says:
2 Achmed
• Neal says:
In case it needs repeating: the links by evgeny above, supposedly to a Win10 PAE patch, ARE INFECTED WITH A TROJAN. DO NOT USE THEM. 32 separate malware scanners confirm it.
Here’s hoping a real Win10 PAE patch arrives soon
• evgeny says:
Can you say what this trojan does?
78. Neal says:
Don’t be coy. Any scanner you run on it will tell you it’s a trojan — Microsoft shows it as Trojan:Win32/Skeeyah.A!bit.
79. Neal says:
So, you say I should trust your code. Let’s have a look.
Chrome and IE refuse to download it, because of the trojan alert. Using wget, I can retrieve the file; again the scanner detects the trojan and deletes the .exe from inside the zip. Disabling that, 7zip then refuses to unzip the enclosed .exe, because it too detects the trojan.
Shutting down all scans, I finally uncompress everything. Here’s what I get:
6 DLL’s, all of which begin with HAL.
31 SYS files, 27 of which begin with USB, and the rest with HID.
A handful of other files and directories — 4 EXE’s, and directories named 2003 and 2008__.
And: your VBS file — 8100+ lines of code, a thousand of which are in Russian.
You *must* be joking me.
• evgeny says:
Ok. 31 *.sys files.
They are in the ZIP's 2003\SYSTEM32\drivers\ directory (not the SFX!).
Windows XP has some trouble with USB 2.0 in PAE mode. These drivers from Server 2003 replace the XP drivers when fix128 is installed on XP. After uninstalling fix128 on XP, they are replaced with the original backed-up XP files. You can read http://www.overclock.net/t/77229/windows-xp-ram-limit/50 for more information about usbvideo.sys, usbstor.sys, etc. Heh, you say Server 2003 is malware. You must be joking.
Similar for the 2008\SYSTEM32\drivers\ directory, but 2008 is renamed to ___2008 and is not used. I found some Windows 7 systems with the beta SP1 needing replacement of the original USB2 drivers too. It's an experimental feature and the Russian manual doesn't recommend it (which is why 2008 is renamed to ___2008). These *.sys files are from… Server 2008. You just said Server 2008 is malware. Great.
At installation, fix128 copies the files from 2003|2008 into Windows automatically.
next, 6 DLL…
• evgeny says:
continue… 6 DLLs
When you patch Windows XP, you need to patch the HAL, and the modified PatchPae2 can do it.
Another way is to copy the HAL DLL from XP SP1, which supports more than 4 GB of memory:
http://www.overclock.net/t/77229/windows-xp-ram-limit/60#post_22142268. I tested this method, but it didn't work with my system, and the Russian manual doesn't recommend it. This method is activated only if you run fix128.exe with the +XPSP1HAL key on the command line.
The 6 DLLs in the xpsp1hal directory, all of which begin with HAL, are HAL drivers from XP SP1. So you just said Windows XP SP1 is malware. You are joking even more.
• evgeny says:
next, 4 EXE…
• evgeny says:
continuing, 4 *.exe
There are 2 ways to load a patched (i.e. with a damaged signature) kernel. 1 – you can patch the loader to disable signature verification (PatchPae2.exe -type loader …). 2 – the original method by Geoff Chappell based on test signing: http://www.geoffchappell.com/notes/windows/license/memory.htm. fix128.exe has the +TestCert key to use this method. You need some utilities from Visual Studio for the 2nd method: makecert.exe, certmgr.exe and signtool.exe. They are in the utils directory of the SFX archive. So you just said Visual Studio is malware. You are joking more and more.
• evgeny says:
Oh! We missed 1 *.exe in the SFX archive: utils\PatchPae2.exe
It is the modified patch with support for Win XP, Win 7 with KB3033929, Win 10, and Server 2003.
I posted links for this file and VirusTotal earlier. 0/56, no malware.
• evgeny says:
I need to rest a little…
I will explain the 8100+ lines of VBS code later =)
Summing up: 6 files + 31 files + 4 files = 41 files. Neal, you are an alarmist of level 0x29h. A panic-monster.
• Achmed says:
Maybe something on the download server modified your zip – this would explain the start of a program while extracting it…
80. jeffery says:
evgeny,
I used the CLI-only version and patched 32-bit Win10 10240 RTM; it's amazing, thank you so much.
81. Terry says:
evgeny,
I used your patchpae2 to patch Windows 10 32 bit Home Edition.
Very successful, no trojans, repeat no trojans 🙂
I am using an old Nvidia graphics driver as newer ones caused problems when patching Windows 7 and 8.1 with wj32’s patch. That is a problem in itself, Windows 10 has to be set so that it doesn’t automatically install the latest drivers. 🙁
Next test will be to see if up to date video drivers will work, though I am not too hopeful.
Many thanks
82. Gecata says:
So… patch v0.39 is working fine and smooth. After a few days of tests and experiments I can say it's GREAT and WORKING! So I made and tested this scenario:
1. Make virtual machine KVM on CentOs 7.1 Linux and patched Windows 10 Enterprise 32bit-10240RTM
2. Make full machine backup with EaseUs ToDo Backup 6.1
3. Restored backup to VmWare virtual machine on Windows 8.1
4. Restored backup on VirtualBox machine on Windows 7 32 bit
5. Restored machine on bare metal
Patch is there all the time and work great, even different video drivers on different VM’s.
Today I tested the patched machine with 7 users connected all day over RDP (Thinstation clients) and it is working fine and stable!
Many thanks to evgeny for great work and sharing knowledge!
83. Gabriel says:
However, I still can't trust it until there is a reasonable explanation of why ALL of the software used says it is infected. There's obviously something in the code that is causing it. C'mon man, just reveal it and we'll use your patch.
• evgeny says:
May I kneel down to you afterwards and receive your blessing?
84. Luciano says:
evgeny, why not simply uploading the _source code_ of PatchPae2 CLI, enhanced for Windows 10 too? The code is well structured, well documented for all supported different kernel versions.
Then, you could also simply use the plain old, safe .zip format instead of SFX if you have nothing to hide…
• evgeny says:
What's the problem?
You can open the SFX archive fix128.exe with 7zip.
You can extract the file fix128.exe\utils\main.c with the _source code_ of the PatchPae2 CLI, enhanced for Windows 10.
You can extract the file fix128.exe\utils\PatchPae2.exe, the CLI binary enhanced for Windows 10, which I uploaded above.
You can extract the file fix128.exe\ntk128gb.vbs with the _source code_ of the GUI.
I could have encrypted the VBS, the archive, etc., but I didn't.
You could have read it all, but you didn't.
Why?
You are a f_ck*ng writer, not a reader.
Why can't you read the _source code_? Don't you have Notepad? Are you banned from the 7zip download page? Maybe you're too lazy?
What's the problem?
The _source code_ was always here.
• Gabriel says:
Well, if you keep answering like this and keep insisting that we download your file without any explanation why, then trust me, no one's going to do it. Your comments are becoming spam without a purpose. If you really want to help, then do it. EXPLAIN. Give us a reason, so we can believe that 99% of anti-malware software is wrong. Don't expect anyone to read 8100+ lines of code. Or simply make a clean new patch when you know what is causing it.
• evgeny says:
I do not say "trust me", I say "the source code is included".
I do not say "run fix128.exe", I say "open fix128.exe in the 7Zip archiver" to read the file fix128.exe\utils\main.c with its 2000+ lines of code.
That is because you said "there is no source code".
You accuse me, and I give the facts and source code, but you ignore it (as regarding Russia in general, too).
to be continued…
• evgeny says:
… continue
source code patchpae2.exe enhanced for Windows 10 from link above
http://rghost.ru/78cjgLmCb \fix128v39.zip\fix128v39\fix128v-0.0.0.39 +log=detail.exe\utils\main.c
to be continued…
• evgeny says:
x86 binary:
http://rghost.ru/78cjgLmCb \fix128v39.zip\fix128v39\fix128v-0.0.0.39 +log=detail.exe\utils\PatchPae2.exe
same http://rghost.ru/8KFT4tSJw
I don’t understand what you want from me. Source code was uploaded many days ago. Maybe you don’t understand English and use Google Translate.
• Gabriel says:
Source code is included, you say. Cool. So now we should still try our best to download your patch, even though it seems really difficult to do. There's hardly a browser in the world that won't say the file contains a virus. Not even talking about how to open it, because things get even more difficult there. It looks like the easiest way to reach your patch is to install some "insensitive" OS, maybe, or what? And to just sit down and read 8000 lines of code, seriously??? The biggest nerd in the world won't do it, dude.
85. Malika says:
evgeny,
It worked very successfully, thank you.
As for the NVIDIA driver, ver 332.21 was good but ver 334.89 was sad.
• Tom says:
Agree. I had a GT360 and found Nvidia 332.21 and 334.89 on FileHippo; upgrading from 332.21 to 334.89 caused a BSOD. Maybe with Nvidia cards, if you want the PAE patch, 332.21 is the dead-end driver.
86. evgeny says:
> Gabriel wrote:
> it seems really difficult to do it.
It is very difficult for you to open an archive. Ok.
It is very difficult for you to understand that (opening an SFX in an archive program) != (running the *.exe file). Ok.
It is very difficult for you to understand that the modified main.c has 2000+ lines of code, not 8000+, and that the original main.c has 900+ lines. A real dude won't read 8000, or 2000, or even 900 lines of code, I guess?
You aren't able to do elementary things, and yet you poke your nose into source code. Cool.
• Will you make your fix128 into English? At least Russian/English version? The basic buttons of app would be nice. Or a Readme explaining how to use it even if the language is Russian. I don’t understand Russian and I wanted to use it, but there are language barriers. 🙁
Best Regards.
• Gabriel says:
Stop trying to blame others for incompetence. Instead, just fix your archive. It's BAD, and it doesn't matter if it's a 99% false alarm or a real virus. Of course I can open and analyze it by disabling all protections, using some unheard-of browser, going into safe mode, or just on my Linux box, but should I? Should anyone have to do that just to install your patch? I hope not. Make it simple and we won't be suspicious.
• evgeny says:
Some laptops have dual boot (or more) with Win8, Win7, etc. (bcdedit /enum ^| find "ntkrnlpx.exe" won't work). Maybe {current}?
IMHO this would be better and faster: FOR /F %%a in ('ver | find "6.3.9600"') do set windows=8.1
And… test signing? You can't load the system from UEFI, etc., and there's the watermark…
And it would be better to use "bcdedit /set {guid} bootmenupolicy legacy" for Win8+ (especially with Nvidia drivers).
87. escape75 says:
The bcdedit /enum ^| find "ntkrnlpx.exe" is only there to find out if the patch has
already been applied, and the script will grab the current boot entry using:
bcdedit /copy {current} /d "Windows %windows% (PAE)".
I don’t think the watermark is a big issue, but maybe a patched loader is a better way?
I should also add the following after adding the boot entry, it will just fail on Windows 7:
bcdedit /set {%newguid%} bootmenupolicy legacy >nul 2>&1
88. escape75 says:
I tested Windows 10, and it shows a text menu without "bootmenupolicy legacy",
so it must work automatically when there's more than 1 boot entry or a timeout specified?
89. evgeny says:
> escape75
> so it must work automatically when
there is at least one boot entry with bootmenupolicy legacy option in BCD
(c) MSDN
> I should also add the following after adding the boot entry, it will just fail on Windows 7:
Are you a writer too? =) I said above:
> use "bcdedit /set {guid} bootmenupolicy legacy" for win8+
_win8+_, Karl
Win8+ has the new modern graphic-style boot menu and the older text style named "legacy".
90. evgeny says:
If you have an older boot loader, i.e. a boot loader from Win7 or older, you only have the text-style boot menu.
• evgeny says:
It's much better… but
try a solution like this; it's more stable, faster, and better structured…
set winver=0.0.0000
FOR /F "usebackq tokens=4" %%i IN (`ver`) DO set winver=%%i
set winver=%winver:~0,-1%
set kernel=nothing
if *%winver%==*6.1.7600 set kernel=ntkrnlpa.exe
if *%winver%==*6.1.7601 set kernel=ntkrnlpa.exe
if *%winver%==*6.2.9200 set kernel=ntoskrnl.exe
if *%winver%==*6.3.9600 set kernel=ntoskrnl.exe
if *%winver%==*10.0.10240 set kernel=ntoskrnl.exe
if *%kernel%==*nothing echo.Unsupport kernel && pause && exit
patchpae.exe -type kernel -o "%systemroot%\system32\ntkrnlpx.exe" "%systemroot%\system32\%kernel%"
91. escape75 says:
http://pastebin.com/Zkr1WUK4
The reason I wanted to keep bcdedit /enum {current} ^| find "Windows 7"
is to make sure the patch still works if the version string is changed in Windows…
6.1.7600 – Windows 7
6.1.7601 – Windows 7 SP1
6.2.9200 – Windows 8
6.3.9600 – Windows 8.1 (also 8.1 Update)
10.0.10240 – Windows 10
Could also do this …
FOR /F "tokens=4" %%g in ('ver') do set winver=%%g
FOR /F "delims=. tokens=1,2" %%g in ('echo %winver%') do set winver=%%g%%h
if %winver% gtr 61 set kernel=ntoskrnl.exe
if %winver% leq 61 set kernel=ntkrnlpa.exe
• evgeny says:
>if %winver% gtr 61…
It’s good code.
but in pastebin.com/Zkr1WUK4 i see this very much times:
if %windows% equ 7…
if %windows% equ 8…
if %windows% equ 10…
and again
if %windows% equ 7…
and again
if %windows% equ 7…
and it’s bad code. this code need to compact and structure. imho.
>if %errorlevel% neq 0 goto no-kpatch
it’s very good, you added this code.
• evgeny says:
Great work! A very readable and useful script.
And some things:
1)
>ping 127.0.0.1 -n 6
fails if offline; maybe this is better:
choice /d y /c yn /t 6
2) what about a check for administrator rights?
3) what about a check for x64 Windows, exiting in that case?
• escape75 says:
Good points …
1) choice would be better than ping.
2) I packed it below in self extract that forces admin rights, so I don’t need to check.
3) x64 will fail, as it will not find bcdedit (x32 process on x64) but even if it did run it
would exit: %errorlevel% neq 0 echo – ERROR, Cannot patch %kernel%! & goto end
Thanks!
• evgeny says:
2) Yes, I saw it, I can read ;)
But…
1. you can disable UAC:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA=0
2. you can log in with a user-level account
3. run your exe and it will fail.
In that case you will never get administrator rights.
Many people disable UAC.
3) I think a message like "x64 not supported…" is better and more informative than "something missing".
Wow, WinRAR is rough: compressing 90112+2249 bytes into almost 300 kilobytes.
• escape75 says:
If you do 1) it causes some weird issues in Windows 10 – calculator doesn’t work, etc.
Yes on 2) and 3) – maybe someone can modify to make it better 🙂
And yes, winrar self extract module adds to the size, but makes it easier to run 🙂
92. escape75 says:
Presenting latest edition of PAE Patch for Windows 7, 8, 8.1 and 10 – big thanks to evgeny!
https://mega.nz/#!S45TSb7Y!HWfl3rxMJpJYBqnuPfGbM3QTxa8mFfbLWaAolQ85ob8
Extract and run the included .exe (for 32 bit OS) – that’s all, 288 KB in size!
Here's the included script in case you're too lazy to check yourself 🙂
http://pastebin.com/K844EusW
PS. I kept “ping” for sleep, “choice” doesn’t time out when you switch to another window,
I think it’s a windows bug …
• evgeny says:
PS.
echo.This program enables PAE in Windows. Wait…
set x64=0
if *%PROCESSOR_ARCHITECTURE%==*AMD64 set x64=1
if *%PROCESSOR_ARCHITEW6432%==*AMD64 set x64=1
if %x64%==1 echo.Windows x64 not support && pause && exit
reg.exe QUERY HKEY_USERS\S-1-5-19 /ve
if %errorlevel% equ 1 echo.Run program as administrator && pause && exit
you must load the boot menu before loading drivers incompatible with PAE in Windows 8-10!!!111 otherwise it causes a BSOD before the boot menu and you won't be able to boot at all
if %winver% NEQ 7 bcdedit /set {guid} bootmenupolicy legacy
>ping 127.0.0.1 -n 6
echo.wscript.sleep 6000>sleep.vbs
cscript.exe sleep.vbs>nul
del /f /q sleep.vbs
• escape75 says:
if %winver% NEQ 7 bcdedit /set {guid} bootmenupolicy legacy
I've done a brand-new install of 32-bit Windows 10 in VirtualBox,
and "bcdedit /set {bootmgr} displaybootmenu yes" makes it display.
There was no previous or older bootloader installed.
• Gabriel says:
Tested. Working OK for now (ATI user).
93. LCV says:
I only have one word to say – OUTSTANDING!!!! It works with Windows 10 Pro 32-bit as long as you remove any Nvidia drivers beforehand. I now have 3.75 GB running. Thanks to all involved, but especially evgeny and escape75!!!!
94. LCV says:
Am using the MS Basic Display driver on the on-board card and uninstalled all software and drivers for the 2nd card (Nvidia gtx 460) to get it working. Now playing with the 2nd card to see how best to have it working as use it with an HDMI output so will come back with the solution when I find one that works ok 🙂 but with the on board card only at the moment it works absolutely great! Thanks once again
95. LCV says:
Right, I have now sorted it, thanks to your link escape75. The way to do it is to download the driver 9.18.13.3221 that you mentioned, install it, and make sure you DON'T update it afterwards; stop Nvidia from updating automatically. Here is the link http://www.nvidia.com/download/driverResults.aspx/71701/en-us
96. escape75 says:
Ok, thanks for confirming; yes, I did know that 332.21 worked with PAE.
That was actually my own post on the Nvidia forums, but I thought maybe they had fixed this.
I guess Nvidia doesn't care; this also affects Server 2003, as it has PAE without the patch.
I think Intel has the same issue, so it's just ATI that works…
97. LCV says:
Yes, I saw that it was your post also, and you are 100% right. By the way, you also have to stop Windows Update from downloading drivers; otherwise it automatically downloads the latest Nvidia driver and breaks it again. So basically you have to do everything possible to ensure it doesn't update from 332.21.
98. escape75 says:
Yes, I usually remove all folders from Nvidia drivers, except Display.driver, Hdaudio, Nvi2,
and Physx and that way it doesn’t install the nvidia auto update stuff. Also windows update
has to be told not to update like you said. But it would be nice if nvidia would fix this, heh.
99. escape75 says:
A little off topic, but I disassembled nvlddmkm.sys: 332.21 calls halTranslateBusAddress,
but the newer 334.89 no longer includes such a call whatsoever. Something they've changed
in how memory is being addressed, I'm guessing, causing the PAE crash.
100. evgeny says:
for nvidia drivers in Windows 10 with PAE.
How-to: install an old driver version and prevent installation of a newer version of the driver.
1. You must temporarily stop and disable the "Windows Update" service. You can do it from "Computer Management" > Services > "Windows Update",
but a faster way is:
run Command Prompt as Administrator and enter
sc stop wuauserv
sc config wuauserv start= disabled
2. Now Windows Update is stopped, and we can remove all files and subdirectories in c:\Windows\SoftwareDistribution.
Downloaded updates and drivers are cached in this directory before install, and Windows automatically removes them after install.
This way we also remove all newly downloaded drivers for Nvidia.
We must delete the C:\NVidia directory too.
3. Run “Device Manager”.
Go to "Display adapters", select the Nvidia adapter and uninstall it. In the "Confirm Device Uninstall" dialog you must check the "Delete the driver
software for this device" option!
Now we have deleted all copies of the driver.
4. Click Action > "Scan for hardware changes" in Device Manager. You will see something like "Micro$oft Basic Display Adapter" in "Display adapters" (or hardware without a driver).
5. Now you can install the old Nvidia driver with PAE support.
6. To prevent installation of a newer version of the driver, you first need to know the HardwareId of your Nvidia adapter. Double-click "Micro$oft Basic Display Adapter" (or whichever hardware you want to block from new installations), open the "Details" tab and
select property “Hardware Ids”. Then copy any value you like. For example, PCI\VEN_10DE&DEV_1341&SUBSYS_381A17AA
7. Run gpedit.msc in Command Prompt.
Go to “Local Computer Policy”-“Computer Configuration”-“Administrative Templates”-“System”-“Device Installation”-“Device Installation Restrictions”-
“Prevent installation of devices that match any of these device IDs”
Click Enabled, then click Show.
In the “Show Contents dialog” window, paste previously copied HardwareId of your NVidia adapter. Click OK.
Next, click OK to save your changes in “Prevent installation of devices that match any of these device IDs” window.
https://technet.microsoft.com/ru-ru/library/cc732727.aspx
8. enable and start the “Windows Update” service.
run Command Prompt as Administrator and enter
sc config wuauserv start= auto
sc start wuauserv
9. Now Windows can't install any other driver for your Nvidia adapter. Yes!
• escape75 says:
Excellent instructions. Too bad we will never be able to install
a newer driver, as Nvidia dropped support on purpose, maybe after
they dropped support for Server 2003 32-bit. Makes me seriously
consider switching to OSX, heh. Or Windows 64-bit.
• evgeny says:
From a Russian forum:
1)
Windows can release another, newer version of the driver, and we will have a BSOD.
2)
This f*uck_ing tool doesn't show already-installed updates. f*uck_ing Windows automatically installs this f*uck_ing updated driver, and we have no time to hide it.
We can uninstall the new driver in Device Manager, but within seconds f*uck_ing Windows automatically reinstalls it, and we have no time to do anything.
We can't uninstall this new driver update because Windows does not show it in Programs/Updates (or Windows Update downloads and installs it again within seconds).
101. LCV says:
Yeah, it's a bit of a nightmare. I have temporarily disabled the Windows Update service and now it's working fine.
102. escape75 says:
I have no issues on windows 7 🙂
and still replace nvlddmkm.sys from 332.21, hmm.
103. cestpraca says:
Any update for Windows 10?
104. Terry says:
I solved the Nvidia driver problem by getting an ATI Radeon video card, and removing the Nvidia GEForce, 🙂
So far I have had no problems with my Win 8.1 computer, but it has only been patched a little while.
My computer with Windows 10 and which also has an Nvidia card has had driver updates disabled and Nvidia update in particular, hidden. The driver I’m using dates back to 2012 and doesn’t cause any problem.
It is a bit difficult to stop W10 updating video drivers, but it can be done
• escape75 says:
Unfortunately I just tried 355.82 as well as 355.98 on a fresh install of Win 10 x32,
and just as expected they both crash. You possibly either have an older driver version,
or you’re running x64 edition, or somehow a GTX 650 works, but my GT 620 doesn’t.
Or you were using a display hooked up to another non nvidia adapter. Don’t know 🙂
105. nerothtr says:
• nerothtr says:
Thanks! It works.
106. irfan says:
It shows 10 GB installed, 8.99 GB usable; before it was 10 GB with 2.99 GB usable. Is it possible to use all of the available 10 GB???
• escape75 says:
Probably 1 GB is being used by the video card, just like before, when 3 GB were usable and 4 GB were expected.
107. irfan says:
But I have 1GB extra dedicated for Nvidia
108. escape75 says:
I’m assuming you can still boot using the non-patched boot menu entry?
Check what version of nvidia or intel drivers you have installed after you boot,
most of the new ones won’t work or just crash.
• Luke says:
Yes, I can boot using the non-patched menu. My intel driver for the display is quite old (version 8.15.10.2900 on 26/11/2012).
• Luke says:
Does that mean there is nothing I can do about patching with Intel onboard graphics?
109. Ferdinand says:
You can disable the onboard graphics in Device Manager and use an ATI graphics card. I did that.
• Luke says:
Which ATI graphics card driver should I use?
110. Terry says:
Luke,
You can probably use any AMD/ATI graphics card that is within your price range, and use the driver which is for that card. If you buy new, then there will be a driver CD with the card.
I’m assuming you are using a desktop machine, if you have a laptop, then there isn’t much you can do about fitting a video card AFAIK.
• Luke says:
Too bad! I’m using notebook computer.
111. escape75 says:
Someone should pressure Intel and Nvidia to fix their drivers,-
ATI can do it, it’s a few lines of code.
112. escape75 says:
Well, nvidia 332.21 is the last one that works, from January 2014, so not really XP days …
Basically Server 2008 Enterprise x86 would be affected by this, as it supports PAE 64GB,
but I guess since nvidia doesn’t officially support 2008 any longer, they don’t care.
• gcalzo says:
…as I wrote, I was talking about Intel and Intel’s latest 32bit driver supporting PAE was for XP… 🙁
• escape75 says:
Yeah, someone reported above that a new nvidia driver works again,
but I haven’t tested it myself – in windows 10 x32. I wonder …
• escape75 says:
Tested now, didn’t work for me …
113. Irfan says:
I have Windows 10 home limited to 2 GB
I am looking for Windows 10 PAE Patch
114. Николай says:
When will there be patchpae for windows 10&&&
115. test says:
pre-moderation, is it?
• test says:
no, good luck figuring out these filters.
• test says:
shall we reverse-engineer the filters?
Or perhaps it’s too humiliating for Him, especially to switch keyboard layout&&&
• test says:
hmm, it works on its own, but all together it doesn’t.
• test says:
shall we reverse-engineer the filters?
Or perhaps it’s too humiliating for Him, especially to switch keyboard layout&&& and something elseeee
• test says:
Nikolai is not able to read a few posts earlier, where the new patch is linked, or to press Ctrl-F. Or perhaps it’s too humiliating for Him, especially switching the keyboard layout&&& For him it is too early to use either Windows 10 or the new patch.
• test says:
funny how WordPress works
116. angelo says:
news for win 10 1511? the last patch doesn’t work.
117. Terry says:
I also seem to have trouble with the Windows 10 version 1511 upgrade. Firstly, the ‘old’ Nvidia driver from 2012 that I was using before the upgrade will no longer install.
Then with this Windows version winload.exe will not patch, it gives a failed error message.
118. Terry says:
Escape75.
First I tried PatchPae2.exe from evgeny, and it patched the kernel with no error message, but it failed at patching winload.exe. It did however produce a winloadp.exe file, but I didn’t go any further, especially as I knew the nvidia driver that Win 10 installed would cause failure anyway.
Then I tried your PAEPatch.rar file and it failed at trying to patch winload.exe.
So I guess we hope evgeny may come up with another fix in due course.
119. Escape75 says:
The link I posted uses PatchPae2 v040 RC0 so it should also fail.
It’s nothing more than a self extracting file above plus script, and this
line should make it abort and exit, if it succeeds it creates winloadx.exe:
if %errorlevel% neq 0 (echo -ERROR- Cannot Patch winload.exe!) & (goto end)
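For anyone reconstructing that script, the surrounding lines would look roughly like this (a sketch only; the PatchPae2 switches shown follow wj32’s documented usage as far as I recall, so treat them as assumptions and check your copy’s help output):
PatchPae2.exe -type loader -o winloadx.exe winload.exe
if %errorlevel% neq 0 (echo -ERROR- Cannot Patch winload.exe!) & (goto end)
echo Patched loader saved as winloadx.exe
:end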
• Phraedrus says:
• Terry says:
Your Google is obviously in error, there is nothing malicious in the file, but it does modify the kernel. I downloaded the file from evgeny’s link with no alarm bells from either Google or Windows Defender.
Files like this that modify the system [i]can[/i] cause false positives in anti virus software, they have done so even back in MSDOS days.
• Phraedrus says:
Thank you. I tried again in Google – but it is adamant – so I switched to Microsoft edge and it downloaded it straight away.
Now to test it.
Thanks again.
🙂
• Escape75 says:
Chrome is crap lately, use Opera 🙂
• Ander says:
• Ander says:
when I run the program, a command line window appears and closes within seconds
it doesn’t work
120. Thank You evgeny 🙂
I am running Win 10 Pro x86 1511, and will try this. Is the NVidia issue fixed?
I am using nVidia driver: 359.00, can I use it or should I go with another driver.
Thanks for your hard work, it is appreciated. PeAcE
121. Terry says:
Evgeny and Escape75,
Many thanks both. We are back in business again.
I got fed up with problem Nvidia drivers and Win 10 over-riding me, so I bought an AMD Radeon card, same as I fitted to my Win 8.1 machine.
Escape75, very impressed with your script file, it makes patching so much faster and easier.
Thanks again
122. pstrejajo says:
Hello,
I am struggling to install XP driver on Vista. I have HP 6530b with Mobile 4 Series Express Chipset Family (GMA 4500MHD?) card but the Vista driver is buggy as well. Unfortunately the XP driver has no such section in INF file, like the one for HD driver, only
[Manufacturer]
%Intel% = Intel.Mfg
[Intel.Mfg]
%iCNTG0% = iCNT0, PCI\VEN_8086&DEV_2A42
%iCNTG1% = iCNT1, PCI\VEN_8086&DEV_2A43
%iEGLG0% = iEGL0, PCI\VEN_8086&DEV_2E02
%iEGLG1% = iEGL1, PCI\VEN_8086&DEV_2E03
%iEGLQ4G0% = iEGL0, PCI\VEN_8086&DEV_2E12
%iEGLQ4G1% = iEGL1, PCI\VEN_8086&DEV_2E13
%iEGLG4G0% = iEGL0, PCI\VEN_8086&DEV_2E22
%iEGLG4G1% = iEGL1, PCI\VEN_8086&DEV_2E23
%iEGLGVG0% = iEGL0, PCI\VEN_8086&DEV_2E32
%iEGLGVG1% = iEGL1, PCI\VEN_8086&DEV_2E33
%iEGLGB0% = iEGL0, PCI\VEN_8086&DEV_2E42
%iEGLGB1% = iEGL1, PCI\VEN_8086&DEV_2E43
%iEGLGBU0% = iEGL0, PCI\VEN_8086&DEV_2E92
%iEGLGBU1% = iEGL1, PCI\VEN_8086&DEV_2E93
Nothing related to Vista or XP, so cannot make any change in it. Setup ends with error that driver is not validated for this computer and I should use driver from computer manufacturer, but if I use the XP driver from HP I am getting just the same error.
I tried to update the driver via device manager but Vista decides that it already has the newest driver. I tried to uninstall the driver but still I have some apparently pre-installed Intel driver 7.15.10.1488 and above actions end in the same way. Am I out of luck with this crap graphic card?
Maybe it is because the drivers have different names? Vista has igdkmd32.sys and for XP it is igxpmp32.sys. I would appreciate your comments.
123. dj02 says:
this patch works on win 10 as well?
• Xn says:
Administrator of your computer with Windows 10 is M$. Any interference with the system can be undone by the system. This system is not stable without patching, so… If you have the Professional or Enterprise version of Win10, you can probably downgrade to Windows 8.1 (read the EULA) and patch it. Pay attention when using Windows Update on Win7 and Win8.1 (telemetry; auto-update to Windows 10 aka GWX.exe).
124. Xn says:
Thank you. The patch works properly, but please add a comment for Windows 8.1: after selecting the non-default “Windows 8.1 Patched” entry, the computer will show a black screen for a few seconds and restart. After the restart, the patched system loads. I have 4GB of RAM. Before the patch: 1.97GB free; after: 3.20GB free. After testing, I switched the default boot item to the patched version. Why does M$ block the PAE function? It should be possible to enable it with a warning: “not all devices/drivers support it. You use this option at your own risk”.
125. David says:
Hi, I am using intel i5 5300u with HD5500 graphic running win7 sp1 32bit. Is there still problem with this Intel HD graphic?
THank you very much!
126. yvbprakash@yahoo.com says:
Thanks for the patch, it worked for me, on dell PC windows10.
127. temuc0 says:
PLS
when I run it in an admin PowerShell or cmd on 8.1, it gives an error: “unable to copy file:…”
128. Philip Thomas says:
The second command to disable the digital signature verification is failing me
• Philip Thomas says:
By the way I have windows 7
129. Cherry says:
patchpae2 is working great on my intel HD 4600 graphics with driver version 15.36.31.4414
130. Lea says:
It seems that many people get a “failed”, just like me. So today I used both PatchPae2 and PatchPae3. On mine, PatchPae2 failed for winload.exe, and PatchPae3 “failed” for ntkrnlpa.exe; that’s the reason why I combined them.
Sorry for my very low English skill level.
• evgeny says:
Lea
|
2017-09-22 04:15:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3460095226764679, "perplexity": 10153.56393609877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688208.1/warc/CC-MAIN-20170922041015-20170922061015-00508.warc.gz"}
|
https://ftp.aimsciences.org/article/doi/10.3934/dcds.2011.31.847
|
# Frequency locking of modulated waves
• We consider the behavior of a modulated wave solution to an $\mathbb{S}^1$-equivariant autonomous system of differential equations under an external forcing of modulated wave type. The modulation frequency of the forcing is assumed to be close to the modulation frequency of the modulated wave solution, while the wave frequency of the forcing is supposed to be far from that of the modulated wave solution. We describe the domain in the three-dimensional control parameter space (of frequencies and amplitude of the forcing) where stable locking of the modulation frequencies of the forcing and the modulated wave solution occurs.
Our system is a simplest case scenario for the behavior of self-pulsating lasers under the influence of external periodically modulated optical signals.
Mathematics Subject Classification: Primary: 34C30, 34C14, 34C15; Secondary: 34C29, 34C60, 34D35, 34D06.
|
2023-04-01 20:56:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.5443326234817505, "perplexity": 5676.732709210834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00319.warc.gz"}
|
https://www.quizover.com/course/section/assessment-fractions-06-by-openstax
|
# 2.10 Fractions - 06
## Memorandum
14. a) denominator
b) common denominator
c) multiple
d) tellers
e) number
f) fractions
g) improper fractions
h) simplify
15.2 a)
= $\frac{12}{21}$ + $\frac{14}{21}$
= $\frac{26}{21}$
= $1\frac{5}{21}$
b)
= $\frac{5}{10}$ + $\frac{6}{10}$
= $\frac{11}{10}$
= $1\frac{1}{10}$
c)
= $\frac{36}{45}$ - $\frac{25}{45}$
= $\frac{11}{45}$
d)
= $\frac{4}{6}$ - $\frac{3}{6}$
= $\frac{1}{6}$
16.
a)
= $11\frac{2}{3}$ + $\frac{1}{7}$
= $11\frac{14}{21}$ + $\frac{3}{21}$
p = $11\frac{17}{21}$
b)
= $3\frac{1}{4}$ - $\frac{1}{9}$
= $3\frac{9}{36}$ - $\frac{4}{36}$
t = $3\frac{5}{36}$
c)
= $6\frac{3}{4}$ - ($2\frac{1}{2}$ + $1\frac{2}{3}$)
= $6\frac{3}{4}$ - ($2\frac{3}{6}$ + $1\frac{4}{6}$)
= $6\frac{3}{4}$ - $4\frac{1}{6}$
= $2\frac{9}{12}$ - $\frac{2}{12}$
g = $2\frac{7}{12}$
d)
= $9\frac{7}{8}$ - ($3\frac{9}{12}$ + $1\frac{8}{12}$)
= $9\frac{7}{8}$ - $5\frac{5}{12}$
= $4\frac{7}{8}$ - $\frac{5}{12}$
= $4\frac{21}{24}$ - $\frac{10}{24}$
v = $4\frac{11}{24}$
## Activity: addition and subtraction of fractions [lo 1.7.3]
14. Addition and subtraction of fractions
LET US REVISE.
The answers to the following questions are hidden below.
Circle them when you find them and then complete the sentences.
a b t t t s o n k o f m n
d e n o m i n a t o r y u
e d e l u o a e n r a j m
n k l l l e a m d o c p e
o h a e t m l e i n t o r
m m v r i e d r g e i o a
i n i s p r f e s g o g t
n s u x l m g p t t n h o
a e q k e l v o l e s t r
t d e f s h j r k l e e s
o q w e r t y p y o l u h
r s d a z d o m u b g e s
s i m p l i f i e d e l h
a) We can only add or subtract fractions if the.................................................. are the same.
b) If the denominators differ, we must find .................................................. fractions with the same denominators.
c) We can find the common denominator easily by using ..................................................
d) We only add the.................................................. together.
e) The .................................................. stays unchanged when we add or subtract.
f) When we add mixed numbers together, we first add the natural numbers and then
the ..................................................
g) When we subtract mixed numbers, we can first change them to ................................................. fractions.
h) Answers must always be .................................................. as far as possible.
15.1 Do you still remember?
When we add or subtract e.g. one third ( $\frac{1}{3}$ ) + four fifths ( $\frac{4}{5}$ ) or five sixths ( $\frac{5}{6}$ ) – two nineths ( $\frac{2}{9}$ ) we must first make the DENOMINATORS the same. To do this we must look for the Lowest Common Multiple (LCM) .
If we want the LCM of 3 and 5 we can work as follows:
3: 3 ; 6 ; 9 ; 12 ; 15 ; 18 ; 21 ; etc.
5: 5 ; 10 ; 15 ; 20 ; 25 ; etc.
Thus we change both denominators to 15:
$\frac{1}{3}=\frac{1\times 5}{3\times 5}=\frac{5}{15}$ and $\frac{4}{5}=\frac{4\times 3}{5\times 3}=\frac{12}{15}$
Thus: $\begin{array}{}\frac{1}{3}+\frac{4}{5}\\ \frac{5}{\text{15}}+\frac{\text{12}}{\text{15}}\\ \frac{\text{17}}{\text{15}}\\ 1\frac{2}{\text{15}}\end{array}$
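The subtraction example mentioned above, $\frac{5}{6}-\frac{2}{9}$, works the same way (worked out here for completeness). The LCM of 6 and 9 is 18, so:
$\frac{5}{6}-\frac{2}{9}=\frac{15}{18}-\frac{4}{18}=\frac{11}{18}$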
15.2 Calculate the following:
a) $x=\frac{4}{7}+\frac{2}{3}$
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
b) $y=\frac{1}{2}+\frac{3}{5}$
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
c) $d=\frac{4}{5}-\frac{5}{9}$
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
d) $k=\frac{2}{3}-\frac{1}{2}$
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
16. Work together with a friend and calculate:
a) $p=7\frac{2}{3}+4\frac{1}{7}$
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
b) $t=5\frac{1}{4}-2\frac{1}{9}$
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
c) $g=6\frac{3}{4}-\left(2\frac{1}{2}+1\frac{2}{3}\right)$
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
d) $v=9\frac{7}{8}-\left(3\frac{3}{4}+1\frac{2}{3}\right)$
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
17. CHALLENGE!
Divide into groups of three. Complete the following table by filling in the number of hours you spent doing homework last week:
| NAME | Mon | Tues | Wed | Thur | Fri |
|------|-----|------|-----|------|-----|
| e.g. Nomsa | $1\frac{1}{2}$ | $2\frac{1}{4}$ | $3\frac{3}{4}$ | $1\frac{1}{2}$ | $\frac{1}{2}$ |
| 1. ............................ | ............ | ............ | ............ | ............ | ............ |
| 2. ............................ | ............ | ............ | ............ | ............ | ............ |
| 3. ............................ | ............ | ............ | ............ | ............ | ............ |
a) How many hours did each member of the group spend on homework last week?
1. _________________________________
2. _________________________________
3. _________________________________
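(For example, the completed sample row gives Nomsa $1\frac{1}{2}+2\frac{1}{4}+3\frac{3}{4}+1\frac{1}{2}+\frac{1}{2}=9\frac{1}{2}$ hours.)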
b) Who spent the most time on homework? _______________________________
c) Who learnt the least? _________________________________
d) Calculate the difference between b en c’s answers.
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
## Assessment
Learning Outcome 1: The learner will be able to recognise, describe and represent numbers and their relationships, and to count, estimate, calculate and check with competence and confidence in solving problems.
Assessment Standard 1.7: We know this when the learner estimates and calculates by selecting and using operations appropriate to solving problems that involve:
1.7.3: addition, subtraction and multiplication of common fractions.
|
2018-10-18 22:26:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 61, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.62870854139328, "perplexity": 3402.2714326941486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512015.74/warc/CC-MAIN-20181018214747-20181019000247-00038.warc.gz"}
|
https://itectec.com/database/the-y-axis-of-this-oracle-ash-viewer/
|
# The y-axis of this Oracle ASH Viewer
monitoringoracle
I am investigating different monitoring software for my database, and I stumbled upon this open source ASH Viewer. I noticed in the main graph on the y-axis for resource usage, it's labeled as "Active Sessions" with an axis of varying integers
I don't understand what "Active Sessions" is supposed to mean. Is this the CPU % utilization of the database?
Since the link you included says the data is taken from V$ACTIVE_SESSION_HISTORY, these are presumably the wait event classes, not a CPU percentage. ASH samples every active session once per second, so the diagram shows how many sessions are waiting on each event class or actively executing ("CPU used") at any given time.
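For anyone wanting to reproduce the chart's numbers, a minimal sketch of that aggregation (my illustration, not necessarily the tool's actual query; the columns are standard in V$ACTIVE_SESSION_HISTORY, but verify against your Oracle release):
SELECT sample_time,
       CASE session_state WHEN 'ON CPU' THEN 'CPU' ELSE wait_class END AS activity,
       COUNT(*) AS active_sessions
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
GROUP  BY sample_time,
          CASE session_state WHEN 'ON CPU' THEN 'CPU' ELSE wait_class END
ORDER  BY sample_time;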
|
2021-10-25 01:22:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5017009377479553, "perplexity": 2051.922321009059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00715.warc.gz"}
|
https://physicscup.ee/physics-cup-taltech-2021-pr-3-hint-1-2/
|
# Physics Cup – TalTech 2021 – Pr. 3, Hint 1.
The first hint will be a long piece of theory which you can find, in principle, in textbooks, but it might be hard to collect all the required pieces, so let us present these pieces here.
Small oscillations of a system of bodies are described by a set of linear differential equations of second order. For instance, if three bodies of masses $m_1$, $m_2$, and $m_3$, respectively, are constrained to move along the $x$-axis while connected with springs of stiffness $k$, we have $m_1\ddot x_1=-k(x_1-x_2)$, $m_2\ddot x_2=-k(2x_2-x_1-x_3)$, and $m_3\ddot x_3=-k(x_3-x_2)$, where $x_i$ denotes the displacement of the $i$-th mass. We have no nonlinear terms because, for small amplitudes, we can neglect all quadratic and higher terms in the Taylor expansion. Such equations can be written in matrix form (which isn't really needed to solve this problem) as $\ddot x_i=a_{ij}x_j$, where we assume Einstein's summation convention over repeated indices. For this example of three masses, assuming the masses to be all equal to $m$, we have $a_{11}=a_{33}=-k/m$, $a_{22}=-2k/m$, $a_{12}=a_{21}=a_{23}=a_{32}=k/m$, and $a_{13}=a_{31}=0$. Note that the matrix here is symmetric, $a_{ij}=a_{ji}$; this is a property which you may think of as following from Newton's third law, combined with the fact that all the masses are equal.
For the example above, the functions $x_{i}=x_{i}(t)$ are just the Euclidean coordinates of the point masses, but for more complicated situations, these are generalized coordinates, the total number of which needs to equal the number of degrees of freedom. The number of degrees of freedom is the smallest number of scalar parameters needed to describe a dynamical system fully. For instance, consider three point masses connected with two rigid rods, with a hinged connection at the middle joint, and with everything constrained to a horizontal plane. In order to describe the position of each of the point masses, we would need to use their $x$ and $y$ coordinates, so that three point masses would come with six coordinates in total. However, we have two scalar constraints (the distances between neighboring points), which reduces the number of required generalized coordinates by two. As a result, we need four generalized coordinates; for instance, these coordinates can be $x_1=\xi$ and $x_2=\eta$ (the coordinates of the first mass) together with $x_3=\alpha$ and $x_4=\beta$ (the angles of the rods). However, we can always go to a reference frame where the center of mass of the whole system is at rest at the origin, in which case the number of coordinates is further decreased by two (we won't need $\xi$ and $\eta$ anymore).
The key to an easy solution is a convenient choice of generalized coordinates. In some cases, you may be able to find convenient coordinates while some scalar constraints remain still unused (i.e. your number of generalized coordinates $N$ is still greater than the number of degrees of freedom $n$). In that case, you may just consider the relevant $n$-dimensional subspace of the $N$-dimensional space defined by your generalized coordinates.
Next, we introduce the concept of natural modes; these are the modes where all the coordinates oscillate with the same frequency. The general theory of coupled oscillators tells us that arbitrary motion of the system is a linear combination (superposition) of the natural modes. For instance, in the case of the example with three equal masses connected with springs, one of the obvious modes is when the central point mass is at rest, and the other two oscillate in opposite phase: $x_1=-x_3=x_0\cos(\omega_1 t+\varphi)$, $x_2=0$. Due to symmetry, it is clear that $x_1$ and $x_3$ remain moving symmetrically while there is no net force exerted onto the mass in the middle. Now we can easily conclude from Newton’s second law for one of the masses that $\omega_1=\sqrt {k/m}$. This motion is described by the eigenvector $\mathbf X=(1,0,-1)$ (i.e. $X_1=-X_3=1$, $X_2=0$); a natural mode can be conveniently expressed in terms of the eigenvector as $\mathbf x=\mathbf Xx_0\cos(\omega_1 t+\varphi)$. There is also the trivial mode when all the masses move together with constant speed: with eigenvector $\mathbf Y=(1,1,1)$, $\mathbf x=\mathbf Yv_0t$. This can be interpreted as a motion with an infinitely long period, i.e. with $\omega_2=0$.
We are missing one more mode, which can be easily found here if we know a useful fact: for a symmetric matrix $a_{ij}=a_{ji}$, the eigenvectors corresponding to different natural frequencies are perpendicular to each other, i.e. we need to find such a vector $Z_{i}$ that $\sum_iX_iZ_{i}=0$ and $\sum_iY_iZ_{i}=0$. It is easy to see that these equalities are satisfied with $\mathbf Z=(1,-2,1)$ (keep in mind that our space of eigenvectors is three-dimensional, because we have three degrees of freedom; therefore, all the vectors which are perpendicular to the plane defined by the vectors $\mathbf X$ and $\mathbf Y$ must be parallel to $\mathbf Z$). So, our missing mode is of the form $\mathbf x=\mathbf Zz_0\cos(\omega_3t+\phi)$; let us express the average kinetic and potential energies for such a mode, which must be equal to each other for sinusoidal oscillations: $\left\langle E_{\mathrm{kin}}\right\rangle=\frac 14m\omega_3^2z_0^2\mathbf Z^2=\frac 32m\omega_3^2z_0^2$ and $\left\langle E_{\mathrm{pot}}\right\rangle=\frac 12k[z_0(1+2)]^2=\frac 92kz_0^2$, hence $\omega_3^2=3k/m$.
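As a quick numerical cross-check of the three-mass example (a sketch I am adding, not part of the original hint; $k$ and $m$ are set to 1 purely for illustration), the natural frequencies come from the eigenvalues of the coefficient matrix, since substituting $\mathbf x=\mathbf X\cos(\omega t+\varphi)$ into $\ddot x_i=a_{ij}x_j$ gives $a_{ij}X_j=-\omega^2X_i$:
import numpy as np

k, m = 1.0, 1.0
A = (k / m) * np.array([[-1.0,  1.0,  0.0],
                        [ 1.0, -2.0,  1.0],
                        [ 0.0,  1.0, -1.0]])

eigvals, eigvecs = np.linalg.eigh(A)            # A is symmetric, so eigh applies
omegas = np.sqrt(np.clip(-eigvals, 0.0, None))  # clip guards against -0.0 rounding noise
print(omegas)    # ~ [sqrt(3), 1, 0] in units of sqrt(k/m): the three modes from the text
print(eigvecs)   # columns proportional to (1,-2,1), (1,0,-1), (1,1,1)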
Please submit the solution of this problem via e-mail to physics.cup@gmail.com.
For full regulations, see the “Participate” tab.
|
2021-09-24 19:18:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 53, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9312935471534729, "perplexity": 119.84083111243342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057564.48/warc/CC-MAIN-20210924171348-20210924201348-00357.warc.gz"}
|
http://tex.stackexchange.com/questions/11565/table-of-contents-customize-space-between-dots-before-page-number
|
I am writing a small KOMA-Script `scrartcl` document with multiple sections. In the table of contents, LaTeX fills the current line with dots between the end of the section name and the page number. Is there any way to customize this behavior? I would like to reduce the space between the single dots (i.e. use more dots in the same space).
-
@Björn: that's not the default behavior of scrartcl. Section titles would be set without dots, subsections would produce dotted TOC lines. Perhaps show your settings, the best would be a minimal example. – Stefan Kottwitz Feb 20 '11 at 14:32
Yes, you are right. Actually, I am referring to subsections (and subsubsections). However, this is a general question that is also interesting for other document classes. – Björn Marschollek Feb 20 '11 at 14:39
For dotted TOC lines, standard LaTeX and also KOMA-Script use the internal macro `\@dotsep` which specifies that space in mu. Its original value is 4.5.
A smaller value packs the dots more tightly, for example:

```
\makeatletter
\renewcommand\@dotsep{2}
\makeatother
```
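In case a complete test file helps, here is a minimal example (my sketch; scrartcl prints dotted lines for subsection entries, as noted in the comments):

```
\documentclass{scrartcl}
\makeatletter
\renewcommand\@dotsep{2}% default is 4.5; smaller means denser dots
\makeatother
\begin{document}
\tableofcontents
\section{A section}
\subsection{A subsection with a dotted TOC line}
\end{document}
```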
@lockstep: Right! I thought the same and added it, right at the same time of your comment I guess. :-) I used `\show\@dotsep` first. Then I cross-checked: it's usually defined by the class, for example by `\newcommand\@dotsep{4.5}` in the base classes. – Stefan Kottwitz Feb 20 '11 at 14:52
|
2015-05-30 23:09:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9707628488540649, "perplexity": 1467.2905256061554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932737.93/warc/CC-MAIN-20150521113212-00235-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://www.helpteaching.com/tests/1447817/midpoints-and-distance-in-the-complex-plane
|
##### Notes
This printable supports Common Core Mathematics Standard HSN-CN.B.6
# Midpoints and Distance in the Complex Plane (Grades 11-12)
## Midpoints and Distance in the Complex Plane
1.
Let the graph below represent the complex plane, where x is the real axis and y is the imaginary axis. What is the midpoint of the line segment created by joining the two complex numbers represented by J and C?
1. $1/2 + 5/2 i$
2. $3/2 + 3/2 i$
3. $-3/2 -3/2 i$
4. $1 + 5i$
2.
If $z= -11 + 2i$ and $w= -1 + 6i$ are plotted on the complex plane, and a line segment created by joining the two points, what would be the midpoint of this line segment?
1. $-5 - 2i$
2. $-5/2 + 5/2i$
3. $-6 + 4i$
4. $-6 + 3i$
3.
If $-3 - 9i$ and $3 + i$ are plotted on the complex plane, and a line segment is created by joining these two points, what is the midpoint of this line segment?
1. $-2 - 5i$
2. $-4i$
3. $2 + 5i$
4. $3/2 - 9/2 i$
4.
Which of the listed complex numbers, when represented by a point in the complex plane, is both equidistant and collinear with the complex numbers represented by points A and K?
1. $3i$
2. $1 + 3/2 i$
3. $1/2 + 3i$
4. $3 + 1/2 i$
5.
Let $z=-8+5i$. If the midpoint of the line segment created by complex numbers $z$ and $w$ is $-3+3/2 i$, what is the value of $w ?$
1. $w = -11/2 + 13/4 i$
2. $w = 2-2i$
3. $w = 14 + i$
4. $w = 4 + 7/2 i$
6.
Find the distance between $z = 6 + 7i$ and $w = -2 - 3i$ in the complex plane.
1. $4sqrt(2)$
2. $18$
3. $2sqrt(2)$
4. $2sqrt(41)$
7.
Which of the following expressions represent(s) the distance between $z_1$ and $z_2$ in the complex plane? Choose all correct answers.
1. $|z_1 - z_2|$
2. $|z_2 - z_1|$
3. $|z_2| - |z_1|$
4. $|z_2 + z_1|$
8.
What is the distance between the complex numbers $2-4i$ and $-3/2 - 1/2 i$ in the complex plane?
1. $7/2 sqrt(2)$
2. $1/2sqrt(130)$
3. $5/2 sqrt(2)$
4. $1/8 sqrt(82)$
9.
If $z = 5 + 4i$, and is plotted in the complex plane, which of the following complex numbers would be 3 units away from $z$ in the complex plane? Choose all correct answers.
1. $2 + 4i$
2. $6 + 6i$
3. $3 + (4+sqrt(5)) \ i$
4. $8 + i$
10.
Kayslee is given the complex numbers C and J in polar form, $2sqrt(5) \ (cos63.4° + i \ sin63.4°)$ and $sqrt(2) \ (cos135° + i \ sin135°)$ respectively. She notices that she can find the distance between these numbers, $d$, in the complex plane simply by taking the difference of their arguments, $71.6°$, and then applying the law of cosines formula, $d = sqrt( (2sqrt(5))^2 + (sqrt(2))^2 - 2 * 2sqrt(5) * sqrt(2) cos71.6° )$. Is this value correct? Check using the method for complex numbers in rectangular form. Is Kayslee's method applicable to any two complex numbers in the complex plane?
1. Yes, the value is correct, and Kayslee's method will work for all complex numbers (being careful with how the difference of arguments is calculated).
2. Yes, the value is correct, but this method only works in some instances.
3. Yes, the value is correct, but this is merely coincidence (there is no reason for it).
4. No, this value is not correct, and her method is also not correct (it does not correctly calculate the distance between complex numbers in the complex plane).
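For question 10, here is a quick numerical cross-check (a sketch of mine in Python; it is not part of the printable, and 63.4° is the rounded argument of 2+4i, so tiny rounding differences are expected):
import cmath, math

C = cmath.rect(2 * math.sqrt(5), math.radians(63.4))   # ~ 2 + 4i
J = cmath.rect(math.sqrt(2), math.radians(135))        # = -1 + i
direct = abs(C - J)                                    # rectangular-form distance |C - J|

r1, r2 = 2 * math.sqrt(5), math.sqrt(2)
law = math.sqrt(r1**2 + r2**2 - 2*r1*r2*math.cos(math.radians(135 - 63.4)))

print(direct, law)   # both ~ 4.243 (= sqrt(18)), so the two methods agree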
|
2020-07-14 22:12:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5149734616279602, "perplexity": 363.9331999613768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151761.87/warc/CC-MAIN-20200714212401-20200715002401-00111.warc.gz"}
|
https://codegolf.stackexchange.com/questions/198550/simple-circular-words/198572
|
# Simple Circular Words
A "simple circular" word is a word whose chords do not intersect. The chords of a word may be seen by laying out the alphabet in a circle, and then connecting the word's consecutive letters.
ROLE
LAKE
BALMY
## Failing Example
A word fails to be simple circular if any of its chords intersect:
## The Challenge
Write a program or function that takes a word and returns true if it's simple circular, false otherwise.
• Code golf, fewest bytes wins.
• Standard rules.
• You may assume there are no repeated letters in the word.
• You may assume every word has at least 2 letters
• You may assume the word is all uppercase, or all lowercase, whichever you prefer.
• You may output any two consistent values for true and false.
## Test Cases
### True
ROLE, LAKE, BALMY, AEBDC, ABZCYDXE, AZL, ZA
### False
BONES, ACDB, ALBZ, EGDF
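For reference, here is an ungolfed check of the chord-crossing condition the answers below golf down (a sketch of mine in Python, not one of the entries): two chords with four distinct endpoints cross exactly when one of them has exactly one endpoint strictly between the other chord's endpoints.
def simple_circular(word):
    # Positions of the letters on the alphabet circle (A=0 ... Z=25).
    pos = [ord(c) - ord('A') for c in word.upper()]
    chords = list(zip(pos, pos[1:]))

    def inside(a, b, x):
        # Is x strictly inside the interval (min(a,b), max(a,b))? Exactly one
        # endpoint of the other chord being inside means the four endpoints
        # interleave around the circle, i.e. the chords cross.
        return min(a, b) < x < max(a, b)

    for i, (a, b) in enumerate(chords):
        for c, d in chords[i + 1:]:
            # Chords sharing an endpoint (consecutive letters) never cross.
            if len({a, b, c, d}) == 4 and inside(a, b, c) != inside(a, b, d):
                return False
    return True

assert all(simple_circular(w) for w in ["ROLE", "LAKE", "BALMY", "AEBDC", "ABZCYDXE", "AZL", "ZA"])
assert not any(simple_circular(w) for w in ["BONES", "ACDB", "ALBZ", "EGDF"])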
• After around 1 hour of struggling, I've concluded that this cannot be done visually in Scratch. – lyxal Jan 27 '20 at 7:41
• @AZTECCO "You may assume there are no repeated letters in the word." – Kevin Cruijssen Jan 27 '20 at 8:25
• @Kevin Cruijssen ah yes, sorry I skipped that. Although this rule makes the challenge simple it's common for words to have repeated letters, plus ngn answer seems to work for repeated letters too. Maybe we can have a 2nd challenge later based on this.. – AZTECCO Jan 27 '20 at 9:41
• @AZTECCO I'll be posting a more difficult sequel to this question later this week. Re: repeated letters specifically, I wanted to avoid dealing with doubles like GOON, where the only sensible interpretation is to add a pre-processing step that removes them, which felt uninteresting to me. I'll grant it might have been better to prohibit only contiguous repeats, rather than repeats altogether. – Jonah Jan 27 '20 at 13:14
# Jelly, 12 bytes
O_Ṫ$:Ṫ$¬EƲƤẠ
Try it online!
A Jelly translation of Grimy’s 05AB1E answer; be sure to upvote that one too! A monadic link taking a Jelly string as its argument and returning a Jelly boolean.
## Explanation
O | Convert to Unicode codepoints
ƲƤ | Following applied to each prefix:
_Ṫ$ | - Subtract tail (after popping tail)
:Ṫ$ | - Integer divide by tail (after popping tail)
¬ | - Not
E | - All equal
Ạ | All
# Perl 6, 46 bytes
{!/[(.).*]**4<?{[+^] $0 Xeq[...] ~<<$0[^2]}>/}
Try it online!
A regex solution that finds if there are four characters in the string such that the two later characters are in different sections of the circle, as defined by the first two characters.
### Explanation
{ } # Anonymous code block returning
!/ / # If the input does not match
[(.).*]**4 # Any 4 characters
<?{ } # Where
[...] # The range from
~<<$0[^2] # The first character to the second
Xeq # Contains which of
$0 # The four characters
[+^] # Reduce the list of booleans by xor
# Effectively, if only one of the 3rd or 4th character is in that range
# K (ngn/k), 26 24 bytes
{&/|/x=&\'x:-:\|26!x-*x}
Try it online!
*x first letter of the argument x
x- subtract it from each of x as ascii code
26! mod 26
| reverse
-:\ make a pair of the list and its negation
x: assign to x
&\' cumulative minima and maxima (max is min under negation)
|/x= boolean mask for where x[i] is the current minimum or maximum
&/ all (and-reduce)
# 05AB1E, 14 13 bytes
Fixed a bug thanks to NickKennedy
Ç.sεć-ć/ï_Ë}P
Try it online!
Ç # convert each letter to its codepoint
.s # suffixes of this list
ε } # for each suffix:
ć- # subtract first from others: [b-a, c-a, d-a, ...]
ć/ # divide others by first: [(c-a)/(b-a), (d-a)/(b-a), ...]
ï # round down
_ # compare with 0
Ë # all equal?
P # after the loop, product (1 iff the list is all 1s)
• I knew a completely different approach would be shorter. Nice answer. Btw, -1 by using ÷ instead of /ï, which you funnily and consequently enough suggested yourself 2 days ago. :) – Kevin Cruijssen Jan 27 '20 at 14:48
• @KevinCruijssen no, that doesn't work. Try EGDF. ÷ rounds towards 0 while /ï rounds down. – Grimmy Jan 27 '20 at 14:48
• Ah, you're right, due to the -0.5. With the challenge I linked above where you suggested this golf, the challenge didn't cared about how you rounded. Point taken. :) Maybe EGDF is also a good test case for OP, or does it only apply to your formula and not so much to the approaches used in other answers. (PS: consequently=coincidentally in my comment above; I'm unable to edit it.) – Kevin Cruijssen Jan 27 '20 at 14:52
• @NickKennedy thanks, fixed (i just had to revert my 13 => 12 optimization, which was to use prefixes instead of suffixes). – Grimmy Jan 27 '20 at 16:31
• This is a really nice approach. Well done. – Jonah Jan 28 '20 at 13:58
# JavaScript (Node.js), 91 bytes
Returns false for simple circular, or true for non-simple circular.
s=>Buffer(s).some((o,i,a,j=i)=>a.some(_=>((g=i=>x=(a[i]-o%26)%26)(j)-g(i+1))*(x-g(++j))>0))
Try it online!
### Commented
s => // s = input string
Buffer(s) // turn s into a Buffer
.some((o, i, a, // for each ASCII code o at position i in a[]
j = i) => // and starting with j = i:
a.some(_ => // for each entry in a[]:
( //
( g = i => // g is a helper function taking a position i
x = (a[i] - o % 26) // return (a[i] - (o mod 26)) mod 26
% 26 // and assign the result to x
// NB: if a[i] is undefined, this gives NaN
// and forces the forthcoming test to fail
)(j) - // compute the difference between g(j)
g(i + 1) // and g(i + 1)
) * ( // multiply it by the difference
x - g(++j) // between g(i + 1) and g(j + 1) (and increment j)
) > 0 // if it's positive, the chords are intersecting
// inside the circle (i.e. s is not simple circular)
) // end of inner some()
) // end of outer some()
# 05AB1E, 21 20 bytes
Ask¬-₂%.sD€ß¥s€à¥+ĀP
Port of @ngn's K (ngn/k) answer, so make sure to upvote them!!
Input as a lowercase list of characters.
Explanation:
A # Push the lowercase alphabet: "abcdefghijklmnopqrstuvwxyz"
sk # Get the (0-based) index of each characters of the input-list in this alphabet
# i.e. ["b","a","l","m","y"] → [1,0,11,12,24]
¬ # Push the first index (without popping)
- # And subtract it from each index
# i.e. [1,0,11,12,24] - 1 → [0,-1,10,11,23]
₂% # Then take modulo-26 to make the negative values positive
# i.e [0,-1,10,11,23] → [0,25,10,11,23]
.s # Take the suffices of this list
# i.e. [0,25,10,11,23] → [[23],[11,23],[10,11,23],[25,10,11,23],[0,25,10,11,23]]
D # Duplicate it
€ß # Take the minimum of each suffix
# i.e. [[23],[11,23],[10,11,23],[25,10,11,23],[0,25,10,11,23]]
# → [23,11,10,10,0]
¥ # And take the deltas (forward differences) of those
# i.e. [23,11,10,10,0] → [-12,-1,0,-10]
s # Swap to get the suffices again
€à # Take the maximum of each suffix this time
# i.e. [[23],[11,23],[10,11,23],[25,10,11,23],[0,25,10,11,23]]
# → [23,23,23,25,25]
¥ # And take the deltas (forward differences) of those as well
# i.e. [23,23,23,25,25] → [0,0,2,0]
+ # Add the values at the same indices in the two lists together
# i.e. [-12,-1,0,-10] + [0,0,2,0] → [-12,1,2,-10]
Ā # Python-style truthify each (0→0; everything else → 1)
# i.e. [-12,1,2,-10] → [1,1,1,1]
P # And take the product of that to check if all are truthy
# i.e. [1,1,1,1] → 1
# (after which this is output implicitly as result)
• I think you forgot the and wait part. :P – lyxal Jan 27 '20 at 10:09
• @Lyxal fixed ;p – Kevin Cruijssen Jan 27 '20 at 10:10
• Well played ;P. I wish I could up vote this twice! – lyxal Jan 27 '20 at 10:13
# Charcoal, 29 bytes
⬤θ⬤…θκ⬤…θμ⬤…θξ⁼⁼‹ιν‹νλ⁼‹ιπ‹πλ
Try it online! Takes input in consistent case and outputs a Charcoal boolean (- for true, nothing for false). This has a rare use of the p variable. Explanation:
θ Input string
⬤ All characters satisfy
θ Input string
… Truncated to
κ Current index
⬤ All characters satisfy
θ Input string
… Truncated to
μ Current index
⬤ All characters satisfy
θ Input string
… Truncated to
ξ Current index
⬤ All characters satisfy
ι Fourth character
‹ Is less than
ν Second character
⁼ Equals
ν Second character
‹ Is less than
λ Third character
⁼ Equals
ι Fourth character
‹ Is less than
π First character
⁼ Equals
π First character
‹ Is less than
λ Third character
For each subsequence of four characters in the input word, the first two comparisons check whether the second character is between the third and fourth while the other comparisons check whether the first character is between the third and fourth. If the results of these comparisons is different then it means that the chord between the first two characters crosses the chord between the third and fourth characters and the whole expression therefore evaluates to false.
# J, 40 36 33 bytes
10(e.,)[:(/:~@,#.@e.])"1/~2]\3&u:
Try it online!
Decided to play my own game.
Still golfing...
But the idea is just to try all possible combinations of pairs of points, and check if their lines intersect, which, if the segements are AB and CD, happens when the letters' numerical values are arranged like:
_____
/ \
A...C...B...D
\______/
• I recommend holding off for a week before answering your own challenges. – Adám Jan 28 '20 at 13:31
• Just FYI for the future. – Adám Jan 28 '20 at 14:52
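# Haskell, 97 bytes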
Thanks @JoKing for saving 36 bytes!
Almost certain its not the optimal way to go, so please feel free to improve it.
x(a:b:c:d:_)|s<- \z->min a b>z||z>max a b=s c/=s d;x _=1<0;y s=or$map(x.concat)$mapM(\a->[[a],[]])s
## Explanation
However, I loved doing some propositional calculus. First, x tests whether a given sequence of four letters forms a non-circular word. Let $I$ be the interval defined by $a$ and $b$; then $a, b, c$ and $d$ form a circular word only if:
$$p\equiv(c\in I)\iff q\equiv(d\in I)\\ (p\Rightarrow q)\wedge(q\Rightarrow p)\\ (\neg p\lor q)\wedge(\neg q\lor p)\\ \left[(\neg p\lor q)\wedge \neg q\right]\lor\left[(\neg p\lor q)\wedge p\right]\\ \left[(\neg p\wedge\neg q)\lor(q\wedge\neg q)\right]\lor\left[(\neg p\wedge p)\lor(q\wedge p)\right]\\ (\neg p\wedge\neg q)\lor(q\wedge p)\\ \neg(p\lor q)\lor(q\wedge p)$$
Negating this proposition gives us
$$(p\lor q)\wedge\neg(q\wedge p)$$
which is equivalent to $p\veebar q$, in Haskell p/=q. Finally, y tests if any subsequence with length 4 from the original word is non-circular.
Here's a spaned version:
import Data.List
-- x
cuts (a:b:c:d:_)
  | s <- \z -> min a b > z || z > max a b
  = s c /= s d
cuts _ = False
-- y
isNonCircular word = or $ map (cuts . concat) $ mapM (\a -> [[a],[]]) word
• and gofled to 97 bytes (though I'm no Haskell golfer) – Jo King Jan 30 '20 at 0:51
• @JoKing how did i not think of that!? I thoutgh about using xor, but didn't find any simple implamentation and didn't even considered using /=, thank you. – Leonardo Moraes Jan 30 '20 at 15:08
• @JoKing I tried running your code on my computer, but it didn't compile, even though I see how it would work. – Leonardo Moraes Jan 30 '20 at 15:11
• You need the header, with the comment, and I believe that it must be compiled in GHC since -Xcpp is a GHC flag. Either that or add the v= before the or. Also lambdas are usually a bad idea. The lambda in @JoKing's golf can be eliminated for one more byte. – Wheat Wizard Jan 31 '20 at 1:46
• Additionally or + a map is the same as the function any, so instead of or.(x.concat<$>) you can do any(x.concat). – Wheat Wizard Jan 31 '20 at 1:50 # Ruby-nl, 70 bytes An adaptation of the Perl answer, but Perl is apparently black magic with... whatever they're doing that lets them tersely get any 4 characters. p$_.chars.combination(4).all?{|*m,c,d|(c=~r=/[#{m.sort*?-}]/)==(d=~r)}
Try it online!
• Can you explain the regex part? – Jonah Jan 29 '20 at 4:37
• @Jonah I take the first two characters selected, sort them, and join them with -. Then this is interpolated into the regex. So if the input is BONES, the program will eventually reach the combination ['B', 'O', 'E', 'S'], so m=['B', 'O']; c='E', d='S'. So it will check if 'E' is in the regex /[B-O]/, and then do the same for 'S', and find that the result for each of them is not the same, so it returns false. – Value Ink Jan 31 '20 at 2:30
• That’s very clever. Similar idea to my solution but the regex twist is quite fun. – Jonah Jan 31 '20 at 3:52
• We have to do what's necessary to make things shorter. The next best solution AFAIK a,b=m.sort;(a<c&&c<b)==(a<d&&d<b) is 2 bytes longer. – Value Ink Jan 31 '20 at 6:17
# C (clang), 119 114 bytes
c,n,b,j,o;g(char*w){for(int l[92]={j=o=c=0};n=*w;c=l[*w++])for(b=l[n],o|=b|c-j++&&c+~b&&b-c;n;)l[n--]++;return!o;}
Try it online!
We use an array representing the circle points (we start from '\0' instead of 'A' because the offset doesn't matter).
For each letter processed, we increment every array slot from that letter's position down to 0.
The next point is valid if
next == curr or
next == curr - 1 or
next == 0 and curr == j( iteration / left value )
LAKE
ABCDEFGHIJKLMN.. j curr next
00000000000000.. 0 0 L 0
11111111111*00.. 1 1 A 1
*1111111111100.. 2 2 K 2
3222222222*100.. 3 2 E 2
4333*222222100..
`
Thanks to @ceilingcat for saving 5
|
2021-05-16 09:50:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48595207929611206, "perplexity": 4521.144667376597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992516.56/warc/CC-MAIN-20210516075201-20210516105201-00006.warc.gz"}
|
http://icpc.njust.edu.cn/Problem/Zju/3404/
|
# Sticker
Time Limit: Java: 2000 ms / Others: 2000 ms
Memory Limit: Java: 65536 KB / Others: 65536 KB
## Description
"Shock You", a name of candy, which is new product of cc98. In fact, its selling point is the sticker inside. To collect different stickers, little children will buy it again and again.
The employees in cc98 are excellent. They have designed 3 kinds of stickers like this, this and this.
But the manager thinks that it is not enough. To persuade his employees, He investigated the habits of some children about collecting.
In this district, a child called WuKe buys one bag of candy every day and gets one sticker inside. So in N days he can get N stickers. If cc98 has N kinds of stickers, he can collect them all in N days at the earliest. Unfortunately, that is nearly impossible because of duplicates.
But WuKe is no fool. Though duplicate stickers are worthless to him, he can exchange them for different stickers with others. There are M collectors in the district. To avoid unfair competition, these collectors all want different stickers, and what they offer is different too. That means WuKe may have to exchange some stickers indirectly.
As we know, if WuKe buys one candy every day, it is usually difficult to collect all N kinds of stickers in N days. But by exchanging, it becomes much easier.
Now the manager of cc98 wants to know how many different ways WuKe can do it.
Note: Different order is regarded as different ways. For instance, if N = 2 and WuKe can exchange 0 to 1, there are 3 ways. (0 0, 0 1 and 1 0)
## Input
The first line is an integer C. Then C cases follow. There are no more than 100 cases.
For each case, the first line contains 2 integers N and M (1 ≤ M < N ≤ 200). Then there are M lines. Each line has 2 integers ai and bi (0 ≤ ai, bi < N), which means there is a collector who wants to exchange his/her sticker bi for your sticker ai.
## Output
A single line contains an integer O indicating the number of different ways. It may be very large, so output O mod 1000000007.
## Sample Input
2
2 1
0 1
5 3
0 1
1 2
3 4
## Sample Output
3
480
## Source
ZOJ Monthly, September 2010
|
2020-10-22 20:55:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22303663194179535, "perplexity": 2841.493699703361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880038.27/warc/CC-MAIN-20201022195658-20201022225658-00008.warc.gz"}
|
https://zbmath.org/?q=an%3A1037.34017
|
Double positive solutions of fourth-order nonlinear boundary value problems. (English) Zbl 1037.34017
The authors consider nonlinear fourth-order differential equations $$u^{(4)}(t)=a(t)f(u(t)),$$ $$t\in (0,1)$$, subject to several boundary conditions. Under suitable growth conditions on the nonlinearity f and using a generalized Leggett-Williams fixed-point theorem, they show that the given problem has at least two positive solutions.
MSC:
34B18 Positive solutions to nonlinear boundary value problems for ordinary differential equations 34B15 Nonlinear boundary value problems for ordinary differential equations
Full Text:
References:
[1] DOI: 10.1080/00036810008840810 · Zbl 1031.34025
[2] Krasnosel’skii M.A., Positive Solutions of Operator Equations (1964)
[3] Ma R.Y., J. Math. Anal. Appl. 59, pp. 225– (1995)
[4] DOI: 10.1006/jmaa.1997.5639 · Zbl 0892.34009
[5] DOI: 10.1016/S0898-1221(01)00188-2 · Zbl 1006.34022
[6] Guo D.J., Functional Methods for Nonlinear Ordinary Differential Equations (1995)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-06-23 09:09:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4530509412288666, "perplexity": 2291.1515362406976}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488536512.90/warc/CC-MAIN-20210623073050-20210623103050-00606.warc.gz"}
|
https://stackabuse.com/python-check-if-array-or-list-contains-element-or-value/
|
# Python: Check if Array/List Contains Element/Value
### Introduction
In this tutorial, we'll take a look at how to check if a list contains an element or value in Python. We'll use a list of strings, containing a few animals:
animals = ['Dog', 'Cat', 'Bird', 'Fish']
### Check if List Contains Element With for Loop
A simple and rudimentary method to check if a list contains an element is looping through it, and checking if the item we're on matches the one we're looking for. Let's use a for loop for this:
for animal in animals:
    if animal == 'Bird':
        print('Chirp!')
This code will result in:
Chirp!
### Check if List Contains Element With in Operator
Now, a more succinct approach is to use the built-in in operator with an if statement instead of a for statement. Paired with if, it returns True if an element exists in a sequence. The syntax of the in operator looks like this:
element in list
Making use of this operator, we can shorten our previous code into a single statement:
if 'Bird' in animals: print('Chirp')
This code fragment will output the following:
Chirp
This approach has the same efficiency as the for loop, since the in operator, used like this, calls the list.__contains__ function, which inherently loops through the list - though, it's much more readable.
### Check if List Contains Element With not in Operator
By contrast, we can use the not in operator, which is the logical opposite of the in operator. It returns True if the element is not present in a sequence.
Let's rewrite the previous code example to utilize the not in operator:
if 'Bird' not in animals: print('Chirp')
Running this code won't produce anything, since the Bird is present in our list.
But if we try it out with a Wolf:
if 'Wolf' not in animals: print('Howl')
This code results in:
Howl
### Check if List Contains Element With Lambda
Another way you can check if an element is present is to filter out everything other than that element, just like sifting through sand and checking if there are any shells left in the end. The built-in filter() method accepts a lambda function and a list as its arguments. We can use a lambda function here to check for our 'Bird' string in the animals list.
Then, we wrap the results in a list() since the filter() method returns a filter object, not the results. If we pack the filter object in a list, it'll contain the elements left after filtering:
retrieved_elements = list(filter(lambda x: 'Bird' in x, animals))
print(retrieved_elements)
This code results in:
['Bird']
Now, this approach isn't the most efficient. It's notably slower than the previous three approaches we've used. The filter() method itself is equivalent to the generator function:
(item for item in iterable if function(item))
The slower performance of this code comes, among other things, from converting the results into a list at the end, as well as executing a function on each item on every iteration.
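To make these efficiency claims concrete, here is a rough benchmark sketch (an editorial addition, not from the original article; absolute timings vary by machine and Python version):

```python
# Compare membership-check approaches; 'Wolf' is absent, forcing full scans.
import timeit

animals = ['Dog', 'Cat', 'Bird', 'Fish'] * 1000

print(timeit.timeit(lambda: 'Wolf' in animals, number=1000))
print(timeit.timeit(lambda: any(a == 'Wolf' for a in animals), number=1000))
print(timeit.timeit(lambda: animals.count('Wolf') > 0, number=1000))
print(timeit.timeit(lambda: list(filter(lambda x: 'Wolf' in x, animals)), number=1000))
```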
### Check if List Contains Element Using any()
Another great built-in approach is to use the any() function, which is just a helper function that checks if there are any (at least 1) instances of an element in a list. It returns True or False based on the presence or lack thereof of an element:
if any(element == 'Bird' for element in animals):
    print('Chirp')
Since this results in True, our print() statement is called:
Chirp
This approach is also an efficient way to check for the presence of an element. It's as efficient as the first three.
### Check if List Contains Element Using count()
Finally, we can use the count() function to check if an element is present or not:
list.count(element)
This function returns the occurrence of the given element in a sequence. If it's greater than 0, we can be assured a given item is in the list.
Let's check the results of the count() function:
if animals.count('Bird') > 0:
    print("Chirp")
The count() function inherently loops over the list to count occurrences, and this code results in:
Chirp
### Conclusion
In this tutorial, we've gone over several ways to check if an element is present in a list or not. We've used the for loop, in and not in operators, as well as the filter(), any() and count() methods.
Last Updated: March 19th, 2021
|
2022-08-16 16:20:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1748526394367218, "perplexity": 1794.3147226533076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572408.31/warc/CC-MAIN-20220816151008-20220816181008-00554.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/170406-complex-5th-roots-print.html
|
# Complex 5th Roots
• Feb 6th 2011, 07:53 PM
collegemath
Complex 5th Roots
Can someone please show me how to find the complex 5th roots of z=-sqrt(3)=i .
• Feb 6th 2011, 07:55 PM
mr fantastic
Quote:
Originally Posted by collegemath
Can someone please show me how to find the complex 5th roots of z=-sqrt(3)=i .
Convert to polar form. Use deMoivre's Theorem. You will find many examples of this type of question in this subforum.
If you need more help, please show all your work and say where you get stuck.
• Feb 6th 2011, 10:41 PM
Ithaka
Quote:
Originally Posted by collegemath
Can someone please show me how to find the complex 5th roots of z=-sqrt(3)=i .
Do you mean $z=-\sqrt3+i$?
Or $z=-\sqrt3-i$?
Anyway:
1. Find the modulus and argument of z (i.e. write z in polar form)
2. Use DeMoivre to calculate $\sqrt[5]{z}$= $z^\frac{1}{5}$
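(Editorial addition, not part of the thread: a quick numerical check in Python, assuming the intended number is z = -sqrt(3) + i.)

```python
import cmath, math

z = complex(-math.sqrt(3), 1)
r, theta = cmath.polar(z)  # modulus 2, argument 5*pi/6
roots = [cmath.rect(r ** 0.2, (theta + 2 * math.pi * k) / 5) for k in range(5)]
for w in roots:
    print(w, abs(w ** 5 - z) < 1e-9)  # each fifth power reproduces z
```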
|
2017-12-17 14:41:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569454789161682, "perplexity": 1437.196394601174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596051.82/warc/CC-MAIN-20171217132751-20171217154751-00064.warc.gz"}
|
http://mathoverflow.net/questions/31118/integer-polynomials-taking-square-values?answertab=votes
|
# Integer polynomials taking square values
Is there a way to determine a formula giving all integer values of $x$ for which the value of a polynomial $P(x)$ with integer coefficients is a square?
That is, is there a closed formula for:
$X = \{ x \in \mathbb{N} : \exists \ n \in \mathbb{N} : P(x) = n^2 \}$ ?
I'm interested in particular in $P'(x) = 8x^2-8x+1$, but am wondering about the general case as well.
For $P'(x)$ a sample of $X$ is $\{ 1, 3, 15, 85, 493, 2871, 16731, 97513, \ldots \}$.
-
For $P$ of degree at least $5$, at least for some $n$ the roots of $P(x) - n^2 = 0$ will in general not be solvable by radicals. So in this sense there need not be a closed formula. If you intend something else, please clarify. – Pete L. Clark Jul 8 '10 at 21:11
This seems a bit localized/low-level for MO... at least, in my hasty and inexpert view – Yemon Choi Jul 8 '10 at 21:25
When P is quadratic one can use the theory of Pell equations (en.wikipedia.org/wiki/Pell's_equation). In general the problem is hard; for generic P of degree greater than 2, X is finite by Siegel's theorem, and even the case where P is cubic is a difficult problem about elliptic curves for which one generally needs computer calculations. – Qiaochu Yuan Jul 8 '10 at 21:43
@Qiaochu: even when $P$ is quadratic the question is a little harder than Pell; it is Pell (which is "what are the units in this real quadratic field?") plus a problem about principal ideals: "list all the principal ideals in the integers of this quadratic field with that given norm". For example to solve $n^2=37x^2+3$ you need to figure out whether the factorization of $(3)$ into primes in the integers of $\mathbf{Q}(\sqrt{37})$ is into two principal primes or two non-principal ones. I'll leave it as an exercise ;-) which you can do if you want to convince yourself that Pell alone isnt enough – Kevin Buzzard Jul 8 '10 at 22:48
@OP: for the $8x^2-8x+1$ question you can get the next number in the sequence like this: if $a_n$ is the $n$th term then $a_n=6a_{n-1}-a_{n-2}-2$. Proof by completing the square and then general Pell equation theory. – Kevin Buzzard Jul 8 '10 at 22:59
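(Editorial addition: a quick Python check of the recurrence from the last comment against the sample values of X given in the question.)

```python
# a_n = 6*a_{n-1} - a_{n-2} - 2, starting from 1, 3;
# verify that 8x^2 - 8x + 1 is a perfect square for each term.
from math import isqrt

a, b = 1, 3
seq = [a, b]
for _ in range(6):
    a, b = b, 6 * b - a - 2
    seq.append(b)
print(seq)  # [1, 3, 15, 85, 493, 2871, 16731, 97513]
assert all(isqrt(8*x*x - 8*x + 1) ** 2 == 8*x*x - 8*x + 1 for x in seq)
```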
There's a fairly detailed explanation of the solution to a similar equation here. See also this page, which can give you an automated step-by-step solution to such quadratic diophantine equations.
I'll also add that the command Reduce[8 x^2 - 8 x + 1 - y^2 == 0 && Element[x | y, Integers], {x, y}] will produce the answer to your particular problem in Mathematica fairly quickly. I'm making this an answer because the output is too huge to fit into the comments.
(C[1] [Element] Integers && C[1] >= 0 &&
x == 1/32 (16 +
4 (-2 (17 - 12 Sqrt[2])^C[1] +
Sqrt[2] (17 - 12 Sqrt[2])^C[1] - 2 (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) &&
y == 1/2 ((17 - 12 Sqrt[2])^C[1] -
Sqrt[2] (17 - 12 Sqrt[2])^C[1] + (17 + 12 Sqrt[2])^C[1] +
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) || (C[1] [Element] Integers &&
C[1] >= 0 &&
x == 1/32 (16 +
4 (-2 (17 - 12 Sqrt[2])^C[1] +
Sqrt[2] (17 - 12 Sqrt[2])^C[1] - 2 (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) &&
y == 1/2 (-(17 - 12 Sqrt[2])^C[1] +
Sqrt[2] (17 - 12 Sqrt[2])^C[1] - (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) || (C[1] [Element] Integers &&
C[1] >= 0 &&
x == 1/32 (16 -
4 (-2 (17 - 12 Sqrt[2])^C[1] +
Sqrt[2] (17 - 12 Sqrt[2])^C[1] - 2 (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) &&
y == 1/2 ((17 - 12 Sqrt[2])^C[1] -
Sqrt[2] (17 - 12 Sqrt[2])^C[1] + (17 + 12 Sqrt[2])^C[1] +
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) || (C[1] [Element] Integers &&
C[1] >= 0 &&
x == 1/32 (16 -
4 (-2 (17 - 12 Sqrt[2])^C[1] +
Sqrt[2] (17 - 12 Sqrt[2])^C[1] - 2 (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) &&
y == 1/2 (-(17 - 12 Sqrt[2])^C[1] +
Sqrt[2] (17 - 12 Sqrt[2])^C[1] - (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) || (C[1] [Element] Integers &&
C[1] >= 0 &&
x == 1/32 (16 +
4 (2 (17 - 12 Sqrt[2])^C[1] + Sqrt[2] (17 - 12 Sqrt[2])^C[1] +
2 (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) &&
y == 1/2 (-(17 - 12 Sqrt[2])^C[1] -
Sqrt[2] (17 - 12 Sqrt[2])^C[1] - (17 + 12 Sqrt[2])^C[1] +
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) || (C[1] [Element] Integers &&
C[1] >= 0 &&
x == 1/32 (16 +
4 (2 (17 - 12 Sqrt[2])^C[1] + Sqrt[2] (17 - 12 Sqrt[2])^C[1] +
2 (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) &&
y == 1/2 ((17 - 12 Sqrt[2])^C[1] +
Sqrt[2] (17 - 12 Sqrt[2])^C[1] + (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) || (C[1] [Element] Integers &&
C[1] >= 0 &&
x == 1/32 (16 -
4 (2 (17 - 12 Sqrt[2])^C[1] + Sqrt[2] (17 - 12 Sqrt[2])^C[1] +
2 (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) &&
y == 1/2 (-(17 - 12 Sqrt[2])^C[1] -
Sqrt[2] (17 - 12 Sqrt[2])^C[1] - (17 + 12 Sqrt[2])^C[1] +
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) || (C[1] [Element] Integers &&
C[1] >= 0 &&
x == 1/32 (16 -
4 (2 (17 - 12 Sqrt[2])^C[1] + Sqrt[2] (17 - 12 Sqrt[2])^C[1] +
2 (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1])) &&
y == 1/2 ((17 - 12 Sqrt[2])^C[1] +
Sqrt[2] (17 - 12 Sqrt[2])^C[1] + (17 + 12 Sqrt[2])^C[1] -
Sqrt[2] (17 + 12 Sqrt[2])^C[1]))
-
Thanks jc the link above is great. Looking into it in detail now. – Mau Jul 8 '10 at 22:35
Just to summarize here the solution given by the second link "Dario Alpern's generic two-integer variable equation solver": all integer solutions to $8x^2 - 8x + 1 = y^2$ are given by \begin{align*}X_{n+1} &= 3X_n + Y_n - 1 \\ Y_{n+1} &= 8X_n + 3Y_n - 4,\end{align*} starting with $(X_0, Y_0)$ as either $(0,1)$ or $(1,-1)$. (The other two (0,-1) and (1,1) are redundant, being generated in one step from these two. It's easy to see that if the $n$th solution given by $(1,-1)$ is $(x,y)$, then $(x,y)$ is positive and the $n$th solution given by $(0,-1)$ is just $(-x+1,-y)$.) – shreevatsa Jul 8 '10 at 22:57
-
Brilliant! Nice resource! – Mau Jul 15 '10 at 21:03
This looks like counting points on hyper-elliptic curves to me...
You are basically finding the integer solutions to
$Y^2 = 8X^2 - 8X + 1$
in your example. But this case is not too difficult, because it's of genus $0$.
It will be more interesting if $P(x)$ is of degree $3$ or higher.
To begin with this very interesting subject of point-counting, probably you can try
|
2014-08-23 11:45:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7097119688987732, "perplexity": 2251.1897363595467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826016.5/warc/CC-MAIN-20140820021346-00460-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://people.hamilton.edu/cgibbons/tag/boij-soederberg/index.html
|
# Boij-Soederberg
Complete these exercises several months in advance of your anticipated research project with undergraduates. For example, if you are thinking of working with students over a summer, consider working on them between the fall and spring semesters.
## Recursive strategy for decomposing Betti tables of complete intersections
The divisor sequence of an irreducible element (_atom_) $a$ of a reduced monoid $H$ is the sequence $(s_n)_{n \in \mathbb{N}}$ where, for each positive integer $n$, $s_n$ denotes the number of distinct irreducible divisors of $a^n$. In this work we …
## Recursive strategy for decomposing Betti tables of complete intersections
We introduce a recursive decomposition algorithm for the Betti diagram of a complete intersection using the diagram of a complete intersection defined by a subset of the original generators. This alternative algorithm is the main tool that we use to …
## Non-simplicial decompositions of Betti diagrams of complete intersections
We investigate decompositions of Betti diagrams over a polynomial ring within the framework of Boij-Soederberg theory. That is, given a Betti diagram, we decompose it into pure diagrams. Relaxing the requirement that the degree sequences in such pure …
## The cone of Betti diagrams over a hypersurface ring of low embedding dimension
We give a complete description of the cone of Betti diagrams over a standard graded hypersurface ring of the form $k [x, y]/\langle q \rangle$, where $q$ is a homogeneous quadric. We also provide a finite algorithm for decomposing Betti diagrams, …
|
2021-01-23 11:09:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9137030839920044, "perplexity": 456.6367318934098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703537796.45/warc/CC-MAIN-20210123094754-20210123124754-00304.warc.gz"}
|
https://tex.stackexchange.com/questions/618102/remove-page-numbers-from-the-index-which-are-related-to-printbibliography-secti
|
# Remove page numbers from the index which are related to \printbibliography section
I want to use \index{entry} inside a bibliography entry. When I cite the entry, the index records the page of the citation (I use style=verbose-trad1) and also the page where the full bibliography is printed. I want to remove from my index the page numbers that come from the \printbibliography section.
MWE:
\documentclass{article}
\usepackage{imakeidx}
\usepackage[style=verbose-trad1]{biblatex}% biblatex is needed for \autocite and \printbibliography
\makeindex
\begin{filecontents}{\jobname.bib}
@book{a1,
author = {Aniston, John},
title = {Be happy},
location = {London\index{London}},
date = {1953},
pagetotal = {100},
}
\end{filecontents}
\addbibresource{\jobname.bib}
\begin{document}
Lorem \autocite{a1}.
\clearpage
\printbibliography
\printindex
\end{document}
Result:
In the index we see page numbers 1 (because of the citation on page 1) and 2 (because the entry appears in the bibliography). How can I remove page 2 from the index?
I suggest you use a new command, say \citeindex. Then we can redefine that command to do nothing in the bibliography
\documentclass{article}
\usepackage{imakeidx}
\usepackage[style=verbose-trad1]{biblatex}% style as in the question
\makeindex
\newcommand*{\citeindex}{\index}
\AtBeginBibliography{\renewcommand*{\citeindex}[1]{}}
\begin{filecontents}{\jobname.bib}
@book{a1,
author = {Aniston, John},
title = {Be happy},
location = {London\citeindex{London}},
date = {1953},
pagetotal = {100},
}
\end{filecontents}
\addbibresource{\jobname.bib}
\begin{document}
Lorem \autocite{a1}.
\clearpage
\printbibliography
\printindex
\end{document}
Note that you can let biblatex index the location field automatically, e.g. with
\documentclass{article}
\usepackage{imakeidx}
\usepackage[style=verbose-trad1]{biblatex}% style as in the question
\makeindex
\renewbibmacro*{citeindex}{%
\ifciteindex
{\indexnames{labelname}%
\indexfield{indextitle}%
\ifciteseen
{}
{\indexlist{location}}}
{}}
\begin{filecontents}[force]{\jobname.bib}
@book{a1,
author = {Aniston, John},
title = {Be happy},
location = {London},
date = {1953},
pagetotal = {100},
}
\end{filecontents}
\addbibresource{\jobname.bib}
\begin{document}
Lorem \autocite{a1}.
\clearpage
\printbibliography
\printindex
\end{document}
• Thanks! Perhaps I can just add the line \AtBeginBibliography{\renewcommand*{\index}[1]{}} without introducing new command?
– andc
Oct 7 at 16:52
• @andc If you don't want any indexing in the bibliography that should be safe. With a new name you can be sure that you only disable the indexing you added yourself in the .bib files. Oct 7 at 19:17
|
2021-11-30 16:07:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7383182644844055, "perplexity": 3972.4286239982207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359037.96/warc/CC-MAIN-20211130141247-20211130171247-00404.warc.gz"}
|
https://proxieslive.com/tag/every/
|
## Does mysql “order by” guarantees to give the same order for the same query every time if the sort key is equal?
I’m planning to write a query that sorts the result by a value `s` and then paginate the result. Let’s say I have ten items that match the query and all the items have the same `s`. In the first query, I sort the value by `s` and get the first five items. Then in the next query, I sort the value by `s` and get the sixth to tenth items. Is it possible that the items that appear on my first query will appear again in my second query, and that some items will not show up in either query?
## Is there a resource anywhere that lists every spell and the classes that can use them?
There are now a number of resources that provide lists of spells, the PHB, Tashas Cauldron, XGTE etc. As well as spells that are included in specific adventure or campaign books.
There are also new classes that come with new spell rules, artificer is one.
Is there a single resource anywhere that lists all the spells currently published for 5th edition, the classes they can be used by and the level? I am not looking for the spell rules just an updated list as per the PHB that includes every published spell by class.
Or a list that lists out every spell and the classes that can use them.
## Godot / GDscript label text not updating every frame like intended
I am very new to Godot and coding in general, so I apologize in advance for any simple mistakes. I am trying to have my text display the variable "ammodisplay" on my object "Marine." When I launch the game, the text sets to 7 (the correct value) but as I play the game and the variable changes, the text does not update with it. Any insight as to how I can fix this? Thanks in advance!
extends Label

var NODE = load("Marine.tscn")
var ammo = NODE.instance()
var ammod = ammo.ammodisplay

func _process(delta):
    text = (str(ammod))
## Does Hexblade’s Curse extra damage affect every die rolled? [duplicate]
The relevant part of Hexblade’s Curse states (XGtE p.55):
Starting at 1st level, you gain the ability to place a baleful curse on someone. As a bonus action, choose one creature you can see within 30 feet of you. The target is cursed for 1 minute. The curse ends early if the target dies, you die, or you are incapacitated. Until the curse ends, you gain the following benefits:
• You gain a bonus to damage rolls against the cursed target. The bonus equals your proficiency bonus.
• […]
Typically when features add extra damage they clarify that the extra damage only applies to one damage roll, or the added damage just applies to the damage as a whole, like the various cleric subclasses’ Potent Spellcasting:
Starting at 8th level, you add your wisdom modifier to the damage you deal with any cleric cantrip.
Or the Evocation Wizard’s Empowered Evocation (PHB p.117):
…you can add your Intelligence modifier to one damage roll of any wizard evocation spell you cast.
Does this mean the wording of the Hexblade’s Curse would apply to all of the dice rolled for the attack? If when the warlock gains Pact of the Blade at 3rd level, could they pick a great sword as there pact weapon, which deals 2d6 damage, and add their proficiency bonus twice to the damage?
## Casting Guidance cantrip for every roll?
What prevents a cleric from casting the Guidance spell every single turn and having everyone have 1D4 extra for their rolls, especially since you can combo it with this:
It seems like every skill check should always be made with advantage due to the 'Working Together' rules. Is this accurate?
This can end with almost all the checks being a D20+1D4 plus advantage. Seems a bit broken mechanic without some houseruling that prevents the same spell on the same target for some time, or I am missing some rule that prevents this?
The same would affect Resistance, but since you probably are in the middle of a combat, the Touch range could prevent it effectively, but for normal skill rolls where the touch of the cleric is possible and the situation is not stressing?
## When you learn True Polymorph, do you learn about every creature in existence?
When a wizard learns true polymorph, do they also learn about every creature in existence? Since the spell says the new form can be a creature of any kind you choose, it seems to me that this would mean they must have gained some knowledge about every creature in existence.
Or can you only turn into creatures that you have encountered "in your lifetime"?
## What are the consequences of giving an ASI or feat every 4 character levels instead of every 4 class levels?
I’m trying to figure out why Dungeons & Dragons 5th edition designers decided that ASI or feats would be given every 4 class levels and not every 4 character levels.
If I allowed my players to get an ASI or feat at character levels 4, 8, 12, 16 and 19, how would the game balance be concretely impacted? How could the players abuse this?
The peculiarities of some classes such as the fighter and the rogue would be kept: a rogue would still gain an ASI at rogue level 10 and a fighter at fighter levels 6 and 14.
## how to display count of total posts and for every category and having possibility to filter between dates and user
Hello, I use a simple plugin that counts the number of posts for every category and in total, but I am stuck on how to display the count of posts between two dates and filtered by author.
<div class="wrap posts-and-users-stats">
  <nav class="nav-tab-wrapper">
    <?php echo "<h2> Nombres des articles </h2>"; ?>
  </nav>
  <section>
    <?php $categories = get_categories( array( 'orderby' => 'name', 'order' => 'ASC' ) ); ?>
    <table id="table-<?php echo esc_attr($taxonomy); ?>" class="wp-list-table widefat">
      <tbody>
        <?php foreach ($categories as $category) { ?>
          <tr>
            <td><?php echo esc_html($category->name); ?></td>
            <td class="number"><?php echo esc_html($category->count); ?></td>
          </tr>
        <?php } ?>
      </tbody>
    </table>
  </section>
</div>
This is the code I used for the filter:
<form method="post">
  From : <input type="date" name="from">
  To : <input type="date" name="to">
  User : <?php wp_dropdown_users($args); ?>
  Catégories : <?php wp_dropdown_categories($args); ?>
  <input type="submit" name="submitbtn">
</form>
<?php
$from = $_POST['from'];
$to = $_POST['to'];
$user = $_POST['user'];
$category = $_POST['categories'];
print "From" . $from . "\n";
print "To" . $to . "\n";
print "User" . $user . "\n";
## Does Ocular spell make every eligible damage spells have a critical chance since it becomes a ranged touch attack (ray)?
Ocular spell states:
(…)When you release an ocular spell, its effect changes to a ray with a range of up to 60 feet. If the spell previously would have affected multiple creatures, it now affects only the creature struck by the ray. You must succeed on a ranged touch attack to strike your target with an ocular spell, and the target is still permitted any saving throw allowed by the spell.(…)
Every spell that has a touch attack (melee/ranged) does have a critical hit chance (20×2).
So if I were to release an Ocular Fireball (it now affects only 1 creature, though the target still gets the Reflex half saving throw), it could indeed be a critical hit.
Correct?
### Sidenote:
I know fireball might not be the best spell for this combination, it was just for the sake of the question.
|
2021-01-24 19:49:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23901112377643585, "perplexity": 2356.1612167607154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703550617.50/warc/CC-MAIN-20210124173052-20210124203052-00125.warc.gz"}
|
http://nrich.maths.org/connect3
|
Connect Three
Stage: 3 and 4 Challenge Level:
In this game the winner is the first to complete a row of three, either horizontally, vertically or diagonally.
Roll the dice (one with the numbers $1$, $2$, $3$, $-4$, $-5$, $-6$ and the other with the numbers $-1$, $-2$, $-3$, $4$, $5$, $6$), place each dice in one of the squares and decide whether you want to add or subtract to produce one of the totals shown on the board. Your total will then be covered with a counter.
You cannot cover a number which has already been covered. If you are unable to find a total which has not been covered you must Pass.
The winner is the first to complete a line of three.
You can use the interactive version on the NRICH site or print this board to play away from the computer.
|
2015-04-28 19:58:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40470802783966064, "perplexity": 393.8079989044063}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246662032.70/warc/CC-MAIN-20150417045742-00044-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://liucs.net/cs101f19/other-pls.html
|
# Other programming languages
## Arithmetic notation
One very concrete way in which some languages differ is how arithmetic expressions are written. The normal way we write expressions, such as $$a+b$$ is called “infix” because the operator ($$+$$) is in between the operands ($$a$$ and $$b$$). But there are alternatives, called prefix and postfix. Collectively, they are also called “Polish notation” (forward Polish for prefix, or reverse polish (RPN) for postfix).
### Prefix expressions
The LISP language family (which includes Scheme, Racket, Clojure, and others) use prefix notation, so the operator comes before the operands. They also surround every sub-expression with parentheses, as in this example:
(/ 36 (* (+ 1 5) 4))
You evaluate prefix expressions starting from the innermost parentheses, like this:

(/ 36 (* (+ 1 5) 4))
→ (/ 36 (* 6 4))
→ (/ 36 24)
→ 1.5
One interesting advantage of prefix is that operators can have any number of operands:
(+ 3 7 10)
→ 20
### Postfix expressions
A few languages use postfix notation. No parentheses are needed at all, you just specify the numbers (separated by spaces) and the operators. So the same calculation performed in the previous section would look like this:
36 1 5 + 4 * /
To evaluate this, you just work from left to right. When you encounter an operator, grab the two previous items to use as operands. Replace all three with the resulting value.
Postfix evaluation is computationally very simple – it is especially suitable for devices with severe memory constraints. It was popular for 1970s calculators, and printers even today. The core of the Portable Document Format (PDF) is a language called Postscript, which is based entirely around postfix evaluation.
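Here is a minimal Python sketch of such a stack-based postfix evaluator (added for illustration; the page itself gives no code):

```python
# Evaluate a space-separated postfix expression with binary +, -, *, /.
def eval_postfix(expr):
    stack = []
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    for tok in expr.split():
        if tok in ops:
            b = stack.pop()               # grab the two previous items...
            a = stack.pop()
            stack.append(ops[tok](a, b))  # ...and replace them with the result
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_postfix('36 1 5 + 4 * /'))       # 1.5
print(eval_postfix('180 6 2 7 + * 3 * -'))  # 18.0
```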
### Converting between notations
When converting an infix expression to prefix or postfix, it's helpful to draw a tree representing the expression. You do this by joining the operands of each operator, in the order you would apply them (according to the standard order of operations). Here is a tree for the expression $$180-6\times(2+7)\times3$$ (rendered in text, since the original page shows an image):

        -
       / \
    180   *
         / \
        *   3
       / \
      6   +
         / \
        2   7
To convert it to prefix, each node (starting from the root) corresponds to a set of parentheses, and then you convert the left node and the right node after writing the operator:
(- 180 (* (* 6 (+ 2 7)) 3))
To convert to postfix, you again start from the root, but for each node you complete the left and right sub-trees before writing the operator in that node:
180 6 2 7 + * 3 * -
|
2020-01-21 21:26:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5475236773490906, "perplexity": 1373.9427938516003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00442.warc.gz"}
|
http://mathoverflow.net/questions/59398/cm-for-primary-ideal
|
# CM for primary ideal
Let $R$ be a regular local ring, $I$ a prime ideal and $J$ an $I$-primary ideal in $R$. Is it true that if $R/I$ is CM then also $R/J$ is CM? This question is in some way the inverse of this one.
-
A useful way to think about this issue is to consider $J=I^{(n)}$, the $n$-symbolic power of $I$, which by definition is the $I$-primary component of $I^n$.
When $R$ is a polynomial ring over $\mathbb C$, this is the ideal consisting of functions vanishing to order at least $n$ on $X = \text{Spec}(R/I)$.
It is then well-known that the depth of $R/I^{(n)}$ can go down. For example, take $I$ generated by the $2\times2$ minors of the generic $2\times 3$ matrix inside the polynomial ring in $6$ variables, localized at the maximal ideal of those variables. Then $R/I$ is Cohen-Macaulay of dimension $4$, but $R/I^{(n)}$ eventually has depth $3$. For more general statements about ideals of maximal minors, see for example Section 3 of:
Powers of Ideals Generated by Weak d-Sequences, C. Huneke, J, Algebra, 68 (1981), 471-509.
EDIT: the example above looks specific, but such examples should be abound. I expect most Cohen-Macaulay ideals which are not complete intersections to give an example (it is known that $R/I^n$, the ordinary powers, are CM for all $n>0$ iff $I$ is a complete intersection). The $2\times 2$ minors gives is a generic situation of non-complete intersection but CM ideal.
A philosophical comment: it is unlikely that Cohen-Macaulayness will be preserved by basic operations on ideals. So if $R/I, R/J$ are CM, we do not expect $R/\sqrt{I}, R/I^n, R/I^{(n)}, R/P$ ($P$ an associated primes), or $R/(I+J), R/IJ$ etc. to be CM.
The reason is that to preserve depth one needs to control the associated primes, and these operations only allow you to control the support. However, finding an explicit example is usually not so obvious.
-
Thanks a lot for the reference and the comments! – Blup Mar 29 '11 at 14:18
Counterexample: $k$ a field, $R=k[[X,Y]]$, $I=(Y)$, $J=(XY, Y^2)$.
-
But $J=(xy,y^2)$ is not a primary ideal since $xy \in J$, $y \not \in J$ but no power of $x$ is in $J$. – Blup Mar 24 '11 at 13:59
Sorry, my mistake. That was too simple! – Laurent Moret-Bailly Mar 24 '11 at 16:10
I think it's true. R/I being CM means depth(R/I)=dim(R/I). But as R/I is a factor of R/J by the nilradical it follows that dim(R/J)=dim(R/I). As R/I is a factor ring of R/J we also have depth(R/I)$\leq$depth(R/J). But as dim(R/I)=depth(R/I)$\leq$depth(R/J)$\leq$dim(R/J)=dim(R/I) we actually have equality dim(R/J)=depth(R/J), and so R/J is CM.
-
Hi roman, a slight problem here is that the depth of a factor ring might go up (take the union of a plane and a line, the depth then is the depth of the line, which is $1$, but the plane, which is a subvariety, thus factor ring, has depth $2$). – Hailong Dao Mar 24 '11 at 16:07
|
2014-10-21 01:07:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868861794471741, "perplexity": 366.081372876074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443598.37/warc/CC-MAIN-20141017005723-00312-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://pycbc.org/pycbc/latest/html/pycbc.strain.html
|
# pycbc.strain package
## pycbc.strain.calibration module
Functions for adding calibration factors to waveform templates.
class pycbc.strain.calibration.CubicSpline(minimum_frequency, maximum_frequency, n_points, ifo_name)[source]
apply_calibration(strain)[source]
Apply calibration model
This applies cubic spline calibration to the strain.
Parameters: strain (FrequencySeries) – The strain to be recalibrated.
Returns: strain_adjusted – The recalibrated strain.
Return type: FrequencySeries
name = 'cubic_spline'
class pycbc.strain.calibration.Recalibrate(ifo_name)[source]
Bases: object
apply_calibration(strain)[source]
Apply calibration model
This method should be overwritten by subclasses
Parameters: strain (FrequencySeries) – The strain to be recalibrated.
Returns: strain_adjusted – The recalibrated strain.
Return type: FrequencySeries
classmethod from_config(cp, ifo, section)[source]
Read a config file to get calibration options and transfer functions which will be used to intialize the model.
Parameters:
  cp (WorkflowConfigParser) – An open config file.
  ifo (string) – The detector (H1, L1) for which the calibration model will be loaded.
  section (string) – The section name in the config file from which to retrieve the calibration options.
Returns: An instance of the class.
Return type: instance
map_to_adjust(strain, prefix='recalib_', **params)[source]
Map an input dictionary of sampling parameters to the adjust_strain function by filtering the dictionary for the calibration parameters, then calling adjust_strain.
Parameters:
  strain (FrequencySeries) – The strain to be recalibrated.
  prefix (str) – Prefix for calibration parameter names.
  params (dict) – Dictionary of sampling parameters which includes calibration parameters.
Returns: strain_adjusted – The recalibrated strain.
Return type: FrequencySeries
name = None
## pycbc.strain.gate module
Functions for applying gates to data.
pycbc.strain.gate.add_gate_option_group(parser)[source]
Adds the options needed to apply gates to data.
Parameters: parser (object) – ArgumentParser instance.
pycbc.strain.gate.apply_gates_to_fd(stilde_dict, gates)[source]
Applies the given dictionary of gates to the given dictionary of strain in the frequency domain.
Gates are applied by IFFT-ing the strain data to the time domain, applying the gate, then FFT-ing back to the frequency domain.
Parameters:
  stilde_dict (dict) – Dictionary of frequency-domain strain, keyed by the ifos.
  gates (dict) – Dictionary of gates. Keys should be the ifo to apply the data to, values are a tuple giving the central time of the gate, the half duration, and the taper duration.
Returns: Dictionary of frequency-domain strain with the gates applied.
Return type: dict
pycbc.strain.gate.apply_gates_to_td(strain_dict, gates)[source]
Applies the given dictionary of gates to the given dictionary of strain.
Parameters:
  strain_dict (dict) – Dictionary of time-domain strain, keyed by the ifos.
  gates (dict) – Dictionary of gates. Keys should be the ifo to apply the data to, values are a tuple giving the central time of the gate, the half duration, and the taper duration.
Returns: Dictionary of time-domain strain with the gates applied.
Return type: dict
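A hypothetical usage sketch based only on the parameter descriptions above; the GPS time is made up and strain_dict is assumed to have been built elsewhere:

```python
# Sketch only: the gate value follows the documented
# (central time, half duration, taper duration) layout.
from pycbc.strain.gate import apply_gates_to_td

gates = {'H1': (1126259462.4, 0.5, 0.25)}
gated_dict = apply_gates_to_td(strain_dict, gates)  # strain_dict defined elsewhere
```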
pycbc.strain.gate.gate_and_paint(data, lindex, rindex, invpsd, copy=True)[source]
Gates and in-paints data.
Parameters:
  data (TimeSeries) – The data to gate.
  lindex (int) – The start index of the gate.
  rindex (int) – The end index of the gate.
  invpsd (FrequencySeries) – The inverse of the PSD.
  copy (bool, optional) – Copy the data before applying the gate. Otherwise, the gate will be applied in-place. Default is True.
Returns: The gated and in-painted time series.
Return type: TimeSeries
pycbc.strain.gate.gates_from_cli(opts)[source]
Parses the –gate option into something understandable by strain.gate_data.
pycbc.strain.gate.psd_gates_from_cli(opts)[source]
Parses the –psd-gate option into something understandable by strain.gate_data.
## pycbc.strain.lines module
Functions for removing frequency lines from real data.
pycbc.strain.lines.avg_inner_product(data1, data2, bin_size)[source]
Calculate the time-domain inner product averaged over bins.
Parameters:
  data1 (pycbc.types.TimeSeries) – First data set.
  data2 (pycbc.types.TimeSeries) – Second data set, with same duration and sample rate as data1.
  bin_size (float) – Duration of the bins the data will be divided into to calculate the inner product.
Returns:
  inner_prod (list) – The (complex) inner product of data1 and data2 obtained in each bin.
  amp (float) – The absolute value of the median of the inner product.
  phi (float) – The angle of the median of the inner product.
pycbc.strain.lines.calibration_lines(freqs, data, tref=None)[source]
Extract the calibration lines from strain data.
Parameters:
  freqs (list) – List containing the frequencies of the calibration lines.
  data (pycbc.types.TimeSeries) – Strain data to extract the calibration lines from.
  tref ({None, float}, optional) – Reference time for the line. If None, will use data.start_time.
Returns: data – The strain data with the calibration lines removed.
Return type: pycbc.types.TimeSeries
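A hypothetical sketch following the documented signature; the line frequencies are illustrative values and strain is assumed to be a pycbc.types.TimeSeries loaded elsewhere:

```python
from pycbc.strain.lines import calibration_lines

# tref defaults to data.start_time when omitted, per the docstring above.
cleaned = calibration_lines(freqs=[36.7, 331.9], data=strain)
```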
pycbc.strain.lines.clean_data(freqs, data, chunk, avg_bin)[source]
Extract time-varying (wandering) lines from strain data.
Parameters:
  freqs (list) – List containing the frequencies of the wandering lines.
  data (pycbc.types.TimeSeries) – Strain data to extract the wandering lines from.
  chunk (float) – Duration of the chunks the data will be divided into to account for the time variation of the wandering lines. Should be smaller than data.duration, and allow for at least a few chunks.
  avg_bin (float) – Duration of the bins each chunk will be divided into for averaging the inner product when measuring the parameters of the line. Should be smaller than chunk.
Returns: data – The strain data with the wandering lines removed.
Return type: pycbc.types.TimeSeries
pycbc.strain.lines.complex_median(complex_list)[source]
Get the median value of a list of complex numbers.
Parameters: complex_list (list) – List of complex numbers to calculate the median.
Returns: a + 1.j*b – The median of the real and imaginary parts.
Return type: complex number
pycbc.strain.lines.line_model(freq, data, tref, amp=1, phi=0)[source]
Simple time-domain model for a frequency line.
Parameters:
  freq (float) – Frequency of the line.
  data (pycbc.types.TimeSeries) – Reference data, to get delta_t, start_time, duration and sample_times.
  tref (float) – Reference time for the line model.
  amp ({1., float}, optional) – Amplitude of the frequency line.
  phi ({0., float}, optional) – Phase of the frequency line (radians).
Returns: freq_line – A timeseries of the line model with frequency ‘freq’. The returned data are complex to allow measuring the amplitude and phase of the corresponding frequency line in the strain data. For extraction, use only the real part of the data.
Return type: pycbc.types.TimeSeries
pycbc.strain.lines.matching_line(freq, data, tref, bin_size=1)[source]
Find the parameter of the line with frequency ‘freq’ in the data.
Parameters:
  freq (float) – Frequency of the line to find in the data.
  data (pycbc.types.TimeSeries) – Data from which the line wants to be measured.
  tref (float) – Reference time for the frequency line.
  bin_size ({1, float}, optional) – Duration of the bins the data will be divided into for averaging.
Returns: line_model – A timeseries containing the frequency line with the amplitude and phase measured from the data.
Return type: pycbc.types.TimeSeries
## pycbc.strain.recalibrate module
Classes and functions for adjusting strain data.
class pycbc.strain.recalibrate.CubicSpline(minimum_frequency, maximum_frequency, n_points, ifo_name)[source]
Cubic spline recalibration
This assumes the spline points follow np.logspace(np.log(minimum_frequency), np.log(maximum_frequency), n_points)
Parameters:
  minimum_frequency (float) – minimum frequency of the spline points.
  maximum_frequency (float) – maximum frequency of the spline points.
  n_points (int) – number of spline points.
apply_calibration(strain)[source]
Apply calibration model
This applies cubic spline calibration to the strain.
Parameters: strain (FrequencySeries) – The strain to be recalibrated.
Returns: strain_adjusted – The recalibrated strain.
Return type: FrequencySeries
name = 'cubic_spline'
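A hypothetical construction sketch using only the documented signature; the numeric values are illustrative, and the recalib_-prefixed sampling-parameter names expected by map_to_adjust are not spelled out in this excerpt:

```python
from pycbc.strain.recalibrate import CubicSpline

model = CubicSpline(minimum_frequency=20.0, maximum_frequency=1024.0,
                    n_points=10, ifo_name='H1')
# adjusted = model.map_to_adjust(stilde, **params)  # stilde/params assumed defined
```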
class pycbc.strain.recalibrate.PhysicalModel(freq=None, fc0=None, c0=None, d0=None, a_tst0=None, a_pu0=None, fs0=None, qinv0=None)[source]
Bases: object
Class for adjusting time-varying calibration parameters of given strain data.
name
The name of this calibration model.
Type: ‘physical_model’
Parameters:
  strain (FrequencySeries) – The strain to be adjusted.
  freq (array) – The frequencies corresponding to the values of c0, d0, a0 in Hertz.
  fc0 (float) – Coupled-cavity (CC) pole at time t0, when c0=c(t0) and a0=a(t0) are measured.
  c0 (array) – Initial sensing function at t0 for the frequencies.
  d0 (array) – Digital filter for the frequencies.
  a_tst0 (array) – Initial actuation function for the test mass at t0 for the frequencies.
  a_pu0 (array) – Initial actuation function for the penultimate mass at t0 for the frequencies.
  fs0 (float) – Initial spring frequency at t0 for the signal recycling cavity.
  qinv0 (float) – Initial inverse quality factor at t0 for the signal recycling cavity.
adjust_strain(strain, delta_fs=None, delta_qinv=None, delta_fc=None, kappa_c=1.0, kappa_tst_re=1.0, kappa_tst_im=0.0, kappa_pu_re=1.0, kappa_pu_im=0.0)[source]
Adjust the FrequencySeries strain by changing the time-dependent calibration parameters kappa_c(t), kappa_a(t), f_c(t), fs, and qinv.
Parameters:
  strain (FrequencySeries) – The strain data to be adjusted.
  delta_fc (float) – Change in coupled-cavity (CC) pole at time t.
  kappa_c (float) – Scalar correction factor for sensing function c0 at time t.
  kappa_tst_re (float) – Real part of scalar correction factor for actuation function A_{tst0} at time t.
  kappa_tst_im (float) – Imaginary part of scalar correction factor for actuation function A_{tst0} at time t.
  kappa_pu_re (float) – Real part of scalar correction factor for actuation function A_{pu0} at time t.
  kappa_pu_im (float) – Imaginary part of scalar correction factor for actuation function A_{pu0} at time t.
  fs (float) – Spring frequency for signal recycling cavity.
  qinv (float) – Inverse quality factor for signal recycling cavity.
Returns: strain_adjusted – The adjusted strain.
Return type: FrequencySeries
classmethod from_config(cp, ifo, section)[source]
Read a config file to get calibration options and transfer functions which will be used to intialize the model.
Parameters:
  cp (WorkflowConfigParser) – An open config file.
  ifo (string) – The detector (H1, L1) for which the calibration model will be loaded.
  section (string) – The section name in the config file from which to retrieve the calibration options.
Returns: An instance of the Recalibrate class.
Return type: instance
map_to_adjust(strain, **params)[source]
Map an input dictionary of sampling parameters to the adjust_strain function by filtering the dictionary for the calibration parameters, then calling adjust_strain.
Parameters:
  strain (FrequencySeries) – The strain to be recalibrated.
  params (dict) – Dictionary of sampling parameters which includes calibration parameters.
Returns: strain_adjusted – The recalibrated strain.
Return type: FrequencySeries
name = 'physical_model'
classmethod tf_from_file(path, delimiter=' ')[source]
Convert the contents of a file with the columns [freq, real(h), imag(h)] to a numpy.array with columns [freq, real(h)+j*imag(h)].
Parameters:
  path (string)
  delimiter ({" ", string})
Return type: numpy.array
update_c(fs=None, qinv=None, fc=None, kappa_c=1.0)[source]
Calculate the sensing function c(f,t) given the new parameters kappa_c(t), kappa_a(t), f_c(t), fs, and qinv.
Parameters: fc (float) – Coupled-cavity (CC) pole at time t. kappa_c (float) – Scalar correction factor for sensing function at time t. fs (float) – Spring frequency for signal recycling cavity. qinv (float) – Inverse quality factor for signal recycling cavity.
Returns: c (numpy.array) – The new sensing function c(f,t).
update_g(fs=None, qinv=None, fc=None, kappa_tst_re=1.0, kappa_tst_im=0.0, kappa_pu_re=1.0, kappa_pu_im=0.0, kappa_c=1.0)[source]
Calculate the open loop gain g(f,t) given the new parameters kappa_c(t), kappa_a(t), f_c(t), fs, and qinv.
Parameters: fc (float) – Coupled-cavity (CC) pole at time t. kappa_c (float) – Scalar correction factor for sensing function c at time t. kappa_tst_re (float) – Real part of scalar correction factor for actuation function a_tst0 at time t. kappa_pu_re (float) – Real part of scalar correction factor for actuation function a_pu0 at time t. kappa_tst_im (float) – Imaginary part of scalar correction factor for actuation function a_tst0 at time t. kappa_pu_im (float) – Imaginary part of scalar correction factor for actuation function a_pu0 at time t. fs (float) – Spring frequency for signal recycling cavity. qinv (float) – Inverse quality factor for signal recycling cavity.
Returns: g (numpy.array) – The new open loop gain g(f,t).
update_r(fs=None, qinv=None, fc=None, kappa_c=1.0, kappa_tst_re=1.0, kappa_tst_im=0.0, kappa_pu_re=1.0, kappa_pu_im=0.0)[source]
Calculate the response function R(f,t) given the new parameters kappa_c(t), kappa_a(t), f_c(t), fs, and qinv.
Parameters: fc (float) – Coupled-cavity (CC) pole at time t. kappa_c (float) – Scalar correction factor for sensing function c at time t. kappa_tst_re (float) – Real part of scalar correction factor for actuation function a_tst0 at time t. kappa_pu_re (float) – Real part of scalar correction factor for actuation function a_pu0 at time t. kappa_tst_im (float) – Imaginary part of scalar correction factor for actuation function a_tst0 at time t. kappa_pu_im (float) – Imaginary part of scalar correction factor for actuation function a_pu0 at time t. fs (float) – Spring frequency for signal recycling cavity. qinv (float) – Inverse quality factor for signal recycling cavity.
Returns: r (numpy.array) – The new response function r(f,t).
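For orientation, these three quantities are tied together by the usual LIGO differential-arm calibration conventions, which this model appears to follow (a sketch of those conventions, not a statement taken from the docstrings): with sensing function $C$, digital filter $D$ and total actuation $A = A_{\rm tst} + A_{\rm pu}$, the open loop gain and response function are roughly
$$G(f,t) = C(f,t)\,D(f)\,A(f,t), \qquad R(f,t) = \frac{1 + G(f,t)}{C(f,t)},$$
so update_r can be assembled from the outputs of update_c and update_g.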
class pycbc.strain.recalibrate.Recalibrate(ifo_name)[source]
Bases: object
Base class for modifying calibration
apply_calibration(strain)[source]
Apply calibration model
This method should be overwritten by subclasses
Parameters: strain (FrequencySeries) – The strain to be recalibrated.
Returns: strain_adjusted (FrequencySeries) – The recalibrated strain.
classmethod from_config(cp, ifo, section)[source]
Read a config file to get calibration options and transfer functions which will be used to initialize the model.
Parameters: cp (WorkflowConfigParser) – An open config file. ifo (string) – The detector (H1, L1) for which the calibration model will be loaded. section (string) – The section name in the config file from which to retrieve the calibration options.
Returns: instance – An instance of the class.
map_to_adjust(strain, prefix='recalib_', **params)[source]
Map an input dictionary of sampling parameters to the adjust_strain function by filtering the dictionary for the calibration parameters, then calling adjust_strain.
Parameters: strain (FrequencySeries) – The strain to be recalibrated. prefix (str) – Prefix for calibration parameter names. params (dict) – Dictionary of sampling parameters which includes calibration parameters.
Returns: strain_adjusted (FrequencySeries) – The recalibrated strain.
name = None
## pycbc.strain.strain module
This module contains functions for reading, generating, and segmenting strain data.
class pycbc.strain.strain.StrainBuffer(frame_src, channel_name, start_time, max_buffer=512, sample_rate=4096, low_frequency_cutoff=20, highpass_frequency=15.0, highpass_reduction=200.0, highpass_bandwidth=5.0, psd_samples=30, psd_segment_length=4, psd_inverse_length=3.5, trim_padding=0.25, autogating_threshold=None, autogating_cluster=None, autogating_pad=None, autogating_width=None, autogating_taper=None, state_channel=None, data_quality_channel=None, dyn_range_fac=5.902958103587057e+20, psd_abort_difference=None, psd_recalculate_difference=None, force_update_cache=True, increment_update_cache=None, analyze_flags=None, data_quality_flags=None, dq_padding=0)[source]
Bases: pycbc.frame.frame.DataBuffer
add_hard_count()[source]
Reset the countdown timer, so that we don’t analyze data long enough to generate a new PSD.
advance(blocksize, timeout=10)[source]
Add blocksize seconds more to the buffer, push blocksize seconds from the beginning.
Parameters: blocksize (int) – The number of seconds to attempt to read from the channel.
Returns: status (boolean) – True if this block is analyzable.
end_time
Return the end time of the current valid segment of data
classmethod from_cli(ifo, args, maxlen)[source]
Initialize a StrainBuffer object (data reader) for a particular detector.
invalidate_psd()[source]
Make the current PSD invalid. A new one will be generated when it is next required
near_hwinj()[source]
Check that the current set of triggers could be influenced by a hardware injection.
null_advance_strain(blocksize)[source]
Parameters: blocksize (int) – The number of seconds to attempt to read from the channel
overwhitened_data(delta_f)[source]
Return overwhitened data
Parameters: delta_f (float) – The sample step to generate overwhitened frequency domain data for.
Returns: htilde (FrequencySeries) – Overwhitened strain data.
recalculate_psd()[source]
Recalculate the psd
start_time
Return the start time of the current valid segment of data
class pycbc.strain.strain.StrainSegments(strain, segment_length=None, segment_start_pad=0, segment_end_pad=0, trigger_start=None, trigger_end=None, filter_inj_only=False, injection_window=None, allow_zero_padding=False)[source]
Bases: object
Class for managing manipulation of strain data for the purpose of matched filtering. This includes methods for segmenting and conditioning.
fourier_segments()[source]
Return a list of the FFT’d segments as FrequencySeries. Additional properties are added that describe the strain segment. The property ‘analyze’ is a slice corresponding to the portion of the time domain equivalent of the segment to analyze for triggers. The value ‘cumulative_index’ indexes from the beginning of the original strain series.
classmethod from_cli(opt, strain)[source]
Calculate the segmentation of the strain data for analysis from the command line options.
classmethod from_cli_multi_ifos(opt, strain_dict, ifos)[source]
Calculate the segmentation of the strain data for analysis from the command line options.
classmethod from_cli_single_ifo(opt, strain, ifo)[source]
Calculate the segmentation of the strain data for analysis from the command line options.
classmethod insert_segment_option_group(parser)[source]
classmethod insert_segment_option_group_multi_ifo(parser)[source]
required_opts_list = ['--segment-length', '--segment-start-pad', '--segment-end-pad']
classmethod verify_segment_options(opt, parser)[source]
classmethod verify_segment_options_multi_ifo(opt, parser, ifos)[source]
pycbc.strain.strain.detect_loud_glitches(strain, psd_duration=4.0, psd_stride=2.0, psd_avg_method='median', low_freq_cutoff=30.0, threshold=50.0, cluster_window=5.0, corrupt_time=4.0, high_freq_cutoff=None, output_intermediates=False)[source]
Automatic identification of loud transients for gating purposes.
This function first estimates the PSD of the input time series using the FindChirp Welch method. Then it whitens the time series using that estimate. Finally, it computes the magnitude of the whitened series, thresholds it and applies the FindChirp clustering over time to the surviving samples.
Parameters: strain (TimeSeries) – Input strain time series to detect glitches over. psd_duration ({float, 4}) – Duration of the segments for PSD estimation in seconds. psd_stride ({float, 2}) – Separation between PSD estimation segments in seconds. psd_avg_method ({string, 'median'}) – Method for averaging PSD estimation segments. low_freq_cutoff ({float, 30}) – Minimum frequency to include in the whitened strain. threshold ({float, 50}) – Minimum magnitude of whitened strain for considering a transient to be present. cluster_window ({float, 5}) – Length of time window to cluster surviving samples over, in seconds. corrupt_time ({float, 4}) – Amount of time to be discarded at the beginning and end of the input time series. high_freq_cutoff ({float, None}) – Maximum frequency to include in the whitened strain. If given, the input series is downsampled accordingly. If omitted, the Nyquist frequency is used. output_intermediates ({bool, False}) – Save intermediate time series for debugging.
pycbc.strain.strain.from_cli(opt, dyn_range_fac=1, precision='single', inj_filter_rejector=None)[source]
Parses the CLI options related to strain data reading and conditioning.
Parameters: opt (object) – Result of parsing the CLI with OptionParser, or any object with the required attributes (gps-start-time, gps-end-time, strain-high-pass, pad-data, sample-rate, (frame-cache or frame-files), channel-name, fake-strain, fake-strain-seed, fake-strain-from-file, gating_file). dyn_range_fac ({float, 1}, optional) – A large constant to reduce the dynamic range of the strain. precision (string) – Precision of the returned strain (‘single’ or ‘double’). inj_filter_rejector (InjFilterRejector instance; optional, default=None) – If given send the InjFilterRejector instance to the inject module so that it can store a reduced representation of injections if necessary.
Returns: strain (TimeSeries) – The time series containing the conditioned strain data.
pycbc.strain.strain.from_cli_multi_ifos(opt, ifos, inj_filter_rejector_dict=None, **kwargs)[source]
Get the strain for all ifos when using the multi-detector CLI
pycbc.strain.strain.from_cli_single_ifo(opt, ifo, inj_filter_rejector=None, **kwargs)[source]
Get the strain for a single ifo when using the multi-detector CLI
pycbc.strain.strain.gate_data(data, gate_params)[source]
Apply a set of gating windows to a time series.
Each gating window is defined by a central time, a given duration (centered on the given time) to zero out, and a given duration of smooth tapering on each side of the window. The window function used for tapering is a Tukey window.
Parameters: data (TimeSeries) – The time series to be gated. gate_params (list) – List of parameters for the gating windows. Each element should be a list or tuple with 3 elements: the central time of the gating window, the half-duration of the portion to zero out, and the duration of the Tukey tapering on each side. All times in seconds. The total duration of the data affected by one gating window is thus twice the second parameter plus twice the third parameter.
Returns: data (TimeSeries) – The gated time series.
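A minimal usage sketch (the data and gate times here are arbitrary illustrations, not taken from the documentation):

    import numpy
    from pycbc.types import TimeSeries
    from pycbc.strain.strain import gate_data

    # One second of fake strain sampled at 4096 Hz
    data = TimeSeries(numpy.random.normal(size=4096), delta_t=1.0/4096, epoch=0)
    # Zero out 2 x 0.05 s centred on t = 0.5 s, with 0.1 s Tukey tapers each side
    gated = gate_data(data, [(0.5, 0.05, 0.1)])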
pycbc.strain.strain.insert_strain_option_group(parser, gps_times=True)[source]
Add strain-related options to the optparser object.
Adds the options used to call the pycbc.strain.from_cli function to an optparser as an OptionGroup. This should be used if you want to use these options in your code.
Parameters: parser (object) – OptionParser instance. gps_times (bool, optional) – Include --gps-start-time and --gps-end-time options. Default is True.
pycbc.strain.strain.insert_strain_option_group_multi_ifo(parser, gps_times=True)[source]
Adds the options used to call the pycbc.strain.from_cli function to an optparser as an OptionGroup. This should be used if you want to use these options in your code.
Parameters: parser (object) – OptionParser instance. gps_times (bool, optional) – Include --gps-start-time and --gps-end-time options. Default is True.
pycbc.strain.strain.next_power_of_2(n)[source]
Return the smallest integer power of 2 larger than the argument.
Parameters: n (int) – A positive integer.
Returns: m (int) – Smallest integer power of 2 larger than n.
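A behaviourally equivalent sketch, written purely from the docstring above (not the library's actual implementation):

    def next_power_of_2(n):
        # Smallest integer power of 2 strictly larger than n
        m = 1
        while m <= n:
            m *= 2
        return m

    assert next_power_of_2(5) == 8
    assert next_power_of_2(8) == 16  # strictly larger than the argument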
pycbc.strain.strain.verify_strain_options(opts, parser)[source]
Sanity check provided strain arguments.
Parses the strain data CLI options and verifies that they are consistent and reasonable.
Parameters: opt (object) – Result of parsing the CLI with OptionParser, or any object with the required attributes (gps-start-time, gps-end-time, strain-high-pass, pad-data, sample-rate, frame-cache, channel-name, fake-strain, fake-strain-seed). parser (object) – OptionParser instance.
pycbc.strain.strain.verify_strain_options_multi_ifo(opts, parser, ifos)[source]
Sanity check provided strain arguments.
Parses the strain data CLI options and verifies that they are consistent and reasonable.
Parameters: opt (object) – Result of parsing the CLI with OptionParser, or any object with the required attributes (gps-start-time, gps-end-time, strain-high-pass, pad-data, sample-rate, frame-cache, channel-name, fake-strain, fake-strain-seed). parser (object) – OptionParser instance. ifos (list of strings) – List of ifos for which to verify options.
## Module contents
pycbc.strain.read_model_from_config(cp, ifo, section='calibration')[source]
Returns an instance of the calibration model specified in the given configuration file.
Parameters: cp (WorkflowConfigParser) – An open config file to read. ifo (string) – The detector (H1, L1) whose model will be loaded. section ({"calibration", string}) – Section name from which to retrieve the model.
Returns: instance – An instance of the calibration model class.
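A minimal end-to-end sketch (cp, strain and params are assumed to already exist; 'recalib_' is the default parameter prefix documented for map_to_adjust above):

    from pycbc.strain import read_model_from_config

    # cp: an open WorkflowConfigParser; strain: a FrequencySeries (both assumed)
    model = read_model_from_config(cp, 'H1', section='calibration')
    # params: dict of sampling parameters; the calibration entries carry the
    # 'recalib_' prefix and are filtered out by map_to_adjust
    strain_adjusted = model.map_to_adjust(strain, **params)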
http://mathhelpforum.com/calculus/43630-xy-dv-triple-integration-help-bounds.html
# Math Help - ∫∫∫ xy dV Triple Integration. Help with bounds!
1. ## ∫∫∫ xy dV Triple Integration. Help with bounds!
∫∫∫ xy dV, where the solid is a tetrahedron with vertices (0,0,0), (1,0,0),(0,2,0), and (0,0,3)
I tried this with 0<x<1, 0<y<-2x+2, and 0<z<3 with no luck =\ Are my bounds wrong in this case?
What would be really helpful is if someone could explain how I am supposed to figure out the bounds. I don't even know how to figure them out which means I can't do any of the problems assigned
2. Originally Posted by crabchef
∫∫∫ xy dV, where the solid is a tetrahedron with vertices (0,0,0), (1,0,0),(0,2,0), and (0,0,3)
I tried this with 0<x<1, 0<y<-2x+2, and 0<z<3 with no luck =\ Are my bounds wrong in this case?
What would be really helpful is if someone could explain how I am supposed to figure out the bounds. I don't even know how to figure them out which means I can't do any of the problems assigned
yes, your limits for z are wrong. see here for the method of finding bounds. post #2 gives a shortcut for special cases (this one). posts #7 and #9 give the longer, more conventional way of tackling it
3. thanks for the link! this is going to take me a while to sort out ^^
ah nevermind, z should equal 3 - 3x - 3y/2
4. Originally Posted by crabchef
∫∫∫ xy dV, where the solid is a tetrahedron with vertices (0,0,0), (1,0,0),(0,2,0), and (0,0,3)
I tried this with 0<x<1, 0<y<-2x+2, and 0<z<3 with no luck =\ Are my bounds wrong in this case?
What would be really helpful is if someone could explain how I am supposed to figure out the bounds. I don't even know how to figure them out which means I can't do any of the problems assigned
The upper surface of the tetrahedron is a plane with points (1,0,0), (0,2,0), and (0,0,3) which has equation $x+ \frac{y}{2}+ \frac{z}{3}= 1$. That can be rewritten $\frac{z}{3}= 1- x- \frac{y}{2}$ and then $z= 3- 3x- \frac{3y}{2}$. That is the upper bound for your tetrahedron.
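Carrying the calculation through with those bounds, as a quick check of the final answer:
$$\int_0^1\int_0^{2-2x}\int_0^{3-3x-3y/2} xy\,dz\,dy\,dx = \int_0^1\int_0^{2-2x} xy\left(3-3x-\frac{3y}{2}\right)dy\,dx = \int_0^1 2x(1-x)^3\,dx = \frac{1}{10}$$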
https://www.esaral.com/q/68-boxes-of-a-certain-commodity-require-a-shelf-length-of-13-6-m-20504/
68 boxes of a certain commodity require a shelf-length of 13.6 m.
Question:
68 boxes of a certain commodity require a shelf-length of 13.6 m. How many boxes of the same commodity would occupy a shelf length of 20.4 m?
Solution:
Let $x$ be the number of boxes that occupy a shelf-length of $20.4 \mathrm{~m}$.
If the length of the shelf increases, the number of boxes will also increase.
Therefore, it is a case of direct variation.
$\frac{68}{x}=\frac{13.6}{20.4}$
$\Rightarrow 68 \times 20.4=x \times 13.6$
$\Rightarrow x=\frac{68 \times 20.4}{13.6}$
$=\frac{1387.2}{13.6}$
$=102$
Thus, 102 boxes will occupy a shelf-length of $20.4 \mathrm{~m}$.
http://www.angrymath.com/2012/01/concrete-p-value-demonstration.html
## Sunday, January 1, 2012
### Concrete P-Value Demonstration
I find that students in my statistics class are almost totally bewildered by the logic of hypothesis testing and P-values (for hypotheses based on a population mean), no matter how carefully I try to explain the concepts. Here's an idea for a super-short and simple, concrete demonstration of hypothesis testing. Tell me if you think this would be worth the class time:
1. Start with a hand of four cards: {A, 2, 3, 4}
2. I'll turn my back and secretly do one of two things:
H0: Leave the Ace in, or
HA: Take the Ace out
3. Now shuffle the hand and deal out 3 cards.
Question: Say I get a draw of {2, 3, 4}. What's the chance of this happening if I did not take out the Ace (H0)? Note that all possible draws would be {{A,2,3}, {A,2,4}, {A,3,4}, {2,3,4}} so the probability of seeing that would be P = f/N = 1/4 = 0.25.
Conclusion: If I draw {2,3,4} then we have some evidence that I did change the deck (HA) -- because it's unlikely to see that result if I didn't (P = 0.25).
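A quick simulation (my own sketch, not part of the original post) confirms that figure:

    import random

    # Under H0 the hand is {A, 2, 3, 4}; estimate P(draw == {2, 3, 4})
    trials = 100000
    hits = sum(set(random.sample(['A', '2', '3', '4'], 3)) == {'2', '3', '4'}
               for _ in range(trials))
    print(hits / trials)  # ~0.25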
Now -- You can actually demonstrate this and ask the class if they think I left the Ace in or took it out each time. I'd recommend 3 run-throughs: leave it, leave it, then take it out. (In the latter case, also ask: Is it possible that I left the Ace in?) In reality, you should probably hold the cards against the otherwise full box, so it isn't obvious if your hand becomes empty in the take-it-out case. (And otherwise practice the prestidigitation in advance so your handwork doesn't give it away.)
Open Question: Should I actually reveal to the class which one I did each time (for confirmation), or leave that as a mystery (modeling real-world usage)?
#### 2 comments:
1. I'm struggling with your suggested demonstration, and I think it's because you mention hypothesis testing for means and then proceed with a demonstration that isn't about means. Also, I don't think it's as simple as defining a population then considering all the possible samples. That might make for an effective demonstration, but (as I understand it) hypothesis testing is totally unaware of the size of the population (i.e., your samples of 3 cards do not "know" they're sampling a population of 4 cards). By trying to define all possible samples, I fear students might be misled about the population-sample relationship in hypothesis testing and the theoretical nature of a sampling distribution.
I'm glad you're making me think about this, because in my limited experience I haven't used much to explain the concept other than drawings of overlapping sampling distributions, and the general explanation that lots of overlap would be higher p-values, and little overlap would be small p-values. I'm guessing there might be some computer simulations that would be helpful, but I haven't explored enough (yet) to find them.
2. Raymond -- Thanks for the comment, really good stuff to think about!
Now, I actually think one of the advantages here is to have an example that is about something other than testing a population mean. One of the things I struggle with in the introductory class is in trying to communicate that the concepts of confidence-intervals and hypothesis-tests apply to a whole universe of parameters other than just a mean (median, standard deviation, proportion, odds ratio, etc.) So dealing with those general concepts in isolation, prior to introducing the machinery of means-testing, I think might give valuable added perspective.
And I think that part of the demonstration is that somehow you do indeed have to categorize all possible sampling results under the null-hypothesis. For this brief example, you can list them individually. For the case of a mean from an unknown population, the analogy is to use the Central Limit Theorem, and conclude that they are at least approximately normally distributed (for a sufficiently large sample). So there is a correspondence there that I'm consciously trying to highlight.
https://www.physicsforums.com/threads/relativistic-mass.285014/
# Relativistic mass?
Homework Helper
## Main Question or Discussion Point
So for an object free falling from infinity to a black hole, even though its velocity may become relativistic, it doesn't gain mass?
This one has me curious as well.
Does mass increase as velocity approaches light speed?
Does "weight" also increase?
Does the strength of gravity from the object also increase?
If the mass and weight increase by the same ratio, then the rate of acceleration towards a black hole wouldn't be limited and would be ever increasing. What prevents such an object from reaching or exceeding light speed (as witnessed by an external observer)?
I tried to follow the wiki link, but it seems there's no simple (or single) explanation for mass in GR:
http://en.wikipedia.org/wiki/Mass_in_general_relativity
1. By falling it is only converting the potential energy it already has to kinetic energy.
2. Even though it is being accelerated by gravity, to its own inertial reference frame it is at rest.
3. If it is gaining mass (energy) where is that mass coming from? Certainly not from the black hole.
4. If it gains mass as it falls, might it at some point have more mass than the black hole it is falling towards?
HallsofIvy
Homework Helper
We seem to keep saying this.
It is wrong to say simply that an object "increases its mass" or "shortens its length" or "times slows down" without specifying "as measured in what frame of reference".
The whole point of "relativity" is that such things are relative to the coordinate system in which the object is being observed.
Homework Helper
The whole point of "relativity" is that such things are relative to the coordinate system in which the object is being observed.
I thought I covered this by explaining that the frame of reference was from an external obsever. Say the obsever's position was relatively far away on a line that crossed the center of mass between the black hole and target object at the start of the initial condition.
Assume the observer is either far enough away or simple a frame of reference that the observer is not affected by the gravity from the black hole or target object.
Given these conditions, would the observer see a change in mass related to changes in velocities relative to the observer, as the two objects accelerated towards each other? Would the observer see either object appear to exceed the speed of light relative to the observer? If the objects had clocks on them, would the observer see the clocks rates change over time?
We seem to keep saying this.
It is wrong to say simply that an object "increases its mass" or "shortens its length" or "times slows down" without specifying "as measured in what frame of reference".
Yes, of course. Sometimes the frame of reference is apparent from the discussion and therefore not explicitly given. Many times the frame of reference is taken to be flat spacetime unless otherwise specified.
"Relativistic mass" is a not a term that's used much any more, exactly because of the difficulties raised earlier on this thread. It's better to say that the mass is invariant, and that the energy and momentum is what varies from its classical prediction in the new reference frame.
It's better to say that the mass is invariant, and that the energy and momentum is what varies from its classical prediction in the new reference frame.
Why?? Aren't all frame dependent?
Why?? Aren't all frame dependent?
Yes, but it turns out that relativistic mass isn't too useful in calculations, and the concept just complicates things unnecessarily.
Momentum and energy are frame-dependent in classical physics even, remember; adding another frame-dependent quantity into the mix doesn't make things any easier. Better just to lump the relativistic variations in with energy, momentum, and leave the mass invariant.
Dale
Mentor
Sideways is correct. In modern usage mass is the norm of the four-momentum while energy is its timelike component and momentum is its spacelike components. Therefore mass is invariant, while energy and momentum are frame-dependent.
The deprecated concept of "relativistic mass" is the same as energy (timelike component of four-momentum).
jtbell
Mentor
Given these conditions, would the observer see a change in mass related to changes in velocities relative to the observer, as the two objects accelerated towards each other?
How are you imagining the observer to "see" or "measure" the mass of the object? In classical physics, there are various ways of doing this, which all give the same value for the mass. In relativity, these methods do not necessarily give the same value.
If the mass and weight increase by the same ratio, then the rate of acceleration towards a black hole wouldn't be limited and would be ever increasing.
I thought a distant observer never sees a mass cross the event horizon; from a local free-falling frame, the mass crosses the event horizon and disappears in finite time.
So I don't see that acceleration and velocity is unlimited.
Homework Helper
How are you imagining the observer to "see" or "measure" the mass of the object?
By observing any change made in the strength of the gravitational field near the target object (how it affects nearby objects). By observing the rate of change in acceleration due to gravity between target object and black hole as the target (and black hole) velocity increases with respect to the observer.
So I don't see that acceleration and velocity is unlimited.
I'm not stating a position here, I'm asking a question. Ignoring the relativistic mass issue, if there is a limit to the velocity (with respect to the distant observer) of an object just before it crosses the event horizon, what is the limit and why? Is there a limit to the rate of acceleration?
Gokul43201
Staff Emeritus
Gold Member
Yes, there is a limit to the velocity - that limit is c.
Perhaps what you need to consider is that the momentum does not scale linearly with the velocity; it increases faster than the velocity does, by the additional factor $\gamma$. So even though the momentum (and energy) are increasing rapidly as the object approaches the black hole, the corresponding increases in the velocity get smaller and smaller as v approaches c (and gamma approaches infinity).
Yes, there is a limit to the velocity - that limit is c.
Perhaps what you need to consider is that the momentum does not scale linearly with the velocity; it increases faster than the velocity does, by the additional factor $\gamma$. So even though the momentum (and energy) are increasing rapidly as the object approaches the black hole, the corresponding increases in the velocity get smaller and smaller as v approaches c (and gamma approaches infinity).
In fact from the perspective of flat spacetime, not only do the increases in the velocity get smaller and smaller, the velocity appears to actually diminish as the object approaches the event horizon. Close to the EH space is increasingly contracted in the radial direction and what may seem like many kilometers to the object may be only a few meters for the observer, making the object appear to slow down.
atyy
In GR the speed of an object relative to a distant observer is not uniquely defined (and can be greater than the speed of light according to some definitions). At each point in its worldline, an object has a 4-velocity. However, due to the curvature of spacetime, the 4-velocities at vastly different points cannot be uniquely compared.
When considering a test particle falling into a black hole, the particle doesn't have any 4-acceleration, i.e. it moves on a geodesic. Every point in space is locally Minkowskian, so its local velocity never exceeds the local velocity of light.
In the case where the infalling particle isn't a test particle, an interesting place to look might be the case of one black hole falling into another. In this paper by Baker et al, the final black hole "mass" is not the sum of the initial two black hole "masses", but I haven't understood the paper enough to know if this is significant or just numerical error. I mainly thought the video is fun to watch!
http://www.nasa.gov/vision/universe/starsgalaxies/gwave.html
http://arxiv.org/abs/0805.1428
stevebd1
Gold Member
Slightly off topic but while looking for information on the web regarding mass and relativity, I found the following overview-
http://www.ph.rhul.ac.uk/course_materials/PH154/Relativistic mass and dynamics.pdf
Though it's entitled 'Relativistic Mass and Dynamics', it does make a good distinction between energy and mass.
a similar overview regarding Lorentz transformations-
http://www.ph.rhul.ac.uk/course_materials/PH154/Lorentz transformations.pdf
In the paper, it says that "Alternatively, we can say that the photon energy is equivalent to mass and is attracted by gravity like any other mass. Light is deflected by some 0.00048° as it grazes the sun."
Is it true that "photon energy is equivalent to mass and is attracted by gravity"?
stevebd1
Gold Member
Thanks for the response feynmann. Comparing photon energy to mass is a slightly slippery subject. The 0.00048° is based on
$$\delta=\frac{4Gm}{c^2r}$$
where δ is the starlight deflection in radians
which relates to Schwarzschild curved spacetime so I don't see the need to say that the photon energy is equivalent to mass and is attracted by gravity. I remember reading a description of gravitational lensing as being a consequence of gravitational time dilation; the side of the photon closest to the mass experiences less time which 'pulls' the photon off course; but again, this is treating the photon as a point mass rather than particle/wavelength that follows the curvature of spacetime.
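Plugging solar values into that formula, as a quick numerical check (taking $G = 6.674\times 10^{-11}$, $M_{\odot} = 1.989\times 10^{30}$ kg and $r = R_{\odot} = 6.96\times 10^{8}$ m):
$$\delta=\frac{4GM_{\odot}}{c^2R_{\odot}}\approx 8.5\times 10^{-6}\ \text{rad}\approx 0.00049^{\circ}$$
which matches the quoted 0.00048° up to rounding of the inputs.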
Considering the Compton wavelength (http://en.wikipedia.org/wiki/Compton_wavelength) $(\lambda=h/mc)$ which expresses the rest mass of a particle as a wavelength-
$$\tag{1}E = h f = \frac{h c}{\lambda} = m c^2$$
where h is Planck's constant, f is frequency and $\lambda$ is wavelength
you might (cautiously) consider the photon as being equivalent to mass but while it seems acceptable to consider sub-atomic particles as wavelengths in QM, I don't think it's acceptable to consider the (unbound) energy of a photon wavelength as a particle that could be affected by gravity. While I'm not saying the statement is wrong (the people who put the summary together are sure to be more educated in GR than I am), the statement does raise more questions than it answers.
Steve
(1)- http://en.wikipedia.org/wiki/Compton_wavelength
http://www.sciforums.com/threads/independent-variables-and-partial-differentiation.112351/
# Independent variables and partial differentiation
Discussion in 'Physics & Math' started by Pete, Feb 8, 2012.
1. ### Pete (It's not rocket surgery, Moderator)
I hope this makes sense... I'm not familiar with the terminology.
There is a vector space (x,y), with a linear transformation between (x, y) and (x', y'):
$x' = g_1(x, y) \\ y' = g_2(x, y)$
There is a set of points (a,b) in (x,y) defined by the following function:
$a = f(b,c)$
Where c is an independent variable.
- What is db/dc? Is it zero? Is it undefined?
The points (a,b) are transformed to (a',b').
- Can we say from the above whether b' and c are independent?
- Is there a rigorous definition of what 'independent' means?
- Does it depend on the particular equation?
Say we use the above to derive an expression like this:
$a' = h(a', b', c)$
- Are b' and c independent variables in that expression?
- Can I simply treat b' as a constant when finding $\frac{\partial a'}{\partial c}$, or do I need to treat it as a function of c?
2. ### James R (Just this guy, you know?, Staff Member)
I'm not sure whether I can answer this question in the abstract. So let me see if I can construct a simple example...
I'll use
$x' = x + y\\ y'=x-y$
I'll take
$a=e^{bc}$
You said c is an independent variable, by which I assume you mean c is independent of b. That is, we can vary b as much as we want to without affecting c, and vice versa.
This suggests to me that db/dc=0.
Using my example
$a=e^{bc} \\ x' = x + y\\ y'=x-y$
so
$a'=a + b = e^{bc} + b\\ b'=a-b = e^{bc} - b$
It seems clear to me from the example that b' depends on c.
Doesn't it mean that you can vary one variable without affecting the other?
Do you mean $a'=h(a,b',c)$? Using my example we have
$a'= e^{bc} + b = b' + 2b = b' + 2\frac{\ln a}{c}$
No, because $b'=e^{bc}-b$
Doesn't the notation $\frac{\partial a'}{\partial c}$ mean that all variables other than c are to be held constant? That would include b', wouldn't it?
3. ### Pete (It's not rocket surgery, Moderator)
Thanks James,
But the same logic suggests that dc/db=0, ie db/dc is infinite or something.
It seems to me that db/dc=0 would imply that b can't vary:
\begin{align} \frac{db}{dc} &= 0 \\ \int \frac{db}{dc} dc &= \int 0 dc \\ b + C_1 &= C_2 \\ b &= C \end{align}
...but I'm not enough of a mathematician to be certain that this proof is valid.
That's my understanding, but I was wondering if there was any formal definition.
In the specific case this question is based on, the expression was $a'=h(a',b',c)$, but the key point is that the expression involves b' and c, and doesn't involve b, so your example is good.
This is where I get confused.
In that function, it seems that b' is an dependent variable, depending on b and c. (Or are they three interdependent variables?)
But in the function:
$a'= b' + 2\frac{\ln a}{c}$
Where b is not involved, it seems to me that b' is independent of c, because they can vary without affecting the other.
For any given value of c, b' can have any value (because b can have any value).
For any given value of b', c can have any value (because b can have any value).
Does that make sense, or am I right off the track?
I think so... but I'm getting confused by b.
We can hold b constant or b' constant, but not both (because that would imply holding c constant).
b is not in the equation, but b' was earlier derived/defined as a function of b and c.
So, do we hold b' constant, because b' is in the equation and b isn't?
Or do we hold b constant, for some other reason?
4. ### Trippy (ALEA IACTA EST, Staff Member)
Consider a horizontal line, which is really what you're describing.
The value of Y (or in this case b) is both constant and independent of the value of X (or in this case c).
Or to put it another way, if you set the variable c to some constant, you make it independent of b.
I'm not sure it neccessarily follows that it's true of all independent variables, however, that the derivative is 0.
5. ### Pete (It's not rocket surgery, Moderator)
So, dy/dx=0 implies that y is a constant, right?
So if y is not a constant, then this implies that dy/dx is not zero?
6. ### Trippy (ALEA IACTA EST, Staff Member)
I would have said so, yes.
If you consider
Y=0
Y=6
Y=$\frac{\sqrt{2}}{\pi}$
and Y=999999999999999999999999999999999999999999999999999999999999999
In all cases $\frac{dy}{dx}=0$
7. ### James R (Just this guy, you know?, Staff Member)
What if y = z, and z is independent of x?
In general, if y=f(x,z), then
$\frac{dy}{dx} = \frac{\partial y}{\partial x} + \frac{\partial y}{\partial z}\frac{\partial z}{\partial x}$
So, if y=f(x,z)=z, then
$\frac{dy}{dx} = 0 + 1\times \frac{\partial z}{\partial x} = \frac{\partial z}{\partial x}$
so, we need to know how z varies with x to find the total derivative of y with respect to x.
If z is independent of x, then it seems to me that we must have $\frac{\partial z}{\partial x} = 0$, and it then follows also that $\frac{dy}{dx}=0$.
8. ### Pete (It's not rocket surgery, Moderator)
I think that should be
$\frac{dy}{dx} = \frac{\partial y}{\partial x} + \frac{\partial y}{\partial z}\frac{dz}{dx}$
Right... but "how z varies with x" implies some relationship between z and x. It doesn't make sense if z and and are independent, because when you vary x, z can do anything.
Maybe I'm just not getting the concept of independence.
It seems to me that independence is not an absolute thing, but is relative to the equation.
eg:
y = x+z
implies y depends on independent variables x and z.
But rewriting it as:
x = y - z
implies that x depends on independent variables y and z.
Is it better in this case to say that x, y, and z are interdependent?
9. ### arfa brane (call me arf, Valued Senior Member)
Penrose states in his book "The Road to Reality", that for a pair of independent variables, say u and v, expressing that some quantity, say Z, is a function of u and v but independent of v is given by: dZ/dv = 0. It also says that for any value of u, Z is constant in v, and so Z depends only on u.
He also makes the point that this only holds locally, it might not be true for all Z.
10. ### Pete (It's not rocket surgery, Moderator)
...so Z is really only a function of u?
Are you sure it didn't say $\partial Z/\partial v = 0$?
That looks very much like a description of $\partial Z/\partial v = 0$.
11. ### arfa brane (call me arf, Valued Senior Member)
Yes, sorry, it actually concerns the Cauchy-Riemann equations. Partial derivatives are of one variable by definition.
Edit: but the principle holds for ordinary differentiation: if Z is constant in v the "slope" dZ/dv is zero, and v is independent.
For a function of two variables, Z(u,v), there is no problem with Z being constant in v but still a function of u and v. The v could be contour lines of constant height (latitude) above the equator of a sphere, say, so they have gradients = 0.
12. ### temur (man of no words, Registered Senior Member)
If you fix c, you'll get a curve a=f(b,c) in general. So c cannot be taken as the only independent variable. Rather, we should consider b as a function of a and c. Differentiate the equation a=f(b,c) with respect to c to get
$0=\frac{\partial f}{\partial b} \frac{\partial b}{\partial c} + \frac{\partial f}{\partial c},$
so
$\frac{\partial b}{\partial c} = - \frac{\partial f}{\partial c} \left(\frac{\partial f}{\partial b}\right)^{-1},$ or in short $b_c = - f_c/f_b$,
provided the latter term can be inverted.
In effect, it is the same as holding a fixed, and defining the implicit function b(c) by a=f(b,c).
- Not from the above, but you can choose to consider b' and c as independent, because you have one equation and 3 variables.
- It means you can freely choose the values of the variables, and still have freedom to satisfy the given equation/relation.
- Yes, but in a weak way. If you have 1 equation involving 3 variables, you can choose 2 of them to be independent, unless some singular situations arise (look up implicit function theorem).
- You can choose so.
- Yes.
13. ### Pete (It's not rocket surgery, Moderator)
Thanks temur.
What about the total derivative $\frac{db}{dc}$?
Does it have a value?
Is it even meaningful?
Thanks, I'll exercise due care
One more...
We have:
\begin{align} x &= f(y,z) \\ \frac{dx}{dy} &= 0 \end{align}
Is this correct:
\begin{align} \int \frac{dx}{dy} dy &= \int 0 dy \\ x + C_1 &= C_2 \\ x &= C \end{align}
Or this:
\begin{align} \int \frac{dx}{dy} dy &= \int 0 dy \\ x + g(z) &= C_2 \end{align}
My understanding is that the use of the expression $\frac{dx}{dy}$ implies that z is a function of y, so x is implicitly a function of only y, and that the first process is correct.
But I'm not certain, and there's disagreement about it in another thread (it's a debate thread, so only Tach and I can post there).
14. ### temur (man of no words, Registered Senior Member)
It is not meaningful. In order to define this, you need to specify how a depends on c. The "total" derivative $\frac{db}{dc}$ is taken along a given curve. Without a curve (i.e., a dependence such as a=a(c)), there is no "total" derivative.
Again you need to know how z depends on y. Otherwise it has no meaning.
15. ### Pete (It's not rocket surgery, Moderator)
Right, we can't even talk about $\frac{dx}{dy}$ unless z depends on y...
So, if:
\begin{align} x &= f(y,z) \\ z &= g(y) \\ \frac{dx}{dy} &= 0 \end{align}
Then is this correct?
\begin{align} \int \frac{dx}{dy} dy &= \int 0 dy \\ x + C_1 &= C_2 \\ x &= C \end{align}
Thanks again
16. ### temur (man of no words, Registered Senior Member)
These conditions imply only that the function f(y,z) is constant along the curve z=g(y). For example, take g(y)=0. Then any of f(y,z)=1, f(y,z)=zy, f(y,z)=sin(z) satisfy the equation $\frac{dx}{dy} = f_y+f_zg_y=0$.
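To make the second of those concrete: with $f(y,z)=zy$ and $g(y)=0$, we get $\frac{dx}{dy} = f_y + f_z g_y = z + y\cdot 0 = z = 0$ along the curve $z=g(y)=0$, even though $f$ is clearly not constant away from that curve.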
17. ### Pete (It's not rocket surgery, Moderator)
*click*
Thanks!
https://brilliant.org/problems/linear-equation-in-2-variables-5/
# Linear Equation in 2 variables
Algebra Level 1
Consider the fraction $$\frac { x }{ y }$$. The sum of the numerator and denominator is 18. If 9 is added to the numerator, then it becomes 1 less than 3 times the denominator. Find the fraction and enter the product of the numerator (x) and denominator (y).
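One way to work it (a solution sketch, not part of the original problem statement): the conditions give $$x + y = 18$$ and $$x + 9 = 3y - 1$$, so $$x = 3y - 10$$ and hence $$4y - 10 = 18$$, giving $$y = 7$$ and $$x = 11$$. The fraction is 11/7 and the requested product is $$xy = 77$$.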
https://stats.stackexchange.com/questions/412291/maximizing-sum-of-upper-triangle-matrix-elements-with-respect-to-column-and-row
# Maximizing Sum of Upper Triangle Matrix Elements with Respect to Column and Row Swapping
So, I want to make a ranking method for teams in the EPL (English Premier League). There are 20 teams in the EPL, so there are $$20!$$ possible ranking assignments; my final ranking assignment would be the one that minimizes the loss function
$$Loss=$$ Number of matches where the lower-ranked team defeats the higher-ranked team
I think you can make a matrix where each element $$a_{ij}$$, the element of the $$i$$-th row and $$j$$-th column, would be the number of times the $$i$$-th ranked team defeats the $$j$$-th ranked team.
You would make $$20!$$ different matrices, where each matrix is just like the others with swapped rows and columns. The diagonal would still be on the diagonal.
The final ranking assignment would then be the one that minimizes the loss function, which can be calculated by summing the elements of the lower triangle of the matrix.
From my understanding, this is equivalent to maximizing the sum of the elements of the upper triangle of the matrix, because the loss function is equivalent, as in always giving the same final ranking assignment, to
$$Loss=-$$ Number of matches where the higher-ranked team defeats the lower-ranked team
But iterating through $$20!$$ different matrices is very slow. I wonder if there is any method to find the optimum ranking assignment quickly.
• What is EPL? Can you spell it out? Jun 11 '19 at 21:40
I encountered the same problem when trying to order the nodes in a complete asymmetric graph. My objective is similar: maximize the difference between the sum of the lower-triangle entries and the sum of the upper-triangle entries. Here is a greedy solution in PyTorch. I have not looked into any guarantee that it converges nor that it is optimal. However, when comparing with a brute-force method for random matrices up to n=9, it always reached the same solution. Assuming that the produced solution is optimal, the number of steps required to reach it seems to be O(n log n) for random matrices.
We first order the nodes by assigning each a value that is the sum of its outgoing edge weights minus the sum of its incoming edge weights. This quickly brings us closer to the solution.
Then, we build a matrix that contains the gain of moving the ith node to the jth position (I also tried a strategy that evaluates the cost of switching two nodes, but it was suboptimal). We select and apply the best move and rebuild the matrix. We repeat the last step until every move has a negative value.
I haven't spent a lot of time trying to optimize the algorithm; let me know if you find any way to improve it.
    import torch

    def mperm(perm, matrix=None):
        """Applies the permutation perm on the rows and columns of matrix."""
        if matrix is not None:
            return matrix[perm][:, perm]
        # No matrix given: return the corresponding permutation matrix
        res = torch.zeros(len(perm), len(perm)).long()
        for i, j in enumerate(perm):
            res[i, j] = 1
        return res

    def graph_argsort(a, max_steps=100, pre_sort=True):
        def move(m, i, j, perm):
            """Moves the node at the ith position to the jth position."""
            i = int(i)
            j = int(j)
            start, stop = sorted((i, j))
            source = list(range(start, stop + 1))
            dest = (source[-1:] + source[:-1]) if i < j else (source[1:] + source[:1])
            m[dest] = m[source]
            m[:, dest] = m[:, source]
            perm[dest] = perm[source]

        # Zero the diagonal so self-edges never contribute to any gain
        b = a.detach().cpu().clone().masked_fill(torch.eye(a.shape[-1]).bool(), 0)
        # 1st step: absolute sorting by (out-strength - in-strength)
        if pre_sort:
            result_perm = (-b.sum(0) + b.sum(1)).argsort()
            b = mperm(result_perm, b)
        else:
            result_perm = torch.arange(len(b))
        # 2nd step: greedy insertion sort
        for step in range(max_steps):
            # gain from flipping outgoing edges
            top_gain = b.tril(-1).cumsum(0) - b.triu(1).flip(0).cumsum(0).flip(0)
            # gain from flipping incoming edges
            bot_gain = b.triu(1).cumsum(1) - b.tril(-1).flip(1).cumsum(1).flip(1)
            move_weights = (bot_gain - top_gain.T) * 2
            i = move_weights.view(-1).argmax(-1)
            i, j = int(i // b.shape[-1]), int(i % b.shape[-1])
            if move_weights[i, j] <= 0:
                break
            move(b, i, j, result_perm)
        return result_perm
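A small usage sketch (the team count and win counts are arbitrary):

    # Entry [i, j] counts wins of team i over team j (20 teams, as in the EPL)
    wins = torch.randint(0, 3, (20, 20))
    order = graph_argsort(wins)
    ranked = mperm(order, wins)
    # One triangle of the reordered matrix sums the 'upsets'; which one depends
    # on the sort direction, so inspect both
    print(ranked.tril(-1).sum(), ranked.triu(1).sum())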
https://ncertmcq.com/rd-sharma-class-8-solutions-chapter-8-division-of-algebraic-expressions-ex-8-1/
## RD Sharma Class 8 Solutions Chapter 8 Division of Algebraic Expressions Ex 8.1
These Solutions are part of RD Sharma Class 8 Solutions. Here we have given RD Sharma Class 8 Solutions Chapter 8 Division of Algebraic Expressions Ex 8.1
Question 1.
Write the degree of each of the following polynomials:
(i) 2x³ + 5x² - 7
(ii) 5x² - 3x + 7
(iii) 2x + x² - 8
(iv) $$\frac { 1 }{ 2 }$$ y⁷ - 12y⁶ + 48y⁵ - 10
(v) 3x³ + 1
(vi) 5
(vii) 20x³ + 12x²y² - 10y² + 20
Solution:
(i) 2x³ + 5x² - 7 : The degree of this polynomial is 3.
(ii) 5x² - 3x + 7 : The degree of this polynomial is 2.
(iii) 2x + x² - 8 : The degree of this polynomial is 2.
(iv) $$\frac { 1 }{ 2 }$$ y⁷ - 12y⁶ + 48y⁵ - 10 : The degree of this polynomial is 7.
(v) 3x³ + 1 : The degree of this polynomial is 3.
(vi) 5 : The degree of this polynomial is 0, as it is only a constant term.
(vii) 20x³ + 12x²y² - 10y² + 20 : The degree of this polynomial is 2 + 2 = 4.
Question 2.
Which of the following expressions are not polynomials:
(i) x² + 2x⁻²
(ii) √a x + x² - x³
(iii) 3y³ - √5 y + 9
(iv) ax^(1/2) + ax + 9x² + 4
(v) 3x² + 2x⁻¹ + 4x + 5
Solution:
(i) x² + 2x⁻² = x² + 2 $$\frac { 1 }{ { x }^{ 2 } }$$ = x² + $$\frac { 2 }{ { x }^{ 2 } }$$ : It is not a polynomial, as it has a negative integral power of x.
(ii) √a x + x² - x³ : It is a polynomial.
(iii) 3y³ - √5 y + 9 : It is a polynomial.
(iv) ax^(1/2) + ax + 9x² + 4 : It is not a polynomial, as the power 1/2 of x is not a non-negative integer.
(v) 3x² + 2x⁻¹ + 4x + 5 : It is not a polynomial, as the power of x in 2x⁻¹ is negative.
Question 3.
Write each of the following polynomials in the standard form. Also write their degree.
(i) x² + 3 + 6x + 5x⁴
(ii) a1 + 4 + 5a6
(iii) (x³ - 1)(x³ - 4)
(iv) (y³ - 2)(y³ + 11)
(v) $$\left( { a }^{ 3 }-\frac { 3 }{ 8 } \right)$$ $$\left( { a }^{ 3 }-\frac { 16 }{ 17 } \right)$$
(vi) $$\left( { a }+\frac { 3 }{ 4 } \right)$$ $$\left( { a }+\frac { 3 }{ 4 } \right)$$
Solution:
A polynomial in standard form has its terms arranged in ascending or descending order of degree.
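For instance, for part (i) (a worked illustration): x² + 3 + 6x + 5x⁴ written with descending powers is 5x⁴ + x² + 6x + 3, so it has degree 4.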
|
2022-06-28 21:00:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6683705449104309, "perplexity": 1983.7872341710708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103617931.31/warc/CC-MAIN-20220628203615-20220628233615-00621.warc.gz"}
|
https://kevinkotze.github.io/ts-3-forecasting/
|
A significant part of the time series literature considers the ability of a model to predict the future behaviour of a variable with a reasonable degree of accuracy. This objective is important, as most decisions taken today are based on what we think will happen in the future. Depending on the decision under consideration, the future can of course be the next minute, day, month, year, etc. For example, a day-trader may be interested in the price of a share in the next minute, hour or day, while the governor of a central bank may be interested in the rate of inflation over the next year.
Forecasting the future is not an easy thing to do, especially when it comes to economic variables that reflect the complex interactions between individuals, firms and organizations. This has led to the development of a plethora of models, ranging from large, sophisticated mathematical variants to those that are fairly simple. In many instances, we find that the forecasts from these models are comparable, and the tools that are used to evaluate them need to be carefully applied.
When evaluating forecasting models, we need to make use of ex-post data (data realised after the results of the forecast have been generated). In these instances, we would utilise an explicit loss function to consider whether a forecasting model provides a favourable estimate of the expected future value of a variable. For example, such a loss function would usually penalise forecasts that deviate from the values that are realised after a specific period of time.
When comparing the forecasts that have been generated by two models we may also be interested in determining whether or not they are significantly different from one another. Separate tests have been developed for these investigations where the models are either nested or non-nested. The application of these statistics should be an important part of any forecasting exercise.
In addition, there may be a large degree of uncertainty associated with the forecasts of various models, which is usually of interest to economists and financial analysts. If the uncertainty is an inherent property of the variable we are looking to forecast, this might be a good thing. However if it is not, then this could make the forecast less useful. In many respects, different loss functions emphasize either the accuracy of the point estimate or confidence around this point, and both of these topics will be discussed in what follows.
Lastly, many large institutions employ a suite of forecasting models, where the final forecast is some weighted average of the forecasts from many different models. Several empirical studies have suggested that by employing a combination of forecasting models, one is able to reduce instances where there are large forecasting errors.
# 1 Forecasting notation
To describe the procedure for constructing forecasts, we firstly need to introduce some notation. By way of example, if we have quarterly data and want to obtain forecasts over the next eight quarters (i.e. two years), then we would want to generate eight successive forecasts. Therefore, we would want to generate a number of $$h$$-step ahead forecasts, where $$h=\{1,2, \ldots , 8\}$$. The end of the forecasting horizon may be represented by $$H$$, where in this case, $$H=8$$.
In certain instances, we will refer to the information set, which is denoted $$I_t$$. This term refers to the information, relating to the values of a variable, that is available at a particular point in time. Therefore, in most cases $$I_t$$ would refer to information pertaining to all the past realised values of a particular variable or variables.
In an out-of-sample evaluation of a model we would want to compare the forecasts against future realisations of the data. To complete this exercise the researcher would divide the available sample size of $$T + H$$ into an in-sample portion of size $$R$$ and an out-of-sample portion of size $$P$$. They would then obtain a sequence of $$h$$-step-ahead out-of-sample forecasts of the variable of interest, which we denote $$y_t$$ in the univariate case, using the information set $$I_t$$, such that $$R + P - 1 + H = T + H$$ and $$H < \infty$$.
Figure 1: Notation for different forecasting schemes
The notation that is used in forecasting exercises differs slightly from what we had previously, as the entire sample of realised values is represented by $$T+H$$. When looking to perform an out-of-sample forecasting evaluation, one would usually generate a number of successive forecasts, where the information set includes an additional observation for each successive forecast, as the position of $$I_t$$ moves towards $$T$$. These out-of-sample exercises could follow one of two forecasting schemes: a recursive estimation scheme would use all available information (from period $$t=0$$ to $$I_t$$), while a rolling window forecasting scheme would use a fixed number of observations for the in-sample estimation period (such that the index of the first observation increases with each successive forecast). The rolling forecast scheme would be more appropriate when you believe that there may be structural breaks in the data.
# 2 Forecasting with autoregressive models
In an autoregressive model, we are able to relate the current value of a process to previous values. For example, we may specify an AR(1) model as follows,
$\begin{eqnarray} y_{t}=\phi_{1}y_{t-1}+\varepsilon_{t} \tag{2.1} \end{eqnarray}$
where the residual is Gaussian white noise, $$\varepsilon_{t}$$ $$\sim \mathsf{i.i.d.} \; \mathcal{N}(0,\sigma ^{2})$$. In this case we can simply iterate the model forward over a number of steps to calculate future values of the process,
$\begin{eqnarray} y_{t + 1} &=&\phi_{1}y_{t}+\varepsilon_{t + 1} \nonumber \\ y_{t + 2} &=&\phi_{1}y_{t + 1}+\varepsilon_{t + 2} \nonumber \\ \vdots &=& \vdots \nonumber \\ y_{t + H} &=&\phi_{1}y_{t + H-1}+\varepsilon_{t + H} \nonumber \end{eqnarray}$
These expressions may be summarised by inserting the first line into the second, to relate successive future values to the current observed realisation. Hence, for $$h=2$$ we have,
$\begin{eqnarray} y_{t+2} &=&\phi_{1}(\phi_{1}y_{t}+\varepsilon_{t+1})+\varepsilon_{t+2} \nonumber \\ &=&\phi_{1}^{2}y_{t}+\phi_{1}\varepsilon_{t+1}+\varepsilon_{t+2}, \nonumber \end{eqnarray}$
and after doing this recursively for $$h=3$$ we have,
$\begin{eqnarray} y_{t + 3} & = & \phi_{1}\left(\phi_{1}\left(\phi_{1}y_{t}+\varepsilon_{t + 1} \right) +\varepsilon_{t + 2} \right)+\varepsilon_{t + 3} \nonumber \\ &=&\phi_{1}^{3}y_{t}+\phi_{1}^2\varepsilon_{t+1} + \phi_{1}\varepsilon_{t+2}+\varepsilon_{t+3}, \nonumber \end{eqnarray}$
This gives rise to a pattern, where after sorting terms and simplifying,
$\begin{eqnarray} y_{t + h}=\phi_{1}^{h}y_{t}+\overset{h - 1}{\underset{i = 0}{\sum }}\phi_{1}^{i}\varepsilon_{t + h - i} \tag{2.2} \end{eqnarray}$
Thus $$y_{t + h}$$ is a function of $$y_t$$, which represents the available information set, $$I_t$$, that contains information about all past realised observations of this variable. In this case, the actual observed (i.e. realised) future values of $$y_{t + h}$$ will also contain the effects of future shocks. Unfortunately, information about the future realised values of these shocks is not included in $$I_t$$. Hence, this part of the future realised value of $$y_{t+h}$$ may give rise to a forecasting error.
## 2.1 Point forecasts
To compute the point forecast of $$y_{t + h}$$, we take the conditional expectation $$\mathbb{E}_t \left[ y_{t + h}|I_t \right]$$. In the case of the simple AR(1) in (2.1), this is equivalent to $$\mathbb{E}_t \left[ y_{t + h}|y_{t}\right]$$. In addition, since all the future error terms in this model have an expected mean value of zero, we can use (2.2) to calculate the one-step ahead forecast $$\mathbb{E}_t \left[ y_{t + 1}|y_{t} \right] =\phi_{1}y_{t}$$. Similarly, the two-step ahead forecast would be derived from $$\mathbb{E}_t \left[ y_{t + 2}|y_{t}\right] =\phi_{1}^{2}y_{t}$$, and the more general expression may then be given as,
$\begin{eqnarray} \mathbb{E}_t \left[ y_{t + h} | y_{t}\right] =\phi_{1}^{h}y_{t} \tag{2.3} \end{eqnarray}$
This expression is also occasionally referred to as a predictor, which we could denote $$\acute{y}_t(h)$$. If the variable that we are seeking to forecast is described by a stable AR(1) (i.e. it is assumed that $$|\phi_{1}|<1$$), and the structure does not include a constant (as in (2.1)), then the term $$\phi_{1}^{h}y_{t}$$ would tend towards zero as the forecast horizon increases. Hence,
$\begin{eqnarray*} \mathbb{E}_t \left[ y_{t + h} | y_{t}\right] \rightarrow 0 \hspace{1cm} \text{when } \; h \rightarrow \infty \nonumber \end{eqnarray*}$
Therefore, the effect of shocks that may be contained in $$y_{t}$$ would dissipate, as the forecast horizon increases.
## 2.2 Intercept
If we assume that an intercept is included in the stable AR(1) equation, such that,
$\begin{eqnarray} y_{t }=\mu +\phi_{1}y_{t - 1}+\varepsilon_{t} \tag{2.4} \end{eqnarray}$
Using the same recursions as above, we are able to derive the expression,
$\begin{eqnarray} \mathbb{E}_t \left[ y_{t + h} | y_{t}\right] =(1+\phi_{1}+\phi_{1}^{2}+ \ldots + \phi_{1}^{h -1})\mu +\phi_{1}^{h}y_{t} \tag{2.5} \end{eqnarray}$
Hence, the one-step ahead forecast, where $$h=1$$, is simply $$\mathbb{E}_t \left[ y_{t + h} | y_{t}\right] =\mu +\phi_{1}y_{t}$$. Similarly, the two-step ahead forecast, where $$h=2$$, is $$\mathbb{E}_t \left[ y_{t + 2} | y_{t}\right] =(1+\phi_{1})\mu +\phi_{1}^{2}y_{t}$$. Therefore, when $$h$$ goes to infinity, we see that the forecast becomes the unconditional mean of the process,
$\begin{eqnarray} \mathbb{E}_t \left[ y_{t + h} | y_{t}\right] \longrightarrow \frac{\mu }{1-\phi_{1}} \hspace{1cm} \text{when }h \rightarrow \infty \tag{2.6} \end{eqnarray}$
This follows from the summation formula for an infinite geometric series, $$\sum_{i=0}^{\infty}\phi_1^{i} = 1 / (1-\phi_1)$$, which holds for $$|\phi_1|<1$$.
## 2.3 For the AR(p) model
For completeness, the case of an AR($$p$$) model with an intercept is provided below,
$\begin{eqnarray} y_{t}=\mu +\overset{p}{\underset{i = 1}{\sum}} \phi_{i} y_{t - i}+\varepsilon_{t} \tag{2.7} \end{eqnarray}$
where $$p$$ is the maximum lag length, and the residual is Gaussian white noise. For simplicity, we take the conditional expectation at each forecast horizon when computing the forecast recursively.
$\begin{eqnarray} \mathbb{E}_t \left[ y_{t + 1} | y_{t}\right] &=&\mu +\phi_{1}y_{t}+\phi_{2}y_{t-1}+ \ldots +\phi_{p}y_{t-p+1} \nonumber \\ \mathbb{E}_t \left[ y_{t + 2} | y_{t}\right] &=&\mu +\phi_{1}\mathbb{E}_t \left[ y_{t + 1} | y_{t}\right] +\phi_{2}y_{t}+ \ldots +\phi_{p}y_{t-p+2} \nonumber \\ \mathbb{E}_t \left[ y_{t + 3} | y_{t}\right] &=&\mu +\phi_{1}\mathbb{E}_t \left[ y_{t + 2} | y_{t}\right] +\phi_{2}\mathbb{E}_t \left[ y_{t + 1} | y_{t}\right] +\phi_{3}y_{t}+ \ldots +\phi_{p}y_{t-p+3} \nonumber \\ \vdots &=& \vdots \nonumber \\ \mathbb{E}_t \left[ y_{t + h} | y_{t}\right] &=&\mu +\phi_{1}\mathbb{E}_t \left[ y_{t+h-1} | y_{t}\right] + \ldots +\phi_{h-1}\mathbb{E}_t \left[ y_{t+1} | y_{t}\right] \nonumber \\ && + \phi_{h}y_{t} + \ldots + \phi_{p}y_{t+h-p} \tag{2.8} \end{eqnarray}$
where the last line assumes that $$p > h$$, so that some realised observations still enter the recursion directly.
## 2.4 Example of autoregressive forecasts
To illustrate how one constructs a forecast from an autoregressive model, we can work with hypothetical AR(1) and AR(2) examples. In this case we assume that the coefficients are as follows:
• AR(1) with $$\mu =0.4$$ and $$\phi_{1}=0.7$$
• AR(2) with $$\mu =0.3$$, $$\phi_{1}=0.6$$ and $$\phi_{2}=0.1$$.
For both processes we assume the same initial values, where $$y_{t}=2$$ and $$y_{t-1}=1.5$$. Using the formulae from (2.5) for the AR(1) model and (2.8) for the AR(2) model, the forecasts for horizons $$h=1, 2, 3, 5$$ and $$10$$ for both processes are reported in Table 1.
These results suggest that the forecasts converge on the unconditional mean of the processes as the forecast horizon increases, where for the AR(1) process, $$0.4/(1-0.7)=1.33$$. Similarly, for the AR(2) process, $$0.3/(1-0.6-0.1)=1$$.
| Forecast horizon | 1-step | 2-step | 3-step | 5-step | 10-step |
|------------------|--------|--------|--------|--------|---------|
| AR(1)            | 1.8    | 1.66   | 1.56   | 1.45   | 1.35    |
| AR(2)            | 1.65   | 1.49   | 1.36   | 1.19   | 1.04    |

Table 1: AR forecasts
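As a rough sketch of these recursions (the helper function `ar_forecast` is hypothetical and not part of the original notes), the following Python snippet iterates the conditional expectations in (2.5) and (2.8) forward and reproduces the values in Table 1:

```python
import numpy as np

def ar_forecast(mu, phi, history, H):
    """Iterate an AR(p) forward H steps; future shocks enter with mean zero.

    mu      : intercept
    phi     : [phi_1, ..., phi_p]
    history : observed values with the most recent observation last
    """
    path = list(history)
    for _ in range(H):
        path.append(mu + sum(p * path[-i - 1] for i, p in enumerate(phi)))
    return np.round(path[len(history):], 2)

print(ar_forecast(0.4, [0.7], [2.0], 10))            # 1.8, 1.66, 1.56, ..., 1.35
print(ar_forecast(0.3, [0.6, 0.1], [1.5, 2.0], 10))  # 1.65, 1.49, 1.36, ..., 1.04
```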
# 3 Forecast errors and uncertainty
To derive a measure for the forecast errors, we are going to make use of the notation for the predictor, $$\acute{y}_t(h)\equiv \mathbb{E}_t \left[ y_{t + h} | y_{t}\right]$$. Hence, the forecast error, $$\acute{e}_t(h)$$, in period $$t+h$$ is denoted,
$\begin{eqnarray} \acute{e}_t(h) = y_{t+h}-\acute{y}_t(h) \tag{3.1} \end{eqnarray}$
where $$y_{t+h}$$ is the ex-post actual realisation of the value for the respective variable. Making use of this expression with the recursive formulas that are provided in (2.2) and (2.3), we can calculate the forecast error at different forecast horizons,
$\begin{eqnarray} \acute{e}_t(1) = y_{t+1}-\acute{y}_t(1) &=&(\phi_{1}y_{t}+\varepsilon_{t+1})-\phi_{1}y_{t}=\varepsilon_{t+1} \nonumber \\ \acute{e}_t(2) = y_{t+2}-\acute{y}_t(2) &=&(\phi_{1}^{2}y_{t}+\phi_{1}\varepsilon_{t+1}+\varepsilon_{t+2})-\phi_{1}^{2}y_{t}=\phi_{1}\varepsilon_{t+1}+\varepsilon_{t+2} \nonumber \\ \vdots &=& \vdots \nonumber \\ \tag{3.2} \end{eqnarray}$
where at horizon $$h$$,
$\begin{eqnarray} \acute{e}_t(h) &=& y_{t+h}-\acute{y}_t(h)=\left( \phi_{1}^{h}y_{t}+\overset{h -1}{\underset{i =0}{\sum }}\phi_{1}^{i}\varepsilon_{t+h-i}\right) -\phi_{1}^{h}y_{t} \nonumber \\ &=& \overset{h -1}{\underset{i =0}{\sum }}\phi_{1}^{i}\varepsilon_{t+h-i} \tag{3.3} \end{eqnarray}$
This result suggests that the forecast errors are weighted sums of future shocks, where the weights are the coefficients of the MA representation of the AR(1) process (provided the process is stationary, so that the MA representation exists). When we assume that the errors in the autoregressive model are Gaussian white noise, the expected value of all future realisations of the error, as derived in (3.3), is zero. Therefore, when we assume that $$\mathbb{E}_t \left[ \varepsilon_{t + h} | I_t\right] = 0$$, it would be the case that
$\begin{eqnarray} \mathbb{E}_t \left[\acute{e}_t(h) \right]= \mathbb{E}_t \left[ y_{t+h}-\acute{y}_t(h)\right] =\mathbb{E}_t \left[ y_{t+h}\right] -\mathbb{E}_t \left[ \acute{y}_t(h)\right] =0 \tag{3.4} \end{eqnarray}$
If this is the case then it would imply that the predictor is unbiased.
## 3.1 Mean square errors (MSE)
The MSE is a quadratic loss function that is widely used to evaluate the forecasting accuracy of a particular model. In addition, this statistic may also be used as a measure of the forecast error variance, which would usually need to be calculated when constructing forecast intervals. We may denote the MSE for the $$h$$-step ahead forecast error as $$\acute{\sigma}_{t}(h)$$, which is derived from the forecast errors, where
$\begin{eqnarray} \nonumber \mathbb{E}_t \left[ \acute{\sigma}_{t}(h) \right] &=& \mathbb{E}_t \left[ \left(y_{t+h}-\acute{y}_t(h) \right)^{2}\right] \\ &=&\mathbb{E}_t \left[ \left( \overset{h -1}{\underset{i =0}{\sum }}\phi_{1}^{i}\varepsilon_{t+ h -i}\right) \left( \overset{h -1}{\underset{i=0}{\sum }}\phi_{1}^{i}\varepsilon_{t + h -i}\right) \right] \tag{3.5} \end{eqnarray}$
Note that as the $$\phi$$ terms are known a priori, they are not governed by the expectations operator and can be moved to the front of the expression. In addition, $$\mathbb{E}_t\left[ \varepsilon_{t + h -i}\varepsilon_{t + h -i}\right] =\sigma_{\varepsilon}^{2}$$ for all $$h$$. Hence,
$\begin{eqnarray} \nonumber \acute{\sigma}_{t}(h)=\sigma_{\varepsilon }^{2}\sum_{i = 0}^{h -1}\phi_{1}^{2i}, \end{eqnarray}$
which would allow us to derive values for $$1,2,3,\ldots , h$$, with the aid of the following expressions,
$\begin{eqnarray} \acute{\sigma}_{t}(1) &=&\sigma_{\varepsilon }^{2} \nonumber \\ \acute{\sigma}_{t}(2) &=&\sigma_{\varepsilon }^{2}+\phi_{1}^{2}\sigma_{\varepsilon }^{2}=\acute{\sigma}_{t}(1)+\phi_{1}^{2}\sigma_{\varepsilon }^{2} \nonumber \\ \acute{\sigma}_{t}(3) &=&\sigma_{\varepsilon }^{2}+\phi_{1}^{2}\sigma_{\varepsilon }^{2}+\phi_{1}^{4}\sigma_{\varepsilon }^{2}=\acute{\sigma}_{t}(2)+\phi_{1}^{4}\sigma_{\varepsilon }^{2} \nonumber \\ &\vdots& \nonumber \\ \acute{\sigma}_{t}(h) &=&\sigma_{\varepsilon }^{2}(1+\phi_{1}^{2}+\phi_{1}^{4}+ \ldots +\phi_{1}^{2(h-1)})=\acute{\sigma}_{t}(h-1)+\phi_{1}^{2(h-1)}\sigma_{\varepsilon }^{2} \nonumber \\ \tag{3.6} \end{eqnarray}$
If we let $$h\rightarrow \infty$$, (3.6) implies that $$\sigma_{\varepsilon}^{2}(1+\phi_{1}^{2}+\phi_{1}^{4}+ \ldots)=\frac{\sigma_{\varepsilon }^{2}}{1 - \phi_{1}^{2}}$$. Note that this sequence converges to the unconditional variance of the process that was provided previously. Thus,
$\begin{eqnarray} \acute{\sigma}_{t}(h)\rightarrow \frac{\sigma_{\varepsilon }^{2}}{1 -\phi_{1}^{2}} \;\;\;\; \text{where } \;\; h\rightarrow \infty \tag{3.7} \end{eqnarray}$
Note that the expected variance of the forecast errors converges to the unconditional variance of the process, which should not be surprising, as the forecast errors are weighted sums of the future shocks of the process. Hence, if we assume that the errors of the process are distributed $$\varepsilon_t \sim \mathcal{N}(0,\sigma^2)$$, then the forecast errors are also normally distributed, and for the one-step ahead forecast we have $$\mathbb{E}_t \left[ \acute{\sigma}_{t}(1) \right] = \sigma^2$$.
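As a minimal sketch, assuming the AR(1) coefficient from the earlier example and a shock variance of $$\sigma_{\varepsilon}^{2}=0.1$$, the sum in (3.6) and the limit in (3.7) may be computed directly:

```python
import numpy as np

phi1, sigma2 = 0.7, 0.1                      # AR(1) coefficient and shock variance

def mse(h):
    # equation (3.6): sigma2 * (1 + phi^2 + ... + phi^(2(h-1)))
    return sigma2 * np.sum(phi1 ** (2 * np.arange(h)))

print([round(mse(h), 3) for h in (1, 2, 3, 10)])  # [0.1, 0.149, 0.173, 0.196]
print(round(sigma2 / (1 - phi1 ** 2), 3))         # limit in (3.7): 0.196
```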
## 3.2 Uncertainty
The previous section suggests that we can use the conditional expectation to derive predicted values of a variable that may be described by a stable autoregressive model that has Gaussian white noise errors. In this case, the forecast will on average be equal to the true value of the variable that we want to forecast, which would imply that the forecasts are unbiased. However, the forecasts will not necessarily be equal to the true value of the process at all periods of time. Therefore, the forecast errors will have a positive variance, which may be measured by the mean square error (MSE).
In many instances it is desirable to report on both the point forecast and some measure of uncertainty that relates to the forecast. For example, most central banks publish fan charts together with their inflation forecasts. These fan charts communicate the central banks view on possible paths for future inflation. In addition, a number of central banks also publish fan charts when referring to the relative success of their past forecasts.
An example of this type of communication is presented in Figure 2, which was included in the South African Reserve Bank Monetary Policy Review (June, 2014). Note that towards the end of the forecast horizon, uncertainty increases as the bands become wider over time. This graph allows the central bank to suggest that although they largely expect inflation to initially increase to a point that is beyond the target range, it is expected to decline after a year whereupon it will return to a point that is within the range. It also suggests that there is quite a large degree of uncertainty surrounding the expected future values of inflation and as such we could realise a relatively wide array of values for the rate of inflation in the future.
Figure 2: South African inflation fan chart (SARB June 2014)
There are a number of different methods that may be used to construct these fan charts. Where all the coefficients in the model are point estimates, we could use the MSE to characterise the distribution of the future forecast values.1 Note that if we assume that the residuals in the model are $$\varepsilon_{t}\sim \mathsf{i.i.d.} \;\; \mathcal{N}(0,\sigma_{\varepsilon }^{2})$$, it implies that the forecast errors should also be normally distributed,
$\begin{eqnarray} y_{t+h}-\acute{y}_t(h)\sim \;\; \mathcal{N}(0,\acute{\sigma}_{t}(h)) \tag{3.8} \end{eqnarray}$
Such that,
$\begin{eqnarray} \frac{y_{t+h}-\acute{y}_t(h)}{\sqrt{\acute{\sigma}_{t}(h)}}\sim \;\; \mathcal{N}(0,1) \tag{3.9} \end{eqnarray}$
In this case we could make use of a normal distribution, with $$z_{\alpha}$$ defining the upper and lower bounds that may be used to derive the forecast interval for the $$h$$-period ahead forecast,
$\begin{eqnarray} \left[\acute{y}_t(h)-z_{\alpha /2}\sqrt{\acute{\sigma}_{t}(h)}\;\;\; , \;\;\; \acute{y}_t(h)+z_{\alpha /2}\sqrt{\acute{\sigma}_{t}(h)}\right] \tag{3.10} \end{eqnarray}$
This allows for the construction of standard confidence intervals for the parameter estimates, where the predictor, $$\acute{y}_t(h)$$, and the MSE, $$\acute{\sigma}_{t}(h)$$, are used to derive the appropriate interval.
## 3.3 Example: Forecasts with confidence intervals
Let us show how this works with a numerical example for the AR(1) model used in the previous section. That is, we assume that we have the model,
• AR(1) with $$\mu =0.4$$, $$\phi_{1}=0.7$$, and $$y_{t}=2$$
Let $$\varepsilon_{t}\sim \mathcal{N}(0,0.1)$$, and suppose that we elect to work with a 95% confidence interval, so that $$\alpha =0.05$$. This implies that $$z_{\alpha /2}=1.96$$ in large samples. For forecast horizons $$h=1,5,10$$ we can then use the point forecasts in Table 1 and (3.6) to derive the forecast error variance. Lastly, with the use of (3.10), we can construct the forecast intervals that are provided in Table 2.
| Forecast | Point Estimate | MSE | Lower Bound | Upper Bound |
|----------|----------------|-----|-------------|-------------|
| $$\acute{y}_t(1)$$  | 1.80 | 0.10 | 1.18 | 2.42 |
| $$\acute{y}_t(5)$$  | 1.45 | 0.19 | 0.59 | 2.30 |
| $$\acute{y}_t(10)$$ | 1.35 | 0.20 | 0.48 | 2.22 |

Table 2: Forecast intervals for AR(1) model
Note that the point forecasts are the same as those that are provided in Table 1. The second column shows the MSE, which converges to the unconditional variance of the AR process as the forecast horizon increases. The confidence intervals suggest that if the errors continue to be distributed $$\varepsilon_{t}\sim \mathcal{N}(0,0.1)$$, then there is a 95% probability that the intervals will contain the future value of the random variable, $$y_{t + h}$$.
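A minimal Python sketch that reproduces Table 2, combining the point forecast in (2.5), the forecast error variance in (3.6) and the interval in (3.10):

```python
import numpy as np
from scipy.stats import norm

mu, phi1, sigma2, y_t = 0.4, 0.7, 0.1, 2.0
z = norm.ppf(0.975)                                    # ~1.96 for alpha = 0.05

for h in (1, 5, 10):
    point = mu * (1 - phi1 ** h) / (1 - phi1) + phi1 ** h * y_t   # eq (2.5)
    var_h = sigma2 * np.sum(phi1 ** (2 * np.arange(h)))           # eq (3.6)
    band = z * np.sqrt(var_h)                                     # eq (3.10)
    print(f"h={h:2d}  point={point:.2f}  MSE={var_h:.2f}  "
          f"[{point - band:.2f}, {point + band:.2f}]")
```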
Figure 3: AR(1) fan chart from simulation
After putting together many forecast intervals for different significance levels (i.e. for different values of $$\alpha$$ in (3.10)), a fan chart can be constructed for the 30th, 50th, 70th and 90th percentiles of the forecast distribution, as in Figure 3. This provides a visual display of the results, as per those of the central bank in Figure 2.
An alternative method of constructing a density forecast would be to simulate $$n$$ forecasts from the normal distribution with mean $$\acute{y}_t(h)$$ and variance $$\acute{\sigma}_{t}(h)$$ for each horizon, $$h$$. From the $$(n \times 1)$$ vector of forecasts for each $$h$$, a forecast interval can be derived by sorting the numbers (e.g. from lowest to highest) and then choosing the percentile of interest from this sorted vector. Of course, if we are primarily interested in a given forecast interval, then the method that uses (3.10) would be more efficient. Moreover, when approximating the forecast distribution with $$n$$ draws, the results depend on the size of $$n$$. If $$n$$ is not big enough, the simulated forecast distribution will not match the one obtained from the direct approach, which made use of (3.10).
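A sketch of this simulation approach (using the one-step ahead values from Table 2) illustrates the dependence on $$n$$: with a small number of draws the empirical percentiles are noisy, while a large number of draws recovers the direct interval.

```python
import numpy as np

rng = np.random.default_rng(1)
point, var_1 = 1.8, 0.1                      # one-step ahead values from Table 2

for n in (50, 100_000):
    draws = rng.normal(point, np.sqrt(var_1), size=n)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"n={n:>6}: [{lo:.2f}, {hi:.2f}]")   # direct interval: [1.18, 2.42]
```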
Figure 4: AR(1) histograms from simulation
Figure 4 shows two histograms of the forecast distribution for $$h=1$$, given the values in Table 2, which are reported above. As we see, when $$n$$ is small, simulating the forecast distribution provides a poor approximation to the normal distribution; however, when $$n$$ is large, the simulation works well. If we were to construct a fan chart with the aid of this method we would need to repeat this simulation for each $$h$$-step.
# 4 Forecast evaluation
To evaluate the performance of a forecast we may consider the use of various different statistics. For example, we may wish to derive an estimate of the bias associated with the forecasts, to determine whether we are consistently underestimating or overestimating the expected future mean value. In addition, when comparing the forecasting performance of a number of different models, we would usually want to evaluate a number of successive $$h$$-step ahead forecasts that have been generated over time. After generating the forecasts we could then sum up the forecasting errors to determine which model provides the smallest total forecasting error. However, as we would not wish for the positive and negative values of the forecast errors to cancel each other out, we may wish to take the square or absolute value of these errors. This has given rise to a number of different loss functions, which are considered below. It is important to note that different loss functions emphasize different characteristics of the forecast, and as such they may provide conflicting results. When evaluating the uncertainty associated with a forecast, such as a density forecast, other loss functions are considered, and we will briefly examine these methods as well.
## 4.1 Recursive and rolling window forecasts and errors
Suppose the historical dataset for a particular variable extends over the period $$t=\{1,\ldots , R, \ldots , T+H\}$$, where $$R$$ represents the end of the initial in-sample period and $$\{ R+1, \ldots , T+H\}$$ represents the observations that are used for the out-of-sample evaluation. Following the previous discussion, we can estimate the AR(1) model and construct the initial forecast for $$\mathbb{E}[y_{R+h}| y_R]$$, which may be represented by the predictor, $$\acute{y}_R(h)$$, given the information set $$I_t$$.
The out-of-sample forecast error is simply the difference between the forecast and the realisation of $$y_{R+h}$$,
$\begin{eqnarray} \nonumber \acute{e}_R(h)= y_{R+h}-\acute{y}_R(h) \end{eqnarray}$
Since we do not normally observe this value when we forecast, the forecast error can only be evaluated ex-post. After we have obtained the first forecast error we can then extend the dataset by one observation, so that the forecast origin becomes $$R+1$$, before generating a second $$h$$-step ahead forecast. This would allow for the generation of a vector of forecast errors,
$\begin{eqnarray} \acute{\mathbf{e}}_{h}= \left[ \acute{e}_R(h), \acute{e}_{R+1}(h), \acute{e}_{R+2}(h), \ldots , \acute{e}_{T}(h)\right]^{\prime } \tag{4.1} \end{eqnarray}$
For example, assume that our dataset extends from 1995q1 to 2014q4, and we want to generate a number of one-step ahead forecasts over the out-of-sample period 2010q1 to 2014q4. This would imply that $$R$$ would be 2009q4, and the initial in-sample period would be 1995q1 to 2009q4, which would be used to generate a forecast for 2010q1. After calculating the forecast error (and storing it in the vector $$\acute{\mathbf{e}}_{h}$$), the revised in-sample period for a recursive scheme would span 1995q1 to 2010q1, while for a rolling window scheme it would be 1995q2 to 2010q1. Either of these methods could be used to generate a forecast for 2010q2, which would be used to calculate a second forecast error. This procedure continues until we have a full set of forecast errors in the $$\acute{\mathbf{e}}_{h}$$ vector.
In most cases we would want to evaluate both the short-term and long-term forecasting performance of a model. Hence, if we want to consider the forecasting performance of a model that uses quarterly data over a two year period, we would want to evaluate the one- to eight-step ahead forecasts. In these cases we would usually make use of a matrix for capturing the forecasts and the forecasting errors, where each of the vectors for the $$h$$-step ahead forecasting errors $$\acute{\mathbf{e}}_{h}$$ are placed in a separate column.
$$$\acute{\mathbf{e}}_{H} = \left\{ \begin{array}{cccc} \acute{e}_R(1) & \acute{e}_R(2) & \ldots & \acute{e}_{R}(H)\\ \acute{e}_{R+1}(1) & \acute{e}_{R+1}(2) & \ldots & \acute{e}_{R+1}(H)\\ \vdots & \vdots & \ddots & \vdots \\ \acute{e}_{T}(1) & \acute{e}_{T}(2) & \ldots & \acute{e}_{T}(H) \\ \end{array} \right\} \tag{4.2}$$$
Note that the respective columns or rows in the matrix would represent a time series variable. For example, the first column would represent the one-step ahead forecasts errors over the out-of-sample period. We will see that this will be particularly useful when calculating the respective statistics that are used to evaluate the performance of different forecasting models.
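The following Python sketch illustrates how such an error matrix could be populated for an AR(1) model estimated by OLS; the function name `oos_errors`, the use of statsmodels, and the iterated-forecast details are illustrative assumptions rather than part of the original notes.

```python
import numpy as np
import statsmodels.api as sm

def oos_errors(y, R, H, rolling=False):
    """Populate the (T - R + 1) x H error matrix in (4.2) for an AR(1)."""
    T = len(y) - H                              # last forecast origin
    errors = np.full((T - R + 1, H), np.nan)
    for origin in range(R, T + 1):
        start = origin - R if rolling else 0    # rolling vs recursive scheme
        ys = y[start:origin]
        fit = sm.OLS(ys[1:], sm.add_constant(ys[:-1])).fit()
        mu_hat, phi_hat = fit.params
        yhat = ys[-1]
        for h in range(1, H + 1):               # iterate the forecast forward
            yhat = mu_hat + phi_hat * yhat
            errors[origin - R, h - 1] = y[origin + h - 1] - yhat
    return errors
```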
Before moving on it is perhaps worth noting that these forecasting experiments are usually termed pseudo (or quasi) out-of-sample forecasting experiments, where the use of the term pseudo implies the following:
• When conducting the evaluation we have at our disposal all the out-of-sample information, which are the realisations of the observed variable. This information could have been used in the formulation and specification of the model (i.e. we see that the out-of-sample portion of the dataset exhibits regime-switching behaviour and as a result we decide to make use of a regime-switching model).
It is unlikely that the data that we use would have been captured in real-time. Real-time data refers to the data that would have been available at the time of its initial release. In economics, and specifically in macroeconomics, most data series are heavily revised over time. For example, when the statistical agency of a given country releases a value for gross domestic product for the first time, this number will typically only be an estimate of the true value of economic output. During subsequent periods, this estimate will be subject to a number of revisions as the agency obtains more information about what actually occurred over that period. Databases with true real-time data exist for some countries, and where such data is utilised in a forecasting experiment, we refer to it as a real-time out-of-sample forecasting experiment.
Of course, doing a real-time out-of-sample forecasting experiment is subject to the same flaw as described in the first bullet, which is of particular importance when the model has undergone significant revision over time, using information that may not have been available at the time that pertained to the initial in-sample period.
## 4.2 Bias
Where we continue to use $$R$$ as the last observation in the in-sample period, the expected value of the forecast error is a measure of the bias,
$$$\nonumber \mathbb{E}_t \left[\acute{e}_R(h)\right]= \mathbb{E}_t \left[ y_{R+h}-\acute{y}_R(h)\right]$$$
The column vectors in the $$\acute{\mathbf{e}}_{H}$$ matrix may then be used to derive an estimate of the mean bias, which could be represented as
$\begin{eqnarray} \overline{\acute{\mathbf{e}}}_{h}=\frac{1}{\left(T -R\right)} \; \overset{T}{\underset{i = R}{\sum }} \acute{e}_{R+i}(h) \tag{4.3} \end{eqnarray}$
In most cases, the model whose estimated bias is closest to zero is preferred. If our forecast is biased, we would either have made the same mistake of predicting too high or too low a value of the variable we are forecasting, or alternatively, we may have made a few particularly large forecasting errors that skew the result.2
## 4.3 Root mean squared error (RMSE)
The most widely used method for evaluating forecasts is the root mean squared error (RMSE). It is a measure of the size of the forecast errors, and is simply the square-root of the mean squared forecasting error. Hence, when calculating the RMSE for a single forecast error we would derive,
$$$\nonumber \sqrt{\mathbb{E}_t \Big[ \big\{ \acute{e}_R(h) \big\}^{2}\Big] }=\sqrt{\mathbb{E}_t \Big[ \big\{y_{R+h}-\acute{y}_R(h)\big\}^{2}\Big] }$$$
The difference between this measure and what was calculated for the bias is that, after squaring all the errors, the positive and negative errors would no longer cancel each other out when calculating the average over successive forecasts. Therefore, with the use of the vector of out-of-sample forecast errors, we could calculate the RMSE as,
$\begin{eqnarray} \text{RMSE}_{h}=\sqrt{\frac{1}{\left(T -R \right)} \; \overset{T}{\underset{i = R}{\sum }} \acute{e}^2_{R+i}(h) } \tag{4.4} \end{eqnarray}$
where $$\acute{e}^2_{R+i} (h)$$ is the squared forecast error as defined above. As such, the RMSE is a symmetric loss function, where forecasts that are either too high or too low are weighted equally. Naturally, smaller forecast errors are considered to be better than larger ones, and as such a low RMSE value indicates better forecasting performance.
| Time | Outcome | AR(1) $$\acute{y}(1)$$ | AR(1) $$\acute{e}(1)$$ | AR(1) $$\acute{e}^{2}(1)$$ | AR(2) $$\acute{y}(1)$$ | AR(2) $$\acute{e}(1)$$ | AR(2) $$\acute{e}^{2}(1)$$ |
|------|---------|------|-------|------|------|-------|------|
| t-1  | 1.5     |      |       |      |      |       |      |
| t    | 2.0     |      |       |      |      |       |      |
| t+1  | 1.8     | 1.80 | 0.00  | 0.00 | 1.65 | 0.15  | 0.02 |
| t+2  | 1.5     | 1.66 | -0.16 | 0.03 | 1.58 | -0.08 | 0.01 |
| t+3  | 1.2     | 1.45 | -0.25 | 0.06 | 1.38 | -0.18 | 0.03 |
| t+4  | 1.4     | 1.24 | 0.16  | 0.03 | 1.17 | 0.23  | 0.05 |
| t+5  | 1.6     | 1.38 | 0.22  | 0.05 | 1.26 | 0.34  | 0.12 |
| $$\overline{\acute{\mathbf{e}}}_{h}$$ | | | -0.01 | | | 0.09 | |
| $$RMSE_h$$ | | | | 0.18 | | | 0.21 |

Table 3: AR forecasts
Continuing with the two autoregressive examples introduced in Table 1, where the AR(1) model is given as $$y_{t}=0.4+0.7y_{t-1}+\varepsilon_{t}$$ and the AR(2) is given as $$y_{t}=0.3+0.6y_{t-1}+0.1y_{t-2}+\varepsilon_{t}$$, we have calculated the bias and the RMSE in Table 3. In this example, the forecasting horizon is set to $$h=1$$, and we have assumed the outcomes that are provided in the second column of the table. Note that for the initial forecast, the values are equivalent to those in Table 1. However, since we now have realised outcomes for subsequent periods, we use these to generate the subsequent forecasts.
With only five forecast errors to evaluate, the estimates of both the bias and the RMSE are of little use and are merely included to show how easy it is to compute these results. In an applied evaluation one would usually evaluate a larger set of out-of-sample forecasts.
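Given an error matrix of the form in (4.2), the bias in (4.3) and the RMSE in (4.4) are simple column-wise averages. The sketch below simulates data from the AR(1) example and reuses the hypothetical `oos_errors` function from the earlier code block:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.empty(120)
y[0] = 0.4 / (1 - 0.7)                        # start at the unconditional mean
for t in range(1, 120):                       # simulate a stable AR(1) process
    y[t] = 0.4 + 0.7 * y[t - 1] + rng.normal(0, np.sqrt(0.1))

errors = oos_errors(y, R=60, H=8)             # matrix (4.2) from the sketch above

bias = errors.mean(axis=0)                    # eq (4.3): one value per horizon h
rmse = np.sqrt((errors ** 2).mean(axis=0))    # eq (4.4)
print(np.round(rmse, 3))                      # grows with h towards the
                                              # unconditional standard deviation
```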
### 4.3.1 Decomposing the RMSE
It is possible to show that the RMSE has two sources of potential errors, which may be illustrated in the following example. Where a one-step ahead forecast is generated by an AR(1) model, $$y_{t}=\mu +\phi_{1}y_{t-1}+\varepsilon_{t}$$, the predictor $$\acute{y}_t (1)=\hat{\mu}+\hat{\phi}_{1}y_{t}$$ is constructed from the parameter estimates, while the realised value is generated by $$y_{t +1}=\mu+\phi_{1}y_{t}+ \varepsilon_{t+1}$$. At period $$t$$, the values of $$\mu$$, $$\phi_{1}$$ and $$\varepsilon_{t+1}$$ are unknown. Therefore, we could express the forecast error as,
$\begin{eqnarray} \acute{e}_t (1) = y_{t+1} - \acute{y}_t (1)=\varepsilon_{t+1}+\left[ (\mu -\hat{\mu})+\left( \phi_{1}-\hat{\phi}_{1}\right) y_{t}\right] \tag{4.5} \end{eqnarray}$
While we would expect $$\varepsilon_{t+1}$$ to take on a value that is close to zero on average, this would not be the case on each occasion. Using (4.5) to compute the mean squared error we get,
$\begin{eqnarray} \mathbb{E}_t \left[ \left(y_{t+1}-\acute{y}_t \left(1\right)\right)^{2}\right] =\sigma_{\varepsilon,t+1}^{2}+\mathsf{var}\left[ (\mu -\hat{\mu})+\left( \phi_{1}-\hat{\phi}_{1}\right) y_{t}\right] \tag{4.6} \end{eqnarray}$
This expression suggests that the MSE is composed of two parts. The first pertains to the part that is a function of the error term and the other is a function of possible changes in the coefficient values. These two parts give rise to uncertainty relating to the shock or error and the parameter uncertainty (which is given by the variance of the parameter estimation discrepancies in (4.6)). If we assume that the parameters in the estimated autoregressive model are identical to their population counterparts, the last term in the square brackets of (4.6) would equal zero. In this case the mean squared error is then equal to $$\sigma_{\varepsilon }^{2}$$, which is identical to the MSE we derived in the previous section.
Alternatively, if the parameters of the autoregressive models for $$y_t$$ and $$y_{t+1}$$ differ, then the variance that is attributed to estimation error in (4.6) will not equal zero. It is worth noting that if the parameter values do not change, which in macroeconomic terms would arise when we have deep or structural parameters, then the RMSE is equal to the square root of MSE. However, when the parameter estimates change over time, as would often occur when using reduced-form models, then the RMSE would be different to the square root of the MSE. When this is the case, we may infer that the model is relatively poor.
## 4.4 Mean absolute errors (MAE)
As an alternative to the quadratic loss function, which is used in the RMSE, one could make use of linear loss function, such as the MAE. This statistic is calculated as
$\begin{eqnarray} \nonumber \text{MAE} &=& \mathbb{E}_t \Big[ \left| y_{t+h}-\acute{y}_t(h) \right|\Big] \end{eqnarray}$
Note that, relative to the MAE, the RMSE imposes larger penalties on extreme outliers (or very large forecast errors) as a result of its quadratic loss function.
## 4.5 Comparing out-of-sample and in-sample fit
When performing a forecasting evaluation we evaluate the out-of-sample fit of the model. In contrast we could also consider the in-sample fit of the model, which may be summarised by the coefficient of determination, information criteria, and various other measures.
It is not always the case that a model with a good in-sample fit will deliver a suitable out-of-sample performance. The intuition for this is provided by (4.6), where a good in-sample fit (as suggested by a high $$R^{2}$$) may be obtained by including a large number of regressors. In an out-of-sample evaluation, this would typically lower the estimate of $$\sigma_{\varepsilon}^{2}$$, but it would also usually increase the estimation uncertainty, which would increase the last part of (4.6).
Thus, when trying to evaluate the fit of an econometric model, it is usually a good idea to assess both the in-sample and out-of-sample performance of the model.
## 4.6 Comparing different forecasts
Most institutions make use of a suite of forecasting models that may be used to generate respective forecasts for a specific variable. To determine which of these is responsible for the most accurate forecast, we can compare the value of our preferred loss function for each of the models, and simply choose the model that is associated with the best score (i.e. choose the model with the lowest RMSE or bias).
However, the difference in the respective results can be very small, and we may want to consider whether or not the respective forecasts are significantly different from one another. This can be tested with a number of procedures, the most widely used being the Diebold and Mariano (1995) test.
To explain this procedure, assume that we have two models with the associated squared forecasting errors,
$\begin{eqnarray} \acute{e}^{2}_{t,1} (h) &=&\Big[y_{t+h}-\acute{y}_{t,1} \left(h \right)\Big]^{2} \nonumber \\ \acute{e}^{2}_{t,2} (h) &=&\Big[y_{t+h}-\acute{y}_{t,2} \left(h \right)\Big]^{2} \tag{4.7} \end{eqnarray}$
These statistics may be arranged as in (4.2) to populate two matrices for the forecast errors, $$\acute{\mathbf{e}}_{H,j}$$ where $$j=\{1,2\}$$. Now, where $$d_{t, h}$$ is the difference between the forecasting errors from model one, $$\acute{e}^{2}_{t,1} (h)$$, and the forecasting errors from model two, $$\acute{e}^{2}_{t,2} (h)$$, we may write
$$$\nonumber d_{t, h}=\acute{e}^{2}_{t,1} (h) - \acute{e}^{2}_{t,2} (h)$$$
This statistic could be applied along the rows or columns of the $$\acute{\mathbf{e}}_{H,j}$$ matrices to see whether the forecast errors are significantly different at a particular time or over a particular forecasting horizon.
To determine whether this difference is significantly different from zero, we regress a constant on $$d_{t, h}$$ in a model that may be specified as,
$\begin{eqnarray} d_{t, h}=\beta_{0}+u_{t} \tag{4.8} \end{eqnarray}$
This rather intuitive test was formulated in Diebold and Mariano (1995), and it employs the hypothesis,3
$\begin{eqnarray} H_{0}:\beta_{0}=0 \;\; \text{ vs } \;\; H_{1}:\beta_{0}\neq 0 \tag{4.9} \end{eqnarray}$
In this case the null hypothesis implies no significant difference in forecasting performance. Therefore, if $$\beta_{0}=0$$, it will be the case that $$\mathbb{E}_t \left[ d_{t, h}\right] =\mathbb{E}_t \left[ u_{t}\right] =0$$ under the standard OLS assumptions. As such, a rejection of the null would imply that the forecasting performance of the two models is significantly different (at some given significance level). The appropriate test statistic to use in this case is the $$t$$-value that relates to the $$\beta_{0}$$ coefficient in the OLS regression. In the usual manner, when this statistic is higher in absolute value than the critical value, we can reject the null hypothesis.
When performing this test it is worth noting that when $$h>1$$, the values for $$d_{t, h}$$ and the residuals $$u_{t}$$ would usually be serially correlated. Therefore, one would usually employ methods for HAC standard errors, which are discussed in Newey and West (1987). For a more detailed treatment of how to compare different forecasts, the interested reader should consult West (2006), which provides an informative summary.
As a final note, we would also need to correct the test statistics in the Diebold and Mariano (1995) test when dealing with nested models. This procedure is discussed in Clark and West (2007).
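A sketch of the regression-based form of the test in (4.8)–(4.9), using HAC standard errors for $$h>1$$; note that the Clark and West (2007) adjustment for nested models is not included here:

```python
import numpy as np
import statsmodels.api as sm

def diebold_mariano(e1, e2, h):
    """Regress d_t = e1_t^2 - e2_t^2 on a constant with HAC standard errors."""
    d = e1 ** 2 - e2 ** 2                    # squared-error loss differential
    fit = sm.OLS(d, np.ones(len(d))).fit(
        cov_type="HAC", cov_kwds={"maxlags": max(h - 1, 0)}
    )
    return fit.params[0], fit.tvalues[0]     # beta_0 and its t-statistic
```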
## 4.7 Displaying the forecasting results
It is usually a good idea to plot each of the forecasts against the observed data to visualise the results of this forecasting experiment. Such an example is provided in Figure 5, where we have made use of an autoregressive model to generate the forecasts. Note that the forecasts converge on the mean, and the time taken to converge on this mean value depends on the values of the coefficients.4
Figure 5: Hairy plot of realised and forecasted values
It is also usually a good idea to take a look at the RMSE statistics both over time and for the different $$h$$-step ahead forecasts. Examples of these diagrams are displayed in Figure 6 and Figure 7, which are taken from Balcilar, Gupta, and Kotzé (2015).
Figure 6: Average RMSE over time for different models
Figure 7: Comparing RMSE for one- to eight-step ahead forecasts
# 5 Model combination
In the previous section we described how forecasts for the same variable could be compared, when these forecasts are generated by different models. Implicitly we wanted to find the model that would provide the superior forecast, when comparing two alternative model specifications. An alternative approach would be to combine these forecasts from the set of all of the models under consideration. Model combination has a long history in economics, dating back to Bates and Granger (1969), and possibly even further.
The rationale for combining models as opposed to choosing one model among many candidate models is easily motivated from a decision-making perspective. As noted in Timmermann (2006), unless one can identify a particular forecasting model that generates smaller forecasting errors prior to observing any future data, forecast combinations offer potential diversification gains that may reduce the possibility of making large forecasting errors at particular points in time.5 This would certainly be the case where there are large outliers in the mis-specification errors of individual models, which may be reduced with the aid of a combined forecasting strategy.
Many different strategies to combine model forecasts exist, both for point and density forecasts. Covering all of these is outside the scope of this course, so we will only focus on two simple strategies for combining point forecasts using a linear aggregation function for the application of equal weighting and MSE weighting. To ensure that this discussion is relatively intuitive, and without loss of generality, we will assume that we only combine the forecasts of two models. In a more realistic setting, the number of models will typically be much larger.
## 5.1 Linear opinion pool and weights
In terms of notation, let $$\acute{y}^{c}(h)$$ denote the combined forecast for $$h$$-steps ahead. The respective forecasts for each of the two models are denoted $$\acute{y}_{j}(h)$$, for $$j=\{1,2\}$$. The most simple way of aggregating $$\acute{y}_{j}(h)$$ for $$j=1,2$$ into one combined forecast $$\acute{y}^{c}(h)$$ is to use a linear function,
$\begin{eqnarray} \acute{y}^{c}(h) &=& w_{h, 1}\acute{y}_{1}(h)+w_{h, 2}\acute{y}_{2}(h) \nonumber \\ &=& w_{h}\acute{y}_{1}(h)+\left( 1-w_{h}\right) \acute{y}_{2}(h) \tag{5.1} \end{eqnarray}$
where $$w_{h, j}$$, for $$j=1,2$$ is the weight attached to the model $$j$$. Combining individual forecasts into one combined forecast in this manner is often called a linear opinion pool. Typically we normalize the weights so that they sum to unity, as reflected in the second line of the expression in (5.1).
Of course, it would be simple to compute the result in (5.1), if we knew the weight to attach to each model. Two of the simplest weighting schemes include equal weighting and weighting based on the MSE of the forecasts. Equal weights are particularly simple to compute, where
$\begin{eqnarray} \text{Equal weights: } & \;\;\;\; & w_{h, j}=\frac{1}{2} \;\; \text{ for } \; j=1,2 \tag{5.2} \end{eqnarray}$
Similarly, one could apply the inverse of the estimates from the MSE of the forecasts to penalise those models that are associated with greater uncertainty. Thus:
$\begin{eqnarray} \text{MSE weights: } & \;\;\;\; & \left(1-w_{h, j}\right)=\frac{\text{MSE}_{h, j}}{\sum_{j=1}^{2}\text{MSE}_{h, j}} \;\; \text{for } \;\; j=1,2 \tag{5.3} \end{eqnarray}$
where the terms for MSE$$_{h, j}$$ can be derived according to the specification that was provided earlier. Both of these weighting schemes are frequently employed in the forecasting literature, as they can be computed without any difficulty. In addition, they also have some desirable theoretical properties that are similar to those that may be derived from diversification. For a further discussion of this topic see Timmermann (2006), Clemen (1989), and Stock and Watson (2004).
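A small sketch of the two weighting schemes for the two-model case in (5.1)–(5.3); the helper name `combine` is an illustrative assumption:

```python
import numpy as np

def combine(f1, f2, mse1=None, mse2=None, scheme="equal"):
    """Linear opinion pool of two forecasts, eq (5.1)."""
    if scheme == "equal":
        w = 0.5                              # eq (5.2): equal weights
    else:
        w = mse2 / (mse1 + mse2)             # eq (5.3): inverse-MSE weighting
    return w * np.asarray(f1) + (1 - w) * np.asarray(f2)

print(combine([1.8, 1.66], [1.65, 1.49]))                     # equal weights
print(combine([1.8, 1.66], [1.65, 1.49], 0.10, 0.21, "mse"))  # favours model 1
```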
# 6 Alternative forecasting strategies
Throughout our discussion on forecasting we have focused on the use of autoregressive models as they provide an intuitive appeal when explaining the procedures that are involved in the generation and evaluation of various forecasts. In most cases, the discussion would also apply to other model specifications. However, in certain instances (and for some data) a slightly different approach may be more suitable.
## 6.1 Direct forecasting
Direct forecasting models are different from iterative forecasting models in that they are specified to forecast at a particular horizon. Thus a simple example of a direct forecasting model would be,
$\begin{eqnarray} y_{t}=\mu +\phi_{1}x_{t-h}+\varepsilon_{t} \tag{6.1} \end{eqnarray}$
If $$h=1$$ and $$x=y$$, this is just an AR(1) model as described earlier. However, if $$h>1$$ or $$x\neq y$$, the equation describes a direct forecasting model. For example, if we assume that $$h=4$$, then the model that is estimated would be $$y_{t}=\mu +\phi_{1}x_{t - 4}+\varepsilon_{t}$$, and the direct $$4$$-period ahead forecast would be
$\begin{eqnarray} y_{t + 4}=\mu +\phi_{1}x_{t}+\varepsilon_{t+4} \tag{6.2} \end{eqnarray}$
An example of this forecasting strategy may be found in Stock and Watson (2004). In addition, Marcellino (2006) has noted that, in theory, the use of iterated forecasts is more efficient when the model is correctly specified; however, in the presence of model mis-specification, direct forecasts may be more robust.
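A minimal sketch of direct $$h$$-step estimation and forecasting, following (6.1) and (6.2); the helper name `direct_forecast` is an illustrative assumption:

```python
import numpy as np
import statsmodels.api as sm

def direct_forecast(y, x, h):
    """Estimate y_t = mu + phi_1 * x_{t-h} + e_t, then forecast y_{T+h} from x_T."""
    fit = sm.OLS(y[h:], sm.add_constant(x[:-h])).fit()   # align y_t with x_{t-h}
    mu_hat, phi_hat = fit.params
    return mu_hat + phi_hat * x[-1]                      # eq (6.2)
```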
## 6.2 Autoregressive distributed lag model
Yet another time series model that is often applied in empirical work is the autoregressive distributed lag model (ADL). The ADL model differs from the simple autoregressive models in that it includes other variables (i.e. other than the lags of the dependent variable). In a general form, the ADL model can be written as,
$\begin{eqnarray} y_{t}=\mu + \sum^p_{i=1} \phi_{i}y_{t-i}+ \sum_{k=1}^K \sum^{J_{k}}_{j=1} \beta_{j, k }x_{t-j, k}+ \varepsilon_{t} \tag{6.3} \end{eqnarray}$
where the second term on the right-hand side is the usual autoregressive term, while the third term summarizes the distributed lag component of the ADL model with respect to the additional regressors. Therefore, in the ADL model we allow for $$k=\{1, \ldots , K\}$$ additional regressors, where regressor $$k$$ has $$J_{k}$$ lags, such that the terms $$x_{t-j, k}$$ are included in the model. Each of the $$x_k$$ regressors is then allowed to take on a different number of lags. For example, if $$K=2$$, where $$J_{k=1}=2$$ and $$J_{k=2}=1$$, we have
$\begin{eqnarray} \nonumber y_{t}= \mu +\overset{p}{\underset{i=1}{\sum }}\phi_{i}y_{t-i}+ \beta_{1,1}x_{t-1, 1} + \beta_{2,1}x_{t-2, 1} + \beta_{1,2}x_{t-1, 2} + \varepsilon_{t} \end{eqnarray}$
The rationale for including additional regressors is that there might be additional information in these variables that is not captured by lags of the dependent variable. However, the inclusion of additional regressors may result in a number of complications when employing an ADL model for forecasting more than one period ahead. For example, when $$h=3$$ and $$p=1$$, we have the model
$\begin{eqnarray} y_{t + 3}=\mu +\phi_{1}y_{t + 2}+\beta_{1,1}x_{t + 2,1}+\beta_{2,1}x_{t + 1, 1}+\beta_{1,2}x_{t +2,2}+\varepsilon_{t + 3} \nonumber \end{eqnarray}$
In this case, the $$y_{t + 2}$$ term can be derived in the usual manner, by plugging in the value for $$y$$ in the previous period. However, the $$x$$ terms are not known, as at time $$t$$ we do not necessarily know what the values of $$x_{t + 2,1}, x_{t +1,1}$$ and $$x_{t + 2, 2}$$ will be. As such, these values would usually need to be predicted outside of the model. To circumvent this problem, multivariate models are usually employed in such forecasting exercises.
# 7 Conclusion
To derive a forecast for an autoregressive process we can iterate the process forward, after employing the conditional expectation operator. Under the assumption that the process is stable (i.e. stationary), and that the errors are Gaussian white noise, the conditional expectation operator ensures that the variance of the forecast error is minimised. When the forecast horizon becomes very large, the mean of the autoregressive forecast converges towards the unconditional mean of the process, and the variance of the forecast error converges towards the unconditional variance of the process. Density and interval forecasts can be constructed by making use of the normality assumption on the errors. To evaluate the empirical performance of different forecasting models, an out-of-sample forecasting experiment should be conducted. The bias and RMSE statistics are commonly used as evaluation criteria for forecasts. To determine whether the forecasts from different models are significantly different from one another, we could employ the Diebold-Mariano test. Combining many individual forecasts into one combined forecast may produce forecast errors with fewer outliers.
# 8 Appendix: Density forecasting and evaluation
In the most recent forecasting literature, there has been increased attention towards forecasting the whole distribution of future outcomes.6 If we assume that we know the distribution of the forecast errors, as we did in the previous section, density forecasts can easily be generated as described in that section. If we do not want to assume that we know the forecast distribution, other simulation-based methods have to be employed to construct the density forecasts. We will not cover these methods in this course; however, under the maintained normality assumptions, the evaluation of density forecasts is not overly complicated.
## 8.1 Evaluating density forecast
Whilst it is relatively simple to generate density forecasts, the evaluation of these densities is more involved. This is partly due to the fact that the true density is not observed, not even ex-post. In what follows we refer to two of the most widely used evaluation methods that are applied to this problem, which include the probability integral transform (PIT) and the log-score (lnS).7
For a particular forecast density, the log score is simply the logarithm of the forecast density evaluated at the realised outcome of the random variable we forecast. The theoretical justification for using the log-score is somewhat complicated, but the implementation is rather easy, especially if we construct density forecasts according to the methods that were described above. That is, the estimates of $$\acute{y}_t(h)$$ and $$\acute{\sigma}_{t}(h)$$ define a normal forecast density, which may then be evaluated at the realised outcome. This is easily done with the aid of most software packages.
Therefore, if we were to conduct an out-of-sample forecasting experiment, we could compute the average log-score over all the forecast densities that cover the evaluation period. In the density forecasting literature this score is regarded as an intuitive measure of density fit, or as the density counterpart to the RMSE. However, in contrast to the RMSE, a higher average log score is considered better than a lower value, simply because it reflects a higher likelihood.
The PIT is used to test whether the predictive densities are correctly specified. For example, the 95% forecast interval is an interval that should contain the future value of the series in 95% of repeated applications. Likewise, a 30% forecast interval should contain the future value of the series in 30% of the repeated applications. Considering the whole forecast distribution, if the density forecast is correctly specified we would expect to see realised outcomes in all parts of this distribution, but with more outcomes centred around the peak of the distribution than in the tails. Therefore, if we run a forecasting experiment to construct a density, as described above, this could provide the essence of a repeated application. We could then evaluate whether actual outcomes fall within the forecast densities, which is what the PIT measures.
More formally, it can be shown that data that is modelled as a random variable from any given distribution can be converted to a random variable that has a uniform distribution using the PIT. Under the maintained normality assumption, a PIT can then be computed by evaluating the normal cumulative distribution function, with parameters $$\acute{y}_t(h)$$ and $$\acute{\sigma}_{t}(h)$$, at the realised outcome. If the density forecasts are correctly specified, and we do this for all forecasts covering the evaluation sample, $$T-R$$, then the PITs should be more or less uniformly distributed (i.e. a straight line in a histogram with percentiles on the x-axis). Once again, this is easily done in most statistical computer software packages. Moreover, different test statistics exist and can be applied to determine whether the PITs are indeed uniformly distributed.
Figure 8: PIT for $$h=1$$
By way of example, we could compare the probability density of a particular $$h$$-step ahead forecast to the distribution of the data from the in-sample period. We would usually start with the evaluation of $$h=1$$, as it gets slightly more complicated when $$h>1$$, as a result of the potential serial correlation in the forecast errors. To complete this exercise we usually make use of a histogram to depict the empirical distribution of the PITs. In this example, the solid line represents the number of draws that are expected to be in each bin under a $$U(0,1)$$ distribution. The dashed lines represent the 95% confidence interval constructed under the normal approximation of a binomial distribution.
The results from such an exercise are depicted in Figure 8, which would appear to be relatively favourable as there is no part of the transformed $$U(0,1)$$ distribution that is beyond the confidence intervals.
# 9 References
Balcilar, Mehmet, Rangan Gupta, and Kevin Kotzé. 2015. “Forecasting Macroeconomic Data for an Emerging Market with a Nonlinear DSGE Model.” Economic Modelling 44: 215–28.
Bates, J., and Clive W. J. Granger. 1969. “The Combination of Forecasts.” Operational Research Quarterly 20 (4): 451–68.
Clark, Todd E., and Kenneth D. West. 2007. “Approximately Normal Tests for Equal Predictive Accuracy in Nested Models.” Journal of Econometrics 138: 291–311.
Clemen, Robert T. 1989. “Combining Forecasts: A Review and Annotated Bibliography.” International Journal of Forecasting 5 (4): 559–83.
Corradi, V., and N. R. Swanson. 2006. “Predictive Density Evaluation.” In Handbook of Economic Forecasting, edited by G. Elliott, C. W. J. Granger, and A. Timmermann, 1:197–284. Elsevier.
Diebold, Francis X., and Roberto S. Mariano. 1995. “Comparing Predictive Accuracy.” Journal of Business and Economic Statistics 13 (3): 253–63.
Hall, Stephen G., and James Mitchell. 2007. “Combining Density Forecasts.” International Journal of Forecasting 23 (1): 1–13.
Jore, Anne Sofie, James Mitchell, and Shaun Vahey. 2008. “Combining Forecast Densities from VARs with Uncertain Instabilities.” Reserve Bank of New Zealand Discussion Paper Series DP2008/18. Reserve Bank of New Zealand.
Kascha, Christian, and Francesco Ravazzolo. 2008. “Combining Inflation Density Forecasts.” Working Paper 2008/22. Norges Bank.
Marcellino, M. 2006. “Leading Indicators.” In Handbook of Economic Forecasting, edited by G. Elliott, C. W. J. Granger, and A. Timmermann, 1:879–960. Elsevier.
Newey, W., and K. West. 1987. “A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix.” Econometrica 55 (3): 703–8.
Stock, J. H., and M. W. Watson. 2004. “Combination Forecasts of Output Growth in a Seven-Country Data Set.” Journal of Forecasting 23: 405–30.
Timmermann, Allan. 2006. “Forecast Combinations.” In Handbook of Economic Forecasting, edited by G. Elliott, C. W. J. Granger, and A. Timmermann, 1:135–96. Elsevier.
Wallis, K. F. 2005. “Combining Density and Interval Forecasts: A Modest Proposal.” Oxford Bulletin of Economics and Statistics 67 (S1): 983–94.
West, Kenneth D. 1996. “Asymptotic Inference About Predictive Ability.” Econometrica 64 (5): 1067–84.
———. 2006. “Forecast Evaluation.” In Handbook of Economic Forecasting, edited by G. Elliott, C. W. J. Granger, and A. Timmermann, 1:99–134. Elsevier.
1. For those models that generate random variables for the parameter estimates (i.e. where Bayesian parameter estimation methods are used) we could use the coefficient estimates to generate distributions for the forecast estimates.
2. In a number of financial applications, one may choose to make use of an asymmetric loss function that penalises incorrect forecasts for negative returns more than those that forecast positive returns.
3. In addition, the interested reader may also wish to consult West (1996).
4. This is generally the case with autoregressive models.
5. The individual forecasts need not come from a formal statistical model and may include subjective opinions about the future values of variables.
6. See, Wallis (2005), Hall and Mitchell (2007), Kascha and Ravazzolo (2008), and Jore, Mitchell, and Vahey (2008), among others.
7. A more detailed examination of different density evaluation procedures is given by Corradi and Swanson (2006).
|
2021-05-16 03:13:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.8702887296676636, "perplexity": 809.5097342047682}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991659.54/warc/CC-MAIN-20210516013713-20210516043713-00430.warc.gz"}
|
https://math.stackexchange.com/questions/linked/664298
|
### Solving the Definite Integral $\int_0^{\infty} \frac{1}{t^{\frac{3}{2}}} e^{-\frac{a}{t}} \, \mathrm{erf}(\sqrt{t})\, \mathrm{d}t$
I would like to solve the following integral $$\int_0^{\infty} \frac{1}{t^{\frac{3}{2}}} e^{-\frac{a}{t}} \, \mathrm{erf}(\sqrt{t})\, \mathrm{d}t$$ with Re$(a)>0$ and erf the error function. Is ...
### How to compute $\int_0^{\infty}\sqrt x \exp\left(-x-\frac{1}{x}\right) \, dx$?
How to compute this integral? : $$\int_0^{\infty}\sqrt x \exp\left(-x-\frac{1}{x}\right) \, dx$$ Wolframalpha gives the answer $\dfrac{3\sqrt{\pi}}{2e^2}$, but how to compute this?
### Integral of $1/x^2 \exp(-ax+b-c/x)dx$
I am interested in simplifying the integral $$\int \frac{1}{x^2}\exp(-ax+b-\frac{c}{x})dx$$ with $a, b, c \in \mathbb{R}$. Do you have any idea? With $a=0$ or $c=0$ I know the solutions but what about ...
### An improper integrals related to probability, $\int_0^\infty\frac1y \exp(\frac{-x_0}y-y)\,dy$
How can I calculate the integral $$\int_0^\infty{\frac1y e^{\frac{-x_0}y-y}}dy$$ in terms of well-known constants and functions? I used some fundamental techniques of integration but got nothing.
### The closed form of $\int^\infty_{B}e^{-(x+\frac{A}{x})}\,dx$, where $A>0$, $B>0$.
What tools, ways would you propose for getting the closed form of this integral? $$\int^\infty_{B}e^{-\left(x+\frac{A}{x}\right)}\,dx,$$ where $A>0$, $B>0$. When $B=0$, from Table of Integrals,...
Equation no. 3.471.9 of Table of Integrals, Series, and Products (by Gradshteyn and Ryzhik) is written below \int_0^{\infty}x^{v-1}e^{-\frac{\beta}{x}-\gamma x}dx=2\left(\frac{\beta}{\gamma}\right)^{\frac{v}{2}}K_{v}(2\...
### How to rewrite this integral $I = \int e^{ - \left( {ax + \frac{b}{x}} \right)} dx$ as non-elementary function?
Is it possible to rewrite or evaluate this integral $I = \int\limits_1^p e^{ - \left( {ax + \frac{b}{x}} \right)} dx$ where $a,b,p > 0$ as some known non-elementary function (For example \$\...
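(An added aside, not part of the scraped page: the truncated right-hand side of the 3.471.9 identity quoted above is the standard form $2(\beta/\gamma)^{v/2}K_{v}(2\sqrt{\beta\gamma})$, which is easy to sanity-check numerically. The sample values of v, beta and gamma below are arbitrary.)

import numpy as np
from scipy.integrate import quad
from scipy.special import kv

v, beta, gamma = 1.5, 2.0, 3.0
lhs, _ = quad(lambda x: x**(v - 1) * np.exp(-beta / x - gamma * x), 0, np.inf)
rhs = 2 * (beta / gamma) ** (v / 2) * kv(v, 2 * np.sqrt(beta * gamma))
print(lhs, rhs)  # the two values agree to quadrature accuracy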
|
2021-01-26 09:12:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913285970687866, "perplexity": 419.7394315762942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704799711.94/warc/CC-MAIN-20210126073722-20210126103722-00394.warc.gz"}
|
http://mail.scipy.org/pipermail/scipy-dev/2004-November/002486.html
|
[SciPy-dev] Accessible SciPy (ASP) project
Travis Oliphant oliphant at ee.byu.edu
Tue Nov 2 11:31:21 CST 2004
Arnd Baecker wrote:
>On Mon, 1 Nov 2004, Travis Oliphant wrote:
>
>
>
>>I should chime in here on SciPy documentation, as I am trying to set up
>>a system that would allow users to contribute documentation to Python in
>>a more fluid manner.
>>
>>
>
>Excellent!
>
>
Thanks for the feedback.
>[...]
>
>
>
>>For me the most important issues are:
>>
>>1) agreeing on a common command to bring up a graphical help browser
>>(what that browser is could change over time---and even be set by the
>>user). I like ghelp as the command to use, and feel that bringing up a
>>chm browser is a good start.
>>
>>
>
>What I had in mind with the graphical help-browser, was that
>it can be used to display manuals, tutorials etc.
>and doc-strings. Is there a way to supply the doc-strings
>(maybe converted to html from ReST, including math)
>to these chm browsers?
>
>
This is the idea I have as well.
>Do you know of any tools (under linux) to convert html documentation
>to chm?
>((BTW: that's what I like about documancer: it can deal
>with html directly, so there is no need to have documentation
>in different formats: for example on linux it is quite
>common to have the html documentation for python installed locally,
>so going for chm means that this contents has to be stored twice.
>When saying this, I really think of the graphical help browser
>to be _the_ way to access any type of documentation relevant
>for scientific computing. This might mean for one person
>to include documentation on PyTables, the other would
>like to have OpenGL stuff and so on...))
>
>
>
No, I know of no chm-producing tools on Linux. For me that is a
downside but not a show-stopper. I need to look at documancer more.
>>2) improving the docstring documentation.
>>
>>Here is a plan for doing number 2.
>>
>>1) First, use ReST in docstrings along with latex math commands where
>>needed. i.e. $\alpha = \int_0^b f(x) \, dx$
>>
>>2) Set up a site (e.g. www.scipy.org/livedocs) which has all the
>>docstrings from scipy available in a hierarchical form.
>> * On each page there is documentation for a single function or class.
>> * The documentation is separated into three parts:
>> a) the one-liner
>> b) the docstring to be included in the scipy code
>> c) extra examples, documentation that will not be included in
>>the code, but stay on the website.
>> * Every docstring in scipy contains a link back to the appropriate
>>livedoc page so that users can edit it and/or find out more about the
>>function.
>>
>> * Ultimately the website could convert latex code to images and
>>create a nice looking document.
>>
>>Getting this working perfectly requires a bit of effort. But a simple
>>implementation is not that hard.
>>
>>
>>
>
>I think this is a very good idea/approach. I have only
>a problem with c): I often have to work
>off-line (and many of our students as well), so I think
>it is necessary to be able to access the additional information
>also locally.
>
>
Yes, the idea is that all of this extra stuff would be useful to the
graphical help browser but wouldn't be in the doc-strings. So, I think
we are on the same page here.
>It might be problematic to add this to
>the doc-strings themselves (because they could become too large,
>including figures etc.), but maybe one could do the following:
> Create a directory .scipy in the home directory of the user
> When invoking the help command the usual information is displayed
> and then it is checked if further information
> is available and displayed.
>
>
Good ideas.
I'd like to hear more input on the use of .chm files for the graphical
help file.
-Travis
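(Added illustration, not part of the archived thread: a docstring in the style proposed above, ReST markup with embedded LaTeX math, might look like the following. The function and its contents are hypothetical.)

def trapezoid(f, a, b, n=100):
    r"""Approximate :math:`\int_a^b f(x)\,dx` with the trapezoid rule.

    :param f: the integrand, a callable of one float argument
    :param a: lower limit of integration
    :param b: upper limit of integration
    :param n: number of subintervals
    :returns: the approximate value of the integral
    """
    h = (b - a) / n
    total = (f(a) + f(b)) / 2.0
    # Sum the interior nodes a + h, a + 2h, ..., a + (n-1)h.
    for i in range(1, n):
        total += f(a + i * h)
    return h * total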
|
2014-09-16 21:35:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3257969915866852, "perplexity": 4911.221000751506}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657119965.46/warc/CC-MAIN-20140914011159-00292-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
https://www.projecteuclid.org/euclid.jsl/1183745795
|
## Journal of Symbolic Logic
### $\Pi^1_3$ Sets and $\Pi^1_3$ Singletons
#### Abstract
We extend work of H. Friedman, L. Harrington and P. Welch to the third level of the projective hierarchy. Our main theorems say that (under appropriate background assumptions) the possibility to select definable elements of non-empty sets of reals at the third level of the projective hierarchy is equivalent to the disjunction of determinacy of games at the second level of the projective hierarchy and the existence of a core model (corresponding to this fragment of determinacy) which must then contain all real numbers. The proofs use Sacks forcing with perfect trees and core model techniques.
#### Article information
Source
J. Symbolic Logic, Volume 64, Issue 2 (1999), 590-616.
Dates
First available in Project Euclid: 6 July 2007
https://projecteuclid.org/euclid.jsl/1183745795
Mathematical Reviews number (MathSciNet)
MR1777772
Zentralblatt MATH identifier
0930.03057
Hauser, Kai; Woodin, W. Hugh. $\Pi^1_3$ Sets and $\Pi^1_3$ Singletons. J. Symbolic Logic 64 (1999), no. 2, 590--616. https://projecteuclid.org/euclid.jsl/1183745795
|
2020-01-26 20:16:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.486649751663208, "perplexity": 931.309545587261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690379.95/warc/CC-MAIN-20200126195918-20200126225918-00361.warc.gz"}
|
http://openstudy.com/updates/55f1b2f1e4b0bf851f6fca91
|
## newwar one year ago What is the value of the expression $$7^{-3}$$? A. –21 B. [image] C. [image] D. 343
1. newwar
2. anonymous
[drawing] do you know this?
3. anonymous
you there?
4. newwar
yes what do i do
5. newwar
hello
6. newwar
@imammalik_806
7. newwar
@itbribro
8. anonymous
rekt m8
9. anonymous
7 minus 3?
10. newwar
4
11. anonymous
k hold on lemme see
12. anonymous
oh bro
13. anonymous
its not 7 minus 3
14. anonymous
the 3 is an exponent
15. newwar
ok
16. newwar
so what do i do
17. anonymous
[drawing] see the pattern?
18. newwar
yes
19. anonymous
so we want to go the other direction for negative exponents. to do this we have to do the inverse operation of multiplication. do you know what the inverse operation of multiplication is?
20. newwar
no
21. anonymous
well, technically, we multiply by the reciprocal... but ususally people just call it division.
22. anonymous
[drawing]
23. newwar
ok
24. anonymous
so if we are at $$7^3$$ and we want to get to $$7^2$$, we divide by 7 or multiply by 1/7... okay?
25. anonymous
26. newwar
ok
27. anonymous
so let's take it a step further... let's figure out what $$7^0$$ should be , using the pattern...
28. newwar
k
29. anonymous
[drawing]
30. anonymous
can you figure out what $$7^0$$ is?
31. newwar
0
32. anonymous
no... let's follow the pattern. what is the number above $$7^0$$?
33. anonymous
you there?
34. newwar
yup
35. anonymous
so what is the number above $$7^0$$
36. anonymous
[drawing]
37. newwar
7
38. newwar
1/343
39. anonymous
very good!!! now we use the pattern. to go down (negative direction) we divide by 7. what is 7 divided by 7?
40. anonymous
it's the same as 7 times (1/7)...
41. newwar
-1
42. anonymous
7 divided by 7 is -1 ??? i don't think so, try again
43. newwar
just 1 sorry
44. anonymous
excellent!!!
45. anonymous
[drawing] so can you figure out the next number? again, follow the pattern...
46. newwar
7 by one
47. anonymous
? if you can, use an expression or the number
48. newwar
ok
49. anonymous
so what is it... i don't understand what number "7 by one" represents.
50. anonymous
don't be afraid... take your time and think, use the pattern and then respond. it's okay to make a mistake because i will help you to learn from it
51. newwar
sorry my computer went out
52. anonymous
that's alright... what did you get?
53. newwar
i thihnk it is c
54. anonymous
$$7^{-1}=\text{c?}$$
55. anonymous
just so you know... my goal here is to help you understand these types of problems. it is not to just help you get the answer for this problem.
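(Added sketch, not part of the original thread: the divide-by-7 pattern the tutor is building can be checked directly in Python.)

# Each step down the pattern divides by 7; the exponent drops by one.
value = 7 ** 3  # 343
for n in range(3, -4, -1):
    print(f"7**{n} = {value}")
    value = value / 7

print(7 ** -3, 1 / 343)  # both equal 0.00291545..., i.e. 1/343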
|
2017-01-18 12:26:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7345883250236511, "perplexity": 5223.461681379626}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00556-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://bookdown.org/paul/big-data6/estimate.html
|
## 5.5 Estimation
• Estimation = Fitting the model to the data (by adapting/finding the parameters)
• e.g. easy in the case of the mean (an analytical solution exists) but more difficult for, e.g., the linear model
• Model parameters: $$\color{orange}{\beta_{0}}$$, $$\color{orange}{\beta_{1}}$$ and $$\color{orange}{\beta_{2}}$$
• Ordinary Least Squares (OLS)
• Least squares methods (Astronomy)
• Choose $$\color{orange}{\beta_{0}}$$, $$\color{orange}{\beta_{1}}$$ and $$\color{orange}{\beta_{2}}$$ so that the sum of the squared errors $$\color{red}{\varepsilon}_{i}$$ is minimized (See graph!)
• Q: Why do we square the errors?
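(Added sketch, not from the original slides: with hypothetical data, the least-squares fit for the two-regressor linear model takes only a few lines. One standard answer to the closing question: squaring stops positive and negative errors from cancelling and gives a differentiable objective with the closed-form solution $$\hat{\beta} = (X'X)^{-1}X'y$$ that the code computes below.)

import numpy as np

rng = np.random.default_rng(1)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(scale=0.5, size=n)  # true betas: 1, 2, -3

X = np.column_stack([np.ones(n), x1, x2])         # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares solution
print(beta_hat)  # approximately [1.0, 2.0, -3.0]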
|
2019-04-23 04:50:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9447192549705505, "perplexity": 1559.6480219689404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578586680.51/warc/CC-MAIN-20190423035013-20190423061013-00408.warc.gz"}
|
https://tex.stackexchange.com/questions/335635/expansion-of-argument-to-dtmdate-function-in-datetime2
|
# Expansion of argument to DTMdate function in datetime2
I need a very simple associative array for my project which I am trying to implement as in the following MWE.
Two questions please:
1. How do I need to tweak my \alistget macro to allow its result to be used in functions like \DTMdate?
2. How should I debug this? I have been extensively round the houses with the likes of etoolbox \csdef and \csuse functions to no avail.
I don't really want solutions along the lines of "use xyz implementation of associative arrays" as this is an important learning experience for me!
\documentclass{article}
\usepackage[british]{datetime2}
\newcommand{\alistadd}[2]{
\expandafter\def\csname alist_#1 \endcsname{#2}
}
\newcommand{\alistget}[1]{%
\csname alist_#1 \endcsname%
}
\begin{document}
\alistadd{Tom}{1970-12-25}
Tom \alistget{Tom}\par
This works: Tom \expandafter\DTMdate\csname alist_Tom \endcsname\par
This doesn't: Tom \DTMdate{\alistget{Tom}}\par
\end{document}
## 2 Answers
You can avoid your hassles by using expl3 and xparse:
\documentclass{article}
\usepackage[british]{datetime2}
\usepackage{xparse}
\ExplSyntaxOn
\NewDocumentCommand{\xDTMdate}{m}
{% use the expanding variant
\dsla_xdtmdate:f { #1 }
}
% syntactic sugar
\cs_new_eq:NN \dsla_xdtmdate:n \DTMdate
% create the expanding variant
\cs_generate_variant:Nn \dsla_xdtmdate:n { f }
% better management with a property list
\NewDocumentCommand{\alistadd}{mm}
{
\prop_gput:Nnn \g_dsla_datelist_prop { #1 } { #2 }
}
\DeclareExpandableDocumentCommand{\alistget}{m}
{
\prop_item:Nn \g_dsla_datelist_prop { #1 }
}
\prop_new:N \g_dsla_datelist_prop
\ExplSyntaxOff
\begin{document}
\alistadd{Tom}{1970-12-25}
Tom \alistget{Tom}
This works: Tom \xDTMdate{\alistget{Tom}}
\end{document}
I define a new \xDTMdate command that does “full expansion until possible” of its argument (that is, it fully expands macros until finding a single unexpandable token, which is sufficient for our purposes).
The \alistget command is made expandable by using \prop_item:Nn, which behaves well in this context. Actually, also your “hand made” property list would do, but I think it's better to stick with expl3.
• Perfect, thank you. I shall get up to speed with xpl3 by reading this – dsla Oct 24 '16 at 16:33
The \DTMdate macro doesn't expand its argument before looking for date formats so you need to expand (two steps) before calling the macro
\documentclass{article}
\usepackage[british]{datetime2}
\newcommand{\alistadd}[2]{%dont forget
\expandafter\def\csname alist_#1 \endcsname{#2}% these
}
\newcommand{\alistget}[1]{%
\csname alist_#1 \endcsname% this one isn't needed
}
\begin{document}
\alistadd{Tom}{1970-12-25}
Tom \alistget{Tom}\par
This works: Tom \expandafter\DTMdate\csname alist_Tom \endcsname\par
This doesn't: Tom
\expandafter\expandafter\expandafter\DTMdate
\expandafter\expandafter\expandafter{\alistget{Tom}}\par
\end{document}
The first step gets from \alistget{Tom} to \alist_Tom and the second step gets to 1970-12-25
Note that your macros work by pure expansion, so there is nothing really you can do to make it more likely that they "just work" in the argument of another command: unlike in typical compiled languages, macro arguments are not expanded before the macro is called, so there is no way, in general, of making a macro always be equivalent to its replacement text; it depends on the details of the calling macro (\DTMdate here).
|
2019-10-14 12:50:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7720391750335693, "perplexity": 5690.928696446188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653247.25/warc/CC-MAIN-20191014124230-20191014151730-00378.warc.gz"}
|
https://www.aimspress.com/article/doi/10.3934/era.2020046
|
### Electronic Research Archive
2020, Issue 2: 879-895. doi: 10.3934/era.2020046
# Quasineutral limit for the compressible two-fluid Euler–Maxwell equations for well-prepared initial data
• Received: 01 April 2020 Revised: 01 April 2020
• Primary: 35L60, 35B40; Secondary: 35C20
• In this paper, we study the quasi-neutral limit for the compressible two-fluid Euler–Maxwell equations for well-prepared initial data. Precisely, we prove that the solution of the three-dimensional compressible two-fluid Euler–Maxwell equations converges locally in time to that of the compressible Euler equations as $\varepsilon$ tends to zero. The proof is based on formal asymptotic expansions, iteration techniques, vector analysis formulas and Sobolev energy estimates.
Citation: Min Li, Xueke Pu, Shu Wang. Quasineutral limit for the compressible two-fluid Euler–Maxwell equations for well-prepared initial data[J]. Electronic Research Archive, 2020, 28(2): 879-895. doi: 10.3934/era.2020046
|
2022-07-02 08:13:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6904025077819824, "perplexity": 1427.929403221182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103989282.58/warc/CC-MAIN-20220702071223-20220702101223-00683.warc.gz"}
|
https://math.illinois.edu/research/igl/projects/fall/2017/automata-and-space-filling-curves
|
# Automata and Space-Filling Curves
### Faculty Member
Philipp Hieronymi and Erik Walsberg
In the last few years novel connections between mathematical logic, automata theory and metric geometry have emerged. A question that often arises in this area is the following: let $r \in \mathbb{N}_{>1}$ and let $C \subseteq \mathbb{R}^n$ be a geometrically interesting set (often a fractal); can the set of all $r$-ary representations of elements of C be recognized by a Büchi automaton? For example, let C be the usual middle-thirds Cantor set. Elements of C are precisely those real numbers in $[0,1]$ that have a ternary representation in which the digit 1 does not occur. Therefore it is not hard to see that the set of ternary representations of elements of C can be recognized by such an automaton. The goal of this project is to answer similar questions in the case that C is the graph of a function. In particular, this project aims to determine whether graphs of space-filling curves can be recognized in this way.
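(A concrete warm-up, added for illustration and not part of the project page: for the Cantor-set example above, the automaton only has to reject any ternary digit string containing the digit 1. A minimal Python sketch of that transition structure follows; for infinite words one would impose the Büchi acceptance condition on the same two states.)

# Two-state automaton over the digit alphabet {0, 1, 2}: stay in the
# accepting state unless a digit 1 is read, which is irrecoverable.
def accepts(digits):
    state = "ok"
    for d in digits:
        if d == 1:
            state = "dead"
    return state == "ok"

print(accepts([0, 2, 2, 0]))  # True: prefix of a Cantor-set ternary expansion
print(accepts([0, 1, 2]))     # False: contains the digit 1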
### Team Meetings
At least biweekly
Intermediate
|
2017-12-17 15:38:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7473950386047363, "perplexity": 264.4460896001282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596115.72/warc/CC-MAIN-20171217152217-20171217174217-00495.warc.gz"}
|
https://codereview.stackexchange.com/questions/126827/typesetting-a-in-latex-using-algorithm2e-follow-up
|
Typesetting A* in LaTeX using algorithm2e - follow-up
(See the previous and initial iteration.)
I have this second version of my LaTeX code. I made it more DRY by removing the duplicate keyword definitions shared by the two algorithms being typeset. Also, in the previous version, in the argument of the main While loop, OPEN was typeset in math italics instead of the desired upright OPEN.
See what I have now:
\documentclass[10pt]{article}
\usepackage{amsmath}
\usepackage[ruled,vlined,linesnumbered]{algorithm2e}
\SetArgSty{textnormal} % Make the While argument non-italic
% Define special keywords.
\SetKw{Nil}{nil}
\SetKw{Is}{is}
\SetKw{Not}{not}
\SetKw{Mapped}{mapped}
\SetKw{In}{in}
\SetKw{ChildNode}{child node}
\SetKw{Of}{of}
\SetKw{Continue}{continue}
\begin{document}
% A*.
\begin{algorithm}
$\text{OPEN} = \{ s \}$ \\
$\text{CLOSED} = \emptyset$ \\
$\pi = \{ (s \mapsto$ \Nil $)\}$ \\
$g = \{ (s \mapsto 0) \}$ \\
\While{$|\text{OPEN}| > 0$}{
$u = \textsc{ExtractMinimum}(\text{OPEN})$ \\
\If{$u$ \Is $t$}{
\KwRet \textsc{TracebackPath}$(u, \pi)$ \\
}
$\text{CLOSED} = \text{CLOSED} \cup \{ u \}$ \\
\ForEach{\ChildNode $v$ \Of $u$}{
\If{$v \in \textsc{CLOSED}$}{
\Continue \\
}
$c = g(u) + w(u, v)$ \\
\If{$v$ \Is \Not \Mapped \In $g$}{
$g(v) = c$ \\
$\pi(v) = u$ \\
\textsc{Insert}$(\text{OPEN}, v, c + h(v))$ \\
}
\ElseIf{$g(v) > c$}{
$g(v) = c$ \\
$\pi(v) = u$ \\
\textsc{DecreaseKey}$(\text{OPEN}, v, c + h(v))$ \\
}
}
}
\KwRet $\langle \rangle$
\caption{\textsc{AStarPathFinder}$(s, t, w, h)$}
\end{algorithm}
% Traceback path.
\begin{algorithm}
$p = \langle \rangle$ \\
\While{$u$ \Is \Not \Nil}{
$p = u \circ p$ \\
$u = \pi(u)$ \\
}
\KwRet $p$
\caption{\textsc{TracebackPath}$(u, \pi)$}
\end{algorithm}
\end{document}
The result looks like this:
Any critique is much appreciated.
• I really like the visualization of the blocks – MrSmith42 Apr 27 '16 at 10:18
• @MrSmith42 Yes, they are fairly convenient. – coderodde Apr 27 '16 at 10:26
Nice. I mean, if you're going for perfection I'm going to write something, but otherwise that looks really good.
• The documentation for algorithm2e mentions that every line should be terminated with \; - and then has a flag to disable showing that semicolon, \DontPrintSemicolon - take that how you will. At least Emacs highlights that less garishly than the forced newline.
• The variables could be marked via \SetKwData. If you want to keep the same style, modify it via \SetDataSty{textnormal}. Similarly for the functions via \SetKwFunction.
The style for the captions could also just be set to smallcaps globally via \let\AlCapNameSty\textsc (works and should be correct, but I'm not a $\LaTeX$ expert by any means).
Also there's the procedure environment (instead of algorithm) that has some more shortcuts. Note that I added procnumbered and \SetAlgoProcName to set the names and numbering back to the default algorithm settings.
Looks like this:
\documentclass[10pt]{article}
\usepackage{amsmath}
\usepackage[ruled,vlined,linesnumbered,procnumbered]{algorithm2e}
\SetArgSty{textnormal} % Make the While argument non-italic
\SetDataSty{textnormal}
\SetFuncSty{textsc}
\let\AlCapNameSty\textsc
\DontPrintSemicolon
\SetAlgoProcName{Algorithm}
% Define special keywords.
\SetKw{Nil}{nil}
\SetKw{Is}{is}
\SetKw{Not}{not}
\SetKw{Mapped}{mapped}
\SetKw{In}{in}
\SetKw{ChildNode}{child node}
\SetKw{Of}{of}
\SetKw{Continue}{continue}
\SetKwData{Open}{OPEN}
\SetKwData{Closed}{CLOSED}
\SetKwFunction{ExtractMinimum}{ExtractMinimum}
\SetKwFunction{TracebackPath}{TracebackPath}
\SetKwFunction{Insert}{Insert}
\SetKwFunction{DecreaseKey}{DecreaseKey}
\begin{document}
% A*.
\begin{procedure}
$\Open = \{ s \}$ \;
$\Closed = \emptyset$ \;
$\pi = \{ (s \mapsto$ \Nil $)\}$ \;
$g = \{ (s \mapsto 0) \}$ \;
\While{$|\Open| > 0$}{
$u = \ExtractMinimum(\Open)$ \;
\If{$u$ \Is $t$}{
\KwRet \TracebackPath{$u$, $\pi$} \;
}
$\Closed = \Closed \cup \{ u \}$ \;
\ForEach{\ChildNode $v$ \Of $u$}{
\If{$v \in \Closed$}{
\Continue \;
}
$c = g(u) + w(u, v)$ \;
\If{$v$ \Is \Not \Mapped \In $g$}{
$g(v) = c$ \;
$\pi(v) = u$ \;
\Insert{\Open, $v$, $c + h(v)$} \;
}
\ElseIf{$g(v) > c$}{
$g(v) = c$ \;
$\pi(v) = u$ \;
\DecreaseKey{\Open, $v$, $c + h(v)$} \;
}
}
}
\KwRet $\langle \rangle$
\caption{AStarPathFinder($s$, $t$, $w$, $h$)}
\end{procedure}
% Traceback path.
\begin{procedure}
$p = \langle \rangle$ \;
\While{$u$ \Is \Not \Nil}{
$p = u \circ p$ \;
$u = \pi(u)$ \;
}
\KwRet $p$
\caption{TracebackPath($u$, $\pi$)}
\end{procedure}
\end{document}
|
2019-10-15 05:23:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999769389629364, "perplexity": 12931.470542281428}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655864.19/warc/CC-MAIN-20191015032537-20191015060037-00446.warc.gz"}
|
https://socratic.org/questions/54babc6b581e2a2c9243c9f7
|
# Question 3c9f7
Jan 17, 2015
$\text{2020 g}$
#### Explanation:
You don't even need the balanced chemical equation for this one, all you need to use to solve this problem is the Law of Conservation of Mass.
So, the first sample produced $\text{1.64 kg}$ of magnesium and $\text{2.56 kg}$ of fluorine. Since the combined mass of the products must equal the mass of the reactant, you'll have
$\text{mass MgF"_2 = "1.64 kg" + "2.56 kg" = "4.20 kg}$
Out of this,
$\frac{1.64\ \text{kg}}{4.20\ \text{kg}} \times 100\% = 39\%$ by mass is magnesium
and
$\frac{2.56\ \text{kg}}{4.20\ \text{kg}} \times 100\% = 61\%$ by mass is fluorine.
Since the same compound undergoes decomposition in the second reaction, the same proportions by mass will be true for both magnesium and fluorine.
So, the second sample produces $\text{1.29 kg}$ of magnesium, which means that
$1.29\ \text{kg Mg} \times \frac{100\ \text{kg MgF}_2}{39\ \text{kg Mg}} = 3.31\ \text{kg MgF}_2$
is present in the second sample, which means that
$3.31\ \text{kg MgF}_2 \times \frac{61\ \text{kg F}}{100\ \text{kg MgF}_2} = 2.02\ \text{kg F}$
will be produced this time around. As an aside, keep in mind that the reaction will produce fluorine gas, ${\text{F}}_{2}$.
Verify that the mass is conserved by
$\text{3.31 kg" = "1.29 kg" + "2.02 kg}$
This confirms the calculations for fluorine and for the magnesium fluoride.
Finally, to convert this to grams, use the fact that
$\text{1 kg = 1000 g}$
You should end up with
$2.02\ \text{kg} \times \frac{1000\ \text{g}}{1\ \text{kg}} = \text{2020 g}$, rounded to three sig figs
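(Added numeric check, not part of the original answer: the ratios only need the conservation law, so a few lines of Python reproduce the result. Small differences from 2020 g arise from the intermediate rounding to 39%/61% in the worked solution above.)

mg1, f1 = 1.64, 2.56   # kg of Mg and F from the first sample
mg2 = 1.29             # kg of Mg from the second sample

f2 = mg2 * f1 / mg1    # F scales in the same mass ratio as in sample 1
total2 = mg2 + f2      # mass of MgF2 decomposed in the second sample
print(round(total2, 2), round(f2, 2), round(f2 * 1000))  # 3.3 kg, 2.01 kg, 2014 g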
|
2022-01-23 08:24:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6583812236785889, "perplexity": 4350.406675130309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304217.55/warc/CC-MAIN-20220123081226-20220123111226-00692.warc.gz"}
|
https://kb.osu.edu/dspace/handle/1811/19541
|
# CONSTRUCTION AND CALIBRATION OF A DIFFERENCE FREQUENCY LASER SPECTROMETER AND NEW THZ FREQUENCY MEASUREMENTS OF WATER AND AMMONIA
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/19541
Title: CONSTRUCTION AND CALIBRATION OF A DIFFERENCE FREQUENCY LASER SPECTROMETER AND NEW THZ FREQUENCY MEASUREMENTS OF WATER AND AMMONIA
Creators: Pearson, J. C.; Pickett, H. M.; Chen, Pin; Matsuura, Shuji; Blake, Geoffrey A.
Issue Date: 1999
Abstract: A three laser system based on 852 nm DBR lasers has been constructed and used to generate radiation in the 750 GHz to 1600 GHz frequency region. The system works by locking two of the three lasers to modes of an ultra-low-expansion Fabry-Perot cavity. The third laser is offset locked to one of the cavity-locked lasers with conventional microwave techniques. The signals from the offset laser and the other cavity-locked laser are injected into a Master Oscillator Power Amplifier (MOPA), amplified and focused on a low-temperature-grown GaAs photomixer, which radiates the difference frequency. The system has been calibrated with molecular lines to better than one part in $10^{7}$. In this paper we present the application of this system to the $v_{2}$ inversion band of ammonia and the ground and $v_{2}$ states of water. A discussion of the system design, the calibration and the new spectral measurements will be presented.
URI: http://hdl.handle.net/1811/19541
Other Identifiers: 1999-WF-01
|
2014-04-23 17:21:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47184163331985474, "perplexity": 2562.7431805950287}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://fip-ifp.org/2ul59zgl/structure-of-hypophosphoric-acid-985604
|
# structure of hypophosphoric acid
Hypophosphoric acid (CAS 7803-60-3, molecular formula $H_4P_2O_6$) is a mineral acid with phosphorus in the oxidation state +4. In the solid state it exists as the dihydrate, $H_4P_2O_6 \cdot 2H_2O$. The two phosphorus atoms are identical and joined directly by a P–P bond, and the acid is tetrabasic: it is tetraprotic with dissociation constants pKa1 = 2.2, pKa2 = 2.8, pKa3 = 7.3 and pKa4 = 10.0. Hypophosphoric acid consists of oxonium ions and is best formulated $[H_3O^+]_2[H_2P_2O_6]^{2-}$. It forms orthorhombic plates, mp 70° (it easily forms a dihydrate, mp 55°, and is usually available only in aqueous solution). The structure of the hypophosphate ion had been the subject of speculation and controversy since hypophosphoric acid was discovered by Salzer in 1877.
Hypophosphoric acid is prepared by the controlled oxidation of red phosphorus with sodium chlorite solution at room temperature; when the disodium salt of the acid is formed, it is passed through a cation exchanger, which ultimately gives hypophosphoric acid. The pure acid can be obtained by extraction of its aqueous solution by diethyl ether, $(C_2H_5)_2O$. On standing, the anhydrous acid undergoes rearrangement and disproportionation to form a mixture containing isohypophosphoric acid, the unsymmetrical P–O–P isomer.
Hypophosphoric acid should not be confused with hypophosphorous acid (phosphinic acid, CAS 6303-21-5), $H_3PO_2$, molecular weight 66.00, commonly sold as a 50 wt. % solution in $H_2O$. Hypophosphorous acid is a phosphorus oxoacid with a monobasic character (an oxide of $PH_3$ which contains OH): its electronic structure is such that it has only one hydrogen atom bound to oxygen, and it is thus a monoprotic oxyacid. Pure hypophosphorous acid forms white crystals that melt at 26.5 °C (79.7 °F). One method of making hypophosphorous acid from sodium hypophosphite is to perform electrodialytic water splitting upon an aqueous solution of sodium hypophosphite.
A related structure: rhombohedral ($R\overline{3}$) iron(III) hypophosphite, $Fe(H_2PO_2)_3$, has been determined by single-crystal X-ray diffraction. The structure consists of [001] chains of $Fe^{3+}$ cations in octahedral sites with $\overline{3}$ symmetry bridged by bidentate hypophosphite anions.
For reference, the oxoacids of phosphorus include hypophosphoric acid ($H_4P_2O_6$), metaphosphoric acid ($HPO_3$), pyrophosphoric acid ($H_4P_2O_7$), hypophosphorous acid ($H_3PO_2$), phosphorous acid ($H_3PO_3$), peroxophosphoric acid ($H_3PO_5$) and orthophosphoric acid ($H_3PO_4$).
|
2021-04-19 04:10:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3478667140007019, "perplexity": 13345.352652259971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00509.warc.gz"}
|
http://mathhelpforum.com/trigonometry/40034-simplifying-trig-expression.html
|
# Math Help - Simplifying a trig expression
1. ## Simplifying a trig expression
[(1 / (cos^2 (x)) - x]*sin(x)
Can someone help me simplify this? Thanks,
Kim
2. Probably not. Why do you think it possible?
3. Originally Posted by Kim Nu
[(1 / (cos^2 (x)) - x]*sin(x)
Can someone help me simplify this? Thanks,
Kim
You've done a good job trying to line the brackets up, but they are unfortunately not correct. The "(" to the left of the "1" is never closed.
4. Originally Posted by Kim Nu
[(1 / (cos^2 (x)) - x]*sin(x)
Can someone help me simplify this? Thanks,
Kim
I cannot see any simplification other than
$\bigg[\frac{1}{\cos^2(x)}-x\bigg]\cdot\sin(x)=\frac{\sin(x)}{\cos^2(x)}-x\cdot\sin(x)=\tan(x)\cdot\sec(x)-x\cdot\sin(x)$
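For anyone who wants to sanity-check that simplification numerically, here is a quick Python spot-check (an addition, not part of the original thread):

```python
# Check (1/cos^2(x) - x)*sin(x) == tan(x)*sec(x) - x*sin(x) at a sample point.
import math

x = 0.5
lhs = (1 / math.cos(x) ** 2 - x) * math.sin(x)
rhs = math.tan(x) / math.cos(x) - x * math.sin(x)
assert abs(lhs - rhs) < 1e-12  # both sides agree to machine precision
```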
|
2015-03-31 20:50:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881983160972595, "perplexity": 4116.836482538894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131301015.31/warc/CC-MAIN-20150323172141-00247-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://aptitude.gateoverflow.in/6743/nielit-2019-feb-scientist-d-section-a-18
|
# NIELIT 2019 Feb Scientist D - Section A: 18
Read the following information carefully and answer the questions given below:
1. Five students Sujit, Randhir, Neena, Mihir and Vinay have total five books on subjects Physics, Chemistry, Maths, Biology and English written by authors Gupta, Khanna, Harish, D'Souza and Edwin. Each student has only one book on one of the five subjects.
2. Gupta is the author of Physics book, which is not owned by Vinay or Sujit.
3. Mihir owns the book written by Edwin.
4. Neena owns Maths book. Vinay has English book, which is not written by Khanna. Biology book is written by D'Souza.
Who is the author of Chemistry book?
1. Harish only
2. Edwin only
3. Khanna or Harish
4. Edwin or Khanna
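A quick way to confirm the intended answer is to brute-force the assignment. The Python sketch below is an illustration added here (not part of the original page); it enumerates which subject and which author each student gets, keeping only assignments consistent with clues 1 to 4:

```python
# Brute-force the seating of subjects and authors onto the five students.
from itertools import permutations

students = ["Sujit", "Randhir", "Neena", "Mihir", "Vinay"]
subjects = ["Physics", "Chemistry", "Maths", "Biology", "English"]
authors  = ["Gupta", "Khanna", "Harish", "D'Souza", "Edwin"]

for subj in permutations(subjects):
    owner_subj = dict(zip(students, subj))      # student -> subject (clue 1)
    if owner_subj["Neena"] != "Maths":          # clue 4
        continue
    if owner_subj["Vinay"] != "English":        # clue 4
        continue
    if owner_subj["Sujit"] == "Physics":        # clue 2 (Vinay already ruled out)
        continue
    for auth in permutations(authors):
        owner_auth = dict(zip(students, auth))  # student -> author
        subj_auth = dict(zip(subj, auth))       # subject -> author of that book
        if subj_auth["Physics"] != "Gupta":     # clue 2
            continue
        if owner_auth["Mihir"] != "Edwin":      # clue 3
            continue
        if owner_auth["Vinay"] == "Khanna":     # clue 4: English not by Khanna
            continue
        if subj_auth["Biology"] != "D'Souza":   # clue 4
            continue
        print("Chemistry is written by", subj_auth["Chemistry"])
```

Every consistent assignment makes Mihir the owner of the Chemistry book written by Edwin, so the answer is option 2: Edwin only.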
|
2021-03-03 21:29:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2466367781162262, "perplexity": 1887.7070479761367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367790.67/warc/CC-MAIN-20210303200206-20210303230206-00120.warc.gz"}
|
https://projecteuclid.org/euclid.die/1356038627
|
Differential and Integral Equations
On the regularity criteria for the generalized Navier-Stokes equations and Lagrangian averaged Euler equations
Abstract
We obtain some regularity conditions for solutions of the 3D generalized Navier-Stokes equations with fractional powers of the Laplacian, in terms of the velocity, the vorticity, and the pressure in Besov space, Triebel-Lizorkin space, and Lorentz space, respectively. We also present a regularity condition for the 3D Lagrangian averaged Euler equations.
Article information
Source
Differential Integral Equations, Volume 21, Number 5-6 (2008), 443-457.
Dates
First available in Project Euclid: 20 December 2012
|
2019-01-15 23:12:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9428837299346924, "perplexity": 859.436237188952}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583656530.8/warc/CC-MAIN-20190115225438-20190116011438-00214.warc.gz"}
|
https://en-academic.com/dic.nsf/enwiki/1945020
|
# Anomalous monism
Anomalous monism is a philosophical thesis about the mind-body relationship. It was first proposed by Donald Davidson in his 1970 paper "Mental events". The theory is twofold and states that mental events are identical with physical events, and that the mental is anomalous, i.e. under their mental descriptions these mental events are not regulated by strict physical laws. Hence, Davidson proposes an identity theory of mind without the reductive bridge laws associated with the type-identity theory. Since the publication of his paper, Davidson has refined his thesis and both critics and supporters of anomalous monism have come up with their own characterizations of the thesis, many of which appear to differ from Davidson's.
Overview
Considering views about the relation between the mental and the physical as distinguished first by whether or not mental entities are identical with physical entities, and second by whether or not there are strict psychophysical laws, we arrive at a fourfold classification: (1) "nomological monism", which says there are strict correlating laws, and that the correlated entities are identical (this is usually called type physicalism); (2) "nomological dualism", which holds that there are strict correlating laws, but that the correlated entities are not identical (parallelism and pre-established harmony); (3) "anomalous dualism", which holds there are no laws correlating the mental and the physical, and that the substances are ontologically distinct (i.e. Cartesian dualism); and (4) "anomalous monism", which allows only one class of entities, but denies the possibility of definitional and nomological reduction. Davidson put forth his theory of anomalous monism as a possible solution to the mind-body problem.
Since every mental event is some physical event or other, the idea is that someone's thinking at a certain time, for example, that snow is white, is a certain pattern of neural firing in their brain at that time, an event which can be characterized as both a thinking that snow is white (a type of mental event) and a pattern of neural firing (a type of physical event). There is just one event that can be characterized both in mental terms and in physical terms. If mental events are physical events, they can at least in principle be explained and predicted, like all physical events, on the basis of laws of physical science. However, according to anomalous monism, events cannot be so explained or predicted as described in mental terms (such as "thinking", "desiring" etc), but only as described in physical terms: this is the distinctive feature of the thesis as a brand of physical monism.
Davidson's classic argument for AM
Davidson makes what even his opponents have called an "ingenious" argument for his version of non-reductive physicalism. The argument relies on the following three intuitively compelling principles:
1. "The principle of causal interaction": there exist both mental-to-physical as well as physical-to-mental causal interactions.
2. "The principle of the nomological character of causality": all events are causally related through strict laws.
3. "The principle of the anomalism of the mental": there are no psycho-physical laws which relate the mental and the physical as just that, mental and physical.
Causal interaction
The first principle follows from Davidson's view of the ontology of events and the nature of the relationship of mental events (specifically propositional attitudes) with physical actions. Davidson subscribes to an ontology of events where events (as opposed to objects or states of affairs) are the fundamental, irreducible entities of the mental and physical universe. His original position, as expressed in "Actions and Events", was that event-individuation must be done on the basis of causal powers. He later abandoned this view in favour of the individuation of events on the basis of spatio-temporal localization, but his principle of causal interaction seems to imply some sort of, at least, implicit commitment to causal individuation. According to this view, all events are caused by and cause other events and this is the chief, defining characteristic of what an event is.
Another relevant aspect of Davidson's ontology of events for anomalous monism is that an event has an indefinite number of properties or aspects. An event such as "the turning on of the light-switch" is not fully described in the words of that particular phrase. Rather, "the turning on of the light-switch" also involves "the illumination of the room", "the alerting of the burglar in the kitchen", etc... Since a physical event, such as the action of turning on the light-switch can be associated with a very large variety of mental events (reasons) which are potentially capable of rationalizing the action "a posteriori", how is it possible to choose the real cause of my turning on the light-switch (which event is the causal one)? Davidson says that the causal event, in such a case, is the particular reason that "caused" the action to occur. It was "because" I wanted to see better that I turned on the light-switch and not "because" I wanted to alert the burglar in the kitchen. The latter is just a sort of side effect. So, for Davidson, "reasons are causes" and this explains the causal efficacy of the mental.
Nomological character of causality
The principle of the "nomological character of causality" (or "cause-law principle") requires that events be covered by so-called strict laws. Davidson originally assumed the validity of this principle but, in more recent years, he felt the need to provide a logical justification for it. So what is a strict law?
Strict Laws
Whenever a particular event E1 is causally related to a second particular event E2, there must be, according to Davidson, a law such that (("C1" & "D1") -> "D2"), where "C1" represents a set of preliminary conditions, "D1" is a description of E1 which is sufficient, given "C1", for an occurrence of an event of the kind "D2", which represents the description of E2. The cause-law principle was intended by Davidson to take in both laws of temporal succession as well as bridge laws. Since Davidson denies that any such laws can involve psychological predicates (including such laws as "(M1 & M2) -> M3", wherein the predicates are "all" psychological, or mixed laws such as "((M1 & M2) -> P1) and ((P1 & P2) -> M1)"), it follows that such bridge laws as "P1 -> M1", "M1 -> P1" or "M1 if and only if P1" are to be excluded.
However, mental predicates may be allowed in what are called "hedged laws" which are just strict laws qualified by "ceteris paribus" (all other things being equal) clauses. What this means is that while the generalization "(M1 & M2) -> P1" is justifiable "ceteris paribus", it cannot be fully elaborated in terms of, e.g., "(P2 & P3 & M1 & M2 & M3) -> P1".
Justification of Cause-Law
Davidson defended the cause-law principle by revising Curt John Ducasse's (1926) attempt to define singular causal relations without appealing to covering laws. Ducasse's account of cause was based on the notion of change. Some particular event "C" is the cause of some effect "E" if and only if "C" was the only change that occurred in the immediate environment of "E" just prior to the occurrence of "E". So, for example, the striking of a match is the cause of the flaming of the match to the extent that the striking is the only change that occurs in the immediate vicinity of the match.
Davidson turns this around and asks if it is not the case that our notions of change do not, rather, appeal to a foundation of laws. Davidson first observes that "change" is just shorthand for "change of predicate", in that a change occurs when and only when a predicate that is true (false) of some object later becomes false (true) of that object. Second, and more importantly, the notion of change has itself changed over time: under Newtonian physics, continuous motion counts as change but not in Aristotelian physics. Hence, change is theory-dependent and presupposes a background notion of laws. Since change is fundamental to the concept of cause and change is dependent on laws, it follows that cause is also dependent on laws.
The anomalism of the mental
The third principle requires a different justification. It suggests that the mental cannot be linked up with the physical in a chain of psycho-physical laws such that mental events can be predicted and explained on the basis of such laws. This principle arises out of two further doctrines which Davidson espoused throughout his life: the normativity of the mental and semantic holism.
Normativity
Propositional attitude ascriptions are subject to the constraints of rationality and, so, in ascribing one belief to an individual, I must also ascribe to him all of the beliefs which are logical consequences of that ascription. All of this is in accordance with the principle of charity, according to which we must "try for a theory that finds him consistent, a believer of truths, and a lover of the good" (Davidson 1970). But we can never have all the possible evidence for the ascription of mental states for they are subject to the indeterminacy of translation and there is an enormous amount of subjectivity involved in the process. On the other hand, physical processes are deterministic and descriptive rather than normative. Therefore, their base of evidence is closed and law-governed.
Holism
A beautiful illustration of the point that holism of the mental generates anomalism is offered by Vincenzo Fano. Fano asks us to first consider the attribution of length to a table. To do this, we must assume a set of laws concerning the interaction between the table and the measuring apparatus: the length of the table doesn't vary significantly during the measurement, length must be an additive quantity, "longer than" must be an asymmetric, transitive relation and so forth. By assuming these laws and carrying out a few operations, we reach the result of the measurement. There is a certain amount of holism in this process. For example, during the measurement process, we might discover that the table is much hotter than the measuring device, in which case the length of the latter will have been modified by the contact. Consequently, we need to modify the temperature of the measuring device. In some cases, we will even have to reconsider and revise some of our laws. This process can continue for some time until we are fairly confident of the results obtained. But it is not only necessary to have a theory of the interactions between the table and the measuring device, it is also necessary to attribute a set of predicates to the table: a certain temperature, rigidity, electric charge, etc... And the attribution of each of these predicates presupposes, in turn, another theory. So, the attribution of "F" to "x" presupposes "Px" and the theory $T_f$, but "Px", in turn, presupposes "P'x" and $T_p$ and so on. As a result, we have a series of predicates "F", "P", $P'$, $P''$... and a series of theories $T_f$, $T_p$, $T_{p'}$.... As Fano states it, "this process would seem like a "regressus ad infinitum", if it weren't that $T_f + T_p + T_{p'} + T_{p''}$ converges toward a theory "T" which is nothing other than physics in its entirety." The same is true of the predicates, which converge toward the set of all the possible physical quantities. Fano calls this "convergent holism".
How can all this be formalized? (Fano's illustration concerns a man, Thomas, who has been asked whether his relationship can continue.) At the beginning, we attributed the predicate "no" to Thomas as a direct response to our question. This is a physical predicate "F". We can call the attribution of Thomas' belief that the relationship cannot continue "m". From "Fx", we cannot deduce "mx". On the basis of the hypothesis that a person who is angry is not capable of examining their own opinions clearly, we asked Thomas if he was angry. We ascribed to him the mental predicate "m1" and the physical predicate "F1" (the answer "yes" to the question whether he is angry). Now, we can deduce "m1" (the fact that he is angry) from "F1". But from "m1" and "F1", we can deduce neither "m" (the fact that Thomas believes the relationship cannot continue) nor "not m". So we continue by attributing the next physical predicate "F2" (the positive answer to our question whether he will be of the same opinion in one month).
From "F2", "F1" and "m1", we would like to deduce "not m". But we weren't sure what Thomas was thinking about during his pause, so we asked him to tell us and, on the basis of this response "F3", we deduce "m2" (that Thomas confuses his desires with his beliefs). And so on ad infinitum. The conclusion is that the holism of the mental is "non-convergent" and therefore it is anomalous with respect to the physical.
So how are the three seemingly irreconcilable principles above resolved? Davidson distinguishes causal relations, which are an extensional matter and not influenced by the way they are described, from law-like relations, which are intensional and dependent on the manner of description. There is no law of nature under which events fall when they are described according to the order in which they appeared on the television news. When the earthquake caused the Church of Santa Maria dalla Chiesa to collapse, there is surely some physical law (or laws) which explains what happened, but not under the description in terms of the event on Channel 7 at six p.m. causing the events on Channel 8 at six fifteen. In the same way, mental and physical events are causally related but not "qua" mental events. The mental events have explanatory predicates which are physical as well as predicates which are irreducibly mental. Hence, AM is a form of predicate dualism which accompanies ontological monism.
Finally, for those who objected that this is not really a form of physicalism because there is no assurance that every mental event will have a physical base, Davidson formulated the thesis of supervenience. Mental properties are dependent on physical properties and there can be no change in higher-level properties without a corresponding change in lower-level properties.
Arguments against AM and replies
Ted Honderich has challenged the thesis of anomalous monism, forcing, in his words, the "inventor of anomalous monism to think again". To understand Honderich's argument, it is helpful to describe the example he uses to illustrate the thesis of AM itself: the event of two pears being put on a scale causes the event of the scale's moving to the two-pound mark. But if we describe the event as "the two French and green things caused the scale to move to the two-pound mark", then while this is true, there is no lawlike relation between the greenness and Frenchness of the pears and the pointers moving to the two-pound mark.
Honderich then points out that what we are really doing when we say that there is "no lawlike relationship between two things under certain descriptions" is taking certain properties and noting that the two things are not in relation in virtue of those particular properties. But this does not mean they are not in lawlike relation in virtue of certain other properties, such as weight in the pears example. On this basis, we can formulate the generalization that Honderich calls "the Nomological Character of Causally-Relevant Properties". Then we ask what the causally relevant properties of the mental events which cause physical events are.
Since Davidson believes that mental events are causally efficacious (i.e. he rejects epiphenomenalism), it must be mental events as such (the mental properties of mental events) which are the causally relevant properties. But if we accept the first two claims of the argument for AM, along with the idea of the causal efficacy of the mental, and the Principle of Causally-Relevant Properties, then the result is a denial of anomalous monism because there are indeed psycho-physical lawlike connections. On the other hand, if we wish to retain the principle of the anomalism of the mental then we must reject causal efficacy and embrace epiphenomenalism.
Davidson has responded to such arguments by reformulating anomalous monism and has defended the improved version in "Thinking Causes". He points out that the defect in the so-called "epiphenomenalism problem" lies in its confusion of the concept "by virtue of" (or necessary for) with the idea of an event's being responsible for another. Also, Honderich's example of the pears and the scale is jerry-rigged in such a way that only a single effect is taken into consideration: the alteration on the scale. But the action of placing pears on a scale can have many different effects; it can attract the attention of a customer, for example. In this case, the causally relevant properties would be precisely the color, shape and other "irrelevant" properties of the fruit. What is relevant or irrelevant therefore depends, in part, on the context of explanatory interest.
See also
* Neutral monism
References
*Davidson, D. (1970) "Mental Events", in "Actions and Events", Oxford: Clarendon Press, 1980.
*Davidson, D. (1993) "Thinking Causes", in J. Heil and A. Mele (eds) "Mental Causation", Oxford: Clarendon Press.
*Honderich, T. (1982) "The Argument for Anomalous Monism", "Analysis" 42:59-64.
*Honderich, T. (1984) "Smith and the Champion of Mauve", "Analysis" 44:86-89.
*Fano, V. (1992) "Olismi non convergenti" (Non-convergent holisms) in Dell Utri, Massimo (ed). "Olismo", Quodlibet. 1992.
*Child, W. (1993) "Anomalism, Uncodifiability, and Psychophysical Relations", "Philosophical Review" 102: 215-45.
*Davidson, D. (1973) "The Material Mind", in "Actions and Events", Oxford: Clarendon Press, 1980.
*Davidson, D. (1974) "Psychology as Philosophy", in "Actions and Events", Oxford: Clarendon Press, 1980.
*Davidson, D. (1995) "Donald Davidson", in S. Guttenplan (ed.) "A Companion to the Philosophy of Mind", Oxford: Blackwell.
*Ducasse, C.J. (1926) "On the Nature and Observability of the Causal Relation", "Journal of Philosophy" 23:57-68.
*Honderich, T. (1981) "Psychophysical Lawlike Connections and their Problem", "Inquiry" 24: 277-303.
*Kim, J. (1985) "Psychophysical Laws", in E. LePore and B.P. McLaughlin (eds) "Actions and Events: Perspectives on the Philosophy of Donald Davidson", Oxford: Blackwell.
*LePore, E. and McLaughlin, B.P. (1985) "Actions and Events: Perspectives on the Philosophy of Donald Davidson", Oxford: Blackwell.
*McLaughlin, B.P. (1985) "Anomalous Monism and the Irreducibility of the Mental", in E. LePore and B.P. McLaughlin (eds) "Actions and Events: Perspectives on the Philosophy of Donald Davidson", Oxford: Blackwell.
*Stanton, W.L. (1983) "Supervenience and Psychological Law in Anomalous Monism", "Pacific Philosophical Quarterly" 64: 72-9.
* [http://host.uniroma3.it/progetti/kant/field/am.htm Anomalous Monism] in " [http://host.uniroma3.it/progetti/kant/field/index.html A Field Guide to the Philosophy of Mind] "
* [http://plato.stanford.edu/entries/anomalous-monism/ Anomalous Monism] by Steven Yalowitz, in the "Stanford Encyclopedia of Philosophy"
* [http://consc.net/biblio/3.html#3.5d Bibliography on Anomalous Monism] in " [http://consc.net/biblio.html Contemporary Philosophy of Mind: An Annotated Bibliography] "
* [http://www.iep.utm.edu/m/anom-mon.htm Mind and Anomalous Monism] in "The Internet Encyclopedia of Philosophy"
|
2021-11-29 15:33:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6635615825653076, "perplexity": 2351.152305632986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358774.44/warc/CC-MAIN-20211129134323-20211129164323-00558.warc.gz"}
|
http://crypto.stackexchange.com/tags/modes-of-operation/hot?filter=all
|
# Tag Info
26
The really simple explanation for the difference between the two is this: ECB (electronic code book) is basically raw cipher. For each block of input, you encrypt the block and get some output. The problem with this transform is that any resident properties of the plaintext might well show up in the ciphertext – possibly not as clearly – that's what blocks ...
14
With CBC (Cipher block chaining) mode, before encryption, each block is XOR-ed with the ciphertext of the previous block, to randomize the input to the block cipher (and avoid encrypting the same block twice with the same key, as this would give the same output, and tell the attacker something about the plaintext). As the first block has no previous block, ...
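As an added illustration (not part of the quoted answer), the description above translates almost line for line into code. This minimal sketch assumes the Python "cryptography" package, uses raw AES-ECB as the per-block function $E_K$, and leaves padding to the caller:

```python
# CBC encryption by hand: C_i = E_K(P_i xor C_{i-1}), with C_0 = IV.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % 16 == 0, "caller must pad to the block size"
    block_enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    prev, out = iv, b""
    for i in range(0, len(plaintext), 16):
        block = bytes(p ^ c for p, c in zip(plaintext[i:i + 16], prev))
        prev = block_enc.update(block)   # one raw AES block encryption
        out += prev
    return out

ct = cbc_encrypt(os.urandom(16), os.urandom(16), b"exactly 16 bytes")
```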
14
There is only one main difference between PKCS#5 and PKCS#7 padding is the block size. PKCS#5 padding is only defined for 8-byte block sizes. PKCS#7 padding would work for any block size from 2 to 255 bytes. This is the definition of PKCS#5 padding (6.2): The padding string PS shall consist of 8 - (||M|| mod 8) octets all having value 8 - (||M|| mod ...
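To make the rule concrete, here is a short pad/unpad sketch in Python (an illustration added here, not part of the quoted answer); with block_size=8 it is exactly PKCS#5 padding:

```python
# PKCS#7: append n bytes of value n, n = block_size - (len(msg) % block_size).
def pkcs7_pad(msg: bytes, block_size: int = 16) -> bytes:
    n = block_size - (len(msg) % block_size)   # n is always in 1..block_size
    return msg + bytes([n]) * n

def pkcs7_unpad(padded: bytes) -> bytes:
    n = padded[-1]
    if not 1 <= n <= len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return padded[:-n]

assert pkcs7_unpad(pkcs7_pad(b"hello")) == b"hello"
```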
13
The algorithm (now reasonably clear) is reminiscent of a block cipher in CFB mode, with $random$ as the IV (which can be public), $secret$ as the key, and MD5 used as keystream generator instead of the block cipher. Decryption works as in CFB: $$M_1 = C_1 \oplus \operatorname{MD5}(secret||random), \qquad M_n = C_n \oplus \operatorname{MD5}(secret||C_{n-1}) \ldots$$
12
If you look closely at the definition of authenticated encryption modes, you will see they all are, actually, the combination of symmetric encryption and a MAC. Using traditional encryption and an independent MAC has a few tricky points, none of them being unsolvable: The encryption mode will use a key, and the MAC will also use a key; using the same key ...
12
The initialization vector is a property of the mode of operation (aka "chaining mode"), not of the block cipher itself. A block cipher does only one thing, which is mapping blocks (block size depends on the cipher, 64-bit for DES, 128-bit for AES) unto other blocks. The chaining mode is what says how input data should be transformed into block values, and ...
12
The crucial difference between plain encryption and authenticated encryption (AE) is that AE additionally provides authenticity, while plain encryption provides only confidentiality. Let's investigate in detail these two notions. In the further text, we assume $K$ to be a secret key, which is known to authorized parties, but unknown to attackers. Goals ...
11
ECB and CBC are only about encryption. Most situations which call for encryption also need, at some point, integrity checks (ignoring the threat of active attackers is a common mistake). There are combined modes which do encryption and integrity simultaneously; see EAX and GCM (see also OCB, but this one has a few lingering patent issues; assuming that ...
11
After reading the paper How to Break XML Encryption (thanks to Krzysztof for the link), here are my two cents. This attack relies on the fact that a CBC-ciphertext C = (IV, C1, ... Cd) can be decomposed into pairs of (IV, C1), (C1, C2), (C2, C3), ... (C(d-1), Cd), each of which is also a valid CBC ciphertext for the same key, relating to the corresponding ...
11
A block cipher is an invertible transformation that maps an $n$ bit block of bits to an $n$ bit block of bits, under the control of a key (and where $n=128$ in the case of AES) Now, we most often need to do things other than mapping blocks of $n$ bits; how we do that is using the block cipher within a Mode of Operation. A mode of operation is just a way to ...
10
Neither. It means that an attacker can decrypt all messages that have been encrypted using this standard. The attack is a padding oracle attack. That means that, if the attacker has a ciphertext they want to decrypt, they can send several variations of the ciphertext to the server. By analyzing the server's responses (e.g., error messages returned), it ...
10
While you do operate block-by-block when generating the pseudorandom stream, the actual encryption step (i.e., the XOR) is bitwise, and therefore does not require the message to be padded. For example, the message "Hello" will be processed as follows (pseudocode):
    byte stream[16] = AES(Key, Nonce);
    byte plaintext[5] = "Hello";
    byte ciphertext[5];
    for i ...
10
The security of that approach is equivalent to that of normal CBC. Your scheme with first plaintext block $IV^\prime$ is clearly identical to normal CBC with $IV=AES(IV^\prime)$. Since a block cipher is a permutation over a block, a uniformly random first plaintext block will lead to a uniformly random IV for normal CBC. A ciphertext produced with your ...
10
There are some serious problems with this design that would preclude it from being standardized, so it probably does not have a name. The 2 visibly main flaws are as follows: If the plaintext follows a pattern similar to the block counter, the block cipher inputs may repeat, exposing information about the plaintext (exact same issue as reuse of nonce, but ...
9
Each mode of operation has its own IV requirements. Some need uniform, unpredictable randomness. Other are equally happy with just uniqueness. CBC is well-known for its need of an IV chosen randomly and uniformly among the possible IV values, and such that an attacker who can choose the text to encrypt may not predict the IV value before submitting the said ...
9
If you look at the CBC diagram, you'll see that having a fixed IV is equivalent to having the first ciphertext block become the IV. If your cipher is a good pseudorandom permutation, then what you are doing does work, if and only if all timestamps are unique such that the "new IV" is unique and unpredictable. And in fact, if you do not use the ...
8
SHA-256(SHA-256(x)) was proposed by Ferguson and Schneier in their excellent book "Practical Cryptography" (later updated by Ferguson, Schneier, and Kohno and renamed "Cryptography Engineering") as a way to make SHA-256 invulnerable to "length-extension" attack. They called it "SHA-256d". We started using SHA-256d for everything when we launched the ...
8
The flaw in CBC which the recent BEAST attack exploits occurs when the attacker can choose part of the encrypted message while knowing the IV which will be used. In the case of SSL/TLS, data is split into successive records, each record being "a message" in its own right. The attacker produces some data, observes the corresponding record, and knows that the ...
8
Assuming that you can indeed guarantee that the keys will never be reused, both schemes should be secure. The only requirement for the nonce in CTR mode is that it must be unique (and, if used directly as the initial counter value, not equal to any intermediate counter value used in the past or in the future). If you're only encrypting one message with a ...
8
Free space and used space look exactly the same to someone who only sees one version of the ciphertext. First, the basic idea of a secure block cipher is that you learn nothing about the plaintext block simply by observing the ciphertext block. You may be able to learn something about the plaintext from the surrounding context, such as by collecting more ...
8
It's not clear from your decryption what the algorithm is used for. But you should be aware that while at first glance it provides privacy : it's a weird mode CFB with md5 used as a block cipher ; it doesn't provide authenticity. A simple bit flip of the ciphertext will result in the corresponding bit being flipped in the plaintext and such a bit flip ...
7
The paper you cite (Deterministic Authenticated-Encryption...) gives quite a bit of useful information (but I'm assuming you already knew that). It looks like a pretty good read (I'll let you know if that assumption holds after I finish it). For why simpler constructions (CBC/CTR with a MAC or even AEX mode) don't satisfy (emphasis added): A key-wrap ...
7
Given only what you've said, and assuming the keys are created and stored in a strong manner, using a different key to encrypt database entries mitigates the problem of ECB mode. Namely that identical plaintext, when encrypted with the same key always outputs the same ciphertext. No security is gained by switching to CBC mode (assuming you can easily store ...
7
You say that a random IV "would also be unique", but really that is the crux of the problem. The problem with counter mode is that it is secure unless the same counter is used twice; if it is, it is likely that an attacker will be able to recover both plaintext messages. This contrasts with CBC mode, which if you repeat an IV, it has the relatively benign ...
7
A reason to use CBC (or CFB) over CTR and OFB could be that they are a bit more misuse-resistant: If you use CBC with a repeated initialization vector, a (read-only) attacker only can get the fact that the plaintexts are equal up to some block, and not much more (and from the first different block the rest is different). With CTR and OFB, a repeated ...
7
Well, with CFB mode, the encryption process is "take the most recent ciphertext block, pass it through the block cipher, and then exclusive-or that with the plaintext block to generate the next ciphertext block". As for the IV, that's used as "the most recent ciphertext block" when encrypting the first plaintext block (where you don't have a most recent ...
7
Thomas is correct; there's no attack on CFB mode if you can predict the IV; NIST is just being cautious. With CBC, the value of the first encrypted block $C_0 = E_k( IV \oplus P_0)$, where $IV$ is the IV used for that packet, $P_0$ is the value of the first plaintext block, and $E_k$ is the evaluation of the block cipher. If an attacker can predict the ...
7
XTS vs. Undiffused CBC. The issue here is malleability. Both XTS and CBC prevent an attacker from learning information about encrypted data. However, neither one completely succeeds in preventing an attacker from tampering with encrypted data. However, it's arguably easier to tamper with an (undiffused) CBC ciphertext than it is to tamper with an XTS ...
7
If you could use the same IV, then yes, you would need to rewrite everything after the modified block. But you shouldn't do that; every time the contents change, you should generate a new IV, which would require the whole file to be rewritten. Otherwise an attacker can learn more information than it should about how the file changed (precisely by checking ...
|
2014-04-23 12:38:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37081170082092285, "perplexity": 1478.969520369668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://crypto.stackexchange.com/questions/34477/using-aes-ctr-to-generate-aes-subkeys-from-a-master-key-and-nonce
|
# Using AES-CTR to generate AES subkeys from a master key and nonce
Suppose I would like to encrypt some data using AES-GCM because it has hardware support on my processor and it is secure.
Unfortunately, I don't like the 96-bit nonce size, because I want to randomly generate the nonce each time (to avoid having to keep state).
So instead, I use my 256-bit master key to encrypt 32 null bytes with AES in CTR mode with a 127-bit nonce, and use the resulting keystream as the AES-GCM key.
I have now a $96 + 127 = 223$ bit nonce, which is plenty long enough to generate afresh each time.
I presume that this is secure since a block cipher in CTR mode can be used as a CSPRNG (as here).
Am I correct?
If I understand correctly:
You have a 256-bit master key $K0$
Generate a random 128-bit value $N0$ and set the last bit to 0
Generate a 256-bit keystream $K1$ using $E_{K0}(N0)$ || $E_{K0}(N0 \oplus 1)$
Generate a random 96-bit value $N1$
Using $K1$ as the key, and $N1$ as the nonce, encrypt your message with AES256-GCM, and authenticate $N0$ as associated data
Include $N0$ with the ciphertext
As you suggested, this should give a 127-bit effective nonce size for generation of the key $K1$, plus a 96-bit nonce size for the actual nonce, resulting in 16 byte ciphertext expansion, and minimal extra computation, but now with a $2^{-111.5}$ probability of a key/nonce pair reuse.
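As an illustration only (not from the original answer), here is a minimal Python sketch of this scheme using the pyca/cryptography package; it relies on the fact that AES-CTR started at the counter block $N0$ (last bit cleared) and fed 32 zero bytes produces exactly $E_{K0}(N0)$ || $E_{K0}(N0 \oplus 1)$:

```python
# Sketch of the scheme above; assumes the "cryptography" package is installed.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

K0 = os.urandom(32)                 # 256-bit master key

N0 = bytearray(os.urandom(16))
N0[-1] &= 0xFE                      # clear the last bit: a 127-bit random nonce
N0 = bytes(N0)

# Encrypting 32 zero bytes in CTR mode starting at counter block N0 yields
# E_K0(N0) || E_K0(N0 + 1), and N0 + 1 = N0 xor 1 since the last bit is 0.
ctr = Cipher(algorithms.AES(K0), modes.CTR(N0)).encryptor()
K1 = ctr.update(b"\x00" * 32)       # 256-bit per-message GCM key

N1 = os.urandom(12)                 # 96-bit GCM nonce
ct = AESGCM(K1).encrypt(N1, b"attack at dawn", N0)  # N0 authenticated as AAD
# Transmit (N0, N1, ct); the receiver re-derives K1 from K0 and N0.
```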
Because there are only $2^{127}$ possible keys, you may want to consider additional ciphertext expansion, and instead generate a 256-bit nonce $N0$, encrypting the whole thing with ECB or CBC to get $K1$.
Be aware, security proofs may include keeping the nonce $N0$ (effective state) of the PRNG secret, and here you are providing it as part of the ciphertext. I would do it like this:
You have a 256-bit master key $K0$
Generate a random 256-bit value $N0$
Generate 256-bit key $K1$ by hashing $N0$ with something like SHA-384
Generate a random 96-bit value $N1$ (optional)
Generate $S$ = $E_{K0}(N0)$
Using $K1$ as the key, and $N1$ as the nonce, encrypt your message with AES256-GCM, and authenticate $S$ as associated data
Include $S$ with the ciphertext
Encryption only requires a single hash iteration beyond the initial scheme, in addition to the ciphertext expansion from $S$. Decryption of the message requires decryption of $S$ followed by hashing to recover $K1$. In this scenario a GCM nonce $N1$ is not required to be generated, since you are using a new 256-bit key with each message you can use a fixed nonce. You could also get $N1$ from the unused remainder of the hash output. The GCM key is also protected by encrypting the state of the PRNG, which is now a random input to a hash function.
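And a matching sketch of this second variant (again just an illustration; I take the first 32 bytes of SHA-384 as $K1$ and the next 12 as $N1$, and use raw two-block ECB for $S$ = $E_{K0}(N0)$, one of the options mentioned above):

```python
# Sketch of the hash-based variant; assumes the "cryptography" package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

K0 = os.urandom(32)
N0 = os.urandom(32)                 # fresh 256-bit secret PRNG state

h = hashes.Hash(hashes.SHA384())
h.update(N0)
digest = h.finalize()               # 48 bytes
K1, N1 = digest[:32], digest[32:44] # key and nonce from one hash output

# S = E_K0(N0): the state is uniformly random and encrypted exactly once,
# so raw two-block ECB is acceptable here.
ecb = Cipher(algorithms.AES(K0), modes.ECB()).encryptor()
S = ecb.update(N0)

ct = AESGCM(K1).encrypt(N1, b"attack at dawn", S)   # S authenticated as AAD
# Transmit (S, ct); the receiver decrypts S with K0, rehashes N0 for K1 and N1.
```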
It is probably not insecure to do it your way, I just do not feel good about exposing $N0$ directly. If the system RNG has some problem, exposure of $N0$ (or even $N1$) could give an attacker information they could use against you in some way. Dual-EC is a good example.
• The reason I was not worried about exposing $N0$ is that it I thought that an attacker could not derive any useful information from it assuming that AES-CTR is secure. – Demi Apr 13 '16 at 16:01
• One disadvantage of your approach is that the new randomness must be completely random, as opposed to just being widely distributed (to prevent collisions). – Demi Apr 13 '16 at 16:20
|
2021-06-25 13:49:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24565637111663818, "perplexity": 1294.2088728174654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630175.17/warc/CC-MAIN-20210625115905-20210625145905-00211.warc.gz"}
|
http://math.stackexchange.com/questions/419998/why-g-n-is-measurable-in-the-proof-of-fatous-lemma
|
# why $g_n$ is measurable in the proof of Fatou's Lemma
Fatou Lemma: Suppose $\{f_n\}$ is a sequence of measurable functions with $f_n \geq 0$. If $\lim_{n\rightarrow\infty}f_n(x)=f(x)$ for a.e. $x$, then $$\int f \leq \liminf_{n\rightarrow\infty}\int f_n$$
Proof: Suppose $0\leq g \leq f$, where $g$ is bounded and supported on a set $E$ of finite measure. If we set $g_n(x)=\min(g(x),f_n(x))$, then $g_n$ is measurable. etc..
I don't know why $g_n$ is measurable, because the author didn't assume $g$ is measurable. Can anyone explain why? Thanks very much.
The assumption that $g$ is measurable is implicit but very much necessary here. – Did Jun 14 '13 at 5:52
Just to note that there are proofs that do not use that $g$ is measurable. For example: let $g_n(x)=\inf_{k\ge n} f_k(x)=\inf\{f_n(x),f_{n+1}(x),\ldots\}.$ Then $g_n \le f_n$ for every $n \in \mathbb{N}$ and $g_n \uparrow \displaystyle\liminf_{n\to \infty}f_n=f$. Then using the monotone convergence theorem we find $$\int_A f = \int_A \lim_{n \to \infty} g_n = \lim_{n\to \infty} \int_A g_n = \liminf_{n\to \infty} \int_A g_n \le \liminf_{n \to \infty} \int_A f_n$$ To prove that $g_n(x)=\inf_{k\ge n} f_k(x)$ is measurable note that $g_n^{-1}([c,+\infty])=\bigcap_{k=n}^{\infty} \{x \in X: f_k(x)\ge c\}$ – Cortizol Jun 14 '13 at 7:35
## 1 Answer
$g$ is assumed to be measurable, but the author didn't explicitly say so.
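For completeness, the standard one-line argument that a minimum of measurable functions is measurable (added here; it is not part of the original answer):
$$\{x : g_n(x) < c\} = \{x : \min(g(x), f_n(x)) < c\} = \{x : g(x) < c\} \cup \{x : f_n(x) < c\} \quad \text{for every } c \in \mathbb{R},$$
so $\{g_n < c\}$ is a union of two measurable sets, and $g_n = \min(g, f_n)$ is measurable whenever $g$ and $f_n$ are.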
|
2015-11-28 11:38:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9675515294075012, "perplexity": 147.63083853340953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398452385.31/warc/CC-MAIN-20151124205412-00335-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://thecoffeeparlor.com/nike-swoosh-utm/aluminum-tread-plate-thickness-82f40c
|
Question: Which of the following compounds show sp2 hybridization?

Based on the Lewis structure for CH2O, consider the following: (b) the orbital hybridization about the carbon atom is which of the following?

The hybridization of the central nitrogen atom in the molecule N2O is: A) sp B) sp2 C) sp3 D) sp3d E) sp3d2. Nitrogen is the central atom in molecules of nitrous oxide, N2O. The central nitrogen is sp hybridized, and the molecule is linear with bond angles of 180 degrees. Drawing the Lewis structure gives the terminal N a formal charge of 0, the central N a formal charge of +1, and the oxygen a formal charge of -1. The terminal oxygen has three lone pairs and one sigma bond, so it is sp3 hybridized.

Hybridization is equal to the number of sigma bonds plus lone pairs on the atom. In sp3 hybridization, one s orbital and three p orbitals hybridize to form four sp3 orbitals, each consisting of 25% s character and 75% p character. The oxygen in H2O has six valence electrons; after hybridization these six electrons are placed in the four equivalent sp3 hybrid orbitals, two completely filled with electron pairs and two holding one unpaired electron each.

Due to the repulsive forces between the pairs of electrons, CO2 takes up a linear geometry, and its carbon shows sp hybridization. According to the VSEPR theory, the shape of the SO3 molecule is trigonal planar. Around an sp3d central atom (for example, the As atom in AsF5) the bond angles are 90 and 120 degrees. Nitrogen can hybridize in the sp2 or sp3 state, depending on whether it is bonded to two or three atoms. Carbon atoms also form the compound ethylene (ethene), in which each carbon atom is sp2 hybridized with a trigonal planar shape; ozone likewise has sp2 hybridization at its central atom.
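The "sigma bonds + lone pairs" counting rule is mechanical enough to put into code. Below is a minimal illustrative VBA sketch (the function name and the worked examples in the comments are my own, simply applying the rule as stated above):

Public Function Hybridization(ByVal sigmaBonds As Long, ByVal lonePairs As Long) As String
    ' Steric number = sigma bonds + lone pairs around the central atom
    Select Case sigmaBonds + lonePairs
        Case 2: Hybridization = "sp"    ' e.g. central N in N2O, C in CO2
        Case 3: Hybridization = "sp2"   ' e.g. C in CH2O, central O in O3
        Case 4: Hybridization = "sp3"   ' e.g. O in H2O
        Case 5: Hybridization = "sp3d"  ' e.g. As in AsF5
        Case 6: Hybridization = "sp3d2"
        Case Else: Hybridization = "unknown"
    End Select
End Function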
|
2022-05-27 03:01:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39652395248413086, "perplexity": 4720.6790157586665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662631064.64/warc/CC-MAIN-20220527015812-20220527045812-00380.warc.gz"}
|
https://solvedlib.com/n/the-ratio-of-kellys-age-to-her-father-x27-s-age-is-2-7-in,21303064
|
5 answers
Question:

The ratio of Kelly's age to her father's age is 2:7. In 5 years the sum of their ages will be 55. How old is Kelly's father now?
Answers
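A quick sketch of the algebra (my own working): let Kelly's current age be $2x$ and her father's be $7x$. In 5 years, $(2x+5)+(7x+5)=55$, so $9x=45$ and $x=5$; Kelly is now $2 \cdot 5 = 10$ and her father is $7 \cdot 5 = 35$ years old.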
Similar Solved Questions
1 answer
Create a patient scenario utilizing at least 5 medical terms...
5 answers
Use the right triangle and the given information to solve the triangle: a = 2, c = 4; find b, A, and B. (Round b to the nearest hundredth; round A and B to the nearest tenth, as needed.)...
1 answer
Find the annual percentage yield for an investment at the following rates (round your answers to two decimal places): 7.6% compounded monthly; 79 compounded continuously. Find the future value of an ordinary annuity of $5,000 paid quarterly for years, the interest rat...
n 2013, Doha Trains purchased equipment for $544000. Doha Trains' accumulated book depreciation with respect to the equipment is$414000, and its accumulated tax depreciation(cost recovery) is \$589000. Doha Trains' tax rate is 26%. what is the tax saving ?...
5 answers
Find the correlation coefficient of the data below, showing the number of hours of sleep and the fluid intake of 12 individuals in a day. Hours of Sleep (X): 7.5, 6.5, 7.5, 5.5, 4.5, ...; Fluid intake (Y): 1520, 1300, 1500, 1350, 1400, 900, 800, 1700, 950, 1400. Round off your answer to two decimal places....
5 answers
3) (5 points) Solve the (third-order) linear initial value problem y'''(t) - y''(t) + y'(t) - y(t) = 4e^t, y(0) = 0, y'(0) = 1, y''(0) = 0....
5 answers
Propose a curved-arrow mechanism for the following reaction (6 points): ISO4...
1 answer
h. Purchase of strawberries used in Smucker's Strawberry Preserves. i. Depreciation on food research lab. 4. Classify costs as direct or indirect (Learning Objective 3). Classify each of the following costs as a direct cost or an indirect cost, assuming that the cost object is the Juniors Departmen...
1 answer
1. A sample of gas with an initial volume of 28.4 L at a pressure of 735 mmHg and a temperature of 309 K is compressed to a volume of 14.5 L and warmed to a temperature of 377 K. What is the final pressure of the gas?
2. A cylinder with a moveable piston contains 216 mL of nitrogen gas at a pressure...
5 answers
Which of the following is not characteristic of the restriction enzyme?
1) They recognize specific sequences on a DNA strand
2) which are known as the recognition sequence or restriction site
3) These recognition sequences are palindromes
4) Like other enzymes, restriction enzymes do not show specificity
5) They produce sticky or blunt ends
4 answers
Biking Vectors (6 of 15). A student bikes to school by traveling first 0.900 miles north, then 0.500 miles west, and finally 0.200 miles south. Part F: Finally, find θ, the angle north of west of the path followed by the bird. Express your answer numerically in degrees.
5 answers
Suppose g is a function that is differentiable at x = 1 and that g(1) = -4, g'(1) = 3. Find the value of h'(1) where h(x) = (x^2 + 5) * g(x). (a) 10 (b) 41 (c) -6 (d) -32 (e) None of the above.
1 answer
(Venn diagram values: A 0.2, B 0.5, overlap 0.1.) Given the events A and B above, find the following probabilities: P(A and B), P(A or B), P(A | B), P(B | A), P(not A and B), P(A and not B). Are events A and B independent (yes or no)? Explain why or why not.
5 answers
Find Vc(t), t ≥ 0.
|
2022-07-07 03:40:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4008222222328186, "perplexity": 3547.8659100916916}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00741.warc.gz"}
|
https://codereview.stackexchange.com/questions/200827/macro-to-convert-an-indented-hierarchy-to-a-database-format
|
# Macro to convert an indented hierarchy to a database format
Our business intelligence suite outputs data as an indented hierarchy, as below:
Level 1             1000
    Level 2a         600
        Level 3a     500
        Level 3b     100
    Level 2b         400
        Level 3c     400
I've written a macro that converts this into a 'database' format, where only rows with the base (most granular) level are preserved, and the parents are listed to the left as below (this way the # column is summable):
1 2 3 #
Level 1 Level 2a Level 3a 500
Level 1 Level 2a Level 3b 100
Level 1 Level 2b Level 3c 400
The problem I've been running into is that it takes 5-10 minutes to process a file with ~8000 rows. Although my code works, I'm convinced there's a faster way. See below for my code:
Sub Database()
Application.ScreenUpdating = False
Dim WS As Worksheet
Dim SR As Range
Dim Rows As Integer
Dim Indent As Integer
Dim TR As Integer
Dim BR As Integer
Set WS = ActiveWorkbook.ActiveSheet
'StartCell is a function that returns the address of the first cell in the hierarchy
Set SR = WS.Range(StartCell())
Rows = SR.End(xlDown).Row - SR.Row
BR = SR.End(xlDown).Row
TR = SR.Row
'Insert 4 columns & add headers (Level 1, Level 2, etc.)
For x = 0 To 3
SR.EntireColumn.Insert
SR.Offset(-1, -1) = "Level " & x + 3
Next x
x = 0
q = 0
'The main code
Do While x < Rows + 1
'Identifies a row with base-level indentation & sets indent value to this level
If Left(SR.Offset(x, 0), 5) = "P_PC7" Then
Indent = SR.Offset(x, 0).IndentLevel
End If
i = 0
'Loop while the indentation level is greater than one
Do While Indent > 1
'Move upwards and check whether indentation of new cell is one less than initial cell
If SR.Offset(x - i, 0).IndentLevel = Indent - 1 Then
'If so, this cell is the parent of the initial cell - copy it into the appropriate spot to the left of the base level cell
SR.Offset(x - i, 0).Copy SR.Offset(x, -1 * (6 - Indent))
'Set new indent level - the next loop will now look for the parent of the new cell
Indent = SR.Offset(x - i, 0).IndentLevel
'If indent level is not one less than initial cell, continue moving upward
Else: i = i + 1
End If
Loop
x = x + 1
Loop
'Remove all rows that are not base-level
For q = BR To TR Step -1
If WS.Cells(q, 6).IndentLevel <> 5 Then
WS.Cells(q, 6).EntireRow.Delete
End If
Next q
WS.UsedRange.IndentLevel = 0
Application.ScreenUpdating = True
End Sub
Thanks very much to JNevill for his solution - it is indeed significantly faster than my original code. I had to make some changes to accommodate more than one # column, as well as headers to the left of the indented hierarchy column i.e.:
Region   Base Level    Account 1   Account 2
USA      Level 1             500         800
USA        Level 2a          300         400
USA        Level 2b          200         400
Here is my new code based on JNevill's framework:
Sub HierarchyConvert()
Application.ScreenUpdating = False
Application.Calculation = xlCalculationManual
Dim WS As Worksheet
Dim SR As Range
Dim LastRow As Long
Dim rngReadCell As Range
Dim rngWriteRow As Range
Dim Indent As Integer
Dim LastIndent As Integer
Dim MaxIndent As Integer
Dim ValueArray(0 To 19) As Variant
Set WS = ActiveWorkbook.ActiveSheet
Set SR = WS.Range(StartCell())
LastRow = SR.End(xlDown).Row
MaxIndent = 5
Set rngWriteRow = WS.Rows(SR.Row)
For x = 0 To 4
SR.Offset(0, 1).EntireColumn.Insert
SR.Offset(-1, 1) = "Level " & 7 - x
Next x
SR.Offset(-1, 0) = "Level 2"
SR.Offset(-1, 5) = "PC"
For Each rngReadCell In WS.Range(SR.Address & ":B" & LastRow)
'Determine this row's indent level before comparing it to the previous row's
Indent = rngReadCell.IndentLevel
If Indent <= LastIndent And LastIndent <> 0 Then
Set rngWriteRow = rngWriteRow.Offset(1)
For i = 1 To Indent
rngWriteRow.Cells(1, i + 1).Value = rngWriteRow.Cells(1, i + 1).Offset(-1).Value
Next i
End If
rngWriteRow.Cells(Indent + 2).Value = Trim(rngReadCell.Value)
If Indent = MaxIndent Then
'Copies leftmost header from base-level row to top left of write-row
rngWriteRow.Cells(1) = rngReadCell.Offset(, -1).Value
'Copies data to right of base-level row to the write-row
For Z = 0 To 19
ValueArray(Z) = rngReadCell.Offset(, Z + 6).Value
Next Z
For M = 0 To 19
rngWriteRow.Cells(Indent + M + 3).Value = ValueArray(M)
Next M
End If
LastIndent = Indent
Next rngReadCell
Range("A" & SR.Offset(0, 1).End(xlDown).Row + 1 & ":Z" & LastRow + 1).ClearContents
WS.UsedRange.IndentLevel = 0
Application.ScreenUpdating = True
Application.Calculation = xlCalculationAutomatic
End Sub
In reviewing your code first, there are several things you can do to make your code more consistent.
1. Always use Option Explicit. Please.
2. When you're looking at performance, you can do more than just disable ScreenUpdating. See this answer for my usual solution if you feel you still need it.
3. Typical professional developers will start variable names with lower case letters. Functions/Subs will start with upper case. CamelCaseVariableOrSubNames are also most common.
4. Your "main code" loop uses Rows, which implies the rows on the active worksheet. "Implying" which rows you're referencing will get you into loads of trouble. Always declare variables that specifically reference the worksheet or range you're using; that makes it easier to keep things straight.
As with many of the performance questions in Code Review, your need for speed will be solved with a memory-based array. But one of the stumbling blocks you have is detecting the indent level for each of the rows in your source data. My quick solution is to create a "helper column" of data next to your source that uses a simple User Defined Function (UDF) to identify the indent level. The UDF is a single line:
Public Function GetIndent(ByRef target As Range) As Long
'--- UDF to return the numeric indent level of the target cell
' handles the multi-cell case and will return the indent
' level of ONLY the top left cell of the range
GetIndent = target.Resize(1, 1).IndentLevel
End Function
Using this function in the first column to the right of your data (=GetIndent(A1)) now turns my source data into this:
I had to do this because if I pull your original source data into an array, the array loses the indent level information. Otherwise, I'd have to continually refer back to the worksheet which is taking the bulk of your processing time.
A quick side note on how I am defining and accessing columns of data in my code. (I deeply regret I've lost track of which user on SO/CR I lifted this tip from. Whoever you are, mad props!) I find that much of my column-based data can change "shape" over a period of development and use. Columns can be added or deleted or switched in order. Hard-coding the column number in code then becomes problematic and forces lots of code changes to keep up. For the longest time I defined a set of Const declarations to keep track of column numbers, such as
Const COL_FIRST_NAME As Long = 1
Const COL_LAST_NAME As Long = 2
Const COL_ADDRESS As Long = 3
And this works just fine, but the names get tedious and it's easy to lose track of which constant to use for which range of data. So from one of the many things I've learned here, I now create an Enum to define column indexes that can more specifically be tied to a set of data. In the course of your solution, I have created
'--- convenience declarations for accessing data columns
Private Enum SrcColumns
ID = 1
Number = 2
Indent = 3
End Enum
Private Enum DstColumns
L1 = 1
L2 = 2
L3 = 3
Number = 4
End Enum
You'll see how they are used below.
First get your data into your memory-based array:
Dim srcWS As Worksheet
Dim dstWS As Worksheet
Set srcWS = ThisWorkbook.Sheets("Sheet1")
Set dstWS = ThisWorkbook.Sheets("Sheet2")
'--- get our source data into an array
Dim srcRange As Range
Dim srcData As Variant
Set srcRange = srcWS.UsedRange
srcData = srcRange
We have to next figure out how many rows we'll need in our resulting database. This turns out to be straightforward by counting the number of times the maximum indent level appears in the data. In this case, the max indent level is 2. So:
Const MAX_LEVEL = 2
Dim i As Long
Dim maxDBRows As Long
For i = 1 To UBound(srcData, 1)
If srcData(i, SrcColumns.Indent) = MAX_LEVEL Then
maxDBRows = maxDBRows + 1
End If
Next i
Optionally (ideally), you can dynamically determine the maximum indent level instead of creating a Const. You could use a WorksheetFunction to accomplish the same thing if you'd prefer.
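As a sketch of that dynamic option (this loop is my own illustration, reusing the srcData array and the SrcColumns enum defined in this answer):

'--- determine the maximum indent level from the data itself
'    instead of hard-coding MAX_LEVEL
Dim maxLevel As Long
Dim j As Long
maxLevel = 0
For j = 1 To UBound(srcData, 1)
    If srcData(j, SrcColumns.Indent) > maxLevel Then
        maxLevel = srcData(j, SrcColumns.Indent)
    End If
Next j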
In order to create your database, I'm making a strict assumption that you will always encounter previous indent levels before reaching the maximum indent. This means that inside my loop I can capture all the level labels up to the maximum level and keep them. So creating the database now becomes a simple loop:
For i = 1 To UBound(srcData, 1)
Select Case srcData(i, SrcColumns.Indent)
Case 0
level1 = srcData(i, SrcColumns.ID)
Case 1
level2 = srcData(i, SrcColumns.ID)
Case 2
level3 = srcData(i, SrcColumns.ID)
dstData(newDBRow, DstColumns.L1) = level1
dstData(newDBRow, DstColumns.L2) = level2
dstData(newDBRow, DstColumns.L3) = level3
dstData(newDBRow, DstColumns.Number) = srcData(i, SrcColumns.Number)
newDBRow = newDBRow + 1
End Select
Next i
And finally, it's a quick copy to get the database array out to the destination:
Dim dstRange As Range
Set dstRange = dstWS.Range("A1").Resize(UBound(dstData, 1), UBound(dstData, 2))
dstRange = dstData
This runs very fast. Here's the entire module:
Option Explicit
'--- convenience declarations for accessing data columns
Private Enum SrcColumns
ID = 1
Number = 2
Indent = 3
End Enum
Private Enum DstColumns
L1 = 1
L2 = 2
L3 = 3
Number = 4
End Enum
Public Function GetIndent(ByRef target As Range) As Long
'--- UDF to return the numeric indent level of the target cell
GetIndent = target.IndentLevel
End Function
Sub ConvertToDatabase()
Dim srcWS As Worksheet
Dim dstWS As Worksheet
Set srcWS = ThisWorkbook.Sheets("Sheet1")
Set dstWS = ThisWorkbook.Sheets("Sheet2")
'--- get our source data into an array
Dim srcRange As Range
Dim srcData As Variant
Set srcRange = srcWS.UsedRange
srcData = srcRange
'--- we can determine how many rows in the destination database
' by getting a count of the highest indent level in the array
Const MAX_LEVEL = 2
Dim i As Long
Dim maxDBRows As Long
For i = 1 To UBound(srcData, 1)
If srcData(i, SrcColumns.Indent) = MAX_LEVEL Then
maxDBRows = maxDBRows + 1
End If
Next i
'--- establish an empty database
Dim dstData() As Variant
ReDim dstData(1 To maxDBRows, 1 To 4)
'--- load up the database
Dim level1 As String
Dim level2 As String
Dim level3 As String
Dim newDBRow As Long
newDBRow = 1
For i = 1 To UBound(srcData, 1)
Select Case srcData(i, SrcColumns.Indent)
Case 0
level1 = srcData(i, SrcColumns.ID)
Case 1
level2 = srcData(i, SrcColumns.ID)
Case 2
level3 = srcData(i, SrcColumns.ID)
dstData(newDBRow, DstColumns.L1) = level1
dstData(newDBRow, DstColumns.L2) = level2
dstData(newDBRow, DstColumns.L3) = level3
dstData(newDBRow, DstColumns.Number) = srcData(i, SrcColumns.Number)
newDBRow = newDBRow + 1
End Select
Next i
'--- finally copy the array out to the destination
Dim dstRange As Range
Set dstRange = dstWS.Range("A1").Resize(UBound(dstData, 1), UBound(dstData, 2))
dstRange = dstData
End Sub
• Named Ranges can also be used to stabilise code instead of fixed column numbers. I am working on a code base at the moment where I am using "magic numbers" similar to your constants (e.g. COL_SOMETHING1_SOMETHING2) but converting between column letter and column number when fixing the template is tedious. In the process of changing to named columns which means that the code base would never change again (). _() for various definitions of "never"_ – AJD Aug 3 '18 at 21:05
• With your IndentLevel UDF: This works fine on an single cell, but can fail if multiple cells are passed. There was once an MSDN article that explained this but the new MSDN format is a lot blander and doesn't contain this useful in-depth information anymore. – AJD Aug 3 '18 at 21:07
• You make a good point about multiple cells being passed to the UDF. I've modified the code to handle that case -- simplistically assuming that the indent level of the first cell (upper left) of the range will be returned. – PeterT Aug 5 '18 at 1:27
|
2019-08-22 16:49:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4282187819480896, "perplexity": 7304.043480309951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317274.5/warc/CC-MAIN-20190822151657-20190822173657-00520.warc.gz"}
|
http://dengemarre-blog.logdown.com/posts/7405222
|
## Statics And Strength Of Materials 7th Edition 32
Access Statics And Strength Of Materials 7th Edition solutions now.
Find great deals on eBay for statics and strength of materials. . Statics and Strength of Materials (7th Edition) . Applied Statics, Strength of Materials, .
Applied Statics and Strength of Materials (6th Edition): George F. Limbrunner, Craig D'Allaird, Leonard Spiegel: 9780133840544: Books - Amazon.ca
Statics and Mechanics of Materials. . Statics, 7th Edition SI Version . material taught in the disciplines of the Mechanics of Solid Materials and the Strength .
If you are searching for the ebook Statics and Strength of Materials (7th Edition) by Robert P. Kokernak and Harold I. Morrow in pdf format, then you have come to the right site.
|
2018-12-11 14:17:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425244688987732, "perplexity": 7911.3972688779895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823621.10/warc/CC-MAIN-20181211125831-20181211151331-00520.warc.gz"}
|
https://en.academic.ru/dic.nsf/enwiki/2330639
|
# Bol loop
In mathematics, a Bol loop is an algebraic structure generalizing the notion of group. Specifically, a loop, "L", is said to be a left Bol loop if it satisfies the identity

$a(b(ac)) = (a(ba))c$, for every "a", "b", "c" in "L",

while "L" is said to be a right Bol loop if it satisfies

$((ca)b)a = c((ab)a)$, for every "a", "b", "c" in "L".
A loop is both left Bol and right Bol if and only if it is a Moufang loop. The unmodified term "Bol loop" can refer to either a left Bol or a right Bol loop, depending on author preferences.
## Bruck loops
A Bol loop satisfying the "automorphic inverse property", $(ab)^{-1} = a^{-1}b^{-1}$ for all "a", "b" in "L", is known as a (left or right) Bruck loop or K-loop. The example in the following section is a Bruck loop. Left Bruck loops are equivalent to A. A. Ungar's gyrocommutative gyrogroups, though the latter are defined differently; see Ungar (2002).
## Example
Let "L" denote the set of "n x n" positive definite, Hermitian matrices over the complex numbers. It is generally not true that the matrix product "AB" of matrices "A", "B" in "L" is Hermitian, let alone positive definite. However, there exists a unique "P" in "L" and a unique unitary matrix "U" such that "AB = PU"; this is the polar decomposition of "AB". Define a binary operation * on "L" by "A" * "B" = "P". Then ("L", *) is a left Bruck loop. An explicit formula for * is given by $A * B = (AB^{2}A)^{1/2}$, where the exponent 1/2 indicates the unique positive definite Hermitian square root.
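A quick check of the explicit formula (this derivation is mine, not part of the original article): since "AB = PU" with "U" unitary and "P" positive definite Hermitian, $P^{2} = (PU)(PU)^{*} = (AB)(AB)^{*} = ABB^{*}A^{*} = AB^{2}A$, using that "A" and "B" are Hermitian; taking the unique positive definite square root gives $P = (AB^{2}A)^{1/2}$.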
## Applications
Bol loops, especially Bruck loops, have applications in special relativity; see Ungar (2002).
## References
* H. Kiechle (2002), "Theory of K-Loops", Springer. ISBN 978-3-540-43262-3.
* H. O. Pflugfelder (1990), "Quasigroups and Loops: Introduction", Heldermann. ISBN 978-3-88538-007-8 . Chapter VI is about Bol loops.
* D. A. Robinson, Bol loops, "Trans. Amer. Math. Soc." 123 (1966) 341-354.
* A. A. Ungar (2002), "Beyond the Einstein Addition Law and Its Gyroscopic Thomas Precession: The Theory of Gyrogroups and Gyrovector Spaces", Kluwer. ISBN 978-0-7923-6909-7.
Wikimedia Foundation. 2010.
|
2021-01-20 01:22:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7404412031173706, "perplexity": 4945.932291600275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519843.24/warc/CC-MAIN-20210119232006-20210120022006-00096.warc.gz"}
|
http://www.ic.sunysb.edu/Class/phy122ps/labs/dokuwiki/doku.php?id=phy124summer:lab_3&rev=1279052129&do=diff
|
====== PHY 124 Lab 3 - DC circuits ======

[[http://www.ic.sunysb.edu/Class/phy122ps/labs/dokuwiki/pdfs/124lab3worksheet.pdf|Important! You need to print out the 2 page worksheet you find by clicking on this link and bring it with you to your lab session.]]

If you need the .pdf version of these instructions you can get them [[http://www.ic.sunysb.edu/Class/phy122ps/labs/dokuwiki/pdfs/phy124lab3.pdf|here]].

===== Goals =====
The purpose of this laboratory is to observe the relationship between voltage drop across, and current through, electrical circuit components. You will also gain familiarity with connecting circuits and with voltmeters (measuring voltage) and ammeters (measuring current).

===== Video =====
<flashplayer width=640 height=480>file=http://www.ic.sunysb.edu/Class/phy122ps/labs/phy122vid/phy124lab3.flv</flashplayer>

===== Equipment =====

  * 1 DC Power Supply
  * 1 Voltmeter
  * 1 Ammeter
  * 1 board with resistive components
  * 7 wires
  * 4 clamps

{{124l3fig1.jpg}}
===== Introduction =====

Ohmic components obey Ohm's law,

$V=IR$   (3.1)

where V is voltage and I is current through a resistor with resistance R. Ohmic components keep a constant resistance with varying voltage and current; this shows up as a linear relationship on a V vs. I graph. Standard circuit resistors are ohmic. Other kinds of electrical components may not be, and in these R may depend on factors such as the temperature (as it does in light bulbs), the direction of current flow (for example in diodes), or the light intensity falling on the component (light-sensitive diodes).

You will use a voltmeter to measure V, the voltage drop across the component, and an ammeter to measure I, the current flow through the component. Keep in mind that ammeters must be connected in series in circuits, while voltmeters must be connected in parallel across the circuit component whose voltage drop is to be measured. The sketch below shows the proper setup.

{{124l3fig2.jpg?400}}
**Fig 2**

Technical note: non-zero meter readings are obtained only when some current flows through the meters; hence the meters have two connections. The current flowing through the meters influences the measurement, and you want this effect to be very small. Thus "good" voltmeters have a large resistance compared to R, which allows only a small current to pass through. Since the voltmeter is in parallel to the resistor, the current through R is essentially determined by R and only a very small current flows through the voltmeter. The equivalent resistance of R and the voltmeter in parallel is close to the resistance of R because if $R_{voltmeter} \gg R$ then $\frac{1}{R_{voltmeter}}+\frac{1}{R}\simeq\frac{1}{R}$. The voltmeter you use has the desired property, i.e. its resistance is much larger than the resistance to be measured, R.

In an ideal case, a "good" ammeter has a very small resistance compared to R, which allows current to easily pass through the ammeter and be measured. When this is the case and the resistor and the ammeter are in series, the current measured by the ammeter is essentially determined by R. The equivalent resistance of R and the ammeter is close to the resistance of R because if $R_{ammeter} \ll R$ then $R+R_{ammeter}\simeq R$. This is only approximately true for the ammeter you will use in the lab; because of this, the most significant source of error in this lab will come from the current measurements. Note that the voltmeter should not be connected across the ammeter, because the ammeter has a non-negligible resistance.
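For a concrete illustration of these approximations (the numbers here are invented, not the actual meter specifications): with $R = 1\,\mathrm{k}\Omega$ and $R_{voltmeter} = 10\,\mathrm{M}\Omega$, the parallel combination is $\left(\frac{1}{10^{7}}+\frac{1}{10^{3}}\right)^{-1} \approx 999.9\,\Omega$, essentially R itself; and with $R_{ammeter} = 1\,\Omega$ in series, $R + R_{ammeter} = 1001\,\Omega$, again within about 0.1% of R.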
===== Procedure =====

Before you start: Have your instructor measure the resistances of the resistors, which are next to each other on the board with the resistive elements shown below. You can also read the values of the resistors from their colored bands. (The instructions for doing this can be found [[http://en.wikipedia.org/wiki/Electronic_color_code|here]].) Record the two resistances on your worksheet. These are your reference values and will be compared with your measured values. You can neglect the errors in the reference values as they will be much smaller than the errors in your measurements.

{{124l3fig3.jpg?600}}
**Fig 3**

==== Part I - Ohmic Components ====

Our first measurement will be made on an ohmic component, the resistor $R_{2}$.

1. Make sure the power supply is switched off. Turn the black knob "Current ADJ." all the way clockwise, and both the coarse and fine "Voltage ADJ." knobs all the way counterclockwise to zero. Connect the power supply, ammeter, and voltmeter to the resistor (see Fig. 3) on your circuit board. Make sure you wire the setup exactly as shown in Fig 2. You can do this by visualizing and making two "loops" in your circuit. For the first loop, wire a circuit with the ammeter and the resistor in series: from the + terminal of the power supply to the red (high voltage) terminal of the ammeter; from the black (low voltage) terminal of the ammeter to the resistor; from the other end of the resistor (now its low voltage side) to the - terminal of the power supply. The next "loop" will involve connecting the voltmeter in parallel to the resistor. Wire the voltmeter from the high voltage terminal (red) of the voltmeter to the high voltage side of the resistor and from the low voltage terminal (black) of the voltmeter to the low voltage side of the resistor. The voltmeter should only be connected across the resistor if your setup is correct. Have your lab instructor "OK" your setup.

2. Switch the scales on your voltmeter to 20 V and your ammeter to 0.2 A. You are going to investigate how voltage changes with current. Increase the voltage, V, in 6 steps from ~5 to ~10 volts. For each voltage step, record the current, I (which is displayed in mA, but should be recorded in A), flowing through the resistor. Note that the left-most knob ("coarse adjustment") on the power supply is used to make large variations in the voltage V, and the middle knob ("fine adjustment") is used for small adjustments. Neglect the error in V. For the error in I you can estimate the error as +/- 0.001 A because this is the last digit in the measurement displayed on the ammeter readout. Enter your measurements into the table on your worksheet.

3. You now need to plot your data using the plotting tool below. If we plot voltage vs current, the slope of the graph should give us the value of the resistance. Make the plot and record your value for the resistance (with error) on your worksheet.
(Interactive plotting tool: enter up to nine x/y data points with optional errors, choose the error type, optionally force the fit through (0,0), and submit to http://mini.physics.sunysb.edu/~mdawber/plot/124lab3plot.php to generate a fitted plot.)
==== Part II - Resistors in Series ====

In this part, we will connect and investigate two resistors in series. The equation for the total resistance of two resistors $R_{1}$ and $R_{2}$ in series is

$R_{s}=R_{1}+R_{2}$   (3.2)

Use this equation to calculate the total resistance you should obtain when you connect your two resistors in series and write this value on your worksheet.
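As a numeric illustration of equation (3.2) (these resistor values are made up, not your board's reference values): if $R_{1} = 100\,\Omega$ and $R_{2} = 220\,\Omega$, then $R_{s} = 100\,\Omega + 220\,\Omega = 320\,\Omega$.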
Wire the two resistors, $R_{1}$ and $R_{2}$, in series so that you obtain the following circuit.

{{124l3fig4.jpg}}

To do this, take the first circuit "loop" from Part I and add $R_{1}$ in series with $R_{2}$. Take the end of $R_{2}$ that was previously connected to the negative terminal of the power supply and connect it to one side of $R_{1}$. Then take the free end of $R_{1}$ and connect that end back to the power supply. Next, remake the second "loop" by connecting the voltmeter parallel to both $R_{2}$ and $R_{1}$. Wire the high voltage terminal (red) of the voltmeter to the high voltage side of $R_{2}$ and the low voltage terminal (black) to the low voltage side of $R_{1}$. Ask your instructor to check your wiring.

Once the resistors are wired correctly, measure V and I with the voltage set to ~10 V. Calculate the measured resistance of the series combination using Ohm's Law. Enter these values on your worksheet. Calculate the error $\Delta R_{s}$ of $R_{s}$ from the error in I, neglecting the error in V. Since $R_{s}=V/I$, the relative error of R is equal to the relative error in I (see expression (E.8) of "EU").

Does the calculated value of the equivalent series resistance agree (within error) with the measured value?
==== Part III - Resistors in Parallel ====

In this part, we have two resistors in parallel. The equation for the total resistance of resistors in parallel is

$\frac{1}{R_{p}}=\frac{1}{R_{1}}+\frac{1}{R_{2}}$   (3.3)

Using this equation and the reference values for the resistors, calculate the expected value for the equivalent resistance $R_{p}$ of the parallel resistors and enter it on your worksheet.
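With the same made-up values as the series example, $R_{1} = 100\,\Omega$ and $R_{2} = 220\,\Omega$, equation (3.3) gives $\frac{1}{R_{p}} = \frac{1}{100\,\Omega} + \frac{1}{220\,\Omega}$, so $R_{p} = \frac{100 \cdot 220}{100 + 220}\,\Omega \approx 68.8\,\Omega$, smaller than either resistor, as a parallel combination always is.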
Wire the two resistors, $R_{1}$ and $R_{2}$, in parallel.

{{124l3fig5.jpg}}

This will be slightly different from the last part and will consist of three "loops". First, make the same first "loop" as in Part I. The second "loop" will be made by placing $R_{1}$ in parallel with $R_{2}$, which is in the first "loop". To do this, connect one end of $R_{1}$ to the high voltage end of $R_{2}$ and the other end of $R_{1}$ to the low voltage end of $R_{2}$. Next you will make a third "loop" by putting the voltmeter in parallel with both $R_{1}$ and $R_{2}$. To do this, connect the high voltage (red) terminal to the high voltage connection to which both $R_{1}$ and $R_{2}$ are wired. Do the same for the low voltage (black) terminal of the voltmeter and the low voltage connection to both resistors. Ask your instructor to check it.

Once the resistors are wired correctly, measure V and I at a voltage of ~10 V. Calculate the measured resistance of the parallel combination from Ohm's Law and its error. Remember that as we neglect the error in the voltage, the relative error in R will be the same as the relative error in I. Enter your values on your worksheet. Calculate the error $\Delta R_{p}$ of the parallel resistance from the error in I, neglecting the error of V as you did in Part II.

Does the calculated value of the equivalent parallel resistance agree (within error) with the measured value?
+==== Part IV-Non-ohmic Components ====
+
+Repeat the procedure of Part I using the light bulb instead of the resistor. (You need to rewire the circuit so that it is like Fig 2 but with the light bulb in the place of $R_{2}$. Again, you will be looking at the relationship between V and I. You should see the bulb light up if you put a few volts across. If not tighten the bulb. Take ~ 10 data points by varying V from ~1 to ~10V. Record each voltage and the associated current. Enter the measurements into the table on your worksheet. You do not need to record any errors are needed for this data.
+
+You should now enter the data in to the table below. When you click submit two plots will be produced, the first one is of voltage against current as we did for the ohmic resistor. You should see that this time the plot is not linear.


<html>
<form method="post" action="http://mini.physics.sunysb.edu/~mdawber/plot/124lab3.php" target="_blank">


Voltage and Current Values <br/>
V1  <input type="text" name="V1" size="10"/> V   I1  <input type="text" name="I1" size="10"/> A <br/>
V2  <input type="text" name="V2" size="10"/> V   I2  <input type="text" name="I2" size="10"/> A <br/>
V3  <input type="text" name="V3" size="10"/> V   I3  <input type="text" name="I3" size="10"/> A <br/>
V4  <input type="text" name="V4" size="10"/> V   I4  <input type="text" name="I4" size="10"/> A <br/>
V5  <input type="text" name="V5" size="10"/> V   I5  <input type="text" name="I5" size="10"/> A <br/>
V6  <input type="text" name="V6" size="10"/> V   I6  <input type="text" name="I6" size="10"/> A <br/>
V7  <input type="text" name="V7" size="10"/> V   I7  <input type="text" name="I7" size="10"/> A <br/>
V8  <input type="text" name="V8" size="10"/> V   I8  <input type="text" name="I8" size="10"/> A <br/>
V9  <input type="text" name="V9" size="10"/> V   I9  <input type="text" name="I9" size="10"/> A <br/>
V10 <input type="text" name="V10" size="10"/> V   I10 <input type="text" name="I10" size="10"/> A <br/>

<input type="submit" value="submit" name="submit" />
</form>
</html>

We can estimate the resistance at each point on the graph by making a linear approximation around each point. This means we take the change in voltage between the points on either side of the point and divide it by the corresponding change in current. For example, if I want to find the resistance at point n, I can approximate it as

<WRAP column 35%>\\
</WRAP>
<WRAP column 45%>
$\Large R_{n}=\frac{V_{n+1}-V_{n-1}}{I_{n+1}-I_{n-1}}$
</WRAP>
<WRAP column 10%>
(3.4)
</WRAP>
\\
\\

The computer will do this for you and make a plot of R vs V.
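If you would like to reproduce the computer's calculation yourself, here is a minimal sketch of equation (3.4); the V and I arrays below are placeholder data, to be replaced with your own bulb measurements.

<code python>
# Minimal sketch of equation (3.4): estimate R at each interior data point.
# V and I are placeholder measurements -- substitute your own data.
V = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]            # volts (assumed)
I = [0.10, 0.15, 0.19, 0.22, 0.25, 0.27, 0.29, 0.31, 0.33, 0.35]   # amperes (assumed)

# R_n = (V[n+1] - V[n-1]) / (I[n+1] - I[n-1]) for interior points
for n in range(1, len(V) - 1):
    R_n = (V[n + 1] - V[n - 1]) / (I[n + 1] - I[n - 1])
    print(f"V = {V[n]:5.1f} V   R = {R_n:6.1f} ohm")
</code>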

Does the bulb behave as an ohmic resistor? Does the bulb resistance increase or decrease with temperature? Discuss with your TA!
+
|
2020-02-20 15:57:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5782307386398315, "perplexity": 2615.8720506261334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144979.91/warc/CC-MAIN-20200220131529-20200220161529-00302.warc.gz"}
|
https://www.hackmath.net/en/math-problem/1881
|
# Plums
In the bag there were a total of 136 plums. Igor took 3 plums, and Mary took 4/7 of the rest. How many plums remained in the bag?
Correct result:
x = 57
#### Solution:
$x = 136 - 3 - \frac{4}{7}\cdot(136-3) = 57$
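Step by step: after Igor took his 3 plums, $136-3=133$ plums remained; Mary then took $\frac{4}{7}\cdot 133 = 76$ of them, leaving $133-76=57$ plums in the bag.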
## Next similar math problems:
• Pizza 4
Marcus ate half pizza on monday night. He than ate one third of the remaining pizza on Tuesday. Which of the following expressions show how much pizza marcus ate in total?
• Infirmary
Two-thirds of the children from the infirmary went on a trip, a seventh went to bathe, and 40 children remained in the gym. How many children were treated in the infirmary?
• Cleaning windows
A cleaning company has to wash all the windows of a school. On the first day it washes one-sixth of the school's windows, on the next day three more windows than on the first day, and the remaining 18 windows on the third day. Calculate how many windows the school has.
• Norm
Three workers planted 3555 tomato seedlings in one day. The first worked at the standard norm, the second planted 120 seedlings more, and the third 135 seedlings more than the first worker. How many seedlings was the standard norm?
• Factory
In the factory, workers work in three shifts. Half of all employees work the first shift, a third work the second shift, and 200 employees work the third shift. How many employees work at the factory?
• Unknown number
I am thinking of a number. I reduce it to one-third of its value. The result is then increased by one-third, and I get the number 12.
• Simple equation
Solve the following simple equation: 2·(4x + 3) = 2 − 5·(1 − x)
• Unknown number
I am thinking of a number; its sixth is 3 smaller than its third.
• Equation with x
Solve the following equation: 2x − (8x + 1) − (x + 2)/5 = 9
• Eq-frac
Solve the following equation with fractions: h + 1/3 =5/3
• Equation 15
Solve equation with variables on both sides:
• One-third
A one-third of unknown number is equal to five times as great as the difference of the same unknown number and number 28. Determine the unknown number.
• Simple equation 5
Solve equation with fractions: X × 3/8 = 1/2
• Find unknown
Find unknown numerator: 4/8 + _/8 = 1
• Bitoo and Reena
Bitoo ate a 3/5 part of an apple, and the remaining part was eaten by his sister Reena. How much of the apple did Reena eat? Who had the larger share? By how much?
• Evaluate expression
Calculate the value of the expression z/3 - 2 z/9 + 1/6, for z = 2
• Fraction and a decimal
Write as a fraction and a decimal. One and two plus three and five hundredths
|
2020-07-07 08:44:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6221519708633423, "perplexity": 3108.2704509145146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891884.11/warc/CC-MAIN-20200707080206-20200707110206-00119.warc.gz"}
|
http://www.numdam.org/item/CM_1976__33_2_187_0/
|
On the greatest prime factors of polynomials at integer points
Compositio Mathematica, Volume 33 (1976) no. 2, p. 187-195
@article{CM_1976__33_2_187_0,
author = {Shorey, T. N. and Tijdeman, Robert},
title = {On the greatest prime factors of polynomials at integer points},
journal = {Compositio Mathematica},
publisher = {Noordhoff International Publishing},
volume = {33},
number = {2},
year = {1976},
pages = {187-195},
zbl = {0338.10040},
mrnumber = {424681},
language = {en},
url = {http://www.numdam.org/item/CM_1976__33_2_187_0}
}
Shorey, T. N.; Tijdeman, R. On the greatest prime factors of polynomials at integer points. Compositio Mathematica, Volume 33 (1976) no. 2, pp. 187-195. http://www.numdam.org/item/CM_1976__33_2_187_0/
[1] A. Baker: Transcendental Number Theory, Cambridge University Press (1975). | MR 422171 | Zbl 0297.10013
[2] P. Erdös: On the greatest prime factor of $\prod_{k=1}^{x} f(k)$. J. London Math. Soc. 27 (1952) 379-384. | MR 47686 | Zbl 0046.04102
[3] P. Erdös and T.N. Shorey: On the greatest prime factor of $2^p - 1$ for a prime p and other expressions. (To appear in Acta Arith.). | MR 419381 | Zbl 0431.10010 | Zbl 0296.10021
[4] M. Keates: On the greatest prime factor of a polynomial. Proc. Edinb. Math. Soc. (2) 16 (1969) 301-303. | MR 257034 | Zbl 0188.10201
[5] S.V. Kotov: Greatest prime factor of a polynomial. Mat. Zametki 13 (1973) 515-522; Math. Notes 13 (1973) 313-317. | MR 327670 | Zbl 0267.12002
[6] M. Langevin: Plus grand facteur premier d'entiers consecutifs. C. R. Acad. Sc. Paris 280A (1975) 1567-1570. | MR 379403 | Zbl 0305.10041
[7] K. Ramachandra and T.N. Shorey: On gaps between numbers with a large prime factor. Acta Arith. 24 (1973) 99-111. | MR 327684 | Zbl 0258.10022
[8] A. Schinzel: On two theorems of Gelfond and some of their applications. Acta Arith. 13 (1967) 177-236. | MR 222034 | Zbl 0159.07101
[9] A. Schinzel and R. Tijdeman: On the equation $y^m = P(x)$. (To appear in Acta Arith.). | MR 422150 | Zbl 0339.10018 | Zbl 0303.10016
[10] T.N. Shorey: On gaps between numbers with a large prime factor II. Acta Arith. 25 (1974) 365-373. | MR 344201 | Zbl 0258.10023
[11] T.N. Shorey: On linear forms in the logarithms of algebraic numbers. (To appear in Acta Arith.). | MR 412117 | Zbl 0289.10023
[12] C.L. Siegel: Approximation algebraischer Zahlen. Math. Z. 10 (1921) 173-213. | JFM 48.0163.07 | MR 1544471
[13] V.G. Sprindžuk: The greatest prime divisor of a binary form. Dokl. Akad. Nauk BSSR 15 (1971) 389-391. | MR 286752
[14] R. Tijdeman: On integers with many small prime factors. Compositio Math. 26 (1973) 319-330. | Numdam | MR 325549 | Zbl 0267.10056
[15] C. Hooley: On the greatest prime factor of quadratic polynomials, Acta Math. 117 (1967) 281-299. | MR 204383 | Zbl 0146.05704
|
2019-08-25 01:32:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2360350638628006, "perplexity": 2443.7961611570313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322160.92/warc/CC-MAIN-20190825000550-20190825022550-00211.warc.gz"}
|
http://mathoverflow.net/revisions/13774/list
|
# Interaction of topology and the Picard group of Algebraic surfaces
It is well known that a smooth cubic surface $X\subset \mathbb{P}^3$ contains exactly 27 lines. Furthermore, it is easy to check that the Picard group is $$Pic(X)\cong \mathbb{Z}^7.$$ Here the generators are lines, which are topologically $\mathbb{P}^1$'s. Furthermore, it is easy to check that $$\chi_{top}(X)=2+\operatorname{rank} H_2(X)=9.$$ Topologically speaking, notice that as a smooth manifold $X$ has no 1-skeleton. This makes the 2-dimensional cells glue to points along their boundaries, producing spheres $\mathbb{P}^1$ as the result of this gluing process.
My first question is: why do the other 20 lines not contribute to the Euler characteristic of $X$?
Going further, if $X\subset \mathbb{P}^3$ has degree 4, it is known that $X$ sometimes has lines and sometimes does not. However, $$\chi_{top}(X)=2+\operatorname{rank} H_2(X)=24,$$ meaning that, even though $X$ may have no lines at all, we still have classes in $H_2$ which are topologically spheres! That is, there are in fact spheres (by the argument above, which says that $X$ has no 1-skeleton). Besides, $\chi_{top}$ is constant even though $X$ may have $64$, $32$, or even $0$ lines in it. There are spheres whose existence is not being noticed by $\chi_{top}$ at all. Here let me be vague, please. What is going on!?
Any type of editing to make this clearer will be welcome. References highly appreciated.
|
2013-06-20 05:38:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8839559555053711, "perplexity": 211.83553468385637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710313659/warc/CC-MAIN-20130516131833-00027-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/how-can-i-solve-question-type-with-magnitude-and-unit-vectors.987492/
|
# How Can I Solve Question Type: "With Magnitude and Unit Vectors"?
denfaro
Hi, I am a beginner in this topic. I didn't understand this question type clearly. What does "with magnitude and unit vectors" mean exactly? Could you help me with the solution step by step? :) Thanks in advance.
Mentor
Welcome to the PF.
We cannot do your schoolwork/homework for you, but perhaps we can give you a few tips to help your understanding so that you can start working on the problem.
What have you done with vectors so far? You know that vectors have a magnitude and direction, right? And have you learned about coordinate systems like Cartesian coordinates, where there are x and y directions, etc.?
denfaro
Yes I learned the vectors,coordinate system but I don't know how can I use them on this question.
Mentor
So start by drawing an x-y set of axes on the diagram. I would probably make the (0,0) point at the A charge, with the x-axis to the right and the y-axis pointing up.
Then write the general vector force equation for the force on one charge due the electric field from another charge (based on the amount of the charges and the separation distance).
Please do those things, and show us what you get. Thank you.
Do you know charged spheres will apply force on each other?
denfaro
Yes I know
What’s the expression for that force? I mean, is there any law which governs how two charged spheres will apply force on each other?
denfaro
I think you mean Coulomb's Law
Yes. Can you please write out Coulomb's Law?
denfaro
F=(k*(q1*q2))/r^2.
But force is a vector quantity; the expression which you have given doesn't involve the direction of the force. Can you fix it? Can you do something so that we get a vector quantity in that expression of Coulomb's Law?
denfaro
Sorry I don't. How can I do that?
$$\vec{F} = k \frac{q_1 ~q_2 }{r^2}~ \hat{r}$$ the force acts on the line joining the two charges.
denfaro
Thanks I have an idea now. :)
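For readers who want to see the vector form in practice, here is a minimal sketch in Python. Since the original problem's figure is not shown, the charges and positions below are assumed placeholder values; the unit vector $\hat{r}$ is computed explicitly from the two positions.

```python
import numpy as np

# Minimal sketch of the vector form of Coulomb's law quoted above.
# All numbers are placeholders (the thread's figure is not available):
# q1, q2 and the positions r1, r2 are assumptions chosen for illustration.
k = 8.99e9                    # Coulomb constant, N m^2 / C^2
q1, q2 = 2e-6, -3e-6          # charges in coulombs (assumed)
r1 = np.array([0.0, 0.0])     # position of q1 in meters (assumed)
r2 = np.array([0.3, 0.4])     # position of q2 in meters (assumed)

# Force on q2 due to q1: F = k q1 q2 / r^2 * r_hat,
# with r_hat the unit vector pointing from q1 toward q2.
r_vec = r2 - r1
r = np.linalg.norm(r_vec)
r_hat = r_vec / r
F = k * q1 * q2 / r**2 * r_hat

print("F on q2 =", F, "N; magnitude =", np.linalg.norm(F), "N")
```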
|
2023-03-28 18:23:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7267555594444275, "perplexity": 381.45283413548776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00165.warc.gz"}
|