| url | text | date | metadata |
|---|---|---|---|
https://blog.surges.eu/sap-cap-generate-csv-files-with-test-data-easily/
|
# SAP CAP: Generate .csv-files with test data easily
When you are working with SAP CAP, you will need test data to test your services and Fiori UIs. This blog post guides you through all the steps needed to easily generate .csv files with test data and deploy them to a persistent local database (local development).
## Prerequisites
Please make sure that your local development environment meets all requirements for the SAP Cloud Application Programming Model (CAP). Maybe my settings will help you.
## Generate a .csn-file of the domain model (CDS)
The domain model is defined in a schema.cds file in the db folder. CDS (Core Data Services) is SAP's universal modeling language for describing the different parts of a domain. CDS models comply with the Core Schema Notation (CSN), which represents CDS models as plain JavaScript objects and goes beyond JSON Schema.
First, use the terminal to generate a .csn file from the CDS model. Then use the VS Code extension "CAP CDS CSV Generator", which takes this .csn file and generates .csv files with test data. Alternatively, you can also create a .csn file with the commands mta build or cds build.
# cds command: cds compile schema.cds --to csn --dest schema.csn
# Use cds compile --help to get further information
# The following command is executed in the project root folder
cds compile db/schema.cds --to csn --dest db/schema.csn
## Generate .csv-files with the VS Code extension
Next, execute the following steps to generate .csv-files with test data depending on the domain model that the schema.cds defines.
• Call the command palette of VS Code (Ctrl + Shift + P or F1) and type "Generate csv file".
• Enter the namespace of the schema.cds and choose the .csn-file (e.g. db/schema.csn).
• Choose a folder to save the .csv-files with test data (e.g. db/data/).
The extension creates 10 entries per .csv-file and this default value can be changed in the extension settings (further information: CAP CDS CSV Generator).
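For illustration, a generated file (e.g. db/data/my.namespace-EntityName.csv — the namespace, entity, and columns here are hypothetical; the extension derives the actual columns from the entities in your schema.cds) could look like this:
ID,name,description
b1a7c0de-0001-4c2a-9f10-111111111111,Sample name 1,Sample description 1
b1a7c0de-0002-4c2a-9f10-222222222222,Sample name 2,Sample description 2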
## Use a persistent database like SQLite
Instead of using the test data in-memory, you can use a persistent database as well, for example SQLite:
# Add sqlite to dev dependencies of package.json
npm add sqlite3 -D
Deploy the test data of the .csv-files (folder db/data) to the persistent database:
# Deploy test data to db
cds deploy --to sqlite:my.db
When you run the app with cds watch you will see that the database is used.
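As a rough sketch, cds deploy usually also records the SQLite database as the default data source in the package.json (the exact keys can differ between CAP versions; my.db is the file name used above):
"cds": {
  "requires": {
    "db": {
      "kind": "sqlite",
      "credentials": { "database": "my.db" }
    }
  }
}
With this configuration in place, cds watch picks up the persistent database instead of the in-memory one.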
## Troubleshooting
In case you get an error message like "Expected uri token 'ODataIdentifier' could not be found in [...]", you can fix it by adding @odata.Type:'Edm.String' after UUID in the schema.cds (further information: Problem accessing single entity using OData v4).
entity EntityName {
key ID : UUID @odata.Type:'Edm.String';
name : String;
}
If cds watch runs into an error like "Duplicate definition of artifact", delete the *.csn file after generating the test data.
|
2022-08-09 17:52:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21911513805389404, "perplexity": 11512.405658381542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00286.warc.gz"}
|
http://mathhelpforum.com/trigonometry/83046-proving-identity-1-a.html
|
1. ## Proving identity #1
cotx+tanx=2csc2x
LS:
cotx+tanx
=cosx/sinx+sinx/cosx
=1/cosxsinx
RS:
2csc2x
=2(1/sin)2x
What should I do now?
2. Originally Posted by skeske1234
$\cot x+\tan x=2\csc 2x$
LS:
$\cot x+\tan x=$
$\frac{\cos x}{\sin x}+\frac{\sin x}{\cos x}=$
${\color{red}\frac{1}{\sin x \cos x}=}$
RS:
Try this instead: Recall that $\sin 2x = 2 \sin x \cos x$
$=\frac{2}{\sin 2x}$
$=\frac{2}{2 \sin x \cos x}$
${\color{red}=\frac{1}{\sin x \cos x}}$
3. Originally Posted by skeske1234
cotx+tanx=2csc2x
LS:
cotx+tanx
=cosx/sinx+sinx/cosx
=1/cosxsinx ... multiply numerator and denominator by 2
RS:
2csc2x
=2(1/sin)2x ... 2/[sin(2x)]
What should I do now?
.
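Putting both sides together, the step the original poster was missing is combining the two fractions over a common denominator and using $\sin^2 x+\cos^2 x=1$ together with $\sin 2x=2\sin x\cos x$:
$\cot x+\tan x=\frac{\cos x}{\sin x}+\frac{\sin x}{\cos x}=\frac{\cos^2 x+\sin^2 x}{\sin x\cos x}=\frac{1}{\sin x\cos x}=\frac{2}{2\sin x\cos x}=\frac{2}{\sin 2x}=2\csc 2x$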
|
2017-10-22 21:37:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9984930157661438, "perplexity": 9596.324564696117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825464.60/warc/CC-MAIN-20171022203758-20171022223758-00122.warc.gz"}
|
http://parasys.net/error-propagation/error-propagation-log-base-2.php
|
Error Propagation Log Base 2
Every measurement has an air of uncertainty about it, and not all uncertainties are equal; uncertainty comes about in a variety of ways: instrument variability, different observers, sample differences, time of day, etc. If a reported value is computed from measured quantities $a$, $b$, $c$, ..., it can be written that $x=f(a,b,c)$, and because each measurement has an uncertainty about its mean, the uncertainty of $x$ is a function of the uncertainties of $a$, $b$, $c$. In matrix notation the first-order propagation of error is $\mathrm{\Sigma}^{\mathrm{f}}=\mathrm{J}\,\mathrm{\Sigma}^{\mathrm{x}}\,\mathrm{J}^{\top}$, where $\mathrm{J}$ is the Jacobian. If the variables are correlated rather than independent, the cross terms may not cancel out and must be taken into account. In an ideal case, the propagation-of-error estimate will not differ from an estimate made directly from the measurements; Monte Carlo simulations can also be used instead of simple significance arithmetic.
For addition and subtraction, $x=a+b-c$ gives $S_x=\sqrt{S_a^2+S_b^2+S_c^2}$; analogous rules hold for multiplication and division.
For a logarithm: the base-10 rule is $\sigma_{\log_{10}x}\approx 0.434\,\sigma_x/x$, and the question is how to derive the corresponding factor for an arbitrary base such as 2. For rough-and-ready error bars on $y=\ln x$ one can simply draw them between $y_\pm=\ln(x\pm\Delta x)$; when the result is noticeably asymmetric, notation such as $y=1.2^{+0.1}_{-0.3}$ indicates the asymmetry.
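A short derivation of the arbitrary-base rule asked about above (first-order propagation; the 0.434 factor for $\log_{10}$ quoted in the question falls out as a special case):
$y=\log_b x=\frac{\ln x}{\ln b},\qquad \sigma_y\approx\left|\frac{dy}{dx}\right|\sigma_x=\frac{1}{\ln b}\,\frac{\sigma_x}{x}$
For $b=10$ this gives $1/\ln 10\approx 0.434$, and for $b=2$ it gives $1/\ln 2\approx 1.443$, i.e. $\sigma_{\log_2 x}\approx 1.443\,\sigma_x/x$.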
|
2018-06-24 12:42:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.666694164276123, "perplexity": 1932.733656026921}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866937.79/warc/CC-MAIN-20180624121927-20180624141927-00251.warc.gz"}
|
https://math.stackexchange.com/questions/1127779/unitary-matrix-decomposition-using-orthogonal-matrices
|
# unitary matrix decomposition using orthogonal matrices
Is it possible to decompose an n by n unitary matrix U such that $U=O_1DO_2$, with D being diagonal (obviously containing just complex phase factors) and $O_1,O_2$ being real orthogonal matrices?
Hint: Let $M$ be a complex $n\times n$ matrix. Is it always possible to decompose it as $$M = O_1 D O_2$$ ? Dimension considerations say no. Indeed, the space of complex matrices has $2n^2$ degrees of freedom, while on the RHS we get $\binom{n}{2} + 2n + \binom{n}{2} = n^2 + n$. There must be some extra conditions required for $n>1$.
Note that from $M= O_1 D O_2$ we get $$\bar M = O_1 \bar D O_2$$
So we see that $M$ and $\bar M$ (or, if you want, $\mathcal{Re}\,M$ and $\mathcal{Im}\,M$) have "real" singular value decompositions with the same $O_1$, $O_2$. When is this possible? If we go through the proof of the singular value decomposition theorem, we notice that the matrix $M^t M$ is used. So we get
$$M = O_1 D O_2\\ M^t = O_2^t D O_1^{t}\\ M^t M = O_2^t D^2 O_2$$
while
$$\bar M = O_1 \bar D O_2\\ \bar M^t = O_2^t \bar D O_1^{t}\\ \bar M^t \bar M = O_2^t \bar D^2 O_2$$
The last lines are the key
$$M^t M = O_2^{-1} D^2 O_2\\ \bar M^t \bar M = O_2^{-1} \bar D^2 O_2$$
The matrices $M^t M$ and $\bar M^t \bar M$ can be conjugated by the same orthogonal map to diagonal matrices. This implies that these matrices commute
$$M^t M \bar M^t \bar M= \bar M^t \bar M M^t M$$
It is not very hard to show (using the simultaneous reduction of commuting symmetric matrices to diagonal form by the same orthogonal matrix) that this condition is also sufficient.
Now we ask: does this hold for every unitary matrix? I will leave this for you to check.
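One way to carry out that check: for unitary $U$ put $A:=U^tU=M^tM$. Then $A$ is symmetric, $A^t=U^tU=A$, and $A$ is unitary as a product of unitary matrices, so $A^*A=I$. For a symmetric matrix $A^*=\overline{A^t}=\bar A$, hence $\bar A=A^{-1}$, and since $\bar M^t\bar M=\bar A$ we get
$$M^tM\,\bar M^t\bar M=A\bar A=I=\bar A A=\bar M^t\bar M\,M^tM$$
So the commutation condition holds for every unitary matrix, and the decomposition $U=O_1DO_2$ always exists.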
|
2020-04-06 12:43:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9353060126304626, "perplexity": 151.20336246753536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371624083.66/warc/CC-MAIN-20200406102322-20200406132822-00322.warc.gz"}
|
https://flaviocopes.com/how-to-push-two-repositories-sync/
|
I had the need to have 2 GitHub repositories with the same exact content.
Whenever I pushed my changes, those changes had to be sent to those 2 repositories without any extra work.
So here’s what I did.
I already had a working repository with some code, set up as the origin remote in Git.
I created a new empty repository on GitHub, and I set it as another URL for the origin remote:
git remote set-url --add --push origin git@github.com:flaviocopes/original.git
git remote set-url --add --push origin git@github.com:flaviocopes/clone.git
That’s it. Now doing a “git push” sends the changes to both repositories.
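You can verify the setup with git remote -v; after the two commands above, the output should look roughly like this (one fetch URL and two push URLs on origin):
git remote -v
origin  git@github.com:flaviocopes/original.git (fetch)
origin  git@github.com:flaviocopes/original.git (push)
origin  git@github.com:flaviocopes/clone.git (push)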
|
2022-05-17 21:53:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42099690437316895, "perplexity": 6462.7014096312605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520817.27/warc/CC-MAIN-20220517194243-20220517224243-00105.warc.gz"}
|
https://socratic.org/questions/how-do-you-write-an-equation-of-a-line-parallel-to-y-3x-4-and-x-intercept-at-4
|
# How do you write an equation of a line parallel to y=-3x+4 and x intercept at 4?
Nov 11, 2016
A parallel has the same slope, so the form is $y = - 3 x + b$
#### Explanation:
The $x$-intercept means $y = 0$ and $x = 4$
Use the equation:
$0 = - 3 \cdot 4 + b$
Add $3 \cdot 4 = 12$ to both sides:
$12 = \cancel{12} - \cancel{12} + b \to b = 12$
Equation: $y = - 3 x + 12$
graph{-3x+12 [-11.98, 16.5, -6.78, 7.46]}
|
2019-07-15 18:24:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8686705827713013, "perplexity": 4814.115210222084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195523840.34/warc/CC-MAIN-20190715175205-20190715201205-00107.warc.gz"}
|
https://math.stackexchange.com/questions/650379/why-is-an-orthogonal-matrix-called-orthogonal
|
# Why is an orthogonal matrix called orthogonal?
I know a square matrix is called orthogonal if its rows (and columns) are pairwise orthonormal
But is there a deeper reason for this, or is it only a historical one? I find it very confusing; the term would let me assume that a matrix is called orthogonal if its rows (and columns) are orthogonal, and orthonormal if its rows (and columns) are orthonormal, but apparently that's not the convention.
I know that square matrices with orthogonal columns have no special interest, but that's not the point. If I read the term orthogonal matrix, my first assumption is that its rows (and columns) are orthogonal, which is correct of course, but the more important property is that they are also orthonormal.
So, question: Why do you call an orthogonal matrix orthogonal and not orthonormal? Wouldn't that be more precise and clear?
• "It might be tempting to suppose a matrix with orthogonal (not orthonormal) columns would be called an orthogonal matrix, but such matrices have no special interest and no special name." en.wikipedia.org/wiki/Orthogonal_matrix#Matrix_properties – oldrinb Jan 24 '14 at 22:37
• Orthonormal would have been a better name. – littleO Jan 24 '14 at 23:10
• Not claiming this is the historical origin of the term, but note that multiplication by an $n \times n$ matrix $A$ is an isometry, i.e., preserves (length and) orthogonality of arbitrary vectors, if and only if the columns of $A$ are orthonormal. That is, I suspect it's the transformation $T(x) = Ax$ that's referred to by the term "orthogonal", not the rows/columns of $A$. – Andrew D. Hwang Jan 25 '14 at 1:03
• the comment tells you why: they are of no particular interest... if a matrix has orthogonal columns then it necessarily is of the form $UD$ where $U$ is orthogonal and $D$ is diagonal – oldrinb Jan 25 '14 at 1:41
• @Tunococ what about $\lambda\cdot I,\ |\lambda|\notin\{0,1\}$ then? That's a matrix which obviously has orthogonal, yet not orthonormal column AND row vectors. – Sora. Jul 27 '16 at 20:11
## 4 Answers
An affine transformation which preserves the dot product on $\mathbb{R}^n$ is called an isometry of Euclidean $n$-space. In fact, one can begin without the assumption of an affine map and derive it as a necessary consequence of dot-product preservation. See the Mazur-Ulam Theorem, which shows this result holds for maps between finite dimensional normed spaces where the notion of isometry is that the map preserves the norm. In particular, an isometry of $\mathbb{R}^n$ can be expressed as $T(v)=Rv+b$ where $R^TR=I$. The significance of such a transformation is that it provides rigid motions of Euclidean $n$-space. Two objects are congruent in the sense of high school geometry if and only if some rigid motion carries one object to the other.
My point? This is the context from which orthogonal matrices gain their name. They correspond to orthogonal transformations. Of course, these break into reflections and rotations according to $\text{det}(R)= -1,1$ respectively.
Likely thought: we should just call such transformations orthonormal transformations. I suppose that would be a choice of vernacular. However, we don't, so... they're not orthonormal matrices. But, I totally agree, this is just a choice of terminologies. Here's another: since $R^TR=I$ implies $R$ is either a rotation or reflection let's call the set of such matrices rotoreflective matrices. In any event, I would advocate the change of terminology you advocate, but, the current terminology is pretty well set at this point, so, good luck if you wish to change this culture.
• +1 Yes it definitely comes from this context. (Although I would hesitate to lump "rotoreflective" transformations and "rigid motions" in with each other. My impression is that preserving orientation is a requirement for rigid motions.) – rschwieb Jun 18 '14 at 14:53
• @rschwieb that may be, let me check... well, not in O'neill's Elementary Differential Geometry (which I happen to have sitting here) on page 100 he indicates rigid motion as just another name for isometry. I suppose, the question is, is a reflection a rigid motion? – James S. Cook Jun 18 '14 at 15:00
• Yeah, it's just a terminology decision. If "rigid" just means "doesn't change the distance relationships between points," then it's just an isometry, but for some authors rigid implies orientation preservation too. (Is the motion that carries you onto your mirror reflection rigid? :) ) Either way is OK, I just wanted to mention the topic. I wouldn't go so far as to say anything is "wrong" with either one. – rschwieb Jun 18 '14 at 15:04
• @rschwieb it could be worse, at least we only have two components to worry about here :) – James S. Cook Jun 18 '14 at 15:10
• Well, reflections are limited to having determinant $\pm 1$ since $\det(T)^2=1$ only has two solutions over any field, and rotations are products of reflections, so it can't be any worse, right? – rschwieb Jun 18 '14 at 15:17
A matrix $V$ with mutually orthogonal columns is called orthogonal because it maps each of the standard orthogonal coordinate directions to a new set of mutually orthogonal coordinate directions. For example, spherical coordinates are orthogonal because $$x=r\sin\phi\cos\theta,\;\; y=r\sin\phi\sin\theta,\;\; z = r\cos\phi$$ maps the coordinate lines where $r$ alone varies, $\phi$ alone varies, and $\theta$ alone varies into mutually orthogonal curves in $\mathbb{R}^{3}$. This is evidenced in the Jacobian matrix whose columns are mutually orthogonal: $$\frac{\partial(x,y,z)}{\partial(r,\phi,\theta)} = \left[\begin{array}{ccc} \sin\phi\cos\theta & r\cos\phi\cos\theta & -r\sin\phi\sin\theta \\ \sin\phi\sin\theta & r\cos\phi\sin\theta & r\sin\phi\cos\theta \\ \cos\phi & -r\sin\phi & 0 \end{array}\right]$$ It is well known that the spherical coordinate system is orthogonal because the three different coordinate lines always intersect orthogonally. The determinant of the Jacobian matrix for this orthogonal transformation is easy to determine: it is the product of the lengths of the three columns of the Jacobian matrix, which is the product of the standard spherical distance scale factors $\frac{dl}{dr}=1$, $\frac{dl}{d\phi}=r$, $\frac{dl}{d\theta}=r\sin\phi$. The standard spherical coordinate volume element is the product of these: $dV = r^{2}\sin\phi\,dr\,d\phi\,d\theta$.
A non-zero unitary matrix goes one step further: any set of orthogonal vectors is mapped to another, which is much stronger than the standard coordinate directions being mapped to mutually orthogonal directions. This condition implies that there is a constant $C$ such that the unitary matrix maps all unit vectors to vectors of length $C$. After renormalizing by dividing by $C$, the unitary matrix is distance-preserving and preserves angles: $(Vx,Vy)/\|Vx\|\|Vy\|=(x,y)/\|x\|\|y\|$.
• Isn't unitary just the analogue of orthogonal in the case complex case? I don't think unitary is stronger at all since both preserve scalar product completely so all properties enhanced in a vector... – C-Star-W-Star Jun 18 '14 at 8:22
• -1: A matrix with mutually orthogonal columns is not called orthogonal. That is the entire point of the question. – Rahul Jun 18 '14 at 9:38
• @Rahul: You can argue over how the terminology is used for a matrix, but I answered the reason behind calling a coordinate transformation orthogonal. I described what the transformation being orthogonal means--regardless of your preferred definition--and how it relates to mutually orthogonal columns. – DisintegratingByParts Jun 18 '14 at 20:49
• @Freeze_S : If you're only talking about matrices, then orthogonal maps coordinate lines in one orthogonal frame to orthogonal lines in another. Unitary does the same thing, but in complex spaces. For smooth coordinate systems, orthogonal is a little different. Calculus orthogonal coordinate changes are very old, and important since the time of Fourier. Most significant linear theory--in one way or another--grew out of Fourier analysis. Abstract linear (esp inner-product) spaces definitely have their roots in Fourier's Principle of Superposition, orthogonal expansion, Separation of Variables. – DisintegratingByParts Jun 18 '14 at 21:01
• I'm afraid you're mistaken. The definition of an orthogonal transformation is well established (Wikipedia, MathWorld, Encyclopedia of Mathematics), and it's not what you say it is. Your post is correct insofar as you're talking about orthogonal coordinate systems, but they're not what the question is about, and the Jacobian of an orthogonal coordinate system is not necessarily an orthogonal matrix. – Rahul Jun 18 '14 at 22:42
Preservation of Structure
First of all note that any matrix represents a linear transformation: $$T(x):=Ax\quad$$ (That is what is of most interest.)
Now a quick calculation shows: $$\langle T(x),T(y)\rangle=\langle Ax,Ay\rangle=(Ax)^\intercal(Ay)=x^\intercal A^\intercal Ay$$
So a linear operator preserves the scalar product iff its adjoint is a left inverse: $$\langle T(x),T(y)\rangle\equiv\langle x,y\rangle\iff A^\intercal A=1$$ That is, it is linear and preserves angles and lengths, especially orthogonality and normalization. These transformations are the morphisms between scalar product spaces and we call them orthogonal (see orthogonal transformations).
Unfortunately I guess that is not where the name comes from historically. But one should keep in mind that a statement about column and row vectors is fancy but also special and hides what is really happening...
Invertibility
First of all note that if it is bijective, hence invertible, its inverse will be linear too and also preserves the scalar product automatically. Thus it is an isomorphism then.
Now, a linear transformation that preserves the scalar product is necessarily injective: $$T(x)=0\implies 0=\|T(x)\|^2=\langle T(x),T(x)\rangle=\langle x,x\rangle=\|x\|^2\implies x=0$$ However it might fail to be surjective in general - take as an example the right-shift operator.
If it happens to be surjective too, so bijective, then it has an inverse matrix: $$A^{-1}A=1\text{ and }AA^{-1}=1$$ But since the inverse is unique we have in this case: $$A^\intercal=A^{-1}$$ Concluding that the isomorphisms are given by matrices that satisfy: $$A^\intercal A=1\text{ and }AA^\intercal=1$$
In the finite dimensional case surjectivity directly follows by the rank nullity theorem. Thus it is enough then to check that the transpose matrix is either left inverse or right inverse instead of checking both. That check goes for any matrix to conclude injectivity by surjectivity or vice versa.
Annotation
The rank nullity theorem states that: $$\dim\mathrm{dom}T=\dim\ker T+\dim\mathrm{ran}T$$
• None of this seems to address the actual question. – Santiago Canez Jun 18 '14 at 14:08
• Yes I was afraid already that somebody might doubt that this actually answers the question... However what is hidden behind all this is that the essence should be that the name was more or less a historical accident so that the crucial point one should keep in mind is that these matrices preserve structure especially orthogonality. – C-Star-W-Star Jun 18 '14 at 14:52
• @Freeze_S To give some more detailed feedback: the first point is highly relevant, but not very clearly explained. Things starting at "invertibility" and downward look irrelevant. – rschwieb Jun 18 '14 at 15:06
• Things starting after "invertibility look irrelevant however happen to be very much relevant as orthogonal transformation (and their associated orthogonal matrices) are the ones that preserve the structure and surely that's where their name comes from (orthogonality and normalization is preserved iff the transformation is linear and orthogonal) but the definition is not taken as $A^\intercal A=1$ only but together with $AA^\intercal=1$. This being no restriction in the finite dimensional case compared to only requiring $A^\intercal A=1$ is explained in the part on invertibility. – C-Star-W-Star Jun 19 '14 at 6:09
• After all transformations that preserve orthonormality in any case are given precisely by $A^\intercal A=1$ and not more... – C-Star-W-Star Jun 19 '14 at 6:10
I would like to explain orthogonality in terms of vectors which explains why orthogonal matrix is called orthogonal matrix.
Dot product of 2 orthogonal vectors is 0.
Assume vectors,$$\ \vec a= \left[ \begin{matrix} a_1 \\ a_2 \\ a_3 \\ ... \\ a_n \end{matrix} \right] \ and \ \vec b = \left[ \begin{matrix} b_1 \\ b_2 \\ b_3 \\ ... \\ b_n \end{matrix} \right]$$.
Now the dot product between the vectors can be computed as: $$(\vec a)^T\cdot\vec b = a_1 b_1+a_2 b_2+a_3 b_3+\dots+a_n b_n$$
Assume each of the columns in matrix Q as a vector. A matrix Q is an orthogonal matrix if each column vector is orthogonal to the other column vectors in the matrix Q.
So, for every column i and column j in matrix Q, if they have to be orthogonal to each other, the dot product across every column i with every column j should be 0, when i is not equal to j.
Also, the magnitude of every column needs to be same. So, let's say that when i=j, the value is constant 1.
Therefore the dot product between column $i$ of matrix Q and column $j$ of matrix Q can be written as $$q_i^T\,q_j = \begin{cases} 0, & i \ne j \\ 1, & i = j \end{cases}$$
So, computing the dot products across all columns can be done simultaneously using $$Q^T Q$$
The result will be $$I_n$$
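As a concrete instance of this computation, take the standard $2\times 2$ rotation matrix; its columns are orthonormal by the Pythagorean identity, and the off-diagonal dot products vanish:
$$Q=\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix},\qquad Q^TQ=\begin{bmatrix}\cos^2\theta+\sin^2\theta & -\cos\theta\sin\theta+\sin\theta\cos\theta\\ -\sin\theta\cos\theta+\cos\theta\sin\theta & \sin^2\theta+\cos^2\theta\end{bmatrix}=I_2$$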
• Welcome to MSE! – José Carlos Santos Jan 26 at 15:11
• "A matrix Q is an orthogonal matrix if each column vector is orthogonal to the other column vectors in the matrix Q." No, it's not. A matrix is orthogonal if the columns are orthonormal. That is the entire point of the question. – Rahul Jan 26 at 17:37
• Thanks for welcoming, @JoséCarlosSantos – Madhusudan N May 2 at 6:38
|
2019-08-23 22:23:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8664904832839966, "perplexity": 380.0869493715601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319082.81/warc/CC-MAIN-20190823214536-20190824000536-00397.warc.gz"}
|
https://www.ideals.illinois.edu/handle/2142/71203
|
Description
Title: Weak Radon-Nikodym Sets in Dual Banach Spaces
Author(s): Riddle, Lawrence Hollister
Department / Program: Mathematics
Discipline: Mathematics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Mathematics
Abstract: The interplay between geometry, topology, measure theory and operator theory has long been evident in the study of the Radon-Nikodym property. Recently results of substantial interest in the structure of Banach spaces have been obtained by localizing these ideas to individual subsets. The study of the Radon-Nikodym property for subsets of Banach spaces can be thought of as the study of subsets of Banach spaces whose structural properties mimic those of the unit ball of a separable dual space. In this thesis I initiate the study of geometric, topological, measure theoretic and operator theoretic characterizations of convex weak*-compact subsets of dual Banach spaces whose structural properties mimic those of the unit ball of the dual of a space that contains no copy of the sequence space l_1. These sets are described in terms of the Radon-Nikodym property for the Pettis integral, Dunford-Pettis operators, points of weak*-continuity and universal weak*-measurability of linear functionals in the second dual, extreme points, Rademacher trees, dentability and convergent martingales. By and large the work is based on a factorization theorem that says that an operator T : X → Y factors through a Banach space containing no copy of l_1 if and only if the adjoint operator T* maps the unit ball of Y* into a set with the Radon-Nikodym property for the Pettis integral. Also included in the thesis are several sufficient conditions for Pettis integrability. Using a deep theorem of Bourgain, Fremlin and Talagrand, I show that every bounded universally scalarly measurable function from a compact Hausdorff space into the dual of a separable space is universally Pettis integrable. In addition, I use a property of families of real-valued functions formulated by Jean Bourgain in order to recognize Pettis integrable functions into dual spaces.
Issue Date: 1982
Type: Text
Description: 107 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1982.
URI: http://hdl.handle.net/2142/71203
Other Identifier(s): (UMI)AAI8218549
Date Available in IDEALS: 2014-12-16
Date Deposited: 1982
|
2016-10-21 09:16:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8035753965377808, "perplexity": 917.2471115393447}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718034.77/warc/CC-MAIN-20161020183838-00367-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/explore-suppose-drew-set-cartesian-coordinates-baseball-field-home-base-origin-x-axis-home-q3030450
|
Explore
Suppose you drew a set of Cartesian coordinates on a baseball field with home base at the origin, the x axis from home to first base, and the y axis from home to third base. If the Cartesian coordinates of a player on the field are x = 42 m and y = 54 m, as measured in the "baseline" coordinate system described, what are the player's polar coordinates using the same set of axes?
Conceptualize
The paths from home plate to first base and from home plate to third base are perpendicular and can be thought of as coordinate axes.
Categorize
The positions all lie nearly in a plane, and the coordinates asked for are the polar coordinates measured in terms of the axes defined as described. So this is a problem of converting between Cartesian coordinates and polar coordinates for locations in a plane.
Analyze
Find the polar coordinates of the player (the player's distance from home base and the angle).
r = _________ m
θ = _________ °
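For reference, a worked conversion using the standard relations $r=\sqrt{x^2+y^2}$ and $\theta=\tan^{-1}(y/x)$ (angle measured from the first-base axis):
$r=\sqrt{(42\ \text{m})^2+(54\ \text{m})^2}=\sqrt{4680\ \text{m}^2}\approx 68.4\ \text{m},\qquad \theta=\tan^{-1}\!\left(\frac{54}{42}\right)\approx 52^\circ$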
|
2015-10-09 16:32:33
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9010445475578308, "perplexity": 548.2046485090133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737933027.70/warc/CC-MAIN-20151001221853-00238-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/calculating-plutos-velocity-angular-momentum.798263/
|
Homework Help: Calculating Pluto's velocity (angular momentum)
1. Feb 16, 2015
J3551C4
1. Pluto moves in a fairly elliptical orbit around the sun. Pluto's speed at its closest approach of 4.43×10^9 km is 6.12 km/s.
2. Relevant equations: L=mvr, F_g: (GMm/r^2), A_c: mv^2/r
3. The attempt at a solution:
I found the answer here, but I'm more interested in why we would use angular momentum. I took L=mvr and used the given variables to set Pluto's momentum at its furthest point to its momentum at its closest, so I went from L=mvr to m_i v_i r_i = m_f v_f r_f.
The masses cancel, so the equation simplifies to v_i r_i = v_f r_f. I only came to this conclusion after a classmate hinted at me to think about angular momentum, so I'm still confused as to why angular momentum is the key to solving this, rather than Newton's universal law of gravitation set to centripetal acceleration, which gives me the wrong answer.
I'm sorry if I was unclear on anything, thanks for your time.
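Written out, the conservation-of-angular-momentum step described above (with r_f the aphelion distance, which the quoted problem statement does not include): m·v_i·r_i = m·v_f·r_f, so v_f = v_i·(r_i/r_f), with v_i = 6.12 km/s and r_i = 4.43×10^9 km from the problem statement.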
2. Feb 16, 2015
Staff: Mentor
Hello J3551C4 (Jessica?), Welcome to Physics Forums.
Was there more to the problem statement? I don't see an actual question. It would appear from your attempt that you're looking for the velocity at its furthest distance from the Sun, but you haven't mentioned what that distance is.
EDIT: Okay, I see that your question is actually in the solution attempt rather than the problem statement.
Judging by what I can see of the problem statement and thread title it looks like you're looking for Pluto's velocity at aphelion (its furthest distance from the Sun), so it is an unknown quantity.
You can't evaluate the centripetal acceleration at that location without knowing the velocity (or angular velocity), so your proposed method of equating centripetal acceleration to gravitational acceleration there isn't viable.
Last edited: Feb 16, 2015
3. Feb 16, 2015
J3551C4
Hello gneiLL, yes it does stand for Jessica, and thank you.
I'm sorry for the wonky format, I will try to adhere more exactly to the template in the future. Thank you very much for your explanation, it makes sense and is a lot simpler than I was trying to make it. It seems I need to go back over some concepts. Thanks again.
|
2018-07-19 12:22:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8570852279663086, "perplexity": 642.1871220443223}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590866.65/warc/CC-MAIN-20180719105750-20180719125750-00391.warc.gz"}
|
http://cs.stackexchange.com/questions/3399/finding-the-minimum-cut-of-an-undirected-graph
|
# Finding the minimum cut of an undirected graph
Here's a question from a past exam I'm trying to solve:
For an undirected graph $G$ with positive weights $w(e) \geq 0$, I'm trying to find the minimum cut. I don't know other ways of doing that besides using the max-flow min-cut theorem. But the graph is undirected, so how should I direct it? I thought of directing edges on both ends, but then which vertex would be the source and which vertex would be the sink? Or is there another way to find the minimum cut?
If you don't have source and target in the original graph, I guess you'll have to try multiple choices. (For any given $s$ and $t$, the minimal cut may not separate the two.) – Raphael Sep 2 '12 at 13:14
Are you trying to find the min-cut for given source and sink nodes or the min-cut of the graph? – Peter Sep 2 '12 at 13:25
@Peter: The min cut of the graph. – Jozef Sep 2 '12 at 19:00
There are plenty of algorithms for finding the min-cut of an undirected graph. Karger's algorithm is a simple yet effective randomized algorithm.
In short, the algorithm works by selecting edges uniformly at random and contracting them, with self-loops removed. The process halts when two nodes remain, and those two nodes represent a cut. To increase the probability of success, the randomized algorithm is run several times. Across the runs, one keeps track of the smallest cut found so far.
See the Wikipedia entry for more details. For perhaps a better introduction, check out the first chapter of Probability and Computing: Randomized Algorithms and Probabilistic Analysis by Michael Mitzenmacher and Eli Upfal.
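For a sense of how many repetitions are needed (this is the standard analysis of the contraction algorithm, not from the thread itself): a single run outputs a particular minimum cut with probability at least $\binom{n}{2}^{-1}=\frac{2}{n(n-1)}$, so running it $O(n^2\log n)$ times independently and keeping the smallest cut seen fails to find a minimum cut with probability at most $\left(1-\tfrac{2}{n(n-1)}\right)^{c\,n^2\log n}\le n^{-\Omega(1)}$.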
Is this an approximation algorithm? – Strin Dec 10 '12 at 4:11
@Strin It's a randomized algorithm that finds the minimum cut with high probability. – Juho Dec 10 '12 at 17:42
I don't think Karger's is appropriate for finding a cut of minimum weight. The derivation of the probability that it finds a minimum cut is dependent on it finding a minimum-cardinality cut; Karger's is very unlikely to find a minimum cut with many lightweight edges. – Sumudu Fernando May 12 '14 at 5:42
For every undirected edge $(u,v, weight)$ create two directed edges $(u,v, weight)$ and $(v,u,weight)$.
...but then which vertex would be the source and which vertex would be the sink?
Doesn't matter.
Ford-Fulkerson algorithm should work for you. You can create two fake vertices viz. the source and sink.
Also have a look at the Edmonds-Karp algorithm. There are two variations of it, as opposed to Ford-Fulkerson, which picks an arbitrary augmenting path:
1. One version picks the shortest path
2. The other picks a path with the maximum capacity
This is a good resource.
Welcome to cs.stackexchange! It might help the OP if you could further explain how the fake vertices are connected to the existing graph. And what will be the edge weights of the new edges. – Paresh Dec 10 '12 at 10:17
|
2015-07-31 23:28:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6915037035942078, "perplexity": 523.7058188189792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988317.67/warc/CC-MAIN-20150728002308-00069-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://www.caltech.edu/campus-life-events/calendar/physics-colloquium-16
|
# Physics Colloquium - CANCELED
Thursday, May 12, 2022
4:00pm to 5:00pm
Online and In-Person Event
Towards a theory of strange quantum metals
Senthil Todadri, Massachusetts Institute of Technology,
Electrons in a conventional metal are described by Landau's celebrated theory of Fermi liquids. In the last few decades a growing number of metals have been discovered that defy a description in terms of Fermi liquid theory. Prominently, such "strange metals" appear as parent phases out of which phenomena such as high temperature superconductivity develop. However their theoretical understanding has mostly remained mysterious. In this talk, I will discuss, in great generality, some properties of "strange metals" in an ideal clean system. I will discuss general constraints on the emergent low energy symmetries of any such strange metal.
I will show how these model-independent considerations lead to concrete experimental predictions about a class of strange metals. Time permitting, I will discuss the utility of a focus on the emergent symmetries to reliably extract some physical properties of certain models of strange metals.
All attendees must show valid Caltech ID upon entry.
Join via Zoom: https://caltech.zoom.us/j/89237465190. Meeting ID: 892 3746 5190
|
2022-06-27 20:10:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18163569271564484, "perplexity": 3131.738846046345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00298.warc.gz"}
|
https://ch.mathworks.com/help/ident/ug/correlation-analysis-algorithm.html
|
## Correlation Analysis Algorithm
Correlation analysis refers to methods that estimate the impulse response of a linear model, without specific assumptions about model orders.
The impulse response, g, is the system's output when the input is an impulse signal. The output response to a general input, u(t), is obtained as the convolution with the impulse response. In continuous time:
`$y\left(t\right)={\int }_{-\infty }^{t}g\left(\tau \right)u\left(t-\tau \right)d\tau$`
In discrete-time:
`$y\left(t\right)=\sum _{k=1}^{\infty }g\left(k\right)u\left(t-k\right)$`
The values of g(k) are the discrete time impulse response coefficients.
You can estimate the values from observed input-output data in several different ways. `impulseest` estimates the first n coefficients using the least-squares method to obtain a finite impulse response (FIR) model of order n.
Several important options are associated with the estimate:
• Prewhitening — The input can be pre-whitened by applying an input-whitening filter of order `PW` to the data. This minimizes the effect of the neglected tail (`k > n`) of the impulse response.
1. A filter of order `PW` is applied such that it whitens the input signal `u`:
`Au = e`, where `A` is a polynomial and `e` is white noise.
2. The inputs and outputs are filtered using the filter:
`uf = Au`, `yf = Ay`
3. The filtered signals `uf` and `yf` are used for estimation.
You can specify prewhitening using the `PW` name-value pair argument of `impulseestOptions`.
• Regularization — The least-squares estimate can be regularized. This means that a prior estimate of the decay and mutual correlation among `g(k)` is formed and used to merge with the information about `g` from the observed data. This gives an estimate with less variance, at the price of some bias. You can choose one of the several kernels to encode the prior estimate.
This option is essential because, often, the model order `n` can be quite large. In cases where there is no regularization, `n` can be automatically decreased to secure a reasonable variance.
You can specify the regularizing kernel using the `RegularizationKernel` Name-Value pair argument of `impulseestOptions`.
• Autoregressive Parameters — The basic underlying FIR model can be complemented by `NA` autoregressive parameters, making it an ARX model.
`$y\left(t\right)=\sum _{k=1}^{n}g\left(k\right)u\left(t-k\right)-\sum _{k=1}^{NA}{a}_{k}y\left(t-k\right)$`
This gives both better results for small `n` and allows unbiased estimates when data are generated in closed loop. `impulseest` uses NA = 5 for t>0 and NA = 0 (no autoregressive component) for t<0.
• Noncausal effects — Response for negative lags. It may happen that the data has been generated partly by output feedback:
`$u\left(t\right)=\sum _{k=0}^{\infty }h\left(k\right)y\left(t-k\right)+r\left(t\right)$`
where h(k) is the impulse response of the regulator and r is a setpoint or disturbance term. The existence and character of such feedback h can be estimated in the same way as g, simply by trading places between y and u in the estimation call. Using `impulseest` with an indication of negative delays returns a model `mi` with an impulse response
`$\left[h\left(-nk\right),h\left(-nk-1\right),...,h\left(0\right),g\left(1\right),g\left(2\right),...,g\left(nb+nk\right)\right]$`
aligned so that it corresponds to lags $\left[nk,nk+1,..,0,1,2,...,nb+nk\right]$. This is achieved because the input delay (`InputDelay`) of model `mi` is `nk`.
For a multi-input multi-output system, the impulse response g(k) is an ny-by-nu matrix, where ny is the number of outputs and nu is the number of inputs. The ij element of the matrix g(k) describes the behavior of the ith output after an impulse in the jth input.
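A minimal estimation sketch that ties the options above together (the data set, FIR order, and option values here are placeholder choices, not taken from this page):
% z is an iddata object holding measured output y, input u, and sample time Ts
z = iddata(y, u, Ts);
% FIR order 60, input prewhitening of order 10, and a regularization kernel
opt = impulseestOptions('PW', 10, 'RegularizationKernel', 'TC');
sys = impulseest(z, 60, opt);
% Plot the estimated impulse-response coefficients g(k)
impulseplot(sys)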
|
2019-11-21 08:34:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.726073145866394, "perplexity": 1034.0770609066074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00053.warc.gz"}
|
https://www.physicsforums.com/threads/frequency-of-light.233237/
|
# Frequency of light
## Homework Statement
The speed of light is $3 \times 10^8$ m/s. Blue light has a wavelength of about 450 nm (1 nm = $10^{-9}$ m). Calculate the frequency of this light.
## Homework Equations
$v = f\lambda$
## The Attempt at a Solution
$3 \times 10^8 = f \times 450$
$f = \frac{3 \times 10^8}{450}$
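One thing to watch in the attempt above: the wavelength has to be converted to metres before dividing, i.e. $450\ \text{nm} = 450 \times 10^{-9}\ \text{m}$, which gives
$f = \frac{c}{\lambda} = \frac{3 \times 10^{8}\ \text{m/s}}{450 \times 10^{-9}\ \text{m}} \approx 6.7 \times 10^{14}\ \text{Hz}$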
|
2021-03-03 12:56:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5668913722038269, "perplexity": 2376.315848124668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366959.54/warc/CC-MAIN-20210303104028-20210303134028-00318.warc.gz"}
|
https://gist.github.com/jjhelmus/85446a2ccaaadbc08472
|
# jjhelmus/stats.html
Created Sep 23, 2015
Statistical functions (scipy.stats) — SciPy v0.16.0 Reference Guide
rv_continuous([momtype, a, b, xtol, ...]) A generic continuous random variable class meant for subclassing.
rv_discrete([a, b, name, badvalue, ...]) A generic discrete random variable class meant for subclassing.
Continuous distributions
alpha An alpha continuous random variable.
anglit An anglit continuous random variable.
arcsine An arcsine continuous random variable.
beta A beta continuous random variable.
betaprime A beta prime continuous random variable.
burr A Burr continuous random variable.
cauchy A Cauchy continuous random variable.
chi A chi continuous random variable.
chi2 A chi-squared continuous random variable.
cosine A cosine continuous random variable.
dgamma A double gamma continuous random variable.
dweibull A double Weibull continuous random variable.
erlang An Erlang continuous random variable.
expon An exponential continuous random variable.
exponnorm An exponentially modified Normal continuous random variable.
exponweib An exponentiated Weibull continuous random variable.
exponpow An exponential power continuous random variable.
f An F continuous random variable.
fatiguelife A fatigue-life (Birnbaum-Saunders) continuous random variable.
fisk A Fisk continuous random variable.
foldcauchy A folded Cauchy continuous random variable.
foldnorm A folded normal continuous random variable.
frechet_r A Frechet right (or Weibull minimum) continuous random variable.
frechet_l A Frechet left (or Weibull maximum) continuous random variable.
genlogistic A generalized logistic continuous random variable.
gennorm A generalized normal continuous random variable.
genpareto A generalized Pareto continuous random variable.
genexpon A generalized exponential continuous random variable.
genextreme A generalized extreme value continuous random variable.
gausshyper A Gauss hypergeometric continuous random variable.
gamma A gamma continuous random variable.
gengamma A generalized gamma continuous random variable.
genhalflogistic A generalized half-logistic continuous random variable.
gilbrat A Gilbrat continuous random variable.
gompertz A Gompertz (or truncated Gumbel) continuous random variable.
gumbel_r A right-skewed Gumbel continuous random variable.
gumbel_l A left-skewed Gumbel continuous random variable.
halfcauchy A Half-Cauchy continuous random variable.
halflogistic A half-logistic continuous random variable.
halfnorm A half-normal continuous random variable.
halfgennorm The upper half of a generalized normal continuous random variable.
hypsecant A hyperbolic secant continuous random variable.
invgamma An inverted gamma continuous random variable.
invgauss An inverse Gaussian continuous random variable.
invweibull An inverted Weibull continuous random variable.
johnsonsb A Johnson SB continuous random variable.
johnsonsu A Johnson SU continuous random variable.
ksone General Kolmogorov-Smirnov one-sided test.
kstwobign Kolmogorov-Smirnov two-sided test for large N.
laplace A Laplace continuous random variable.
logistic A logistic (or Sech-squared) continuous random variable.
loggamma A log gamma continuous random variable.
loglaplace A log-Laplace continuous random variable.
lognorm A lognormal continuous random variable.
lomax A Lomax (Pareto of the second kind) continuous random variable.
maxwell A Maxwell continuous random variable.
mielke A Mielke’s Beta-Kappa continuous random variable.
nakagami A Nakagami continuous random variable.
ncx2 A non-central chi-squared continuous random variable.
ncf A non-central F distribution continuous random variable.
nct A non-central Student’s T continuous random variable.
norm A normal continuous random variable.
pareto A Pareto continuous random variable.
pearson3 A pearson type III continuous random variable.
powerlaw A power-function continuous random variable.
powerlognorm A power log-normal continuous random variable.
powernorm A power normal continuous random variable.
rdist An R-distributed continuous random variable.
reciprocal A reciprocal continuous random variable.
rayleigh A Rayleigh continuous random variable.
rice A Rice continuous random variable.
recipinvgauss A reciprocal inverse Gaussian continuous random variable.
semicircular A semicircular continuous random variable.
t A Student’s T continuous random variable.
triang A triangular continuous random variable.
truncexpon A truncated exponential continuous random variable.
truncnorm A truncated normal continuous random variable.
tukeylambda A Tukey-Lambda continuous random variable.
uniform A uniform continuous random variable.
vonmises A Von Mises continuous random variable.
wald A Wald continuous random variable.
weibull_min A Frechet right (or Weibull minimum) continuous random variable.
weibull_max A Frechet left (or Weibull maximum) continuous random variable.
wrapcauchy A wrapped Cauchy continuous random variable.
Multivariate distributions
multivariate_normal A multivariate normal random variable.
dirichlet A Dirichlet random variable.
wishart A Wishart random variable.
invwishart An inverse Wishart random variable.
Discrete distributions
bernoulli A Bernoulli discrete random variable.
binom A binomial discrete random variable.
boltzmann A Boltzmann (Truncated Discrete Exponential) random variable.
dlaplace A Laplacian discrete random variable.
geom A geometric discrete random variable.
hypergeom A hypergeometric discrete random variable.
logser A Logarithmic (Log-Series, Series) discrete random variable.
nbinom A negative binomial discrete random variable.
planck A Planck discrete exponential random variable.
poisson A Poisson discrete random variable.
randint A uniform discrete random variable.
skellam A Skellam discrete random variable.
zipf A Zipf discrete random variable.
Statistical functions
Several of these functions have a similar version in scipy.stats.mstats which work for masked arrays.
describe(a[, axis, ddof]) Computes several descriptive statistics of the passed array.
gmean(a[, axis, dtype]) Compute the geometric mean along the specified axis.
hmean(a[, axis, dtype]) Calculates the harmonic mean along the specified axis.
kurtosis(a[, axis, fisher, bias]) Computes the kurtosis (Fisher or Pearson) of a dataset.
kurtosistest(a[, axis]) Tests whether a dataset has normal kurtosis
mode(a[, axis]) Returns an array of the modal (most common) value in the passed array.
moment(a[, moment, axis]) Calculates the nth moment about the mean for a sample.
normaltest(a[, axis]) Tests whether a sample differs from a normal distribution.
skew(a[, axis, bias]) Computes the skewness of a data set.
skewtest(a[, axis]) Tests whether the skew is different from the normal distribution.
kstat(data[, n]) Return the nth k-statistic (1<=n<=4 so far).
kstatvar(data[, n]) Returns an unbiased estimator of the variance of the k-statistic.
tmean(a[, limits, inclusive]) Compute the trimmed mean.
tvar(a[, limits, inclusive]) Compute the trimmed variance
tmin(a[, lowerlimit, axis, inclusive]) Compute the trimmed minimum
tmax(a[, upperlimit, axis, inclusive]) Compute the trimmed maximum
tstd(a[, limits, inclusive]) Compute the trimmed sample standard deviation
tsem(a[, limits, inclusive]) Compute the trimmed standard error of the mean.
nanmean(*args, **kwds) nanmean is deprecated!
nanstd(*args, **kwds) nanstd is deprecated!
nanmedian(*args, **kwds) nanmedian is deprecated!
variation(a[, axis]) Computes the coefficient of variation, the ratio of the biased standard deviation to the mean.
cumfreq(a[, numbins, defaultreallimits, weights]) Returns a cumulative frequency histogram, using the histogram function.
histogram2(*args, **kwds) histogram2 is deprecated!
histogram(a[, numbins, defaultlimits, ...]) Separates the range into several bins and returns the number of instances in each bin.
itemfreq(a) Returns a 2-D array of item frequencies.
percentileofscore(a, score[, kind]) The percentile rank of a score relative to a list of scores.
scoreatpercentile(a, per[, limit, ...]) Calculate the score at a given percentile of the input sequence.
relfreq(a[, numbins, defaultreallimits, weights]) Returns a relative frequency histogram, using the histogram function.
binned_statistic(x, values[, statistic, ...]) Compute a binned statistic for a set of data.
binned_statistic_2d(x, y, values[, ...]) Compute a bidimensional binned statistic for a set of data.
binned_statistic_dd(sample, values[, ...]) Compute a multidimensional binned statistic for a set of data.
obrientransform(*args) Computes the O’Brien transform on input data (any number of arrays).
signaltonoise(*args, **kwds) signaltonoise is deprecated!
bayes_mvs(data[, alpha]) Bayesian confidence intervals for the mean, var, and std.
mvsdist(data) ‘Frozen’ distributions for mean, variance, and standard deviation of data.
sem(a[, axis, ddof]) Calculates the standard error of the mean (or standard error of measurement) of the values in the input array.
zmap(scores, compare[, axis, ddof]) Calculates the relative z-scores.
zscore(a[, axis, ddof]) Calculates the z score of each value in the sample, relative to the sample mean and standard deviation.
sigmaclip(a[, low, high]) Iterative sigma-clipping of array elements.
threshold(a[, threshmin, threshmax, newval]) Clip array to a given value.
trimboth(a, proportiontocut[, axis]) Slices off a proportion of items from both ends of an array.
trim1(a, proportiontocut[, tail]) Slices off a proportion of items from ONE end of the passed array distribution.
f_oneway(*args) Performs a 1-way ANOVA.
pearsonr(x, y) Calculates a Pearson correlation coefficient and the p-value for testing non-correlation.
spearmanr(a[, b, axis]) Calculates a Spearman rank-order correlation coefficient and the p-value to test for non-correlation.
pointbiserialr(x, y) Calculates a point biserial correlation coefficient and the associated p-value.
kendalltau(x, y[, initial_lexsort]) Calculates Kendall’s tau, a correlation measure for ordinal data.
linregress(x[, y]) Calculate a regression line
theilslopes(y[, x, alpha]) Computes the Theil-Sen estimator for a set of points (x, y).
ttest_1samp(a, popmean[, axis]) Calculates the T-test for the mean of ONE group of scores.
ttest_ind(a, b[, axis, equal_var]) Calculates the T-test for the means of TWO INDEPENDENT samples of scores.
ttest_ind_from_stats(mean1, std1, nobs1, ...) T-test for means of two independent samples from descriptive statistics.
ttest_rel(a, b[, axis]) Calculates the T-test on TWO RELATED samples of scores, a and b.
kstest(rvs, cdf[, args, N, alternative, mode]) Perform the Kolmogorov-Smirnov test for goodness of fit.
chisquare(f_obs[, f_exp, ddof, axis]) Calculates a one-way chi square test.
power_divergence(f_obs[, f_exp, ddof, axis, ...]) Cressie-Read power divergence statistic and goodness of fit test.
ks_2samp(data1, data2) Computes the Kolmogorov-Smirnov statistic on 2 samples.
mannwhitneyu(x, y[, use_continuity]) Computes the Mann-Whitney rank test on samples x and y.
tiecorrect(rankvals) Tie correction factor for ties in the Mann-Whitney U and Kruskal-Wallis H tests.
rankdata(a[, method]) Assign ranks to data, dealing with ties appropriately.
ranksums(x, y) Compute the Wilcoxon rank-sum statistic for two samples.
wilcoxon(x[, y, zero_method, correction]) Calculate the Wilcoxon signed-rank test.
kruskal(*args) Compute the Kruskal-Wallis H-test for independent samples
friedmanchisquare(*args) Computes the Friedman test for repeated measurements
combine_pvalues(pvalues[, method, weights]) Methods for combining the p-values of independent tests bearing upon the same hypothesis.
ansari(x, y) Perform the Ansari-Bradley test for equal scale parameters
bartlett(*args) Perform Bartlett’s test for equal variances
levene(*args, **kwds) Perform Levene test for equal variances.
shapiro(x[, a, reta]) Perform the Shapiro-Wilk test for normality.
anderson(x[, dist]) Anderson-Darling test for data coming from a particular distribution
anderson_ksamp(samples[, midrank]) The Anderson-Darling test for k-samples.
binom_test(x[, n, p]) Perform a test that the probability of success is p.
fligner(*args, **kwds) Perform Fligner’s test for equal variances.
median_test(*args, **kwds) Mood’s median test.
mood(x, y[, axis]) Perform Mood’s test for equal scale parameters.
boxcox(x[, lmbda, alpha]) Return a positive dataset transformed by a Box-Cox power transformation.
boxcox_normmax(x[, brack, method]) Compute optimal Box-Cox transform parameter for input data.
boxcox_llf(lmb, data) The boxcox log-likelihood function.
entropy(pk[, qk, base]) Calculate the entropy of a distribution for given probability values.
Circular statistical functions
circmean(samples[, high, low, axis]) Compute the circular mean for samples in a range.
circvar(samples[, high, low, axis]) Compute the circular variance for samples assumed to be in a range
circstd(samples[, high, low, axis]) Compute the circular standard deviation for samples assumed to be in the range [low to high].
Contingency table functions
chi2_contingency(observed[, correction, lambda_]) Chi-square test of independence of variables in a contingency table.
contingency.expected_freq(observed) Compute the expected frequencies from a contingency table.
contingency.margins(a) Return a list of the marginal sums of the array a.
fisher_exact(table[, alternative]) Performs a Fisher exact test on a 2x2 contingency table.
Plot-tests
ppcc_max(x[, brack, dist]) Returns the shape parameter that maximizes the probability plot correlation coefficient for the given data to a one-parameter family of distributions.
ppcc_plot(x, a, b[, dist, plot, N]) Calculate and optionally plot probability plot correlation coefficient.
probplot(x[, sparams, dist, fit, plot]) Calculate quantiles for a probability plot, and optionally show the plot.
boxcox_normplot(x, la, lb[, plot, N]) Compute parameters for a Box-Cox normality plot, optionally show it.
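For a quick illustration of how a few of the functions listed above are used (an informal sketch, not part of the original reference page):

import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
a = stats.norm.rvs(loc=0.0, scale=1.0, size=200, random_state=rng)   # draw from a continuous distribution
b = stats.norm.rvs(loc=0.3, scale=1.0, size=200, random_state=rng)

print(stats.describe(a))       # descriptive statistics
print(stats.ttest_ind(a, b))   # two-sample t-test: statistic and p-value
print(stats.pearsonr(a, b))    # Pearson correlation and p-value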
|
2022-05-26 15:45:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8079699277877808, "perplexity": 2648.1558477717854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662606992.69/warc/CC-MAIN-20220526131456-20220526161456-00686.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/entry-level-assessment-page-xl/15
|
## Geometry: Common Core (15th Edition)
$-x(y-8)^2$
Substitute the given values of x and y: $=-(-2)(5-8)^2$
The order of operations tells us to solve terms in parentheses first: $=-(-2)(-3)^2$
The order of operations tells us to solve exponents next: $=-(-2)(9)$
Finally, work left to right multiplying terms; finding the negative of a number is the same as multiplying by -1: $=(2)(9)$ $=18$
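A quick sympy check of the same substitution (illustrative only):

import sympy as sp

x, y = sp.symbols('x y')
expr = -x * (y - 8)**2
print(expr.subs({x: -2, y: 5}))   # -> 18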
|
2019-11-21 07:53:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5134904980659485, "perplexity": 815.1543944873948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00490.warc.gz"}
|
https://www.physicsforums.com/threads/how-to-create-quantum-entangled-electrons.893098/
|
# How to create quantum entangled electrons
IsaiahvH
I understand that one way of creating quantum entangled electrons is by splitting a Cooper pair. Is then their spin property used in the measurement, as this must always sum to ##0## for a Cooper pair?
If that is the case, do quantum entangled electrons only exist in the singlet state, where the spin is always opposite to one another?
$$\frac{1}{\sqrt{2}}(\left|01\right> \pm \left|10 \right>)$$
Are there other methods of creating quantum entangled electrons?
A. Neumaier
Any atom (except for hydrogen atoms) is surrounded by several entangled electrons.
PeroK
I understand that one way of creating quantum entangled electrons is by splitting a Cooper pair. Is then their spin property used in the measurement, as this must always sum to ##0## for a Cooper pair?
If that is the case, do quantum entangled electrons only exist in the singlet state, where the spin is always opposite to one another?
$$\frac{1}{\sqrt{2}}(\left|01\right> \pm \left|10 \right>)$$
Are there other methods of creating quantum entangled electrons?
It seems to me you may be getting the terminology mixed up here.
The singlet state is ##|0 \ 0\rangle = \frac{1}{\sqrt{2}} (\uparrow \downarrow - \downarrow \uparrow)##.
The triplet is a combination of 3 states, so called because they share a common value for total spin:
##|1 \ 1\rangle =\ \uparrow \uparrow##
##|1 \ 0\rangle = \frac{1}{\sqrt{2}} (\uparrow \downarrow + \downarrow \uparrow)##
##|1 \ -\!1\rangle =\ \downarrow \downarrow##
Last edited:
Strilanc
It seems to me you may be getting the terminology mixed up here.
The singlet state is ##|0 \ 0\rangle = \frac{1}{\sqrt{2}} (\uparrow \downarrow - \downarrow \uparrow)##.
The triplet is a combination of 3 states, so called because they share a common value for total spin:
##|1 \ 1\rangle =\ \uparrow \uparrow##
##|1 \ 0\rangle = \frac{1}{\sqrt{2}} (\uparrow \downarrow + \downarrow \uparrow)##
##|1 \ -\!1\rangle =\ \downarrow \downarrow##
Where are you getting that notation from? In quantum information the singlet state is ##\frac{1}{\sqrt{2}} \left( |01\rangle - |10\rangle \right)##. Certainly not ##|00\rangle##. That's the "both qubits are in the off state" state.
PeroK
Where are you getting that notation from? In quantum information the singlet state is ##\frac{1}{\sqrt{2}} \left( |01\rangle - |10\rangle \right)##. Certainly not ##|00\rangle##. That's the "both qubits are in the off state" state.
I thought we were talking about electrons and that that notation was fairly standard.
..
Are there other methods of creating quantum entangled electrons?
Yes, there is another way. Two entangled photons can interact with two electrons and swap the entanglement to the electrons. I haven't got the details to hand but I'll search.
Nugatory
Where are you getting that notation from? In quantum information the singlet state is ##\frac{1}{\sqrt{2}} \left( |01\rangle - |10\rangle \right)##. Certainly not ##|00\rangle##. That's the "both qubits are in the off state" state.
I believe that @PeroK is using a convention in which the spin state of the individual particles is represented with up and down arrows, and a ket containing two numbers is the state with total spin given by the first number and projection of the total spin by the second. That's not the convention being used elsewhere in this thread, and I don't know how common it is, but I'm pretty sure it was the one used back when I learned about quantum spin.
I understand that one way of creating quantum entangled electrons is by splitting a Cooper pair. Is then their spin property used in the measurement, as this must always sum to ##0## for a Cooper pair?
If that is the case, do quantum entangled electrons only exist in the singlet state, where the spin is always opposite to one another?
$$\frac{1}{\sqrt{2}}(\left|01\right> \pm \left|10 \right>)$$
Are there other methods of creating quantum entangled electrons?
Entangled electrons don't only exist in the singlet state. They can also have, for instance, positively correlated spins.
There are a number of methods of creating entangled electrons and particles. Unfortunately I don't know what they are, ask an experimenter. For photons, there's parametric downconversion, SPS cascades, annihilation of spin-zero particle states into two gamma rays, etc.
Finally, I'm supposing you mean to "create" the pair in the lab for entanglement experiments. In "the wild" they exist everywhere. As @A. Neumaier says, they exist in an atom. For instance whenever an orbit has two electrons they must have opposite spins. Generally whenever two particles interact in any way they become entangled (maybe there are exceptions). For instance if they scatter off each other, the sum of their outgoing momenta must equal sum of ingoing momenta - they're entangled.
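For a concrete illustration of the singlet state discussed above, here is a minimal numpy sketch (it assumes the qubit convention |0> = spin up, |1> = spin down); the reduced state of either electron comes out maximally mixed, which is the signature of maximal entanglement:

import numpy as np

up = np.array([1.0, 0.0])     # |0> = spin up (assumed convention)
down = np.array([0.0, 1.0])   # |1> = spin down

# singlet = (|01> - |10>)/sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Reduced density matrix of the first electron: trace out the second one
rho = np.outer(singlet, singlet).reshape(2, 2, 2, 2)
rho_A = np.trace(rho, axis1=1, axis2=3)
print(rho_A)   # 0.5 * identity, i.e. maximally mixed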
|
2021-06-22 05:48:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7885453104972839, "perplexity": 1412.940844959232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488507640.82/warc/CC-MAIN-20210622033023-20210622063023-00518.warc.gz"}
|
https://wiki.q-researchsoftware.com/wiki/Tukey_HSD
|
# Tukey HSD
Tukey’s Honestly Significant Difference test (also known as Tukey’s Wholly Significant Difference test). The test statistic is:
${\displaystyle t={\frac {{\bar {x}}_{1}-{\bar {x}}_{2}}{\sqrt {{\frac {\sum _{j=1}^{J}\sum _{i=1}^{n_{j}}w_{ij}(x_{ij}-{\bar {x}}_{j})^{2}}{v}}({\frac {1}{e_{1}}}+{\frac {1}{e_{2}}})}}}}$
where:
${\displaystyle {\bar {x}}_{1}}$ and ${\displaystyle {\bar {x}}_{2}}$ are the means of the two groups being compared and ${\displaystyle {\bar {x}}_{j}}$ is the mean of the ${\displaystyle j}$th of the ${\displaystyle J}$ groups,
when applying the test to Repeated Measures, each respondent’s average is first subtracted from their data and it is this corrected data that constitutes ${\displaystyle x_{ij}}$,
${\displaystyle n_{j}}$ is the number of observations in the ${\displaystyle j}$th of the ${\displaystyle J}$ groups,
${\displaystyle w_{ij}}$ is the Calibrated Weight for the ${\displaystyle i}$th observation in the ${\displaystyle j}$th group,
${\displaystyle e_{j}}$ is the Effective Sample Size for the ${\displaystyle j}$th group,
${\displaystyle v=(J-1)(\sum _{j=1}^{J}e_{j}-1)}$ for Repeated Measures and ${\displaystyle v=\sum _{j=1}^{J}e_{j}-J}$ otherwise.
${\displaystyle t}$ is evaluated using Tukey’s Studentized Range distribution with ${\displaystyle v}$ degrees of freedom for ${\displaystyle J}$ groups.
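In practice the standard (unweighted) Tukey HSD is available in statsmodels; the sketch below uses made-up data and does not implement the weighted, effective-sample-size form given above:

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
values = np.concatenate([rng.normal(10, 2, 30), rng.normal(11, 2, 30), rng.normal(13, 2, 30)])
groups = np.repeat(["A", "B", "C"], 30)

# All pairwise comparisons evaluated against the studentized range distribution
result = pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05)
print(result.summary())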
|
2019-09-20 07:43:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 20, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8158289790153503, "perplexity": 589.5525932699953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573908.70/warc/CC-MAIN-20190920071824-20190920093824-00393.warc.gz"}
|
http://earthpy.org/speed.html
|
How to make your python code run faster
Make your python scripts run faster
Solution:
multiprocessor, cython, numba
One of the counterarguments that you constantly hear about using python is that it is slow. This is somewhat true in many cases, although most of the tools that scientists mainly use, like numpy, scipy and pandas, have big chunks written in C, so they are very fast. For most geoscientific applications the main advice would be to use vectorisation whenever possible, and avoid loops. However, sometimes loops are unavoidable, and then python speed can get on your nerves. Fortunately there are several easy ways to make your python loops faster.
Multiprocessor¶
Let's first download some data to work with (NCEP reanalysis air temperature):
In [2]:
#variabs = ['air']
#for vvv in variabs:
# for i in range(2000,2010):
# !wget ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/{vvv}.sig995.{i}.nc
In [3]:
ls *nc
air.sig995.2000.nc air.sig995.2002.nc air.sig995.2004.nc air.sig995.2006.nc air.sig995.2008.nc air.sig995.2012.nc
air.sig995.2001.nc air.sig995.2003.nc air.sig995.2005.nc air.sig995.2007.nc air.sig995.2009.nc
These are netCDF files, so we need the netCDF4 library:
In [38]:
from netCDF4 import Dataset
Now we create a useless but time-consuming function that has a lot of loops in it. It takes a year as input and then just sums up all the numbers from the file one by one.
In [39]:
def useless(year):
from netCDF4 import Dataset
f = Dataset('air.sig995.'+year+'.nc')
a = f.variables['air'][:]
a_cum = 0
for i in range(a.shape[0]):
for j in range(a.shape[1]):
for n in range(a.shape[2]):
a_cum = a_cum+a[i,j,n]
a_cum.tofile(year+'.bin')
print(year)
return a_cum
It works slow enough:
In [40]:
%%time
useless('2000')
2000
CPU times: user 3.49 s, sys: 24 ms, total: 3.52 s
Wall time: 3.49 s
Out[40]:
1068708186.2315979
We can create a loop that will process several files one by one. First make a list of years:
In [41]:
years = [str(x) for x in range(2000,2008)]
years
Out[41]:
['2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007']
And now the loop:
In [42]:
%%time
for yy in years:
useless(yy)
2000
2001
2002
2003
2004
2005
2006
2007
CPU times: user 27.2 s, sys: 288 ms, total: 27.5 s
Wall time: 28 s
Processing of each file is independent of the others. This is an "embarrassingly parallel" problem and can be done very easily in parallel with the multiprocessing module of the standard library.
Most modern computers have more than one processor. Even the 5-year-old machine where I am now editing this notebook has two. The way to check how many processors you have is to run:
In [43]:
!nproc
2
Now let's import multiprocessing:
In [44]:
import multiprocessing
And create a pool with 2 processors (you can use your number of processors):
In [45]:
pool = multiprocessing.Pool(processes=2)
Let's have a look how fast do we get:
In [46]:
%%time
r = pool.map(useless, years)
pool.close()
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 9.69 s
2000
2001
20022003
20042005
20072006
More than two times faster - not bad! But we can do better!
Cython¶
Cython is an optimising static compiler for Python. Cython magic is one of the default extensions, and we can just load it (you have to have cython already installed):
In [47]:
%load_ext cythonmagic
The cythonmagic extension is already loaded. To reload it, use:
The only thing we do here is add the %%cython magic at the top of the cell. This function will be compiled with cython.
In [48]:
%%cython
def useless_cython(year):
from netCDF4 import Dataset
f = Dataset('air.sig995.'+year+'.nc')
a = f.variables['air'][:]
a_cum = 0
for i in range(a.shape[0]):
for j in range(a.shape[1]):
for n in range(a.shape[2]):
a_cum = a_cum+a[i,j,n]
a_cum.tofile(year+'.bin')
print(year)
return a_cum
This alone gives us a good boost:
In [49]:
%%time
useless_cython('2000')
2000
CPU times: user 2.57 s, sys: 40 ms, total: 2.61 s
Wall time: 2.62 s
Out[49]:
1068708186.2315979
One processor:
In [50]:
%%time
for yy in years:
useless_cython(yy)
2000
2001
2002
2003
2004
2005
2006
2007
CPU times: user 19.7 s, sys: 200 ms, total: 19.9 s
Wall time: 19.9 s
Two processors:
In [51]:
%%time
pool = multiprocessing.Pool(processes=2)
r = pool.map(useless_cython, years)
pool.close()
CPU times: user 0 ns, sys: 16 ms, total: 16 ms
Wall time: 7.27 s
2000
2001
20022003
20052004
20072006
But the true power of cython is revealed only when you provide the types of your variables. You have to use the cdef keyword in the function definition to do so. There are also a couple of other modifications to the function.
In [52]:
%%cython
import numpy as np
def useless_cython(year):
# define types of variables
cdef int i, j, n
cdef double a_cum
from netCDF4 import Dataset
f = Dataset('air.sig995.'+year+'.nc')
a = f.variables['air'][:]
a_cum = 0.
for i in range(a.shape[0]):
for j in range(a.shape[1]):
for n in range(a.shape[2]):
#here we have to convert numpy value to simple float
a_cum = a_cum+float(a[i,j,n])
# since a_cum is not numpy variable anymore,
# we introduce new variable d in order to save
# data to the file easily
d = np.array(a_cum)
d.tofile(year+'.bin')
print(year)
return d
In [53]:
%%time
useless_cython('2000')
2000
CPU times: user 1.16 s, sys: 20 ms, total: 1.18 s
Wall time: 1.19 s
Out[53]:
array(1068708186.2315979)
One processor:
In [54]:
%%time
for yy in years:
useless_cython(yy)
2000
2001
2002
2003
2004
2005
2006
2007
CPU times: user 11.1 s, sys: 180 ms, total: 11.3 s
Wall time: 11.3 s
Multiprocessing:
In [55]:
%%time
pool = multiprocessing.Pool(processes=2)
r = pool.map(useless_cython, years)
pool.close()
CPU times: user 0 ns, sys: 12 ms, total: 12 ms
Wall time: 4.3 s
2000
2001
20022003
20052004
20072006
From 9 seconds to 4 on two processors - not bad! But we can do better! :)
Numba¶
Numba is a just-in-time specializing compiler which compiles annotated Python and NumPy code to LLVM (through decorators). The easiest way to install it is to use the Anaconda distribution.
In [56]:
from numba import jit, autojit
We now have to split our function in two (that would have been a good idea from the beginning). One is just the number-crunching part, and the other is responsible for IO. The only thing that we have to do afterwards is to put the jit decorator in front of the first function.
In [57]:
@autojit
def calc_sum(a):
a_cum = 0.
for i in range(a.shape[0]):
for j in range(a.shape[1]):
for n in range(a.shape[2]):
a_cum = a_cum+a[i,j,n]
return a_cum
def useless_numba(year):
#from netCDF4 import Dataset
f = Dataset('air.sig995.'+year+'.nc')
a = f.variables['air'][:]
a_cum = calc_sum(a)
d = np.array(a_cum)
d.tofile(year+'.bin')
print(year)
return d
In [58]:
%%time
useless_numba('2000')
2000
CPU times: user 464 ms, sys: 12 ms, total: 476 ms
Wall time: 483 ms
Out[58]:
array(1068708186.2315979)
One processor:
In [59]:
%%time
for yy in years:
useless_numba(yy)
2000
2001
2002
2003
2004
2005
2006
2007
CPU times: user 1.53 s, sys: 152 ms, total: 1.68 s
Wall time: 1.7 s
Two processors:
In [60]:
%%time
pool = multiprocessing.Pool(processes=2)
r = pool.map(useless_numba, years)
pool.close()
CPU times: user 4 ms, sys: 8 ms, total: 12 ms
Wall time: 912 ms
2000
2001
20022003
20042005
20062007
Nice! Maybe we can speed up a bit more?
You can also provide types for the input and output:
In [61]:
@jit('f8(f4[:,:,:])')
def calc_sum(a):
a_cum = 0.
for i in range(a.shape[0]):
for j in range(a.shape[1]):
for n in range(a.shape[2]):
a_cum = a_cum+a[i,j,n]
return a_cum
def useless_numba2(year):
#from netCDF4 import Dataset
f = Dataset('air.sig995.'+year+'.nc')
a = f.variables['air'][:]
a_cum = calc_sum(a)
d = np.array(a_cum)
d.tofile(year+'.bin')
print(year)
return d
In [62]:
%%time
useless_numba2('2000')
2000
CPU times: user 216 ms, sys: 16 ms, total: 232 ms
Wall time: 244 ms
Out[62]:
array(1068708186.2315979)
In [65]:
%%time
pool = multiprocessing.Pool(processes=2)
r = pool.map(useless_numba2, years)
pool.close()
CPU times: user 8 ms, sys: 4 ms, total: 12 ms
Wall time: 884 ms
2000
2001
20022003
20042005
20062007
just a tiny bit...
Native numpy¶
This is how you really should solve this problem, using numpy.sum(). Note that the result will be different compared to the previous examples. Only if you first convert to float64 does it become the same. Be careful when dealing with huge numbers!
In [66]:
import numpy as np
def calc_sum(a):
a = np.float64(a)
return a.sum()
def useless_good(year):
from netCDF4 import Dataset
f = Dataset('air.sig995.'+year+'.nc')
a = f.variables['air'][:]
a_cum = calc_sum(a)
d = np.array(a_cum)
d.tofile(year+'.bin')
print(year)
return d
In [67]:
%%time
useless_good('2000')
2000
CPU times: user 172 ms, sys: 44 ms, total: 216 ms
Wall time: 224 ms
Out[67]:
array(1068708186.2315979)
In [68]:
%%time
for yy in years:
useless_good(yy)
2000
2001
2002
2003
2004
2005
2006
2007
CPU times: user 1.76 s, sys: 296 ms, total: 2.06 s
Wall time: 2.07 s
In [69]:
%%time
pool = multiprocessing.Pool(processes=2)
r = pool.map(useless_good, years)
pool.close()
CPU times: user 4 ms, sys: 8 ms, total: 12 ms
Wall time: 1.04 s
2001
2000
20032002
20052004
20072006
Actually the numpy version is a bit slower than the numba version, but it's not clear to me how much I can trust this result.
Cython 2nd try¶
I was surprised to see the Cython results so different from Numba, and decided to make a second try. Here I make Cython aware of numpy arrays.
In [76]:
%%cython
import numpy as np
cimport numpy as np
cimport cython
def calc_sum(np.ndarray[float, ndim=3] a):
cdef int i, j, n
cdef float a_cum
a_cum = 0
for i in range(a.shape[0]):
for j in range(a.shape[1]):
for n in range(a.shape[2]):
a_cum = a_cum+(a[i,j,n])
return a_cum
In [77]:
def useless_cython2(year):
from netCDF4 import Dataset
f = Dataset('air.sig995.'+year+'.nc')
a = f.variables['air'][:]
a_cum = calc_sum(a)
d = np.array(a_cum)
d.tofile(year+'.bin')
print(year)
return d
In [78]:
%%time
useless_cython2('2000')
2000
CPU times: user 184 ms, sys: 36 ms, total: 220 ms
Wall time: 225 ms
Out[78]:
array(1068708186.2315979)
In [79]:
%%time
for yy in years:
useless_cython2(yy)
2000
2001
2002
2003
2004
2005
2006
2007
CPU times: user 1.44 s, sys: 164 ms, total: 1.6 s
Wall time: 1.62 s
In [80]:
%%time
pool = multiprocessing.Pool(processes=2)
r = pool.map(useless_cython2, years)
pool.close()
CPU times: user 4 ms, sys: 8 ms, total: 12 ms
Wall time: 866 ms
2000
2001
20022003
20042005
20062007
Now it's comparable to other competitors :)
|
2017-06-24 03:40:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.306636780500412, "perplexity": 9602.014075281724}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320215.92/warc/CC-MAIN-20170624031945-20170624051945-00697.warc.gz"}
|
https://www.edaboard.com/threads/hspice-warning-pd-0-is-less-than-w.53878/
|
# hspice warning: Pd =0 is less than W?
#### vhdl00
warning: pd = 0 is less than w.
It is said that hspice will calculate the junction capacitance automatically from W and L only. Given the warning, it seems it won't calculate Cj correctly, assuming Pd=0?
or my hspice is outdated?
thanks
#### chunlee
Pd depends on the layout you draw. Pd will always be greater than W, so hspice will generate a warning message if Pd < W.
#### vhdl00
hspice pd
Thanks chunlee, you mean it is normal to get a warning like this? I was trying to extract an hspice netlist from the cadence tool, and it did have ps and pd values. Comparing the capacitances, there are some differences. I just thought hspice should generate some pd value, which should depend on the mosfet size (W, L); it is just the drain perimeter.
thanks again
#### chunlee
pd is less than w.
hspice has default values for pd, ps, ad and as. If I remember correctly, the default value is 0. I haven't used cadence before, so I don't know whether cadence will generate pd and ps automatically or not. But again, pd, ps, ad and as depend on the layout you draw.
But you can use the following formula to estimate the values of pd, ps, ad and as. The formula was given by somebody in this forum.
PD = PS = 5*Lmin + W or 5*Lmin
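As a quick Python illustration of that rule of thumb only (W and Lmin below are made-up example values, not from any PDK):

W = 2.0e-6      # device width in metres
Lmin = 0.18e-6  # minimum channel length in metres
pd = ps = 5 * Lmin + W
print(pd, ps)   # rough PD/PS estimate to put on the instance line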
#### sunking
ps is less than w
Request the PDK from the foundry; then the software will generate PD, PS and so on. Or you have to calculate them by yourself.
|
2023-02-01 19:07:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8204689025878906, "perplexity": 5500.862760415601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499949.24/warc/CC-MAIN-20230201180036-20230201210036-00561.warc.gz"}
|
http://mathhelpforum.com/differential-equations/46890-applications-integration-shm.html
|
# Thread: Applications of Integration + SHM
1. ## Applications of Integration + SHM
Hi, all. Ultra stuck on a few questions, and help would be much appreciated.
- a) Show that x = Asin(3t + a), where A and a are constants, satisfies the differential equation (d^2x)/(dt^2) = -9x.
My working: dx/dt = 3Acos(3t + a), (d^2x)/(dt^2) = -9Asin(3t + a) = -9x.
Is this enough?
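A quick symbolic check of that working (illustrative, using sympy):

import sympy as sp

t, A, a = sp.symbols('t A a')
x = A * sp.sin(3*t + a)

# x satisfies d^2x/dt^2 = -9x, i.e. x'' + 9x simplifies to 0
print(sp.simplify(sp.diff(x, t, 2) + 9*x))   # -> 0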
2. #1: You can use shells or washers.
Shells:
$2{\pi}\int_{-1}^{4}(y+1)\sqrt{4-y}dy$
Washers:
${\pi}\int_{-\sqrt{5}}^{\sqrt{5}}(4-x^{2})^{2}-(-1)^{2}dx$
3. Shells?... Washers!?
|
2017-10-22 04:06:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8024185299873352, "perplexity": 6663.696518114488}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825057.91/warc/CC-MAIN-20171022022540-20171022042540-00499.warc.gz"}
|
https://search.datacite.org/works/10.3204/DESY-PROC-2012-02/55
|
### Status on the transversity parton distribution: the dihadron fragmentation functions way.
A. Courtoy, Alessandro Bacchetta & Marco Radici
We report on the extraction of dihadron fragmentation functions (DiFF) from the semi-inclusive production of two hadron pairs in back-to-back jets in $e^+e^-$ annihilation. A nonzero asymmetry in the correlation of azimuthal orientations of opposite $\pi^+\pi^-$ pairs is related to the transverse polarization of fragmenting quarks through a significant polarized DiFF. A combined analysis of this asymmetry and the spin asymmetry in the SIDIS process $ep^{\uparrow}\to e'(\pi^+\pi^-)X$ has led to the first extraction of the...
|
2018-03-18 19:41:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7548229098320007, "perplexity": 5914.3167292737435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645943.23/warc/CC-MAIN-20180318184945-20180318204945-00188.warc.gz"}
|
https://stats.stackexchange.com/questions/238150/meaning-of-alpha-c-in-svm
|
# Meaning of alpha = C in SVM
I have been studying SVM lately, following Andrew Ng's CS229 lecture notes. I can understand most of the notes. But for the case where the KKT condition is satisfied at alpha = C, I am not sure what that means.
I know that for alpha = 0, the KKT condition is satisfied with the inequality constraint, namely, the point satisfies yi * (w' * xi + b) >= 1 and lies outside the margin. And for 0 < alpha < C, the KKT condition is satisfied with the equality constraint, namely, the point is a support vector on the margin. Now as for the case where alpha = C, I wonder what this means; does it mean that the point violates the constraint and that the penalty is C * slack_variable?
$$C$$ is the maximum value that $$\alpha$$ can take; $$\alpha = C$$ indicates the support vector lies inside the margin (possibly even misclassified), and such a point incurs the C * slack_variable penalty.
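To see this concretely with scikit-learn (an illustrative sketch on synthetic data, not from the lecture notes): support vectors whose alpha reaches C are exactly the ones with margin yi * f(xi) < 1, i.e. inside the margin or misclassified.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

C = 1.0
clf = SVC(kernel="linear", C=C).fit(X, y)

alphas = np.abs(clf.dual_coef_[0])                     # dual_coef_ stores y_i * alpha_i
margins = y[clf.support_] * clf.decision_function(X[clf.support_])

for alpha, m in zip(alphas, margins):
    status = "alpha = C, inside margin" if np.isclose(alpha, C) else "0 < alpha < C, on margin"
    print(f"alpha={alpha:.3f}  y*f(x)={m:.3f}  {status}")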
|
2019-10-14 03:34:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8512782454490662, "perplexity": 320.9339801731725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649035.4/warc/CC-MAIN-20191014025508-20191014052508-00521.warc.gz"}
|
https://lists.macromates.com/hyperkitty/list/textmate@lists.macromates.com/message/WYRRFYSQUFIZMECSAI426YZBSGV6I5NM/attachment/2/attachment.htm
|
Looking at your test program log output this is what I see in the log:
(./test.out) (./test.out)
\@outlinefile=\write3
\openout3 = `test.out'.
)
Runaway argument?
{http://jabref.sourceforge.net^^M\end {document}^^M
! File ended while scanning use of \hyper@n@rmalise.
<inserted text>
\par
<*> test.tex
I suspect you have forgotten a `}', causing me
to read past where you wanted me to stop.
I'll try to recover; but if the error is serious,
you'd better type `E' or `X' now and fix your file.
! Emergency stop.
The LaTeX bundle uses a set of regular expressions to match warning and error messages.
The key characters that I look for are:
! at the beginning of a line
or the word error or warning on a line followed by the file and line information.
These catch the vast majority of the errors and warnings that latex produces, and that is why TM does catch the "! File ended..." line.
Adding a match for Runaway argument would not be hard if it is the right thing to do....
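To make those matching rules concrete, here is a small illustrative Python sketch; the patterns are assumptions for demonstration, not the actual regular expressions used by the LaTeX bundle:

import re

error_line = re.compile(r'^!\s*(.*)')                                    # lines starting with "!"
file_line_msg = re.compile(r'^(.*?):(\d+):\s*(.*(?:error|warning).*)', re.IGNORECASE)
runaway = re.compile(r'^Runaway argument\?')

with open('urltest.log', errors='replace') as log:
    for line in log:
        if any(p.match(line) for p in (error_line, file_line_msg, runaway)):
            print(line.rstrip())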
Thanks,
On May 2, 2008, at 9:34 AM, Christian wrote:
Am 02.05.2008 um 13:10 schrieb Charilaos Skiadas:
On May 2, 2008, at 4:28 AM, Christian wrote:
Am 02.05.2008 um 09:47 schrieb Christian:
Hi all,
I'm really having trouble with TM. For some reason TM does not respect the % (comment) sign within LaTeX. This means I get a lot of error messages. If I comment out some included file (e.g. %\input{history}) TM still reads the file and sends me errors like:
Latex Error: ./history.tex:6 LaTeX Error: Something's wrong--perhaps a missing \item.
The file looks at this point as follows:
% \item parindent durch Option halfparskip entfernt
Why does that happen and so suddenly and how could I resolve it?
What a strange behaviour. I typeset the file within TeXShop and it gave me a runaway argument with the precise position in the code. I missed a curly bracket on a \url command. That's it.
Why does TM respond with such wrong error messages and not with a precise one?
TM does not try to compile your file, it delegates that task to pdflatex. So does TeXShop. In theory they are calling the same program. So I see no reason for the discrepancy. Please provide a minimal reproducible and clear example.
Here is a minimal working example (the closing curly bracket after \url is missing):
You can see that TeXShop provides the erroneous passage of the code.
-------------------------
\documentclass{scrartcl}
\usepackage{hyperref}
%
\begin{document}
\end{document}
-------------------------
TM error message:
-------------------------
This is pdfTeXk, Version 3.141592-1.40.3 (Web2C 7.5.6)
#### Processing: ./urltest.tex
Document Class: scrartcl 2006/07/30 v2.95b KOMA-Script document class (article)
! File ended while scanning use of \hyper@n@rmalise.
! Emergency stop.
! ==> Fatal error occurred, no output PDF file produced!
Complete transcript is in urltest.log
Found 1 errors, and 2 warnings in 1 runs
-------------------------
TeXShop error message
-------------------------
<ishot-2.png>
-------------------------
______________________________________________________________________
For new threads USE THIS: textmate@lists.macromates.com
(threading gets destroyed and the universe will collapse if you don't)
http://lists.macromates.com/mailman/listinfo/textmate
|
2021-05-06 07:45:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8915694952011108, "perplexity": 6439.292808574793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00090.warc.gz"}
|
http://www.ck12.org/analysis/Sums-of-Finite-Arithmetic-Series/lesson/Finding-the-Sum-of-a-Finite-Arithmetic-Series-ALG-II/
|
# Sums of Finite Arithmetic Series
## Find series sums using formula and calculator
Finding the Sum of a Finite Arithmetic Series
A theater's seating is arranged such that each row has two more seats than the one in front of it. The first row has five seats and there are 30 rows of seats in the theater. How many total seats are in the theater?
### Sum of a Finite Arithmetic Series
The method of using the calculator to evaluate the sum of a series can be used to find the sum of an arithmetic series as well. However, in this concept we will explore an algebraic method unique to arithmetic series. As we discussed earlier in the unit, a series is simply the sum of a sequence, so an arithmetic series is the sum of an arithmetic sequence. Let’s look at a problem to illustrate this and develop a formula to find the sum of a finite arithmetic series.
Let's find the sum of the arithmetic series: \begin{align*}1 + 3 + 5 + 7 + 9 +11 + \ldots + 35 + 37 + 39\end{align*}.
Now, while we could just add up all of the terms to get the sum, if we had to sum a large number of terms that would be very time consuming. A famous German mathematician, Johann Carl Friedrich Gauss, used the method described here to determine the sum of the first 100 integers in grade school. First, we can write out all the numbers twice, in ascending and descending order, and observe that the sum of each pair of numbers is the same:
\begin{align*}& \ \ 1 \quad \ \ 3 \quad \ \ 5 \quad \ \ 7 \quad \ \ 9 \quad \ 11 \quad \ldots \quad 35 \quad 37 \quad 39 \\ & 39 \quad 37 \quad 35 \quad 33 \quad 31 \quad 29 \quad \ldots \quad \ \ 5 \quad \ \ 3 \quad \ \ 1 \\ & 40 \quad 40 \quad 40 \quad 40 \quad 40 \quad 40 \quad \ldots \quad 40 \quad 40 \quad 40\end{align*}
Notice that the sum of the corresponding terms in reverse order is always equal to 40, which is the sum of the first and last terms in the sequence.
What Gauss realized was that this sum can be multiplied by the number of terms and then divided by two (since we are actually summing the series twice here) to get the sum of the terms in the original sequence. For the problem he was given in school, finding the sum of the first 100 integers, he was able to just use the first term, \begin{align*}a_1=1\end{align*} , the last term, \begin{align*}a_n=100\end{align*}, and the total number of terms, \begin{align*}n=100\end{align*}, in the following formula:
\begin{align*}\frac{n \left(a_1+a_n\right)}{2}=\frac{100 \left(1+100\right)}{2}=5050\end{align*}
In our problem we know the first and last terms, but how many terms are there? We need to find \begin{align*}n\end{align*} to use the formula to find the sum of the series. We can use the first and last terms and the \begin{align*}n^{th}\end{align*} term to do this.
\begin{align*}a_n&=a_1+d(n-1) \\ 39&=1+2(n-1) \\ 38&=2(n-1)\\ 19&=n-1 \\ 20&=n\end{align*} Now the sum is \begin{align*}\frac{20 \left(1+39\right)}{2}=400\end{align*}.
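A quick numerical check of the formula against a brute-force sum (an illustrative Python snippet, not part of the lesson):

terms = list(range(1, 40, 2))            # 1, 3, 5, ..., 39
n, a1, an = len(terms), terms[0], terms[-1]
assert sum(terms) == n * (a1 + an) // 2
print(n, n * (a1 + an) // 2)             # 20 terms, sum 400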
#### Proof of the Arithmetic Sum Formula
The rule for finding the \begin{align*}n^{th}\end{align*} term of an arithmetic sequence and properties of summations can be used to prove the formula algebraically. First, we will start with the \begin{align*}n^{th}\end{align*} term rule \begin{align*}a_n=a_1+(n-1)d\end{align*}. We need to find the sum of numerous \begin{align*}n^{th}\end{align*} terms (\begin{align*}n\end{align*} of them to be exact) so we will use the index, \begin{align*}i\end{align*}, in a summation as shown below:
\begin{align*}\sum \limits_{i=1}^n \left [ a_1+(i-1)d\right ]\end{align*} Keep in mind that \begin{align*}a_1\end{align*} and \begin{align*}d\end{align*} are constants in this expression.
We can separate this into two separate summations as shown: \begin{align*}\sum \limits_{i=1}^n a_1+ \sum \limits_{i=1}^n (i-1)d\end{align*}
Expanding the first summation, \begin{align*}\sum \limits_{i=1}^n a_1=a_1+a_1+a_1+ \ldots+a_1\end{align*} such that \begin{align*}a_1\end{align*} is added to itself \begin{align*}n\end{align*} times. We can simplify this expression to \begin{align*}a_1n\end{align*}.
In the second summation, \begin{align*}d\end{align*} can be brought out in front of the summation and the difference inside can be split up as we did with the addition to get: \begin{align*}d \left [ \sum \limits_{i=1}^n i- \sum \limits_{i=1}^n 1 \right ]\end{align*}. Using rules you have seen before, \begin{align*}\sum \limits_{i=1}^n i= \frac{1}{2}n(n+1)\end{align*} and \begin{align*}\sum \limits_{i=1}^n 1=n\end{align*}. Putting it all together, we can write an expression without any summation symbols and simplify.
\begin{align*}& a_1n+d \left [ \frac{1}{2}n(n+1)-n\right ] \\ & =a_1n+\frac{1}{2}dn(n+1)-dn && \ \text{Distribute} \ d \\ & =\frac{1}{2}n \left [2a_1+d(n+1)-2d\right ] && \ \text{Factor out} \ \frac{1}{2}n \\ & =\frac{1}{2}n \left [ 2a_1+dn+d-2d\right ] \\ & =\frac{1}{2}n \left [ 2a_1+dn-d\right ] \\ & =\frac{1}{2}n \left [ 2a_1+d(n-1)\right ] && \leftarrow \ \text{This version of the equation} \\ & && \ \ \quad \text{ is very useful if you don't know the} \ n^{th} \ \text{term}.\\ & =\frac{1}{2}n \left [ a_1+(a_1+d(n-1))\right ] \\ & =\frac{1}{2}n(a_1+a_n)\end{align*}
Now, let's find the sum of the first 40 terms in the arithmetic series \begin{align*}35 + 31 + 27 + 23 + \ldots\end{align*}
For this particular series we know the first term and the common difference, so let’s use the rule that doesn't require the \begin{align*}n^{th}\end{align*} term: \begin{align*}\frac{1}{2}n \left [ 2a_1+d(n-1)\right ]\end{align*}, where \begin{align*}n=40,d=-4\end{align*} and \begin{align*}a_1=35\end{align*}.
\begin{align*}\frac{1}{2}(40) \left [ 2(35)+(-4)(40-1)\right ]=20 \left [ 70-156 \right ]=-1720\end{align*}
We could also find the \begin{align*}n^{th}\end{align*} term and use the rule \begin{align*}\frac{1}{2}n(a_1+a_n)\end{align*}, where \begin{align*}a_n=a_1+d(n-1)\end{align*}.
\begin{align*}a_{40}=35+(-4)(40-1)=35-156=-121\end{align*}, so the sum is \begin{align*}\frac{1}{2}(40)(35-121)=20(-86)=-1720.\end{align*}
Next, given that in an arithmetic series \begin{align*}a_{21}=165\end{align*} and \begin{align*}a_{35}=277\end{align*}, let's find the sum of terms 21 to 35.
This time we have the “first” and “last” terms of the series, but not the number of terms or the common difference. Since our series starts with the \begin{align*}21^{st}\end{align*} term and ends with the \begin{align*}35^{th}\end{align*} term, there are 15 terms in this series. Now we can use the rule to find the sum as shown.
\begin{align*}\frac{1}{2}(15)(165+277)=3315\end{align*}
Finally, let's find the sum of the arithmetic series \begin{align*}\sum \limits_{i=1}^8 (12-3i)\end{align*}.
From the summation notation, we know that we need to sum 8 terms. We can use the expression \begin{align*}12-3i\end{align*} to find the first and last terms and then use the rule to find the sum.
First term: \begin{align*}12-3(1)=9\end{align*}
Last term: \begin{align*}12-3(8)=-12\end{align*}
\begin{align*}\sum \limits_{i=1}^8 (12-3i)=\frac{1}{2}(8)(9-12)=4(-3)=-12\end{align*}
We could use the calculator in this problem as well: \begin{align*}sum(seq(12-3x,x,1,8))=-12\end{align*}
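The same checks can also be run in a few lines of Python instead of a graphing calculator (a quick sketch for readers following along on a computer; the formula and the series values are the ones from the examples above):

# Closed-form rule (1/2)n[2a1 + d(n-1)] versus brute-force summation
def arith_sum(a1, d, n):
    return n * (2 * a1 + d * (n - 1)) / 2

print(sum(35 - 4 * i for i in range(40)), arith_sum(35, -4, 40))   # -1720 -1720.0
print(sum(12 - 3 * i for i in range(1, 9)), arith_sum(9, -3, 8))   # -12 -12.0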
### Examples
#### Example 1
Earlier, you were asked to find the total number of seats in the theater.
For this particular series we know the first term and the common difference, so let’s use the rule that doesn't require the \begin{align*}n^{th}\end{align*} term: \begin{align*}\frac{1}{2}n \left [ 2a_1+d(n-1)\right ]\end{align*}, where \begin{align*}n=30,d=2\end{align*} and \begin{align*}a_1=5.\end{align*}
\begin{align*}\frac{1}{2}(30) \left [ 2(5)+(2)(30-1)\right ]=15 \left [ 10+58 \right ]=1020\end{align*}
Therefore, there are a total of 1020 seats in the theater.
#### Example 2
Find the sum of the series \begin{align*}87 + 79 + 71 + 63 + \ldots + -105\end{align*}.
\begin{align*}d=-8\end{align*}, so \begin{align*}-105&=87+(-8)(n-1) \\ -192&=-8n+8 \\ -200&=-8n \\ n&=25\end{align*}
Then use the rule to find the sum: \begin{align*}\frac{1}{2}(25)(87-105)=-225\end{align*}
#### Example 3
Find \begin{align*}\sum \limits_{i=10}^{50}(3i-90)\end{align*}.
\begin{align*}10^{th}\end{align*} term is \begin{align*}3(10)-90=-60\end{align*}, \begin{align*}50^{th}\end{align*} term is \begin{align*}3(50)-90=60\end{align*} and \begin{align*}n=50-10+1=41\end{align*} (add 1 to include the \begin{align*}10^{th}\end{align*} term). The sum of the series is \begin{align*}\frac{1}{2}(41)(-60+60)=0\end{align*}. Note that the calculator is a great option for this problem: \begin{align*}sum(seq(3x-90,x,10,50))=0\end{align*}
#### Example 4
Find the sum of the first 30 terms in the series \begin{align*}1 + 6 + 11 + 16 +\ldots\end{align*}
\begin{align*}d=5\end{align*}, use the sum formula, \begin{align*}\frac{1}{2}n(2a_1+d(n-1))\end{align*}, to get \begin{align*}\frac{1}{2}(30)\left [2(1)+5(30-1)\right]=15\left[2+145\right]=2205\end{align*}
### Review
Find the sums of the following arithmetic series.
1. \begin{align*}-6 + -1 + 4 +\ldots+ 119\end{align*}
2. \begin{align*}72 + 60 + 48 + \ldots + -84\end{align*}
3. \begin{align*}3 + 5 + 7 + \ldots + 99\end{align*}
4. \begin{align*}25 + 21 + 17 + \ldots + -23\end{align*}
5. Find the sum of the first 25 terms of the series \begin{align*}215 + 200 + 185 + \ldots\end{align*}
6. Find the sum of the first 14 terms in the series \begin{align*}3 + 12 + 21 + \ldots\end{align*}
7. Find the sum of the first 32 terms in the series \begin{align*}-70 + -65 + -60 + \ldots\end{align*}
8. Find the sum of the first 200 terms in \begin{align*}-50 + -49 + -48 +\ldots\end{align*}
Evaluate the following summations.
1. \begin{align*}\sum \limits_{i=4}^{10}(5i-22)\end{align*}
2. \begin{align*}\sum \limits_{i=2}^{25}(-3i+37)\end{align*}
3. \begin{align*}\sum \limits_{i=11}^{48}(i-20)\end{align*}
4. \begin{align*}\sum \limits_{i=5}^{40}(50-2i)\end{align*}
Find the sum of the series bounded by the terms given. Include these terms in the sum.
1. \begin{align*}a_7=39\end{align*} and \begin{align*}a_{23}=103\end{align*}
2. \begin{align*}a_8=1\end{align*} and \begin{align*}a_{30}=-43\end{align*}
3. \begin{align*}a_4=-15\end{align*} and \begin{align*}a_{17}=24\end{align*}
4. How many cans are needed to make a triangular arrangement of cans if the bottom row has 35 cans and each successive row has one less can than the row below it?
5. Thomas gets a weekly allowance. The first week it is one dollar, the second week it is two dollars, the third week it is three dollars and so on. If Thomas puts all of his allowance in the bank, how much will he have at the end of one year?
To see the Review answers, open this PDF file and look for section 11.7.
|
2016-08-28 19:21:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 91, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.997970700263977, "perplexity": 719.5630802563072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982947760.88/warc/CC-MAIN-20160823200907-00137-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/calculate-the-work-done-by-a-force.104056/
|
# Homework Help: Calculate the work done by a force
1. Dec 13, 2005
### don_anon25
I am asked to calculate the work done by a force as it moves around a path. The force is F = b(1-x^2/a^2)j. The path is a rectangle with coordinates at (0,0); (0,L); (a,L); (a,0). The force moves clockwise around the path beginning at the origin. A diagram is attached.
I know work is the integral of F dot dr.
So for the first path I should have the force F=b(1-x^2/a^2)j dotted with Lj (the path from the origin to point (0,L)). The integral is thus bL (1 - x^2/a^2) dy with limits from y=0 to y=L. Is this the right approach? If not, can someone please point me in the right direction??
#### Attached Files:
• work.bmp
2. Dec 13, 2005
### Physics Monkey
You have an extra factor of L that you don't need (look at the units). The work is the integral of $$\vec{F}\cdot d\vec{r}$$ along the path. For example, the first segment has $$d\vec{r} = dy \hat{j}$$. You have to figure out what $$d\vec{r}$$ is for each of the four segments.
3. Dec 14, 2005
### don_anon25
So, for the first segment, dr = dyj. For the second segment, dr = dx i.
For the third segment, dr = -dyj. For the fourth segment, dr = -dx i. Is this correct? Are the limits on my integration correct as well?
Also, should the answer be 0 (closed path, conservative force...not sure if the force is conservative though)?
4. Dec 14, 2005
### Staff: Mentor
If the answer is zero then the force is conservative, but not all forces are conservative so you can't use that as a check here. (Your dr vectors are correct).
-Dale
5. Dec 15, 2005
### don_anon25
I get an answer of 2bL (1- x^2/a^2). This does not seem correct to me, since it contains an x^2 term? Is this right? Is there a substitution I can make for x? x=a or x=L, for instance? This problem is driving me crazy...any help greatly appreciated!
6. Dec 15, 2005
### FredGarvin
You can eliminate two of the legs from your problem since the force is in the $$\hat{j}$$ direction.
In segment 1, x=0. In segment 3, x=a.
Last edited: Dec 15, 2005
7. Dec 15, 2005
### don_anon25
Yes...I have the forces for the dx direction to be zero. I'm still doing something wrong though?
8. Dec 15, 2005
### FredGarvin
Did you substitute in the values for x that I just edited into my last post?
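For later readers, here is a quick symbolic check of the two legs that survive (a sketch using sympy; it only encodes what the thread has already established: the force points along j, so only the x = 0 and x = a legs can contribute):

import sympy as sp

b, a, L, x, y = sp.symbols('b a L x y', positive=True)
Fy = b * (1 - x**2 / a**2)                     # j-component of the force

W1 = sp.integrate(Fy.subs(x, 0), (y, 0, L))    # leg from (0, 0) up to (0, L)
W3 = sp.integrate(Fy.subs(x, a), (y, L, 0))    # leg from (a, L) down to (a, 0); the integrand is zero there
print(W1 + W3)                                 # total work around the loop; the i-direction legs contribute nothing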
|
2018-09-21 13:30:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7894413471221924, "perplexity": 1080.0454668386249}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157203.39/warc/CC-MAIN-20180921131727-20180921152127-00208.warc.gz"}
|
http://tex.stackexchange.com/questions/4507/avoiding-rivers-in-successive-lines-of-type?answertab=oldest
|
# Avoiding “rivers” in successive lines of type
This question led to a new feature in a package:
impnattypo
The following quote is from James Felici, The Complete Manual of Typography (2003), p. 161:
Rivers occur when word spaces stack one above the other in successive lines of type, creating the appearance of fissures running through the text [...]. Rivers are accidents of composition, and software isn't yet smart enough to detect them, much less do anything about them.
On the following page, Felici shows an example of a "mighty river" running through a paragraph. Below, you'll find the LaTeX code and the pictured output of my (somewhat abridged) recreation of Felici's example. (The layout is chosen so that the effect also occurs if one uses the microtype package with its default settings.)
\documentclass{article}
\usepackage{mathptmx}
\usepackage{microtype}
\textwidth 247pt
\parindent 23pt
\frenchspacing
\begin{document}
Though the Pearl measures less than 50~miles in total length from its
modest source as a cool mountain spring to the screaming cascades and
steaming estuary of its downstream reaches, over those miles, the
river has in one place or another everything you could possibly ask
for. You can roam among lush temperate rain forests, turgid white
water canyons, contemplative meanders among aisles of staid aspens
(with trout leaping to slurp all the afternoon insects from its calm
surface), and forbidding swamp land as formidable as any that Humphrey
Bogart muddled through in \emph{The African Queen}.
\end{document}
So, was Felici right that "software isn't yet smart enough" to deal with rivers? Or is this something TeX can handle? And if not TeX, then possibly LuaTeX?
EDIT: As rassie pointed out that TeX is Turing-complete, let me clarify that I'm interested in solutions that either already exist or could be implemented with reasonable time and effort.
For what it's worth, osdir.com/ml/tex.context/2001-01/msg00057.html is a nice overview of some ideas. – Pieter Oct 24 '10 at 17:14
Also, FYI: you don't need all those %s. Consecutive white space is treated as a single space, unless it's two newlines in a row. – Antal S-Z Oct 24 '10 at 19:18
What a lovely example of a "river" in a text block. But I should note that even though I knew what to look for, the river didn't jump out at me for some time. I think the slight meandering of the spaces helps minimize the visual problems. I see rivers in fixed-width fonts more commonly than variable-width. (For instance I see one covering the first three lines of this text box, though I imagine it will be gone in the published comment.) – Jon Ericson Jun 6 '11 at 23:11
@Jon: If you like it, upvote it. ;-) – lockstep Jun 6 '11 at 23:14
Oops. I completely forgot about that! Done. – Jon Ericson Jun 6 '11 at 23:16
I have added a first version of an algorithm to detect rivers using Lua to the impnattypo package on github. To use it, simply use the rivers option:
\usepackage[draft,rivers]{impnattypo}
Here is an example result:
Beware that there might still be some bugs ;-)
I think it's more of an algorithm problem than TeX's. Both TeX and Lua are Turing-complete programming languages, so it's possible to implement any algorithm in it provided enough time.
So let's assume you've got an algorithm which can tell whether a paragraph of text has a river inside it or not, e.g. by graphically putting vertical lines over the paragraph and checking whether all dots underneath are white. Then one could define a \riverpenalty variable bound to the output of that algorithm, which would, if set high enough, force TeX to select another rendering of that particular paragraph.
However, if I'm not completely mistaken, classic TeX first builds lines and then paragraphs, i.e. a paragraph-bound penalty would not lead to a different rendering for a particular line. That would mean that river detection would be possible with classic TeX but not river correction.
On the other hand, many of PDFTeX's and XeTeX's algorithms, especially those dealing with microtypography, probably need ways to correct a line's rendering based on some paragraph penalty. In that case, implementing a river correction should be possible with one of those advanced engines, either in the engine itself or using its API -- and at that point it doesn't matter which language you take, either pure TeX or Lua or something completely different.
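To make the "vertical lines over the paragraph" idea a bit more concrete, here is a toy sketch (Python, with made-up line data rather than anything extracted from TeX) of the kind of test such a penalty could be driven by: it looks for inter-word spaces whose horizontal extents overlap on several consecutive lines.

# Each line is a list of (start, end) x-intervals occupied by its inter-word spaces.
lines = [
    [(10, 12), (40, 42), (70, 73)],
    [(11, 13), (39, 41), (80, 82)],
    [(10, 12), (41, 43), (60, 62)],
]

def rivers(lines, min_depth=3):
    """Return x-intervals where spaces overlap on at least min_depth consecutive lines, starting from the first line."""
    found = []
    for start, end in lines[0]:
        lo, hi, depth = start, end, 1
        for line in lines[1:]:
            hits = [(s, e) for s, e in line if s < hi and e > lo]
            if not hits:
                break
            s, e = hits[0]
            lo, hi, depth = max(lo, s), min(hi, e), depth + 1
        if depth >= min_depth:
            found.append((lo, hi, depth))
    return found

print(rivers(lines))   # -> [(11, 12, 3)]: a candidate river running down the first three lines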
@rassie: Thanks for reminding me that TeX is Turing-complete. I've edited my question to clarify that I'm interested in solutions existing in the here and now, so to speak. ;-) – lockstep Oct 24 '10 at 16:41
"classic TeX first builds lines and then paragraphs" — you're either mistaken on this point or you've just worded this sentence poorly; the line breaks are chosen while taking the entire paragraph into account. – Will Robertson Oct 25 '10 at 8:13
@Will: I'm far from TeXpert and you are probably right on that one -- on second thinking, it's pretty obvious that line-breaking can't actually be done without taking the paragraph into account. Gotta re-read TeXbook again :) – Nikolai Prokoschenko Oct 25 '10 at 17:39
@rassie: There's one more thing in your answer that is not quite OK: TeX being Turing complete doesn't necessarily help; you just can't check "whether all dots underneath [a vertical line] are white". TeX only sees the bounding boxes of the characters, not the characters itself. For this problem, the bounding boxes should be a close enough approximation, but for others it's a great obstacle. Here's an example where Turing completeness of TeX doesn't seem to help. – Hendrik Vogt Oct 27 '10 at 9:50
@rassie: It was clear to me that you wrote your answer before lockstep's edit. But my point was that Turing completeness does not imply "that the problem is generally solvable". There are obstacles that you just can't overcome! – Hendrik Vogt Oct 27 '10 at 14:15
Might not be a very helpful answer, but ANT (which is modelled after TeX) claims to have river detection, so maybe someone could look at that and implement it in lua (Hans in a recent talk mentioned his effort to rewrite TeX's paragraph builder in lua, so maybe both can be combined.)
I've tried to get ANT to avoid rivers, but never got any satisfying results, even the author of ANT couldn't help me. – topskip Oct 24 '10 at 19:36
@Patrick: Didn't you do some experiments on avoiding rivers in plain luatex? – Aditya Oct 24 '10 at 20:50
@Aditya: Not LuaTeX, but I implemented my own Knuth & Plass total-fit line breaking algorithm and made experiments with that. Detecting rivers wasn't hard, but I have not found a good way to eliminate them (time complexity is a killer here). – topskip Oct 25 '10 at 17:58
I am not sure if the problem is solvable, and it may require a new generation of very fast computers (trials with different algorithms can take over six hours to iterate). The example below can produce rivers mightier than Felici's example.
\documentclass[11pt]{article}
\begin{document}
\def\samplerivers{%
\hskip1em Repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated repeated repeated repeated
repeated.}
\begin{minipage}{1.9in}
\looseness=-1 \hyphenpenalty=0\samplerivers
\end{minipage}\hspace{.8cm}
\begin{minipage}{1.9in}
\hyphenpenalty=100\samplerivers
\end{minipage}\hspace{.8cm}
\begin{minipage}{1.9in}
\hyphenpenalty=100000 \samplerivers
\end{minipage}
\end{document}
One attempt at attacking the problem is that of Holkner. He attempted to optimize over multiple objectives but as he writes:
... performance degrades so badly that some of the tests had to be stopped when they took more than six hours to complete.
TeX's algorithm does not include for any parameters to minimize rivers.
Many thanks for the link - I've already started reading. – lockstep Oct 24 '10 at 19:23
|
2014-03-17 20:41:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8777784109115601, "perplexity": 2760.9650502655913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678706211/warc/CC-MAIN-20140313024506-00048-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/122722/meaning-of-a-logical-operator?answertab=oldest
|
# Meaning of a Logical Operator
Is it possible to know what those operators mean if they must be involved in this logical condition? What are all the possible meanings of those two symbols if you don't know the symbols' meanings beforehand: (¬A) ⊕ A is always true, A ⊕ A is always false, and with just those definitions given, everything else unknown?
If it is not possible to determine the outcome, how can one prove that the information is insufficient?
Insufficient data for meaningful answer. – Pedro Tamaroff Mar 21 '12 at 0:00
How to prove this is insufficient? – Victor Mar 21 '12 at 0:01
Are you talking about $¬$ and $⊕$? You can look up logic symbols in Wikipedia. – Pedro Tamaroff Mar 21 '12 at 0:03
No, i think it could be of great useful example if you could prove it insufficient to get the outcome – Victor Mar 21 '12 at 0:04
@PeterT.off I don't think it is insufficient. "Given two rules specifying the operation of ⊕, find ⊕": that's a legitimate question. See my answer. – user2468 Mar 21 '12 at 0:10
Based on the description given in the question, we can build the following truth table: $$\begin{matrix} A & A & | & ⊕ \\ \hline F & F & | & F & \color{red}{\text{A ⊕ A}}\\ F & T & | & T & \color{blue}{\text{¬A ⊕ A}}\\ T & F & | & T & \color{blue}{\text{A ⊕ ¬A}}\\ T & T & | & F & \color{red}{\text{A ⊕ A}} \\ \end{matrix}$$ Now, we can deduce $⊕$ is the exclusive or operation. Deduced directly from the rules stated in the question.
Addednum: We can easily deduce the meaning of $\neg.$ Given the set $\mathbb{B} = \{ T, F \},$ any unary operator $\neg : \mathbb{B} \to \mathbb{B}$ will either operate as identity or as negation. Since the 2 equations in the given problem differ only by $\neg,$ $$A \oplus \neg A = T, A \oplus A = F,$$ we can deduce that $\neg$ can not be identity (assume $\neg A \equiv A$ and you'll get contradiction in the given system of formulas). Hence $\neg$ is negation. QED. Now we proceed as above.
@Victor I can easily deduce the meaning of $\neg.$ Given the set $\mathbb{B} = \{ T, F \},$ any unary operator $\neg$ will either operate as identity or as negation. Since the 2 equations in the given problem differ only by $\neg,$ I can deduce that $\neg$ can not be identity, i.e., it is negation. – user2468 Mar 21 '12 at 0:19
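The deduction in the answer can also be brute-forced; a small sketch (Python, simply enumerating all 16 possible binary operators on {T, F}) confirms that the two given rules pin down ⊕ uniquely as exclusive or:

from itertools import product

# A binary operator on {True, False} is determined by its outputs on the 4 input pairs.
pairs = [(False, False), (False, True), (True, False), (True, True)]
survivors = []
for outputs in product([False, True], repeat=4):
    table = dict(zip(pairs, outputs))
    always_true = all(table[(not a, a)] for a in (False, True))    # (not A) op A is always true
    always_false = not any(table[(a, a)] for a in (False, True))   # A op A is always false
    if always_true and always_false:
        survivors.append(table)

print(survivors)   # exactly one table survives, and it is the truth table of exclusive or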
|
2016-02-13 17:50:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8390915989875793, "perplexity": 531.7124783471521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701167113.3/warc/CC-MAIN-20160205193927-00065-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://www.albert.io/ie/act-math/generating-an-expression
|
Moderate
# Generating an Expression
ACTMAT-EJUA09
Maria earns $m$ dollars for each computer she puts together, plus $n$ dollars for every 15 minutes she works.
If Maria worked 18 hours and put together 27 computers this week, which expression represents how much Maria earned this week, in dollars?
A. $27m+18n$
B. $18m+108n$
C. $27m+72n$
D. $18m+\cfrac{27}{4}n$
E. $27m+\cfrac{9}{2}n$
|
2016-12-07 20:23:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3281368017196655, "perplexity": 4936.410357617509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542246.21/warc/CC-MAIN-20161202170902-00104-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://www.johnmyleswhite.com/page/5/
|
## The Shape of Floating Point Random Numbers
[Updated 10/18/2012: Fixed a typo in which mantissa was replaced with exponent.]
Over the weekend, Viral Shah updated Julia’s implementation of randn() to give a 20% speed boost. Because we all wanted to test that this speed-up had not come at the expense of the validity of Julia’s RNG system, I spent some time this weekend trying to get tests up and running. I didn’t get far, but thankfully others chimed in and got things done.
Testing an RNG is serious business. In total, we’ve considered using four different test suites:
All of these suites can be easily used to test uniform random numbers over unsigned integers. Some are also appropriate for testing uniform random numbers over floating-point values.
But we wanted to test a Gaussian RNG. To do that, we followed Thomas et al.’s lead and mapped the Gaussian RNG’s output through a high-precision quantile function to produce uniform random floating point values. As our high-precision quantile function we ended up using the one described in Marsaglia’s 2004 JSS paper.
With that in place, I started to try modifying my previous RNG testing code. When we previously tried to test Julia’s rand() function, I got STS working on my machine and deciphered its manual well enough to run a suite of tests on a bit stream from Julia.
Unfortunately I made a fairly serious error in how I attempted to test Julia’s RNG. Because STS expects a stream of random 0’s and 1’s, I converted random numbers into 0’s and 1’s by testing whether the floating point numbers being generated were greater than 0.5 or less than 0.5. While this test is not completely wrong, it is very, very weak. Its substantive value comes from two points:
1. It confirms that the median of the RNG is correctly positioned at 0.5.
2. It confirms that the placement of successive entries relative to 0.5 is effectively random. In short, there is no trivial correlation between successive values.
Unfortunately that’s about all you learn from this method. We needed something more. So I started exploring how to convert a floating point into bits. Others had the good sense to avoid this and pushed us forward by using the TestU01 suite.
I instead got lost exploring the surprising complexity of trying to work with the individual bits of random floating point numbers. The topic is so subtle because the distribution of bits in a randomly generated floating point number is extremely far from a random source of individual bits.
For example, a uniform variable’s representation in floating point has all the following non-random properties:
1. The sign bit is never random because uniform variables are never negative.
2. The exponent is not random either because uniform variables are strictly contained in the interval [0, 1].
3. Even the mantissa isn’t random. Because floating point numbers aren’t evenly spaced in the reals, the mantissa has to have complex patterns in it to simulate the equal-spacing of uniform numbers.
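To make those bit patterns tangible, one can reinterpret each float64 as raw bytes and unpack the bits; here is a minimal numpy sketch of that step (not the code used to produce the figures below, and it assumes a little-endian machine):

import numpy as np

n = 100_000
u = np.random.rand(n)                                   # uniform(0, 1) draws
raw = u.view(np.uint64).view(np.uint8).reshape(n, 8)    # the 8 raw bytes of each float64
raw = raw[:, ::-1].copy()                               # little-endian: reverse so the sign byte comes first
bits = np.unpackbits(raw, axis=1)                       # shape (n, 64): sign, 11 exponent bits, 52 mantissa bits
print(bits.mean(axis=0))                                # mean value of each bit position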
Inspired by all of this, I decided to get a sense for the bit pattern signature of different RNG’s. Below I’ve plotted the patterns for uniform, normal, gamma and Cauchy variables using lines that describe the mean value of the i-th bit in the bit string. At a minimum, a completely random bit stream would have a flat horizontal line through 0.5, which many of the lines touch for a moment, but never perfectly match.
Some patterns:
1. The first bit (shown on the far left) is the sign bit: you can clearly see which distributions are symmetric by looking for a mean value of 0.5 versus those that are strictly positive and have a mean value of 0.0.
2. The next eleven bits are the exponent and you can clearly see which distributions are largely concentrated in the interval [-1, 1] and which have substantial density outside of that region. This bit would clue you into the variance of the distribution.
3. You can see that there is a lot of non-randomness in the last few bits of the mantissa for uniform variables. There’s also non-randomness in the first few bits for all variables. I don’t yet have any real intuition for those patterns.
You can go beyond looking at the signatures of mean bit patterns by looking at covariance matrices as well. Below I show these covariance matrices in a white-blue coloring scheme in which white indicates negative values, light blue indicates zero and dark blue indicates positive values. Note that these matrices, generated using R's image() function, are reflections of the more intuitive matrix ordering in which the [1,1] entry of the matrix occurs in the top-left instead of the bottom-left.
#### Cauchy Variables
I find these pictures really helpful for reminding me how strangely floating point numbers behave. The complexity of these images is so far removed from the simplicity of the bit non-patterns in randomly generated unsigned integers, which can be generated using IID random bits and concatenating them together.
## Overfitting
What do you think when you see a model like the one below?
Does this strike you as a good model? Or as a bad model?
There’s no right or wrong answer to this question, but I’d like to argue that models that are able to match white noise are typically bad things, especially when you don’t have a clear cross-validation paradigm that will allow you to demonstrate that your model’s ability to match complex data isn’t a form of overfitting.
There are many objective reasons to suspect complicated models, but I’d like to offer up a subjective one. A model that fits complex data as perfectly as the model above is likely to not be an interpretable model [1] because it is essentially a noisy copy of the data. If the model looks so much like the data, why construct a model at all? Why not just use the raw data?
Unless the functional form of a model and its dependence on inputs is simple, I’m very suspicious of any statistical method that produces outputs like those shown above. If you want a model to do more than produce black-box predictions, it should probably provide predictions that are relatively smooth. At the least it should reveal comprehensible and memorable patterns that are non-smooth. While there are fields in which neither of these goals is possible (and others where it’s not desirable), I think the default reaction to a model fit like the one above should be: “why does the model make such complex predictions? Isn’t that a mistake? How many degrees of freedom does it have that it can so closely fit such noisy data?”
1. Although it might be a great predictive model if you can confirm that the fit above is the quality of the fit to held-out data!
## EDA Before CDA
### One Paragraph Summary
Always explore your data visually. Whatever specific hypothesis you have when you go out to collect data is likely to be worse than any of the hypotheses you’ll form after looking at just a few simple visualizations of that data. The most effective hypothesis testing framework in existence is the test of intraocular trauma.
### Context
This morning, I woke up to find that Neil Kodner had discovered a very convenient CSV file that contains geospatial data about every valid US zip code. I’ve been interested in the relationship between places and zip codes recently, because I spent my summer living in the 98122 zip code after having spent my entire life living in places with zip codes below 20000. Because of the huge gulf between my Seattle zip code and my zip codes on the East Coast, I’ve on-and-off wondered if the zip codes were originally assigned in terms of the seniority of states. Specifically, the original thirteen colonies seem to have some of the lowest zip codes, while the newer states had some of the highest zip codes.
While I could presumably find this information through a few web searches or could gather the right data set to test my idea formally, I decided to blindly plot the zip code data instead. I think the results help to show why a few well-chosen visualizations can be so much more valuable than regression coefficients. Below I’ve posted the code I used to explore the zip code data in the exact order of the plots I produced. I’ll let the resulting pictures tell the rest of the story.
library(ggplot2)  # needed for ggplot() and ggsave()

zipcodes <- read.csv("zipcodes.csv")

ggplot(zipcodes, aes(x = zip, y = latitude)) + geom_point()
ggsave("latitude_vs_zip.png", height = 7, width = 10)

ggplot(zipcodes, aes(x = zip, y = longitude)) + geom_point()
ggsave("longitude_vs_zip.png", height = 7, width = 10)

ggplot(zipcodes, aes(x = latitude, y = longitude, color = zip)) + geom_point()
ggsave("latitude_vs_longitude_color.png", height = 7, width = 10)

ggplot(zipcodes, aes(x = longitude, y = latitude, color = zip)) + geom_point()
ggsave("longitude_vs_latitude_color.png", height = 7, width = 10)

ggplot(subset(zipcodes, longitude < 0), aes(x = longitude, y = latitude, color = zip)) + geom_point()
ggsave("usa_color.png", height = 7, width = 10)
## Finder Bug in OS X
Four years after I first noticed it, Finder still has a bug in it that causes it to report a negative number of items waiting for deletion:
## Playing with The Circular Law in Julia
### Introduction
Statistically-trained readers of this blog will be very familiar with the Central Limit Theorem, which describes the asymptotic sampling distribution of the mean of a random vector composed of IID variables. Some of the most interesting recent work in mathematics has been focused on the development of increasingly powerful proofs of a similar law, called the Circular Law, which describes the asymptotic sampling distribution of the eigenvalues of a random matrix composed of IID variables.
Julia, which is funded by one of the world’s great experts on random matrix theory, is perfectly designed for generating random matrices to experiment with the Circular Law. The rest of this post shows how you might write the most naive sort of Monte Carlo study of random matrices in order to convince yourself that the Circular Law really is true.
### Details
In order to show off the Circular Law, we need to be a bit more formal. We’ll define a random matrix $$M$$ as an $$N$$x$$N$$ matrix composed of $$N^{2}$$ IID complex random variables, each with mean $$0$$ and variance $$\frac{1}{N}$$. Then the distribution of the $$N$$ eigenvalues asymptotically converges to the uniform distribution over the unit disc. I think it’s easiest to see this by doing a Monte Carlo simulation of the eigenvalues of random matrices composed of Gaussian variables for five values of $$N$$:
f = open("eig.tsv", "w")
println(f, join(["N", "I", "J", "Re", "Im"], "\t"))
ns = [1, 2, 5, 25, 50]
sims = 100

for n in ns
  for i in 1:sims
    m = (1 / sqrt(n)) * randn(n, n)
    e, v = eig(m)
    for j in 1:n
      println(f, join([n, i, j, real(e[j]), imag(e[j])], "\t"))
    end
  end
end

close(f)
The results from this simulation are shown below:
These images highlight two important patterns:
1. For the entry-level Gaussian distribution, the distribution of eigenvalues converges on the unit circle surprisingly quickly.
2. There is a very noticeable excess of values along the real line that goes away much more slowly than the movement towards the unit disk.
If you’re interested in exploring this topic, I’d encourage you to try two things:
1. Obtain samples from a larger variety of values of $$N$$.
2. Construct samples from other entry-level distributions than the Gaussian distribution used here.
PS: Drew Conway wants me to note that Julia likes big matrices.
## Will Data Scientists Be Replaced by Tools?
### The Quick-and-Dirty Summary
I was recently asked to participate in a proposed SXSW panel that will debate the question, “Will Data Scientists Be Replaced by Tools?” This post describes my current thinking on that question as a way of (1) convincing you to go vote for the panel’s inclusion in this year’s SXSW and (2) instigating a larger debate about the future of companies whose business model depends upon Data Science in some way.
### The Slow-and-Clean Version
In the last five years, Data Science has emerged as a new discipline, although there are many reasonable people who think that this new discipline is largely a rebranding of existing fields that suffer from a history of poor marketing and weak publicity.
All that said, within the startup world, I see at least three different sorts of Data Science work being done that really do constitute new types of activities for startups to be engaged in:
1. Data aggregation and redistribution: A group of companies like DataSift have emerged recently whose business model centers on the acquisition of large quantities of data, which they resell in raw or processed form. These companies are essentially the Exxon’s of the data mining world.
2. Data analytics toolkit development: Another group of companies like Prior Knowledge have emerged that develop automated tools for data analysis. Often this work involves building usable and scalable implementations of new methods coming out of academic machine learning groups.
3. In-house data teams: Many current startups and once-upon-a-time startups now employ at least one person whose job title is Data Scientist. These people are charged with extracting value from the data accumulated by startups as a means of increasing the market value of these startups.
I find these three categories particularly helpful here, because it seems to me that the question, “Will Data Scientists Be Replaced by Tools?”, is most interesting when framed as a question about whether the third category of workers will be replaced by the products designed by the second category of workers. I see no sign that the first category of companies will go away anytime soon.
When posed this way, the most plausible answer to the question seems to be: “data scientists will have portions of their job automated, but their work will be much less automated than one might hope. Although we might hope to replace knowledge workers with algorithms, this will not happen as soon as some would like to claim.”
In general, I’m skeptical of sweeping automation of any specific branch of knowledge workers because I think the major work done by a data scientist isn’t technological, but sociological: their job is to communicate with the heads of companies and with the broader public about how data can be used to improve businesses. Essentially, data scientists are advocates for better data analysis and for more data-driven decision-making, both of which require constant vigilance to maintain. While the mathematical component of the work done by a data scientist is essential, it is nevertheless irrelevant in the absence of human efforts to sway decision-makers.
To put it another way, many of the problems in our society aren’t failures of science or technology, but failures of human nature. Consider, for example, Atul Gawande’s claim that many people still die each year because doctors don’t wash their hands often enough. Even though Semmelweis showed the world that hygiene is a life-or-death matter in hospitals more than a century ago, we’re still not doing a good enough job maintaining proper hygiene.
Similarly, we can examine the many sloppy uses of basic statistics that can be found in the biological and the social sciences — for example, those common errors that have been recently described by Ioannidis and Simonsohn. Basic statistical methods are already totally automated, but this automation seems to have done little to make the analysis of data more reliable. While programs like SPSS have automated the computational components of statistics, they have done nothing to diminish the need for a person in the loop who understands what is actually being computed and what it actually means about the substantive questions being asked of data.
While we can — and will — develop better tools for data analysis in the coming years, we will not do nearly as much as we hope to obviate the need for sound judgment, domain expertise and hard work. As David Freedman put it, we’re still going to need shoe leather to get useful insights out of data and that will require human intervention for a very long time to come. The data scientist can no more be automated than the CEO.
## DataGotham
As some of you may know already, I’m co-organizing an upcoming conference called DataGotham that’s taking place in September. To help spread the word about DataGotham, I’m cross-posting the most recent announcement below:
We’d like to let you know about DataGotham: a celebration of New York City’s data community!
http://datagotham.com
This is an event run by Drew Conway, Hilary Mason, Mike Dewar and John Myles White that will bring together professionals from finance to fashion and from startups to the Fortune 500. This day-and-a-half event will consist of intense discussion, networking, and sharing of wisdom, and will take place September 13th-14th at NYU Stern.
We also have four tutorials running on the afternoon of the 13th, followed by cocktails and The Great Data Extravaganza Show at the Tribeca Rooftop that evening. Tickets are on sale – we would love to see you there!
If you’d like to attend, please see the DataGotham website for more information. Since you’re a reader of this fine blog, we’ve set up a special “Friends and Family” discount that will give you 25% off the ticket price. To get the discount, you need to use the promo code “dataGothamist”.
## The Social Dynamics of the R Core Team
Recently a few members of R Core have indicated that part of what slows down the development of R as a language is that it has become increasingly difficult over the years to achieve consensus among the core developers of the language. Inspired by these claims, I decided to look into this issue quantitatively by measuring the quantity of commits to R’s SVN repository that were made by each of the R Core developers. I wanted to know whether a small group of developers were overwhelmingly responsible for changes to R or whether all of the members of R Core had contributed equally. To follow along with what I did, you can grab the data and analysis scripts from GitHub.
First, I downloaded the R Core team’s SVN logs from http://developer.r-project.org/. I then used a simple regex to parse the SVN logs to count commits coming from each core committer.
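The parsing step is nothing elaborate; a sketch of the kind of regex pass involved (written here in Python rather than whatever was actually used, and assuming the standard svn log header format and a hypothetical log file name) looks like this:

import re
from collections import Counter

# Standard "svn log" revision headers look like:
# r12345 | ripley | 2012-01-01 10:00:00 +0000 (Sun, 01 Jan 2012) | 1 line
header = re.compile(r"^r\d+ \| (\S+) \|")

counts = Counter()
with open("r-core-svn.log") as f:        # hypothetical file holding the downloaded logs
    for line in f:
        m = header.match(line)
        if m:
            counts[m.group(1)] += 1

for committer, total in counts.most_common():
    print(committer, total)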
After that, I tabulated the number of commits from each developer, pooling across the years 2003-2012 for which I had logs. You can see the results below, sorted by total commits in decreasing order:
Committer Total Number of Commits
ripley 22730
maechler 3605
hornik 3602
murdoch 1978
pd 1781
apache 658
jmc 599
luke 576
urbaneks 414
iacus 382
murrell 324
leisch 274
tlumley 153
rgentlem 141
root 87
duncan 81
bates 76
falcon 45
deepayan 40
plummer 28
ligges 24
martyn 20
ihaka 14
After that, I tried to visualize evolving trends over the years. First, I visualized the number of commits per developer per year:
And then I visualized the evenness of contributions from different developers by measuring the entropy of the distribution of commits on a yearly basis:
There seems to be some weak evidence that the community is either finding consensus more difficult and tending towards a single leader who makes final decisions or that some developers are progressively dropping out because of the difficulty of achieving consensus. There is unambiguous evidence that a single developer makes the overwhelming majority of commits to R’s SVN repo.
I leave it to others to understand what all of this means for R and for programming language communities in general.
## My New Book: Developing, Deploying and Debugging Multi-Armed Bandit Algorithms
I’m happy to announce that I’ve started writing a new book for O’Reilly, which will focus on teaching readers how to use Multi-Armed Bandit Algorithms to build better websites. My hope is that the book can help web developers build up an intuition for the core conundrum facing anyone who wants to build a successful business: you have to constantly make trade-offs between (A) making decisions that are likely to yield safe, short-term successes and (B) taking chances on new opportunities that could be good long-term strategies, but could also be spectacular failures.
In the academic literature, this trade-off is usually called the Explore-Exploit dilemma. It’s been studied for decades because there is no universally applicable, out-of-the-box solution. There are several simple methods that nearly everyone knows about (like A/B testing), but these methods can lead to spectacular failures. For that reason, academics have invested a huge amount of intellectual energy into developing better approaches than the sort of randomized experimentation embodied in A/B testing.
In large part, they’ve made progress by studying an idealized version of the Explore-Exploit tradeoff called the Multi-Armed Bandit Problem. A Multi-Armed Bandit Problem is a thought experiment in which you try to make the most money possible while gambling at a casino that features only an assortment of different slot machines, each of which is called a bandit because of its propensity to steal the gambler’s money.
In the book, I’ll cover a few of the standard algorithms developed based on this thought experiment, including the $$\epsilon$$-greedy algorithm, softmax decision rules and a family of algorithms collectively called UCB. All of these algorithms will be implemented in Python, but my primary interest in writing the book isn’t to provide code for these algorithms to readers, but to give them a framework that will allow them to reason about how to solve the Explore-Exploit dilemma as it impacts their businesses and the general art of developing better websites. There are now many, many different algorithms available for solving the Multi-Armed Bandit Problem, but they all share a few core intuitions and strategies. I’m hoping to communicate those higher level intuitions to readers.
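To give a flavour of the material, here is a stripped-down epsilon-greedy sketch (not the book's code, just the core idea: with probability epsilon try a random arm, otherwise pull the arm with the best estimated value):

import random

def epsilon_greedy(epsilon, values):
    """Pick an arm index: explore with probability epsilon, otherwise exploit the best estimate."""
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda i: values[i])

def update(chosen, reward, counts, values):
    """Online update of the chosen arm's average reward."""
    counts[chosen] += 1
    values[chosen] += (reward - values[chosen]) / counts[chosen]

# Toy simulation against Bernoulli arms with unknown payoff rates
arms = [0.1, 0.1, 0.9]
counts, values = [0, 0, 0], [0.0, 0.0, 0.0]
for _ in range(10_000):
    i = epsilon_greedy(0.1, values)
    update(i, 1.0 if random.random() < arms[i] else 0.0, counts, values)
print(counts)   # the third arm should receive the overwhelming majority of pulls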
At the same time, I want to provide readers with a toolkit for deciding between the various options that are available to them. Web developers are very familiar with the importance of debugging, so I’d like to teach them the equivalent of debugging for Multi-Armed Bandit Problems, which involves simulation. Instead of building unit tests in which you confirm that your code can solve toy problems with clear answers, you run simulations to test that your algorithm can learn the right answer to toy problems with unambiguous right answers.
In summary, I hope readers will leave this book with a toolbox of strategies for developing, deploying and debugging multi-armed bandit algorithms in the context of web development. At large companies, these methods are responsible for millions of dollars of increased profits, so I think it’s high time that the core ideas trickle down from elite institutions like Google, Microsoft and Facebook to other web companies.
I’m really excited about this project and look forward to sharing more details as the book progresses.
## Automatic Hyperparameter Tuning Methods
At MSR this week, we had two very good talks on algorithmic methods for tuning the hyperparameters of machine learning models. Selecting appropriate settings for hyperparameters is a constant problem in machine learning, which is somewhat surprising given how much expertise the machine learning community has in optimization theory. I suspect there’s interesting psychological and sociological work to be done exploring why a problem that could be answered using known techniques wasn’t given an appropriate solution earlier.
Thankfully, the take away message of this blog post is that this problem is starting to be understood.
### A Two-Part Optimization Problem
To set up the problem of hyperparameter tuning, it’s helpful to think of the canonical model-tuning and model-testing setup used in machine learning: one splits the original data set into three parts — a training set, a validation set and a test set. If, for example, we plan to use L2-regularized linear regression to solve our problem, we will use the training set and validation set to select a value for the $$\lambda$$ hyperparameter that is used to determine the strength of the penalty for large coefficients relative to the penalty for errors in predictions.
With this context in mind, we can set up our problem using five types of variables:
1. Features: $$x$$
2. Labels: $$y$$
3. Parameters: $$\theta$$
4. Hyperparameters: $$\lambda$$
5. Cost function: $$C$$
We then estimate our parameters and hyperparameters in the following multi-step way so as to minimize our cost function:
$\theta_{Train}(\lambda) = \arg \min_{\theta} C(x_{Train}, y_{Train}, \theta, \lambda)$
$\lambda_{Validation}^{*} = \arg \min_{\lambda} C(x_{Validation}, y_{Validation}, \theta_{Train}(\lambda), \lambda)$
The final model performance is assessed using:
$C(x_{Test}, y_{Test}, \theta_{Train + Validation}(\lambda_{Validation}^{*}), \lambda_{Validation}^{*})$
This two-part minimization problem is similar in many ways to stepwise regression. Like stepwise regression, it feels like an opportunity for clean abstraction is being passed over, but it’s not clear to me (or anyone I think) if there is any analytic way to solve this problem more abstractly.
Instead, the methods we saw presented in our seminars were ways to find better approximations to $$\lambda^{*}$$ using less compute time. I’ll go through the traditional approach, then describe the newer and cleaner methods.
### Grid Search
Typically, hyperparameters are set using the Grid Search algorithm, which works as follows:
1. For each parameter $$p_{i}$$ the researcher selects a list of values to test empirically.
2. For each element of the Cartesian product of these values, the computer evaluates the cost function.
3. The computer selects the hyperparameter settings from this grid with the lowest cost.
Grid Search is about the worst algorithm one could possibly use, but it’s in widespread use because (A) machine learning experts seem to have less familiarity with derivative-free optimization techniques than with gradient-based optimization methods and (B) machine learning culture does not traditionally think of hyperparameter tuning as a formal optimization problem. Almost certainly (B) is more important than (A).
### Random Search
James Bergstra’s first proposed solution was so entertaining because, absent evidence that it works, it seems almost flippant to even propose: he suggested replacing Grid Search with Random Search. Instead of selecting a grid of values and walking through it exhaustively, you select a value for each hyperparameter independently using some probability distribution. You then evaluate the cost function given these random settings for the hyperparameters.
Since this approach seems like it might be worse than Grid Search, it’s worth pondering why it should work. James’ argument is this: most ML models have low effective dimension, which means that a small number of parameters really affect the cost function and most have almost no effect. Random search lets you explore a greater variety of settings for each parameter, which allows you to find better values for the few parameters that really matter.
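A concrete toy illustration of that argument (a sketch, not Bergstra's experiments): with a fixed budget of 25 evaluations of a made-up cost surface in which only one of two hyperparameters really matters, random search tries 25 distinct values of the important one, while a 5-by-5 grid only tries 5.

import random

def cost(lam, nuisance):
    # Toy cost surface with low effective dimension: only lam matters much.
    return (lam - 0.37) ** 2 + 1e-4 * nuisance

# Grid Search: 5 x 5 = 25 evaluations, but only 5 distinct values of lam.
grid = [i / 4 for i in range(5)]
best_grid = min(cost(l, n) for l in grid for n in grid)

# Random Search: the same 25 evaluations, each with a fresh value of lam.
best_random = min(cost(random.random(), random.random()) for _ in range(25))

print(best_grid, best_random)   # random search usually gets much closer to the optimum at lam = 0.37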
I am sure that Paul Meehl would have a field day with this research if he were alive to hear about it.
### Arbitrary Regression Problem
An alternative approach is to view our problem as one of Bayesian Optimization: we have an arbitrary function that we want to minimize which is costly to evaluate and we would like to find a good approximate minimum in a small number of evaluations.
When viewed in this perspective, the natural strategy is to regress the cost function on the settings of the hyperparameters. Because the cost function may depend on the hyperparameters in strange ways, it is wise to use very general purpose regression methods. I’ve recently seen two clever strategies for this, one of which was presented to us at MSR:
1. Jasper Snoek, Hugo Larochelle and Ryan Adams suggest that one use a Gaussian Process.
2. Among other methods, Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown suggest that one use Random Forests.
From my viewpoint, it seems that any derivative-free optimization method might be worth trying. While I have yet to see it published, I’d like to see more people try the Nelder-Mead method for tuning hyperparameters.
|
2018-01-20 05:13:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4427591860294342, "perplexity": 916.0797581859515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889325.32/warc/CC-MAIN-20180120043530-20180120063530-00684.warc.gz"}
|
https://www.coursehero.com/file/p273qsi9/Vertical-tangents-will-occur-where-the-derivative-is-not-defined-and-so-well-get/
|
# Vertical tangents will occur where the derivative is
Vertical tangents will occur where the derivative is not defined, and so we'll get vertical tangents at values of $t$ for which we have,

Vertical Tangent for Parametric Equations
$$\frac{dx}{dt}=0, \ \text{provided} \ \frac{dy}{dt}\neq 0$$

Let's take a quick look at an example of this.

Example 2 Determine the x-y coordinates of the points where the following parametric equations will have horizontal or vertical tangents.
$$x=t^{3}-3t \qquad \qquad y=3t^{2}-9$$

Solution
We'll first need the derivatives of the parametric equations.
$$\frac{dx}{dt}=3t^{2}-3=3\left(t^{2}-1\right) \qquad \qquad \frac{dy}{dt}=6t$$

Horizontal Tangents
We'll have horizontal tangents where,
$$6t=0 \quad \Rightarrow \quad t=0$$

Now, this is the value of $t$ which gives the horizontal tangents and we were asked to find the x-y coordinates of the point. To get these we just need to plug $t$ into the parametric equations. Therefore, the only horizontal tangent will occur at the point $(0,-9)$.

Vertical Tangents
In this case we need to solve,
$$3\left(t^{2}-1\right)=0 \quad \Rightarrow \quad t=\pm 1$$

The two vertical tangents will occur at the points $(2,-6)$ and $(-2,-6)$. For the sake of completeness and at least partial verification here is the sketch of the parametric curve.

The final topic that we need to discuss in this section really isn't related to tangent lines, but does fit in nicely with the derivation of the derivative that we needed to get the slope of the tangent line. Before moving into the new topic let's first remind ourselves of the formula for the first derivative, and in the process rewrite it slightly.
$$\frac{dy}{dx}=\frac{\dfrac{d}{dt}\left(y\right)}{\dfrac{dx}{dt}}$$

Written in this way we can see that the formula actually tells us how to differentiate a function $y$ (as a function of $t$) with respect to $x$ (when $x$ is also a function of $t$) when we are using parametric equations.

Now let's move onto the final topic of this section. We would also like to know how to get the second derivative of $y$ with respect to $x$,
$$\frac{d^{2}y}{dx^{2}}$$

Getting a formula for this is fairly simple if we remember the rewritten formula for the first derivative above.

Second Derivative for Parametric Equations
$$\frac{d^{2}y}{dx^{2}}=\frac{d}{dx}\left(\frac{dy}{dx}\right)=\frac{\dfrac{d}{dt}\left(\dfrac{dy}{dx}\right)}{\dfrac{dx}{dt}}$$

Note that,
$$\frac{d^{2}y}{dx^{2}}\neq \frac{\dfrac{d^{2}y}{dt^{2}}}{\dfrac{d^{2}x}{dt^{2}}}$$

Let's work a quick example.

Example 3 Find the second derivative for the following set of parametric equations.
$$x=t^{5}-4t^{3} \qquad \qquad y=t^{2}$$
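As a quick symbolic check of the boxed formula (a sketch using sympy, applied to the parametric equations as reconstructed in Example 3 above):

import sympy as sp

t = sp.symbols('t')
x = t**5 - 4 * t**3                                       # parametric equations from Example 3
y = t**2

dy_dx = sp.diff(y, t) / sp.diff(x, t)                     # first derivative dy/dx = (dy/dt)/(dx/dt)
d2y_dx2 = sp.simplify(sp.diff(dy_dx, t) / sp.diff(x, t))  # second derivative via the formula above
print(d2y_dx2)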
|
2021-12-08 13:28:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8812709450721741, "perplexity": 189.62612180181537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363510.40/warc/CC-MAIN-20211208114112-20211208144112-00546.warc.gz"}
|
https://physics.stackexchange.com/questions/486272/x-rays-gamma-rays-oven-vs-microwave-oven
|
# X-rays / Gamma rays “oven” vs microwave oven
Let's imagine a seller scammed you and sold you a Gamma rays / X-rays "oven" instead of a common microwave oven. The power consumption would be the same as a common microwave oven, i.e. about 1 kW, but the oven would use x-rays and/or gamma rays instead of micrometric waves. How would one notice that the oven behaves differently than a common microwave oven?
I do realize that the doors of microwave ovens have holes that let visible light pass but not microwaves, so, for the sake of making the question more interesting, let's assume that the door consists of a thick metallic wall. So essentially the oven looks like a metallic box from the outside.
Would the energy be used to heat the food as much as if microwaves were used? If not, is the energy then wasted to heat the walls of the x-rays oven?
In a past exam question, an experiment "performed by a student" described removing the rotating plate from the microwave and adding a "chocolate bar".
Due to standing waves being formed due to the EM waves reflecting off the opposite side to the transmitter, the "chocolate bar" would only "melt" in certain, periodic dots - at the antinodes.
The antinode spacing can then be found by measuring the distance across $$2n+1$$ antinodes and dividing by $$2n$$; since adjacent antinodes in a standing wave sit half a wavelength apart, doubling that spacing gives the wavelength $$\lambda$$.
We could then calculate the frequency of the EM waves being emitted, since we know $$c$$ and $$f=\frac{c}{\lambda}$$ and thus determine whether they are microwaves or x-rays.
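For a sense of scale: hot spots roughly $$6\ \text{cm}$$ apart would give $$\lambda \approx 12\ \text{cm}$$ and $$f = \frac{c}{\lambda} \approx 2.5\ \text{GHz}$$, consistent with the 2.45 GHz magnetron of a household microwave oven, whereas x-rays have wavelengths below a nanometre and could not produce a centimetre-scale melt pattern at all.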
Also, x-rays and microwaves have different properties - such as how they interact with organic matter so there is likely some simple test you could do in that sense.
• Following this line of thought, visible light has antinodes at much shorter spacing than microwaves, so the chocolate bar should melt almost continuously through it. But that's not what happens of course. It doesn't penetrate well into the chocolate bar at all. However X-rays certainly would! So what would happen should be investigated as you say, from the differences in how X-rays and Gamma rays interact with organic matter. – thermomagnetic condensed boson Jun 15 '19 at 21:21
• Well, plus x-rays can go completely through you with reasonable probability, making them pretty useless for heating. – Jon Custer Jun 15 '19 at 21:23
• @JonCuster that's what I suspected. So essentially the X-rays and Gamma rays would be mostly absorbed by the metallic walls, right? Please post this as answer if true... I would accept it. – thermomagnetic condensed boson Jun 15 '19 at 21:30
• microwave ovens have feedhorn assemblies intended to reduce the prevalence of antinodes and hot spots inside the cavity. Would the chocolate bar experiment have really worked like this in practice? – niels nielsen Jun 15 '19 at 21:35
• @JonCuster. wouldn't soft xrays be mostly absorbed by a human body? – jmh Jun 15 '19 at 21:55
Gamma ray doses are measured in sieverts and grays: one joule per kilogram. A dose of 5 sievert is deadly, but the heat developed is negligible: a temperature rise in water of about 0.001 C.
So your kilowatt oven would be visibly ionizing the air, I think.
• So essentially most of the energy of the gamma rays is used to break atomic/molecular bonds instead of making them vibrate? – thermomagnetic condensed boson Jun 16 '19 at 20:07
• A gamma photon gives rise to high-energy electrons which in turn produce other ionizations. Finally, all the energy becomes heat. – Pieter Jun 16 '19 at 20:14
• Hmm, so in the end a part of the energy is lost to ionize (and heat up?) the air. And what is absorbed by the food translates as heat too. So, I am a bit lost with the difference(s) with the common microwave oven, with regards to the heating of the food. – thermomagnetic condensed boson Jun 16 '19 at 20:27
• You asked what you would see (extremely hypothetical question). I answered that you would likely observe an airglow in the oven. – Pieter Jun 16 '19 at 20:30
• Gamma rays do not care about walls, whether it is concrete or aluminum. Gamma rays will mostly pass through food too. – Pieter Jun 16 '19 at 20:49
https://mathhelpboards.com/threads/indefinite-integral-with-two-parts.7168/
|
# Indefinite integral with two parts
#### find_the_fun
##### Active member
I'm trying to integrate $$\displaystyle \int e^{4\ln{x}}x^2 dx$$
I can't see using u-substitution; $$\displaystyle x^2$$ isn't the derivative of $$\displaystyle e^{4\ln{x}}$$ nor vice-versa.
I tried integrating by parts and that didn't work. I used $$\displaystyle u=e^{4\ln{x}}$$ and $$\displaystyle dv=x^2 dx$$
I know I can't rewrite $$\displaystyle e^{4\ln{x}}$$ as $$\displaystyle e^4e^{\ln{x}}$$
Last edited:
#### Chris L T521
##### Well-known member
Staff member
Re: indefinite integral with two parts
I'm trying to integrate $$\displaystyle \int e^{4\ln{x}}x^2 dx$$
I can't see using u-substitution; $$\displaystyle x^2$$ isn't the derivative of $$\displaystyle e^{4\ln{x}}$$ nor vice-versa.
I tried integrating by parts and that didn't work. I used $$\displaystyle u=e^{4\ln{x}}$$ and $$\displaystyle dv=x^2 dx$$
Note that $e^{4\ln x} = e^{\ln(x^4)} = x^4$.
Can you take things from here?
#### find_the_fun
##### Active member
Re: indefinite integral with two parts
Note that $e^{4\ln x} = e^{\ln(x^4)} = x^4$.
Can you take things from here?
I guess the lesson learned from this is to simplify the expression algebraically before attempting integration techniques.
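For completeness, the simplified integrand then gives $$\displaystyle \int e^{4\ln{x}}x^2\,dx = \int x^6\,dx = \frac{x^7}{7}+C$$.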
https://physics.stackexchange.com/questions/174328/lagrangian-formalism-application-on-a-particle-falling-system-with-air-resistanc
|
# Lagrangian formalism application on a particle falling system with air resistance
I have this problem, with a first-step resolution:
Obtain the equation of motion for a particle falling vertically under the influence of gravity when frictional forces obtainable from a dissipation function $$\frac12kv^2$$ are present. Integrate the equation to obtain the velocity as a function of time and show that the maximum possible velocity for a fall from rest is $$v = mg/k$$.
Work in one dimension, and use the most simple Lagrangian possible: $$L = \frac 12 m \dot z^2 - mgz$$ With dissipation function: $$F=\frac 12 k \dot z^2$$ The lagrangian formulation is now: $$\frac{d}{dt} \frac{\partial L}{\partial \dot z} - \frac{\partial L}{\partial z} + \frac{\partial F}{\partial \dot z} = 0$$
So, I just don't know why they put the term $$\frac{\partial F}{\partial \dot{z}}$$ in Lagrange's equations. Why? I know that the Rayleigh dissipation function isn't a conservative force, but I don't know why the partial derivative appears. For holonomic constraints we need to take the partial derivative of the constraint function $$f=0$$ with respect to $$q$$, the generalized coordinate: $$\frac{\partial f}{\partial{q}}$$. And we introduce it into Lagrange's equation multiplied by Lagrange's multiplier $$\lambda$$, on the RHS.
But here we have some kind of constraint with a velocity ($$\dot{z}$$) dependence. Is that why we need to put the term $$\frac{\partial F}{\partial \dot{z}}$$ in Lagrange's equations? But the term isn't null and they didn't add a Lagrange multiplier, so is it true that it isn't related to the constraint formalism?
The dissipation function is a bit like a potential. If your particle were only influenced by the gravitational force, you would say that $L=T-U$ where $U$ is the gravitational potential and the Euler-Lagrange equations would be the normal ones. But as you have a friction force, the Euler-Lagrange equations become: $$\frac{d}{dt}\frac{\partial L}{\partial \dot{z}} - \frac{\partial L}{\partial z}=Q_j$$
where $Q_j$ is the generalized force; from its definition via d'Alembert's principle it can be shown that $Q_j=-\frac{\partial F}{\partial \dot{z}}$ in this particular case. Indeed, if you have a dissipation function, that's always the form of your equations I think.
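For completeness, plugging the Lagrangian and dissipation function above into that equation gives the equation of motion $$m\ddot{z} + mg + k\dot{z} = 0.$$ Integrating with $$v = \dot{z}$$ and $$v(0)=0$$ yields $$v(t) = \frac{mg}{k}\left(\mathrm{e}^{-kt/m} - 1\right),$$ so the speed approaches the terminal value $$mg/k$$ as $$t \to \infty$$.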
https://ocaml.xyz/tutorial/case-recommender.html
|
Back
# Case - Recommender System
## Introduction
Our daily life heavily relies on recommendations. Intelligent content provision aims to match a user’s profile of interests to the best candidates in a large repository of options. There are several parallel efforts in integrating intelligent content provision and recommendation into web browsing. They differ from each other in the main technique used to achieve the goal.
The initial effort relies on the semantic web stack, which requires adding explicit ontology information to all web pages so that ontology-based applications (e.g., Piggy Bank) can utilise ontology reasoning to interconnect content semantically. Though the semantic web has a well-defined architecture, it suffers from the fact that most web pages are unstructured or semi-structured HTML files, and content providers lack the motivation to apply this technology to their websites. Therefore, even though the relevant research remains active in academia, the actual progress of adopting ontology-based methods in real-life applications has stalled in recent years.
Collaborative Filtering (CF), a term first coined in 1992, is a thriving research area and also the second alternative solution. Recommenders built on top of CF exploit the similarities in users’ rankings to predict one user’s preference for a specific piece of content. CF has attracted more research interest in recent years due to the popularity of online shopping (e.g., Amazon, eBay, Taobao, etc.) and video services (e.g., YouTube, Vimeo, Dailymotion, etc.). However, recommender systems need user behaviour rather than content itself as explicit input to bootstrap the service, and are usually constrained within a single domain. Cross-domain recommenders have made progress lately, but the complexity and scalability need further investigation.
Search engines can be considered as the third alternative though a user needs explicitly extract the keywords from the page then launch another search. The ranking of the search results is based on multiple ranking signals such as link analysis on the underlying graph structure of interconnected pages such as PageRank. Such graph-based link analysis is based on the assumption that those web pages of related topics tend to link to each other, and the importance of a page often positively correlates to its degree. The indexing process is modelled as a random walk atop of the graph derived from the linked pages and needs to be pre-compiled offline.
The fourth alternative is to utilise information retrieval (IR) technique. In general, a text corpus is transformed to the suitable representation depending on the specific mathematical models (e.g., set-theoretic, algebraic, or probabilistic models), based on which a numeric score is calculated for ranking. Different from the previous CF and link analysis, the underlying assumption of IR is that the text (or information in a broader sense) contained in a document can very well indicate its (latent) topics. The relatedness of any two given documents can be calculated with a well-defined metric function atop of these topics. Since topics can have a direct connection to context, context awareness therefore becomes the most significant advantage in IR, which has been integrated into Hummingbird, Google’s new search algorithm.
In the rest of this chapter, we will introduce Kvasir, a system built on top of latent semantic analysis. Kvasir automatically looks for similar articles when a user is browsing a web page and injects the search results into an easily accessible panel within the browser view for seamless integration. Kvasir belongs to content-based filtering and emphasises the semantics contained in the unstructured web text. This chapter is based on the papers [@7462177] and [@7840682], and you will find that much of the basic theory is already covered in the NLP chapter in Part I. Henceforth we will assume you are familiar with that part.
## Architecture
At the core, Kvasir implements an LSA-based index and search service, and its architecture can be divided into two subsystems as frontend and backend. Figure [@fig:case-recommender:architecture] illustrates the general workflow and internal design of the system. The frontend is currently implemented as a lightweight extension in Chrome browser. The browser extension only sends the page URL back to the KServer whenever a new tab/window is created. The KServer running at the backend retrieves the content of the given URL then responds with the most relevant documents in a database. The results are formatted into JSON strings. The extension presents the results in a friendly way on the page being browsed. From user perspective, a user only interacts with the frontend by checking the list of recommendations that may interest him.
To connect to the frontend, the backend exposes one simple RESTful API as below, which gives great flexibility to all possible frontend implementations. By loosely coupling with the backend, it becomes easy to mash-up new services on top of Kvasir. In the code below, Line 1 and 2 give an example request to Kvasir service. type=0 indicates that info contains a URL, otherwise info contains a piece of text if type=1. Line 4-9 present an example response from the server, which contains the meta-info of a list of similar articles. Note that the frontend can refine or rearrange the results based on the meta-info (e.g., similarity or timestamp).
POST https://api.kvasir/query?type=0&info=url
{"results": [
{"title": document title,
"similarity": similarity metric,
"timestamp": document create date}
]}
The backend system implements indexing and searching functionality which consists of five components: Crawler, Cleaner, DLSA, PANNS and KServer. Three components (i.e., Cleaner, DLSA and PANNS) are wrapped into one library since all are implemented on top of Apache Spark. The library covers three phases: text cleaning, database building, and indexing. We briefly present the main tasks of each component below.
Crawler collects raw documents from the web and then compiles them into two data sets. One is the English Wikipedia dump, and another is compiled from over 300 news feeds of the high-quality content providers such as BBC, Guardian, Times, Yahoo News, MSNBC, etc. [@tbl:case-recommender:dataset] summarises the basic statistics of the data sets. Multiple instances of the Crawler run in parallel on different machines. Simple fault-tolerant mechanisms like periodical backup have been implemented to improve the robustness of crawling process. In addition to the text body, the Crawler also records the timestamp, URL and title of the retrieved news as meta information, which can be further utilised to refine the search results.
Two data sets are used in Kvasir evaluation {#tbl:case-recommender:dataset}

| Data set  | # of entries       | Raw text size | Article length |
|-----------|--------------------|---------------|----------------|
| Wikipedia | $$3.9\times~10^6$$ | 47.0 GB       | Avg. 782 words |
| News      | $$4.6\times~10^5$$ | 1.62 GB       | Avg. 648 words |
Cleaner cleans the unstructured text corpus and converts the corpus into term frequency-inverse document frequency (TF-IDF) model. In the preprocessing phase, we clean the text by removing HTML tags and stop words, de-accenting, tokenisation, etc. The dictionary refers to the vocabulary of a language model. Its quality directly impacts the model performance. To build the dictionary, we exclude both extremely rare and extremely common terms, and keep $$10^5$$ most popular ones as features. More precisely, a term is considered as rare if it appears in less than 20 documents, while a term is considered as common if it appears in more than 40% of documents.
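As a minimal sketch of this pruning rule (the thresholds are the ones just mentioned; the per-term document-frequency table is assumed to be available from the cleaning step), a term is kept as a dictionary feature only when:

(* keep a term as a feature only if it is neither too rare nor too common *)
let keep_term num_docs term_df =
  term_df >= 20 && float_of_int term_df <= 0.4 *. float_of_int num_docs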
DLSA builds up an LSA-based model from the previously constructed TF-IDF model. Technically, the TF-IDF itself is already a vector space language model. The reason we seldom use TF-IDF directly is that the model contains too much noise and the dimensionality is too high to process efficiently even on a modern computer. To convert a TF-IDF to an LSA model, DLSA’s algebraic operations involve large matrix multiplications and time-consuming SVD. We initially tried to use MLlib to implement DLSA. However, MLlib is unable to perform SVD on a data set of $$10^5$$ features with limited RAM, so we had to implement our own stochastic SVD on Apache Spark using a rank-revealing technique. DLSA will be discussed in detail later in this chapter.
PANNS builds the search index to enable fast $$k$$-NN search in high dimensional LSA vector spaces. Though dimensionality has been significantly reduced from TF-IDF ($$10^5$$ features) to LSA ($$10^3$$ features), $$k$$-NN search in a $$10^3$$-dimension space is still a great challenge especially when we try to provide responsive services. A naive linear search using one CPU takes over 6 seconds to finish in a database of 4 million entries, which is unacceptably long for any realistic services. PANNS implements a parallel RP-tree algorithm which makes a reasonable tradeoff between accuracy and efficiency. PANNS is the core component in the backend system and we will present its algorithm in detail in later chapter. PANNS is becoming a popular choice of Python-based approximate k-NN library for application developers. According to the PyPI’s statistics, PANNS has achieved over 27,000 downloads since it was first published in October 2014.
KServer runs within a web server, processes the users requests and replies with a list of similar documents. KServer uses the index built by PANNS to perform fast search in the database. The ranking of the search results is based on the cosine similarity metric. A key performance metric for KServer is the service time. We wrapped KServer into a Docker image and deployed multiple KServer instances on different machines to achieve better performance. We also implemented a simple round-robin mechanism to balance the request loads among the multiple KServers.
Kvasir architecture provides a great potential and flexibility for developers to build various interesting applications on different devices, e.g., semantic search engine, intelligent Twitter bots, context-aware content provision, etc. We provide the live demo videos of the seamless integration of Kvasir into web browsing at the official website. Kvasir is also available as browser extension on Chrome and Firefox.
## Build Topic Models
As explained in the previous section, the Crawler and Cleaner perform data collection and processing to build the vocabulary and TF-IDF model. We have already talked about this part in detail in the NLP chapter. DLSA and PANNS are the two core components responsible for building language models and indexing the high dimensional data sets in Kvasir. In this section, we first sketch out the key ideas in DLSA.
First, a recap of LSA from the NLP chapter. The vector space model belongs to algebraic language models, where each document is represented with a row vector. Each element in the vector represents the weight of a term in the dictionary calculated in a specific way. E.g., it can be simply calculated as the frequency of a term in a document, or slightly more complicated TF-IDF. The length of the vector is determined by the size of the dictionary (i.e., number of features). A text corpus containing $$m$$ documents and a dictionary of $$n$$ terms will be converted to an $$A = m \times n$$ row-based matrix. Informally, we say that $$A$$ grows taller if the number of documents (i.e., $$m$$) increases, and grows fatter if we add more terms (i.e., $$n$$) in the dictionary.
The core operation in LSA is to perform SVD. For that we need to calculate the covariance matrix $$C = A^T \times A$$, which is an $$n \times n$$ matrix and is usually much smaller than $$A$$. This operation poses a bottleneck in computing: $$m$$ can be very large (a lot of documents) or $$n$$ can be very large (a lot of features for each document). For the first case, we can easily parallelise the calculation of $$C$$ by dividing $$A$$ into $$k$$ smaller chunks of size $$[\frac{m}{k}] \times n$$, so that the final result can be obtained by aggregating the partial results as $$C = \sum_{i=1}^{k} A^T_i \times A_i$$.
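To make the chunked accumulation concrete, here is a minimal in-memory sketch using Owl's dense Mat module; it only illustrates the formula above and is not the actual DLSA code, which runs on Apache Spark. The matrix a and the chunk count k are assumptions of the example.

let covariance_by_chunks a k =
  let m = Mat.row_num a in
  let n = Mat.col_num a in
  let chunk = (m + k - 1) / k in
  let c = ref (Mat.zeros n n) in
  for i = 0 to k - 1 do
    let r0 = i * chunk in
    let r1 = min m (r0 + chunk) - 1 in
    if r0 <= r1 then (
      (* accumulate the partial product A_i^T x A_i of the i-th row chunk *)
      let ai = Mat.get_slice [[r0; r1]] a in
      c := Mat.add !c (Mat.dot (Mat.transpose ai) ai)
    )
  done;
  !c

Summing the per-chunk products gives exactly the same $$C$$ as one big multiplication, which is what makes the map-reduce style parallelisation straightforward.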
However, a more serious problem is posed by the second issue. The SVD function in MLlib is only able to handle tall and thin matrices up to some hundreds of features. For most of the language models, there are often hundreds of thousands features (e.g., $$10^5$$ in our case). The covariance matrix $$C$$ becomes too big to fit into the physical memory, hence the native SVD operation in MLlib of Spark fails as the first subfigure of Figure [@fig:case-recommender:revealing] shows.
In linear algebra, a matrix can be approximated by another matrix of lower rank while still retaining approximately properties of the matrix that are important for the problem at hand. In other words, we can use another thinner matrix $$B$$ to approximate the original fat $$A$$. The corresponding technique is referred to as rank-revealing QR estimation. We won’t talk about this method in detail, but the basic idea is that, the columns are sparse and quite likely linearly dependent. If we can find the rank $$r$$ of a matrix $$A$$ and find suitable $$r$$ columns to replace the original matrix, we can then approximate it. A TF-IDF model having $$10^5$$ features often contains a lot of redundant information. Therefore, we can effectively thin the matrix $$A$$ then fit $$C$$ into the memory. Figure [@fig:case-recommender:revealing] illustrates the algorithmic logic in DLSA, which is essentially a distributed stochastic SVD implementation.
To sum up, we propose to reduce the size of TF-IDF model matrix to fit it into the memory, so that we can get a LSA model, where we know the document-topic and topic-word probability distribution.
## Index Text Corpus
With a LSA model at hand, finding the most relevant document is equivalent to finding the nearest neighbours for a given point in the derived vector space, which is often referred to as k-NN problem. The distance is usually measured with the cosine similarity of two vectors. In the NLP chapter we have seen how to use linear search in the LSA model. However, neither naive linear search nor conventional k-d tree is capable of performing efficient search in such high dimensional space even though the dimensionality has been significantly reduced from $$10^5$$ to $$10^3$$ by LSA.
The key observation is that, we need not locate the exact nearest neighbours in practice. In most cases, slight numerical error (reflected in the language context) is not noticeable at all, i.e., the returned documents still look relevant from the user’s perspective. By sacrificing some accuracy, we can obtain a significant gain in searching speed.
### Random Projection
To optimise the search, the basic idea is that, instead of searching in all the existing vectors, we can pre-cluster the vectors according to their distances, each cluster with only a small number of vectors. For an incoming query, as long as we can put this vector into a suitable cluster, we can then search for close vectors only in that cluster.
[@fig:case-recommender:projection] gives a naive example on a 2-dimension vector space. First, a random vector $$x$$ is drawn and all the points are projected onto $$x$$. Then we divide the whole space into half at the mean value of all projections (i.e., the blue circle on $$x$$) to reduce the problem size. For each new subspace, we draw another random vector for projection, and this process continues recursively until the number of points in the space reaches the predefined threshold on cluster size.
In the implementation, we can construct a binary tree to facilitate the search. Technically, this can be achieved by any tree-based algorithms. Given a tree built from a database, we answer a nearest neighbour query $$q$$ in an efficient way, by moving $$q$$ down the tree to its appropriate leaf cell, and then return the nearest neighbour in that cell. In Kvasir, we use the Randomised Partition tree (RP-tree) introduced in [@dasgupta2013randomized] to do it. The general idea of RP-tree algorithm used here is clustering the points by partitioning the space into smaller subspaces recursively.
The [@fig:case-recommender:search] illustrates how binary search can be built according to the dividing steps shown above. You can see the five nodes in the vector space are put into five clusters/leaves step by step. The information of the random vectors such as x, y, and z are also saved. Once we have this tree, given another query vector, we can put it into one of the clusters along the tree to find the cluster of vectors that are close to it.
Of course, we have already said that this efficiency is traded-off with search accuracy. One type of common misclassification is that it is possible that we can separate close vectors into different clusters. As we can see in the first subfigure of [@fig:case-recommender:projection], though the projections of $$A$$, $$B$$, and $$C$$ seem close to each other on $$x$$, $$C$$ is actually quite distant from $$A$$ and $$B$$. The reverse can also be true: two nearby points are unluckily divided into different subspaces, e.g., points $$B$$ and $$D$$ in the left panel of [@fig:case-recommender:projection].
It has been shown that such misclassifications become arbitrarily rare as the iterative procedure continues by drawing more random vectors and performing corresponding splits. In the implementation, we follow this path and build multiple RP-trees. We expect that the randomness in tree construction will introduce extra variability in the neighbours that are returned by several RP-trees for a given query point. This can be taken as an advantage in order to mitigate the second kind of misclassification while searching for the nearest neighbours of a query point in the combined search set. As shown in [@fig:case-recommender:union], given an input query vector x, we find its neighbour in three different RP-trees, and the final set of neighbour candidates comes from the union of these three different sets.
### Optimising Vector Storage
You may have noticed that, in this method, we need to store all the random vectors that are generated in the non-leaf nodes of the tree. That means storing a large number of random vectors at every node of the tree, each with a large number features. It introduces significant storage overhead. For a corpus of 4 million documents, if we use $$10^5$$ random vectors (i.e., a cluster size of $$\frac{4\times~10^6}{2\times~10^5} = 20$$ on average), and each vector is a $$10^3$$-dimension real vector (32-bit float number), the induced storage overhead is about 381.5~MB for each RP-tree. Therefore, such a solution leads to a huge index of $$47.7$$~GB given $$128$$ RP-trees are included, or $$95.4$$~GB given $$256$$ RP-trees.
The huge index size not only consumes a significant amount of storage resources, but also prevents the system from scaling up after more and more documents are collected. One possible solution to reduce the index size is reusing the random vectors. Namely, we can generate a pool of random vectors once, and then randomly choose one from the pool each time when one is needed. However, the immediate challenge emerges when we try to parallelise the tree building on multiple nodes, because we need to broadcast the pool of vectors onto every node, which causes significant network traffic.
To address this challenge, we propose to use a pseudo random seed in building and storing the search index. Instead of maintaining a pool of random vectors, we just need a random seed for each RP-tree. As shown in [@fig:case-recommender:randomseed], in a leaf cluster, instead of storing all the vectors, only the indices of vectors in the original data set are stored. The computation node can rebuild all the random vectors on the fly from the given seed.
From the model building perspective, we can easily broadcast several random seeds with negligible traffic overhead instead of a large matrix in the network. In this way we improve the computation efficiency. From the storage perspective, we only need to store one 4-byte random seed for each RP-tree. In such a way, we are able to successfully reduce the storage overhead from $$47.7$$~GB to $$512$$~B for a search index consisting of $$128$$ RP-trees (with cluster size 20), or from $$95.4$$~GB to only $$1$$~KB if $$256$$ RP-trees are used.
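The property this relies on is that a seeded PRNG is deterministic: re-initialising it with the same seed reproduces exactly the same random matrix. A quick sketch of this check with Owl (the shapes used here are arbitrary):

let regenerate seed m n =
  Owl_stats_prng.init seed;
  Mat.gaussian m n

let _ =
  let v1 = regenerate 42 1000 20 in
  let v2 = regenerate 42 1000 20 in
  (* the two matrices are identical, so storing the 4-byte seed suffices *)
  assert (Mat.equal v1 v2)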
### Optimise Data Structure
Let’s consider using multiple RP-trees a bit more. Regarding the design of PANNS, we have two design options in order to improve the searching accuracy. Namely, given the size of the aggregated cluster, which is taken as the union of all the target clusters from every tree, we can either use fewer trees with larger leaf clusters, or use more trees with smaller leaf clusters. Increasing cluster size is intuitive: if we increase it until a cluster includes all the vectors, the search becomes exact.
On the other hand, we expect that when using more trees the probability of a query point to fall very close to a splitting hyperplane should be reduced, thus it should be less likely for its nearest neighbours to lie in a different cluster. By reducing such misclassifications, the searching accuracy is supposed to be improved. Based on our knowledge, although there are no previous theoretical results that may justify such a hypothesis in the field of nearest neighbour search algorithms, this concept could be considered as a combination strategy similar to those appeared in ensemble clustering, a very well established field of research. Similar to our case, ensemble clustering algorithms improve clustering solutions by fusing information from several data partitions.
To experimentally investigate this hypothesis we employ a subset of the Wikipedia database for further analysis. In what follows, the data set contains $$500,000$$ points and we always search for the $$50$$ nearest neighbours of a query point. Then we measure the searching accuracy by calculating the amount of actual nearest neighbours found.
We query $$1,000$$ points in each experiment. The results presented in [@fig:case-recommender:exp01] correspond to the mean values of the aggregated nearest neighbours of the $$1,000$$ query points discovered by PANNS out of $$100$$ experiment runs. Note that the $$x$$-axis represents the “size of search space” which is defined by the number of unique points within the union of all the leaf clusters that the query point falls in. Therefore, given the same search space size, using more trees indicates that the leaf clusters become smaller. As we can see in [@fig:case-recommender:exp01], for a given $$x$$ value, the curves move upwards as we use more and more trees, indicating that the accuracy improves. As shown in the case of 50 trees, almost $$80\%$$ of the actual nearest neighbours are found by performing a search over $$10\%$$ of the data set.
Our empirical results clearly show the benefits of using more trees instead of using larger clusters for improving search accuracy. Moreover, regarding the searching performance, since searching can be easily parallelised, using more trees will not impact the searching time.
### Optimise Index Algorithm
In classic RP trees we have introduced above, a different random vector is used at each inner node of a tree. In this approach, the computations in the child-branches cannot proceed without finishing the computation in the parent node, as show in the left figure of [@fig:case-recommender:parallel]. Here the blue dotted lines are critical boundaries. Instead, we propose to use the same random vector for all the sibling nodes of a tree. This choice does not affect the accuracy at all because a query point is routed down each of the trees only once; hence, the query point is projected onto a random vector $$r_i$$ sampled from the same distribution at each level of a tree. This means that we don’t need all the inner non-leaf node to be independent random vectors. Instead, the query point is projected onto only $$l$$ i.i.d. random vectors $$r_1, \ldots, r_l$$. An RP-tree has $$2^l-1$$ inner nodes. Therefore, if each node of a tree had a different random vector as in classic RP-trees, $$2^l-1$$ different random vectors would be required for one tree. However, when a single vector is used on each level, only $$l$$ vectors are required. This reduces the amount of memory required by the random vectors from exponential to linear with respect to the depth of the trees.
Besides, another benefit of using one random vector per layer is that it speeds up the index construction significantly, since we can vectorise the computation. Let’s first look at the projection of vector $$a$$ on $$b$$. The projected length on $$b$$ can be expressed as:
$$\|a\|\cos\theta = a \cdot \frac{b}{\|b\|}.$$ {#eq:case-recommender:project}
Here $$\|a\|$$ means the length of vector $$\mathbf{a}$$. If we require that all the random vectors $$\mathbf{b}$$ are normalised, [@eq:case-recommender:project] becomes $$a \cdot b$$, the vector dot product. Now we can perform the projection at this layer by computing $$Xb_l$$. Here $$X$$ is the dataset, where each row is a document and each column is a feature; $$b_l$$ is the random vector that we use for this layer. In this way, we don’t have to wait for the left subtree to finish before starting to cut the right subtree.
Now here is the tricky bit: we don’t even have to wait for the upper layer to start cutting the lower layer! The reason is that, at each layer, we do random projection of all the nodes in the dataset on one single random vector $$b$$. We don’t really care the random clustering result from the previous layer. Therefore, we can perform $$Xb_1$$, $$Xb_2$$, …, $$Xb_l$$ at the same time. That means, the projected data set $$P$$ can be computed directly from the dataset $$X$$ and a random matrix $$B$$ as $$P = XB$$ with only one pass of matrix multiplication. Here each column of $$B$$ is just the random vector we use at a layer.
In this approach there is no boundary, and all the projections can be done in just one matrix multiplication. While some of the observed speed-up is explained by the decreased number of random vectors that have to be generated, mostly it is due to enabling efficient computation of all the projections. Although the total amount of computation stays the same, in practice this speeds up the index construction significantly due to cache effects and low-level parallelisation through vectorisation. Matrix multiplication is a basic linear algebra operation and many low-level numerical libraries, such as OpenBLAS and MKL, provide extremely high-performance implementations of it.
## Search Articles
By using RP-trees we have already limited the search range from the whole text corpus to a cluster containing only a small number of documents (vectors), where we can do a linear search. We have also introduced several optimisations on the RP-tree itself, including using multiple trees, using a random seed to remove the storage of random vectors, improving computation efficiency, etc. But we don’t stop here: can we further improve the linear search itself? It turns out we can.
To select the best candidates from a cluster of points, we need to use the coordinates in the original space to calculate their relative distance to the query point. This however, first increases the storage overhead since we need to keep the original high dimensional data set which is usually huge; second increases the query overhead since we need to access such data set. The performance becomes more severely degraded if the original data set is too big to load into the physical memory. Moreover, computing the distance between two points in the high dimensional space per se is very time-consuming.
Nonetheless, we will show that it is possible to completely get rid of the original data set while keeping the accuracy at a satisfying level. The core idea is simple. Let’s look at the second subfigure in [@fig:case-recommender:projection]. Imagine that we add a new point and search for similar vectors. The normal approach is to compute the distance between this node and A, B, C, etc. But if you look closely, all the existing nodes are already projected onto the vector y, so we can also project the incoming query vector onto y and check which of these points it is close to. Instead of computing the distance between two vectors, now we only compute the absolute value of the difference of two numbers (since we can always project a vector onto another one and get a real number as result) as the distance. By replacing the original space with the projected one, we are able to achieve a significant reduction in storage and non-trivial gains in searching performance.
Of course, it is not always an accurate estimation. In the first subfigure of [@fig:case-recommender:projection], a node can be physically close to A or B, but its projection could be closest to that of C. That again requires us to consider using multiple RP-trees. But instead of the actual vector content, in the leaf nodes of the trees we store only (index, projected value). Now for the input query vector, we run it through the $$N$$ RP-trees and get $$N$$ sets of (index, value) pairs. Here each value is the absolute value of the difference between the projected value of the vector in the tree and that of the query vector itself. Each vector of course is labelled by a unique index.
For each index, we propose to use this metric: $$\frac{\sum~d_i}{\sum~c_i}$$ to measure how close it is to the query vector. Here $$d_i$$ is the distance between node $$i$$ and query node on projected space, and $$c_i$$ is the count of total number of node $$i$$ in all the candidate sets from all the RP-trees. Smaller measurement means closer distance. The intuition is that, if distance value of a node on the projected space is small, then it is possibly close to the query node; or, if a node appears many times from the candidate sets of different RP-trees, it is also quite likely a possible close neighbour.
As a further improvement, we update this metric to $$\frac{\sum~d_i}{(\sum~c_i)^3}$$. By doing so, we give much more weight to the points which have multiple occurrences across different RP-trees, assuming that such points are more likely to be the true k-NN. Experiment results confirm that with this metric it is feasible to use only the projected space in the actual system implementation. Please refer to the original paper if you are interested in more detail.
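The following is a minimal sketch of this ranking metric (my own simplification, not the Kvasir production code). It assumes candidates is an array of (index, projected-space distance) pairs pooled from all the RP-trees, and returns document indices sorted by $$\frac{\sum~d_i}{(\sum~c_i)^3}$$ in increasing order, i.e. best match first.

let rank_candidates candidates =
  let h = Hashtbl.create 128 in
  (* accumulate the total projected distance and the occurrence count per index *)
  Array.iter (fun (idx, d) ->
    let sum_d, cnt = try Hashtbl.find h idx with Not_found -> (0., 0) in
    Hashtbl.replace h idx (sum_d +. d, cnt + 1)
  ) candidates;
  Hashtbl.fold (fun idx (sum_d, cnt) acc ->
    let c = float_of_int cnt in
    (idx, sum_d /. (c *. c *. c)) :: acc
  ) h []
  |> List.sort (fun (_, s1) (_, s2) -> compare s1 s2)
  |> List.map fst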
## Code Implementation
What we have introduced is the main theory behind the Kvasir, a smart content discovery tool to help you manage this rising information flood. In this chapter, we will show some naive code implementation in OCaml and Owl to help you better understand what we have introduced so far.
First, we show the simple random projection along a RP-tree.
let make_projection_matrix seed m n =
Owl_stats_prng.init seed;
Mat.gaussian m n |> Mat.to_arrays
let make_projected_matrix m n =
Array.init m (fun _ -> Array.make n 0.)
These two functions create the projection matrix and the matrix that stores the projected results; both are returned as arrays of row vectors.
let project i j s projection projected =
let r = ref 0. in
Array.iter (fun (w, a) ->
r := !r +. a *. projection.(w).(j);
) s;
projected.(j).(i) <- !r
Based on these two matrices, the project function processes document i at level j in the tree. The document vector s is stored as sparse (index, value) pairs. The projection is basically a dot product between s and the j-th column of the projection matrix.
let random seed cluster tfidf =
let num_doc = Nlp.Tfidf.length tfidf in
let vocab_len = Nlp.Tfidf.vocab_len tfidf in
let level = Maths.log2 (float_of_int num_doc /. cluster) |> ceil |> int_of_float in
let projection = make_projection_matrix seed vocab_len level in
let projected = make_projected_matrix level num_doc in
Nlp.Tfidf.iteri (fun i s ->
for j = 0 to level - 1 do
project i j s projection projected;
done;
) tfidf;
vocab_len, level, projected
The random function performs a random projection of the sparse data set, based on a built TF-IDF model. Technically, a better way is to use the LSA model as the vectorised representation of documents, as we have introduced above, since an LSA model derived from TF-IDF represents the more abstract idea of topics and has fewer features. However, here it suffices to use the TF-IDF model to show the random projection process. This function projects all the document vectors in the model into the projected matrix, level by level. Recall that the result only contains the projected values instead of the whole vectors.
As we have explained in the “Search Articles” section, this process can be accelerated to use matrix multiplication. The code below shows this implementation for the random projection function. It also returns the shape of projection and the projected result.
let make_projection_matrix seed m n =
Owl_stats_prng.init seed;
Mat.gaussian m n
let random seed cluster data =
let m = Mat.row_num data in
let n = Mat.col_num data in
let level = Maths.log2 (float_of_int m /. cluster) |> ceil |> int_of_float in
let projection = make_projection_matrix seed n level in
let projected = Mat.dot data projection |> Mat.transpose in
n, level, projected, projection
After getting the projection result, we need to build a RP-tree accordingly. The following is about how to build the index in the form of a binary search tree. The tree is defined as:
type t =
| Node of float * t * t (* intermediate nodes: split, left, right *)
| Leaf of int array (* leaves only contains doc_id *)
An intermediate node includes three parts: split, left, right, and the leaves only contain document index.
let split_space_median space =
let space_size = Array.length space in
let size_of_l = space_size / 2 in
let size_of_r = space_size - size_of_l in
(* sort into increasing order for median value *)
Array.sort (fun x y -> Pervasives.compare (snd x) (snd y)) space;
let median =
match size_of_l < size_of_r with
| true -> snd space.(size_of_l)
| false -> (snd space.(size_of_l-1) +. snd space.(size_of_l)) /. 2.
in
let l_subspace = Array.sub space 0 size_of_l in
let r_subspace = Array.sub space size_of_l size_of_r in
median, l_subspace, r_subspace
The split_space_median function divides the projected space into subspaces to assign left and right subtrees. The passed in space is the projected values on a specific level. The criterion of division is the median value. The Array.sort function sorts the space into increasing order for median value.
let filter_projected_space level projected subspace =
let plevel = projected.(level) in
Array.map (fun (doc_id, _) -> doc_id, plevel.(doc_id)) subspace
Based on the document id of the points in the subspace, filter_projected_space function filters the projected space. The purpose of this function is to update the projected value using a specified level so the recursion can continue. Both the space and the returned result are of the same format: (doc_id, projected value).
let rec make_subtree level projected subspace =
let num_levels = Array.length projected in
match level = num_levels with
| true -> (
let leaf = Array.map fst subspace in
Leaf leaf
)
| false -> (
let median, l_space, r_space = split_space_median subspace in
let l_space = match level < num_levels - 1 with
| true -> filter_projected_space (level+1) projected l_space
| false -> l_space
in
let r_space = match level < num_levels - 1 with
| true -> filter_projected_space (level+1) projected r_space
| false -> r_space
in
let l_subtree = make_subtree (level+1) projected l_space in
let r_subtree = make_subtree (level+1) projected r_space in
Node (median, l_subtree, r_subtree)
)
Based on these functions, the make_subtree recursively grows the binary subtree to make a whole tree. The projected is the projected points we get from the first step. It is of shape (level, document_number). The subspace is a vector of shape (1, document_number).
let grow projected =
let subspace = Array.mapi (fun doc_id x -> (doc_id, x)) projected.(0) in
let tree_root = make_subtree 0 projected subspace in
tree_root
The grow function calls make_subtree to build the binary search tree. It initialises the first subspace at level 0, and then starts recursively making the subtrees from level 0. Currently everything is done in memory for efficiency considerations.
let rec traverse node level x =
match node with
| Leaf n -> n
| Node (s, l, r) -> (
match x.(level) < s with
| true -> traverse l (level+1) x
| false -> traverse r (level+1) x
)
Now that the tree is built, we can perform search on it. The recursive traverse function traverses the whole tree to locate the cluster for a projected vector x starting from a given level.
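To see how these pieces fit together, here is a hypothetical usage sketch: project the corpus, grow one tree, then locate the cluster that document 0 falls into. It assumes tfidf is a model built with Nlp.Tfidf as in the NLP chapter, and uses the array-based random function shown earlier.

let _ =
  let _, level, projected = random 42 20. tfidf in
  let tree = grow projected in
  (* the query is the vector of projected values of document 0 at every level *)
  let x = Array.init level (fun j -> projected.(j).(0)) in
  let cluster = traverse tree 0 x in
  Printf.printf "document 0 falls into a cluster of %d documents\n"
    (Array.length cluster)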
let rec iter_leaves f node =
match node with
| Leaf n -> f n
| Node (s, l, r) -> iter_leaves f l; iter_leaves f r
let search_leaves node id =
let leaf = ref [||] in
(
try iter_leaves (fun l ->
if Array.mem id l = true then (
leaf := l;
failwith "found";
)
) node
with exn -> ()
);
Array.copy !leaf
Finally, search_leaves returns the leaf/cluster which has the given id inside it. It mainly depends on the iter_leaves function, which iterates over all the leaves in a tree and applies a function to each, to perform this search.
All the code above is executed on one tree. When we collect the k-NN candidates from all the trees, instead of calculating the vector similarity, we utilise the frequency/count of the vectors in the union of all the candidate sets from all the RP-trees.
let count_votes nn =
let h = Hashtbl.create 128 in
Owl_utils.aarr_iter (fun x ->
match Hashtbl.mem h x with
| true -> (
let c = Hashtbl.find h x in
Hashtbl.replace h x (c + 1)
)
| false -> Hashtbl.add h x 1
) nn;
let r = Array.make (Hashtbl.length h) (0,0) in
let l = ref 0 in
Hashtbl.iter (fun doc_id votes ->
  r.(!l) <- (doc_id, votes);
  l := !l + 1;
) h;
Array.sort (fun x y -> Pervasives.compare (snd y) (snd x)) r;
r
The count_votes function takes an array of arrays nn as input. Each inner array contains the indices of candidate nodes from one RP-tree. These nodes are collected into a hash table, using the index as key and the count as value. The (index, count) pairs are then copied into an array and sorted by count in decreasing order.
## Make It Live
We provide a live demo of Kvasir. Here we briefly introduce the implementation of the demo with OCaml. This demo mainly relies on Lwt. The Lwt library implements cooperative threads and, together with Cohttp, is often used to build web servers in OCaml.
This demo takes in a document through a web query API and returns similar documents from the text corpus already included in our backend. First, we need to do some simple preprocessing using regular expressions. This of course needs some fine tuning in the final product, but it has to be simple and fast.
let simple_preprocess_query_string s =
let regex = Str.regexp "[=+%0-9]+" in
Str.global_replace regex " " s
The next function, extract_query_params, parses the web query and retrieves its parameters.
let extract_query_params s =
let regex = Str.regexp "num=\$$[0-9]+\$$" in
let _ = Str.search_forward regex s 0 in
let num = Str.matched_group 1 s |> int_of_string in
let regex = Str.regexp "mode=\$$[a-z]+\$$" in
let _ = Str.search_forward regex s 0 in
let mode = Str.matched_group 1 s in
let regex = Str.regexp "doc=\$$.+\$$" in
let _ = Str.search_forward regex s 0 in
let doc = Str.matched_group 1 s in
(num, mode, doc)
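For instance, assuming the frontend sends a body shaped like the hypothetical string below, the function would extract the three fields as follows:

let _ =
  let num, mode, doc = extract_query_params "num=10&mode=kvasir&doc=some+query+text" in
  (* num = 10, mode = "kvasir", doc = "some+query+text" *)
  Printf.printf "%i %s %s\n" num mode doc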
Finally, the start_service function contains the core query service that keeps running. It preprocesses the input document and proceeds with similar-document searching according to the requested search mode. We won’t cover the details of the web server implementation using Lwt here. Please refer to its documentation for more details.
let start_service lda idx =
let num_query = ref 0 in
let callback _conn req body =
body |> Cohttp_lwt_body.to_string >|= (fun body ->
let query_len = String.length body in
match query_len > 1 with
| true -> (
try (
let num, mode, doc = extract_query_params body in
Log.info "process query #%i ... %i words" !num_query query_len;
num_query := !num_query + 1;
let doc = simple_preprocess_query_string doc in
match mode with
| "linear" -> query_linear_search ~k:num lda doc
| "kvasir" -> query_kvasir_idx ~k:num idx lda doc
| _ -> failwith "kvasir:unknown search mode"
)
with exn -> "something bad happened :("
)
| false -> (
Log.warn "ignore an empty query";
""
)
)
>>= (fun body -> Server.respond_string ~status:OK ~body ())
in
Server.create ~mode:(`TCP (`Port 8000)) (Server.make ~callback ())
## Summary
In this chapter, we presented Kvasir which provides seamless integration of LSA-based content provision into web browsing. To build Kvasir as a scalable Internet service, we addressed various technical challenges in the system implementation. Specifically, we proposed a parallel RP-tree algorithm and implemented stochastic SVD on Spark to tackle the scalability challenges in index building and searching. We have introduced the basic algorithm and how it can optimised step by step, from storage to computation. These optimisations include aggregating results from multiple trees, replacing random variable with a single random seed, removing the projection computation boundary between different layers, using count to approximate vector distance, etc. Thanks to its novel design, Kvasir can easily achieve millisecond query speed for a 14 million document repository. Kvasir is an open-source project and is currently under active development. The key components of Kvasir are implemented as an Apache Spark library, and all the source code are publicly accessible on GitHub.
https://www.omnimaga.org/hp-prime/the-hpprgm-format/
|
### Author Topic: The .hpprgm format
#### Hasse
##### The .hpprgm format
« on: August 05, 2014, 04:41:36 am »
I spent some time reverse engineering the .hpprgm format (the format of the user programs of the Prime).
0x0000-0x0003:
size of the header, excludes itself
(so the next header begins at size+4)
0x0004-0x0005:
Amount of variables in table
0x0006-0x0007:
Amount of something? (Perhaps views or something?)
0x0008-0x0009:
Amount of exported functions in table
0x000A-0x000F:
unneeded?
Conn. kit generates
7F 01 00 00 00 00
but all zeros seems to work too.
0x0010-0x----:
Exported item table.
Entry format is as follows:
Type of item:
30 00 for variable,
31 00 for exported function
Name of item:
UTF-16, until 00 00 00 00
Then the next entry follows.
VARIABLE VALUES:
(There are as many blocks of this type as you have exported variables and the
blocks are in the same order as the exported variables)
0x0000-0x0003:
size of the value, excludes itself
0x0004-0x0005:
01 00 for detecting that this is a list
02 00 for single value entry
IF single value entry:
0x0006-0x0007:
type:
10 01 for base-10 integer or float
11 20 for base-16 integer
12 02 for string
IF base-10 integer or float:
0x0008-0x000B:
exponent
signed little endian 32-bit integer
0x000C-0x0013:
mantissa
little endian weird stuff. Hexadecimal to be interpreted as decimal... WTF?
00 00 00 00 00 00 00 25 01 is supposed to be 1.25 in decimal,
00 00 00 00 00 00 00 28 06 -> 6.28 and so on
The value is mantissa*10^exponent
IF base-16 integer:
0x0008-0x000B:
02 00 00 00 (why?)
0x000C-0x0013
55 63 62 00 00 00 00 00 becomes #626355
25 06 00 00 00 00 00 00 becomes #625
IF string:
0x0008-0x0009:
Length of string in characters, excludes the tailing 00 00
0x000A-:
string itself, ends in 00 00
IF list:
0x0006-0x0007:
(Tends to be 16 00, 16 01 or 16 02)
0x0008-0x0009:
32-bit LE
Amount of members in list, let's call this N
0x000A-0x000B:
(Probably reserved for something, 7F 01 or 00 00)
N 4-byte values:
Actual list of values follows, they are in reverse order compared to what
they are in the source.
An entry in the list follows this formula:
Stuff gets clever and recursive, every entry in here is handled like a
"VARIABLE VALUE" itself minus the size integer at the beginning
0x0000-0x0003:
size of the header, excludes itself
0x0004-:
Code in UTF-16 LE until 00 00
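To make the layout above concrete, here is a small parsing sketch of just the first fixed header fields (my own illustration in OCaml, not code from HP or from this thread; the field names are guesses based on the description above):

let read_u16 b off =
  Char.code (Bytes.get b off) lor (Char.code (Bytes.get b (off + 1)) lsl 8)

let read_u32 b off =
  read_u16 b off lor (read_u16 b (off + 2) lsl 16)

(* parse the fixed part of an .hpprgm header from a byte buffer, little endian *)
let parse_header b =
  let header_size = read_u32 b 0x0 in  (* size of the header, excludes itself *)
  let num_vars    = read_u16 b 0x4 in  (* amount of variables in table *)
  let num_unknown = read_u16 b 0x6 in  (* amount of something (views?) *)
  let num_funcs   = read_u16 b 0x8 in  (* amount of exported functions *)
  (header_size, num_vars, num_unknown, num_funcs)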
Sorry if this is common knowledge already; I did try to search the forums (no results for hpprgm). I did find this: http://tiplanet.org/hpwiki/HP_Prime/File_Format#User_BASIC_programs, but it seems that the information there is true only for some very simple cases. I have no wiki editing skills so I can't update that.
I hope someone else finds this useful and that someone continues the reverse engineering of the format (if that stuff truly is bytecode, it could be very useful). My own motivation has dropped due to other, more interesting projects unrelated to calculators, which is why I stopped working on this.
EDIT: updated, that stuff is not bytecode, they are the encoded objects like Tim Wessman said.
« Last Edit: August 06, 2014, 06:34:44 am by Hasse »
#### Adriweb
##### Re: The .hpprgm format
« Reply #1 on: August 05, 2014, 07:24:05 am »
Hi,
Thanks for the info - the link you gave has some info about the program format as seen by [MOHPC] member Eried - maybe it was only on simple programs, indeed, I don't know. However, it may be enough for file transfers. One would have to check with complex programs, though, with his program (see https://github.com/eried/PrimeComm/blob/master/PrimeLib/PrimeUsbFile.cs )
If you still have a tiny bit of motivation, you're welcome to register on the wiki you linked and try editing on the sandbox for example (you can try anything there, it won't matter).
If you decide to do that and edit/contribute to the file format page, let us know (just reply here telling so), otherwise we can wait for a bit (maybe people will have some suggestions) and then edit it ourselves from what you've posted here
« Last Edit: August 05, 2014, 07:35:51 am by Adriweb »
#### DJ Omnimaga
##### Re: The .hpprgm format
« Reply #2 on: August 05, 2014, 11:37:38 am »
Yeah, I remember someone there made a tool to send such programs and I think optimize them, on the MoHPC board mentioned in the post above, although I don't remember if it could edit files. Also, when opening such a file in WordPad, I noticed that we could still somewhat decipher some of the code, so the format is probably not too hard to decipher.
Welcome to the forums by the way!
« Last Edit: August 05, 2014, 11:39:39 am by DJ Omnimaga »
#### Streetwalrus
##### Re: The .hpprgm format
« Reply #3 on: August 05, 2014, 11:56:10 am »
Lol, that reminds me of HP40G programs. I'd sometimes print out the file to type it on the calc.
#### timwessman
##### Re: The .hpprgm format
« Reply #4 on: August 05, 2014, 06:40:25 pm »
Hello,
I'll copy in a post I made over on the MoHPC.
Quote
A gentle reminder to people about the hpprgm extension and others in use in the calculator - these are not plain text files!
I've seen several people now post files labeled with an hpprgm extension when in fact they are plain text. Please remember that hpprgm files are in fact BINARY files and not meant for editing with a text editor. While you can open an exe with a text editor, making changes will cause problems and most likely crashes. The same WILL happen if you are manually tweaking hpprgm files.
When the format changes as new features are added, your programs will confuse people as they try to load them and fail. The fact that you can currently open them and they seem to be mostly plain text does not mean you should go ahead and edit things.
You have been cautioned and please remember this! For the sake of people in the future getting totally perplexed as to why things aren't working, I'd ask you to keep your plain text source files as source, and only have the hpprgm extension on files that were generated by HP software (or some other specialty program).
We don't have any issues with giving out information about the files, but you can be pretty certain the format will be changing as things get added. The program file format is far from ready to lock everything down permanently yet. You pretty much have everything of interest right now correct. There are no ARM functions or calls saved in the file. I'd recommend just keeping out of the object table at the moment as those will be the most likely to change. The source area is probably pretty safe.
The basic structure/process for saving is this:
Code: [Select]
Save Total Size        // save descriptor info
Save All Vars/Exports  // saves the names for use in accessing things
Save the source        // as described
I haven't checked for certain, but your "optional" headers might just be the HP Objs encoded. Reals, lists, etc. That is probably what you are seeing I would guess.
« Last Edit: August 05, 2014, 06:47:41 pm by timwessman »
TW
Although I work for the HP calculator group, the comments and opinions I post here are my own.
#### timwessman
##### Re: The .hpprgm format
« Reply #5 on: August 05, 2014, 06:46:36 pm »
*dup* How do I delete my dups? Possible?
« Last Edit: August 05, 2014, 06:48:58 pm by timwessman »
#### Streetwalrus
##### Re: The .hpprgm format
« Reply #6 on: August 05, 2014, 06:53:06 pm »
You can no longer do it yourself since the upgrade to SMF 2.0. Just wait for a mod to do it.
#### DJ Omnimaga
##### Re: The .hpprgm format
« Reply #7 on: August 05, 2014, 07:41:29 pm »
Quote
We don't have any issues with giving out information about the files, but you can be pretty certain the format will be changing as things get added. The program file format is far from ready to lock everything down permanently yet. You pretty much have everything of interest right now correct. There are no ARM functions or calls saved in the file. I'd recommend just keeping out of the object table at the moment as those will be the most likely to change. The source area is probably pretty safe.
Does it mean that older .hpprgm files will eventually no longer send in newer OS versions?
#### cyrille
##### Re: The .hpprgm format
« Reply #8 on: August 06, 2014, 01:28:02 am »
Hello,
Note that a lot of the data in the header is optional. You can send a program without its variables or any of the exported data info, put the source in nevertheless and, once on the calculator, open it, add a space and exit to compile... That makes it much easier if what you want to do is edit programs on the PC...
Note, pretty much all the saved data in the calc follows a format of the type:
[header] [blob size, blob of data]*
In a program, there is no fixed size header, and the source is always the last entry in the file. This should make it easy to get to it without having to understand the format...
cyrille
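A small sketch of what cyrille describes, combined with the 4-byte size field from the layout earlier in the thread: skip the first header blob, walk the size-prefixed blobs, and treat the last one as the UTF-16 source. This is untested guesswork against the description above, not an official recipe.

```python
import struct

def extract_source(data: bytes) -> str:
    """Return the last [size][blob] entry of an .hpprgm file decoded as
    UTF-16LE source code (per the description above; unverified)."""
    pos = struct.unpack_from("<I", data, 0)[0] + 4   # skip first header (size excludes itself)
    last = b""
    while pos + 4 <= len(data):
        size = struct.unpack_from("<I", data, pos)[0]
        last = data[pos + 4 : pos + 4 + size]
        pos += 4 + size
    text = last.decode("utf-16-le", errors="replace")
    return text.split("\x00")[0]                     # the source ends in 00 00
```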
#### Hasse
##### Re: The .hpprgm format
« Reply #9 on: August 06, 2014, 02:19:08 am »
I originally wanted to make a C to HP PPL translator (obviously without pointer support), since I like C more. I wanted the "compiler" to output valid .hpprgm files which is why I tried to reverse engineer the format. However, I know it takes a long time to get something usable and the difference between the languages isn't great enough to make the project worthwhile.
The "optional headers" were only in some bigger .hpprgms like the hangman game over at MoHPC. I probably should have tried to make large programs myself and modify them to see how the header changes, but I was too lazy . None of the small programs I wrote myself contained those headers.
Interestingly the hangman game has all zeroes where connection kit generates 7F 01 00 00 00 00. Could this be caused by the program being written on an older connection kit?
#### DJ Omnimaga
##### Re: The .hpprgm format
« Reply #10 on: August 06, 2014, 02:27:09 am »
That said, it's also possible to send a modified firmware file to the calculator, so someone could just modify the official file with a command to launch C/ASM programs, then release an IPS patch version of the mod, and you would be set.
#### Adriweb
##### Re: The .hpprgm format
« Reply #11 on: August 06, 2014, 04:35:00 am »
Anyway, the format is known well enough for several third-party tools (Eried's program, Critor's mViewer GX Generator, for example...) to generate valid files, so there's that. If the unsupported parts are actually not necessary and not long-term safe, well, we may not need to take time to understand them; it wouldn't be useful.
#### Hasse
##### Re: The .hpprgm format
« Reply #12 on: August 06, 2014, 06:40:17 am »
Sorry, I was being a little melodramatic yesterday. Reverse engineering is like a puzzle, and not completely understanding the file is like giving up for me. I have problems with giving up. I continued the work and updated the post accordingly. Turns out there is no bytecode and Tim Wessman was correct, they are encoded values.
Regarding the usefulness of all of this, I agree that this is probably worthless but it is a fun puzzle nevertheless.
#### DJ Omnimaga
##### Re: The .hpprgm format
« Reply #13 on: August 06, 2014, 12:32:12 pm »
Indeed. There are a bunch of things that aren't necessarily useful on the TI-84+, but it didn't stop anyone from documenting the entire calculator hardware and software, so it wouldn't hurt if the same was done with HP calcs.
#### cyrille
##### Re: The .hpprgm format
« Reply #14 on: August 07, 2014, 02:19:28 am »
Hello,
well, I don't remember the specifics off the top of my head, but basically:
- First blob: header with the number of exported functions, variables (exported and not) and views.
This first blob also contains the list of strings with the names of these exported (or not) objects.
- n blobs: the content/value of the program global variables. This is where it gets complicated, as a variable can contain expressions/functions and the like, even CAS objects... not that the format is complicated, it just has a lot... from numbers, to matrices, lists, complex...
- 1 blob: source code.
cyrille
https://www.digitisation.eu/glossary/batch-sample/
# Batch sample
Impact Centre of Competence
In mass digitisation, batch sampling is a means of quality assurance whereby a numerically significant subset of a larger body of digital information is taken as being qualitatively representative of the whole. Where the subset is taken as being of acceptable standard (in terms of text legibility, OCR accuracy, etc.), the entire batch will be passed; where the subset is taken as being unacceptable, the entire batch will be rejected.
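As a loose illustration only (the glossary does not prescribe any algorithm), the pass/fail decision could look like this in Python; the sample size, the threshold and the quality metric are invented placeholders.

```python
import random

def batch_passes(batch_items, measure, sample_size=50, min_quality=0.95):
    """Decide whether an entire batch is accepted, based on a random sample.

    `measure` is any callable returning a per-item quality score in [0, 1]
    (e.g. an estimated OCR accuracy); it is a placeholder for this sketch.
    """
    sample = random.sample(batch_items, min(sample_size, len(batch_items)))
    mean_quality = sum(measure(item) for item in sample) / len(sample)
    # The whole batch stands or falls with the sample.
    return mean_quality >= min_quality
```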
https://arithmos.wordpress.com/2011/11/28/consequences-of-no-inaccessible-accumulation-point/
## Consequences of no inaccessible accumulation point
This post is just an organizational one without any proofs. I wanted to list a few consequences of the following conjecture of Shelah:
• If ${\mathfrak{a}}$ is a progressive set of regular cardinals, then ${{\rm pcf}(\mathfrak{a})}$ does not have a weakly inaccessible point of accumulation.
Equivalently, if ${\mathfrak{a}}$ is a progressive set of regular cardinals, then ${{\rm pcf}(\mathfrak{a})\cap\kappa}$ is bounded in ${\kappa}$ for every weakly inaccessible ${\kappa}$.
In his paper [Sh:666], Shelah argues that this conjecture is “a significant dividing line between chaos and order”.
Why?
One answer is outlined in [Sh:666]: if the conjecture is true, then for any progressive set of regular cardinals ${\mathfrak{a}}$ we have
$\displaystyle {\rm cf}\left(\prod{\rm pcf}(\mathfrak{a}), <\right)={\rm cf}\left(\prod\mathfrak{a}, <\right), \ \ \ \ \ (1)$
while if the conjecture fails one can force a counterexample to the above statement.
The above is really just an outcropping of deeper results from the third section of Chapter VIII of The Book, where Shelah proves that a subset ${\mathfrak{b}}$ of ${{\rm pcf}(\mathfrak{a})}$ which does not have a weakly inaccessible accumulation point still has a nice pcf structure, even though it may be the case that ${\mathfrak{b}}$ is not progressive. In particular, we have ${{\rm pcf}(\mathfrak{b})\subseteq{\rm pcf}(\mathfrak{a})}$ for such a ${\mathfrak{b}}$. As a corollary, we see that if Shelah’s conjecture is true, then
$\displaystyle {\rm pcf}({\rm pcf}(\mathfrak{a}))={\rm pcf}(\mathfrak{a}) \ \ \ \ \ (2)$
for any progressive set of regular cardinals ${\mathfrak{a}}$.
But Shelah’s Conjecture also has consequences for cardinal arithmetic: this is the content of the fourth section of [Sh:430]. I invite the adventurous reader to take a look at that particular piece of Shelah’s oeuvre, because at this point I have no idea what the theorems say. Well, that’s not quite true, as I have a vague idea of what they say, but they are couched in the language of nice filters originating in Chapter V of The Book, and that’s a language I’ve not yet tried to learn. In [Sh:666], he says that if the conjecture holds and ${\aleph_\delta}$ is the ${\omega_1}$th fixed point (strong limit), then ${{\rm pp}(\aleph_\delta)}$ is less than the ${\omega_4}$th fixed point. (But I don’t actually see this in [Sh:430], so it’s possible that something was retracted.)
https://pdfhall.com/about-the-range-property-for-h-1-introduction-lama-univ-savoie_5a2ce3e21723dd24d8d68bd8.html
## About the range property for H - LAMA - Univ. Savoie
About the range property for H
René DAVID and Karim NOUR
LAMA - Equipe LIMD, Université de Savoie, 73376 Le Bourget du Lac
e-mail: {david,nour}@univ-savoie.fr
October 2013
Abstract: Recently, A. Polonsky (see [10]) has shown that the range property fails for H. We give here some conditions on a closed term that imply that its range has an infinite cardinality.
1 Introduction
1.1 Our motivations
Let T be a λ-theory. The range property for T states that if λx.F is a closed λ-term, then its range (considering λx.F as a map from M to M, where M is the algebra of closed λ-terms modulo the equality defined by T) has cardinality either 1 or Card(M). It has been proved by Barendregt in [2] that the range property holds for all recursively enumerable theories. For the theory H equating all unsolvable terms, the validity of the range property had been an open problem for a long time. Very recently A. Polonsky has shown (see [10]) that it fails. In an old attempt to prove the range property for H, Barendregt (see [3]) suggested a possible way to get the result. The idea was, roughly, as follows. First observe that, if the range of a term is not a singleton, it will reduce to a term of the form λx.F[x]. Assuming that, for some A, F[x := A] ≠_H F[x := Ω], he proposed another term A' (the term Jν ◦ A of Conjecture 3.2 in [3]) having a free variable ν that could never be erased or used in a reduction of F[x := A']. He claimed that, by the properties of the variable ν, the terms F[x := A_n] should be different, where A_n = A'[ν := c_n] and c_n is, for example, the Church numeral for n. It was rather quickly understood by various researchers that the term proposed in [3] actually did not have the desired property and, of course, Polonsky's result shows this method could not work. Moreover, even if the term A' had the desired property it is, actually, not true that the terms F[x := A_n] would be different: see section 1.3 below.
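In the notation the paper introduces later (Definition 2.1, where Λ° is the set of closed λ-terms and [t]_H is the class of t modulo H), the property under discussion can be written compactly as follows; this restatement is only a reading aid.

```latex
% Range of a closed term \lambda x.F (as in Definition 2.1 below):
\Im(\lambda x.F) \;=\; \{\, [\,F[x := u]\,]_{\mathcal H} \ \mid\ u \in \Lambda^{\circ} \,\}
% Range property: \Im(\lambda x.F) is either a singleton or has cardinality Card(M).
```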
1.2 Our results
Even though the failure of the range property for H is now known, we believe that having conditions which imply that the range of a term is infinite is, "by itself", interesting. We also think that the idea proposed by Barendregt remains interesting and, in this paper, we consider the following problem. Say that F has the Barendregt's persistence property if for each A such that F[x := A] ≠_H F[x := Ω] we can find a term A' that has a free variable ν that could never be erased or applied in a reduction of F[x := A'] (see Definition 3.7). We give here some conditions on terms that imply the Barendregt's persistence property. Our main result is Theorem 3.2. It can be stated as follows. Let F be a term having a unique free variable x and A be a closed term such that F[x := A] ≠_H F[x := Ω]. We introduce a sequence (F_k)_{k∈N} of reducts of F that can simulate (see Lemma 3.4) all the reductions of F. By considering, for each k, the different occurrences of x in F_k we introduce a tree T and a special branch in it (see Theorem 3.1). Denoting by x(k) the corresponding occurrence of x in F_k, Theorem 3.2 states that:
(1) If, for k ∈ N, the number of arguments of x(k) in F_k is bounded, F has the Barendregt's persistence property.
(2) Otherwise, and assuming the branch in T is recursive, there are two cases.
(2.a) If some of the arguments of x(k) in F_k come in head position infinitely often during the head reduction of F[x := A], then F has the Barendregt's persistence property.
(2.b) Otherwise, it is possible that F does not have the Barendregt's persistence property.
In case (1) we give two arguments. The first one is quite easy and uses this very particular situation. It gives a term A' where ν cannot be erased but we have not shown that it is never applied (it is not applied only in the branch defined in section 3). The second one (which is more complicated) is a complete proof that F has the Barendregt's persistence property. It can also be seen as an introduction to the more elaborate case 2.(a). For case 2.(b) we give examples of terms F for which there is no term having this property.
1.3 A final remark
Note that, actually, even if a term A' having the Barendregt's persistence property can be found, this would not give an infinite range for λx.F. This is due to the fact that the property of A' does not imply that F[x := A_n] ≠_H F[x := A_m] for n ≠ m, where A_k = A'[ν := c_k] and c_k is the Church integer for k. This is an old result of Plotkin (see [11]). Thus, having an infinite range will need another assumption. We will also consider this other assumption and thus give simple criteria that imply that the range of λx.F is infinite. The paper is organized as follows. Section 2 gives the necessary definitions. Section 3 considers the possible situations for F and states our main result. Sections 4 and 5 give the proof in the cases where we actually can find an A'. Section 6 gives some complements.
2 Preliminaries
Notation 2.1
1. We denote by Λ◦ the set of closed λ-terms.
2. We denote by ck the Church integer for k and by Suc a closed term for the successor function. As usual we denote by I (resp. K, Ω) the term λx.x (resp. λxλy.x, (δ δ) where δ = λx.(x x)). 3. #(t) is the code of t i.e. an integer coding the way t is built. 4. We denote by . the β-reduction, by .Ω the Ω-reduction (i.e. t .Ω Ω if t is unsolvable), by .h the head β-reduction and by .βΩ the union of . and .Ω . 5. If R is a notion of reduction, we denote by R∗ its reflexive and transitive closure. 6. Let x be a free variable of a term t. We denote by x ∈ βΩ(t) the fact that x does occur in any t0 such that t .∗βΩ t0 . 7. As usual (u t1 t2 ... tn ) denotes (...((u t1 ) t2 ) ... tn ). (un v) will denote (u (u ... (u v)...)) and (v u∼n ) will denote (v u ... u) with n occurrences of u. 2
Notation 2.2 1. Unknown sequences (possibly empty) of abstractions or terms − − will be denoted with an arrow. For example λ→ z or → w . However, to improve readability, capital letters will also be used to denote sequences. The notable exceptions are F, A, J taken from Barendregt’s paper or standard notations for terms as I, K, Ω. When the meaning is not clear from the context, we will − explicitly say something as “ the sequence λ→ z of abstractions”. 2. For example, to mean that a term t can be written as some abstractions followed by the application of the variable x to some terms, we will say that − − t = λ→ z .(x → w ) or t = λZ.(x W ). 3. If R, S are sequences of terms of the same length, R .∗ S means that each term of the sequence R reduces to the corresponding term of the sequence S. Lemma 2.1
1. If u .∗βΩ v, then u .∗ w .∗Ω v for some w.
2. .∗βΩ satisfies the Church-Rosser property : if t.∗βΩ t1 and t.∗βΩ t2 , then t1 .∗βΩ t3 and t2 .∗βΩ t3 for some t3 . Proof
See, for example, [1].
Theorem 2.1 If t .∗ t1 and t .∗ t2 , then t1 .∗ t3 and t2 .∗ t3 for some t3 . Moreover #(t3 ) can be computed from #(t1 ), #(t2 ) and a code for the reductions t.∗ t1 , t.∗ t2 . Proof
See [1].
Definition 2.1 1. We denote by ' the equality modulo .∗βΩ i.e. u ' v iff there is w such that u .∗βΩ w and v .∗βΩ w and we denote by [t]H the class of t modulo '. 2. For λx.F ∈ Λ◦ , the range of λx.F in H is the set =(λx.F ) = {[F [x := u]]H / u ∈ Λ◦ }. 3. A closed term λx.F has the range property for H if the set =(λx.F ) is either infinite or has a unique element. Theorem 2.2 There is a term λx.F ∈ Λ◦ that has not the range property for H. Proof
See [10].
Definition 2.2 Let U, V be finite sequences of terms. 1. We denote by U :: V the list obtained by putting U in front of V . 2. U v V means that some initial subsequence of V is obtained from U by substitutions and reductions. Lemma 2.2 1. U v V iff there is a substitution σ such that σ(U ) reduces to an initial segment of V . 2. The relation v is transitive. Proof
Easy.
Notation 2.3 We will have to use the following notion. A sub-term u of a term v comes in head position during the head reduction of v. This means the following: v can be written as C[u] where C is some context with exactly one hole. During − − the head reduction of C the hole comes in head position i.e C reduces to λ→ x .([] → w) → − for some w . The only problem in making this definition precise is that, during the − − reduction of C to λ→ x .([] → w ) the potentially free variables of u may be substituted and we have to deal with that. The notations and tools developed in, for example, [6], [7] allows to do that precisely. Since this is intuitively quite clear and we do not need any technical result on this definition, we will not go further.
3 The different cases
Let t = λx.F [x] be a closed term. First observe that 1. If x 6∈ βΩ(F ), then the range of t is a singleton. 2. If x ∈ βΩ(F ) and F is normalizable, then the range of t is trivially infinite since then the set {[F [x := λx1 ...λxn ck ]]H / k ∈ N} is infinite where n is the size of the normal form of F . 3. More generally, if t has a finite Bohm tree then it satisfies the range property. ¿From now on, we thus fix terms F and A. We assume that: • F is not normalizable and has a unique free variable denoted as x such that x ∈ βΩ(F ). In the rest of the paper we will write t[A] instead of t[x := A]. • F [A] 6' F [Ω] Notation 3.1 1. The different occurrences of a free variable x in a term will be denoted as x[i] for various indexes i. 2. Let t be a term and x[i] be an occurrence of the (free) variable x in t. We denote by Arg(x[i] , t) (this is called the scope of x[i] in [1]) the maximal list of arguments of x[i] in t i.e the list V such that ([] V ) is the applicative context of x[i] in t. Since Arg(x[i] , t) may contain variables that are bounded in t and since a term is defined modulo α-equivalence, this notion is not, strictly speaking, well defined. This is not problematic and we do not try to give a more formal definition. Lemma 3.1 Assume u .∗ v and x[i] (resp. x[j] ) is an occurrence of x in u (resp. v) such that x[j] is a residue of x[i] . Then Arg(x[i] , u) v Arg(x[j] , v). Proof Since the relation v is transitive it is enough to show the result when u . v. This is easily done by considering the position of the reduced redex. Definition 3.1 Let x[i] be an occurrence of x in some term t. We say that x[i] is pure in t if there is no other occurrence x[j] of x in t such that x[i] occurs in one of the elements of the list Arg(x[j] , t). For example, let t = (x (x y)). It can be written as (x[1] (x[2] y)) where the occurrence x[1] is pure but x[2] is not pure and Arg(x[1] , t) = (x[2] y). Note that, if x[1] , ..., x[n] are all the pure occurrences of x in t, there is a context C with holes []1 , ..., []n such that t = C[[]i = (x[i] Arg(x[i] , t)) : i = 1...n] and x does not occur in C. Lemma 3.2 Let t, t0 be some terms such that t reduces to t0 . Assume that x[i0 ] is a residue in t0 of x[i] in t and x[i0 ] is pure in t0 . Then x[i] is pure in t. Proof Immediate. 4
The next technical result is akin to Barendregt’s lemma discussed by de Vrijer in Barendregt’s festschrift (see [13]). It has a curious history discussed there. First proved by van Dalen in [5], it appears as an exercise in Barendregt’s book at the end of chapter 14. Its truth may look strange. Note that the reductions coming from the term B are done in the holes of the reduct G of F . Lemma 3.3 Let B be a closed term. Assume F [B] .∗ t. Then there is a G such that • F .∗ G = D[[]i = wi : i ∈ I] where D is a context with holes []i (indexed by the set I of all the pure occurrences x[i] of x in G) and wi = (x[i] Arg(x[i] , G)). • t = D[[]i = wi0 : i ∈ I] where wi [B] .∗ wi0 . Proof By induction on hlg(F [B] .∗ t), cxty(F )i where lg(F [B] .∗ t) is the length of a standard reduction of F [B] to t and cxty(F ) is the complexity of F , i.e the number of symbols in F . - If F = λy.F 0 , then t = λy.t0 where F 0 [B] .∗ t0 . Since lg(F [B] .∗ t) = lg(F 0 [B] .∗ 0 t ) and cxty(F 0 ) < cxty(F ), we conclude by applying the induction hypothesis on the reduction F 0 [B] .∗ t0 . - If F = (y F1 ...Fn ), then t = (y t1 ...tn ) where Fi [B] .∗ ti . Since lg(Fi [B] .∗ ti ) ≤ lg(F [B] .∗ t) and cxty(Fi ) < cxty(F ), we conclude by applying the induction hypothesis on the reductions Fi [B] .∗ ti . - If F = (λy.U V F1 ...Fn ) where the head redex is not reduced during the reduction F [B].∗ t, then t = (λyu v f1 ...fn ) where U [B].∗ u, V [B].∗ v and Fi [B].∗ fi . Since lg(U [B] .∗ u) ≤ lg(F [B] .∗ t), lg(V [B] .∗ v) ≤ lg(F [B] .∗ t) , lg(Fi [B] .∗ fi ) ≤ lg(F [B] .∗ t), cxty(U ) < cxty(F ), cxty(V ) < cxty(F ) and cxty(Fi ) < cxty(F ), we conclude by applying the induction hypothesis on the reductions U [B].∗ u, V [B].∗ v, Fi [B] .∗ fi . → − - If F = (λy.U V F ) and the first step of the standard reduction reduces the → − → − head redex, then F [B]. = (U [y := V ] F )[B] .∗ t. Let F 0 = (U [y := V ] F ). Since lg(F 0 [B] .∗ t) < lg(F [B] .∗ t), we conclude by applying the induction hypothesis on the reduction F 0 [B] .∗ t. → − - If F = (x F ), then G = F and D is the term made of a single context [] and w = F. The next lemma concerns the reduction of F under a recursive cofinal strategy. The canonical one is the Gross–Knuth strategy, where one takes, at each step, the full development of the previous one. Lemma 3.4 There is a sequence (Fk )k∈N such that 1. F0 = F and, for each k, Fk .∗ Fk+1 . 2. If F .∗ G, then G .∗ Fk for some k. 3. The function k ,→ #(Fk ) is recursive. Proof By Theorem 2.1, choose Fk+1 as a common reduct of Fk and all the reducts of F in less than k steps. Definition 3.2 Let x[i] be an occurrence of x in some Fk . We say that x[i] is good in Fk if it satisfies the following properties: • x[i] is pure in Fk . • u[A] is solvable for every sub-term u of Fk such that x[i] occurs in u. Note that this implies that (x[i] Arg(x[i] , Fk ))[A] is solvable. 5
Observe that every pure occurrence of x in Fk+1 is a residue of a pure occurrence of x in Fk . This allows the following definition. Definition 3.3 1. Let T be the following tree. The level k in T is the set of pure occurrences of x in Fk . An occurrence x[i] of x in Fk+1 is the son of an occurrence x[j] of x in Fk if x[i] is a residue of x[j] . 2. A branch in T is good if, for each k, the occurrence x[k] of x in Fk chosen by the branch is good in Fk . Theorem 3.1 There is an infinite branch in T that is good. Proof By Konig’s Lemma it is enough to show: (1) for each k, there is an occurrence of x in Fk that is good and (2) if the son of an occurrence x[i] of x is good then so is x[i] . (1) Assume first that there is no good occurrences of x in Fk . This means that, for all pure occurrences x[i] of x in Fk , either (x[i] Arg(x[i] , Fk ))[A] is unsolvable or this occurrence appears inside a sub-term u of Fk such that u[A] is unsolvable. But, if (x[i] Arg(x[i] , Fk ))[A] is unsolvable then so is (x[i] Arg(x[i] , Fk ))[Ω] and, if u[A] is unsolvable, then so is u[Ω] (proof : consider the head reduction of u[A] ; either A comes in head position or not ; in both cases the result is clear). This implies that F [A] ' F [Ω]. (2) follows immediately from the fact that a residue of an unsolvable term also is unsolvable and that, if (x U )[A] is unsolvable and U v V then so is (x V )[A]. Example Let G be a λ-term such that G .∗ λuλv.(v (G (x u) v)) and F = (G I). If we take λv.(v k (G (x (x ...(x I))) v)) for Fk , what is the good occurrence of x in Fk which appears in a good branch in T ? It is none of those in (x (x ...(x I))...), it is the one in G ! From now on, we fix an infinite branch in T that is good Notation 3.2 We denote by x(k) the occurrence of x in Fk chosen by the branch. Let Uk = Arg(x(k) , Fk ). Lemma 3.5 There is a sequence (σk )k∈N of substitutions and there are sequences (Sk )k∈N , (Rk )k∈N of finite sequences of terms such that, for each k, Rk is obtained from σk (Uk ) by some reductions and Uk+1 = Rk :: Sk . Proof This follows immediately from Lemma 3.1. Definition 3.4 We define the sequence Vk by : V0 = U0 and Vk+1 = σk (Vk ) :: Sk . 1. For each k, Vk .∗ Uk .
Lemma 3.6
2. For each k 0 > k, there is a substitution σk0 k such that σk0 k (Sk ) is a subsequence of Vk0 . Proof This follows immediately from Lemma 3.5. If k 0 = k + 1, σk0 k = id. Otherwise σk0 k = σk0 −1 ◦ σk0 −2 ◦ ... ◦ σk+1 Definition 3.5 We define, by induction on k, the sequence ρk of reductions and the terms tk as follows. 1. ρ0 is the head reduction of (A V0 [A]) to its head normal form. − →) is the result of ρ . 2. t = λ→ z . (y − w k
k
k
k
k
3. ρk+1 is the head reduction of (σk (tk ) Sk [A]) to its head normal form. Lemma 3.7 The term tk is the head normal form of (A Vk [A]). Proof Easy. 6
Notation and comments 1. Denote by ρ the infinite sequence of reductions ρ0 , ρ1 , ..., ρk , .... Note that it is not the reduction of one unique term. ρ0 computes the head normal form t0 of (A V0 [A]). The role of σ0 is to substitute in the result the substitution that changes U0 into the first part of U1 . Note that, by Lemma 3.5, this first part may also have been reduced but here we forget this reduction. Then we use ρ1 to get the head normal form t1 of (σ0 (t0 ) S1 [A]) and keep going like that. 2. Note that, by Lemma 3.6 and 3.7, tk is some head normal form for (A Uk [A]) but it is not the canonical one i.e. the one obtained by reducing, at each step, the head redex. Definition 3.6 Say that Sk comes in head position during ρ if, for some k 0 > k, an element of the list σk0 k (Sk )[A] comes in head position during the head reduction of (A Vk0 [A]). Definition 3.7
1. Let t be a term with a free variable ν.
(a) Say that ν is never applied in a reduct of t if no reduct t0 of t contains a sub-term of the form (ν u). (b) Say that ν is persisting in t if ν ∈ βΩ(t) and ν is never applied in any reduct of t. 2. We say that the term F has the Barendregt’s persistence property if we can find a term A0 that has a free variable ν that is persisting in F [A0 ]. Comment and example 1. The condition “ν is never applied” in the previous definition implies that, letting An = A0 [ν = cn ], a reduct of F [An ] is, essentially, a reduct of F [A0 ]. 2. Here is an example. Let G be a λ-term such that G .∗ λuλv.(v (G (u I) v)) and F = (G x). We can take λz.(z k (G (x I ∼k ) z)) for Fk . It follows easily that F [I] 6' F [Ω]. We have Uk = I ∼k , Sk = I, tk = I and σk = id. Let J be a λ-term such that J .∗ λuλvλy.(v (J u y)) and I 0 = (J ν I). If F [I 0 ] .∗ t, then, by Lemma 3.3, t .∗ λz.(z n (G I 00 z)) for some n where I 00 is a reduct of (I 0 I ∼n ). It is easily checked that ν is persisting in F [I 0 ]. Since no cn occurs as a sub-term of a reduct of F [I 0 ], it is not difficult to show that, if n 6= m, then F [In ] 6' F [Im ] where In = I 0 [ν := cn ] and thus =(λx.F ) is infinite. Theorem 3.2 1. Assume first that the length of the Uk are bounded. Then, F has the Barendregt’s persistence property. 2. Assume next that the length of the Uk are not bounded and the branch we have chosen in T is recursive. (a) If the set of those k such that Sk comes in head position during ρ is infinite, then F has the Barendregt’s persistence property. (b) Otherwise it is possible that F does not have the Barendregt’s persistence property. Proof
7
1. It follows immediately from Lemma 3.1 that there are l, k0 > 0 such that for all k ≥ k0 , lg(Uk ) = l. The fact that F has the Barendregt’s persistence property is proved in section 4. 2. (a) The fact that F has the Barendregt’s persistence property is proved in section 5. (b) There are actually two cases and the reasons why we cannot find a term A0 are quite different. i. For all k there is k 0 > k such that yk0 ∈ dom(σk0 ). The fact that the head variable of tk may change infinitely often does not allow to use the technic of sections 4 or 5. A. Polonsky has given a term F that corresponds to this situation and such that a variable ν can never be persisting in a term A0 of the form λx1 ...xn .(xi w1 ... wm ). See example 1 below. ii. For some k1 , yk 6∈ dom(σk ) for all k ≥ k1 . Since there are infinitely many k ≥ k1 such that Sk is non empty this implies that, after some steps, tk does not begin by λ. Thus, there is k2 and y, such →). Using the technic of sections 4 that, for all k ≥ k2 , tk = (y − w k or 5 allows to put a term J in front of some (fixed) element of the → but this is not enough to keep ν. We adapt the example sequence − w k of A. Polonsky to give a term F that corresponds to this situation and such that a variable ν can never be persisting in a term A0 of the form λx1 ...λxn .(A w1 ... wn ) where wj ' λy1 ...λyrj .(xj w1j ...wrjj ). See example 2 below. Comments 1. For case 1. we will give two proofs. The first one is quite simple. The second one is much more elaborate and even though it, actually, does not work for all the possible situations, we give it because it is an introduction to the more complex section 5. In the first proof, we simply use the fact that the length of the Uk are bounded to find a term A0 that has nothing to do with A. In the second proof and in section 5, the term A0 that we give has the Bargendregt’s property and behaves like A (using the idea of [3]) in the sense that it looks like an infinite η-expansion of A. 2. When we say, in case 2.(b) of the theorem, that it is possible that F does not have the Barendregt’s persistence property we are a bit cheating. We only show (except in example 1) that there is no A0 with a persisting ν satisfying an extra condition. This condition is that A0 looks like A i.e. the first levels of the B¨ ohm tree of A0 must be, up-to some η-equivalence, the same as the ones of A. 3. It is known that there are recursive (by this we mean that we can compute their levels) and infinite trees such that each level is finite and that have no recursive infinite branch. We have not tried to transform such a tree in a lambda term such that the corresponding T has no branch that is good and recursive but we guess this is possible. Example 1. This example is due to A. Polonsky. Let G, H be λ-terms such that G .∗ λyλz.(z G λu.(y (K u)) z) and H .∗ λuλvλw.(w (H u (v u) w)). Let F = (G λy.(H y x)). We have F .∗ λz.(z n (G λyλw.(wm (H (K n y) (x (K n y)∼m ) w)) z)). Thus, if B ' λy1 ...λyr .(yi w1 ...wl ) where ν is possibly free in the wj , F [B] ' 8
λz.(z l (G λyλw.(wr (H (K l y) y w)) z)) and ν is not persisting in F [B] (since it can be erased). This means that for all closed term A such that F [A] 6' F [Ω], and for all solvable term A0 , ν is not persisting in F [A0 ]. Example 2. This example is an adaptation of the previous one. Let G, H be λ-terms such that G .∗ λyλz.(z (G λu.(y (K u)) z)) and H .∗ λuλvλw.(w (H u (v u) w)). Let F = λy.(G λv.(H v (x y))). We have F .∗ λyλz.(z n (G λvλw.(wm (H (K n v) (x y (K n v)∼m ) w)) z)) and it is clear that F [I] 6' F [Ω]. Let I 0 ' λxλx1 ...λxr .(x w1 ...wr ) where, for 1 ≤ j ≤ r, wj ' λy1 ...λyrj (xj w1j ...wrjj ). For n ≥ max1≤j≤r (rj ), F [I 0 ] ' λyλz.(z n (G λvλw.(wm (H (K n v) (y w10 ...wr0 ) w)) z)) for some wj0 where ν does not occur and thus ν is not persisting in F [I 0 ]. Note, however, that ν is persisting in F [I 00 ] where I 00 = λz.(z ν).
4
Case 1 of Theorem 3.2
We assume in this section that we are in case 1. of Theorem 3.2.
4.1
A simple argument
Let A0 = λx1 ...λxl λz.(z ν) where l is a bound for the length of the Uk . We show that ν ∈ βΩ(F [A0 ]). It is easy to show that ν is never applied in the terms (A0 Uk [A0 ]) but the fact that ν is never applied in a reduct of F [A0 ] is not so clear. Since, because of the next section, we do not need this point we have not tried to check. If F [A0 ] .∗ H .∗Ω G. By Lemma 3.3, F .∗ F 0 = D[[]i = wi : i ∈ I], wi = (x(i) Arg(x(i) , F 0 )) and H = D[[]i = wi0 : i ∈ I] where wi [A0 ] .∗ wi0 . Let k be such that F 0 . Fk . Let i0 ∈ I be such that the occurrence of x(k) in Fk chosen by the good branch is a residue of the occurrence of x[i0 ] in F 0 . By Lemma 3.1, the length of Arg(x[i0 ] , F 0 ) is bounded by l and thus wi00 = λxq ...xl λz.(z ν) for some q ≤ l. It remains to show that the sub-term λz.(z ν) of wi00 cannot be erased in the Ω-reduction from H to G. Assume it is not the case. Then, there is a sub-term D0 of D containing the hole []i0 such that D0 [[]i = wi0 : i ∈ I] is unsolvable. D0 [[]i = wi ] is solvable (since, otherwise, the occurrence x[k] will be in an unsolvable sub-term of Fk and this contradicts the fact that x[k] is good). Since the reduction D0 [[]i = wi ] .∗ D0 [[]i = wi0 ] only is inside the wi0 , since the first term is solvable and the second one is not, then, by the Church-Rosser property, the head variable of the head normal form of D0 [[]i = wi ] is an occurrence of x. By Lemma 3.2, x[i0 ] is pure in F 0 . x[i0 ] is not the head variable of the head normal form of D0 [[]i = wi ] (since, otherwise, by the Church-Rosser property, D0 [[]i = wi0 ] would be solvable). Contradiction.
4.2
The proof
There are actually different situations. − 1. Either, for some k1 ≥ k0 , yk ∈ → zk for any k ≥ k1 . Since for k ≥ k1 , Sk is − − empty and the head variable of tk is not substituted, there is → z and z ∈ → z, → − − → such that, for k ≥ k1 , tk = λ z . (z wk ). − 2. Or yk 6∈ → zk for all k ≥ k0 and (a) Either the situation is unstable (i.e. the set of those k such that yk ∈ dom(σk ) is infinite).
9
− − (b) Or, there is → z , y 6∈ → z and k1 ≥ k0 , such that, for k ≥ k1 , tk = → − − → λ z . (y wk ). We assume that we are in situation 1. or 2.(b) which may be synthesized by: − there exists k0 and some fixed variable y (that may be in → z or not), such that, for → − − → − → all k ≥ k0 , tk = λ z . (y wk ) for some wk . − We fix p ≥ lg(→ z ) + l + 2 where l is a bound for the length of the Uk . Let 0 A = (J ν A) where J is a new constant with the following reduction rule (J ν u) . λy1 ...λyp .(u (J ν y1 )...(J ν yp )) We will prove that ν is persisting in F [A0 ]. The term A0 is not a pure λ-term since the constant J occurs in it. We could, of course, replace this constant by a λ-term J 0 that has the same behavior, e.g. (Y λkλy1 ...λyp .(u (k ν y1 )...(k ν yp ))) where Y is the Turing fixed point operator. But such a term introduces some problems in Lemma 4.4 because J 0 and (J 0 ν) contain redexes and can be reduced. With such a term J 0 , though intuitively true, this lemma (as it is stated) does not remain correct. Making this lemma correct (with J 0 instead of a constant) will require the treatment of redexes inside J 0 and (J 0 ν). The reader should be convinced that this can be done but, since it would need tedious definitions, we will not do it. Note that, in situation 2.(a), we cannot do the kind of proof given below. Here is an example. Let G be a λ-term such that G .∗ λuλv.(v (G λy.(u (K y)) v)) and F = (G x). We can take λz.(z k (G λy.(x (K k y)) z) for Fk . We thus have Uk = (K k y), tk = λz1 ...λzk . y and σk = [y := (K y)]. It is clear that F [I] 6' F [Ω]. But F [I 0 ].∗ λz.(z k (G I z)) for any I 0 ' λyλy1 ...λyk . (y w1 ...wk ) where ν is possibly free in the wi . Thus ν is not persisting in F [I 0 ]. 4.2.1
Some preliminary definitions and results
When, in a term t, we replace some sub-term u by (J ν u) to get t0 , the reducts u (resp. u0 ), of t (resp. of t0 ) are very similar. The goal of this section is to make this a bit precise. Definition 4.1 1. We define, for terms u, the sets Eu of terms by the following − − grammar: Eu = u |(J ν eu ) |λ→ y .(eu → ey ) 2. Let t, t0 be some terms. We denote by t t0 if there is a context C with one 0 0 hole such that t = C[u] and t = C[u ] where u0 ∈ Eu . Notations and comments 1. Note that, in the previous definition as well as in the sequel, eu always denotes − − a term in Eu and, for a sequence → y of variables , → ey always denote a sequence → − → − − of terms v (of the same length as y ) such that for each variable y in → y , the → − corresponding term in v is a member of Ey . Also note that, in the previous − − definition, for a term λ→ y .(eu → ey ) to be in Eu , we assume that the variables → − in y do not occur in eu . 2. t t0 means that t0 is obtained from t by replacing some sub-term u of t by (J ν u) or by reducing redexes introduced by J i.e. the one coming from its reduction rule (J ν u) . λy1 ...λyp .(u (J ν y1 )...(J ν yp )) and those whose λ’s are among λy1 ...λyp . Lemma 4.1 2. If a
1. If t ∈ Ez and t0 ∈ Eu then t[z := t0 ] ∈ Eu a0 and b
b0 , then a[x := b] 10
a0 [x := b0 ]
3. If u u.
u0 and if v 0 is an Ω-redex in u0 , then v
v 0 for some Ω-redex v in
− − − 4. If t ∈ Eu then t .∗ λ→ y (u → ey ) for some → ey . Proof 1, 2 and 3 are immediate. 4 is proved by induction on the number of rules used to show t ∈ Eu . Lemma 4.2 1. Assume u = C[(λx.a b)] for some context C and let u ∗ u0 . 0 Then u = C 0 [(a0 b0 )] for some context C 0 and some terms a0 , b0 such that C ∗ C 0 , a0 ∈ Eλx.a00 , a ∗ a00 and b ∗ b0 . 2. Assume u
u0 and u .∗ v. Then, v
v 0 for some v 0 such that u0 .∗ v 0 .
Proof The first point is immediate because the operations that are done to go from u to u0 are either in C (to get C 0 ) or in b (to get b0 ) or in a (to get a0 ) or in λx.a0 . It is enough to check that locals and globals operations on a commute. For the second point, we do the proof for one step of reduction u . v and we use the first point and Lemma 4.1. − → −c ) Lemma 4.3 1. If λx.(x → → − and some ey . − → −c ) and u 2. If u .∗ λx.(x → → − and some ey .
→ − − − →− → −c u, then u .∗ λxλy.(x c0 → ey ) for some →
→ −0 c
→ − − − →− → −c u0 , then u0 .∗ λxλy.(x c0 → ey ) for some →
→ −0 c
Proof 1. We do the proof on an example. It is clear that this is quite general. Assume λx1 λx2 (x c1 c2 ) ∗ u, then u ∈ Eλx1 u1 , u1 ∈ Eλx2 u2 , u2 ∈ E(v1 c02 ) , v1 ∈ E(v2 c01 ) , v2 ∈ Ex , c1 ∗ c01 and c2 ∗ c02 . The β-reduction is done starting from inside and the propagation is done by using Lemma 4.1. 2. It is a consequence of the first point and Lemma 4.2. ∗ The next lemma means that, when a redex appears in some u0 where u u0 , it can either come from the corresponding redex in u, or has been created by the transformation of an application in u that was not already a redex or comes from the replacement of some sub-term u by, essentially, (J ν u).
Lemma 4.4 and let u
1. Assume u0 = C 0 [R0 ] for some context C 0 and some redex R0 ∗ u0 . Then :
• either R0 = (λx.a0 b0 ), u = C[(λx.a b)] for some context C and some terms a, b such that C ∗ C 0 , a ∗ a0 and b ∗ b0 . • or R0 = (a0 b0 ), u = C[(a b)] for some context C and some terms a, b − − y .(a00 → ey ), a ∗ a00 and b ∗ b0 . such that C ∗ C 0 , a0 = λ→ • or R0 = (J ν a0 ), u = C[a] for some context C and some term a such that C ∗ C 0 and a ∗ a0 . 2. Assume u
u0 and u0 .∗ v 0 . Then v
v 0 for some v such that u .∗ v.
Proof For the first point, there are two cases. Either the redex R0 is the residue of a redex in u or it has been created by the operations from the grammar E. For the second point, it is enough to prove the result for one step of reduction u0 . v 0 . Use the first point.
11
Definition 4.2 Let t be a solvable term. We say that: 1. ν occurs nicely in t if the only occurrences of ν are in a sub-term of the form (J ν). 2. ν occurs correctly in t if it occurs nicely in t and the head normal form of t − → −c → − − − looks like λx.(x → ey ) for some final subsequence → y of → x of length at least 1 → − such that ν does occur in ey . Lemma 4.5 Let t be a solvable term. Assume that ν occurs nicely (resp. correctly) in t. Then ν occurs nicely (resp. correctly) in every reduct of t. Proof By the properties of J. Lemma 4.6 The variable ν is never applied in a reduct of F [A0 ]. Proof The variable ν occurs nicely in F [A0 ], then, by Lemma 4.5, it occurs nicely in every reduct of F [A0 ], thus ν is never applied in a reduct of F [A0 ]. 4.2.2
End of the proof
Proposition 4.1 For k ≥ k0 , ν occurs correctly in (A0 Vk [A]). Proof (A Vk [A]) ∗ (A (J ν Vk [A])) (note that this last term may be misunderstood : it actually means (A (J ν a1 ) ... (J ν aq )) where Vk [A] is the sequence a1 ...aq ) →− →− → − − →). Thus, by Lemma 4.3, (A (J ν V [A])).∗ − and (A Vk [A]).∗h λ→ z .(y − w λz λx.(y wk0 → ex ) k k − →0 → − − → ∗ wk and ex . for some wk −−−−→ − − − − But (A0 Vk [A]).∗ λ→ y (A (J ν Vk [A]) (J ν y)) and thus (A0 Vk [A]).∗ λ→ y .(λ→ z λ→ x. − →0 − − − − − → → → − → − − (y wk wx ) (J ν y)). Using then p−l ≥ lg( z )+2 and distinguishing y 6∈ z or y ∈ → z − → − − → → − 0 ∗ → 00 − it follows easily that (A Vk [A]) . λ y1 (y2 wk wy3 ) where y3 is a final subsequence −−→ − → − of → y1 , − e→ y3 ∈ Ey3 and lg( y3 ) ≥ 1. The fact that ν occurs nicely is clear. Proposition 4.2 Assume that there is a sequence (jk )k∈N of integers such that, for each k, ν occurs correctly in (A0 Ujk [A]). Then ν is persisting in F [A0 ]. Proof Let t1 be a reduct of F [A0 ]. By Lemma 4.6, it is enough to show that ν does occur in t1 . By Lemma 2.1, let t2 be such that F [A0 ] .∗ t2 .∗Ω t1 . By Lemma 3.3, F .∗ G = D[[]i = wi : i ∈ I] where wi = (x[i] Arg(x[i] , G)) and t2 = D[[]i = wi0 : i ∈ I] where wi0 is a β-reduct of wi [A0 ]. By Lemma 3.4, G .∗ Fjk for some k. Let x[j] be the occurence of x in G which has x(jk ) as residue in Fjk . By Lemma 3.1, there is a substitution σ and a sequence V of terms such that Ujk = W :: V and σ(Mj ) reduces to W where Mj = Arg(x[j] , G). By Lemma 4.4, let wj00 be such that (A0 Arg(x[j] , G)[A]) reduces to wj00 and wj00 ∗ wj0 . Since (A0 σ(Mj )[A]) reduces both to σ(wj00 ) and to (A0 W [A]), let s be a common reduct of (σ(wj00 ) V [A]) and (A0 W [A] V [A]). Since ν occurs correctly in (A0 W [A] V [A]), it occurs correctly (by Lemma 4.5) in s. But (σ(wj00 ) V [A]) reduces to s and ν does not occur in σ neither in V [A], then ν occurs in wj00 . Thus it occurs in wj0 and thus in t2 . Since an Ω-reduction of wj00 cannot erase ν (otherwise, by Church-Rosser, ν will not occur in a reduct of s) and wj00 ∗ wj0 , then, by Lemma 4.1(item 3), an Ωreduction of wj0 cannot erase all its ν. The same proof as the one in section 4.1 shows that ν cannot be totally erased by the Ω-reduction from t2 to t1 and thus it occurs in t1 . 12
Corollary 4.1 ν is persisting in F [A0 ]. Proof By Proposition 4.1, for k ≥ k0 , ν occurs correctly in (A0 Vk [A]). By Lemma 4.5, ν occurs correctly in (A0 Uk [A]) and we conclude by Proposition 4.2.
5 Case 2(a) of Theorem 3.2
where only the values of h(p) for p < n will be used. To do that, we introduce fake Jn (they are denoted as J˜n below). They are as Jn but the term Jn+1 (which is not yet known since h(n + 1) is not yet known) is replaced by some fresh constant γ. Note that, using the standard fixed point theorem, we could avoid these fake J˜n but then, Definition 5.1 below should be more complicated because the constant γ would be replaced by a term computed from a code of Jb and we should explain how this is computed ... Finally note that it is at this point that we use the fact that the branch in T is recursive. Back to the example With the example given before, we can take for Jb a λ-term such that Jb .∗ λuλvλy1 λy2 .(v (Jb u y1 ) (Jb u y2 )) and for K 0 the term (Jb ν K). Then K 0 .∗ K 00 = λy1 λy2 λz1 λz2 .(y1 (Jb u z1 ) (Jb u z2 )). Since, if F [K 0 ] .∗ t, then t .∗ λz.(z k (G K 00 z)). It follows that ν is persisting in F [K 0 ]. Note that here the function h is constant. Given any recursive function h0 it will not be difficult to build terms F, A such that the corresponding function h is precisely h0 . Definition 5.1 For each i, let li be the number of λ’s at the head of ti . The definition of Jb needs some new objects. Let γ be a new constant. We define, by induction, the integers jn , hn , the terms J˜n , ajn and the sequence of terms Bjn . In this definition a term (or a sequence of terms) marked with 0 is a term in relation with the corresponding unmarked term. • (Step 0 ) Let j0 be the least integer j such that some Sk comes in head position during ρj (the head reduction of (A Vj [A])), k0 = lg(Vj0 ), h0 = max(l0 , k0 ). Let Vj0 = d1 ...dk0 and J˜0 = λnλxλx1 ...λxh0 (x (γ n x1 ) ... (γ n xh0 )). Then ((J˜0 ν A) Vj [A]).∗ λy1 ...yr (A (γ ν d1 [A]) ... (γ ν dk [A]) (γ ν y1 ) ... (γ ν yr )). 0
0
0
0
Since, for some i, di [A] comes in head position during the head reduction of (A Vj0 [A]), then, for some i0 , (γ ν di0 [A]) comes in head position during the head reduction of ((J˜0 ν A) Vj0 [A]) and thus ((J˜0 ν A) Vj0 [A]) .∗ λXj0 ((γ ν aj0 ) Bj0 ) for some aj0 , some sequence Xj0 of variables and some sequence Bj0 of terms. • (Step 1 ) Let j1 be the least integer j > j0 such that some Sk comes in head position during ρj for k large enough (lg(Xj0 ) < lg(Sj0 :: ... :: Sk−1 ) is needed). Then ((J˜0 ν A) Vj1 [A]) = ((J˜0 ν A) Vj00 [A] Sj0 0 ...Sk0 ...Sj0 1 −1 ) .∗ ((γ ν a0j0 ) Bj0 0 ) T0 . Let h1 = lj0 + lg(Bj0 0 :: T0 ) + 1 and J˜0 = J˜1 [γ := λnλxλx1 ...λxh1 (x (γ n x1 ) ... (γ n xh1 ))]. Then ((J˜1 ν A) Vj1 [A]) .∗ λXj1 ((γ ν aj1 ) Bj1 ) for some term aj1 and some sequence Bj1 of terms. • (Step n+1 ) Assume the integers jn , hn and the term J˜n are already defined. Let jn+1 be the least integer j > jn such that some Sk comes in head position during ρj for k large enough (lg(Xjn ) < lg(Sj1 :: ... :: Sk−1 ) is needed). Then ((J˜n ν A) Vjn+1 [A]) = ((J˜n ν A) Vj01 [A] Sj0 n ...Sk0 ...Sj0 n+1 −1 ) .∗ ((γ ν a0jn ) Bj0 n ) Tn . Let hn+1 = ljn +lg(Bj0 n :: Tn )+1 and J˜n+1 = J˜n [γ := λpλxλx1 ...λxhn+1 (x (γ p x1 ) ... (γ p xhn+1 ))]. Then ((J˜n+1 ν A) Vjn+1 [A]) .∗ λXjn+1 ((γ ν ajn+1 ) Bjn+1 ) for some term ajn+1 and some sequence Bjn+1 of terms.
14
Comment Note that, since the branch in T is recursive, it follows, by standard arguments, that the function h defined by h(i) = hi is computable. Definition 5.2
• Let H be a λ-term that represents the function h.
• Let T = λaλbλc.(a (b (z (suc k) n c))), D = λkλnλx.(H k T I x) and Jb = (Y λz.D) where Y is the Turing fixed point operator. • For each n ∈ N, we denote (Jb cn ) by Jn . Lemma 5.1 For each n ∈ N, (Jn ν u).∗ λy1 ...λyhn .(u (Jn+1 ν y1 )...(Jn+1 ν yhn )). Proof Easy. As in the previous section, we consider Jn as constants with the reduction rules of the previous Lemma. Let A0 = (J0 ν A). We prove that ν is persisting in F [A0 ]. Definition 5.3 1. We define, for terms u, the sets Eu of terms by the following − − grammar: Eu = u |(Jn ν eu ) |λ→ y .(eu → ey ) 2. Let t, t0 be some terms. We denote by t t0 if there is a context C with one 0 0 hole such that t = C[u] and t = C[u ] where u0 ∈ Eu . Lemma 5.2 1. Assume u that u0 .∗ v 0 .
u0 and u .∗ v. Then, v
2. Assume u ∗ u0 and u0 .∗ v 0 . Then, v Proof Same proof as Lemmas 4.2 and 4.4. − → −c ) and u Lemma 5.3 If u .∗ λx.(x → → − → −c − ∗ 0 c and → ey . Proof This follows from Lemma 5.2
v 0 for some v 0 such
v 0 for some v such that u .∗ v.
→ − − − →− → u0 , then u0 .∗ λxλy.(x c0 → ey ) for some
Definition 5.4 Let t be a solvable term. We say that: 1. ν occurs nicely in t if the only occurrences of ν are in a sub-term of the form (Jn ν). 2. ν occurs correctly in t if it occurs nicely in t and the head normal form of t − → −c → − − − looks like λx.(x → ey ) for some final subsequence → y of → x of length at least 1 → − such that ν does occur in ey . The intuitive meaning of Lemma 5.4 below is “for each n ∈ N, tjn ∗ (ajn Bjn ) and (A0 Vjn [A]) .∗ λXjn .(Jn+1 ν ajn Bjn )”. Strictly speaking, this is not true, because the ajn and Bjn are not the “real” b For two reasons: ones i.e. the ones that occur in the reduction with the real J. • The first one is easily corrected: in ajn and Bjn the constant γ must be replaced by Jn+1 • The second one is more subtle. In the correct lemma, and the Jn should be the “real” ones. But the ajn are defined using J˜p which are only fake Jp . Stating the correct lemma will need complicated, and useless, definitions. We will not do it and thus we state the lemma in the way it should be, intuitively, understood. ∗ Lemma 5.4 For each n ∈ N, tjn (ajn [γ := Jn+1 ] Bjn [γ := Jn+1 ]) and 0 ∗ (A Vjn [A]) . λXjn .(Jn+1 ν ajn [γ := Jn+1 ] Bjn [γ := Jn+1 ]). Proof By induction on n.
Proposition 5.1 For each n ∈ N, ν occurs correctly in (A′ V_{j_n}[A]).
Proof We have (A′ V_{j_n}[A]) ⊳* λX_{j_n}.(J_{n+1} ν a_{j_n}[γ := J_{n+1}] B_{j_n}[γ := J_{n+1}]) = λX_{j_n}.((λz_1...λz_{h_{n+1}}.(a_{j_n}[γ := J_{n+1}] (J_{n+2} ν z_1) ... (J_{n+2} ν z_{h_{n+1}}))) B_{j_n}[γ := J_{n+1}]). Since h_{n+1} = l_{j_n} + lg(B′_{j_n} :: T_n) + 1, then, by Lemma 5.4, (A′ V_{j_n}[A]) ⊳* λX_{j_n}λZ.(t′_{j_n} e_Z) where t′_{j_n} ≽* t_{j_n}, e_Z ≽* Z and lg(Z) > l_{j_n}. Therefore, by Lemma 5.3, ν occurs correctly in (A′ V_{j_n}[A]).
Lemma 5.5 Let t be a solvable term. Assume that ν occurs correctly in t. Then ν occurs (and it occurs correctly) in every reduct of t.
Proof This follows immediately from the fact that if (J_n ν y) ≼* u, then ν ∈ βΩ(u).
Lemma 5.6 Let k be an integer such that ν occurs correctly in (A′ V_k[A]). Then ν occurs correctly in (A′ U_k[A′]).
Proof This follows immediately from Lemmas 5.3 and 5.5.
Proposition 5.2 ν is persisting in F[A′].
Proof Same proof as Proposition 4.2.
6 The other assumption
As we already said, the fact that ν is persisting in F[A′] does not imply that, letting A_n = A′[ν := c_n], F[A_n] ≄ F[A_m] for n ≠ m. To ensure that the range of λx.F is infinite, we need another assumption on F. Let λx.F be a closed term and A be such that F[A] ≄ F[Ω]. In Propositions 6.1 and 6.2 below we assume that F has Barendregt's persistence property. Let A′ be the corresponding term. We also assume that A′ has been obtained by the way developed in Section 4.2 or in Section 5. Note that, in these cases, ν is never applied in a reduct of F[A′].
Proposition 6.1 Assume there is a sequence (t_n)_{n∈N} of distinct closed and normal terms such that, for every n, t_n never occurs as a sub-term of some t′ such that F[A′] ⊳*_{βΩ} t′. Then, the range of λx.F is infinite.
Proof Let A_n = A′[ν := t_n]. It is enough to show that F[A_n] ≄ F[A_m] for n ≠ m. Assume F[A_n] ≃ F[A_m] for some n ≠ m and let u be a common reduct. Since ν is never applied in a reduct of F[A′], there are reducts a_n and a_m of F[A′] such that u = a_m[ν := t_m] = a_n[ν := t_n]. This implies that t_n occurs in a_m. Contradiction.
Remark Say that u is a universal generator if, for every closed λ-term t, there is a reduct of u where t occurs as a sub-term. Before Plotkin gave his counterexample, Barendregt had proved that the omega-rule is valid when t, t′ are not universal generators. Our hypothesis on the existence of the sequence (t_n)_{n∈N} may look similar. Assuming that F[A′] is not a universal generator, there is a term t that never occurs as a sub-term of a reduct of F[A′]. Letting t_0 = t and t_{n+1} = λx.t_n, the reducts of F[A′] never contain one of these terms. Also, they are not equal, because otherwise t is Ω. This is however not enough to show that the F[ν := t_n] are distinct because we need that no reduct of F[A′] contains a reduct of one of the t_n (this is why, in our hypothesis, we have assumed that the t_n are normal). This raises two questions:
1. Say that a term u is a weak generator if for every closed λ-term t one of its reducts occurs as a sub-term of a reduct of u. A universal generator is, trivially, a weak generator. Is the converse true?
2. Is it true that, if F[A′] is a universal generator, then so is F[A]? Note that, somehow, the A′ we have constructed is a kind of η-infinite expansion of A.
If these two propositions were true, we could replace the assumption of Proposition 6.1 by the (more elegant) fact that F[A] is not a universal generator.
Definition 6.1 Let A_n = A′[ν := c_n]. Say that F, A satisfy the Scope lemma if, for every n, m, the fact that F[A_n] ≃ F[A_m] implies that, for some k, (x U_k)[x := A_n] ≃ (x U_k)[x := A_m].
The terminology "Scope lemma" is borrowed from A. Polonsky. In his paper he stated a hypothesis (denoted as the Scope Lemma) which corresponds to the previous property.
Proposition 6.2 Assume F, A satisfy the Scope lemma. Then the range of λx.F is infinite.
Proof Immediate.
Acknowledgement. We wish to thank Andrew Polonsky for helpful discussions and also the anonymous referees for their remarks and suggestions.
References
[1] H.P. Barendregt, The Lambda Calculus, Its Syntax and Semantics. North-Holland, 1985.
[2] H.P. Barendregt, Constructive proofs of the range property in Lambda Calculus. Theoretical Computer Science, 121 (1-2), pp. 59-69, 1993.
[3] H.P. Barendregt, Towards the range property for the lambda theory H. Theoretical Computer Science, 398 (1-3), pp. 12-15, 2008.
[4] C. Böhm, Alcune proprietà delle forme βη-normali nel λK-calculus. Pubblicazioni 696, Istituto per le Applicazioni del Calcolo, Roma, 1968.
[5] D.T. van Daalen, The Language Theory of Automath. PhD thesis, Technische Universiteit Eindhoven, 1980.
[6] R. David, Computing with Böhm trees. Fundamenta Informaticae, 45 (1-2), pp. 53-77, 2001.
[7] R. David & K. Nour, Storage operators and directed lambda-calculus. Journal of Symbolic Logic, vol 60-4, pp. 1054-1086, 1995.
[8] B. Intrigila & R. Statman, On Henk Barendregt's favorite open problem. Reflections on Type Theory, Lambda Calculus, and the Mind. Henk Barendregt festschrift, 2007.
[9] J.-L. Krivine, Lambda Calcul : types et modèles, Masson, Paris, 1990.
[10] A. Polonsky, The range property fails for H. Journal of Symbolic Logic, vol 77-4, pp. 1195-1210, 2012.
[11] G. Plotkin, The λ-calculus is ω-incomplete. Journal of Symbolic Logic, vol 39, pp. 313-317, 1974.
[12] R. Statman, Does the range property hold for the λ-theory H? TLCA list of open problems, http://tlca.di.unito.it/opltlca/, 1993.
[13] R.C. de Vrijer, Barendregt's lemma. In Barendsen, Geuvers, Capretta, and Niqui, editors, Reflections on Type Theory, Lambda Calculus, and the Mind, pages 275-284. Radboud University Nijmegen, 2007.
|
2020-07-08 21:22:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9242095351219177, "perplexity": 2960.944872338204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897707.23/warc/CC-MAIN-20200708211828-20200709001828-00222.warc.gz"}
|
https://brilliant.org/discussions/thread/calculation-function-idea/
|
Ω calculation function idea
So at school I was thinking "why don't I create a function that someone can use to calculate numbers in their head?", so I came up with the Ω function for calculation. It's basically 1Ω = 1...100, 2Ω = 2...200 and so on, but how do we get that number? Do we use the Riemann zeta function? The Euler gamma function? I need your help to design a concept, comment ideas below!
Note by Ark3 Graptor
10 months, 1 week ago
Sort by:
teal u wort, wee carn oose ze raiman zeti fancshion 2 ween at live.
loork at dis:
$\sum_{n=1}^{\infty}H_n^2x^n=\frac{1}{x}\left[\sum_{n=1}^{\infty}H_n^2x^n+\sum_{n=2}^{\infty}\frac{1}{n^2}x^n-2\sum_{n=2}^{\infty}\frac{H_n}{n}x^n\right]-1$
$2\sum_{n=1}^{\infty}\left(\sum_{k=1}^n\frac{1}{k}\right)^2\left(\frac{1}{2}\right)^n+2L-4\sum_{n=1}^{\infty}\frac{\sum_{k=1}^n\frac{1}{k}}{n}\left(\frac{1}{2}\right)^n$
$S=\frac{\pi ^2}{6}+\ln ^2\left(2\right)$
eat's ass eencormprihancibel ass ur stuupeedeeti ees 2 u. · 10 months, 1 week ago
btw, is there any chance that you might be known as Noted Scholar in other accounts? I'm a big fan of noted scholar. · 10 months, 1 week ago
You can even write in integral representation:
$\huge n\Omega=\dfrac{ \int_{0}^{\infty} \dfrac{x^{100n}}{\zeta(100n+1)(e^x-1)} \ dx}{ \int_{0}^{\infty} \dfrac{x^n}{\zeta(n+1)(e^x-1)} \ dx}$ · 10 months, 1 week ago
That's quite a bit of detail · 10 months, 1 week ago
@Ark3 Graptor there? · 10 months, 1 week ago
I explained an adjustment to your theory below on your first comment · 10 months, 1 week ago
So we have $$n\Omega=\dfrac{\Gamma(100n+1)}{\Gamma(n+1)}$$ , got till here? · 10 months, 1 week ago
Better · 10 months, 1 week ago
So basically you are defining $$n\Omega=\dfrac{(100n)!}{n!}$$ right? · 10 months, 1 week ago
No wait, it's this: $1\Omega = 1,2,3,4,5,6,7...100, 2\Omega = 2,4,6,8,10,12,14...200, etc$ The thing is how to obtain one specific number from that selection without randomly picking one · 10 months, 1 week ago
|
2017-02-28 12:27:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9425044655799866, "perplexity": 5155.795963561658}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00406-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://itectec.com/ubuntu/ubuntu-what-does-mean-exactly-in-output-redirection/
|
# Ubuntu – What does & mean exactly in output redirection
Tags: command-line, io, redirect
I see stuff like command 1> out or with 2>&1 to redirect stderr, but sometimes I also see &> by itself, etc.
What is the best way to understand & and what it means exactly?
• The & in 2>&1 simply says that the number 1 is a file descriptor and not a file name. In this case the standard output file descriptor.
If you use 2>1, then this would redirect errors to a file called 1 but if you use 2>&1, then it would send it to the standard output stream.
This &> says send both, standard output and standard error, somewhere. For instance, ls <non-existent_file> &> out.file. Let me illustrate this with an example.
Setup:
1. Create a file koko with the following content:
#!/bin/bash
ls j1
echo "koko2"
2. Make it executable: chmod u+x koko
3. Now note that j1 doesn't exist
4. Now run ./koko &> output
5. run cat output and you will see
ls: cannot access 'j1': No such file or directory
koko2
Both, standard error (ls: cannot access 'j1': No such file or directory) and standard output (koko2), were sent to the file output.
Now run it again but this time like so:
./koko > output
Do cat output and you will only see the koko2 line, but not the error output from the ls j1 command. That was sent to standard error, which you will see in your terminal.
Important note thanks to @Byte Commander:
Note that in command >file 2>&1 the order of the redirection is important. If you write command 2>&1 >file instead (which is normally not what you want), it will first redirect the command's stdout to the file and after that redirect the command's stderr to its now unused stdout, so it will show up in the terminal and you could pipe it or redirect it again, but it will not be written to the file.
|
2021-07-28 11:44:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37144243717193604, "perplexity": 3679.279844287142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153709.26/warc/CC-MAIN-20210728092200-20210728122200-00036.warc.gz"}
|
https://www.bigphysics.org/index.php/%E5%88%86%E7%B1%BB:Measuring_academic_influence:_Not_all_citations_are_equal
|
# Category: Measuring academic influence: Not all citations are equal
Xiaodan Zhu, Peter Turney, Daniel Lemire & André Vellino, Measuring Academic Influence: Not All Citations Are Equal, Journal of the Association for Information Science and Technology, 66(2), 408, http://doi.org/10.1002/asi.23179
## Abstract
The importance of a research article is routinely measured by counting how many times it has been cited. However, treating all citations with equal weight ignores the wide variety of functions that citations perform. We want to automatically identify the subset of references in a bibliography that have a central academic influence on the citing paper. For this purpose, we examine the effectiveness of a variety of features for determining the academic influence of a citation. By asking authors to identify the key references in their own work, we created a data set in which citations were labeled according to their academic influence. Using automatic feature selection with supervised machine learning, we found a model for predicting academic influence that achieves good performance on this data set using only four features. The best features, among those we evaluated, were those based on the number of times a reference is mentioned in the body of a citing paper. The performance of these features inspired us to design an influence-primed h-index (the hip-index). Unlike the conventional h-index, it weights citations by how many times a reference is mentioned. According to our experiments, the hip-index is a better indicator of researcher performance than the conventional h-index.
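To make the weighting idea concrete, here is a minimal sketch (my own illustration, not the authors' code, and the exact weighting used in the paper may differ): an h-index variant in which each citation contributes its number of in-text mentions instead of counting as 1.

def hip_index(mention_counts_per_paper):
    # mention_counts_per_paper: for each of the author's papers, a list with the
    # number of in-text mentions contributed by each paper that cites it.
    scores = sorted((sum(m) for m in mention_counts_per_paper), reverse=True)
    h = 0
    for i, s in enumerate(scores, start=1):   # largest h with h papers scoring >= h
        if s >= i:
            h = i
        else:
            break
    return h

# Example: three papers; the first is cited by three papers mentioning it 3, 1 and 2 times.
print(hip_index([[3, 1, 2], [1, 1], [4]]))   # -> 2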
|
2022-08-10 17:00:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24334722757339478, "perplexity": 1116.1331664210973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00479.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-6-systems-of-equations-and-inequalities-6-2-solving-systems-using-substitution-apply-what-you-ve-learned-page-377/d
|
## Algebra 1: Common Core (15th Edition)
Interpreting the solution in terms of how we defined $x$ and $y$ in part (a), Ashley spends $\displaystyle \frac{80}{3}=26\frac{2}{3}$ minutes on the stair machine, and $\displaystyle \frac{40}{3}=13\frac{1}{3}$ minutes on the rowing machine (This is a total of 40 minutes of exercise.)
|
2021-06-21 01:18:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.740336537361145, "perplexity": 741.0420771556442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488259200.84/warc/CC-MAIN-20210620235118-20210621025118-00291.warc.gz"}
|
https://mathstodon.xyz/@_c_perez
|
True Logick is Neohermetic Pythagoreanism for the modern mathematician.
vixra.org/abs/1905.0086
I want to use the phrase “fire up the Rips machine” in my thesis.
How to threaten a mathematician: "This is some nice Hagoromo Fulltouch chalk you have here. It would be a shame if you came into your office one day and found it broken to pieces on the floor."
New entry!
Computational complexity and 3-manifolds and zombies
Article by Greg Kuperberg and Eric Samperton
In collections: Attention-grabbing titles, Basically computer science
We show the problem of counting homomorphisms from the fundamental group of a homology $$3$$-sphere $$M$$ to a finite, non-abelian simple...
URL: arxiv.org/abs/1707.03811v1
PDF: arxiv.org/pdf/1707.03811v1
Like, doing this just feels weird: $$\bigcirc_{i=1}^nf_i := f_1 \circ f_2 \circ \dots \circ f_n$$
Idle thought: with the exception of $$\sum$$ and $$\prod$$, transforming an associative binary operation into an agglomerative "summation notation" is just making the symbol big and adding sub/superscripts. E.g., $$\bigotimes_{i=0}^n v_i$$. It's strange to me that this applies to non-commutative operators ( $$\wedge$$ ), but also that there are many binary operations where this rule can't be applied due to limitations of the syntax (bracket operators, function composition, modulo).
I recently learned about Ologs, which are basically "category theory for normal people". These are very useful for knowledge representation. I am now in the making of a blog post about how to make them and how to translate ologs to Haskell code.
en.wikipedia.org/wiki/Olog
I recently found some hilariously-named software that cuts PDFs scanned as two pages side-by-side into single pages.
briss.sourceforge.net/
Let $$Y$$ be a subspace of $$X$$. The wizard hat space $$W$$ is constructed by attaching the base of the cone $$CY$$ over $$Y$$ to $$X$$ along $$Y$$.
(This showed up in a topology class I took once. We needed to use the fact that $$W$$ deformation retracts onto $$X$$, but I don't remember what it was used for. The name is mine.)
Math genealogy visualizer now supports finding the closest common ancestor of two mathematicians.
j2kun.github.io/math-genealogy/index.html
Working on my thesis project. Given two torsion-free groups $$\Gamma_1,\Gamma_2$$ which are elementarily equivalent and both hyperbolic relative to their abelian subgroups, I'm currently trying to determine what sorts of constraints I can put on how homomorphisms $$\Gamma_1\to\Gamma_2$$ behave on the abelian subgroups.
The social network of the future: No ads, no corporate surveillance, ethical design, and decentralization! Own your data with Mastodon!
|
2022-08-13 07:00:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6248801946640015, "perplexity": 1208.1796656785307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00128.warc.gz"}
|
http://openstudy.com/updates/5099f043e4b02ec0829cef92
|
## mskyeg Group Title Find the real solutions of the equation by graphing. x^2 – x + 2 = 0
1. campbell_st
there are no real solutions to this quadratic...
2. campbell_st
the quadratic is positive definite... to check the nature of the solutions use the discriminant $\Delta = b^2 - 4ac$
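Worked out for the equation in this thread, $x^{2}-x+2=0$ (so $a=1$, $b=-1$, $c=2$): $\Delta = (-1)^{2}-4(1)(2) = -7 < 0$, so the graph never touches the x-axis and the only solutions are the complex pair $x=\frac{1\pm i\sqrt{7}}{2}$.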
3. Eda2012
use this formula.... $ax^{2}+bx+c=0$ $x = \frac{ -b \pm \sqrt{b^{2}-4ac} }{ 2a }$
4. campbell_st
you may use the formula but you won't get real solutions, you will get complex solutions.
5. Eda2012
why don't you use completing the square... and then plot the graph... then find the intersections of the graph...
6. mskyeg
how do i get a real solution
|
2014-10-23 06:39:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5864643454551697, "perplexity": 1797.6288738137694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507452492.48/warc/CC-MAIN-20141017005732-00057-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://mathstodon.xyz/@skalyan
|
monospace font recommendation
for the record, some new "coding ligatures" fonts that have decent international support:
Iosevka
typeof.net/Iosevka/
(from parent Source Code Pro)
Mensch
robey.lag.net/2010/06/21/mensc
(from parent Menlo)
Victor Mono
rubjo.github.io/victor-mono/
found via programmingfonts.org/
I like the _idea_ of Victor Mono but I think Iosevka looks better…none of them are as cute as Fira :(
Solving one problem at a time:
Are you interested in a federated alternative to Goodreads that doesn't use Amazon?
because I'm making a federated alternative to Goodreads that doesn't use Amazon
github.com/mouse-reeve/fedirea
Information on creating marbled paper with LaTeX! people.csail.mit.edu/jaffer/Ma.
You can find the package documentation here: ctan.org/pkg/pst-marble .
Sir Thomas Urquhart was a 17th-century Scottish eccentric who tried to systematize a new language for trigonometry; the law of sines was abbreviated as “eproso”, which (if you know the system) encapsulates its meaning.
blog.plover.com/book/Urquhart-
OMFG A FEDERATED SCHOLARLY COMMUNICATIONS PLATFORM!
I literally cannot wait to dig into this!! It's what I've been wanting!
olki.loria.fr/platform/
#Accessibility
A color contrast checker that offers alternatives if your color combination has not enough contrast. It would be cool to chose if you want to change background or foreground though but it's still very nice to get suggestions
polypane.app/color-contrast/#f
Challenge for applied category theory: build a ronavirus, so that the world can be sane again.
#DataVisualization can help us make sense of and teach about the coronavirus, but the stakes are high. @abmakulec@twitter.com has 10 considerations to help you #VizResponsibly:
medium.com/nightingale/ten-con
Who called it a magma and not a hemidemisemigroup?
ah yes, the classic proof
Mathemagics
Article by Pierre Cartier
In collections: Notation and conventions, The act of doing maths
My thesis is: there is another way of doing mathematics, equally successful, and the two methods should supplement each other and not fight.
URL: ftp.gwdg.de/pub/misc/EMIS/jour
I've just noticed that if you choose "wheelchair accessible" on google maps directions, it still shows an icon of a dude walking
The innovation here seems to be in the emphasis placed on the properties of the sum of the roots and the product of the roots.
And now, substituting into the formula for $$r$$ and $$s$$, we get $$r, s = -\frac{b}{2} \pm \sqrt{\frac{b^2}{4} - c}$$.
But this is just the same as $$\frac{-b \pm \sqrt{b^2 - 4c}}{2}$$, which is exactly what we would expect from the standard formula, given that $$a=1$$! The general derivation, for $$a \neq 1$$, takes a few more steps, but is fairly straightforward.
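A concrete check of the same steps with a made-up example: for \(x^2 - 6x + 8\) we need \(r+s=6\) and \(rs=8\), so \(r,s = 3 \pm z\) with \(9 - z^2 = 8\), hence \(z = \pm 1\) and the roots are \(4\) and \(2\).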
Now, the only way we can have $$r+s = -b$$ is if $$r = -\frac{b}{2} + z$$ and $$s = -\frac{b}{2} - z$$, for some $$z$$.
Since $$rs = c$$, this means that $$c = \frac{b^2}{4} - z^2$$.
Rearranging, we get $$z = \pm \sqrt{\frac{b^2}{4} - c}$$.
The trick is as follows: Assume, first of all, that $$a=1$$, and let $$r$$ and $$s$$ be the (unknown) roots. Then we can write $$x^2 + bx + c = (x - r)(x - s)$$.
Expanding the right-hand side, we get $$x^2 + bx + c = x^2 - (r+s)x + rs$$.
So we have $$r + s = -b$$, and $$rs = c$$.
Maths educator Po-Shen Loh has discovered a way to solve quadratic equations that is much more intuitive than $$\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$, and is apparently unprecedented in the entire 4000-year history of thought on quadratic equations! See arxiv.org/abs/1910.06709.
I've just discovered something incredible: Jim Fowler has compiled TikZ to WebAssembly! That means you can render TikZ diagrams in web pages, **on the fly**!!
I've made a demo page with an editor, so you can see it and believe it: tikzjax-demo.glitch.me
|
2020-09-18 14:50:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8446135520935059, "perplexity": 1700.6940919357799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187899.11/warc/CC-MAIN-20200918124116-20200918154116-00794.warc.gz"}
|
https://icml.cc/Conferences/2021/ScheduleMultitrack?event=10662
|
Spotlight
Estimating $\alpha$-Rank from A Few Entries with Low Rank Matrix Completion
Yali Du · Xue Yan · Xu Chen · Jun Wang · Haifeng Zhang
Wed Jul 21 05:30 AM -- 05:35 AM (PDT) @
Multi-agent evaluation aims at the assessment of an agent's strategy on the basis of interaction with others. Typically, existing methods such as $\alpha$-rank and its approximation still require exhaustively comparing all pairs of joint strategies for an accurate ranking, which in practice is computationally expensive. In this paper, we aim to reduce the number of pairwise comparisons in recovering a satisfying ranking for $n$ strategies in two-player meta-games, by exploring the fact that agents with similar skills may achieve similar payoffs against others. Two situations are considered: the first one is when we can obtain the true payoffs; the other one is when we can only access noisy payoffs. Based on these formulations, we leverage low-rank matrix completion and design two novel algorithms for noise-free and noisy evaluations respectively. For both of these settings, we theorize that $O(nr \log n)$ ($n$ is the number of agents and $r$ is the rank of the payoff matrix) payoff entries are required to achieve sufficiently good strategy evaluation performance. Empirical results on evaluating the strategies in three synthetic games and twelve real-world games demonstrate that strategy evaluation from a few entries can lead to comparable performance to algorithms with full knowledge of the payoff matrix.
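The abstract does not spell out the algorithms, but the low-rank idea can be illustrated with a small sketch (my own toy code, not the authors' method): observe a fraction of the payoff entries, complete the matrix with iterative soft-thresholded SVD, and rank strategies by their completed average payoff.

import numpy as np

def soft_impute(P_obs, mask, lam=0.1, iters=200):
    # Iterative SVD soft-thresholding: low-rank fill-in of the unobserved entries.
    X = np.where(mask, P_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U * np.maximum(s - lam, 0.0)) @ Vt   # shrink singular values
        X = np.where(mask, P_obs, X_low)              # keep observed entries fixed
    return X

rng = np.random.default_rng(0)
n, r = 30, 2
P = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # a rank-r "payoff" matrix
mask = rng.random((n, n)) < 0.4                          # observe roughly 40% of entries
P_hat = soft_impute(P, mask)
ranking = np.argsort(-P_hat.mean(axis=1))                # strongest strategies first
print(ranking[:5])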
#### Author Information
##### Yali Du (University College London)
Yali Du is a 3rd year PhD student with her research focusing on matrix completion and its applications to recommender systems, multi-label learning and social analysis. She has the enthusiasm to communicate with other researchers and learn from them. She has published two full-length papers at IJCAI 2017.
|
2023-03-20 16:23:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.640161395072937, "perplexity": 1243.8770652730548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00074.warc.gz"}
|
http://mathhelpforum.com/statistics/137510-statistics-probability-question-work-shown.html
|
# Thread: Statistics Probability Question - Work Shown
1. ## Statistics Probability Question - Work Shown
Assume that the population of men's weights are normally distributed with a mean of 172 lbs and a standard deviation of 29lbs. 36 men are randomly selected. What is the probability that their MEAN weight is less than 167lbs?
My work:
z = (167-172) / (29 / square root of (36)) = -1.03
z = -1.03 =======> standard z score table area is .3485
P ( mean weight less than 167) = .5000 - .3485 = .1515 = 15.15%
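A quick check of the arithmetic with scipy (not part of the original post):

from math import sqrt
from scipy.stats import norm

mu, sigma, n, xbar = 172, 29, 36, 167
z = (xbar - mu) / (sigma / sqrt(n))   # standard error of the mean is 29/6
print(round(z, 2))                    # -1.03
print(norm.cdf(z))                    # about 0.15, matching the ~15% found above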
2. Originally Posted by funnyname7
Assume that the population of men's weights are normally distributed with a mean of 172 lbs and a standard deviation of 29lbs. 36 men are randomly selected. What is the probability that their MEAN weight is less than 167lbs?
|
2017-10-22 06:47:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8195164203643799, "perplexity": 737.0621179198871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825147.83/warc/CC-MAIN-20171022060353-20171022080353-00160.warc.gz"}
|
https://cyberleninka.org/article/n/1189
|
# A Powerdomain of Possibility Measures (Academic research paper on "Computer and information sciences")
CC BY-NC-ND
## Abstract of research paper on Computer and information sciences, author of scientific article — Michael Huth
Abstract We provide a domain-theoretic framework for possibility theory by studying possibility measures on the lattice of opens 𝒪(X) of a topological space X. The powerspaces P[0,∞](X) and P[0,1](X) of all such maps extend to functors in the natural way. We may think of possibility measures as continuous valuations by replacing '+' with '∨' in their modular law. The functors above send continuous maps to sup-maps and continuous domains to completely distributive lattices; in the latter case they are locally continuous. Finite suprema of scalar multiples of point valuations form a basis of the powerdomains above if 𝒪(X) is the Scott-topology of a continuous domain. The notions of [0,1]- and [0,∞]-modules correspond to that of continuous cones if addition on the reals and on the module is replaced by suprema. The powerdomain P[0,∞](D) is the free [0,∞]-module over a continuous domain D.
## Academic research paper on topic "A Powerdomain of Possibility Measures"
Electronic Notes in Theoretical Computer Science 6 (1997)
URL: http://www.elsevier.nl/locate/entcs/volume6.html 12 pages
A Powerdomain of Possibility Measures
Michael Huth
Department of Computing and Information Sciences Kansas State University Manhattan, KS 66506, USA
Abstract
We provide a domain-theoretic framework for possibility theory by studying possibility measures on the lattice of opens 𝒪(X) of a topological space X. The powerspaces P[0,∞](X) and P[0,1](X) of all such maps extend to functors in the natural way. We may think of possibility measures as continuous valuations by replacing '+' with '∨' in their modular law. The functors above send continuous maps to sup-maps and continuous domains to completely distributive lattices; in the latter case they are locally continuous. Finite suprema of scalar multiples of point valuations form a basis of the powerdomains above if 𝒪(X) is the Scott-topology of a continuous domain. The notions of [0,1]- and [0,∞]-modules correspond to that of continuous cones if addition on the reals and on the module is replaced by suprema. The powerdomain P[0,∞](D) is the free [0,∞]-module over a continuous domain D.
1 Possibility Measures
This extended abstract attempts to recast some of the work done in quantitative domain theory within the traditional domain theory of continuous domains and lattices. We illustrate our approach with the two prime targets of quantitative analysis, the completely distributive lattices [0,1] and [0,∞]. Given a dcpo D, there are numerous semantic scenarios in which we are interested in the function spaces
(1) [D → [0,1]]
(2) [D → [0,∞]].
For example, the former is a natural carrier of meaning for Markov chain processes [1], or quantitative model checks [2], whereas the second space could be the carrier of meaning for a cost, or running time, analysis.¹ Note that we stipulated that these functions are continuous. That way we ensure that the process of approximating total elements by partial ones in D is consistent with the quantitative evidence provided by some function f ∈ [D → [0,1]]. Even if the notion of partial elements were absent, for example, if D is a flat domain of states of some concurrent system, one still has to resort to continuous functions in order to allow for sound higher-order semantics. To wit, the space [[D → [0,∞]] → [0,1]] could be the domain which, given a property φ and some t ∈ [0,∞], returns the possibility of satisfying φ within time t, where D has an additional structure of a continuous real-time system. In addition, it should be clear that the internal hom functor [· → [0,1]] is a plausible candidate for marrying the theories of domains [3] and fuzzy sets [4]. Possibility measures [6] are a concept used in possibility theory. As such, they belong to the broad field of theories of evidence in Artificial Intelligence and Empirical Sciences. Possibility measures are functions μ: 𝒫(X) → [0,1], where 𝒫(X) is the power set of a finite set X, such that
(3) μ(⋃𝒯) = ⋁{μ(Y) | Y ∈ 𝒯}
for all 𝒯 ⊆ 𝒫(X). Possibility theory is motivated by the observation that non-determinism can arise through uncertainty, or through unsharpness of data. The first kind is captured well with standard notions of probability theory whereas the latter seems to fare better with concepts based on fuzzy set theories [4]. Unsharpness implies non-specific and therefore set-valued semantics. This brings us into the familiar realm of powerdomains [3].
¹ Reinhold Heckmann, Klaus Keimel and Philipp Sünderhauf made valuable suggestions when I presented preliminary parts of this material at the Comprox II meeting in Darmstadt, Germany, on September 25, 1996.
Joint work done with Reinhold Heckmann uncovered a natural isomorphism between [D → [0,1]] and the space of all sup-maps μ: σ(D) → [0,1], where σ(D) is the Scott-topology of D [5]. The corresponding result is true for [0,∞] as well. This provides a reassuring link between the spaces of quantitative meanings in (1) and (2), and spaces of topological possibility measures.
In this paper we develop an algebraic theory of possibility measures in a general topological setting; the power set 𝒫(X) is then merely the special case of a discrete topological space. This leads us to defining possibility measures as sup-maps from the lattice of opens 𝒪(X) to [0,1].
One notices that the 'type' 𝒪(X) → [0,1] of topological possibility measures looks like the one for continuous valuations [7,8]. Recall that a continuous valuation is nothing but a strict Scott-continuous map μ ∈ [𝒪(X) → [0,1]] such that μ satisfies the usual modular law of measure theory [9]:
(4) μ(U ∪ V) + μ(U ∩ V) = μ(U) + μ(V)
for all U, V ∈ 𝒪(X). Clearly, such a property cannot be expected from sup-maps of that same type. However, topological possibility measures do satisfy a similar modular law if we replace '+' with '∨', the binary supremum in [0,1].² Any topological possibility measure ν must be monotone since it is a sup-map. Therefore, the fuzzy modular law
(5) ν(U ∪ V) ∨ ν(U ∩ V) = ν(U) ∨ ν(V)
reduces to ν(U ∪ V) = ν(U) ∨ ν(V). Thus, ν satisfies the fuzzy law in (5). Conversely, any strict Scott-continuous function ν′: 𝒪(X) → [0,1] which satisfies (5) has to be a sup-map. To summarize, we see that continuous valuations and topological possibility measures share that they are strict maps in [𝒪(X) → [0,1]], but are distinguished by their modular laws (4) and (5).
2 Possibility Measures on Topological Spaces
Definition 2.1 For complete lattices L and M we define L⊸M to be the complete lattice of all sup-maps f: L → M, ordered pointwise. For a topological space X with topology 𝒪(X) we define P[0,∞](X) to be 𝒪(X)⊸[0,∞]. Given any continuous map f: X → Y between two topological spaces we define P[0,∞](f): P[0,∞](X) → P[0,∞](Y) by
(6) P[0,∞](f)(μ) = μ ∘ f⁻¹
for all μ ∈ P[0,∞](X).
Note that the map P[0,∞](f): P[0,∞](X) → P[0,∞](Y) is well defined since the inverse image function f⁻¹: 𝒪(Y) → 𝒪(X) preserves suprema. Since σ(D)⊸2 is just the order dual of σ(D), we realize σ(D)⊸2 as another version of the lifted lower powerdomain. In replacing 2 with [0,∞] we obtain the space P[0,∞](D), which we can therefore think of as a fuzzy (lifted) lower powerdomain. This suggestion gets formal support in the section on free [0,∞]-modules.
Proposition 2.2 The two-sorted function ffOao0 constitutes a functor from TOP, the category of topological spaces and continuous maps, to SUP, the category of complete lattices and maps preserving suprema. This functor restricts to a locally continuous functor from CONT, the category of continuous dcpos and Scott-continuous maps, to CD, the category of completely distributive lattices and maps preserving suprema.
In semantics we are mostly interested in probabilities or possibilities in the range of [0,1]:
Definition 2.3 Let X be a topological space. Then ff01](X) is the set of all H G f^0oo](X) with range contained in [0,1]; the order on ff01](X) is the pointwise one.
Proposition 2.4 The functor f^0oo]Q:TOP SUP restricts to a functor P0 jjQ: TOP —SUP and to a locally continuous functor P0 ^Q: CONT —CD.
2 Reinhold Heckmann and Gordon Plotkin kindly pointed this out to me at the COMPROX II meeting in Darmstadt.
Since completely distributive lattices are L-domains and FS-domains we may solve recursive domain equations in L and FS involving the functors Fj"0 (•) and x](-) by using the standard machinery of [3], The wav-below and wav-way below relation [3] on P r(/)) are induced by the ones in Pu s (I)).
Lemma 2.5 Let X he a topological space. Then ^(X) is a sup-projection of ^(X). In particular, the way-below and way-way below relation on ^(X) are induced by the ones in f^0oo](X). The projection of /j, E f^0oo](X) maps O E O(X) to fj,(0) A 1.
If we define R=i(X) to be those possibility measures // e Pu ^(X) with fj,(X) = 1 plus the constant zero measure then this is a complete lattice closed under all suprema in Fj"0 (X). It would be interesting to find out about the distributivitv, continuity and co-continuity of P=i(X), at least when X is a continuous domain and O(X) its Scott-topologv.
3 Point Valuations as Possibility Measures
Point valuations are a crucial technical tool in the theory of continuous valuations [7,8]. Given a topological space X and x E X, the point valuation r]x\ 0(X) —[0, oo] is defined by r]x(0) = 1 if x E O and r]x(0) = 0 otherwise. As such point valuations are quite crisp in nature but they are nonetheless possibility measures.
Lemma 3.1 Let X be a topological space and x E X. Then the point valuation r]x is a possibility measure, so we have maps r]x'.X —f*0oo](X) and rj'x-X ff01](X) sending x to r]x. Moreover, these maps are Scott-continuous and injective if 0{X) is the Scott-topology on a dcpo X.
Proof. Given open sets U,V E O(X) we have r]x(U U V) = 1 if and only if x e U or x e V; the latter is equivalent to r]x(U) V r]x(V) = 1. Thus, r]x e 0(X)^[0, oo] = P[0tOc](X). The rest follows as in [7,8]. □
What other facts about point valuations carry over from the classical case?
Definition 3.2 Given any a E [0, oo] and ¡jl E ff0tX}](X) we define a * ¡jl by (7) (a * fj.) O = a ■ fi(0) (O E O(X))
just as for continuous valuations.
Since scalar multiplication r a • r is a self map on [0, oo] preserving suprema,3 we see that a * ¡jl is in F^ (X).
Lemma 3.3 Let X be a topological space. The map
(a, [0, oo] x />0oo](X) ^ />0oo](X)
preserves suprema in each coordinate separately. The corresponding statement holds for [0,1] and f*0 ^(X) as well.
³ We set ∞ · 0 = 0 as in the case of continuous valuations.
The crucial distinctive feature of possibility measures is that they replace the notion of sums of continuous valuations by that of suprema of possibility measures. Summing up possibility measures results in functions that don't preserve suprema in general; just take the sum of η_x and η_y where x and y are incomparable with respect to the specialization order.
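Concretely (a small worked example of the failure just mentioned): take incomparable x, y and opens U, V with x ∈ U \ V and y ∈ V \ U. Then (η_x + η_y)(U ∪ V) = 2, whereas (η_x + η_y)(U) ∨ (η_x + η_y)(V) = 1 ∨ 1 = 1, so the sum is not a sup-map; the supremum η_x ∨ η_y, in contrast, assigns 1 to all three opens and remains a possibility measure.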
4 Simple Possibility Measures
Since addition is not admissible for possibility measures, and since we replaced '+' by 'V' in their modular law, we suggest to define simple possibility measures as finite suprema of scalar multiples of point valuations. Such finite suprema model fuzziness.
Definition 4.1 A possibility measure /j, G f^0oo](X) is called simple if there are finitely many points xi, X2, ■ ■ ■, xn in X and scalar s ai, «2, ■ ■ ■ ,otn [0, oo) such that
(8) (j, = (ai * rjXl) V (a2 * rjX2) V ... V (an * rjxJ.
We call possibility measures of the form a * rjx scalar point valuations.
It is worth pointing out that, unlike continuous valuations on sober spaces [8], scalar point valuations are not characterized by having a two-element image (see equation (10) below). Simple possibility measures are necessity measures [6] in the sense that they preserve infima of opens whose filtered intersection is open again. This follows readily, as in the case for simple valuations [8], since V and multiplication preserve filtered infima in [0,oo],
Lemma 4.2 Any simple possibility measure // G P ^(X) satisfies //((") J-) = l\oer tJ,(0) for all filtered sets T in O(X) whose intersection is open.
We would like to know whether the preservation of filtered open intersections also characterizes possibility measures with finite image, but this seems unlikely.
5 Simple Possibility Measures as a Basis
Now we show that simple possibility measures form indeed a basis of ^(X) and ^(X) if X is a continuous domain and O(X) its Scott-topologv. The proof of that uses results on the structure of function spaces L-oM where L and M are completely distributive. The concrete setting at hand is where L is the completely distributive lattice cr(D), D a continuous domain, and M is the completely distributive lattice [0, oo], respectively [0,1]. We only present the argument for Fj"0 ^(D) since the one for ^(D) is completely similar.
Given ¡j, G Pu s (D) this is just an element of <t(£>)-°[0, oo] and we need to show that it is the supremum of simple possibility measures wav-below it. First, we note that scalar point valuations are maps in L—°M which are well-known in different contexts [10].
Definition 5.1 Let L and M be complete lattices and z E L, y E M. The 'map z /* y: L —M maps the set \.z = {I E L \ I < z} to the zero of M and all other elements to y.
Lemma 5.2 The map z y above is in L—°M. Given a E [0, oo] and x E X, we have
(9) a*rix = (X\Jx}) Generally, for any O E 0(X) we have
(10) \/ a*r)x.
x£X\0
Recall the notion of a step function x \ y E [L —M] which maps all I with x <C I in L to y and all other elements to the zero of M. The following is a straightforward generalization of a lemma in [10]:
Lemma 5.3 Let L and M be complete lattices, x E L and y E M. Then the greatest map preserving suprema below x \y is z /* y, where
(11) z = \J{l E L | x<tl}.
This suggests to finish our argument as follows: Since o(D) and [0, oo] are continuous lattices we know that every ¡j, E [o(D) —[0, oo]] is the supremum of step functions wav-below it [3], In particular, this is true for possibility measures. Those step functions are not possibility measures in general, but the greatest sup-preserving map below such a step function is the supremum of scalar possibility measures by Lemmas 5,2 and 5,3,
Thus, for any x \ y <C ¡jl in [a(D) —[0, oo]] we take z as above and obtain
(12) z /*y<x\yCix
in [a(D) [0, oo]] which implies z y in [o(D) —t [0, oo]]. The latter entails z z71 y <C // in <r(D)-o[0, oo] = Pu s (I)) since the inclusion of Pu s (I)) into [o(D) —t [0, oo]] is Scott-continuous, In particular, we have y * r]a <C ¡jl for all a E d \ z by (10), Since Pu s (i)) is a complete lattice it suffices to show that the supremum of all such z y equals ¡jl. Since ¡jl is the supremum of all step functions wav-below it, we are done as soon as the process of 'taking the greatest sup-preserving map below a Scott-continuous map' preserves suprema. Thus, we need to show that the self-map P on [o(D) [0,oo]], defined by
(13) P(f) = \J{gEa(D)^[0^]\g<f},
preserves suprema. Since o(D) is completely distributive, we can state P explicitly, Every element O in a(D) is the supremum of way-way below elements O'CO [11] (O'CO iff for all O C a{D) with OC[J 0 there is some V E O with O' C V). Now it is routine to verify [10] that
(14) P(f)0 = \J{f(0') | O'CO}
and that P preserves suprema.
Theorem 5.4 Let D be a continuous domain and 0{D) its Scott-topology. Then the set of simple possibility measures in f*0 ^(D) forms a basis of f*0 ^(D). Likewise, the set of simple possibility measures in ^(D) is a basis of ^(D).
In [10] it was also shown that, given completely distributive lattices L and M and f,gE L^M, we have /«Cg in the space L-»M if and only if we have /«Cg in [L —M], Furthermore, the same statement holds for <C instead of C whenever L and M are linear FS-lattices [10], but completely distributive lattices are linear FS-lattices [12], Therefore, we know that <C and <C are induced by the respective relations in [a(D) —[0,oo]], This also applies to F^ ^(D) since this is a sup-projection of ^(D) by Lemma 2,5,
Proposition 5.5 Let D be a continuous dcpo. The way-below relations on P , .(/)) and P s .(/)) are the restrictions of the way-below relation on [o{D) — [0,1]], respectively [o{D) —[0,oo]]; this also holds for the way-way-below relations.
6 Free Modules
Continuous cones are dcpos D with the structure of a commutative monoid (D; © , 0) and a continuous action
(a, d) a * d: [0, oo] x D —D
of [0,oo] on D which interacts with that monoid structure in the expected way [7], Our setting requires that we replace the addition © on D and the addition on [0, oo], respectively [0,1], by suprema. To do so we only need to add the axiom
cf © d = d
to the commutative monoid and think of '+' on reals as the maximum operation, In particular, D is then a complete lattice with © as binary suprema, and we may condense all these conditions to saying that * preserves suprema in each coordinate separately. We phrase this in the language of monoids.
Definition 6.1 We consider the monoids ([0,1]; •, 1) and ([0, oo]; •, 1), where '•'is the usual 'multiplication. An [0,1 }-module is a pair (L; where L is a complete lattice and [0,1] x L —L preserves suprema in each coordinate separately, such that
(15) (ri • r2) *Ll = ri *L (r2 *l I)
(16) 1*1 = 1
for all I e L and all ri, G [0,1]. We define an [0, oo}-module in the obvious and similar way.
Note that [0,1] and [0,oo] are [0, l]-modules with as *[o,i], respectively *[o,oo]■ Also, Pu s (I)) is an [0, oo]-module, and Pu,.(/)) an [0, l]-module
by Lemma 3,3, In any [0,1]-, or [0, oo]-module we must have
(17) 0 * Ll = ± L
for all I 6 L since *l preserves all suprema in its first coordinate. For the rest of this paper we speak of 'modules' if a statement applies to [0,1]- and [0, oo]-modules at the same time. We view modules as algebras (A; ©.4, *a) and morphisms between such algebras (A; ©.4, *.4) and (B; ©B, *B) are Scott-continuous functions /: A —B such that
(18) f(a ©,4 a') = f(a) ®B f(a')
(19) f(a *A a) = oi *B f(a)
for all a E [0, oo], respectively [0,1], and a, a' E A. Since © has to be interpreted as supremum, we see that the first equation merely says that / is a sup-map. Of course, one may apply Frevd's General Adjoint Functor Theorem to secure the existence of free modules [3,13], In [3,13] it has been shown that such an initial algebra ID is a continuous domain if D is a continuous domain to begin with. Thus, we obtain initial, or free, [0,1]- and [0, oo]-modules over a continuous domain D. However, one often would like to have concrete representations of such initial algebras, which validate and strengthen our semantic intuitions. It turns out that the initial algebra for [0, oo] is nothing but (an isomorphic copy of) Fj"0 (D),
We already have a Scott-continuous map r]D: D —^(D) which associates to each x E D its point valuation r]x. So let A be any [0, oo]-module and f-.D—tA a Scott-continuous function. We need to show the existence of a unique morphism of [0, oo]-modules /: P s (I)) A such that
(20) f = foriD.
Since A is an [0, oo]-module it is certainly a complete lattice, so the function
(21) f(p) = Y{a *A f(x) | a * r]x < //}
is well-defined, for a*r]x = /?*% implies a = ¡3 and x = y (the opens separate the points). Since v <C ¡jl < ¡j! implies v <C // we conclude that / is monotone. By Theorem 5,4, ^(D) is continuous; thus, the wav-below relation on Fj"0 ^(D) satisfies the interpolation property [14,3], Using that fact one readily sees that / is Scott-continuous,
Next, we verify one half of the statement that / is a morphism of algebras. For that we need to establish that scalar actions preserve and reflect the wav-below relation in [0, oo]-modules.
Lemma 6.2 Let A be an [0, oo]-module and > ^ 0. Then we have a <C 6 in A if and only if /3 *ao, <C /3 *a b in A.
Proof. The proof works with the scalar action of using that scalar actions are Scott-continuous, □
Lemma 6.3 For the map f above we have f(a * ¡j) = a *A f(p) for all a E [0, oo] and // t P ^jD).
Proof. Since the map ¡j, a * //: ^(D) ^(D) preserves all suprema, we may compute
(22) a *A f(p) = a *A *A f(x) | 7 * r]x < //}
= /(x)) I 7 * Vx < /"}
= \/{(a • 7) *a /(a;)) | 7 * % <
= Vi(« ' ^ *A | ft * (7 *%,)•< ft * /i} by Lemma 6,2 = \/{(ft • 7) *A f(x)) | (a • 7) * < a * //}
= \/{(3 *4 f(x) | (3 * r]x <C a * //} since f3 = a ■ —
= f(a*fJ)
if ft ^ 0, Otherwise, both sides equal ±.4 due to (17), □
We may use the property of / above to show that / o r]n = /■ For that we need to identify certain elements wav-below r]x in P s (I)).
Lemma 6.4 Let a E [0,oo], y E D, and ¡jl E P s (I)) such that
(23) ft < /¿({d E D | y < d}). Then ft * r]y <C jU.
Proof. By Theorem 5,4, ¡j, is the supremum of scalar point valuations f3 * r]x wav-below ¡jl. Thus, fj,({d E D \ y <C d,}) equals \/{/3 \ y <C x, /3 * r]x ■< //}. By our assumption we get a < \J{/3 \ y x, /3 * r]x ■C /i} and all suprema in [0, 00] are directed. Thus, there is some /3 * r]x <C ¡jl with y <C x such that ft < (3. Since Scott-open sets are upper sets it is immediate that ft*% < /3*r]x. So ft * tjy <C ¡jl follows, □
Lemma 6.5 For the maps t)d and f above we have f = / o rjo-
Proof. By Lemma 6,4 we have a * % <C r]x for all ft < 1 and i/«i, Thus, we compute
(24) (/ o r]D)x> {ft *A f(y) | a < 1, y < x}
= (\f{a | ft < 1}) *a (\/{f(y) | y -C x}) as * is Scott-cont, = 1 *.4 /(x) as / is Scott-cont, = №■
Conversely, let ft * % <C r]x. Clearly, a < 1 follows. But we also get y < x; otherwise, y would be in the open D\lx which does not contain x, contradicting a * j]y < j]x. Therefore, a *.4 f(y) < 1 *.4 f(x) = f(x) as / and * are monotone. This implies (/ o r]D) x < f(x). □
The building blocks a * rjx of simple possibility measures are sup-primes in
Lemma 6.6 Let X he a topological space, a E [0, oo] and x E X. Then a*r]x is a sup-prime in f^0oo](X).
Proof. This is evidently so in ease that a = 0, If a ^ 0 then suppose that a * r]x = ¡jl V v in ^(x) (since ^(D) is a distributive lattice we may assume equality). Suppose that ¡jl ^ a * r]x. Since ¡jl < a * r]x there must be some O E O(X) such that fj,(0) < a • rjx(0), which also implies a • r]x(0) = a. Likewise, iff ^ a*?]x then v < a*r]x implies the existence of some O' E O(X) such that v(0') < a ■ rjx(0') and again a ■ r]x(0') = a follows. Thus, x is contained in the open set O DO' and we compute
(25) (/j, v v) (0(~) o') = n(0 n o') V y(0 n o') pointwise supremum
< fj,(0) V v(0') as ¡jl and v are monotone
= a-Vx(OnO')
This contradiction shows fj, = a * j]x or u = a * j]x. □
Lemma 6.7 The map f is the unique morphism of [0,oo]-modules with f o Vd = f ■
Proof. It remains to verify the uniqueness of / and ¡(¡jl V v) = f(p) V /(f). Since / is monotone, it suffices to prove ¡(¡jl V v) < f(p) V /(f) for the latter. By definition, /(// V f) equals the supremum of all a *a f(x), where a * r]x <C ¡jl V f, Since all such elements a * r]x are sup-primes in ^(D) by Lemma 6,6, we may assume that a * r]x < ¡jl without loss of generality. Since / is monotone we obtain
(26) a *A f(x) = a *A f(r]x) as / o r]D = f
= /(« * Vx) as a *A f(Q) = f(a * () for all C < f(M)
</(/') V ./>).
Thus, ¡(¡jl V f) = \J{a *A f(x) | a * rjx <C // V v} < f(p) V /(f).
As for uniqueness, let g\ P s (I)) ^ A be any morphism of [0, oo]-modules such that gor/d = /■ Given ¡j, E Pu s (i)) we know by Theorem 5.4 that ¡jl is the directed supremum of all simple valuations v wav-below it. Such a valuation v is of the form (ai *r]xl)V (a2 * r]X2)... (ak * j]Xk) for some k > 1. In particular, all ai * r]Xi are wav-below ¡jl (1 < i < k). Thus,
(27) (j, = \/{a * rjx | a * rjx < //},
together with the fact that g preserves suprema and scalar multiplication, shows that g = /, noting that gorjo = f- n
Theorem 6.8 Let $D$ be a continuous domain. Then $P^s_{[0,\infty]}(D)$ is the free $[0,\infty]$-module over $D$.

Incidentally, given a Scott-continuous function $f \in [D \to E]$, we readily see that $P^s_{[0,\infty]}(f)$ is the unique morphism of $[0,\infty]$-modules $h\colon P^s_{[0,\infty]}(D) \to P^s_{[0,\infty]}(E)$ such that $\eta_E \circ f = h \circ \eta_D$.
The proof techniques employed in verifying the universal property of $P^s_{[0,\infty]}(D)$ seem quite general, but there are two spots where, on the face of it, these arguments won't carry over to the case of $[0,1]$-modules; namely, the proofs of Lemmas 6.2 and 6.3 need fractions, which won't be defined in $[0,1]$ in general. This is clearly unsatisfactory and there is need for abstracting the line of argument presented here. This can indeed be done and recent improvements of the work in [5] have shown that the algebras studied here for $L = [0,1]$ and $L = [0,\infty]$ are free not only for $L = [0,\infty]$, but for any continuous lattice $L$.
7 Related Work
It would be a worthwhile project to determine parallels, as well as differences, of our approach to quantitative semantics with work done by others. For example, there is a framework for generalized metric spaces by Marcello Bonsangue et al. [15], Philipp Sünderhauf's work on quantitative V-powerdomains [16], and R.C. Flagg's studies on quantales and continuity spaces [17]. The concept of $[0,1]$- and $[0,\infty]$-modules fits into the general framework of a powerdomain theory based on semirings [18], of which Reinhold Heckmann's work on abstract valuations for the Plotkin powerdomain [19] is the most recent example.
References
[1] J.G. Kemeny and J.L. Snell. Finite Markov Chains. Van Nostrand, 1960.
[2] M. Huth and M. Kwiatkowska. Quantitative Analysis and Model Checking. Technical report, Kansas State University. Department of Computing and Information Sciences. To appear in Logic in Computer Science 1997.
[3] S. Abramsky and A. Jung. Domain theory. In S. Abramsky, D. M. Gabbay, and T. S. E. Maibaum, editors, Handbook of Logic in Computer Science, volume 3, pages 1-168. Clarendon Press, 1994.
[4] L. A. Zadeh. Fuzzy Sets. Information and Control, 8:338-353, 1965.
[5] R. Heckmann and M. Huth. Quantitative Analysis, Topology, and Possibility Measures. Technical Report CIS-96-2, Kansas State University. Department of Computing and Information Sciences, December 1996.
[6] G. de Cooman, D. Ruan, and E.E. Kerre (editors). FAPT'95: Foundations and Applications of Possibility Theory. Advances in Fuzzy Systems - Applications and Theory Vol. 8. World Scientific, 1995.
[7] C. Jones and G. Plotkin. A probabilistic powerdomain of evaluations. In Logic in Computer Science, pages 186-195. IEEE Computer Society Press, 1989.
[8] O. Kirch. Bereiche und Bewertungen, 1993. Diploma thesis, 77 pp.
[9] P. R. Halmos. Measure Theory. D. Van Nostrand Company, 1950.
[10] M. Huth and M. Mislove. A Characterization of linear FS-lattices. Technical Report 1679, Technische Hochschule Darmstadt, September 1994.
[11] G. N. Raney. Completely distributive complete lattices. Proc. AMS, 3:667-680, 1952.
[12] M. Huth, A. Jung, and K. Keimel. Linear types, approximation, and topology. In Logic in Computer Science, pages 110-114. IEEE Computer Society Press, 1994.
[13] J. Koslowski. Note on Free Algebras Over Continuous Domains. Theoretical Computer Science. To appear.
[14] G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. Mislove, and D. S. Scott. A Compendium of Continuous Lattices. Springer Verlag, 1980.
[15] M.M. Bonsangue, F. van Breugel, and J.J.M.M. Rutten. Generalized metric spaces: completion, topology, and powerdomains via the Yoneda embedding. Technical Report CS-R9636, Centrum voor Wiskunde en Informatica, Computer Science/Department of Software Technology, September 1996.
[16] Ph. Sünderhauf. Products and Powerspaces in Quantitative Domain Theory. In M. Mislove, editor, In these proceedings, E.N.T.C.S. Elsevier, March 1997.
[17] R.C. Flagg. Quantales and Continuity Spaces. Algebra Universalis. To appear.
[18] R. Heckmann. Power Domain Constructions. PhD thesis, Universität des Saarlandes, 1990.
[19] R. Heckmann. Abstract valuations: A novel representation of Plotkin power domain and Vietoris hyperspace. In M. Mislove, editor, In these proceedings, E.N.T.C.S. Elsevier, March 1997.
|
2021-09-25 22:00:07
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8421424031257629, "perplexity": 3796.2252903298154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057775.50/warc/CC-MAIN-20210925202717-20210925232717-00653.warc.gz"}
|
https://unapologetic.wordpress.com/2011/07/21/pullbacks-on-cohomology/
|
# The Unapologetic Mathematician
## Pullbacks on Cohomology
We’ve seen that if $f:M\to N$ is a smooth map of manifolds that we can pull back differential forms, and that this pullback $f^*:\Omega(N)\to\Omega(M)$ is a degree-zero homomorphism of graded algebras. But now that we’ve seen that $\Omega(M)$ and $\Omega(N)$ are differential graded algebras, it would be nice if the pullback respected this structure as well. And luckily enough, it does!
Specifically, the pullback $f^*$ commutes with the exterior derivatives on $\Omega(M)$ and $\Omega(N)$, both of which are (somewhat unfortunately) written as $d$. If we temporarily write them as $d_M$ and $d_N$, then we can write our assertion as $f^*(d_N\omega)=d_M(f^*\omega)$ for all $k$-forms $\omega$ on $N$.
First, we show that this is true for a function $\phi\in\Omega^0(N)$. If we pick a test vector field $X\in\mathfrak{X}(M)$, then we can check
\displaystyle\begin{aligned}\left[f^*(d\phi)\right](X)&=\left[d\phi\circ f\right](f_*(X))\\&=\left[f_*(X)\right]\phi\\&=X(\phi\circ f)\\&=\left[d(\phi\circ f)\right](X)\\&=\left[d(f^*\phi)\right](X)\end{aligned}
For other $k$-forms it will make life easier to write out $\omega$ as a sum
$\displaystyle\omega=\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}$
Then we can write the left side of our assertion as
\displaystyle\begin{aligned}f^*\left(d\left(\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\right)&=f^*\left(\sum\limits_Id\alpha_I\wedge dx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\\&=\sum\limits_If^*(d\alpha_I)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\\&=\sum\limits_Id(f^*\alpha_I)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\\&=\sum\limits_Id(\alpha_I\circ f)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\end{aligned}
and the right side as
\displaystyle\begin{aligned}d\left(f^*\left(\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\right)&=d\left(\sum\limits_I(\alpha_I\circ f)f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\right)\\&=d\left(\sum\limits_I(\alpha_I\circ f)d(f^*x^{i_1})\wedge\dots\wedge d(f^*x^{i_k})\right)\\&=\sum\limits_Id(\alpha_I\circ f)\wedge d(f^*x^{i_1})\wedge\dots\wedge d(f^*x^{i_k})\\&=\sum\limits_Id(\alpha_I\circ f)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\end{aligned}
So these really are the same.
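As a quick concrete check (an example chosen purely for illustration), take $f:\mathbb{R}\to\mathbb{R}^2$ given by $f(t)=(t,t^2)$ and the $0$-form $\phi=xy$. Then $f^*\phi=t^3$, and

$\displaystyle f^*(d\phi)=f^*(y\,dx+x\,dy)=t^2\,dt+t(2t\,dt)=3t^2\,dt=d(t^3)=d(f^*\phi)$

so both sides agree, as they must.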
The useful thing about this fact that pullbacks commute with the exterior derivative is that it makes pullbacks into a chain map between the chains of the $\Omega^k(N)$ and $\Omega^k(M)$. And then immediately we get homomorphisms $H^k(N)\to H^k(M)$, which we also write as $f^*$.
If you want, you can walk the diagrams yourself to verify that a cohomology class in $H^k(N)$ is sent to a unique, well-defined cohomology class in $H^k(M)$, but it’d probably be more worth it to go back to read over the general proof that chain maps give homomorphisms on homology.
July 21, 2011 - Posted by | Differential Topology, Topology
|
2016-12-06 17:55:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919480681419373, "perplexity": 599.9385302301692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541950.13/warc/CC-MAIN-20161202170901-00443-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://academic.oup.com/lpr/pages/General_Instructions
|
# Information for Authors
Authors are encouraged to complete their copyright licence to publish form online
Submitting authors should bear in mind that the journal is a multidisciplinary journal, intended to be read by lawyers, mathematicians and statisticians.
• Please include an abstract of no more than 200 words, and up to six keywords.
• The bulk of material must not be too technical; place technical details in footnotes.
• Some formulae are permitted: general formulae in the text, particular formulae in footnotes.
• Papers should contain some explanation of what they are about that can be understood by everyone. However, the Editors accept that not all papers need to be entirely intelligible to all readers.
• References: Name and year in the text. Full references given at the back in alphabetical order by first named author. An alternative is acceptable with footnotes, numbered (with a superscript) by place of first appearance in the text. Look at earlier issues for examples.
• Footnotes: These should be numbered consecutively in the order in which they are first mentioned in the text. Footnotes in text, tables and legend should be identified by arabic numbers appearing in the text in superscript, for example 5 or 5–7 or 5,16 for unrelated footnotes.
### Submission of papers
Manuscripts may be submitted electronically using Microsoft Word, Wordperfect, LaTeX or on paper, in which case the original and three copies should be submitted. In either case (paper or electronic submission) before a paper is finally accepted, a signed letter from the corresponding author is required specifying the name, full mailing address and telephone/fax numbers and e-mail address of the person who will act as corresponding author. The covering letter should also specify, if applicable, information about possible duplicate publication problems, financial or other relationships that could give rise to conflicts of interest and any other information the editors may need to make an informed decision in accordance with established policies and practices.
Manuscripts may be submitted to any one of the seven editors. However, it would ease processing of submissions if the following general rules were followed: all submissions of a legal nature should go to Professor Cheng; those of a behavioural nature to Professor Koehler; those of a mathematical or statistical nature should go to Professor Gastwirth (North American submissions) or to Dr. Nordgaard, Professor Aitken, or Professor Franklin; those of a forensic science nature to Professor Taroni. If in doubt, send the submission to Dr. Nordgaard who will decide how to process it.
1. Dr. Anders Nordgaard (Editor-in-Chief), Swedish Police Authority-National Forensic Centre, SE-58194 Linköping, Sweden, Tel. +46 10 5628013. E-mail: anders.nordgaard@liu.se
2. Prof. C. G. G. Aitken, School of Mathematics, The King's Buildings, The University of Edinburgh, Mayfield Road, Edinburgh EH9 3JZ, UK. Tel: +44 (0) 131 650 4877; Fax: +44 (0) 131 650 6553; E-mail: C.G.G.Aitken@ed.ac.uk
3. Prof. E. Cheng, Vanderbilt University Law School, Nashville, TN 37203, USA. Tel: 615 322 2615; Fax: 615 343 8467; E-mail: edward.cheng@vanderbilt.edu
4. Prof. J. Franklin, School of Mathematics, University of New South Wales, Sydney 2052, Australia. Tel: +61 2 93857093; Fax: +61 2 93857123; E-mail: jim@maths.unsw.edu.au
5. Prof. J. Gastwirth, Department of Statistics, George Washington University, Washington, DC, 20052, USA. Tel: 202 994 6356; Fax: 202 994 6917; E-mail: jlgast@gwu.edu
6. Prof. J. J. Koehler, Northwestern University School of Law, Chicago, IL 60611-3069, USA. Tel: 001 (312) 503 4469; E-mail: jay.koehler@northwestern.edu
7. Prof. F. Taroni, Ecole des sciences criminelles, University of Lausanne, B.C.H. 1015, Lausanne-Dorigny, Switzerland. Tel: +41 21 692 4646; Fax: +41 21 692 4605; E-mail: franco.taroni@esc.unil.ch
Books for review should be sent to either Professor E. Bura, Department of Statistics, George Washington University, Washington DC, 20052, USA, E-mail: ebura@gwu.edu, or Dr. D. Lucy, School of Mathematics, the King's Buildings, The University of Edinburgh, Mayfield Road, Edinburgh EH9 3JZ, UK, E-mail: d.lucy@ed.ac.uk
### Manuscript Layout
Manuscripts must be written in English. The manuscript should be typed double-spaced, including the title page, abstract, text, acknowledgements, footnotes, tables and legends.
### Tables and figures
For paper submissions an original and three complete copies of all tables and figures (including photographs and line drawings) must be included.
### Funding
Details of all funding sources for the work in question should be given in a separate section entitled 'Funding'. This should appear before the 'Acknowledgements' section.
The following rules should be followed:
• The sentence should begin: ‘This work was supported by …’
• The full official funding agency name should be given, i.e. ‘National Institutes of Health’, not ‘NIH’ (full RIN-approved list of UK funding agencies)
• Grant numbers should be given in brackets as follows: ‘[grant number xxxx]’
• Multiple grant numbers should be separated by a comma as follows: ‘[grant numbers xxxx, yyyy]’
• Agencies should be separated by a semi-colon (plus ‘and’ before the last funding agency)
• Where individuals need to be specified for certain sources of funding the following text should be added after the relevant agency or grant number 'to [author initials]'.
An example is given here: ‘This work was supported by the National Institutes of Health [AA123456 to C.S., BB765432 to M.H.]; and the Alcohol & Education Research Council [hfygr667789].’
### Crossref Funding Data Registry
In order to meet your funding requirements authors are required to name their funding sources, or state if there are none, during the submission process. For further information on this process or to find out more about the CHORUS initiative please click here.
### Licence to Publish, and Offprints
Optional open access. Authors have the option, at an additional charge, to make their paper freely available online immediately upon publication, under the Oxford Open initiative. After your manuscript is accepted, as part of the mandatory licence form you will be asked to indicate whether or not you wish to pay to have your paper made freely available immediately. If you do not select the Open Access option, your paper will be published with standard subscription-based access and you will not be charged.
Oxford Open articles are published under Creative Commons licences. Authors publishing in Law, Probability and Risk can use the following Creative Commons licence for their articles:
• Creative Commons Attribution licence (CC BY)
You can pay Open Access charges using our Author Services site. This will enable you to pay online with a credit/debit card, or request an invoice by email or post. The open access charges applicable for Trusts and Trustees are:
Regular charge - £1850 / $3000 / €2450
Reduced Rate Developing country charge* - £925 / $1500 / €1225
Free Developing country charge* - £0 / $0 / €0
Discounted rates are available for authors based in some developing countries (click here for a list of qualifying countries).
Please note that these charges are in addition to any color/page charges that may apply.
Orders from the UK will be subject to the current UK VAT charge. For orders from the rest of the European Union, OUP will assume that the service is provided for business purposes. Please provide a VAT number for yourself or your institution and ensure you account for your own local VAT correctly.
If authors wish to order any offprints or copies of the issue in which their paper will appear, they can do so via the Oxford Journals Author Services site.
|
2017-02-23 01:03:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19252362847328186, "perplexity": 3125.124721988181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171066.47/warc/CC-MAIN-20170219104611-00445-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://mathoverflow.net/questions/79734/what-is-being-braided-in-sl2-z
|
# What is being braided in SL(2,Z)?
The braid group on 3 strands is a central extension of the modular group. By definition, $B_3 = \langle \sigma_1, \sigma_2: \sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2 \rangle$ This group has a central element (commuting with both $\sigma_1$ and $\sigma_2$): $\sigma_1\sigma_2\sigma_1\sigma_2\sigma_1\sigma_2$ The cosets get mapped to elements of PSL(2,Z) (which can act on the hyperbolic plane). $[\sigma_1] = \left[ \begin{array}{cc} 1 & 1 \\\\ 0 & 1\end{array}\right] \text{ and } [\sigma_2] = \left[ \begin{array}{cc} 1 & 0 \\\\ -1 & 1\end{array}\right]$ I wonder, in terms of the hyperbolic plane, what is being braided here (modulo the Garside element).
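For reference, a routine check on the matrices above verifies the braid relation and identifies the image of the central element: $\left[ \begin{array}{cc} 1 & 1 \\\\ 0 & 1\end{array}\right]\left[ \begin{array}{cc} 1 & 0 \\\\ -1 & 1\end{array}\right]\left[ \begin{array}{cc} 1 & 1 \\\\ 0 & 1\end{array}\right] = \left[ \begin{array}{cc} 0 & 1 \\\\ -1 & 0\end{array}\right] = \left[ \begin{array}{cc} 1 & 0 \\\\ -1 & 1\end{array}\right]\left[ \begin{array}{cc} 1 & 1 \\\\ 0 & 1\end{array}\right]\left[ \begin{array}{cc} 1 & 0 \\\\ -1 & 1\end{array}\right]$, and the central element $\sigma_1\sigma_2\sigma_1\sigma_2\sigma_1\sigma_2$ maps to $\left[ \begin{array}{cc} 0 & 1 \\\\ -1 & 0\end{array}\right]^2 = -I$, which is trivial in PSL(2,Z).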
If something were being braided, I'd think the relevant group would have a map to $B_3$, rather than from $B_3$. – S. Carnahan Nov 1 '11 at 18:28
Without thinking about this too carefully: I think what's getting braided are three of the Weierstrass points of an elliptic curve. More precisely: consider the space of distinct 3-tuples of points p,q,r on A^1. On the one hand, you can braid these points around. On the other hand, every path in this space (i.e. every braid) gives a family of elliptic curves
y^2 = (x-p)(x-q)(x-r)
and you can ask what the braid does to the homology of the elliptic curve; that's an element of SL_2(Z).
Don't you need some other identification between the homology of two elliptic curves in a family to get an element of $\text{SL}_2(\mathbb{Z})$? Otherwise you just get a homomorphism between two groups which are abstractly isomorphic to $\mathbb{Z}^2$. – Qiaochu Yuan Nov 1 '11 at 21:06
@Qiaochu: That's what you get from the Gauss-Manin connection. – Dan Petersen Nov 1 '11 at 21:24
|
2016-05-07 01:06:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8773075938224792, "perplexity": 525.4522100943917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461864953696.93/warc/CC-MAIN-20160428173553-00052-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://tex.stackexchange.com/questions/618702/nested-sequence-containing-sequences
|
# Nested sequence containing sequences
I am trying to create a nested sequence where each element is a sequence, but for some reason it does not compile and exits with an emergency stop.
MWE:
\documentclass{article}
\usepackage{expl3}
\ExplSyntaxOn
\cs_generate_variant:Nn \seq_put_right:Nn {Ne}
\cs_generate_variant:Nn \seq_put_right:Nn {NV}
\seq_new:N \l__outer_seq
\seq_new:N \l__inner_seq
{
\seq_clear:N \l__inner_seq
\seq_put_right:Ne \l__inner_seq {##1}
\seq_put_right:Ne \l__inner_seq {##2}
\seq_put_right:Ne \l__inner_seq {##3}
\seq_put_right:NV \l__outer_seq \l__inner_seq
}
\ExplSyntaxOff
\begin{document}
\section{MWE}
\end{document}
After the call of \add{1}{2}{3} the outer sequence should contain one element, which is an (inner) sequence containing the elements 1, 2 and 3.
Visual: [[1, 2, 3]] - the brackets represent a sequence
• So you want to append to \l__inner_seq the items in \l__outer_seq? Please try and explain what \l__inner_seq is expected to contain at the end of the process. Oct 12, 2021 at 14:38
• @egreg I specified the MWE with more information. I hope this is helpfull,. Oct 12, 2021 at 15:05
• :NV means it takes two arguments so \seq_put_right:NV \l__outer_seq is missing something. Oct 12, 2021 at 15:06
• Sorry, but sequences cannot contain other sequences. They're not something like Perl or Python arrays. Maybe you could explain what's the problem you want to solve. Oct 12, 2021 at 15:06
• @egreg but they could contain a variable that holds a sequence which is perhaps all the OP needs Oct 12, 2021 at 15:08
Your outer sequence can hold a variable holding an inner sequence, so:
The sequence \l__outer_seq contains the items (without outer braces):
> {\l__inner_seq }.
l.26 \seq_show:N\l__outer_seq
?
The sequence \l__inner_seq contains the items (without outer braces):
> {1}
> {2}
> {3}.
l.28 \seq_map_function:NN \l__outer_seq\seq_show:N
?
This shows the outer sequence, then maps over it, showing that the inner sequence holds 1, 2, 3.
\documentclass{article}
\usepackage{expl3}
\ExplSyntaxOn
\cs_generate_variant:Nn \seq_put_right:Nn {Ne}
\cs_generate_variant:Nn \seq_put_right:Nn {NV}
\seq_new:N \l__outer_seq
\seq_new:N \l__inner_seq
\NewDocumentCommand { \add } { m m m } % the definition line for \add is missing from the extracted answer; reconstructed here (three mandatory arguments, matching #1-#3 below)
{
\seq_clear:N \l__inner_seq
\seq_put_right:Ne \l__inner_seq {#1}
\seq_put_right:Ne \l__inner_seq {#2}
\seq_put_right:Ne \l__inner_seq {#3}
\seq_put_right:Nn \l__outer_seq \l__inner_seq
}
\ExplSyntaxOff
\begin{document}
\section{MWE}
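% The demo calls that produced the log shown above (lines 26 and 28 of the compiled file);
% reconstructed from that log rather than copied verbatim from the answer.
\ExplSyntaxOn
\add{1}{2}{3}
\seq_show:N \l__outer_seq                      % one item: {\l__inner_seq}
\seq_map_function:NN \l__outer_seq \seq_show:N % inner items: {1} {2} {3}
\ExplSyntaxOff
\end{document}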
• Thanks a lot! Your post almost solved all my questions. The command call e.g. \add{7}{8}{\textbf{9}} seems not to work if an argument contains a macro. Do you perhaps know a solution to that? Oct 12, 2021 at 16:33
• @Nyanyan I copied you and used :Ne so they are expanded perhaps you want :Nn so you just add the tokens as supplied Oct 12, 2021 at 16:35
• :Nn does not work. Tried to test it all day but no success :( Oct 12, 2021 at 16:52
• :Nn will add the argument not its expansion, If something doesnt work make an example and ask a new question Oct 12, 2021 at 17:07
• You're storing a pointer to the current value of \l__inner_seq, not the value… See tex.stackexchange.com/q/618725/4427 Oct 12, 2021 at 17:17
|
2022-07-03 03:05:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3581927418708801, "perplexity": 4394.380161200909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00018.warc.gz"}
|
https://www.physicsforums.com/threads/find-maximum-dissipated-power.942247/
|
# Find maximum dissipated power
## Homework Statement
A constant voltage source V with internal resistance r is connected to a load resistor R. The dissipated power by the resistor R is P=RV^2/(R+r)^2. Show that the maximum power dissipated by the resistor R is achieved when R = r. The maximum of P with respect to R is achieved when dP/dR = 0.
## Homework Equations

P = RV^2/(R+r)^2
## The Attempt at a Solution
Well, I think that I need to find the absolute maximum. That means I need to find the first derivative, find the critical points, evaluate the equation at the critical points and find the absolute maximum. And somehow show that R = r.
To find the first derivative, I used the chain rule and came up with this:
f(g(x)) -> f'(g(x))g'(x)
P = RV^2/(R+r)^2, with f = RV^2/x^2 and g = R+r
f' = (2Vx - 2xRV^2)/x^4, g' = 0
P'= (2V(R+r) - 2(R+r)RV^2)/(R+r)^4
= (2V - 2RV^2)/(R+r)^3
(2V - 2RV^2)/(R+r)^3 = 0
I don't know if this is correct and I don't know how to go on from here. The denominator can't be zero because the derivative won't exist at those points. But if I set the numerator to zero, it doesn't really help me because there's no r, only R. What am I doing wrong?
All help is appreciated, thanks!
berkeman
Mentor
(2V - 2RV^2)/(R+r)^3 = 0
I don't know if this is correct
Welcome to the PF.
I don't think it's correct, because at least the units don't match. I also don't quite follow your approach at the derivative. Certainly your thoughts to take the derivative and set it equal to zero are correct...
Let me post my work in a couple of minutes to see if it helps...
chocolatecake
berkeman
Mentor
(using LaTeX to post the math -- you can find the tutorial for LaTeX here: https://www.physicsforums.com/help/latexhelp/)
$$P(R) = R\frac{V^2}{(R+r)^2} = (RV^2)(R+r)^{-2}$$
$$\frac{dP(R)}{dR} = (RV^2)(-2)(R+r)^{-3}(1) + (R+r)^{-2}(V^2) = 0$$
(Distribute, simplify, simplify...)
$$-R^3-R^2r+Rr^2+r^3 =0$$
Which you should be able to solve. By inspection, one solution is r=R:
$$-r^3-r^3+r^3+r^3= 0$$
But maybe there are other solutions?
chocolatecake
berkeman
Mentor
BTW, I've never liked the quotient rule for derivatives. I always prefer to convert the equation into a product, and apply that differentiation rule instead. It's simpler for my tiny brain...
Thank you so much for your help!
I thought that the derivative might be wrong. Thanks for the idea to convert the equation into a product. It's so much easier, I'm always gonna do that from now on.
Anyways, I tried it on my own, but I got something different than you have. I'm not sure if my solution is correct:
##P'(R) = V^2 (R+r)^{-2 }+ (RV^2)(-2)(R+r)^{-3} = 0##
if R = r:
##V^2 (r+r)^{-2} + (rV^2)(-2)(r+r)^{-3} = 0##
##(rV^2)(-2)(r+r)^{-3} = \frac{-V^2}{(r+r)^2}##
##(rV^2)(-2)(2r)^{-3} = \frac{-V^2}{(2r)^2}##
##\frac{(rV^2)}{(2r)^3} = \frac{V^2}{2(2r)^2}##
##rV{^2} = \frac{V^2(2r)^3}{2(2r)^2}##
##rV{^2} = \frac{V^2(2r)}{2}##
##rV{^2} = \frac{2V^2 rV^2}{2}##
##rV{^2} = V^2 rV^2##
##0 = \frac{V^2 rV^2}{rV^2}##
##0 = V^2##
##0 = V##
So technically, if R = r, the derivative is zero when V is zero. But is that enough to show that the maximum power is dissipated when R = r?
Also, thank you for the link to the LaTeX tutorial! :)
berkeman
berkeman
Mentor
##P'(R) = V^2 (R+r)^{-2} + (RV^2)(-2)(R+r)^{-3} = 0##
if R = r:
##V^2 (r+r)^{-2} + (rV^2)(-2)(r+r)^{-3} = 0##
I wouldn't set r=R so early. Just see if you can simplify the derivative down to the final form that I showed, and then see if you can solve for R in terms of r then...
chocolatecake
I understand now how to find the derivative but I don't understand how to simplify to ##-R^3-R^2r+Rr^2+r^3 =0##
How did you get R^2r or Rr^2? I tried using the binomial theorem for (R+r)^2 and (R+r)^3 but that made everything even more confusing.
What I ended up with is basically this:
##\frac{V^2(R+r)+2(RV^2)}{2R^3+6R^2r+6Rr^2+2r^3}=0##
But this doesn't look right.
So I decided to pick random values for R and V instead:
R = 3
V = 4
##(3)(4^2)(-2)(3+r)^{-3}+(3+r)^{-2}(4^2)=0##
##48(-2)(3+r)^{-3}+(3+r)^{-2}(16)=0##
##\frac{-96}{(3+r)^3}+\frac{16}{(3+r)^2}=0##
##96=\frac{16(3+r)^3}{(3+r)^2}##
##96=16(3+r)##
##3=r##
Therefore, r = R
Is that correct?
berkeman
Mentor
I understand now how to find the derivative but I don't understand how to simplify to ##-R^3-R^2r+Rr^2+r^3 =0##
How did you get R^2r or Rr^2?
$$P(R) = R\frac{V^2}{(R+r)^2} = (RV^2)(R+r)^{-2}$$
$$\frac{dP(R)}{dR} = (RV^2)(-2)(R+r)^{-3}(1) + (R+r)^{-2}(V^2) = 0$$
$$\frac{-2RV^2}{(R+r)^3} + \frac{V^2}{(R+r)^2} = 0$$
Divide both sides by V^2 to get rid of the voltage dependence (it is not needed), put both of the LHS terms over a common denominator, multiply both sides by that denominator to get rid of it, distribute terms, gather terms, and I get to:
$$-R^3-R^2r+Rr^2+r^3 =0$$
I think it can be factored, but I haven't gotten that to work right away. I'll give it another shot. Can you work through simplifying the derivative and eliminating the voltage V now?
chocolatecake
gneill
Mentor
I think it can be factored, but I haven't gotten that to work right away.
See if ##(r - R)(r + R)^2## fills the bill
chocolatecake and berkeman
berkeman
Mentor
See if ##(r - R)(r + R)^2## fills the bill
Lordy, I was off in the weeds with 6 unknowns and 4 equations in the factoring... Thanks for the help!
chocolatecake and gneill
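Expanding the suggested factorization confirms that it matches the cubic, and that for positive resistances the only root is $R=r$, since $(r+R)^2>0$:

$$(r-R)(r+R)^2 = (r-R)(r^2+2rR+R^2) = r^3+r^2R-rR^2-R^3 = -R^3-R^2r+Rr^2+r^3$$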
haruspex
Homework Helper
Gold Member
But maybe there are other solutions?
An easier way is to make the denominator simpler by substituting S=R+r:
##\frac{P}{V^2}=\frac{S-r}{S^2}##
Differentiating wrt S, and multiplying by S^4 to get rid of the denominator: ##S^2=2S(S-r)##.
chocolatecake and cnh1995
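Carrying that substitution one step further (using $S = R+r > 0$, so dividing by $S$ is allowed):

$$S^2=2S(S-r) \;\Rightarrow\; S=2(S-r) \;\Rightarrow\; S=2r \;\Rightarrow\; R=S-r=r$$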
Divide both sides by V^2 to get rid of the voltage dependence (it is not needed), put both of the LHS terms over a common denominator, multiply both sides by that denominator to get rid of it
Ok, that makes sense to me now. If I do it, it looks like this:
##\frac{-2R(-2)V^2}{(R+r)^3}+\frac{V^2}{(R+r)^2}=0##
divide by ##V^2## :
##\frac{4R}{(R+r)^3}+\frac{1}{(R+r)^2}=0##
common denominator:
##\frac{4R(R+r)^2}{(R+r)^5}+\frac{(R+r)^3}{(R+r)^5}=0##
multiply by common denominator:
##4R(R+r)^2+(R+r)^3=0##
but after that it looks different from what you have:
expand the terms:
##4R(R^2+Rr+Rr+r^2)+R^3+3R^2r+3Rr^2+r^3=0##
and I end up with:
##5R^3+11R^2r+7Rr^2+r^3=0##
Where did I go wrong? Where are the negative signs in your equation coming from?
An easier way is to make the denominator simpler by substituting S=R+r:
##\frac{P}{V^2}=\frac{S-r}{S^2}##
Differentiating wrt S, and multiplying by S^4 to get rid of the denominator: ##S^2=2S(S-r)##.
But if I substitute R+r with S, then I have three variables. And I don't understand where the ##\frac{P}{V^2}## is coming from. If we have a P there, then there are four variables (S, V, P and R).
berkeman
Mentor
Ok, that makes sense to me now. If I do it, it looks like this:
##\frac{-2R(-2)V^2}{(R+r)^3}+\frac{V^2}{(R+r)^2}=0##
Where did the extra -2 come from in the numerator of the first term?
Where did the extra -2 come from in the numerator of the first term?
Well, the numerator was ##-2RV^2##, so I rewrote it as ##-2R(-2)V^2##.
But I just realized that this was nonsense because it would only work if the numerator was ##-2(R+V^2)##.
I'll try it again, give me a few minutes.
berkeman
Now I got it!
##-2R(R^2+Rr+Rr+r^2)+R^3+3R^2r+3Rr^2+r^3=0##
##-2R^3-2R^2r+(-2R^2r)-2Rr^2+R^3+3R^2r+3Rr^2+r^3=0##
##-R^3-R^2r+Rr^2+r^3=0##
and then if I substitute R=r, I get the same as you did: ##-R^3-R^3+R^3+R^3=0##
Thank you!!!
But now I still have to find the absolute maximum, or is this sufficient to show that R = r at the max. dissipated power?
berkeman
Mentor
But now I still have to find the absolute maximum, or is this sufficient to show that R = r at the max. dissipated power?
Given the help we got from @gneill in factoring, it looks like there is only one value of R that makes that equation equal to zero, right?
See if ##(r - R)(r + R)^2## fills the bill
And to show that it maximizes power, you could either evaluate the 2nd derivative at that point to verify that it is negative, or you could plug R = 1.1r and R = 0.9r into the original power equation to show that you get less power delivered to R when it's slightly more or less than matching r.
Good job, and way to hang in there!
chocolatecake
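A quick numerical version of that check, with the perturbed values chosen purely for illustration: using $P(R)=\frac{RV^2}{(R+r)^2}$,

$$P(r)=\frac{V^2}{4r}=0.2500\,\frac{V^2}{r},\qquad P(1.1r)=\frac{1.1\,V^2}{4.41\,r}\approx 0.2494\,\frac{V^2}{r},\qquad P(0.9r)=\frac{0.9\,V^2}{3.61\,r}\approx 0.2493\,\frac{V^2}{r}$$

so moving $R$ either way from $r$ lowers the dissipated power, consistent with a maximum at $R=r$.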
Great, I will try that!
berkeman
And thank you to the others who helped as well of course!
berkeman
OmCheeto
Gold Member
...
But maybe there are other solutions?
I came up with a total of 5 solutions.
Though, R = r was the only one that made sense.
The others were kind of impractical.
chocolatecake
haruspex
Homework Helper
Gold Member
But if I substitute R+r with S, then I have three variables.
No, S is instead of R, so still only two variables.
I don't understand where the ##\frac{P}{V^2}## is coming from
You started with ##P=\frac{RV^2}{(R+r)^2}##. I just divided through by V^2 then replaced R by S-r everywhere.
In physical terms, I am keeping r fixed and varying the total resistance. The maximum P occurs when the sum, S, is 2r.
chocolatecake
cnh1995
Homework Helper
Gold Member
P = I^2R = E^2R/(r+R)^2
∴ dP/dR = E^2[(r+R)^2 - 2R(r+R)]/(r+R)^4.
For dP/dR = 0,
the numerator of the above equation should be 0, which gives r=R.
chocolatecake
berkeman
Mentor
P = I^2R = E^2R/(r+R)^2
∴ dP/dR = E^2[(r+R)^2 - 2R(r+R)]/(r+R)^4.
For dP/dR = 0,
the numerator of the above equation should be 0, which gives r=R.
That looks like a nice reason to use the quotient rule for the differentiation in this case, I guess (I still dislike it in general).
But you still have to distribute and factor the numerator to be sure r=R is the only solution, don't you?
cnh1995
cnh1995
Homework Helper
Gold Member
But you still have to distribute and factor the numerator to be sure r=R is the only solution, don't you?
Yeah, the numerator becomes
r^2 + 2rR + R^2 - 2rR - 2R^2 = 0, which simplifies to
r^2 - R^2 = 0.
So r=R is the only sensible solution.
chocolatecake and berkeman
P = I^2R = E^2R/(r+R)^2
∴ dP/dR = E^2[(r+R)^2 - 2R(r+R)]/(r+R)^4.
But that's a different equation from the one in the problem, isn't it?
|
2020-07-09 12:33:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.87603759765625, "perplexity": 732.1352542515018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00354.warc.gz"}
|
https://brilliant.org/problems/many-square-brackets/
|
# Many square brackets
Algebra Level 4
$t(x) = \lfloor x\rfloor+\lfloor 2x\rfloor+\lfloor 3x\rfloor+\cdots + \lfloor \phi x\rfloor - \frac{\phi(\phi + 1)}2 x$
Find the fundamental period of the function $$t(x)$$ given that $$\phi$$ is a positive integer.
|
2017-07-25 05:08:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9599319696426392, "perplexity": 982.4389332675338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424961.59/warc/CC-MAIN-20170725042318-20170725062318-00233.warc.gz"}
|
https://math.bme.hu/~nandori/Virtual_lab/stat/special/ExtremeValue.xhtml
|
## 14. The Extreme Value Distribution
Extreme value distributions arise as limiting distributions for maximums or minimums (extreme values) of a sample of independent, identically distributed random variables, as the sample size increases. Thus, these distributions are important in statistics.
### The Standard Distribution for Maximums
#### The Distribution Function
Show that the function given below is a distribution function for a continuous distribution on $\mathbb{R}$:
$$G(v) = e^{-e^{-v}}, \quad v \in \mathbb{R}$$
The distribution defined by the distribution function in Exercise 1 is the type 1 extreme value distribution for maximums. It is also known as the Gumbel distribution in honor of Emil Gumbel. This distribution arises as the limit of the maximum of $n$ independent random variables, each with the standard exponential distribution (when this maximum is appropriately scaled and centered). This is the main reason that the distribution is special, and is the reason for the name.
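To make the limit explicit: if $X_1, \dots, X_n$ are independent standard exponential variables and $M_n = \max(X_1, \dots, X_n)$, then centering by $\ln n$ gives

$$\Pr(M_n - \ln n \le v) = \left(1 - e^{-(v + \ln n)}\right)^n = \left(1 - \frac{e^{-v}}{n}\right)^n \to e^{-e^{-v}} \quad \text{as } n \to \infty,$$

which is exactly the distribution function $G$ above.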
#### The Density Function
Show that the density function is given by
$$g(v) = e^{-v} \, e^{-e^{-v}}, \quad v \in \mathbb{R}$$
Graph the density function and show that the distribution is unimodal and skewed right. In particular, show that
1. $g$ is increasing on $(-\infty, 0)$ and decreasing on $(0, \infty)$, and hence the mode occurs at 0
2. $g$ is concave upward on $(-\infty, -c)$ and on $(c, \infty)$, and concave downward on $(-c, c)$, where $c = \ln\left(\frac{3 + \sqrt{5}}{2}\right)$.
In the random variable experiment, select the extreme value distribution and note the shape and location of the density function. Run the simulation 1000 times updating every 10 runs, and note the apparent convergence of the empirical density function to the probability density function.
#### The Quantile Function
Show that the quantile function is
$$G^{-1}(p) = -\ln(-\ln p), \quad p \in (0, 1)$$
Show that
1. the first quartile is $-\ln(\ln 4) \approx 0.3266$
2. the median is $-\ln(\ln 2) \approx 0.3665$
3. the third quartile is $-\ln\left(\ln \frac{4}{3}\right) \approx 1.2459$
In the quantile applet, select the extreme value distribution and note the shape and location of the density function and the distribution function. Compute the quantiles of order 0.1, 0.3, 0.6, and 0.9
#### Moments
The moment generating function of the standard extreme value distribution has a simple expression in terms of the gamma function.
Suppose that $V$ has the extreme value distribution for maximums. Show that the moment generating function is given by
$$m(t) = \mathbb{E}\left(e^{tV}\right) = \Gamma(1 - t), \quad t < 1$$
We can now compute the mean and variance. First, recall that the Euler constant, named for Leonhard Euler is defined by
$$\gamma = -\int_0^\infty e^{-x} \ln(x) \, dx \approx 0.5772156649$$
Suppose that $V$ has the extreme value distribution for maximums. Show that
1. $\mathbb{E}(V) = \gamma$
2. $\operatorname{var}(V) = \frac{\pi^2}{6}$
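Both moments can be read off from the moment generating function $m(t) = \Gamma(1 - t)$, using the standard values $\Gamma'(1) = -\gamma$ and $\Gamma''(1) = \gamma^2 + \frac{\pi^2}{6}$:

$$\mathbb{E}(V) = m'(0) = -\Gamma'(1) = \gamma, \qquad \mathbb{E}(V^2) = m''(0) = \Gamma''(1) = \gamma^2 + \frac{\pi^2}{6}, \qquad \operatorname{var}(V) = \frac{\pi^2}{6}.$$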
In the random variable experiment, select the extreme value distribution and note the shape and location of the mean and standard deviation bar. Run the simulation 1000 times updating every 10 runs, and note the apparent convergence of the empirical moments to the true moments.
### The General Extreme Value Distribution
As with many other distributions we have studied, the standard extreme value distribution can be generalized by applying a linear transformation to the standard variable. Thus, suppose that $V$ has the type 1 extreme value distribution for maximums, discussed above. First, $U = -V$ has the type 1 extreme value distribution for minimums. More generally, we can form the location-scale family associated with these standard distributions. If $a \in \mathbb{R}$ and $b > 0$, then
• $X = a + b V$ has the extreme value distribution for maximums with location parameter $a$ and scale parameter $b$.
• $X = a - b V$ has the extreme value distribution for minimums with location parameter $a$ and scale parameter $b$.
#### Distribution Functions
Show that $X = a + b V$ has distribution function
$$F(x) = e^{-e^{-(x - a)/b}}, \quad x \in \mathbb{R}.$$
Show that $X = a - b V$ has distribution function
$$F(x) = 1 - e^{-e^{(x - a)/b}}, \quad x \in \mathbb{R}.$$
#### Density Functions
Show that $X = a + b V$ has density function
$$f(x) = \frac{1}{b} e^{-(x - a)/b} \, e^{-e^{-(x - a)/b}}, \quad x \in \mathbb{R}.$$
Show that $X = a - b V$ has density function
$$f(x) = \frac{1}{b} e^{(x - a)/b} \, e^{-e^{(x - a)/b}}, \quad x \in \mathbb{R}.$$
#### Quantile Functions
Show that $X = a + b V$ has quantile function
$$F^{-1}(p) = a - b \ln(-\ln p), \quad p \in (0, 1).$$
Show that $X = a - b V$ has quantile function
$$F^{-1}(p) = a + b \ln(-\ln(1 - p)), \quad p \in (0, 1).$$
#### Moments
Show that $X = a + b V$ has moment generating function
$$M(t) = e^{a t} \, \Gamma(1 - b t), \quad t < \frac{1}{b}.$$
Show that $X = a - b V$ has moment generating function
$$M(t) = e^{a t} \, \Gamma(1 + b t), \quad t > -\frac{1}{b}.$$
Show that
1. $\mathbb{E}(a + b V) = a + b \gamma$
2. $\mathbb{E}(a - b V) = a - b \gamma$
3. $\operatorname{var}(a + b V) = \operatorname{var}(a - b V) = b^2 \frac{\pi^2}{6}$
#### Transformations
Show that
1. If $X$ has the standard exponential distribution then $U = \ln X$ has the standard extreme value distribution for minimums.
2. If $U$ has the standard extreme value distribution for minimums then $X = e^U$ has the standard exponential distribution.
More generally, show that
1. If $X$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$ then $U = \ln X$ has the extreme value distribution for minimums, with location parameter $\ln b$ and scale parameter $\frac{1}{k}$.
2. If $U$ has the extreme value distribution for minimums, with location parameter $a$ and scale parameter $b$, then $X = e^U$ has the Weibull distribution with shape parameter $\frac{1}{b}$ and scale parameter $e^a$.
|
2022-01-22 17:29:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 67, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9230583906173706, "perplexity": 312.53301460377384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303868.98/warc/CC-MAIN-20220122164421-20220122194421-00717.warc.gz"}
|
https://plus.google.com/+GoogleScienceFair/posts/3gEkVeTucGA
|
Shared publicly -
Pi, visualized in a single image.
Pi is what's known as an irrational number, which means that its decimal representation is both infinite and non-repeating. We've been using computers to calculate the digits of Pi for decades.
|
2015-03-02 21:53:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919049143791199, "perplexity": 719.8076926203717}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463028.70/warc/CC-MAIN-20150226074103-00290-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://haleighstern.com/tag/progress/
|
# Drop Assumptions to Progress
As mentioned in my last post, I am examining an information ecology according to the process outlined in Bonnie Nardi and Vicki O’Day’s Information Ecologies. This required developing an ethnography-driven methodology that centers on interviews and observation. Beginning my study, I quickly learned that assumptions were limiting my understanding and approach.
Understanding how technology is used within an information ecology (or team dynamic) seems to be easier if you are an active member. I perform this process every day, and I’ve had these conversations with my coworkers. Unfortunately, my level of familiarity served as an immediate disadvantage.
|
2021-12-05 18:32:43
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8666180372238159, "perplexity": 2434.1186843229307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363215.8/warc/CC-MAIN-20211205160950-20211205190950-00521.warc.gz"}
|
https://maharashtraboardsolutions.guru/maharashtra-board-12th-commerce-maths-solutions-chapter-8-ex-8-5-part-1/
|
# Maharashtra Board 12th Commerce Maths Solutions Chapter 8 Differential Equation and Applications Ex 8.5
Balbharati Maharashtra State Board Std 12 Commerce Statistics Part 1 Digest Pdf Chapter 8 Differential Equation and Applications Ex 8.5 Questions and Answers.
## Maharashtra State Board 12th Commerce Maths Solutions Chapter 8 Differential Equation and Applications Ex 8.5
Solve the following differential equations.
Question 1.
$$\frac{d y}{d x}+y=e^{-x}$$
Solution:
$$\frac{d y}{d x}+y=e^{-x}$$ …….(1)
This is the linear differential equation of the form $\frac{d y}{d x} + Py = Q$, with $P = 1$ and $Q = e^{-x}$.
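A brief sketch of the standard integrating-factor steps for (1) (the constant of integration is written as $c$):

$$\text{I.F.} = e^{\int 1\, dx} = e^{x}, \qquad \therefore y \cdot e^{x} = \int e^{-x} \cdot e^{x}\, dx = \int 1\, dx$$

$$\therefore y\, e^{x} = x + c$$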
This is the general solution.
Question 2.
$$\frac{d y}{d x}$$ + y = 3
Solution:
$$\frac{d y}{d x}$$ + y = 3
This is the linear differential equation of the form
This is the general solution.
Question 3.
x$$\frac{d y}{d x}$$ + 2y = x2 . log x.
Solution:
x$$\frac{d y}{d x}$$ + 2y = x2 . log x
∴ $$\frac{d y}{d x}+\left(\frac{2}{x}\right) \cdot y=x \cdot \log x$$ …….(1)
This is the linear differential equation of the form
This is the general solution.
Question 4.
(x + y)$$\frac{d y}{d x}$$ = 1
Solution:
(x + y) $$\frac{d y}{d x}$$ = 1
∴ $$\frac{d x}{d y}$$ = x + y
∴ $$\frac{d x}{d y}$$ – x = y
∴ $$\frac{d x}{d y}$$ + (-1) x = y ……(1)
This is the linear differential equation of the form
This is the general solution.
Question 5.
y dx + (x – y2) dy = 0
Solution:
y dx + (x – y2) dy = 0
∴ y dx = -(x – y2) dy
∴ $$\frac{d x}{d y}=-\frac{\left(x-y^{2}\right)}{y}=-\frac{x}{y}+y$$
∴ $$\frac{d x}{d y}+\left(\frac{1}{y}\right) \cdot x=y$$ ……(1)
This is the linear differential equation of the form $\frac{d x}{d y} + Px = Q$, with $P = \frac{1}{y}$ and $Q = y$.
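A brief sketch of the standard integrating-factor steps for (1):

$$\text{I.F.} = e^{\int \frac{1}{y}\, dy} = e^{\log y} = y, \qquad \therefore x \cdot y = \int y \cdot y\, dy = \int y^{2}\, dy$$

$$\therefore x y = \frac{y^{3}}{3} + c$$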
This is the general solution.
Question 6.
$$\frac{d y}{d x}$$ + 2xy = x
Solution:
$$\frac{d y}{d x}$$ + 2xy = x ………(1)
This is the linear differential equation of the form
This is the general solution.
Question 7.
(x + a) $$\frac{d y}{d x}$$ = -y + a
Solution:
(x + a) $$\frac{d y}{d x}$$ + y = a
∴ $$\frac{d y}{d x}+\left(\frac{1}{x+a}\right) y=\frac{a}{x+a}$$ ……..(1)
This is the linear differential equation of the form
This is the general solution.
Question 8.
dy + (2y) dx = 8 dx
Solution:
dy + (2y) dx = 8 dx
∴ $$\frac{d y}{d x}$$ + 2y = 8 …….(1)
This is the linear differential equation of the form
This is the general solution.
|
2022-12-02 07:36:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7687821388244629, "perplexity": 2625.018337915976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710898.93/warc/CC-MAIN-20221202050510-20221202080510-00013.warc.gz"}
|
http://www.oalib.com/relative/3271157
|
Search Results: 1 - 10 of 100 matches for " "
Jeffrey Galkowski Mathematics , 2014, Abstract: We study high energy resonances for the operator $-\Delta_{V,\partial\Omega}:=-\Delta+\delta_{\partial\Omega}\otimes V$ when $V$ has strong frequency dependence. The operator $-\Delta_{V,\partial\Omega}$ is a Hamiltonian used to model both quantum corrals and leaky quantum graphs. Since highly frequency dependent delta potentials are out of reach of the more general techniques in previous work, we study the special case where $\Omega=B(0,1)\subset \mathbb{R}^2$ and $V\equiv h^{-\alpha }V_0>0$ with $\alpha\leq 1$. Here $h^{-1}\sim \Re \lambda$ is the frequency. We give sharp bounds on the size of resonance free regions for $\alpha\leq 1$ and the location of bands of resonances when $5/6\leq \alpha\leq 1$. Finally, we give a lower bound on the number of resonances in logarithmic size strips: $-M\log \Re \lambda\leq \Im \lambda \leq 0$.
物理学报 , 2003, Abstract: A self-consistent model is proposed here to study the nonlinear resonance and the hysteresis phenomena in the vertical oscillations of a charged micro-particle in a RF sheath. In this model, the charging process of the micro-particle and the sheath dynamics is considered self-consistently. And also, various forces acting on the particle are fully taken into account in the Newton's equations of the micro-particle. By solving the equation, we simulate the motions of the micro-particle in the sheath, under the excitations of the probe. Numerical results reproduce well the recent experimental observations; at the same time, we find that these nonlinearities are not only due to the structure of the sheath, but also due to the charging process of the micro-particle, ion drag force, neutral gas friction and the excitation of the probe.
Mathematics , 2011, Abstract: We consider sets in uniformly perfect metric spaces which are null for every doubling measure of the space or which have positive measure for all doubling measures. These sets are called thin and fat, respectively. In our main results, we give sufficient conditions for certain cut-out sets being thin or fat.
Jeffrey Galkowski Mathematics , 2014, Abstract: We study high energy resonances for the operators $-\Delta +\delta_{\partial\Omega}\otimes V$ and $-\Delta+\delta_{\partial\Omega}'\otimes V\partial_\nu$ where $\Omega$ is strictly convex with smooth boundary, $V:L^2(\partial\Omega)\to L^2(\partial\Omega)$ may depend on frequency, and $\delta_{\partial\Omega}$ is the surface measure on $\partial\Omega$. These operators are model Hamiltonians for quantum corrals and leaky quantum graphs. We give a quantum version of the Sabine Law from the study of acoustics for both the $\delta$ and $\delta'$ interactions. It characterizes the decay rates (imaginary parts of resonances) in terms of the system's ray dynamics. In particular, the decay rates are controlled by the average reflectivity and chord length of the barrier. For the $\delta$ interaction we show that generically there are infinitely many resonances arbitrarily close to the resonance free region found by our theorem. In the case of the $\delta'$ interaction, the quantum Sabine law gives the existence of a resonance free region that converges to the real axis at a fixed polynomial rate and is optimal in the case of the unit disk in the plane. As far as the author is aware, this is the only class of examples that is known to have resonances converging to the real axis at a fixed polynomial rate but no faster. The proof of our theorem requires several new technical tools. We adapt intersecting Lagrangian distributions to the semiclassical setting and give a description of the kernel of the free resolvent as such a distribution. We also construct a semiclassical version of the Melrose--Taylor parametrix for complex energies. We use these constructions to give a complete microlocal description of boundary layer operators and to prove sharp high energy estimates on the boundary layer operators in the case that $\partial\Omega$ is smooth and strictly convex.
Physics , 2010, Abstract: We demonstrate an efficient double-layer light absorber by exciting plasmonic phase resonances. We show that the addition of grooves can cause mode splitting of the plasmonic waveguide cavity modes and all the new resonant modes exhibit large absorptivity greater than 90%. Some of the generated absorption peaks have wide-angle characteristics. Furthermore, we find that the proposed structure is fairly insensitive to the alignment error between different layers. The proposed plasmonic nano-structure designs may have exciting potential applications in thin film solar cells, thermal emitters, novel infrared detectors, and highly sensitive bio-sensors.
Physics , 2009, DOI: 10.1103/PhysRevA.81.033408 Abstract: Nonlinear magneto-optical resonances have been measured in an extremely thin cell (ETC) for the D1 transition of rubidium in an atomic vapor of natural isotopic composition. All hyperfine transitions of both isotopes have been studied for a wide range of laser power densities, laser detunings, and ETC wall separations. Dark resonances in the laser induced fluorescence (LIF) were observed as expected when the ground state total angular momentum F_g was greater than or equal to the excited state total angular momentum F_e. Unlike the case of ordinary cells, the width and contrast of dark resonances formed in the ETC dramatically depended on the detuning of the laser from the exact atomic transition. A theoretical model based on the optical Bloch equations was applied to calculate the shapes of the resonance curves. The model averaged over the contributions from different atomic velocity groups, considered all neighboring hyperfine transitions, took into account the splitting and mixing of magnetic sublevels in an external magnetic field, and included a detailed treatment of the coherence properties of the laser radiation. Such a theoretical approach had successfully described nonlinear magneto-optical resonances in ordinary vapor cells. Although the values of certain model parameters in the ETC differed significantly from the case of ordinary cells, the same physical processes were used to model both cases. However, to describe the resonances in the ETC, key parameters such as the transit relaxation rate and Doppler width had to be modified in accordance with the ETC's unique features. Agreement between the measured and calculated resonance curves was satisfactory for the ETC, though not as good as in the case of ordinary cells.
Mathematics , 2007, DOI: 10.1063/1.2749703 Abstract: We prove an abstract criterion stating resolvent convergence in the case of operators acting in different Hilbert spaces. This result is then applied to the case of Laplacians on a family $X_\varepsilon$ of branched quantum waveguides. Combining it with an exterior complex scaling we show, in particular, that the resonances on $X_\varepsilon$ approximate those of the Laplacian with "free" boundary conditions on $X_0$, the skeleton graph of $X_\varepsilon$.
Physics , 2013, DOI: 10.1103/PhysRevB.89.024510 Abstract: We study analytically the evolution of superconductivity in clean quasi-two-dimensional multiband superconductors as the film thickness enters the nanoscale region by mean-field and semiclassical techniques. Tunneling into the substrate and finite lateral size effects, which are important in experiments, are also considered in our model. As a result, it is possible to investigate the interplay between quantum coherence effects, such as shape resonances and shell effects, with the potential to enhance superconductivity, and the multiband structure and the coupling to the substrate that tend to suppress it. The case of magnesium diboride, which is the conventional superconductor with the highest critical temperature, is discussed in detail. Once the effect of the substrate is considered, we still observe quantum size effects such as the oscillation of the critical temperature with the thickness but without a significant enhancement of superconductivity. In thin films with a sufficiently longer superconducting coherence length, it is, however, possible to increase the critical temperature above the bulk limit by tuning the film thickness or lateral size.
Frederick Betz Open Journal of Social Sciences (JSS) , 2019, DOI: 10.4236/jss.2019.71001 Abstract: The usefulness of a theoretical metric for civilization is that it can help to identify the kinds of progress which society can make that is universalized for all humanity. Societal systems perform the functions which provide the values and performance of the society, and wherein societal problems occur. In the concept of the level of “civilization” of a society, four kinds of measures can assess the progress of a society in attaining universalized values: Truth, Good, Beautiful, and Wealth. The value of Truth in our civilization is methodologically investigated by science. The value of Good in our civilization is politically pursued through democracy. The value of Beautiful in our civilization is seen in the preservation of the environment of the Earth. The value of Wealth in our civilization is generated through industrialization of societal production. We apply the theory to the historical case of the International Court of Justice and Yugoslav War Crimes to examine empirical evidence about the validity of a theoretical metric.
Charles F. Gammie Physics , 1999, DOI: 10.1086/312207 Abstract: The efficiency of thin disk accretion onto black holes depends on the inner boundary condition, specifically the torque applied to the disk at the last stable orbit. This is usually assumed to vanish. I estimate the torque on a magnetized disk using a steady magnetohydrodynamic inflow model originally developed by Takahashi et al., 1990. I find that the efficiency epsilon can depart significantly from the classical thin disk value. In some cases epsilon > 1, i.e. energy is extracted from the black hole.
https://map-pdma.up.pt/2022-2023-edition/
# 2022/2023 edition
Information about acceptance of candidates to MAP-PDMA can be consulted here and here
Information about the first semester Information_2022_2023_Sem1
SEMINARS
Seminar #10: MAP-PDMA PhD Program 2022/2023 — December 9, 15:45
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/94769148463?pwd=YWRVYjEzMkk2bmtMY2Q2OEhFM0hUdz09
* Speaker: Roman Chertovskih, Research Center for Systems and Technologies (SYSTEC), Engineering Faculty, University of Porto
email: roman@fe.up.pt
* Title: Dynamo theory: From Physics and Engineering to Mathematics and Supercomputing
* Abstract: We will discuss the dynamo problem [1] – magnetic field generation by flows of an electrically conducting fluid. We will survey the magnetic activity in the Universe: magnetic fields of planets, stars and galaxies [2], and consider an important engineering application – magnetic fields in liquid metals cooling a reactor. To simulate such magnetic phenomena, the governing equations will be introduced and the basics of magnetohydrodynamics [3] will be discussed. We also plan to overview the mathematical methods used in the analysis of such problems: ranging from the dynamical systems theory [4] and the equivariant bifurcation theory [5] to the numerical spectral methods [6]. Finally, the use of high performance computers for the considered problems will be addressed.
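For orientation (a standard form added here, not taken from the announcement): the kinematic dynamo problem is governed by the magnetic induction equation for the magnetic field $\mathbf{B}$ advected by a velocity field $\mathbf{u}$ with magnetic diffusivity $\eta$,
$$\frac{\partial \mathbf{B}}{\partial t}=\nabla\times(\mathbf{u}\times\mathbf{B})+\eta\,\nabla^{2}\mathbf{B},\qquad \nabla\cdot\mathbf{B}=0 .$$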
[1] Moffatt K., Dormy E. Self-Exciting Fluid Dynamos. Cambridge Texts in Applied Mathematics, Cambridge University Press, 2019.
[2] Rudiger G., Hollerbach R. The Magnetic Universe: Geophysical and Astrophysical Dynamo Theory. Wiley, 2004.
[3] Molokov S., Moreau R., Moffatt K. Magnetohydrodynamics: Historical Evolution and Trends. Springer, 2007.
[4] Bohr T., Jensen M.H., Paladin G., Vulpiani A. Dynamical Systems Approach to Turbulence. Cambridge Nonlinear Science Series, Cambridge University Press, 2005.
[5] Chossat P., Lauterbach R. Methods in Equivariant Bifurcations and Dynamical Systems. Advanced Series in Nonlinear Dynamics, World Scientific, 2000.
[6] Canuto C., Hussaini M.Y., Quarteroni A., Zang T.A. Spectral Methods: Fundamentals in Single Domains. Scientific Computation series, Springer, 2006.
Seminar #9: MAP-PDMA PhD Program 2022/2023 — December 9, 14:30
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/94769148463?pwd=YWRVYjEzMkk2bmtMY2Q2OEhFM0hUdz09
* Speaker: Ariel Martín Pacetti, Centro de Investigação e Desenvolvimento em Matemática e Aplicações, University of Aveiro
email: apacetti@ua.pt
* Title: Zeta function of projective varieties
* Abstract: The main goal of the present talk is to define the local and global zeta function of algebraic varieties, with special emphasis on particular examples. We will see how well known functions (like Riemann’s zeta function) appear in this way. We will state some hard open problems regarding zeta functions, and some important results obtained during the last years. The presentation is aimed at a general audience.
Seminar #8: MAP-PDMA PhD Program 2022/2023 — December 2, 14:30
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/94769148463?pwd=YWRVYjEzMkk2bmtMY2Q2OEhFM0hUdz09
* Speaker: Bruno M. P. M. Oliveira, FCNAUP and LIAAD – INESC TEC, University of Porto
email: bmpmo@fcna.up.pt
* Title: A mathematical model of immune responses with CD4+ T cells and Tregs
* Abstract: We use a set of ordinary differential equations (ODE) to study mathematically the effect of regulatory T cells (Tregs) in the control of immune responses by CD4+ T cells. T cells trigger an immune response in the presence of their specific antigen, while regulatory T cells (Tregs) play a role in limiting auto-immune diseases due to their immune-suppressive ability, see Pinto et al. [5], Yusuf et al. [6] and references within.
We fitted this model to quantitative data regarding the CD4+ T cell numbers from the 28 days following the infection of mice with lymphocytic choriomeningitis virus LCMV. We observed the proliferation of T cells and, to a lower extent, Tregs during the immune activation phase following infection and subsequently, during the contraction phase, a smooth transition from faster to slower death rates, see Afsar et al. [1].
Furthermore, we have obtained explicit exact formulas that give the relationship between the concentration of T cells, the concentration of Tregs, and the antigenic stimulation of T cells, when the system is at equilibria, stable or unstable. We found a region of bistability, where 2 stable equilibria exist. Making a cross section along the antigenic stimulation of T cells parameter, we observe a hysteresis bounded by two thresholds of antigenic stimulation of T cells. Moreover, there are values of the slope parameter of the tuning, between the antigenic stimulation of T cells and the antigenic stimulation of Tregs, for which an isola center bifurcation appears and, for some other values, there is a transcritical bifurcation, see Yusuf et al. [6] and references within.
Time evolutions of this model were also used to simulate the appearance of autoimmunity both due to cross-reactivity or due to bystander proliferation, and to simulate the suppression of the autoimmune line of T cells after a different line of T cells responds to a pathogen infection, see Burroughs et al. [2, 3] and Oliveira et al. [4].
[1] A. Afsar, F. Martins, B. M. P. M. Oliveira, and A. A. Pinto. A fit of CD4+ T cell immune response to an infection by lymphocytic choriomeningitis virus. Mathematical Biosciences and Engineering, 16(6):7009–7021, 2019.
[2] N. J. Burroughs, B. M. P. M. Oliveira, and A. A. Pinto. Regulatory T cell adjustment of quorum growth thresholds and the control of local immune responses. Journal of Theoretical Biology, 241:134–141, 2006.
[3] N. J. Burroughs, M. Ferreira, B. M. P. M. Oliveira, and A. A. Pinto. Autoimmunity arising from bystander proliferation of T cells in an immune response model. Mathematical and Computer Modelling, 53:1389–1393, 2011.
[4] B. M. P. M. Oliveira, R. Trinchet, M. V. Otero-Espinar, A. A. Pinto, and N. J. Burroughs. Modelling the suppression of autoimmunity after pathogen infection. Mathematical Methods in the Applied Sciences, 41(18):8565–8570, 2018.
[5] A. A. Pinto, N. J. Burroughs, F. Ferreira, and B. M. P. M. Oliveira. Dynamics of immunological models. Acta Biotheoretica, 58:391–404, 2010.
[6] A. A. Yusuf, Isabel P. Figueiredo, A. Afsar, N. J. Burroughs, B. M. P. M. Oliveira, and A. A. Pinto. The effect of a linear tuning between the antigenic stimulations of CD4+T cells and CD4+ Tregs. Mathematics, 58:391404, 2010.
Seminar #7: MAP-PDMA PhD Program 2022/2023 — November 25, 14:30
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/92403741454?pwd=UE43T1c3M3g5Y3VoWlZENkMwaGcrUT09
* Speaker: Sílvio Gama, Centre of Mathematics, University of Porto
email: smgama@fc.up.pt
* Title: Point vortices, regular islands and polynomials
* Abstract: After a brief description of what point vortices and passive particles are – on the plane and on the sphere – and how they can mimic real flows, we will derive their dynamic equations from the two-dimensional incompressible Euler equation. Next, we establish the connection between the relative equilibria of identical point (plane) vortices and the first and second derivatives of the polynomial that has the positions of the vortices as roots. Finally, we will present some open problems, as well as simulations based on computational models.
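As a point of reference (a standard formulation, added here rather than quoted from the abstract): $N$ point vortices with circulations $\Gamma_k$ at positions $z_k$ in the complex plane evolve according to
$$\frac{d\bar{z}_k}{dt}=\frac{1}{2\pi i}\sum_{j\neq k}\frac{\Gamma_j}{z_k-z_j},\qquad k=1,\dots,N ,$$
which is the system obtained from the two-dimensional incompressible Euler equation mentioned above.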
Seminar #6: MAP-PDMA PhD Program 2022/2023 — November 18, 15:45
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/94769148463?pwd=YWRVYjEzMkk2bmtMY2Q2OEhFM0hUdz09
* Speaker: Filipe Martins, Centre of Mathematics, University of Porto
email: luis.f.martins@fc.up.pt
* Title: Bifurcations in evolutionary matrix models in population dynamics
* Abstract: In this talk I will consider evolutionary game theoretic versions of a general class of matrix models frequently used in population dynamics. The evolutionary components model the dynamics of a vector of mean phenotypic traits subject to natural selection [1]. One fundamental question in population and mathematical biology is population extinction and persistence, i.e., stability/instability of the extinction equilibrium and of other non-extinction equilibria. I will discuss this question through the prism of dynamic bifurcations. As the model parameters vary, more precisely the inherent population growth rate, dynamic bifurcations occur, opening the possibility for population persistence and recurrence, or for possible extinction. The results present a complete answer for a general class of evolutionary matrix models often used in mathematical biology, the mathematical assumption being that the matrix is primitive. I will present an application of the general theoretical results to an evolutionary version of a classic Ricker model. This application illustrates the theoretical results and, in addition, several other interesting dynamic phenomena.
Most part of the results and conclusions that I will talk about in this seminar are presented in [2] (joint work with Jim M. Cushing, Alberto Pinto and Amy Veprauskas).
[1] Joel S. Brown and Thomas L. Vincent, Evolutionary Game Theory, Natural Selection and Darwinian Dynamics, Cambridge University Press, 2005.
[2] “A bifurcation theorem for evolutionary matrix models with multiple traits”, Journal of Mathematical Biology, Vol. 75, Issue 2, pp. 491–520, 2017.
Seminar #5: MAP-PDMA PhD Program 2022/2023 — November 18, 14:30
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/94769148463?pwd=YWRVYjEzMkk2bmtMY2Q2OEhFM0hUdz09
* Speaker: António Machiavelo, Centre of Mathematics, University of Porto
email: ajmachia@fc.up.pt
* Title: Counting families of combinatorial objects with complex analysis
* Abstract: The methods of Analytic Combinatorics [1], namely using Complex Analysis to count families of combinatorial objects, have been extensively used and refined in the last years. We will give an overview of those methods based on the works [2] and [3], and will describe some of the challenges of using them in some intricate settings related to Theoretical Computer Science.
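A toy instance of the kind of coefficient extraction these methods rest on (our illustration, not taken from the references below): binary strings of length $n$ are counted by the ordinary generating function
$$f(z)=\sum_{n\ge 0}2^{n}z^{n}=\frac{1}{1-2z},\qquad [z^{n}]\,f(z)=2^{n},$$
and Analytic Combinatorics studies how the singularities of such functions (here the pole at $z=1/2$) dictate the growth of their coefficients.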
[1] P. Flajolet and R. Sedgewick. Analytic Combinatorics, Cambridge University Press, 2008.
[2] Sabine Broda, António Machiavelo, Nelma Moreira, Rogério Reis, A Hitchhiker’s Guide to Descriptional Complexity Through Analytic Combinatorics, Theoretical Computer Science 528 (2014) 85–100.
[3] Sabine Broda, António Machiavelo, Nelma Moreira, Rogério Reis, Analytic Combinatorics and Descriptional Complexity of Regular Languages on Average, ACM SIGACT News, 51(1):38–56, March 2020.
Seminar #4: MAP-PDMA PhD Program 2022/2023 — November 11
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/92403741454?pwd=UE43T1c3M3g5Y3VoWlZENkMwaGcrUT09
* Speaker: Thomas Kahl, Centre of Mathematics, University of Minho
email: kahl@math.uminho.pt
* Title: Algebraic topology and concurrency theory
* Abstract: It has been discovered relatively recently that concepts and methods from algebraic topology may be employed profitably in concurrency theory, the field of computer science that studies systems of simultaneously executing processes. A very expressive combinatorial-topological model of concurrency is given by higher-dimensional automata. In this talk, I will present a method to extract homological information from HDAs that is meaningful from a computer science point of view.
Seminar #3: MAP-PDMA PhD Program 2022/2023 — October 28, 14:30
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/92403741454?pwd=UE43T1c3M3g5Y3VoWlZENkMwaGcrUT09
* Speaker: Pedro Patrício, Centre of Mathematics, University of Minho
E-mail: pedro@math.uminho.pt
* Title: Applicable generalized inverses of matrices
* Abstract: In 1906, Moore formulated the generalized inverse of a matrix in an algebraic setting, which was published in 1920. Kaplansky and Penrose, in 1955, independently showed that the Moore "reciprocal inverse" could be represented by four equations, now known as the Moore–Penrose equations. Generalized inverses, as we know them presently, cover a wide range of mathematical areas, such as matrix theory, operator theory, C*-algebras, semigroups or rings. They appear in numerous applications that include areas such as linear estimation, differential and difference equations, Markov chains, graphics, cryptography, coding theory, incomplete data recovery and robotics. In this seminar we will focus on the study of the generalized inverses of von Neumann, group, outer and Moore–Penrose type in a purely algebraic setting and in a matrix setting. We will present some recent results dealing with the generalized inverse of certain types of matrices over rings, emphasizing the proof techniques used. We will address some applications.
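For completeness (these equations are standard and only alluded to in the abstract above): a Moore–Penrose inverse $X$ of a matrix $A$ is characterized by the four equations
$$AXA=A,\qquad XAX=X,\qquad (AX)^{*}=AX,\qquad (XA)^{*}=XA,$$
and it is the unique $X$ satisfying all four; keeping only some of them leads to the related notions mentioned in the talk, e.g. a von Neumann inverse satisfies the first equation and an outer inverse the second.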
Seminar #2: MAP-PDMA PhD Program 2022/2023 October 21, 14h30
* Place: Seminar Room of DMat-UMinho (3.08), and via zoom at
https://videoconf-colibri.zoom.us/j/94769148463?pwd=YWRVYjEzMkk2bmtMY2Q2OEhFM0hUdz09
* Speaker: Ana Jacinta Soares, Centre of Mathematics, University of Minho, email: ajsoares@math.uminho.pt
* Title: Kinetic models and applications to biological systems
* Abstract: In many problems arising in Applied Mathematics, in particular in the interface of mathematics with natural and life sciences, one important aspect is the presence of different scaling regimes of evolution. In particular, when modeling biological systems, one should be able to describe the global behaviour of the cellular populations in terms of macroscopic equations and also the cellular dynamics and the biological expression of cells in terms of microscopic equations. The kinetic theory of mixtures, a branch of statistical mechanics, could provide a rather good approach to the microscopic description and, at the same time, it allows one to obtain the corresponding macroscopic analogue as the hydrodynamic limit of the kinetic equations.
In this seminar, I will present some interesting problems and applications of the kinetic theory to biological systems.
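As a reminder of the objects involved (a standard kinetic equation, added for orientation and not taken from the talk): the one-particle distribution function $f(t,x,v)$ of each species in the mixture evolves according to a Boltzmann-type equation
$$\frac{\partial f}{\partial t}+v\cdot\nabla_{x}f=Q(f,f),$$
where $Q$ is the collision operator, and the macroscopic equations are recovered by taking moments in the velocity variable $v$.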
Seminar #1: MAP-PDMA PhD Program 2022/2023 October 14, 14h30
$$u_t-\textrm{div} \big(|u|^{m-1} |Du|^{p-2} Du\big)=0 , \qquad p>1 .$$
http://www.soulmachine.me/blog/2015/06/21/deserialize-a-json-array-to-a-singly-linked-list/
# Deserialize a JSON Array to a Singly Linked List
We know that Jackson is very convenient for deserializing a JSON string into an ArrayList, HashMap or POJO object.
But how do you deserialize a JSON array, such as [1,2,3,4,5], into a singly linked list?
The definition of singly linked list is as follows:
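The snippet the post refers to is not reproduced above; a typical minimal definition (our sketch, not necessarily the original) looks like this:

```java
// A plain singly linked list node (illustrative sketch).
public class SinglyLinkedListNode<E> {
    public E val;                        // value stored in this node
    public SinglyLinkedListNode<E> next; // next node, or null at the tail
}
```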
Well, the solution is quite a bit simpler than I expected: just make SinglyLinkedListNode implement java.util.List or java.util.Collection, and Jackson will automatically deserialize it!
This idea comes from Tatu Saloranta, the discussion post is here, special thanks to him!
Here is the complete code of SinglyLinkedListNode:
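The original listing is not shown above either; the following is a minimal sketch of the idea (assuming a generic element type and no removal support), built on java.util.AbstractCollection so that the node is itself a java.util.Collection:

```java
import java.util.AbstractCollection;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Illustrative sketch, not the post's original code: a node that is also a Collection,
// so Jackson's collection deserializer can bind a JSON array such as [1,2,3,4,5] to it.
public class SinglyLinkedListNode<E> extends AbstractCollection<E> {

    public E val;                          // payload of this node
    public SinglyLinkedListNode<E> next;   // next node, or null at the tail
    private boolean hasValue = false;      // distinguishes an empty head from a filled one

    public SinglyLinkedListNode() {        // Jackson needs a no-arg constructor
    }

    @Override
    public boolean add(E e) {
        // Jackson calls add() once per JSON array element, always on the head node.
        if (!hasValue) {
            val = e;
            hasValue = true;
            return true;
        }
        SinglyLinkedListNode<E> cur = this;
        while (cur.next != null) {
            cur = cur.next;
        }
        SinglyLinkedListNode<E> node = new SinglyLinkedListNode<>();
        node.val = e;
        node.hasValue = true;
        cur.next = node;
        return true;
    }

    @Override
    public Iterator<E> iterator() {
        return new Iterator<E>() {
            private SinglyLinkedListNode<E> cur = hasValue ? SinglyLinkedListNode.this : null;

            @Override
            public boolean hasNext() {
                return cur != null;
            }

            @Override
            public E next() {
                if (cur == null) {
                    throw new NoSuchElementException();
                }
                E value = cur.val;
                cur = cur.next;
                return value;
            }
        };
    }

    @Override
    public int size() {
        int n = 0;
        for (E ignored : this) { // walks the iterator above
            n++;
        }
        return n;
    }
}
```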
Then come the unit tests:
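The original tests are not shown either; here is a short, hypothetical example of how the class sketched above could be exercised with Jackson:

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SinglyLinkedListNodeTest {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Deserialize the JSON array straight into the linked list.
        SinglyLinkedListNode<Integer> head = mapper.readValue(
                "[1,2,3,4,5]",
                new TypeReference<SinglyLinkedListNode<Integer>>() { });
        // Walk the nodes to check that the order survived the round trip.
        for (SinglyLinkedListNode<Integer> cur = head; cur != null; cur = cur.next) {
            System.out.println(cur.val);
        }
    }
}
```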
## Related Posts
Deserialize a JSON String to a Binary Tree
http://physics.stackexchange.com/tags/free-fall/hot
# Tag Info
94
Using your definition of "falling," heavier objects do fall faster, and here's one way to justify it: consider the situation in the frame of reference of the center of mass of the two-body system (CM of the Earth and whatever you're dropping on it, for example). Each object exerts a force on the other of $$F = \frac{G m_1 m_2}{r^2}$$ where $r = x_2 - x_1$ ...
49
I am sorry to say, but your colleague is right. Of course, air friction acts in the same way. However, the friction is, in good approximation, proportional to the square of the velocity, $F=kv^2$. At terminal velocity, this force balances gravity, $$m g = k v^2$$ And thus $$v=\sqrt{\frac{mg}{k}}$$ So, the terminal velocity of a ball 10 times as ...
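Carrying the quoted formula through (an added back-of-the-envelope step, not part of the original answer): since $v=\sqrt{mg/k}$ scales as $\sqrt{m}$, a ball 10 times as heavy but otherwise identical has
$$\frac{v_{10m}}{v_{m}}=\sqrt{10}\approx 3.2 ,$$
i.e. roughly three times the terminal velocity.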
42
No. All parachutes, whether they are drag-only (round) or airfoil (rectangular) will sink. Some airflow is needed to stay inflated, and that airflow comes from the steady descent. Whether your net descent rate is positive or negative is a different question. It is quite easy to be under a parachute and end up rising (I have done it myself), you just need an ...
35
As other answers say, if someone just jumps off of the international space station(ISS), they would still be in orbit around the earth since the ISS is traveling at 17,000 miles per hour (at an altitude of 258 miles). Instead of just jumping, imagine the astronaut had a jet pack that could cancel that speed of 17,000 miles per hour in a very short time ...
34
The bus experiences considerable drag, and will therefore fall more slowly than a person inside the bus. The scenario is possible in principle - but after carefully viewing the clip and doing some calculations, I believe that the details are inaccurate. Assume the bus has a mass of 5000 kg (pretty light for a bus), and is 3 m wide by 3 m tall - so the ...
31
As a very rude guess, fresh snow (see page vi) can have a density of $0.3 g/cm^3$ and be compressed all the way to about the density of ice, $0.9 g/cm^3$. Under perfect conditions you could see a 13 feet uniform deceleration when landing in 20 feet of snow, or about 4 meters. Going from $30 m/s$ to $0m/s$ (as @Sean suggested in comments), you'd have ...
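Making the deceleration estimate explicit (our own arithmetic from the figures quoted above): a uniform stop from $30\,\mathrm{m/s}$ over $d=4\,\mathrm{m}$ corresponds to
$$a=\frac{v^{2}}{2d}=\frac{30^{2}}{2\cdot 4}\approx 112\ \mathrm{m/s^{2}}\approx 11.5\,g .$$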
26
No. The answer is clearly no. This building is 800 meter high. Some comparison: Skydivers are falling more kilometers in free fall. They experience absolutely no damage from the pressure increase. Scuba divers moving fast upwardly or downwardly also don't get any wounds, although 10 meter deep water has the same pressure as there is between the sea level ...
24
It would be possible in theory, but only in a very side-thinking way: if you make a parachute so large it encapsulates the whole Earth, it will in effect act as a balloon and not fall down, due to the internal pressure of the atmosphere. This wouldn't work in practice for obvious reasons, but maybe in Kerbal you might be able to do something like it..
23
While everyone agrees that jumping in a falling elevator doesn't help much, I think it is very instructive to do the calculation. General Remarks The general nature of the problem is the following: while jumping, the human injects muscle energy into the system. Of course, the human doesn't want to gain even more energy himself, instead he hopes to transfer ...
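A back-of-the-envelope version of that calculation (an added illustration, not the answer's full treatment): if the elevator hits the ground at speed $v$ and you jump upwards with speed $u$ relative to the elevator just before impact, you strike at $v-u$, so the kinetic energy to be absorbed is reduced by the factor
$$\frac{(v-u)^{2}}{v^{2}}=\left(1-\frac{u}{v}\right)^{2};$$
for example, $u=3\ \mathrm{m/s}$ against $v=20\ \mathrm{m/s}$ removes only about $28\%$ of the energy.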
20
Ball 1 will drop faster in air, but both balls will drop at the same speed in vacuum. In vacuum, there is only the gravitational force on each ball. That force is proportional to mass. The acceleration of an object due to a force is inversely proportional to its mass, so the mass cancels out. Each ball will accelerate the same, which is the ...
20
If the bus was in a vacuum (both inside and outside), then the passenger would float. However, the effects of air resistance on the two objects (passenger and bus) are probably not negligible in such an instance. The bus will be moving relative to the outside air, and so will be accelerating towards the ground at a rate less than $g$. If we then released ...
18
@Señor O gives a very good answer, but he assumes an ideal deceleration. Based on a viewing of the scene, Anna sinks a little under a meter, while Kristoff doesn't sink more than half a meter. Since they fell about 200 feet (about 60 m), my initial estimate for their impact velocity is (assuming no air resistance): $v = \sqrt{2gh} = \sqrt{2*60*9.8}$ ...
17
This is another chance to use one of my favorite approximations ever! I first offered it as an answer to a question about how deep a platform diver will go into the water. Now is the chance to use it again! Isaac Newton developed an expression for the ballistic impact depth of a body into a material. The original idea was expressed for materials of ...
14
The paradox appears because the "rest frame" of the Earth is not an inertial reference frame, it is accelerating. Keep yourself in the CM reference frame and, at least for two bodies, there is no paradox. Given an Earth of mass M, a body of mass $m_i$ will fall towards the center of mass $x_{CM}=(M x_M + m_i x_i)/(M+m_i)$ with an acceleration ...
12
It is because the force at work here (gravity) is also dependent on the mass: gravity acts on a body with mass m with $$F = mg$$ You will plug this in to $$F=ma$$ and you get $$ma = mg$$ $$a = g$$ and this is true for all bodies no matter what the mass is. Since they are accelerated the same and start with the same initial conditions (at rest and ...
12
It depends on how you define the problem. Humans have re-entered the atmosphere from the International Space Station many times, by riding in either a Space Shuttle or a Soyuz capsule. Someone re-entering without a spacecraft of some sort would obviously have to wear some kind of pressure suit (as Felix Baumgartner did in his jump). How elaborate is the ...
12
Analyzing the acceleration of the center of mass of the system might be the easiest way to go since we could avoid worrying about internal interactions. Let's use Newton's second law: $\sum F=N-Mg=Ma_\text{cm}$, where $M$ is the total mass of the hourglass enclosure and sand, $N$ is what you read on the scale (normal force), and $a_\text{cm}$ is the center ...
12
A parachute is a device specifically designed to create viscous friction. Viscous friction generates a force that: is oriented opposite to the velocity; is proportional to (a certain power of [*]) the velocity. So the falling velocity will increase until the drag force (pointing upwards) becomes equal to the weight of the falling object (pointing ...
11
Nice theoretical answers (I can certainly appreciate them, I'm a mathematician). But why delve into theory when experiment is available? In this video you can see a skier jump from more than 200 feet and get head first into the snow, without a helmet. The video starts with the aftermath, if you want to see the jump right away fast forward to about 1 ...
11
Other answers & comments cover the difference in acceleration due to drag, which will be the largest effect, but don't forget that if you are in an atmosphere there will also be buoyancy to consider. The buoyancy provides an additional upward force on the balls that is equal to the weight of the displaced air. As it is the same force on each ball, the ...
11
While the stone is still travelling on the elevator, there are two forces acting on it, the force from the elevator to the stone, as well as the weight due to gravity. The moment the stone leaves the elevator, it becomes a free falling object. The elevator stops giving a force to the stone, and the only force remaining is its weight due to gravity. ...
11
You will die. Terminal velocity is a bit more than 50 m/s. The bottom of your ramp appears to have a radius less than 2 m. That means you'll be exposed to more than 125 g as you zip around the bottom. Nice knowing you.
11
As an addition to already posted answers and while realising that experiments on Mythbusters don't really have the required rigour of physics experiments, the Mythbusters have tested this theory and concluded that: The jumping power of a human being cannot cancel out the falling velocity of the elevator. The best speculative advice from an elevator ...
11
He "only" flew at the maximum speed of 370 m/s or so, which is much less than the speed of the meteoroids – the latter hit the Earth at speeds between 11,000 and 70,000 m/s. So he was about 2 orders of magnitude slower. The friction is correspondingly lower for Baumgartner. Note that even if he jumped from "infinity", he would only reach the escape velocity ...
10
The reason that jumping can make a relatively large difference is that the kinetic energy is proportional to the square of the velocity. Thus relatively small changes to the velocity can result in relatively large changes to the kinetic energy. In addition, the velocity which a human can achieve in jumping is a substantial percentage of the velocity of fatal ...
10
If we are throwing two objects directly to the ground you are right. So from our kinematic equations: $$V_f = V_i + at$$ I would ask your teacher. What happens to the $V_f$ if $V_i=0$? Then follow it up with what $V_f$ would be if $V_i$ was very large? The initial velocity DOES have an effect here. HOWEVER: Make sure that you are not misinterpreting ...
10
That is an excellent example for a nice quote I read on the internet: "Common sense may be common, but it certainly isn't sense" :-) As it is hard to lift heavy objects, we assume that it must be easier for them to drop. Now, Newton's laws point out that light and heavy objects will fall with the same velocity. But is there an intuitive reason? Yes! The ...
9
Newton's gravitational force is proportional to the mass of a body, $F=\frac{GM}{R^2}\times m$, where in the case you're thinking about $M$ is the mass of the earth, $R$ is the radius of the earth, and $G$ is Newton's gravitational constant. Consequently, the acceleration is $a=\frac{F}{m}=\frac{GM}{R^2}$, which is independent of the mass of the object. ...
9
Indeed there would be a (very small) and homogeneous pressure within the blob, coming from surface tension. This pressure is calculated by the Kelvin Equation and is significant in small droplets (reason for small droplets to have a higher vapour pressure than bulk liquid). In your 100 m blob, this extra pressure is negligible of course. There is another ...
8
In the global, cartoon, sense, yes, this problem is equivalent to having a whole row of carefully designed, placed and arranged ramps so that you fall onto the first one, get "flung" out such that you then land on the next one and so on, until dissipation wastes away the energy. Obviously this can be done since it is the same principle as is used in say ...
http://archived.moe/a/thread/12796147
No.12796147
I need the gif someone made of this. I thought I had saved but I didn't. And I don't think /r/ will have it.
https://questioncove.com/updates/4d53258fd04cb76412cc8481
Mathematics
OpenStudy (anonymous):
the sum of two numbers is 31. The first is 5 less than the second. Find the numbers.
Can you convert this into an equation?
OpenStudy (anonymous):
x=5-y ??
Not quite, it's $$x = y - 5$$ (making $$x$$ 5 less than $$y$$).
So that's the first part; what about the part where it says that 'the sum of two numbers is 31'? Can you make that into an equation?
OpenStudy (anonymous):
x+y=31
Right. So now you have two equations: \begin{align} x + y &= 31\\ y - 5 &= x \end{align}
Can you solve that system of equations?
OpenStudy (anonymous):
y-5+y
OpenStudy (anonymous):
=31
Right, and then can you solve that equation for y?
OpenStudy (anonymous):
that is where i am a little confused. do i minus a y?
OpenStudy (anonymous):
4y?
So you have: y - 5 + y = 31 You can put the ys together: y + y - 5 = 31 Which is the same as: 2y - 5 = 31
OpenStudy (anonymous):
2y=26?
Not quite, remember that if we want to move the 5 over to the right, you have to add 5 to both sides to cancel it on the left: 2y - 5 + 5 = 31 + 5, which gives 2y = 36
OpenStudy (anonymous):
oh so i have to add 5 to equal it out since it is negative?
Exactly.
OpenStudy (anonymous):
thats where i went wrong. y=18?
Exactly.
And then what's x?
OpenStudy (anonymous):
x=18-5=13
OpenStudy (anonymous):
13
Exactly!
OpenStudy (anonymous):
thank you. i am going to attempt my next problem.
http://blog.csdn.net/youhan26/article/details/46822743
XML Schema-based configuration
34.1 Introduction
This appendix details the XML Schema-based configuration introduced in Spring 2.0 and enhanced and extended in Spring 2.5 and 3.0.
The central motivation for moving to XML Schema-based configuration files was to make Spring XML configuration easier. The 'classic' <bean/>-based approach is good, but its generic nature comes with a price in terms of configuration overhead.
From the Spring IoC containers point-of-view, everything is a bean. That’s great news for the Spring IoC container, because if everything is a bean then everything can be treated in the exact same fashion. The same, however, is not true from a developer’s point-of-view. The objects defined in a Spring XML configuration file are not all generic, vanilla beans. Usually, each bean requires some degree of specific configuration.
Spring 2.0’s new XML Schema-based configuration addresses this issue. The <bean/> element is still present, and if you wanted to, you could continue to write the exact same style of Spring XML configuration using only <bean/> elements. The new XML Schema-based configuration does, however, make Spring XML configuration files substantially clearer to read. In addition, it allows you to express the intent of a bean definition.
The key thing to remember is that the new custom tags work best for infrastructure or integration beans: for example, AOP, collections, transactions, integration with 3rd-party frameworks such as Mule, etc., while the existing bean tags are best suited to application-specific beans, such as DAOs, service layer objects, validators, etc.
The examples included below will hopefully convince you that the inclusion of XML Schema support in Spring 2.0 was a good idea. The reception in the community has been encouraging; also, please note the fact that this new configuration mechanism is totally customisable and extensible. This means you can write your own domain-specific configuration tags that would better represent your application’s domain; the process involved in doing so is covered in the appendix entitled Chapter 35, Extensible XML authoring.
34.2 XML Schema-based configuration
34.2.1 Referencing the schemas
To switch over from the DTD-style to the new XML Schema-style, you need to make the following change.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN 2.0//EN"
"http://www.springframework.org/dtd/spring-beans-2.0.dtd">
<beans>
<!-- bean definitions here -->
</beans>
The equivalent file in the XML Schema-style would be…
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<!-- bean definitions here -->
</beans>
The 'xsi:schemaLocation' fragment is not actually required, but can be included to reference a local copy of a schema (which can be useful during development).
The above Spring XML configuration fragment is boilerplate that you can copy and paste (!) and then plug <bean/> definitions into like you have always done. However, the entire point of switching over is to take advantage of the new Spring 2.0 XML tags since they make configuration easier. The section entitled Section 34.2.2, “the util schema” demonstrates how you can start immediately by using some of the more common utility tags.
The rest of this chapter is devoted to showing examples of the new Spring XML Schema based configuration, with at least one example for every new tag. The format follows a before and after style, with a before snippet of XML showing the old (but still 100% legal and supported) style, followed immediately by an after example showing the equivalent in the new XML Schema-based style.
34.2.2 the util schema
First up is coverage of the util tags. As the name implies, the util tags deal with common, utility configuration issues, such as configuring collections, referencing constants, and suchlike.
To use the tags in the util schema, you need to have the following preamble at the top of your Spring XML configuration file; the text in the snippet below references the correct schema so that the tags in the util namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd"> <!-- bean definitions here -->
</beans>
<util:constant/>
Before…
<bean id="..." class="...">
<property name="isolation">
<bean id="java.sql.Connection.TRANSACTION_SERIALIZABLE"
class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean" />
</property>
</bean>
The above configuration uses a Spring FactoryBean implementation, the FieldRetrievingFactoryBean, to set the value of the isolation property on a bean to the value of the java.sql.Connection.TRANSACTION_SERIALIZABLE constant. This is all well and good, but it is a tad verbose and (unnecessarily) exposes Spring’s internal plumbing to the end user.
The following XML Schema-based version is more concise and clearly expresses the developer’s intent ('inject this constant value'), and it just reads better.
<bean id="..." class="...">
<property name="isolation">
<util:constant static-field="java.sql.Connection.TRANSACTION_SERIALIZABLE"/>
</property>
</bean>
Setting a bean property or constructor arg from a field value
FieldRetrievingFactoryBean is a FactoryBean which retrieves a static or non-static field value. It is typically used for retrieving public static final constants, which may then be used to set a property value or constructor arg for another bean.
Find below an example which shows how a static field is exposed, by using the staticField property:
<bean id="myField"
class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean">
<property name="staticField" value="java.sql.Connection.TRANSACTION_SERIALIZABLE"/>
</bean>
There is also a convenience usage form where the static field is specified as the bean name:
<bean id="java.sql.Connection.TRANSACTION_SERIALIZABLE"
class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean"/>
This does mean that there is no longer any choice in what the bean id is (so any other bean that refers to it will also have to use this longer name), but this form is very concise to define, and very convenient to use as an inner bean since the id doesn’t have to be specified for the bean reference:
<bean id="..." class="...">
<property name="isolation">
<bean id="java.sql.Connection.TRANSACTION_SERIALIZABLE"
class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean" />
</property>
</bean>
It is also possible to access a non-static (instance) field of another bean, as described in the API documentation for the FieldRetrievingFactoryBean class.
Injecting enum values into beans as either property or constructor arguments is very easy to do in Spring, in that you don’t actually have to do anything or know anything about the Spring internals (or even about classes such as the FieldRetrievingFactoryBean). Let’s look at an example to see how easy injecting an enum value is; consider this JDK 5 enum:
package javax.persistence;
public enum PersistenceContextType {
TRANSACTION,
EXTENDED
}
Now consider a setter of type PersistenceContextType:
package example;
public class Client {
private PersistenceContextType persistenceContextType;
public void setPersistenceContextType(PersistenceContextType type) {
this.persistenceContextType = type;
}
}
And the corresponding bean definition:
<bean class="example.Client">
<property name="persistenceContextType" value="TRANSACTION" />
</bean>
This works for classic type-safe emulated enums (on JDK 1.4 and JDK 1.3) as well; Spring will automatically attempt to match the string property value to a constant on the enum class.
<util:property-path/>
Before…
<!-- target bean to be referenced by name -->
<bean id="testBean" class="org.springframework.beans.TestBean" scope="prototype">
<property name="age" value="10"/>
<property name="spouse">
<bean class="org.springframework.beans.TestBean">
<property name="age" value="11"/>
</bean>
</property>
</bean>
<!-- will result in 10, which is the value of property age of bean testBean -->
<bean id="testBean.age" class="org.springframework.beans.factory.config.PropertyPathFactoryBean"/>
The above configuration uses a Spring FactoryBean implementation, the PropertyPathFactoryBean, to create a bean (of type int) called testBean.age that has a value equal to the age property of the testBean bean.
After…
<!-- target bean to be referenced by name -->
<bean id="testBean" class="org.springframework.beans.TestBean" scope="prototype">
<property name="age" value="10"/>
<property name="spouse">
<bean class="org.springframework.beans.TestBean">
<property name="age" value="11"/>
</bean>
</property>
</bean>
<!-- will result in 10, which is the value of property age of bean testBean -->
<util:property-path id="name" path="testBean.age"/>
The value of the path attribute of the <property-path/> tag follows the form beanName.beanProperty.
Using <util:property-path/> to set a bean property or constructor-argument
PropertyPathFactoryBean is a FactoryBean that evaluates a property path on a given target object. The target object can be specified directly or via a bean name. This value may then be used in another bean definition as a property value or constructor argument.
Here’s an example where a path is used against another bean, by name:
<!-- target bean to be referenced by name -->
<bean id="person" class="org.springframework.beans.TestBean" scope="prototype">
<property name="age" value="10"/>
<property name="spouse">
<bean class="org.springframework.beans.TestBean">
<property name="age" value="11"/>
</bean>
</property>
</bean>
<!-- will result in 11, which is the value of property spouse.age of bean person -->
<bean id="theAge"
class="org.springframework.beans.factory.config.PropertyPathFactoryBean">
<property name="targetBeanName" value="person"/>
<property name="propertyPath" value="spouse.age"/>
</bean>
In this example, a path is evaluated against an inner bean:
<!-- will result in 12, which is the value of property age of the inner bean -->
<bean id="theAge"
class="org.springframework.beans.factory.config.PropertyPathFactoryBean">
<property name="targetObject">
<bean class="org.springframework.beans.TestBean">
<property name="age" value="12"/>
</bean>
</property>
<property name="propertyPath" value="age"/>
</bean>
There is also a shortcut form, where the bean name is the property path.
<!-- will result in 10, which is the value of property age of bean person -->
<bean id="person.age"
class="org.springframework.beans.factory.config.PropertyPathFactoryBean"/>
This form does mean that there is no choice in the name of the bean. Any reference to it will also have to use the same id, which is the path. Of course, if used as an inner bean, there is no need to refer to it at all:
<bean id="..." class="...">
<property name="age">
<bean id="person.age"
class="org.springframework.beans.factory.config.PropertyPathFactoryBean"/>
</property>
</bean>
The result type may be specifically set in the actual definition. This is not necessary for most use cases, but can be of use for some. Please see the Javadocs for more info on this feature.
<util:properties/>
Before…
<!-- creates a java.util.Properties instance with values loaded from the supplied location -->
<bean id="jdbcConfiguration" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
<property name="location" value="classpath:com/foo/jdbc-production.properties"/>
</bean>
The above configuration uses a Spring FactoryBean implementation, the PropertiesFactoryBean, to instantiate a java.util.Properties instance with values loaded from the supplied Resource location.
After…
<!-- creates a java.util.Properties instance with values loaded from the supplied location -->
<util:properties id="jdbcConfiguration" location="classpath:com/foo/jdbc-production.properties"/>
<util:list/>
Before…
<!-- creates a java.util.List instance with values loaded from the supplied sourceList -->
<bean id="emails" class="org.springframework.beans.factory.config.ListFactoryBean">
<property name="sourceList">
<list>
<value>pechorin@hero.org</value>
<value>stavrogin@gov.org</value>
<value>porfiry@gov.org</value>
</list>
</property>
</bean>
The above configuration uses a Spring FactoryBean implementation, the ListFactoryBean, to create a java.util.List instance initialized with values taken from the supplied sourceList.
After…
<!-- creates a java.util.List instance with the supplied values -->
<util:list id="emails">
<value>pechorin@hero.org</value>
<value>stavrogin@gov.org</value>
<value>porfiry@gov.org</value>
</util:list>
You can also explicitly control the exact type of List that will be instantiated and populated via the use of the list-class attribute on the <util:list/> element. For example, if we really need a java.util.LinkedList to be instantiated, we could use the following configuration:
<util:list id="emails" list-class="java.util.LinkedList">
<value>jackshaftoe@vagabond.org</value>
<value>eliza@thinkingmanscrumpet.org</value>
<value>vanhoek@pirate.org</value>
<value>d'Arcachon@nemesis.org</value>
</util:list>
If no list-class attribute is supplied, a List implementation will be chosen by the container.
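If it helps to see the consuming side, here is a small hypothetical snippet (the file name and class are made up; the API calls are standard Spring) showing how the emails bean defined above could be retrieved from the container:

```java
import java.util.List;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class EmailsDemo {

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // "beans.xml" is a placeholder for whatever file contains the <util:list/> above.
        ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");

        // The <util:list id="emails"> definition is exposed as an ordinary List bean.
        List<String> emails = (List<String>) ctx.getBean("emails");
        emails.forEach(System.out::println);
    }
}
```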
<util:map/>
Before…
<!-- creates a java.util.Map instance with values loaded from the supplied sourceMap -->
<bean id="emails" class="org.springframework.beans.factory.config.MapFactoryBean">
<property name="sourceMap">
<map>
<entry key="pechorin" value="pechorin@hero.org"/>
<entry key="stavrogin" value="stavrogin@gov.org"/>
<entry key="porfiry" value="porfiry@gov.org"/>
</map>
</property>
</bean>
The above configuration uses a Spring FactoryBean implementation, the MapFactoryBean, to create a java.util.Map instance initialized with key-value pairs taken from the supplied 'sourceMap'.
After…
<!-- creates a java.util.Map instance with the supplied key-value pairs -->
<util:map id="emails">
<entry key="pechorin" value="pechorin@hero.org"/>
<entry key="stavrogin" value="stavrogin@gov.org"/>
<entry key="porfiry" value="porfiry@gov.org"/>
</util:map>
You can also explicitly control the exact type of Map that will be instantiated and populated via the use of the 'map-class' attribute on the <util:map/> element. For example, if we really need a java.util.TreeMap to be instantiated, we could use the following configuration:
<util:map id="emails" map-class="java.util.TreeMap">
<entry key="pechorin" value="pechorin@hero.org"/>
<entry key="stavrogin" value="stavrogin@gov.org"/>
<entry key="porfiry" value="porfiry@gov.org"/>
</util:map>
If no 'map-class' attribute is supplied, a Map implementation will be chosen by the container.
<util:set/>
Before…
<!-- creates a java.util.Set instance with values loaded from the supplied sourceSet -->
<bean id="emails" class="org.springframework.beans.factory.config.SetFactoryBean">
<property name="sourceSet">
<set>
<value>pechorin@hero.org</value>
<value>stavrogin@gov.org</value>
<value>porfiry@gov.org</value>
</set>
</property>
</bean>
The above configuration uses a Spring FactoryBean implementation, the SetFactoryBean, to create a java.util.Set instance initialized with values taken from the supplied 'sourceSet'.
After…
<!-- creates a java.util.Set instance with the supplied values -->
<util:set id="emails">
<value>pechorin@hero.org</value>
<value>stavrogin@gov.org</value>
<value>porfiry@gov.org</value>
</util:set>
You can also explicitly control the exact type of Set that will be instantiated and populated via the use of the 'set-class' attribute on the <util:set/> element. For example, if we really need a java.util.TreeSet to be instantiated, we could use the following configuration:
<util:set id="emails" set-class="java.util.TreeSet">
<value>pechorin@hero.org</value>
<value>stavrogin@gov.org</value>
<value>porfiry@gov.org</value>
</util:set>
If no 'set-class' attribute is supplied, a Set implementation will be chosen by the container.
34.2.3 the jee schema
The jee tags deal with Java EE (Java Enterprise Edition)-related configuration issues, such as looking up a JNDI object and defining EJB references.
To use the tags in the jee schema, you need to have the following preamble at the top of your Spring XML configuration file; the text in the following snippet references the correct schema so that the tags in the jee namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jee="http://www.springframework.org/schema/jee" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd"> <!-- bean definitions here -->
</beans>
<jee:jndi-lookup/> (simple)
Before…
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc/MyDataSource"/>
</bean>
<bean id="userDao" class="com.foo.JdbcUserDao">
<!-- Spring will do the cast automatically (as usual) -->
<property name="dataSource" ref="dataSource"/>
</bean>
After…
<jee:jndi-lookup id="dataSource" jndi-name="jdbc/MyDataSource"/>
<bean id="userDao" class="com.foo.JdbcUserDao">
<!-- Spring will do the cast automatically (as usual) -->
<property name="dataSource" ref="dataSource"/>
</bean>
<jee:jndi-lookup/> (with single JNDI environment setting)
Before…
<bean id="simple" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc/MyDataSource"/>
<property name="jndiEnvironment">
<props>
<prop key="foo">bar</prop>
</props>
</property>
</bean>
After…
<jee:jndi-lookup id="simple" jndi-name="jdbc/MyDataSource">
<jee:environment>foo=bar</jee:environment>
</jee:jndi-lookup>
<jee:jndi-lookup/> (with multiple JNDI environment settings)
Before…
<bean id="simple" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc/MyDataSource"/>
<property name="jndiEnvironment">
<props>
<prop key="foo">bar</prop>
<prop key="ping">pong</prop>
</props>
</property>
</bean>
After…
<jee:jndi-lookup id="simple" jndi-name="jdbc/MyDataSource">
<!-- newline-separated, key-value pairs for the environment (standard Properties format) -->
<jee:environment>
foo=bar
ping=pong
</jee:environment>
</jee:jndi-lookup>
<jee:jndi-lookup/> (complex)
Before…
<bean id="simple" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="jdbc/MyDataSource"/>
<property name="cache" value="true"/>
<property name="resourceRef" value="true"/>
<property name="lookupOnStartup" value="false"/>
<property name="expectedType" value="com.myapp.DefaultFoo"/>
<property name="proxyInterface" value="com.myapp.Foo"/>
</bean>
After…
<jee:jndi-lookup id="simple"
jndi-name="jdbc/MyDataSource"
cache="true"
resource-ref="true"
lookup-on-startup="false"
expected-type="com.myapp.DefaultFoo"
proxy-interface="com.myapp.Foo"/>
<jee:local-slsb/> (simple)
The <jee:local-slsb/> tag configures a reference to an EJB Stateless SessionBean.
Before…
<bean id="simple"
class="org.springframework.ejb.access.LocalStatelessSessionProxyFactoryBean">
<property name="jndiName" value="ejb/RentalServiceBean"/>
</bean>
After…
<jee:local-slsb id="simpleSlsb" jndi-name="ejb/RentalServiceBean"
business-interface="com.foo.service.RentalService"/>
<jee:local-slsb/> (complex)
Before…
<bean id="complexLocalEjb"
class="org.springframework.ejb.access.LocalStatelessSessionProxyFactoryBean">
<property name="jndiName" value="ejb/RentalServiceBean"/>
<property name="cacheHome" value="true"/>
<property name="lookupHomeOnStartup" value="true"/>
<property name="resourceRef" value="true"/>
</bean>
After…
<jee:local-slsb id="complexLocalEjb"
jndi-name="ejb/RentalServiceBean"
cache-home="true"
lookup-home-on-startup="true"
resource-ref="true"/>
<jee:remote-slsb/>
The <jee:remote-slsb/> tag configures a reference to a remote EJB Stateless SessionBean.
Before…
<bean id="complexRemoteEjb"
class="org.springframework.ejb.access.SimpleRemoteStatelessSessionProxyFactoryBean">
<property name="jndiName" value="ejb/MyRemoteBean"/>
<property name="cacheHome" value="true"/>
<property name="lookupHomeOnStartup" value="true"/>
<property name="resourceRef" value="true"/>
<property name="homeInterface" value="com.foo.service.RentalService"/>
<property name="refreshHomeOnConnectFailure" value="true"/>
</bean>
After…
<jee:remote-slsb id="complexRemoteEjb"
jndi-name="ejb/MyRemoteBean"
cache-home="true"
lookup-home-on-startup="true"
resource-ref="true"
home-interface="com.foo.service.RentalService"
refresh-home-on-connect-failure="true"/>
34.2.4 the lang schema
The lang tags deal with exposing objects that have been written in a dynamic language such as JRuby or Groovy as beans in the Spring container.
These tags (and the dynamic language support) are comprehensively covered in the chapter entitled Chapter 29, Dynamic language support. Please do consult that chapter for full details on this support and the lang tags themselves.
In the interest of completeness, to use the tags in the lang schema, you need to have the following preamble at the top of your Spring XML configuration file; the text in the following snippet references the correct schema so that the tags in the lang namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:lang="http://www.springframework.org/schema/lang" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang.xsd"> <!-- bean definitions here -->
</beans>
34.2.5 the jms schema
The jms tags deal with configuring JMS-related beans such as Spring’s MessageListenerContainers. These tags are detailed in the section of the JMS chapter entitled Section 24.7, “JMS Namespace Support”. Please do consult that chapter for full details on this support and the jms tags themselves.
In the interest of completeness, to use the tags in the jms schema, you need to have the following preamble at the top of your Spring XML configuration file; the text in the following snippet references the correct schema so that the tags in the jms namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jms="http://www.springframework.org/schema/jms" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd"> <!-- bean definitions here -->
</beans>
34.2.6 the tx (transaction) schema
The tx tags deal with configuring all of those beans in Spring’s comprehensive support for transactions. These tags are covered in the chapter entitled Chapter 12, Transaction Management.
You are strongly encouraged to look at the 'spring-tx.xsd' file that ships with the Spring distribution. This file is (of course) the XML Schema for Spring’s transaction configuration, and covers all of the various tags in the tx namespace, including attribute defaults and suchlike. This file is documented inline, and thus the information is not repeated here in the interests of adhering to the DRY (Don’t Repeat Yourself) principle.
In the interest of completeness, to use the tags in the tx schema, you need to have the following preamble at the top of your Spring XML configuration file; the text in the following snippet references the correct schema so that the tags in the tx namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd"> <!-- bean definitions here -->
</beans>
Often when using the tags in the tx namespace you will also be using the tags from the aop namespace (since the declarative transaction support in Spring is implemented using AOP). The above XML snippet contains the relevant lines needed to reference the aop schema so that the tags in the aop namespace are available to you.
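For illustration only (a minimal sketch, assuming a PlatformTransactionManager bean named 'txManager' is defined elsewhere in the context), the most commonly used tag from this namespace looks like this:
<tx:annotation-driven transaction-manager="txManager"/>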
34.2.7 the aop schema
The aop tags deal with configuring all things AOP in Spring: this includes Spring’s own proxy-based AOP framework and Spring’s integration with the AspectJ AOP framework. These tags are comprehensively covered in the chapter entitled Chapter 9, Aspect Oriented Programming with Spring.
In the interest of completeness, to use the tags in the aop schema, you need to have the following preamble at the top of your Spring XML configuration file; the text in the following snippet references the correct schema so that the tags in the aop namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd"> <!-- bean definitions here -->
</beans>
34.2.8 the context schema
The context tags deal with ApplicationContext configuration that relates to plumbing - that is, not usually beans that are important to an end-user but rather beans that do a lot of grunt work in Spring, such as BeanFactoryPostProcessors. The following snippet references the correct schema so that the tags in the context namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd"> <!-- bean definitions here -->
</beans>
The context schema was only introduced in Spring 2.5.
<property-placeholder/>
This element activates the replacement of ${...} placeholders, resolved against the specified properties file (as a Spring resource location). This element is a convenience mechanism that sets up a PropertyPlaceholderConfigurer for you; if you need more control over the PropertyPlaceholderConfigurer, just define one yourself explicitly.
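For example, a minimal declaration might look like the following (the properties file location is purely hypothetical):
<context:property-placeholder location="classpath:com/foo/app.properties"/>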
<annotation-config/>
Activates the Spring infrastructure for various annotations to be detected in bean classes: Spring’s @Required and @Autowired, as well as JSR 250’s @PostConstruct, @PreDestroy and @Resource (if available), and JPA’s @PersistenceContext and @PersistenceUnit (if available). Alternatively, you can choose to activate the individual BeanPostProcessors for those annotations explicitly.
This element does not activate processing of Spring’s @Transactional annotation. Use the <tx:annotation-driven/> element for that purpose.
<component-scan/>
This element is detailed in Section 5.9, “Annotation-based container configuration”.
<load-time-weaver/>
This element is detailed in Section 9.8.4, “Load-time weaving with AspectJ in the Spring Framework”.
<spring-configured/>
This element is detailed in Section 9.8.1, “Using AspectJ to dependency inject domain objects with Spring”.
<mbean-export/>
This element is detailed in Section 25.4.3, “Configuring annotation based MBean export”.
34.2.9 the tool schema
The tool tags are for use when you want to add tooling-specific metadata to your custom configuration elements. This metadata can then be consumed by tools that are aware of this metadata, and the tools can then do pretty much whatever they want with it (validation, etc.).
The tool tags are not documented in this release of Spring as they are currently undergoing review. If you are a third party tool vendor and you would like to contribute to this review process, then do mail the Spring mailing list. The currently supported tool tags can be found in the file 'spring-tool.xsd' in the 'src/org/springframework/beans/factory/xml' directory of the Spring source distribution.
34.2.10 the jdbc schema
The jdbc tags allow you to quickly configure an embedded database or initialize an existing data source. These tags are documented in Section 14.8, “Embedded database support” and Section 14.9, “Initializing a DataSource” respectively.
To use the tags in the jdbc schema, you need to have the following preamble at the top of your Spring XML configuration file; the text in the following snippet references the correct schema so that the tags in the jdbc namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jdbc="http://www.springframework.org/schema/jdbc" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd"> <!-- bean definitions here -->
</beans>
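As a brief illustrative sketch (the script locations here are hypothetical), an embedded database could then be declared as follows:
<jdbc:embedded-database id="dataSource" type="HSQL">
    <jdbc:script location="classpath:schema.sql"/>
    <jdbc:script location="classpath:test-data.sql"/>
</jdbc:embedded-database>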
34.2.11 the cache schema
The cache tags can be used to enable support for Spring’s @CacheEvict, @CachePut and @Caching annotations. It also supports declarative XML-based caching. See Section 30.3.6, “Enable caching annotations” and Section 30.5, “Declarative XML-based caching” for details.
To use the tags in the cache schema, you need to have the following preamble at the top of your Spring XML configuration file; the text in the following snippet references the correct schema so that the tags in the cache namespace are available to you.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cache="http://www.springframework.org/schema/cache" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd"> <!-- bean definitions here -->
</beans>
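For example (a minimal sketch, assuming a CacheManager bean named 'cacheManager' is defined elsewhere), annotation-driven caching can then be switched on with:
<cache:annotation-driven cache-manager="cacheManager"/>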
34.2.12 the beans schema
Last but not least we have the tags in the beans schema. These are the same tags that have been in Spring since the very dawn of the framework. Examples of the various tags in the beans schema are not shown here because they are quite comprehensively covered in Section 5.4.2, “Dependencies and configuration in detail” (and indeed in that entire chapter).
One thing that is new to the beans tags themselves in Spring 2.0 is the idea of arbitrary bean metadata. In Spring 2.0 it is now possible to add zero or more key / value pairs to <bean/> XML definitions. What, if anything, is done with this extra metadata is totally up to your own custom logic (and so is typically only of use if you are writing your own custom tags as described in the appendix entitled Chapter 35, Extensible XML authoring).
Find below an example of the <meta/> tag in the context of a surrounding <bean/> (please note that without any logic to interpret it the metadata is effectively useless as-is).
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="foo" class="x.y.Foo">
<meta key="cacheName" value="foo"/>
<property name="name" value="Rick"/>
</bean>
</beans>
In the case of the above example, you would assume that there is some logic that will consume the bean definition and set up some caching infrastructure using the supplied metadata.
|
2018-02-21 05:37:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2099425047636032, "perplexity": 4350.632178143584}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813431.5/warc/CC-MAIN-20180221044156-20180221064156-00784.warc.gz"}
|
http://orbitize.info/en/latest/priors.html
|
# Priors¶
class orbitize.priors.GaussianPrior(mu, sigma, no_negatives=True)[source]
Gaussian prior.
$\log(p(x|\sigma, \mu)) \propto -\frac{(x - \mu)^2}{2\sigma^2}$
Parameters
• mu (float) – mean of the distribution
• sigma (float) – standard deviation of the distribution
• no_negatives (bool) – if True, only positive values will be drawn from this prior, and the probability of negative values will be 0 (default:True).
(written) Sarah Blunt, 2018
compute_lnprob(element_array)[source]
Compute log(probability) of an array of numbers with respect to a Gaussian distribution. Negative numbers return a probability of -inf.
Parameters
element_array (float or np.array of float) – array of numbers. We want the probability of drawing each of these from the appropriate Gaussian distribution
Returns
array of log(probability) values, corresponding to the probability of drawing each of the numbers in the input element_array.
Return type
numpy array of float
draw_samples(num_samples)[source]
Draw positive samples from a Gaussian distribution. Negative samples will not be returned.
Parameters
num_samples (float) – the number of samples to generate
Returns
samples drawn from the appropriate Gaussian distribution. Array has length num_samples.
Return type
numpy array of float
class orbitize.priors.LinearPrior(m, b)[source]
Draw samples from the probability distribution:
$p(x) \propto mx+b$
where m is negative, b is positive, and the range is [0,-b/m].
Parameters
• m (float) – slope of line. Must be negative.
• b (float) – y intercept of line. Must be positive.
draw_samples(num_samples)[source]
Draw samples from a descending linear distribution.
Parameters
num_samples (float) – the number of samples to generate
Returns
samples ranging from [0, -b/m) as floats.
Return type
samples (np.array)
class orbitize.priors.LogUniformPrior(minval, maxval)[source]
This is the probability distribution $$p(x) \propto 1/x$$
The __init__ method should take in a “min” and “max” value of the distribution, which correspond to the domain of the prior. (If this is not implemented, the prior has a singularity at 0 and infinite integrated probability).
Parameters
• minval (float) – the lower bound of this distribution
• maxval (float) – the upper bound of this distribution
compute_lnprob(element_array)[source]
Compute the prior probability of each element given that it is drawn from a Log-Uniform prior
Parameters
element_array (float or np.array of float) – array of parameters to compute the prior probability of
Returns
array of prior probabilities
Return type
np.array
draw_samples(num_samples)[source]
Draw samples from this 1/x distribution.
Parameters
num_samples (float) – the number of samples to generate
Returns
samples ranging from [minval, maxval) as floats.
Return type
np.array
class orbitize.priors.Prior[source]
Abstract base class for prior objects. All prior objects should inherit from this class.
Written: Sarah Blunt, 2018
class orbitize.priors.SinPrior[source]
This is the probability distribution $$p(x) \propto sin(x)$$
The domain of this prior is [0,pi].
compute_lnprob(element_array)[source]
Compute the prior probability of each element given that it is drawn from a sine prior
Parameters
element_array (float or np.array of float) – array of parameters to compute the prior probability of
Returns
array of prior probabilities
Return type
np.array
draw_samples(num_samples)[source]
Draw samples from a Sine distribution.
Parameters
num_samples (float) – the number of samples to generate
Returns
samples ranging from [0, pi) as floats.
Return type
np.array
class orbitize.priors.UniformPrior(minval, maxval)[source]
This is the probability distribution $$p(x) \propto \text{constant}$$.
Parameters
• minval (float) – the lower bound of the uniform prior
• maxval (float) – the upper bound of the uniform prior
compute_lnprob(element_array)[source]
Compute the prior probability of each element given that it is drawn from this uniform prior
Parameters
element_array (float or np.array of float) – array of parameters to compute the prior probability of
Returns
array of prior probabilities
Return type
np.array
draw_samples(num_samples)[source]
Draw samples from this uniform distribution.
Parameters
num_samples (float) – the number of samples to generate
Returns
samples ranging from [minval, maxval) as floats.
Return type
np.array
orbitize.priors.all_lnpriors(params, priors)[source]
Calculates log(prior probability) of a set of parameters and a list of priors
Parameters
• params (np.array) – size of N parameters
• priors (list) – list of N prior objects corresponding to each parameter
Returns
prior probability of this set of parameters
Return type
float
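As a quick orientation, a minimal usage sketch of the classes documented above might look like this (the parameter values are arbitrary choices for the example, not recommendations):
import numpy as np
from orbitize import priors

# Draw from and evaluate a single prior
gauss = priors.GaussianPrior(mu=1.0, sigma=0.1)
samples = gauss.draw_samples(1000)       # positive draws only, since no_negatives=True
lnprobs = gauss.compute_lnprob(samples)  # log(probability) of each draw

# Evaluate the joint log-prior of a parameter vector against a list of priors
prior_list = [priors.UniformPrior(0.0, 1.0),
              priors.SinPrior(),
              priors.LogUniformPrior(1e-3, 1.0)]
params = np.array([0.5, 1.2, 0.01])
total_lnprior = priors.all_lnpriors(params, prior_list)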
|
2019-11-21 12:00:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8110940456390381, "perplexity": 4855.791687567133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670770.21/warc/CC-MAIN-20191121101711-20191121125711-00496.warc.gz"}
|
http://math.stackexchange.com/questions/231455/estimator-for-sum-of-independent-and-identically-distributed-iid-variables?answertab=votes
|
Estimator for sum of independent and identically distributed (iid) variables
Consider the Chernoff bound described in Theorem 1 of this paper:
Theorem 1. Let $X_1,\ldots,X_n$ be discrete, independent random variables such that $E[X_i] = 0$ and $|X_i|<1$ for all $i$. Let $X:=\sum_{i=1}^n X_i$ and $\sigma^2$ be the variance of $X$. Then, $$\Pr\left[|X|\ge \lambda\sigma\right] \le 2e^{-\lambda^2/4}$$ for any $0\le\lambda\le 2\sigma$.
I want to apply this estimator for a computation, but the variance of the variables $X_i$ is unknown to me. Apart from that, my variables satisfy all the conditions of Theorem 1. In fact, my variables are independent and identically distributed (iid).
On the other hand, there is Chebyshev's inequality for finite samples which does not require knowledge of the variance (or mean) of a given random variable and replaces it with the corresponding values of my given sample. However, the estimator is not good enough for my application and I was wondering if there is an estimator similar to Theorem 1 but not featuring the variance of the distribution itself.
Intuitively speaking, it would be nice to have the best of both worlds: If that is not possible, then what is the best bound I can achieve for a sum of iid variables, without any knowledge about their variance?
The maximum value of the variance of a random variable taking on values in $[-1,+1]$ almost surely is $1$ and occurs when the random variable is discrete, taking on values $\pm 1$ with equal probability $\frac{1}{2}$. Perhaps this fact can help. – Dilip Sarwate Nov 6 '12 at 21:19
Unfortunately, the variance will in most cases be very close to zero, smaller than $10^{-10}$ just to give you an idea. I have computed it in some cases, and the computation works very good when I can use it with the Chernoff bound. In fact, I have already tried the estimate $\sigma^2\le 1$, and the result is disastrous for the algorithm. – Jesko Hüttenhain Nov 6 '12 at 22:44
I deleted my answer below since I think I misinterpreted your question. (The answer said just that, given no information about the variance, Chernoff bounds are the best you can do.) Can you clarify two points? (1) Are you seeking a bound to analyze an algorithm, or to use within an algorithm (somehow along with sampling)? (2) How can $\sigma$ be so small? For any fixed distribution on the $X_i$, the variance of their sum has to grow linearly with $n$ as $n\rightarrow\infty$, right? – Neal Young Nov 7 '12 at 19:00
What happened with this question? – Did Jan 14 '13 at 11:53
Reposted at mathoverflow.net/questions/127887/… . – user66151 Apr 18 '13 at 9:01
|
2015-05-23 04:50:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9227400422096252, "perplexity": 173.05965342186167}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927185.70/warc/CC-MAIN-20150521113207-00220-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/easy-limits/
|
# Easy Limits....
Calculus Level 2
The solutions of 1 and 2 are $$a, b$$ respectively, which are positive integers.
Find $$b^a$$
|
2018-01-21 10:50:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.67466139793396, "perplexity": 4067.4838083284285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890514.66/warc/CC-MAIN-20180121100252-20180121120252-00245.warc.gz"}
|
http://ftp.tug.org/mail/archives/pdftex/2006-February/006354.html
|
[pdftex] Bookmark Text Color
Gruda, Jeffrey D jdgruda at sandia.gov
Tue Feb 21 18:44:10 CET 2006
Is there a way to change the color of the text in a bookmark? I would
like each level to have a different color.
\pdfbookmark[0]{Processor Comparison-Plots}{pc0}
I have tried:
\textcolor{red}{\pdfbookmark[0]{Code Comparison-Plots}{cc0}}
\pdfbookmark[0]{\textcolor{red}{Code Comparison-Plots}}{cc0}}
And neither seem to work.
Any help is appreciated.
Thanks,
Jeff
|
2023-01-30 17:38:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.949131190776825, "perplexity": 9384.369854720071}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00117.warc.gz"}
|
https://www.quizover.com/online/course/21-5-null-measurements-circuits-bioelectricity-and-dc-by-openstax?page=2
|
# 21.5 Null measurements (Page 3/8)
Page 3 / 8
${I}_{1}{R}_{1}={I}_{2}{R}_{3}.$
Again, since b and d are at the same potential, the $\text{IR}$ drop along dc must equal the $\text{IR}$ drop along bc. Thus,
${I}_{1}{R}_{2}={I}_{2}{R}_{\text{x}}.$
Taking the ratio of these last two expressions gives
$\frac{{I}_{1}{R}_{1}}{{I}_{1}{R}_{2}}=\frac{{I}_{2}{R}_{3}}{{I}_{2}{R}_{x}}.$
Canceling the currents and solving for ${R}_{\text{x}}$ yields
${R}_{\text{x}}={R}_{3}\frac{{R}_{2}}{{R}_{1}}.$
This equation is used to calculate the unknown resistance when current through the galvanometer is zero. This method can be very accurate (often to four significant digits), but it is limited by two factors. First, it is not possible to get the current through the galvanometer to be exactly zero. Second, there are always uncertainties in ${R}_{1}$ , ${R}_{2}$ , and ${R}_{3}$ , which contribute to the uncertainty in ${R}_{x}$ .
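As a quick numerical illustration of this relation (the resistor values below are invented for the example, not taken from the text):
$R_{\text{x}} = R_3\frac{R_2}{R_1} = (2000\ \Omega)\frac{150\ \Omega}{100\ \Omega} = 3000\ \Omega$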
Identify other factors that might limit the accuracy of null measurements. Would the use of a digital device that is more sensitive than a galvanometer improve the accuracy of null measurements?
One factor would be resistance in the wires and connections in a null measurement. These are impossible to make zero, and they can change over time. Another factor would be temperature variations in resistance, which can be reduced but not completely eliminated by choice of material. Digital devices sensitive to smaller currents than analog devices do improve the accuracy of null measurements because they allow you to get the current closer to zero.
## Section summary
• Null measurement techniques achieve greater accuracy by balancing a circuit so that no current flows through the measuring device.
• One such device, for determining voltage, is a potentiometer.
• Another null measurement device, for determining resistance, is the Wheatstone bridge.
• Other physical quantities can also be measured with null measurement techniques.
## Conceptual questions
Why can a null measurement be more accurate than one using standard voltmeters and ammeters? What factors limit the accuracy of null measurements?
If a potentiometer is used to measure cell emfs on the order of a few volts, why is it most accurate for the standard ${\text{emf}}_{\text{s}}$ to be the same order of magnitude and the resistances to be in the range of a few ohms?
## Problem exercises
What is the ${\text{emf}}_{\text{x}}$ of a cell being measured in a potentiometer, if the standard cell’s emf is 12.0 V and the potentiometer balances for ${R}_{\text{x}}=5\text{.}\text{000}\phantom{\rule{0.15em}{0ex}}\Omega$ and ${R}_{\text{s}}=2\text{.}\text{500}\phantom{\rule{0.15em}{0ex}}\Omega$ ?
24.0 V
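This answer follows from the potentiometer balance relation used throughout this section, in which the unknown and standard emfs are in the same ratio as their balancing resistances:
$\text{emf}_{\text{x}} = \text{emf}_{\text{s}}\frac{R_{\text{x}}}{R_{\text{s}}} = (12.0\ \text{V})\frac{5.000\ \Omega}{2.500\ \Omega} = 24.0\ \text{V}$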
Calculate the ${\text{emf}}_{\text{x}}$ of a dry cell for which a potentiometer is balanced when ${R}_{\text{x}}=1\text{.}\text{200}\phantom{\rule{0.25em}{0ex}}\Omega$ , while an alkaline standard cell with an emf of 1.600 V requires ${R}_{\text{s}}=1\text{.}\text{247}\phantom{\rule{0.25em}{0ex}}\Omega$ to balance the potentiometer.
When an unknown resistance ${R}_{\text{x}}$ is placed in a Wheatstone bridge, it is possible to balance the bridge by adjusting ${R}_{3}$ to be $\text{2500}\phantom{\rule{0.25em}{0ex}}\Omega$ . What is ${R}_{\text{x}}$ if $\frac{{R}_{2}}{{R}_{1}}=0\text{.}\text{625}$ ?
$1\text{.}\text{56 k}\Omega$
To what value must you adjust ${R}_{3}$ to balance a Wheatstone bridge, if the unknown resistance ${R}_{\text{x}}$ is $\text{100}\phantom{\rule{0.15em}{0ex}}\Omega$ , ${R}_{1}$ is $\text{50}\text{.}0\phantom{\rule{0.15em}{0ex}}\Omega$ , and ${R}_{2}$ is $\text{175}\phantom{\rule{0.15em}{0ex}}\Omega$ ?
(a) What is the unknown ${\text{emf}}_{\text{x}}$ in a potentiometer that balances when ${R}_{\text{x}}$ is $\text{10}\text{.}0\phantom{\rule{0.15em}{0ex}}\Omega$ , and balances when ${R}_{\text{s}}$ is $\text{15}\text{.}0\phantom{\rule{0.15em}{0ex}}\Omega$ for a standard 3.000-V emf? (b) The same ${\text{emf}}_{\text{x}}$ is placed in the same potentiometer, which now balances when ${R}_{\text{s}}$ is $\text{15}\text{.}0\phantom{\rule{0.15em}{0ex}}\Omega$ for a standard emf of 3.100 V. At what resistance ${R}_{\text{x}}$ will the potentiometer balance?
(a) 2.00 V
(b) $9\text{.}\text{68}\phantom{\rule{0.25em}{0ex}}\Omega$
Suppose you want to measure resistances in the range from $\text{10}\text{.}0\phantom{\rule{0.25em}{0ex}}\Omega$ to $\text{10}\text{.}0 k\Omega$ using a Wheatstone bridge that has $\frac{{R}_{2}}{{R}_{1}}=2\text{.}\text{000}$ . Over what range should ${R}_{3}$ be adjustable?
$\text{Range = 5}\text{.}\text{00}\phantom{\rule{0.25em}{0ex}}\Omega \phantom{\rule{0.25em}{0ex}}\text{to}\phantom{\rule{0.25em}{0ex}}5\text{.}\text{00}\phantom{\rule{0.25em}{0ex}}\text{k}\Omega$
|
2018-05-24 06:24:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 48, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5978444218635559, "perplexity": 1237.6378035156245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865928.45/warc/CC-MAIN-20180524053902-20180524073902-00272.warc.gz"}
|
https://socratic.org/questions/if-sides-a-and-b-of-a-triangle-have-lengths-of-12-and-3-respectively-and-the-ang
|
# If sides A and B of a triangle have lengths of 12 and 3 respectively, and the angle between them is (7pi)/8, then what is the area of the triangle?
Apr 30, 2017
This can be solved using the sine rule!
#### Explanation:
The sine rule is used to find the area of the triangle
$\text{area} = \frac{1}{2} \left(a\right) \left(b\right) \sin C$
Angle $C$ has to be expressed in degrees, by multiplying $\frac{7 \pi}{8}$ by ${180}^{\circ} / \pi$
In this case, you have
$\text{area} = \frac{1}{2} \left(12\right) \left(3\right) \left(\sin {157.5}^{\circ}\right) = 18 \left(0.3827\right) \approx 6.89$ square units
|
2021-09-24 09:23:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9255415797233582, "perplexity": 194.5097283233931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057508.83/warc/CC-MAIN-20210924080328-20210924110328-00071.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-1-tools-of-geometry-1-8-perimeter-circumference-and-area-practice-and-problem-solving-exercises-page-66/44
|
## Geometry: Common Core (15th Edition)
Published by Prentice Hall
# Chapter 1 - Tools of Geometry - 1-8 Perimeter, Circumference, and Area - Practice and Problem-Solving Exercises - Page 66: 44
#### Answer
$9346.2\ mm^2$
#### Work Step by Step
Find the area of the large circle and deduct the area of the small unshaded circle to find the shaded area. The radius of a circle is half the diameter, so the radius of the large circle is 60 mm, and the radius of the small circle is 25 mm. $A_1=\pi r^2=\pi(60^2)=\pi(3600)=11309.7$ $A_2=\pi r^2=\pi(25^2)=\pi(625)=1963.5$ $A=A_1-A_2=11309.7-1963.5=9346.2$
|
2019-01-18 05:42:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49581485986709595, "perplexity": 801.0754201408569}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659890.6/warc/CC-MAIN-20190118045835-20190118071835-00144.warc.gz"}
|
https://math.stackexchange.com/questions/992304/if-the-series-sum-a-n-and-the-sequence-b-n-converge-does-the-series-su
|
# If the series $\sum a_n$ and the sequence $(b_n)$ converge, does the series $\sum a_n b_n$ also converge?
If the series $\sum a_n$ and the sequence $(b_n)$ converge, does series $\sum a_n b_n$ also converge?
I think there should be some extra conditions to make this true, like $(b_n)$ is monotone or $\sum a_n$ converges absolutely.
So is there a counter example to show it's not necessarily true?
• $b_n = \frac{1}{(n - a)^2}$ for $a > 0$ will counter it if $b_n$ is not required to be monotone. – Axoren Oct 26 '14 at 19:54
## 1 Answer
Hint: $a_n=b_n=(-1)^n/\sqrt{n}$, $a_nb_n=1/n$...
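Spelling the hint out (a short verification added for completeness): with $a_n=b_n=(-1)^n/\sqrt{n}$, the series $\sum a_n$ converges by the alternating series test and the sequence $(b_n)$ converges to $0$, yet
$$\sum a_n b_n = \sum \frac{1}{n}$$
is the harmonic series, which diverges. So the answer is no without extra hypotheses such as absolute convergence of $\sum a_n$ or monotonicity of $(b_n)$.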
|
2019-07-15 18:40:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.902706503868103, "perplexity": 229.80850120810166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195523840.34/warc/CC-MAIN-20190715175205-20190715201205-00250.warc.gz"}
|
https://www.uni-muenster.de/FB10/Service/show_article.shtml?id=7558&brettid=48
|
|
### Oberseminar Differentialgeometrie: Niels Martin Møller (Kopenhagen): Forgetful classifications of ancient mean curvature flows
##### Monday, 17.06.2019, 16:15 in room SR4
Abstract: Ancient flows arise as singularity models via blow-ups, and the first examples were found by Mullins in 1956. Insight from gluing constructions indicate that classifying them fully is not viable, except under e.g. pointwise positive curvature assumptions. However, without any such restrictions on curvatures or topology, if one applies certain "forgetful" operations - discard time coordinates and take convex hulls - then we show that only four types of behavior may occur. For this, we first prove a natural "wedge theorem" for proper ancient flows, which adds to a long story: F.ex. it generalizes our own wedge theorem for self-translaters from 2018 (the main motivating case in the talk) which implies the minimal surface case by Hoffman-Meeks from 1990 that in turn contains the classical cone theorem by Omori from 1967. This is joint work with Francesco Chini (U Copenhagen).
|
2019-06-19 20:58:31
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667070269584656, "perplexity": 6479.427902669763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999041.59/warc/CC-MAIN-20190619204313-20190619230313-00004.warc.gz"}
|
http://clay6.com/qa/12041/a-particle-of-mass-m-slides-a-distance-d-down-a-plane-inclined-at-theta-to-
|
# A particle of mass m slides a distance d down a plane inclined at $\theta$ to the horizontal. The work done by the normal reaction R is
$(a)\;0 \quad (b)\;Rd \quad (c)\; mgd\;\cos \theta \quad (d)\;mgd\;\sin \theta$
Since the normal reaction R is perpendicular to the direction of displacement 'd'
$W=F \cdot d$
$W=R \cdot d \cos \theta$
$\quad=Rd \cos 90$
$\quad=0$
Hence a is the correct answer.
|
2017-06-24 00:11:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7249593734741211, "perplexity": 462.050353791616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320206.44/warc/CC-MAIN-20170623235306-20170624015306-00458.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-vertex-for-y-x-2-x-2
|
# How do you find the vertex for y=x^2-x-2?
Aug 2, 2017
Vertex is at $\left(\frac{1}{2} , - 2 \frac{1}{4}\right)$
#### Explanation:
$y = {x}^{2} - x - 2$ or $y = {x}^{2} - x + {\left(\frac{1}{2}\right)}^{2} - \frac{1}{4} - 2$ or
$y = {\left(x - \frac{1}{2}\right)}^{2} - \frac{9}{4}$ , Comparing with vertex form of equation
$y = a {\left(x - h\right)}^{2} + k$; $\left(h , k\right)$ being vertex, we find here
$h = \frac{1}{2} , k = - \frac{9}{4}$ . So vertex is at $\left(\frac{1}{2} , - 2 \frac{1}{4}\right)$
graph{x^2-x-2 [-10, 10, -5, 5]} [Ans]
|
2021-09-19 02:21:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8862589001655579, "perplexity": 14982.619173507666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00636.warc.gz"}
|
https://saylordotorg.github.io/text_introductory-statistics/s05-introduction.html
|
# Chapter 1 Introduction
In this chapter we will introduce some basic terminology and lay the groundwork for the course. We will explain in general terms what statistics and probability are and the problems that these two areas of study are designed to solve.
## 1.1 Basic Definitions and Concepts
### Learning Objective
1. To learn the basic definitions used in statistics and some of its key concepts.
We begin with a simple example. There are millions of passenger automobiles in the United States. What is their average value? It is obviously impractical to attempt to solve this problem directly by assessing the value of every single car in the country, adding up all those numbers, and then dividing by however many numbers there are. Instead, the best we can do would be to estimate the average. One natural way to do so would be to randomly select some of the cars, say 200 of them, ascertain the value of each of those cars, and find the average of those 200 numbers. The set of all those millions of vehicles is called the population of interest, and the number attached to each one, its value, is a measurement. The average value is a parameter: a number that describes a characteristic of the population, in this case monetary worth. The set of 200 cars selected from the population is called a sample, and the 200 numbers, the monetary values of the cars we selected, are the sample data. The average of the data is called a statistic: a number calculated from the sample data. This example illustrates the meaning of the following definitions.
### Definition
A population is any specific collection of objects of interest. A sample is any subset or subcollection of the population, including the case that the sample consists of the whole population, in which case it is termed a census.
### Definition
A measurement is a number or attribute computed for each member of a population or of a sample. The measurements of sample elements are collectively called the sample data.
### Definition
A parameter is a number that summarizes some aspect of the population as a whole. A statistic is a number computed from the sample data.
Continuing with our example, if the average value of the cars in our sample was $8,357, then it seems reasonable to conclude that the average value of all cars is about $8,357. In reasoning this way we have drawn an inference about the population based on information obtained from the sample. In general, statistics is a study of data: describing properties of the data, which is called descriptive statistics, and drawing conclusions about a population of interest from information extracted from a sample, which is called inferential statistics. Computing the single number $8,357 to summarize the data was an operation of descriptive statistics; using it to make a statement about the population was an operation of inferential statistics.
### Definition
Statistics is a collection of methods for collecting, displaying, analyzing, and drawing conclusions from data.
### Definition
Descriptive statistics is the branch of statistics that involves organizing, displaying, and describing data.
### Definition
Inferential statistics is the branch of statistics that involves drawing conclusions about a population based on information contained in a sample taken from that population.
The measurement made on each element of a sample need not be numerical. In the case of automobiles, what is noted about each car could be its color, its make, its body type, and so on. Such data are categorical or qualitative, as opposed to numerical or quantitative data such as value or age. This is a general distinction.
### Definition
Qualitative data are measurements for which there is no natural numerical scale, but which consist of attributes, labels, or other nonnumerical characteristics.
### Definition
Quantitative data are numerical measurements that arise from a natural numerical scale.
Qualitative data can generate numerical sample statistics. In the automobile example, for instance, we might be interested in the proportion of all cars that are less than six years old. In our same sample of 200 cars we could note for each car whether it is less than six years old or not, which is a qualitative measurement. If 172 cars in the sample are less than six years old, which is 0.86 or 86%, then we would estimate the parameter of interest, the population proportion, to be about the same as the sample statistic, the sample proportion, that is, about 0.86.
The relationship between a population of interest and a sample drawn from that population is perhaps the most important concept in statistics, since everything else rests on it. This relationship is illustrated graphically in Figure 1.1 "The Grand Picture of Statistics". The circles in the large box represent elements of the population. In the figure there was room for only a small number of them but in actual situations, like our automobile example, they could very well number in the millions. The solid black circles represent the elements of the population that are selected at random and that together form the sample. For each element of the sample there is a measurement of interest, denoted by a lower case x (which we have indexed as $x_1, \ldots, x_n$ to tell them apart); these measurements collectively form the sample data set. From the data we may calculate various statistics.
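To make the idea of a statistic concrete, here is a tiny computational sketch (the sample values are invented for illustration and are not data from the text):
# Hypothetical sample of 10 car values (in dollars) and ages (in years).
values = [8200, 9100, 7600, 8900, 8400, 7900, 9500, 8100, 8700, 8300]
ages = [3, 7, 2, 5, 8, 4, 1, 6, 2, 3]

# A statistic: the sample mean of the values.
sample_mean = sum(values) / len(values)

# Another statistic: the sample proportion of cars less than six years old.
sample_proportion = sum(1 for a in ages if a < 6) / len(ages)

print(sample_mean, sample_proportion)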
To anticipate the notation that will be used later, we might compute the sample mean $\bar{x}$ and the sample proportion $\hat{p}$, and take them as approximations to the population mean μ (this is the lower case Greek letter mu, the traditional symbol for this parameter) and the population proportion p, respectively. The other symbols in the figure stand for other parameters and statistics that we will encounter.

Figure 1.1 The Grand Picture of Statistics

### Key Takeaways

• Statistics is a study of data: describing properties of data (descriptive statistics) and drawing conclusions about a population based on information in a sample (inferential statistics).
• The distinction between a population together with its parameters and a sample together with its statistics is a fundamental concept in inferential statistics.
• Information in a sample is used to make inferences about the population from which the sample was drawn.

### Exercises

1. Explain what is meant by the term population.
2. Explain what is meant by the term sample.
3. Explain how a sample differs from a population.
4. Explain what is meant by the term sample data.
5. Explain what a parameter is.
6. Explain what a statistic is.
7. Give an example of a population and two different characteristics that may be of interest.
8. Describe the difference between descriptive statistics and inferential statistics. Illustrate with an example.
9. Identify each of the following data sets as either a population or a sample:
   1. The grade point averages (GPAs) of all students at a college.
   2. The GPAs of a randomly selected group of students on a college campus.
   3. The ages of the nine Supreme Court Justices of the United States on January 1, 1842.
   4. The gender of every second customer who enters a movie theater.
   5. The lengths of Atlantic croakers caught on a fishing trip to the beach.
10. Identify the following measures as either quantitative or qualitative:
   1. The 30 high-temperature readings of the last 30 days.
   2. The scores of 40 students on an English test.
   3. The blood types of 120 teachers in a middle school.
   4. The last four digits of social security numbers of all students in a class.
   5. The numbers on the jerseys of 53 football players on a team.
11. Identify the following measures as either quantitative or qualitative:
   1. The genders of the first 40 newborns in a hospital one year.
   2. The natural hair color of 20 randomly selected fashion models.
   3. The ages of 20 randomly selected fashion models.
   4. The fuel economy in miles per gallon of 20 new cars purchased last month.
   5. The political affiliation of 500 randomly selected voters.
12. A researcher wishes to estimate the average amount spent per person by visitors to a theme park. He takes a random sample of forty visitors and obtains an average of $28 per person.
1. What is the population of interest?
2. What is the parameter of interest?
3. Based on this sample, do we know the average amount spent per person by visitors to the park? Explain fully.
13. A researcher wishes to estimate the average weight of newborns in South America in the last five years. He takes a random sample of 235 newborns and obtains an average of 3.27 kilograms.
1. What is the population of interest?
2. What is the parameter of interest?
3. Based on this sample, do we know the average weight of newborns in South America? Explain fully.
14. A researcher wishes to estimate the proportion of all adults who own a cell phone. He takes a random sample of 1,572 adults; 1,298 of them own a cell phone, hence 1298∕1572 ≈ .83 or about 83% own a cell phone.
1. What is the population of interest?
2. What is the parameter of interest?
3. What is the statistic involved?
4. Based on this sample, do we know the proportion of all adults who own a cell phone? Explain fully.
15. A sociologist wishes to estimate the proportion of all adults in a certain region who have never married. In a random sample of 1,320 adults, 145 have never married, hence 145∕1320 ≈ .11 or about 11% have never married.
1. What is the population of interest?
2. What is the parameter of interest?
3. What is the statistic involved?
4. Based on this sample, do we know the proportion of all adults who have never married? Explain fully.
1. What must be true of a sample if it is to give a reliable estimate of the value of a particular population parameter?
2. What must be true of a sample if it is to give certain knowledge of the value of a particular population parameter?
1. A population is the total collection of objects that are of interest in a statistical study.
2. A sample, being a subset, is typically smaller than the population. In a statistical study, all elements of a sample are available for observation, which is not typically the case for a population.
3. A parameter is a value describing a characteristic of a population. In a statistical study the value of a parameter is typically unknown.
4. All currently registered students at a particular college form a population. Two population characteristics of interest could be the average GPA and the proportion of students over 23 years.
1. Population.
2. Sample.
3. Population.
4. Sample.
5. Sample.
1. Qualitative.
2. Qualitative.
3. Quantitative.
4. Quantitative.
5. Qualitative.
1. All newborn babies in South America in the last five years.
2. The average birth weight of all newborn babies in South America in the last five years.
3. No, not exactly, but we know the approximate value of the average.
1. All adults in the region.
2. The proportion of the adults in the region who have never married.
3. The proportion computed from the sample, 0.11.
4. No, not exactly, but we know the approximate value of the proportion.
## 1.2 Overview
### Learning Objective
1. To obtain an overview of the material in the text.
The example we have given in the first section seems fairly simple, but there are some significant problems that it illustrates. We have supposed that the 200 cars of the sample had an average value of $8,357 (a number that is precisely known), and concluded that the population has an average of about the same amount, although its precise value is still unknown. What would happen if someone were to take another sample of exactly the same size from exactly the same population? Would he get the same sample average as we did, $8,357? Almost surely not. In fact, if the investigator who took the second sample were to report precisely the same value, we would immediately become suspicious of his result. The sample average is an example of what is called a random variable: a number that varies from trial to trial of an experiment (in this case, from sample to sample), and does so in a way that cannot be predicted precisely. Random variables will be a central object of study for us, beginning in Chapter 4 "Discrete Random Variables".
Another issue that arises is that different samples have different levels of reliability. We have supposed that our sample of size 200 had an average of $8,357. If a sample of size 1,000 yielded an average value of $7,832, then we would naturally regard this latter number as likely to be a better estimate of the average value of all cars. How can this be expressed? An important idea that we will develop in Chapter 7 "Estimation" is that of the confidence interval: from the data we will construct an interval of values so that the process has a certain chance, say a 95% chance, of generating an interval that contains the actual population average. Thus instead of reporting a single estimate, $8,357, for the population mean, we would say that we are 95% certain that the true average is within $100 of our sample mean, that is, between $8,257 and $8,457, the number $100 having been computed from the sample data just like the sample mean $8,357 was. This will automatically indicate the reliability of the sample, since to obtain the same chance of containing the unknown parameter a large sample will typically produce a shorter interval than a small one will. But unless we perform a census, we can never be completely sure of the true average value of the population; the best that we can do is to make statements of probability, an important concept that we will begin to study formally in Chapter 3 "Basic Concepts of Probability".
Sampling may be done not only to estimate a population parameter, but to test a claim that is made about that parameter. Suppose a food package asserts that the amount of sugar in one serving of the product is 14 grams. A consumer group might suspect that it is more. How would they test the competing claims about the amount of sugar, 14 grams versus more than 14 grams? They might take a random sample of perhaps 20 food packages, measure the amount of sugar in one serving of each one, and average those amounts. They are not interested in the true amount of sugar in one serving in itself; their interest is simply whether the claim about the true amount is accurate. Stated another way, they are sampling not in order to estimate the average amount of sugar in one serving, but to see whether that amount, whatever it may be, is larger than 14 grams. Again because one can have certain knowledge only by taking a census, ideas of probability enter into the analysis. We will examine tests of hypotheses beginning in Chapter 8 "Testing Hypotheses".
Several times in this introduction we have used the term “random sample.” Generally the value of our data is only as good as the sample that produced it. For example, suppose we wish to estimate the proportion of all students at a large university who are females, which we denote by p. If we select 50 students at random and 27 of them are female, then a natural estimate is $p \approx \hat{p} = 27/50 = 0.54$ or 54%. How much confidence we can place in this estimate depends not only on the size of the sample, but on its quality, whether or not it is truly random, or at least truly representative of the whole population. If all 50 students in our sample were drawn from a College of Nursing, then the proportion of female students in the sample is likely higher than that of the entire campus. If all 50 students were selected from a College of Engineering Sciences, then the proportion of students in the entire student body who are females could be underestimated. In either case, the estimate would be distorted or biased. In statistical practice an unbiased sampling scheme is important but in most cases not easy to produce. For this introductory course we will assume that all samples are either random or at least representative.
### Key Takeaway
• Statistics computed from samples vary randomly from sample to sample. Conclusions made about population parameters are statements of probability.
## 1.3 Presentation of Data
### Learning Objective
1. To learn two ways that data will be presented in the text.
In this book we will use two formats for presenting data sets. The first is a data list, which is an explicit listing of all the individual measurements, either as a display with space between the individual measurements, or in set notation with individual measurements separated by commas.
### Example 1
The data obtained by measuring the age of 21 randomly selected students enrolled in freshman courses at a university could be presented as the data list
18 18 19 19 19 18 22 20 18 18 17 19 18 24 18 20 18 21 20 17 19
or in set notation as
${18,18,19,19,19,18,22,20,18,18,17,19,18,24,18,20,18,21,20,17,19}$
A data set can also be presented by means of a data frequency table, a table in which each distinct value x is listed in the first row and its frequency f, which is the number of times the value x appears in the data set, is listed below it in the second row.
### Example 2
The data set of the previous example is represented by the data frequency table
x: 17 18 19 20 21 22 24
f:  2  8  5  3  1  1  1
The data frequency table is especially convenient when data sets are large and the number of distinct values is not too large.
### Key Takeaway
• Data sets can be presented either by listing all the elements or by giving a table of values and frequencies.
### Exercises
1. List all the measurements for the data set represented by the following data frequency table.
x: 31 32 33 34 35
f:  1  5  6  4  2
2. List all the measurements for the data set represented by the following data frequency table.
x: 97 98 99 100 101 102 103 105
f:  7  5  3   4   2   2   1   1
3. Construct the data frequency table for the following data set.
22 25 22 27 24 23 26 24 22 24 26
4. Construct the data frequency table for the following data set.
${1,5,2,3,5,1,4,4,4,3,2,5,1,3,2,1,1,1,2}$
2. x: 22 23 24 25 26 27
   f:  3  1  3  1  2  1
https://stats.stackexchange.com/questions/267234/ols-with-nested-into-the-regressor/267333
# OLS with nested into the regressor
Given the following model:
$$y_t = \alpha x_t + \epsilon_t~~~~,~~~\epsilon_t \sim NID(0,\sigma^2) ~\text{and}~~x_t = y_t + z_t$$
Given that $z_t$ is uncorrelated with $\epsilon_t$, how is it possible to derive the following unbiased OLS estimator of $\alpha$:
$$1 - \dfrac{\sum_{t=1}^T z_t^2}{\sum_{t=1}^T z_t y_t + \sum_{t=1}^T z_t^2}$$
Substitute $x_t$ in the model to obtain $y_t=\alpha y_t + \alpha z_t + \varepsilon_t$. Manipulating, $y_t = \frac{\alpha}{1-\alpha}z_t + \frac{1}{1-\alpha}\varepsilon_t$. Rename the errors $\delta_t=\frac{1}{1-\alpha}\varepsilon_t$, then $\delta_t\sim NID(0,\tau^2)$, where $\tau^2=\frac{\sigma^2}{(1-\alpha)^2}$, so homoscedasticity still holds. Rename $\beta=\frac{\alpha}{1-\alpha}$. You now have the model $y_t=\beta z_t+\delta_t$, with the usual assumptions. The OLS estimator of $\beta$ is $$\hat{\beta}=\frac{\sum_{t=1}^T z_t y_t}{\sum_{t=1}^T z_t^2}.$$ Hence, $$\frac{\hat{\alpha}}{1-\hat{\alpha}}=\frac{\sum_{t=1}^T z_t y_t}{\sum_{t=1}^T z_t^2}.$$ Solving for $\hat{\alpha}$, $$\hat{\alpha}=\frac{\sum_{t=1}^T z_t y_t}{\sum_{t=1}^T z_t y_t+\sum_{t=1}^T z_t^2}=\frac{\sum_{t=1}^T z_t y_t + \sum_{t=1}^T z_t^2 - \sum_{t=1}^T z_t^2}{\sum_{t=1}^T z_t y_t+\sum_{t=1}^T z_t^2},$$
so $$\hat{\alpha}=1-\frac{\sum_{t=1}^T z_t^2}{\sum_{t=1}^T z_t y_t+\sum_{t=1}^T z_t^2}.$$
• Multiply both sides by $1-\hat{\alpha}$ and by $\sum_{t=1}^T z_t^2$ and rearrange terms. And sorry for mixing the $\hat{\alpha}$ notation and the $a$ notation, let me correct it. – Anna SdTC Mar 14 '17 at 9:02
https://gateoverflow.in/289474/self-doubt
# self doubt
$\lim_{x\rightarrow \frac{\pi}{2}} (\cos x)^{\cos x}$
can we straight away say $0^{0}=0$ ?
Cos0 =1
and yes 1^1=1
no we cant straight away say that 0^0 = 0 , its an indeterminate form..
but when you put x->0 in the given f(x) it gives 1^1 = 1 which is acceptable..
Thanks, Corrected the question.
For such questions, take log on both sides and then proceed by breaking the RHS into simpler parts, which gives log(cosx)/(1/cosx), again an indeterminate form; so differentiate numerator and denominator and then put in the value of the limit x -> pi/2.
It reduces to log y = 0,
so y = e^0 = 1, the answer.
THANKSS
Nice approach !
## 1 Answer
For the original version of the question (x --> 0), cos(x) approaches 1, so cos(x)^cos(x) --> 1^1 = 1.
For the corrected limit x --> pi/2, cos(x) approaches 0, giving the indeterminate form 0^0; taking logarithms as in the comment above shows the limit is still 1.
https://dsp.stackexchange.com/posts/29211/revisions
# Double Integrating Gaussian Noise
My question has to do with integrating gaussian noise.
Let us assume we have samples of discrete gaussian white noise with mean $$\mu = 0$$ and variance $$\sigma_{th}^2$$. These noise samples are passed through the system shown in the Figure (a cascade of two integrators with outputs $$y_1[n]$$ and $$y_2[n]$$, respectively).
What will be the mean and variance of $$y_1$$ and $$y_2$$ (let us say after $$N$$ cycles)?
https://www.rdocumentation.org/packages/base/versions/3.1.3/topics/all.names
all.names
Find All Names in an Expression
Return a character vector containing all the names which occur in an expression or call.
Keywords
programming
Usage
all.names(expr, functions = TRUE, max.names = -1L, unique = FALSE)
all.vars(expr, functions = FALSE, max.names = -1L, unique = TRUE)
Arguments
expr
an expression or call from which the names are to be extracted.
functions
a logical value indicating whether function names should be included in the result.
max.names
the maximum number of names to be returned. -1 indicates no limit (other than vector size limits).
unique
a logical value which indicates whether duplicate names should be removed from the value.
Details
These functions differ only in the default values for their arguments.
Value
A character vector with the extracted names.
See also: substitute, to replace symbols with values in an expression.
library(base)
all.names(expression(sin(x+y)))
all.names(quote(sin(x+y)))  # or a call
all.vars(expression(sin(x+y)))
https://math.stackexchange.com/questions/1736506/closed-graph-theorem-for-sobolev-space
# closed graph theorem for sobolev space
The problem is:
If $H_s\subset BC^k$, show that $\sup_{|\alpha|\leq k}|\partial^\alpha f|\leq C\|f\|_s$ using the closed graph theorem.
As I understand, the linear operator/map $\partial^\alpha$ maps $H_s$ to $C^1$, and it is a bounded operator if the linear map is closed.
I am a bit confused about how to use the fact that $H_s\subset BC^k$ there?
P.S. The $BC^k$ is the space of $C^k$ functions $f$ such that $\partial^\alpha f$ is bounded for $\alpha \leq k$. $H_s$ is Sobolev space of order $s$.
Note that if $H_s\subset BC^k$ then the map $I:H_s\to BC^k$ s.t. $f\mapsto I(f)=f$ is a well defined linear operator. Let's prove it has a closed graph.
Let $((f_n,f_n))_n$ be a sequence in the graph of $I$, s.t. it converges to some $(f,g)\in H_s\times BC^k$. We have to prove $g=f$. If $g\neq f$, then there are $\epsilon,r>0$ and $x_0$ s.t. $\Vert f(x)-g(x)\Vert>\epsilon$ if $x\in B(x_0,r)$. Hence $$\forall n, \exists m>n, x\in B(x_0,r)\implies \Vert f_m(x)-f(x)\Vert>\epsilon/2$$ from which we deduce there is some $\delta=\delta(\epsilon)>0$ s.t. $$\forall n, \exists m>n, \Vert f_m-f\Vert_s>\delta$$ which is a contradiction as $f_m\to f$ in $H_s$.
By closed graph theorem, this implies $I$ is continuous, ie, bounded, ie, $$\exists C>0, \forall f\in H_s, \Vert f\Vert_{BC^k}= \sup_{|\alpha|\leq k}\Vert\partial^\alpha f \Vert_\infty\leq C\Vert f\Vert_s$$
• Thank you! just few questions, $I$ is just identity operator, right? – Jane Apr 10 '16 at 21:55
• and what is the necessity of $BC^k$ there? why we can not do the same thing for just $C^k$? – Jane Apr 10 '16 at 21:56
• @Jane. You are welcome. As for the first question, indeed, $I$ it is the identity operator. As for the second question, I think this is more a technicality than anything else, as there are functions in $C^k$ whose norm is infinite (like $f(x)=x$) – Nate River Apr 10 '16 at 22:01
• ah, you mean for $C^k$ functions the left side of last line inequality will be infinite? – Jane Apr 10 '16 at 22:03
• Not exactly. The problem is that $C^k$ with that norm is not really a normed vector space (the norm is not defined on all that space). Therefore, if we replace $BC^k$ with $C^k$, then we can't really apply the closed graph theorem. – Nate River Apr 10 '16 at 22:13
https://questioncove.com/updates/5ab1d18d65972e57d7e6f800
Juila12001:
need help with checking answers and #10 and 12 https://ibb.co/gLvnHx
1 month ago
#10 is a 30-60-90 triangle. Sides opposite to angles: 90 - 2a, 60 - a sqrt 3, 30 - a. They give the hypotenuse, which is 2x. You can solve for y, the side opposite to angle 30. You can do this by dividing (4 sqrt 3) by 2. From 2a -> a: $\frac{ 4 \sqrt 3 }{ 2 } = y$
1 month ago
y = a as per the 30 60 90 rule The side opposite to the angle 60 is a sqrt 3 So we can do the following, $a \sqrt 3$ $a = \frac{ 4 \sqrt 3}{ 2 }$ $(\frac{ 4 \sqrt 3 }{ 2}) \sqrt 3$ Basically substituting a in. Then we simplify $\frac{ 4(3) }{2 } = \frac{ 12 }{ 2 } = 6$
1 month ago
Remember that $\sqrt x \times \sqrt x = x$
1 month ago
It seems that your main error though for 10 was putting the wrong values for the sides. You had x for the hypotenuse, 2x for the angle opposite to 60, and x sqrt 3 for the angle opposite to 30.
1 month ago
In actuality it's: hypotenuse - 2x, 60 - x sqrt 3, 30 - x
1 month ago
Juila12001:
so for 10 is th e answer 6 or (4 square root 3 over 2) square rooted by 3
1 month ago
Correct, for x. $y = \frac{ 4 \sqrt 3 }{ 2 }$
1 month ago
Juila12001:
then where did the 6 come from
1 month ago
$6 = (\frac{ 4 \sqrt 3 }{ 2 }) \sqrt 3$
1 month ago
Do you understand?
1 month ago
Juila12001:
oh yea i was just confused bc i thought u meant 6 was the answer
1 month ago
Juila12001:
wait so the answer is 6= (4 square 3 over 2) square ?
1 month ago
$x = (\frac{ 4 \sqrt 3 }{ 2 }) \sqrt 3$ $y = \frac{ 4 \sqrt 3 }{ 2 }$
1 month ago
x also equals 6
1 month ago
6 is better though, since it's simplified.
1 month ago
Juila12001:
so x is 6?
1 month ago
yus
1 month ago
Juila12001:
so u don't have to write 6= (4 square 3 over 2) square ?
1 month ago
nope
1 month ago
All good?
1 month ago
Or is there another one you need help with?
1 month ago
Juila12001:
can check if 8 is correct
1 month ago
[whiteboard drawing]
1 month ago
This means: [whiteboard drawing]
1 month ago
So we can do $7 = x \sqrt 3$ $\frac{ 7 }{ \sqrt 3} = x$ $\frac{ \sqrt 3 }{ \sqrt 3 } \times (\frac{ 7 }{ \sqrt 3 }) = x$ $\frac{ 7 \sqrt 3 }{ 3} = x$
1 month ago
$x = b = \frac{ 7 \sqrt 3 }{ 3 }$
1 month ago
$2x = a = 2(\frac{ 7 \sqrt 3 }{ 3 })$
1 month ago
$a = \frac{ 14 \sqrt 3 }{ 3}$
1 month ago
Juila12001:
thank you
1 month ago
No problem. All done?
1 month ago
Juila12001:
yes
1 month ago
Haha, it seems you already had those answers. But you wrote the values wrong for each side. Don't know how that happened xD
1 month ago
Juila12001:
no
1 month ago
1 month ago
What's in bold black
1 month ago
I think you know how to do these problems. Just have to set them up correctly.
1 month ago
Juila12001:
yea i just confused setting it up
1 month ago
If you ever need help, I am usually on around this time or a bit later.
1 month ago
Juila12001:
ok thanks
1 month ago
There are some other cool math helpers too. Glad I could help.
1 month ago
https://cstheory.stackexchange.com/questions/12393/maximizing-difference-of-a-submodular-and-a-modular-function/46866
# Maximizing difference of a submodular and a modular function
I'm considering a network planning problem which is stated as follows: From the given ground set $\mathcal{V}$, select $\mathcal{A} \subseteq \mathcal{V}$ such that $$f(\mathcal{A}) - \sum_{v_i \in \mathcal{A}} c_i$$ is maximized where $f$ is a monotone submodular function and $c_i \ge 0$ is the cost of selecting $v_i$. The problem is an instance of nonomonotone submodular maximization for which a local search heuristic with approximation bound of $\frac{2}{5} - \frac{\epsilon}{n}$ is presented in
Uriel Feige, Vahab S. Mirrokni, Jan Vondrák: "Maximizing Non-Monotone Submodular Functions ", FOCS 2007.
I'm wondering if anyone is aware of a better approximation algorithm for my specific problem?
• The result of Feige, Mirrokni and Vondrak applies only to non-negative functions. Your function is not guaranteed to be non-negative unless you are making additional assumptions. I guess you meant the approximation in FMS is 2/5, not 2/4. There is an upcoming paper in FOCS 2012 by Buchbinder et al. who obtain an optimal 1/2 approximation for non-negative submodular function maximization. Without non-negativity the problem is inapproximable. See the following paper for some related work on your problem. dl.acm.org/citation.cfm?id=1616497.1616507 – Chandra Chekuri Aug 26 '12 at 23:40
• Thanks Chandra for your very helpful comment. You were right about the bound. It has been edited. – Ali Aug 27 '12 at 19:52
To simplify life, let $$\mathcal V = [n] := \{1,2,\ldots,n\}$$. For $$A \subseteq [n]$$, define $$h(A):=\sum_{i \in A}c_i$$. Note that $$h$$ defines a modular (i.e additive) set function. Now, suppose there exists $$\gamma \in [0, 1]$$ such that $$f$$ is weakly $$\gamma$$-submodular, i.e such that
$$\sum_{i \in B\setminus A}f(A \cup \{i\}) - f(A) \ge \gamma (f(A \cup B) - f(A)),\;\forall A,B \subseteq [n].$$
Then, greedy maximization produces a subset $$A^G$$ with $$k$$ elements such that
$$f(A^G)-h(A^G) \ge (1-e^{-\gamma})f(A^*)-h(A^*),$$
where $$A^*$$ is the $$k$$-element subset of $$[n]$$ which maximizes $$f(A)-h(A)$$. This is a direct consequence of Theorem 3 of this ICML paper.
https://testbook.com/question-answer/if-the-area-of-a-circular-wheel-is-154-cm2-how-ma--60cc1bdb3f5c25aa58c56a9a
# If the area of a circular wheel is 154 cm2, how many revolutions to make in travelling 1320 m distance?
1. 3
2. 30
3. 300
4. 3000
Option 4 : 3000
## Detailed Solution
Given:
Area of the circular wheel = 154 cm2
Total distance covered by the wheel = 1320 m
Concept used:
Area of circle = πr2
Circumference of circle = 2πr = Distance covered in one revolution
Number of revolution = (Total distance covered)/(Distance covered in one revolution)
Where,
π = (22/7)
r = radius of the circle
1 m = 100 cm
One revolution = Circumference of the wheel
Calculation:
Let, r be the radius of the wheel.
According to the question, we have
πr2 = 154
⇒ (22/7) × r2 = 154
⇒ r2 = 49 cm2
⇒ r = 7 cm
Now, the distance covered in one revolution
2πr = 2 × (22/7) × 7
⇒ 44 cm
Total distance = 1320 × 100
⇒ 132000 cm
Now,
Number of revolutions = (132000/44)
⇒ 3000 revolutions
3000 revolutions to travel a distance of 1320 meters.
https://www.physicsforums.com/threads/find-range-for-equilibria.784430/
# Find range for equilibria
1. Nov 27, 2014
### MathewsMD
Given two equilibrium equations for a tank 1 and tank 2 with $Q^E_1 =6(9q_1 +q_2)$ and $Q^E_2 =20(3q_1 +2q_2)$, respectively, where $q_1, q_2 ≧ 0$, describe which possible equilibrium states for various values of $q_1$ and $q_2$ are possible.
I believe I know how the answer was derived, but would like an explanation, if possible.
What was done was:
Take $\frac {Q^E_2}{Q^E_1}$ and then substitute $q_1 = 0$ to find one extrema, and then $q_2 = 0$ for another extrema. This yielded $\frac {10}{9} ≤ \frac {Q^E_2}{Q^E_1} ≤ \frac {20}{3}$. Now I understand the logic used somewhat (i.e. use the minimum values of q1 and q2 to to see where the maximum and minimum of the possible equilibria states lie), but why exactly is the ratio taken? Are not specific values for the equilibrium states wanted as per the question? How exactly does the ratio reveal the specific min and max for the equilibrium states? How do we know there is no higher or lower value for the equilibrium if $q_1, q_2 ≠ 0$?
I feel like I am missing something here and any clarification would be greatly appreciated!
2. Nov 28, 2014
### MathewsMD
When just trying to "describe" the equilibrium states, is the ratio sufficient? Does the ratio have any particular meaning?
Also, is there a way to find the exact equilibrium states as opposed to a ratio?
https://tex.stackexchange.com/questions/569361/rule-above-caption-of-lstlisting-too-long-inside-itemize
# Rule above caption of lstlisting too long inside itemize
I want to use the caption package to insert a horizontal bar above the caption/title of a lstlisting environment (+ some other stylings which are not relevant here). The minimal example below shows a (slightly simplified) call to \DeclareCaptionFormat that achieves that.
However, as one can see in the second lstlisting environment below, the rule above the caption (and the caption itself) extend to the full page width when called inside an itemize environment. I would have expected the rule above the caption to look like the two frame lines above and below the listing. What am I doing wrong?
\documentclass{article}
\usepackage{caption}
\DeclareCaptionFormat{lstlisting}{\rule{\linewidth}{0.4pt}\\#1#2#3}
\captionsetup{format = lstlisting}
\usepackage{listings}
\lstset{title=Title, frame=lines}
\begin{document}
\begin{lstlisting}
line 1
line 2
\end{lstlisting}
\begin{itemize}
\item Listing:
\begin{lstlisting}
line 1
line 2
\end{lstlisting}
\end{itemize}
\end{document}
• My first guess would be that itemize sees listing as some sort of sublist and indents it accordingly. I tried putting the listing in enumerate and description and the same thing happened. – Plergux Nov 3 '20 at 8:48
https://blender.stackexchange.com/questions/241016/generating-rgb-exr-or-both-on-demand-using-blender-python
# Generating RGB,EXR or Both on demand using Blender Python
I am trying to generate a python script for rendering RGB and/or an EXR file(for getting the mist pass) for spherical images using blender.
So I have a scene and I am using the cycles rendering engine. I want to save the OpenExr file and/or the RGB file based on the input that the user gives (for which I am using a JSON file).
For the mist pass I am using a setup similar to the following link: Getting the depth of every pixel to the center of projection of the camera in Blender
I am using the following lines of code for creating and saving the renders.
bpy.context.scene.render.filepath=output_path+str(i)
bpy.ops.render.render(use_viewport=True,write_still=True)
However, the above lines of code save both the RGB image as well as the EXR file. In order to render just the RGB images I have used the following in addition to the above lines of code and that works well:
bpy.context.scene.use_nodes = False
However, I am not able to generate just the OpenEXR file. Simply using the following also didn't help :
bpy.ops.render.render(write_still=True)
Can someone guide me with this and/or suggest a better way for the same?
• Unfortunately there is no way to disable the default output (if that's your question) -> Disable default animation output when using "File output" nodes. Oct 19, 2021 at 11:26
• Ohh ... that's bad. Anyway, thank you for your help. Oct 19, 2021 at 12:45
• No problem @Sourabh. Suggest set the default output to jpg and remove the file if necessary using python. Oct 19, 2021 at 13:51
• what happens if you set the File Output node's path to the name of the exr file? 2 files, an error, or just 1? Oct 20, 2021 at 20:17
https://hackage.haskell.org/package/Annotations-0.2.2/docs/Annotations-Except.html
Annotations-0.2.2: Constructing, analyzing and destructing annotated trees
Annotations.Except
Description
The Except datatype captures monoidal exceptions in applicative computations.
Synopsis
Documentation
data Except e a Source #
Except is like Either but is meant to be used only in applicative computations. When two exceptions are sequenced, their sum (using mappend) is computed.
Constructors
Failed e
OK a
Instances
Functor (Except e)
  fmap :: (a -> b) -> Except e a -> Except e b
  (<$) :: a -> Except e b -> Except e a
Monoid e => Applicative (Except e)
  pure :: a -> Except e a
  (<*>) :: Except e (a -> b) -> Except e a -> Except e b
  (*>) :: Except e a -> Except e b -> Except e b
  (<*) :: Except e a -> Except e b -> Except e a
(Eq a, Eq e) => Eq (Except e a)
  (==) :: Except e a -> Except e a -> Bool
  (/=) :: Except e a -> Except e a -> Bool
(Show a, Show e) => Show (Except e a)
  showsPrec :: Int -> Except e a -> ShowS
  show :: Except e a -> String
  showList :: [Except e a] -> ShowS
http://physics.stackexchange.com/tags/universe/hot
# Tag Info
13
I personally find the terms consistent. Think of the entropy as Boltzmann proposes: $S=k \, \ln W$ Meaning high entropy states can be realized via many different configurations. A truly ordered state (assume you arrange a sculpture from atoms) can be realized via a much smaller number of microscopic states. So again, equilibrium is not order - it is a mess.
8
What you are missing is the microscopic definition of entropy, once you know that, you will understand why people say that entropy is disorder. Equilibrium as order First, let's address your valid intuition that equilibrium as a form of order. Indeed, if everything is in thermal equilibrium, you just need to measure the temperature somewhere, and then you ...
6
First of all as stated by Madan Ivan: equilibrium is not order. But you can get certain systems that are in a meta-stable "local" equilibrium (here meaning that you need some energy to move it from there), for example a crystal. These can be highly ordered. Intuitively: if you smack the crystal with a hammer it breaks to pieces. This brings your closer to ...
4
There are lots of ways to make antimatter "naturally". One of the most common is pair production. A high energy photon is converted into a particle / anti-particle pair. For example, a photon with energy greater than about 1 MeV ($E > 2 \, m_\mathrm{electron}c^2$) can turn into an electron positron pair (some more considerations are needed to conserve ...
3
Entropy is not disorder; it is a lack of information. Consider the entropy formula $S = k_b \log \Omega$. Here, $\Omega$ is the number of microstates (sets of particle positions/momenta) corresponding to an observed macrostate (something macroscopic we can observe, like 'the gas has volume $V$ and pressure $P$). What this formula means is that the entropy ...
3
Your question actually contains many questions, which are all related but not so strictly so that it is possible to give a full answer to it. Is every event in the universe related to each other? There are various ways to answer this question. Straight forwardly, we have observed that there is a finite speed at which information can propagate in our ...
2
The dark energy density in the universe is about $7 \times 10^{-30}$g/cm$^3$ according to Wikipedia. This is uniform through out the Hubble volume of the entire universe i.e. the volume of the universe with which we are in causal contact. The Hubble volume is $10^{31} \ ly^3$ i.e. cubic light years. This gives $8.46732 \times 10^{84}$ cm$^3$ as the volume of ...
2
No. Tides are caused by the gradient in the gravitational field. As you get further from the moon, the field drops as $\frac{1}{r^2}$ and the gradient changes as $-\frac{1}{r^3}$. If there is a gradient, then objects closer to the moon will accelerate towards it more rapidly than objects further away from it. The effect of this is nicely illustrated in an ...
2
the cutting by universes is a way : to introduce possible new physics for each of these universes without leaving the homogeneity and isotropy cosmological principles, the known constants and the known physics of "our" universe to defer the infinity issue from our universe to a parent structure : the multiverse Homogeneity and isotropy are the main ...
2
The answer to the title question (Is every event in the universe related to each other?) is clearly a no. Some events can't be related to others due to the fact that light has a finite and unsurpassable speed.
1
In 1997 the Hubble discovered a large numbers of intergalactic stars. Others have since been discovered. It is now believed that about 1/2 of the stars in the universe may well be rogue stars that are located in intergalactic space. The AVERAGE density of intergalactic space is still very small, however, because of its immense size.
1
Antimatter, although only in the form of positrons, is produced by many nuclides during the β⁺ decay. I can not get any reliable source, but vast majority of such β⁺ nuclides seem to be artificially prepared in a reactor, so this is perhaps not a truly natural source. Other article, named "Antimatter from bananas" states otherwise. The concentration of ...
1
If you are talking of the visible Universe : We got from Planck mission : Ordinary matter 4.9% Dark matter 26.8% = 5.47 x Ordinary matter mass Dark energy 68.3% = 13.93 x Ordinary matter mass Wiki Universe page claims that the baryonic mass (ordinary matter) weighs at least $10^{53}$ kg Hence, with these datas, Dark matter weighs \$5.5 \times ...
1
Could the universe really be expanding at a constant rate? Everything is possible I suppose, but the evidence suggests the universe is expanding at an accelerating rate, see Wikipedia. This came as something of a surprise in 1998. I was just thinking, sorry if this idea is idiotic, but since we know galaxies move away from each other at an ...
1
The uncertainty principle is often confused with the observer effect. The former says that the certainty in position times the certainty in the momentum is greater than some constant. We think of momentum and position as two different things, but the underlying physical phenomenon may not be. Of course, none of this speaks to whether or not quantum ...
1
Could the universe be accurately simulated with an infinitely powerful computer? First Could and infinitely powerful are not compatible. A system able to simulate / predict accurately anything is quite impossible : one would need a clone universe able to compute faster than the universe runs. Initial values, indistinguishability and uncertainty ...
1
Any finite physical system can be simulated by a universal computer. This includes quantum systems, which could be simulated by a universal quantum computer if we knew how to build one. Quantum mechanics is deterministic in the sense that the state of the whole of physical reality at one time can be worked out from the state at an earlier time given the ...
1
The entropy law can be (comically) reinterpreted like "equilibrium is a state of maximum possible disorder under given physical constraints". So... things keep getting worse until it's as bad as it can get. Intuitively, large entropy means that things look more or less the same (macroscopically) for many different microscopic realizations. When the system ...
http://www.optimization-online.org/DB_HTML/2009/05/2302.html
Analysis and Generalizations of the Linearized Bregman Method

Wotao Yin (wotao.yinrice.edu)

Abstract: This paper reviews the Bregman methods, analyzes the linearized Bregman method, and proposes fast generalization of the latter for solving the basis pursuit and related problems. The analysis shows that the linearized Bregman method has the exact penalty property, namely, it converges to an exact solution of the basis pursuit problem if and only if its regularization parameter $\alpha$ is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of gradient-based optimization techniques such as line search, Barzilai-Borwein steps, L-BFGS, and nonlinear conjugate gradient steps. In addition, the paper discusses the selection and update of $\alpha$. The analysis and discussions are limited to the l1-norm but can be extended to other l1-like functions.

Keywords: Bregman, linearized Bregman, compressed sensing, l1 minimization, basis pursuit

Category 1: Convex and Nonsmooth Optimization (Convex Optimization)
Category 2: Applications -- Science and Engineering (Basic Sciences Applications)

Citation: Rice CAAM Report TR09-02, 2009
Download: [PDF]
Entry Submitted: 05/28/2009. Entry Accepted: 05/28/2009. Entry Last Modified: 07/26/2010.
|
2017-11-19 06:53:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6195797324180603, "perplexity": 2090.15836763739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805417.47/warc/CC-MAIN-20171119061756-20171119081756-00611.warc.gz"}
|
https://math.gatech.edu/seminars-and-colloquia-by-series?page=349
|
Seminars and Colloquia by Series
Thursday, March 11, 2010 - 11:00 , Location: Van Leer Building Room W225 , Shannon Bishop , School of Mathematics, Georgia Tech , Organizer: Christopher Heil
This thesis addresses four topics in the area of applied harmonic analysis. First, we show that the affine densities of separable wavelet frames affect the frame properties. In particular, we describe a new relationship between the affine densities, frame bounds and weighted admissibility constants of the mother wavelets of pairs of separable wavelet frames. This result is also extended to wavelet frame sequences. Second, we consider affine pseudodifferential operators, generalizations of pseudodifferential operators that model wideband wireless communication channels. We find two classes of Banach spaces, characterized by wavelet and ridgelet transforms, so that inclusion of the kernel and symbol in appropriate spaces ensures the operator is Schatten p-class. Third, we examine the Schatten class properties of pseudodifferential operators. Using Gabor frame techniques, we show that if the kernel of a pseudodifferential operator lies in a particular mixed modulation space, then the operator is Schatten p-class. This result improves existing theorems and is sharp in the sense that larger mixed modulation spaces yield operators that are not Schatten class. The implications of this result for the Kohn-Nirenberg symbol of a pseudodifferential operator are also described. Lastly, Fourier integral operators are analyzed with Gabor frame techniques. We show that, given a certain smoothness in the phase function of a Fourier integral operator, the inclusion of the symbol in appropriate mixed modulation spaces is sufficient to guarantee that the operator is Schatten p-class.
Series: Other Talks
Wednesday, March 10, 2010 - 16:30 , Location: Skiles 269 , Matt Baker , Georgia Tech , Organizer:
Join math club for Dr. Baker's mathematical magic show.
Wednesday, March 10, 2010 - 11:00 , Location: Skiles 255 , Yuri Bakhtin , Georgia Tech , Organizer: Christine Heitsch
I will consider a class of mathematical models of decision making. These models are based on dynamics in the neighborhood of unstable equilibria and involve random perturbations due to small noise. I will report results on the vanishing noise limit for these systems, providing precise predictions about the statistics of decision making times and sequences of unstable equilibria visited by the process. Mathematically, the results are based on the analysis of random Poincare maps in the neighborhood of each equilibrium point. I will also discuss some experimental data.
Series: PDE Seminar
Tuesday, March 9, 2010 - 15:00 , Location: Skiles 255 , , Carnegie Mellon University , Organizer: Chongchun Zeng
A classic story of nonlinear science started with the particle-like water wave that Russell famously chased on horseback in 1834. I will recount progress regarding the robustness of solitary waves in nonintegrable model systems such as FPU lattices, and discuss progress toward a proof (with Shu-Ming Sun) of spectral stability of small solitary waves for the 2D Euler equations for water of finite depth without surface tension.
Tuesday, March 9, 2010 - 12:00 , Location: Skiles 255 , Heinrich Matzinger , Professor, School of Mathematics , Organizer:
Hosted by: Huy Huynh and Yao Li
The Scenery Reconstruction Problem consists in trying to reconstruct a coloring of the integers given only the observations made by a random walk. For this we consider a random walk S and a coloring of the integers X. At time $t$ we observe the color $X(S(t))$. The coloring is i.i.d. and we show that given only the sequence of colors
$$X(S(0)),X(S(1)),X(S(2)),...$$
it is possible to reconstruct $X$ up to translation and reflection. The solution depends on the property of the random walk and the distribution of the coloring.
Longest Common Subsequences (LCS) are widely used in genetics. If we consider two sequences X and Y, then a common subsequence of X and Y is a string which is a subsequence of X and of Y at the same time. A Longest Common Subsequence of X and Y is a common subsequence of X and Y of maximum length. The problem of the asymptotic order of the fluctuation for the LCS of independent random strings has been open for decades. We have now been able to make progress on this problem for several important cases. We will also show the connection to the Scenery Reconstruction Problem.
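For illustration (added here, not part of the abstract), the length of a Longest Common Subsequence can be computed with the standard dynamic program; a compact Python sketch with illustrative names:

def lcs_length(x, y):
    # dp[j] = LCS length of the processed prefix of x and y[:j]
    dp = [0] * (len(y) + 1)
    for xi in x:
        prev_diag = 0  # value of dp[j-1] from the previous row
        for j, yj in enumerate(y, start=1):
            prev_row = dp[j]
            dp[j] = prev_diag + 1 if xi == yj else max(dp[j], dp[j - 1])
            prev_diag = prev_row
    return dp[len(y)]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4, e.g. "GTAB"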
Monday, March 8, 2010 - 14:00 , Location: Skiles 171 , Mihran Papikian , Penn State , Organizer: Matt Baker
We discuss some arithmetic properties of modular varieties of D-elliptic sheaves, such as the existence of rational points or the structure of their "fundamental domains" in the Bruhat-Tits building. The notion of D-elliptic sheaf is a generalization of the notion of Drinfeld module. D-elliptic sheaves and their moduli schemes were introduced by Laumon, Rapoport and Stuhler in their proof of certain cases of the Langlands conjecture over function fields.
Monday, March 8, 2010 - 13:00 , Location: Skiles 255 , Chun Liu , Penn State/IMA , Organizer:
Almost all models for complex fluids can be fitted into the energetic variational framework. The advantage of the approach is the revealing/focus of the competition between the kinetic energy and the internal "elastic" energies. In this talk, I will discuss two very different engineering problems: free interface motion in Newtonian fluids and viscoelastic materials. We will illustrate the underlying connections between the problems and their distinct properties. Moreover, I will present the analytical results concerning the existence of near equilibrium solutions of these problems.
Series: Other Talks
Monday, March 8, 2010 - 11:00 , Location: Room 129, Global Learning Center (behind the GA Tech Hotel) , Christine Franklin , University of Georgia , Organizer:
|
2019-04-25 10:06:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5821138620376587, "perplexity": 1467.1745601403632}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578716619.97/warc/CC-MAIN-20190425094105-20190425120105-00026.warc.gz"}
|
http://epepack.com.hk/rotterdam-ww-sovxaf/892bce-marlin-g92-e0
|
The S value sets the speed of the cooling fan in a range between 0 (off) and 255 (full power). The context can be found through the reference; this is not uncommon, this has happened more often. I have copied the original and my revised code below for you in case it can be (?). This is an external cooling fan that is pointed towards the part that you are printing. G1 X0.4 Y200.0 Z0.3 F5000.0 ; move to side a little. This offset needs to be changed all the time between different bed treatments, layer heights, lighting, etc. Up until RC3, "G92 Z1.1" was the go-to way to do this after homing and ABL. The IR probe triggers 1.1 mm before the hotend tip reaches the bed surface, so the "trigger" in the traditional sense is a -1.1 mm offset from the hotend tip. Setting M851 Z-1.1 tells the firmware this exact thing, so after homing, G0 Z0 should bring the nozzle to touching the bed. I'm assuming the same thing can be achieved by sending M206 Z1.1 AFTER homing, setting the Z coordinate system offset by 1.1 mm so it knows that when the Z probe triggered, it actually triggered 1.1 mm above the bed and not at Z0. When you start to print, the G-code will tell your machine to move to Z=0.2 (perhaps) to start the first layer. This command initializes the X, Y and Z axes of the printer to position 0; G92 Z0, for example, forces the position of the Z axis to 0. The command G92 E0 is often used to … G92 E0 ; set the current filament position to E=0. G1 E10 F800 ; extrude 10mm of filament. You can use the M302 command to get around that. I've set it up in Marlin as FIX_MOUNTED_PROBE, which seems closest. A setting that causes a command not to work for an unknown reason. There was a method of determining the Z offset by sending G92 Z10 after homing Z. T0 G0 X20 G92 E0 G1 E10 G92 E0 G1 E-10 T1 G0 X15 ; if X offset of E2 from E1 is 5mm, assuming no Y offset. G92 E0 G1 E5 G4 P500 T0 G0 X30 G92 E0 G1 E10 G92 E0 G1 E-10 T1 G0 X25 G92 E0 G1 E5 G4 P500. In plastic extrusion this is normally done by setting the offset in Slic3r. G1 X5 Y20 Z0.3 F5000.0 ; Move over to prevent blob squish. One reported issue: the carriage moves right 10mm instead of 90mm left (physically the X position is 110, in the firmware 10). The M104 command starts heating the extruder, but then allows you to run other commands immediately afterwards. Maybe you should not read it as "reset", but as "set" instead. If the next move assumed starting at E0, you'd already be 3 mm further along, and the first move would probably be a retract; so, for example, if the next printing move was: … then instead of extruding 0.5 mm, you would actually retract 2.5 mm, to get from 3.0 to 0.5. Converting the Creality Ender 3 (Pro) is not a complicated task.
The M104 command starts heating the extruder, but then allows you to run other commands immediately afterwards. G92 E0 sets the current filament position to E=0; the RepRap wiki says that no physical motion will occur, so maybe you should not read it as "reset" but as "set" instead. It is often used to perform retraction and nozzle priming: the purge and prime line gets the filament flowing again, ready for printing, and ensures that filament is flowing correctly. Think about what would happen if you did the "G1 F200 E3" without first resetting the extruder's origin: it would result in a massive retraction. If the filament is left at the home position for too long while the nozzle is hot, the filament can ooze out and the nozzle will not be ready for printing. A typical start sequence draws a prime line ("... Z0.3 F1500.0 E15 ; draw 1st line", then "... F1500.0 E15 ; draw 2nd line") and afterwards moves Z up a little to prevent scratching of the heat bed; a beep can be added in Cura with "M300 S7560 P250". G92 is also useful if you want to change or offset the location of one of your axes: using Znnn it sets a new axis position, and in Marlin 1.1.0 and up the software endstops are adjusted to preserve the physical movement limits; the values from M206 are used in G28/G29 to compensate for endstop offsets, and with Mesh bed leveling enabled the move afterwards might stutter. If the Z offset is wrong, the probe triggers with the nozzle 1.1 mm lower than you probably want, and the machine will print 6 very squished layers before it starts to raise the Z axis; if my file had G92 X0 Y0 Z1, I would choose a safe travel height of at least Z2. Some setups require a dedicated cleaning area on or outside the bed, but within reach of the nozzle. You can type on a console directly connected with the controller to investigate or to set specific commands, and then upload the new, modified Marlin firmware to the printer. It is important to work in a sufficiently large and uncluttered space, to take your time, and to read up on the components and on Marlin, in order to carry out the conversion of the printer in good conditions.
|
2021-10-22 12:00:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17144808173179626, "perplexity": 7291.85646554467}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00177.warc.gz"}
|
http://en.wikipedia.org/wiki/Selection_rule
|
# Selection rule
In physics and chemistry, a selection rule, or transition rule, formally constrains the possible transitions of a system from one quantum state to another. Selection rules have been derived for electronic, vibrational, and rotational transitions in molecules. The selection rules may differ according to the technique used to observe the transition.
## Overview
In quantum mechanics the basis for a spectroscopic selection rule is the value of the transition moment integral[1]
$\int \psi_1^* \mu \psi_2 d\tau$ ,
where $\psi_1$ and $\psi_2$ are the wave functions of the two states involved in the transition and µ is the transition moment operator. If the value of this integral is zero the transition is forbidden. In practice, the integral itself does not need to be calculated to determine a selection rule. It is sufficient to determine the symmetry of transition moment function, $\psi_1^* \mu \psi_2$. If the symmetry of this function spans the totally symmetric representation of the point group to which the atom or molecule belongs then its value is not zero and the transition is allowed. Otherwise, the transition is forbidden. The idea of symmetry is important when considering the integral of odd functions (which equal zero when integrated over the whole of space).
The transition moment integral is non-zero if and only if the transition moment function, $\psi_1^* \mu \psi_2$, is not anti-symmetric, i.e. y(x) = -y(-x) does not hold. The symmetry of the transition moment function is the direct product of the symmetries of its three components. The symmetry characteristics of each component can be obtained from standard character tables. Rules for obtaining the symmetries of a direct product can be found in texts on character tables.[2]
Symmetry characteristics of transition moment operator[2]

| Transition type | µ transforms as | Note |
| --- | --- | --- |
| Electric dipole | x, y, z | Optical spectra |
| Electric quadrupole | x², y², z², xy, xz, yz | Constraint x² + y² + z² = 0 |
| Electric polarizability | x², y², z², xy, xz, yz | Raman spectra |
| Magnetic dipole | Rx, Ry, Rz | Optical spectra (weak) |
## Examples
### Electronic spectra
The Laporte rule is a selection rule formally stated as follows: In a centrosymmetric environment, transitions between like atomic orbitals such as s-s, p-p, d-d, or f-f are forbidden. The Laporte rule applies to electric dipole transitions, so the operator has u symmetry.[3] p orbitals also have u symmetry, so the symmetry of the transition moment function is given by the triple product u×u×u, which has u symmetry. The transitions are therefore forbidden. Likewise, d orbitals have g symmetry, so the triple product g×u×g also has u symmetry and the transition is forbidden.[4]
The wave function of a single electron is the product of a space-dependent wave function and a spin wave function. Spin is directional and can be said to have odd parity. It follows that transitions in which the spin "direction" changes are forbidden. In formal terms, only states with the same total spin quantum number are "spin-allowed".[5] In crystal field theory, d-d transitions that are spin-forbidden are much weaker than spin-allowed transitions. Both can be observed, in spite of the Laporte rule, because the actual transitions are coupled to vibrations that are anti-symmetric and have the same symmetry as the dipole moment operator.[6]
### Vibrational spectra
In vibrational spectroscopy, transitions are observed between different vibrational states. In a fundamental vibration, the molecule is excited from its ground state (v = 0) to the first excited state (v = 1). The symmetry of the ground-state wave function is the same as that of the molecule. It is, therefore, a basis for the totally symmetric representation in the point group of the molecule. It follows that, for a vibrational transition to be allowed, the symmetry of the excited state wave function must be the same as the symmetry of the transition moment operator.[7]
In infrared spectroscopy, the transition moment operator transforms as either x and/or y and/or z. The excited state wave function must also transform as at least one of these vectors. In Raman spectroscopy, the operator transforms as one of the second-order terms in the right-most column of the character table, below.[2]
|    | E | 8 C3 | 3 C2 | 6 S4 | 6 σd | (rotations) | (quadratic) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A1 | 1 | 1 | 1 | 1 | 1 |  | x² + y² + z² |
| A2 | 1 | 1 | 1 | -1 | -1 |  |  |
| E | 2 | -1 | 2 | 0 | 0 |  | (2z² - x² - y², x² - y²) |
| T1 | 3 | 0 | -1 | 1 | -1 | (Rx, Ry, Rz) |  |
| T2 | 3 | 0 | -1 | -1 | 1 | (x, y, z) | (xy, xz, yz) |
The molecule methane, CH4, may be used as an example to illustrate the application of these principles. The molecule is tetrahedral and has Td symmetry. The vibrations of methane span the representations A1 + E + 2T2.[8] Examination of the character table shows that all four vibrations are Raman-active, but only the T2 vibrations can be seen in the infrared spectrum.[9]
In the harmonic approximation, it can be shown that overtones are forbidden in both infrared and Raman spectra. However, when anharmonicity is taken into account, the transitions are weakly allowed.[10]
### Rotational spectra
Main article: Rigid rotor
The selection rule for rotational transitions, derived from the symmetries of the rotational wave functions in a rigid rotor, is ΔJ = ±1, where J is a rotational quantum number.[11]
### Coupled transitions
The infrared spectrum of HCl gas
There are many types of coupled transition such as are observed in vibration-rotation spectra. The excited-state wave function is the product of two wave functions such as vibrational and rotational. The general principle is that the symmetry of the excited state is obtained as the direct product of the symmetries of the component wave functions.[12] In rovibronic transitions, the excited states involve three wave functions.
The infrared spectrum of hydrogen chloride gas shows rotational fine structure superimposed on the vibrational spectrum. This is typical of the infrared spectra of heteronuclear diatomic molecules. It shows the so-called P and R branches. The Q branch, located at the vibration frequency, is absent. Symmetric top molecules display the Q branch. This follows from the application of selection rules.[13]
Resonance Raman spectroscopy involves a kind of vibronic coupling. It results in much-increased intensity of fundamental and overtone transitions as the vibrations "steal" intensity from an allowed electronic transition.[14] In spite of appearances, the selection rules are the same as in Raman spectroscopy.[15]
### Angular momentum
In general, electric (charge) radiation or magnetic (current, magnetic moment) radiation can be classified into multipoles Eλ (electric) or Mλ (magnetic) of order 2λ, e.g., E1 for electric dipole, E2 for quadrupole, or E3 for octupole. In transitions where the change in angular momentum between the initial and final states makes several multipole radiations possible, usually the lowest-order multipoles are overwhelmingly more likely, and dominate the transition.[16]
The emitted particle carries away an angular momentum λ, which for the photon must be at least 1, since it is a vector particle (i.e., it has $J^P = 1^-$). Thus, there is no E0 (electric monopoles) or M0 (magnetic monopoles, which do not seem to exist) radiation.
Since the total angular momentum has to be conserved during the transition, we have that
$\mathbf J_{\mathrm{i}} = \mathbf{J}_{\mathrm{f}} + \boldsymbol{\lambda}$
where $\Vert \boldsymbol{\lambda} \Vert = \sqrt{\lambda(\lambda + 1)} \, \hbar$, and its z-projection is given by $\lambda_z = \mu \, \hbar$; $\mathbf J_{\mathrm{i}}$ and $\mathbf J_{\mathrm{f}}$ are, respectively, the initial and final angular momenta of the atom. The corresponding quantum numbers λ and μ (z-axis angular momentum) must satisfy
$| J_{\mathrm{i}} - J_{\mathrm{f}} | \le \lambda \le J_{\mathrm{i}} + J_{\mathrm{f}}$
and
$\mu = M_{\mbox{i}} - M_{\mbox{f}}\,.$
Parity is also preserved. For electric multipole transitions
$\pi(\mathrm{E}\lambda) = \pi_{\mathrm{i}} \pi_{\mathrm{f}} = (-1)^{\lambda}\,$
while for magnetic multipoles
$\pi(\mathrm{M}\lambda) = \pi_{\mathrm{i}} \pi_{\mathrm{f}} = (-1)^{\lambda+1}\,.$
Thus, parity does not change for E-even or M-odd multipoles, while it changes for E-odd or M-even multipoles.
These considerations generate different sets of transition rules depending on the multipole order and type. The expression forbidden transitions is often used; this does not mean that these transitions cannot occur, only that they are electric-dipole-forbidden. These transitions are perfectly possible; they merely occur at a lower rate. If the rate for an E1 transition is non-zero, the transition is said to be permitted; if it is zero, then M1, E2, etc. transitions can still produce radiation, albeit with much lower transition rates. These are the so-called forbidden transitions. The transition rate decreases by a factor of about 1000 from one multipole to the next one, so the lowest multipole transitions are most likely to occur.[17]
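A short worked example (added for illustration, not from the original article): for a transition from a state with $J_{\mathrm{i}} = 2$, $\pi_{\mathrm{i}} = +1$ to a state with $J_{\mathrm{f}} = 0$, $\pi_{\mathrm{f}} = +1$, the condition $| J_{\mathrm{i}} - J_{\mathrm{f}} | \le \lambda \le J_{\mathrm{i}} + J_{\mathrm{f}}$ forces $\lambda = 2$, so E1 radiation is excluded outright. Since $\pi_{\mathrm{i}} \pi_{\mathrm{f}} = +1 = (-1)^{2}$, the parity rule allows the electric quadrupole (E2) transition, while M2 radiation, which would require $\pi_{\mathrm{i}} \pi_{\mathrm{f}} = (-1)^{3} = -1$, is forbidden.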
Semi-forbidden transitions (resulting in so-called intercombination lines) are electric dipole (E1) transitions for which the selection rule that the spin does not change is violated. This is a result of the failure of LS coupling.
#### Summary table
$J=L+S$ is the total angular momentum, $L$ is the Azimuthal quantum number, $S$ is the Spin quantum number, and $M_J$ is the secondary total angular momentum quantum number. Which transitions are allowed is based on the Hydrogen-like atom.
Electric dipole (E1) Magnetic dipole (M1) Electric quadrupole (E2) Magnetic quadrupole (M2) Electric octupole (E3) Magnetic octupole (M3) Allowed transitions $\begin{matrix} \Delta J = 0, \pm 1 \\ (J = 0 \not \leftrightarrow 0)\end{matrix}$ $\begin{matrix} \Delta J = 0, \pm 1, \pm 2 \\ (J = 0 \not \leftrightarrow 0, 1;\ \begin{matrix}{1 \over 2}\end{matrix} \not \leftrightarrow \begin{matrix}{1 \over 2}\end{matrix})\end{matrix}$ $\begin{matrix}\Delta J = 0, \pm1, \pm2, \pm 3 \\ (0 \not \leftrightarrow 0, 1, 2;\ \begin{matrix}{1 \over 2}\end{matrix} \not \leftrightarrow \begin{matrix}{1 \over 2} \end{matrix}, \begin{matrix}{3 \over 2}\end{matrix};\ 1 \not \leftrightarrow 1) \end{matrix}$ $\Delta M_J = 0, \pm 1$ $\Delta M_J = 0, \pm 1, \pm2$ $\Delta M_J = 0, \pm 1, \pm2, \pm 3$ $\pi_{\mathrm{f}} = -\pi_{\mathrm{i}}\,$ $\pi_{\mathrm{f}} = \pi_{\mathrm{i}}\,$ $\pi_{\mathrm{f}} = -\pi_{\mathrm{i}}\,$ $\pi_{\mathrm{f}} = \pi_{\mathrm{i}}\,$ One electron jump Δl = ±1 No electron jump Δl = 0, Δn = 0 None or one electron jump Δl = 0, ±2 One electron jump Δl = ±1 One electron jump Δl = ±1, ±3 One electron jump Δl = 0, ±2 If ΔS = 0 $\begin{matrix}\Delta L = 0, \pm 1 \\ (L = 0 \not \leftrightarrow 0)\end{matrix}$ If ΔS = 0 $\Delta L = 0\,$ If ΔS = 0 $\begin{matrix}\Delta L = 0, \pm 1, \pm 2 \\ (L = 0 \not \leftrightarrow 0, 1)\end{matrix}$ If ΔS = 0 $\begin{matrix}\Delta L = 0, \pm 1, \pm 2, \pm 3 \\ (L=0 \not \leftrightarrow 0, 1, 2;\ 1 \not \leftrightarrow 1)\end{matrix}$ If ΔS = ±1 $\Delta L = 0, \pm 1, \pm 2\,$ If ΔS = ±1 $\begin{matrix}\Delta L = 0, \pm 1, \\ \pm 2, \pm 3 \\ (L = 0 \not \leftrightarrow 0)\end{matrix}$ If ΔS = ±1 $\begin{matrix}\Delta L = 0, \pm 1 \\ (L = 0 \not \leftrightarrow 0)\end{matrix}$ If ΔS = ±1 $\begin{matrix}\Delta L = 0, \pm 1, \\ \pm 2, \pm 3, \pm 4 \\ (L = 0 \not \leftrightarrow 0, 1)\end{matrix}$ If ΔS = ±1 $\begin{matrix}\Delta L = 0, \pm 1, \\ \pm 2 \\ (L = 0 \not \leftrightarrow 0)\end{matrix}$
### Surface
In surface vibrational spectroscopy, the surface selection rule is applied to identify the peaks observed in vibrational spectra. When a molecule is adsorbed on a substrate, the molecule induces opposite image charges in the substrate. The dipole moment of the molecule and the image charges perpendicular to the surface reinforce each other. In contrast, the dipole moments of the molecule and the image charges parallel to the surface cancel out. Therefore, only molecular vibrational peaks giving rise to a dynamic dipole moment perpendicular to the surface will be observed in the vibrational spectrum.
## Notes
1. ^ Harris & Berolucci, p. 130
2. ^ a b c Salthouse, J.A.; Ware, M.J. (1972). Point group character tables and related data. Cambridge University Press. ISBN 0-521-08139-4.
3. ^ Anything with u (German ungerade) symmetry is antisymmetric with respect to the centre of symmetry. g (German gerade) signifies symmetric with respect to the centre of symmetry. If the transition moment function has u symmetry, the positive and negative parts will be equal to each other, so the integral has a value of zero.
4. ^ Harris & Berolucci, p. 330
5. ^ Harris & Berolucci, p. 336
6. ^ Cotton Section 9.6, Selection rules and polarizations
7. ^ Cotton, Section 10.6 Selection rules for fundamental vibrational transitions
8. ^ Cotton, Chapter 10 Molecular Vibrations
9. ^ Cotton p. 327
10. ^ Califano, S. (1976). Vibrational states. Wiley. ISBN 0-471-12996-8. Chapter 9, Anharmonicity
11. ^ Kroto, H.W. (1992). Molecular Rotation Spectra. New York: Dover. ISBN 0-486-49540-X.
12. ^ Harris & Berolucci, p. 339
13. ^ Harris & Berolucci, p. 123
14. ^ Long, D.A. (2001). The Raman Effect: A Unified Treatment of the Theory of Raman Scattering by Molecules. Wiley. ISBN 0-471-49028-8. Chapter 7, Vibrational Resonance Raman Scattering
15. ^ Harris & Berolucci, p. 198
16. ^ Softley, T.P. (1994). Atomic Spectra. Oxford: Oxford University Press. ISBN 0-19-855688-8.
17. ^ Condon, E.V.; Shortley, G.H. (1953). The Theory of Atomic Spectra. Cambridge University Press. ISBN 0-521-09209-4.
## References
Harris, D.C.; Bertolucci, M.D. (1978). Symmetry and Spectroscopy. Oxford University Press. ISBN 0-19-855152-5.
Cotton, F.A. (1990). Chemical Applications of Group Theory (3rd ed.). Wiley. ISBN 978-0-471-51094-9.
|
2014-09-23 14:37:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 37, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.722266435623169, "perplexity": 1239.0990809926789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657138980.37/warc/CC-MAIN-20140914011218-00031-ip-10-234-18-248.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/algebra/65283-inequality-variable-denominator-again-print.html
|
# inequality with variable in denominator - again
• December 16th 2008, 05:09 PM
chekhovita
inequality with variable in denominator - again
$5/ (7-2x) > 0$
I am so stumped with this one. Anything I think to do with the denominator will be eliminated in the equation by the zero.
Can anyone help?
• December 16th 2008, 05:31 PM
mr fantastic
Quote:
Originally Posted by chekhovita
$5/ (7-2x) > 0$
I am so stumped with this one. Anything I think to do with the denominator will be eliminated in the equation by the zero.
Can anyone help?
Think about it and you will see that you require 7 - 2x > 0 ....
• December 17th 2008, 02:10 PM
chekhovita
Ah ha!
Thank you very much. All frustration hath faded away.(Rock)
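(For completeness: since the numerator $5$ is positive, the fraction is positive exactly when the denominator is, so the inequality reduces to $7 - 2x > 0$, i.e. $x < \tfrac{7}{2}$; the value $x = \tfrac{7}{2}$ itself is excluded because the expression is undefined there.)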
|
2014-08-22 16:12:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8346062898635864, "perplexity": 1688.454787314571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500824209.82/warc/CC-MAIN-20140820021344-00104-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://mathoverflow.net/questions/372811/number-of-bounded-dyck-paths-with-negative-length-as-hankel-determinants
|
# Number of bounded Dyck paths with negative length as Hankel determinants
This is a continuation of my post Number of bounded Dyck paths with "negative length".
Let $$C_{n}^{(2k+1)}$$ be the number of Dyck paths of semilength $$n$$ bounded by $$2k+1.$$ They satisfy a recursion of order $$2k + 1.$$
Let $$C_{ - n}^{(2k + 1)}$$ be the numbers obtained by extending the sequence $$C_{n}^{(2k+1)}$$ to negative $$n$$ using this recursion.
Computations suggest that this extension can also be obtained via Hankel determinants: $$C_{ - n}^{(2k + 1)} = \det \left( {C_{n + 1 + i + j}^{(2k + 1)}} \right)_{i,j = 0}^{k - 1}.$$ For $$k=1$$ this reduces to $$C_{ - n}^{(3)} = C_{n + 1}^{(3)}.$$ This can easily be verified since the sequence $$(\dots, 34, 13, 5, 2 \,|\, 1, 1, 2, 5, 13, 34, \dots)$$ satisfies $$a(n)-3a(n-1)+a(n-2)=0.$$
For $$k = 2$$ we get the sequence $$( \cdots, 70, 14, 3 \,|\, 1, 1, 2, 5, 14, 42, 131, \cdots ).$$ For example $$C_{ - 1}^{(5)} = \det \begin{pmatrix} 2 & 5 \\ 5 & 14 \end{pmatrix} = 3.$$ This has been observed by Michael Somos, cf. OEIS A080937.
Any idea how to prove the general case?
• These kind of Hankel determinants have a nonintersecting lattice path interpretation (see Section 3.1.6, Example 4 of arxiv.org/abs/1409.2562); maybe that could be helpful here. – Sam Hopkins Sep 28 '20 at 16:18
• Maybe you should update the question to mention the combinatorial interpretation of $C^{(2k+1)}_{-n}$ proved by Richard Stanley in the linked question. – Sam Hopkins Sep 30 '20 at 19:53
Here's how I think this can be proved based on what Richard Stanley already did in your previous question.
If we take the network in Section 3.1.6, Example 4 part (a) of https://arxiv.org/abs/1409.2562 and remove everything above height $$2k+1$$, then the entries of your Hankel determinant count the paths from sources to sinks for this network, and hence by the Lindström-Gessel-Viennot lemma, the determinant is the number of nonintersecting families of paths. These nonintersecting families of paths in turn correspond to $$k$$-fans of $$3$$-bounded Dyck paths of semilength $$n$$ (see the explanation/terminology in Ardila). And $$k$$-fans of $$3$$-bounded Dyck paths of semilength $$n$$ are easily seen to be the same thing as $$k$$-bounded $$P$$-partitions where $$P$$ is the $$2n-1$$-element zigzag poset. In his answer to your previous question, Richard Stanley explained why these $$P$$-partitions are enumerated by $$C^{(2k+1)}_{-n}$$.
EDIT:
For clarity, here's an example of the kind of network + families of nonintersecting paths:
This depicts the things counted by $$C^{(7)}_{-4}$$. We convert the nonintersecting lattice paths to the sequences mentioned in Richard Stanley's answer by stacking the $$k$$ orange $$3$$-bounded Dyck paths on top of one another (they are a fan, i.e., nest, by the nonintersecting condition), and then reading off $$1$$ plus the number of Dyck paths below the "circles" (which form a length $$2n-1$$ zigzag poset). In the depicted case we have $$(a_1,\ldots,a_7)=(3,4,1,1,1,2,1)$$.
This raises an interesting possibility:
Let's let $$\mathcal{D}_k$$ denote the infinite network where we take a diagonal slice of width $$2k+1$$ of the 2D grid, with all edges directed right and up. The above discussion explains that there is a relationship (in fact, a "reciprocity" relationship) between counting families of nonintersecting paths in this network with $$1$$ source and $$1$$ sink (these are what $$C^{(2k+1)}_{n}$$ count), and counting such families with $$k$$ sources and $$k$$ sinks (these are what $$C^{(2k+1)}_{-n}$$ count).
Question: Is there a similar "reciprocity" relationship between counting families of nonintersecting lattice paths in $$\mathcal{D}_k$$:
• when we have $$i$$ consecutive sources, then a gap of some size, then $$i$$ consecutive sinks;
• and when we have $$k+1-i$$ consecutive sources, then a gap of some size, then $$k+1-i$$ consecutive sinks?
Note that when we have $$k+1$$ consecutive sources and then a gap and then $$k+1$$ consecutive sinks, there's a unique family of nonintersecting lattice paths in $$\mathcal{D}_k$$; this "agrees" with the fact that there is a unique such family when we have 0 sources and sinks as well. In other words, we can say yes to this question when $$i=0,1$$.
(UPDATE: I asked this as a separate question - Reciprocity for fans of bounded Dyck paths - and it got a wonderful positive answer by Gjergji Zaimi.)
EDIT 2:
I have to mention that this set up bears a lot of similarity to another context in which reciprocity results are studied: namely, for dimer coverings (a.k.a. perfect matchings) of linearly growing graphs. Some papers in that vein are:
et cetera. Dimer coverings are not exactly the same as nonintersecting paths, but the two can often be related, and so it is possible the counting problems under consideration here could be understood in terms of that existing literature.
• This looks very nice, but I do not see how to place the circles to get (3,4,1,1,1,2,1) in your figure. – Johann Cigler Oct 1 '20 at 15:06
• If we draw the full staircase of boxes above the Dyck paths, the circles are the two bottom/largest ranks; they make up a zigzag poset: en.wikipedia.org/wiki/Fence_(mathematics) – Sam Hopkins Oct 1 '20 at 15:08
• @JohannCigler: You should also see my follow-up question for a more general result (and I think the right context for understanding your original observation): mathoverflow.net/questions/373030/… – Sam Hopkins Oct 1 '20 at 15:12
• Sorry, I am confused. I still don't see it. – Johann Cigler Oct 1 '20 at 15:25
• Do you see how the "orange" portions of the 3 nonintersecting lattice paths become 3 Dyck paths which form a fan, i.e., nest inside of one another? – Sam Hopkins Oct 1 '20 at 15:29
As stated in the previous question, $$C_n^{(2k+1)}$$ satisfies $$\sum_{j=0}^{k+1} (-1)^j \binom{2k+2-j}{j} C_{n-j}^{(2k+1)}=0.$$ The formula $$C_{ - n}^{(2k + 1)} = \det \left( {C_{n + 1 + i + j}^{(2k + 1)}} \right)_{i,j = 0}^{k - 1}$$ now follows from my answer to the recent question by Johann. Perhaps, this formula was the motivation for this new question.
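As a quick numerical illustration (added here; the names and the small search ranges are arbitrary), the identity can be checked for small $$k$$ and $$n$$ by counting bounded Dyck paths directly, running the order-$$(k+1)$$ recursion quoted above backwards to define $$C^{(2k+1)}_{-n}$$, and comparing with the Hankel determinant:

from math import comb

def bounded_dyck(n, height):
    # Dyck paths of semilength n that never rise above `height`
    counts = [1] + [0] * height
    for _ in range(2 * n):
        new = [0] * (height + 1)
        for h, c in enumerate(counts):
            if c:
                if h + 1 <= height:
                    new[h + 1] += c
                if h - 1 >= 0:
                    new[h - 1] += c
        counts = new
    return counts[0]

def det(m):
    # cofactor expansion; fine for the small k-by-k Hankel matrices used here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def check(k, nmax=6):
    height = 2 * k + 1
    C = {m: bounded_dyck(m, height) for m in range(nmax + 2 * k + 2)}
    b = [comb(2 * k + 2 - j, j) for j in range(k + 2)]
    for t in range(1, nmax + 1):           # extend backwards: C_{-1}, C_{-2}, ...
        m = k + 1 - t
        C[-t] = (-1) ** k * sum((-1) ** j * b[j] * C[m - j] for j in range(k + 1))
    for n in range(1, nmax + 1):
        hankel = [[C[n + 1 + i + j] for j in range(k)] for i in range(k)]
        assert C[-n] == det(hankel), (k, n)

for k in (1, 2, 3):
    check(k)
print("Hankel identity verified for k = 1, 2, 3 and n <= 6")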
|
2021-05-17 17:21:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 54, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7963029146194458, "perplexity": 465.3590625752096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991258.68/warc/CC-MAIN-20210517150020-20210517180020-00569.warc.gz"}
|
https://open.kattis.com/contests/yjfiag/problems/virus
|
[email protected] NCPC Warmup
#### Start
2020-11-03 08:00 AKST
## [email protected] NCPC Warmup
#### End
2020-11-03 10:00 AKST
# Problem D: Virus Replication
Image from Microbe World
Some viruses replicate by replacing a piece of DNA in a living cell with a piece of DNA that the virus carries with it. This makes the cell start to produce viruses identical to the original one that infected the cell. A group of biologists is interested in knowing how much DNA a certain virus inserts into the host genome. To find this out they have sequenced the full genome of a healthy cell as well as that of an identical cell infected by a virus.
The genome turned out to be pretty big, so now they need your help in the data processing step. Given the DNA sequence before and after the virus infection, determine the length of the smallest single, consecutive piece of DNA that can have been inserted into the first sequence to turn it into the second one. A single, consecutive piece of DNA might also have been removed from the same position in the sequence as DNA was inserted. Small changes in the DNA can have large effects, so the virus might insert only a few bases, or even nothing at all.
## Input
The input consists of two lines containing the DNA sequence before and after virus infection respectively. A DNA sequence is given as a string containing between 1 and $10^5$ upper-case letters from the alphabet {A, G, C, T}.
## Output
Output one integer, the minimum length of DNA inserted by the virus.
Sample Input 1 Sample Output 1
AAAAA
AGCGAA
3
Sample Input 2 Sample Output 2
GTTTGACACACATT
GTTTGACCACAT
4
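One standard approach (an illustrative Python sketch, not an official solution): keep the longest common prefix and the longest common suffix of the two sequences, capped so that together they never exceed either sequence's length; whatever remains of the infected sequence must have been inserted.

import sys

def min_inserted_length(before, after):
    p = 0                                   # longest common prefix
    while p < len(before) and p < len(after) and before[p] == after[p]:
        p += 1
    s = 0                                   # longest common suffix
    while (s < len(before) and s < len(after)
           and before[-1 - s] == after[-1 - s]):
        s += 1
    keep = min(p + s, len(before), len(after))
    return len(after) - keep

def main():
    before, after = sys.stdin.read().split()
    print(min_inserted_length(before, after))  # 3 and 4 on the samples above

if __name__ == "__main__":
    main()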
|
2021-01-24 09:53:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45390036702156067, "perplexity": 1781.1053552673259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703547475.44/warc/CC-MAIN-20210124075754-20210124105754-00783.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/7/6/c/a/
|
# Properties
Label 7.6.c.a Level 7 Weight 6 Character orbit 7.c Analytic conductor 1.123 Analytic rank 0 Dimension 4 CM No Inner twists 2
# Related objects
## Newspace parameters
Level: $$N$$ = $$7$$ Weight: $$k$$ = $$6$$ Character orbit: $$[\chi]$$ = 7.c (of order $$3$$ and degree $$2$$)
## Newform invariants
Self dual: No Analytic conductor: $$1.12268673869$$ Analytic rank: $$0$$ Dimension: $$4$$ Relative dimension: $$2$$ over $$\Q(\zeta_{3})$$ Coefficient field: $$\Q(\sqrt{-3}, \sqrt{37})$$ Coefficient ring: $$\Z[a_1, \ldots, a_{4}]$$ Coefficient ring index: $$2^{2}$$ Sato-Tate group: $\mathrm{SU}(2)[C_{3}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + ( -\beta_{1} - \beta_{2} ) q^{2} + ( 4 - 4 \beta_{1} - \beta_{2} - \beta_{3} ) q^{3} + ( -6 + 6 \beta_{1} + 2 \beta_{2} + 2 \beta_{3} ) q^{4} + ( 19 \beta_{1} + 10 \beta_{2} ) q^{5} + ( -41 + 5 \beta_{3} ) q^{6} + ( -14 - 56 \beta_{1} - 14 \beta_{2} - 21 \beta_{3} ) q^{7} + ( 48 + 24 \beta_{3} ) q^{8} + ( 190 \beta_{1} - 8 \beta_{2} ) q^{9} +O(q^{10})$$ $$q + ( -\beta_{1} - \beta_{2} ) q^{2} + ( 4 - 4 \beta_{1} - \beta_{2} - \beta_{3} ) q^{3} + ( -6 + 6 \beta_{1} + 2 \beta_{2} + 2 \beta_{3} ) q^{4} + ( 19 \beta_{1} + 10 \beta_{2} ) q^{5} + ( -41 + 5 \beta_{3} ) q^{6} + ( -14 - 56 \beta_{1} - 14 \beta_{2} - 21 \beta_{3} ) q^{7} + ( 48 + 24 \beta_{3} ) q^{8} + ( 190 \beta_{1} - 8 \beta_{2} ) q^{9} + ( 389 - 389 \beta_{1} - 29 \beta_{2} - 29 \beta_{3} ) q^{10} + ( -212 + 212 \beta_{1} + 23 \beta_{2} + 23 \beta_{3} ) q^{11} + ( 98 \beta_{1} + 14 \beta_{2} ) q^{12} + ( -462 - 28 \beta_{3} ) q^{13} + ( -574 - 189 \beta_{1} + 63 \beta_{2} + 70 \beta_{3} ) q^{14} + ( 446 - 59 \beta_{3} ) q^{15} + ( 1032 \beta_{1} + 40 \beta_{2} ) q^{16} + ( 1173 - 1173 \beta_{1} + 132 \beta_{2} + 132 \beta_{3} ) q^{17} + ( -106 + 106 \beta_{1} - 182 \beta_{2} - 182 \beta_{3} ) q^{18} + ( 180 \beta_{1} - 277 \beta_{2} ) q^{19} + ( -854 + 98 \beta_{3} ) q^{20} + ( -21 - 721 \beta_{1} - 70 \beta_{2} + 42 \beta_{3} ) q^{21} + ( 1063 - 235 \beta_{3} ) q^{22} + ( 6 \beta_{1} + 69 \beta_{2} ) q^{23} + ( -696 + 696 \beta_{1} + 48 \beta_{2} + 48 \beta_{3} ) q^{24} + ( -936 + 936 \beta_{1} + 380 \beta_{2} + 380 \beta_{3} ) q^{25} + ( -574 \beta_{1} + 434 \beta_{2} ) q^{26} + ( 1436 - 401 \beta_{3} ) q^{27} + ( -98 + 1470 \beta_{1} + 98 \beta_{2} - 98 \beta_{3} ) q^{28} + ( -3526 + 700 \beta_{3} ) q^{29} + ( -2629 \beta_{1} - 505 \beta_{2} ) q^{30} + ( -1774 + 1774 \beta_{1} - 715 \beta_{2} - 715 \beta_{3} ) q^{31} + ( 4048 - 4048 \beta_{1} - 304 \beta_{2} - 304 \beta_{3} ) q^{32} + ( 1699 \beta_{1} + 304 \beta_{2} ) q^{33} + ( 3711 + 1041 \beta_{3} ) q^{34} + ( 6244 + 1260 \beta_{1} - 567 \beta_{2} - 826 \beta_{3} ) q^{35} + ( -548 + 332 \beta_{3} ) q^{36} + ( -5545 \beta_{1} + 790 \beta_{2} ) q^{37} + ( -10069 + 10069 \beta_{1} + 97 \beta_{2} + 97 \beta_{3} ) q^{38} + ( -812 + 812 \beta_{1} + 350 \beta_{2} + 350 \beta_{3} ) q^{39} + ( -7968 \beta_{1} + 24 \beta_{2} ) q^{40} + ( 1750 - 868 \beta_{3} ) q^{41} + ( -3311 + 4886 \beta_{1} + 854 \beta_{2} + 791 \beta_{3} ) q^{42} + ( -6340 - 1344 \beta_{3} ) q^{43} + ( -2974 \beta_{1} - 562 \beta_{2} ) q^{44} + ( -650 + 650 \beta_{1} + 1748 \beta_{2} + 1748 \beta_{3} ) q^{45} + ( 2559 - 2559 \beta_{1} - 75 \beta_{2} - 75 \beta_{3} ) q^{46} + ( 11478 \beta_{1} - 1635 \beta_{2} ) q^{47} + ( 5608 - 1192 \beta_{3} ) q^{48} + ( 6125 - 9800 \beta_{1} - 392 \beta_{2} + 2156 \beta_{3} ) q^{49} + ( 14996 - 1316 \beta_{3} ) q^{50} + ( 192 \beta_{1} - 645 \beta_{2} ) q^{51} + ( 700 - 700 \beta_{1} - 756 \beta_{2} - 756 \beta_{3} ) q^{52} + ( -1521 + 1521 \beta_{1} - 1818 \beta_{2} - 1818 \beta_{3} ) q^{53} + ( -16273 \beta_{1} - 1837 \beta_{2} ) q^{54} + ( -12538 + 2557 \beta_{3} ) q^{55} + ( -19320 + 9744 \beta_{1} + 672 \beta_{2} - 1344 \beta_{3} ) q^{56} + ( -9529 + 928 \beta_{3} ) q^{57} + ( 29426 \beta_{1} + 4226 \beta_{2} ) q^{58} + ( 32904 - 32904 \beta_{1} + 531 \beta_{2} + 531 \beta_{3} ) q^{59} + ( -7042 + 7042 \beta_{1} + 1246 \beta_{2} + 1246 \beta_{3} ) q^{60} + ( 21243 \beta_{1} + 4154 \beta_{2} ) q^{61} + ( -24681 - 1059 \beta_{3} ) q^{62} + ( 6496 - 15372 \beta_{1} + 1890 \beta_{2} - 2212 \beta_{3} ) q^{63} + ( 17728 + 3072 \beta_{3} ) q^{64} + ( 1582 
\beta_{1} - 4088 \beta_{2} ) q^{65} + ( 12947 - 12947 \beta_{1} - 2003 \beta_{2} - 2003 \beta_{3} ) q^{66} + ( -21156 + 21156 \beta_{1} + 919 \beta_{2} + 919 \beta_{3} ) q^{67} + ( -2730 \beta_{1} + 1554 \beta_{2} ) q^{68} + ( 2577 - 282 \beta_{3} ) q^{69} + ( -19719 - 17087 \beta_{1} - 7763 \beta_{2} - 693 \beta_{3} ) q^{70} + ( -1104 + 2184 \beta_{3} ) q^{71} + ( 16224 \beta_{1} - 4944 \beta_{2} ) q^{72} + ( 25253 - 25253 \beta_{1} - 7372 \beta_{2} - 7372 \beta_{3} ) q^{73} + ( 23685 - 23685 \beta_{1} + 4755 \beta_{2} + 4755 \beta_{3} ) q^{74} + ( 17804 \beta_{1} + 2456 \beta_{2} ) q^{75} + ( 19418 - 1302 \beta_{3} ) q^{76} + ( 8883 + 14903 \beta_{1} + 4130 \beta_{2} - 126 \beta_{3} ) q^{77} + ( 13762 - 1162 \beta_{3} ) q^{78} + ( -4502 \beta_{1} + 5193 \beta_{2} ) q^{79} + ( -34408 + 34408 \beta_{1} + 11080 \beta_{2} + 11080 \beta_{3} ) q^{80} + ( -25589 + 25589 \beta_{1} - 4984 \beta_{2} - 4984 \beta_{3} ) q^{81} + ( -33866 \beta_{1} - 2618 \beta_{2} ) q^{82} + ( -52164 + 4536 \beta_{3} ) q^{83} + ( 12740 - 3234 \beta_{1} - 294 \beta_{2} - 2156 \beta_{3} ) q^{84} + ( -26553 - 9222 \beta_{3} ) q^{85} + ( -43388 \beta_{1} + 4996 \beta_{2} ) q^{86} + ( -40004 + 40004 \beta_{1} + 6326 \beta_{2} + 6326 \beta_{3} ) q^{87} + ( 10248 - 10248 \beta_{1} - 3984 \beta_{2} - 3984 \beta_{3} ) q^{88} + ( 13333 \beta_{1} - 9356 \beta_{2} ) q^{89} + ( 65326 - 2398 \beta_{3} ) q^{90} + ( 28224 + 11368 \beta_{1} + 4900 \beta_{2} + 10094 \beta_{3} ) q^{91} + ( -5142 + 426 \beta_{3} ) q^{92} + ( -19359 \beta_{1} - 1086 \beta_{2} ) q^{93} + ( -49017 + 49017 \beta_{1} - 9843 \beta_{2} - 9843 \beta_{3} ) q^{94} + ( 99070 - 99070 \beta_{1} - 3463 \beta_{2} - 3463 \beta_{3} ) q^{95} + ( -27440 \beta_{1} - 5264 \beta_{2} ) q^{96} + ( 104566 + 196 \beta_{3} ) q^{97} + ( -24304 + 97951 \beta_{1} + 6223 \beta_{2} + 10192 \beta_{3} ) q^{98} + ( -33472 + 2674 \beta_{3} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q - 2q^{2} + 8q^{3} - 12q^{4} + 38q^{5} - 164q^{6} - 168q^{7} + 192q^{8} + 380q^{9} + O(q^{10})$$ $$4q - 2q^{2} + 8q^{3} - 12q^{4} + 38q^{5} - 164q^{6} - 168q^{7} + 192q^{8} + 380q^{9} + 778q^{10} - 424q^{11} + 196q^{12} - 1848q^{13} - 2674q^{14} + 1784q^{15} + 2064q^{16} + 2346q^{17} - 212q^{18} + 360q^{19} - 3416q^{20} - 1526q^{21} + 4252q^{22} + 12q^{23} - 1392q^{24} - 1872q^{25} - 1148q^{26} + 5744q^{27} + 2548q^{28} - 14104q^{29} - 5258q^{30} - 3548q^{31} + 8096q^{32} + 3398q^{33} + 14844q^{34} + 27496q^{35} - 2192q^{36} - 11090q^{37} - 20138q^{38} - 1624q^{39} - 15936q^{40} + 7000q^{41} - 3472q^{42} - 25360q^{43} - 5948q^{44} - 1300q^{45} + 5118q^{46} + 22956q^{47} + 22432q^{48} + 4900q^{49} + 59984q^{50} + 384q^{51} + 1400q^{52} - 3042q^{53} - 32546q^{54} - 50152q^{55} - 57792q^{56} - 38116q^{57} + 58852q^{58} + 65808q^{59} - 14084q^{60} + 42486q^{61} - 98724q^{62} - 4760q^{63} + 70912q^{64} + 3164q^{65} + 25894q^{66} - 42312q^{67} - 5460q^{68} + 10308q^{69} - 113050q^{70} - 4416q^{71} + 32448q^{72} + 50506q^{73} + 47370q^{74} + 35608q^{75} + 77672q^{76} + 65338q^{77} + 55048q^{78} - 9004q^{79} - 68816q^{80} - 51178q^{81} - 67732q^{82} - 208656q^{83} + 44492q^{84} - 106212q^{85} - 86776q^{86} - 80008q^{87} + 20496q^{88} + 26666q^{89} + 261304q^{90} + 135632q^{91} - 20568q^{92} - 38718q^{93} - 98034q^{94} + 198140q^{95} - 54880q^{96} + 418264q^{97} + 98686q^{98} - 133888q^{99} + O(q^{100})$$
Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} - x^{3} + 10 x^{2} + 9 x + 81$$:
$$\beta_{0} = 1$$
$$\beta_{1} = ( -\nu^{3} + 10 \nu^{2} - 10 \nu + 81 ) / 90$$
$$\beta_{2} = ( \nu^{3} - 10 \nu^{2} + 190 \nu - 81 ) / 90$$
$$\beta_{3} = ( \nu^{3} + 14 ) / 5$$
$$1 = \beta_0$$
$$\nu = ( \beta_{2} + \beta_{1} ) / 2$$
$$\nu^{2} = ( \beta_{3} + \beta_{2} + 19 \beta_{1} - 19 ) / 2$$
$$\nu^{3} = 5 \beta_{3} - 14$$
## Character Values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/7\mathbb{Z}\right)^\times$$.
$$\chi(3) = -\beta_{1}$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 2.1 | 1.77069 − 3.06693i | −3.54138 + 6.13385i | 5.04138 + 8.73193i | −9.08276 − 15.7318i | 39.9138 − 69.1328i | −71.4138 | 43.1587 + 122.247i | −97.9863 | 70.6689 − 122.402i | 282.700 + 489.651i |
| 2.2 | −1.27069 + 2.20090i | 2.54138 − 4.40180i | −1.04138 − 1.80373i | 3.08276 + 5.33950i | −20.9138 + 36.2238i | −10.5862 | −127.159 − 25.2522i | 193.986 | 119.331 − 206.687i | 106.300 + 184.117i |
| 4.1 | 1.77069 + 3.06693i | −3.54138 − 6.13385i | 5.04138 − 8.73193i | −9.08276 + 15.7318i | 39.9138 + 69.1328i | −71.4138 | 43.1587 − 122.247i | −97.9863 | 70.6689 + 122.402i | 282.700 − 489.651i |
| 4.2 | −1.27069 − 2.20090i | 2.54138 + 4.40180i | −1.04138 + 1.80373i | 3.08276 − 5.33950i | −20.9138 − 36.2238i | −10.5862 | −127.159 + 25.2522i | 193.986 | 119.331 + 206.687i | 106.300 − 184.117i |
## Inner twists
| Char. orbit | Parity | Mult. | Self Twist | Proved |
|---|---|---|---|---|
| 1.a | Even | 1 | trivial | yes |
| 7.c | Even | 1 | | yes |
## Hecke kernels
There are no other newforms in $$S_{6}^{\mathrm{new}}(7, [\chi])$$.
https://cran.r-project.org/web/packages/asteRisk/vignettes/asteRisk.html
## Note: some examples in this vignette require that the asteRiskData
## package be installed. The system currently running this vignette does
## not have that package installed, so code examples will not be
## evaluated.
asteRisk
Rafael Ayala, Daniel Ayala, Lara Sellés Vidal
March 21, 2021
# 1 Introduction
Unlike the positional information of planes and other aircraft, satellite positions are not readily available for arbitrary timepoints along their orbits. Instead, orbital state vectors, derived from multiple observations at discrete and relatively scarce timepoints (compared to, for example, the large information density available for aircraft), are generated by entities with access to such information, such as the United States Space Surveillance Network. Unclassified information can be obtained through resources such as Space-Track and CelesTrak, most commonly in the form of TLE (Two/Three Line Elements).
However, due to the periodicity of the trajectories (orbits) followed by satellites, if the state vector of an object orbiting Earth is known at a given time, it is possible to predict its state vector at future or past times. In order to accurately perform such predictions, known as orbit propagation, it is necessary to use complex models that take into consideration not only the gravitational attraction of Earth, but also atmospheric drag and secular and periodic perturbations of the Moon and the Sun, among other effects.
The most widely applied models are the SGP4 and SDP4 models, whose first implementations were introduced in FORTRAN IV in 1988. asteRisk aims to constitute the basis for astrodynamics analysis in R. To that end, it provides a native R implementation of the SGP4 and SDP4 models and utilities to parse and read TLE files and to convert coordinates between different reference frames. Additionally, a high-precision orbital propagator is also provided. High-precision propagators model a set of forces that act on satellites to calculate the resulting acceleration, and propagate the orbit by numerically solving the resulting differential equation for the position.
# 2 Installation instructions
Before installing asteRisk, make sure you have the latest version of R installed. To install asteRisk, start R and enter:
install.packages("asteRisk")
Once installed, the package can be loaded as shown below:
library(asteRisk)
# 3 Reading TLE and RINEX files
TLE (Two-/Three- Line Element) is the standard format for representing orbital state vectors. In short, TLEs have 2 lines that contain the orbital parameters that characterize the state of a satellite at a given time, known as epoch. An additional initial line can be present to indicate the name of the satellite. A detailed description of the TLE format can be found here.
asteRisk provides the utilities to read TLE files (function readTLE()), and to directly parse character vectors containing the lines of a TLE as strings (function parseTLElines()). The resulting lists contain the NORAD catalog number of the satellite, the classification level of the information, its international designator, launch year, launch number, launch piece, the date and time of the state vector, the orbital parameters (angular speed, eccentricity, inclination, argument of the perigee, longitude of the ascending node, anomaly) and the drag coefficient of the satellite.
A set of 29 TLE, provided in Revisiting Space Track Report #3 and typically used to benchmark implementations of the SGP4/SDP4 propagators, is distributed with asteRisk in a file named testTLE.txt:
# Read the provided file with 29 benchmark TLE, which contains objects with a
# variety of orbital parameters
# (the path below assumes the test file sits in the package installation directory)
test_TLEs <- readTLE(paste0(path.package("asteRisk"), "/testTLE.txt"))
# TLE number 17 contains a state vector for Italsat 2
test_TLEs[[17]]
## $NORADcatalogNumber
## [1] "24208"
##
## $classificationLevel
## [1] "unclassified"
##
## $internationalDesignator
## [1] "1996-044A"
##
## $launchYear
## [1] 1996
##
## $launchNumber
## [1] "044"
##
## $launchPiece
## [1] "A"
##
## $dateTime
## [1] "2006-06-26 0:58:29.3433600001845"
##
## $elementNumber
## [1] 160
##
## $inclination
## [1] 3.8536
##
## $ascension
## [1] 80.0121
##
## $eccentricity
## [1] 0.002664
##
## $perigeeArgument
## [1] 311.0977
##
## $meanAnomaly
## [1] 48.3
##
## $meanMotion
## [1] 1.007781
##
## $meanMotionDerivative
## [1] -1.88e-06
##
## $meanMotionSecondDerivative
## [1] 0
##
## $Bstar
## [1] 1e-04
##
## $ephemerisType
## [1] "Distributed data (SGP4/SDP4)"
##
## $epochRevolutionNumber
## [1] 3611
##
## $objectName
## [1] "# ITALSAT 2 # 24h resonant GEO, inclination > 3 deg"
# It is also possible to directly parse a character vector with 2 or 3
# elements, where each element is a string representing a line of the TLE, to
# obtain the same result:
italsat2_lines <- c("ITALSAT 2", "1 24208U 96044A 06177.04061740 -.00000094 00000-0 10000-3 0 1600",
"2 24208 3.8536 80.0121 0026640 311.0977 48.3000 1.00778054 36119")
italsat2_TLE <- parseTLElines(italsat2_lines)
italsat2_TLE
## $NORADcatalogNumber
## [1] "24208"
##
## $classificationLevel
## [1] "unclassified"
##
## $internationalDesignator
## [1] "1996-044A"
##
## $launchYear
## [1] 1996
##
## $launchNumber
## [1] "044"
##
## $launchPiece
## [1] "A"
##
## $dateTime
## [1] "2006-06-26 0:58:29.3433600001845"
##
## $elementNumber
## [1] 160
##
## $inclination
## [1] 3.8536
##
## $ascension
## [1] 80.0121
##
## $eccentricity
## [1] 0.002664
##
## $perigeeArgument
## [1] 311.0977
##
## $meanAnomaly
## [1] 48.3
##
## $meanMotion
## [1] 1.007781
##
## $meanMotionDerivative
## [1] -1.88e-06
##
## $meanMotionSecondDerivative
## [1] 0
##
## $Bstar
## [1] 1e-04
##
## $ephemerisType
## [1] "Distributed data (SGP4/SDP4)"
##
## $epochRevolutionNumber
## [1] 3611
##
## $objectName
## [1] "ITALSAT 2"
test_TLEs[[17]]$inclination == italsat2_TLE$inclination
## [1] TRUE
RINEX (Receiver Independent Exchange Format), on the other hand, is one of the most widely used formats for providing data of global satellite navigation systems (GNSS). The RINEX standard defines several file types, among which navigation files are used to distribute positional information of the satellites. While the format is mainly limited to GNSS, it is of interest for satellite positioning applications given the large amounts of publicly available, high-precision data for such satellite constellations.
The exact information provided in a RINEX navigation file varies for each GNSS. For example, while GLONASS navigation files provide directly the position, velocity and acceleration in the GCRF frame of coordinates, GPS navigation files provide orbital elements.
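The chunk below assumes that two navigation objects, testGPSnav and testGLONASSnav, have already been created from RINEX files shipped with the package. A minimal sketch of how such objects can be created is shown here; readGPSNavigationRINEX() and readGLONASSNavigationRINEX() are the package's RINEX readers, but the file paths used below are placeholders rather than the actual test files distributed with asteRisk.

# Hypothetical paths: replace with the RINEX navigation files you want to read
gpsRinexFile <- "path/to/GPS_navigation_file.rnx"
glonassRinexFile <- "path/to/GLONASS_navigation_file.rnx"
testGPSnav <- readGPSNavigationRINEX(gpsRinexFile)
testGLONASSnav <- readGLONASSNavigationRINEX(glonassRinexFile)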
# Read the provided test RINEX navigation files for both GPS and GLONASS:
# Count the number of positional messages in each file:
length(testGPSnav$messages)

## [1] 3

length(testGLONASSnav$messages)
## [1] 5
# 4 Propagation of orbits
The two orbit propagators currently available in asteRisk are the SGP4 and SDP4 models. They allow the calculation of the position and velocity of the satellite at different times, both before and after the time corresponding to the known state vector (referred to as “epoch”). Kepler’s equation is solved through fixed-point iteration. It should be noted that the SGP4 model can only accurately propagate the orbit of objects near Earth (with an orbital period shorter than 225 minutes, corresponding approximately to an altitude lower than 5877.5 km).
For propagation of objects in deep space (with orbital periods larger than 225 minutes, corresponding to altitudes higher than 5877.5 km), the SDP4 model should be used, which contains additions to take into account the secular and periodic perturbations of the Moon and the Sun on the orbit of the satellite. It also considers Earth resonance effects on 24-hour geostationary and 12-hour Molniya orbits.
However, it should be noted that SDP4 employs only simplified drag equations, and lacks corrections for low-perigee orbits. Therefore, it is recommended to apply the standard SGP4 model for satellites that are not in deep space.
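A quick way to check which regime applies to a given TLE is to convert its mean motion (in revolutions per day) into an orbital period in minutes and compare it against the 225-minute threshold mentioned above. A minimal sketch, using the italsat2_TLE object parsed earlier:

# Orbital period in minutes from the mean motion (revolutions/day)
orbitalPeriod <- 1440/italsat2_TLE$meanMotion
orbitalPeriod
# A period longer than 225 minutes indicates a deep-space object, for which
# the SDP4 model should be applied
orbitalPeriod > 225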
asteRisk provides three functions to apply the SGP4/SDP4 propagators. The sgdp4() function automatically determines if the satellite is in deep space or near Earth, and applies the appropriate model. The application of either SGP4 or SDP4 can be forced with the sgp4() and sdp4() functions respectively. However, it is not recommended to apply SGP4 to objects in deep space or SDP4 to objects near Earth. In the following example, we calculate and visualize the trajectory of a satellite with a high-eccentricity Molniya orbit:
# Element 11 of the set of test TLE contains an orbital state vector for
# satellite MOLNIYA 1-83, launched from the USSR in 1992 and decayed in 2007
molniya <- test_TLEs[[11]]
1/molniya$meanMotion

## [1] 0.496845

# From the inverse of the mean motion, we can see that the orbital period is
# approximately half a day, in accordance with a Molniya orbit. Let's use the
# SDP4 model to calculate the position and velocity of the satellite for a full
# orbital period every 10 minutes. It is important to provide the mean motion
# in radians/min, the inclination, anomaly, argument of perigee and longitude
# of the ascending node in radians, and the target time as an increment in
# minutes for the epoch time
targetTimes <- seq(0, 720, by = 10)
results_position_matrix <- matrix(nrow = length(targetTimes), ncol = 3)
results_velocity_matrix <- matrix(nrow = length(targetTimes), ncol = 3)
for (i in 1:length(targetTimes)) {
    new_result <- sgdp4(n0 = molniya$meanMotion * ((2 * pi)/(1440)),
                        e0 = molniya$eccentricity,
                        i0 = molniya$inclination * pi/180,
                        M0 = molniya$meanAnomaly * pi/180,
                        omega0 = molniya$perigeeArgument * pi/180,
                        OMEGA0 = molniya$ascension * pi/180,
                        Bstar = molniya$Bstar,
                        initialDateTime = molniya$dateTime,
                        targetTime = targetTimes[i])
    results_position_matrix[i, ] <- new_result[[1]]
    results_velocity_matrix[i, ] <- new_result[[2]]
}
last_molniya_propagation <- new_result
results_position_matrix <- cbind(results_position_matrix, targetTimes)
colnames(results_position_matrix) <- c("x", "y", "z", "time")
# Let's verify that the SDP4 algorithm was automatically chosen
last_molniya_propagation$algorithm
## [1] "sdp4"
# We can visualize the resulting trajectory using a plotly animation to confirm
# that indeed a full revolution was completed and that the orbit is highly
# eccentric.
library(plotly)
library(lazyeval)
library(dplyr)
# In order to create the animation, we must first define a function to create
# the accumulated dataframe required for the animation
accumulate_by <- function(dat, var) {
    var <- f_eval(var, dat)
    lvls <- plotly:::getLevels(var)
    dats <- lapply(seq_along(lvls), function(x) {
        cbind(dat[var %in% lvls[seq(1, x)], ], frame = lvls[[x]])
    })
    bind_rows(dats)
}
accumulated_df <- accumulate_by(as.data.frame(results_position_matrix), ~time)
orbit_animation <- plot_ly(accumulated_df, x = ~x, y = ~y, z = ~z, type = "scatter3d",
mode = "marker", opacity = 0.8, line = list(width = 6, color = ~time, reverscale = FALSE),
frame = ~frame)
orbit_animation <- animation_opts(orbit_animation, frame = 50)
orbit_animation <- layout(orbit_animation, scene = list(xaxis = list(range = c(min(results_position_matrix[,
1]), max(results_position_matrix[, 1]))), yaxis = list(range = c(min(results_position_matrix[,
2]), max(results_position_matrix[, 2]))), zaxis = list(range = c(min(results_position_matrix[,
3]), max(results_position_matrix[, 3])))))
orbit_animation
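If an interactive animation is not needed, a quick sanity check of the orbit's eccentricity can be done with base R by plotting the geocentric distance of the satellite against time; this sketch only uses the results_position_matrix computed above (positions returned by sgdp4() are in km).

# Distance from the center of Earth (in km) at each propagated timepoint
geocentricDistance <- sqrt(rowSums(results_position_matrix[, c("x", "y", "z")]^2))
plot(results_position_matrix[, "time"], geocentricDistance, type = "l",
     xlab = "Time since epoch (min)", ylab = "Geocentric distance (km)")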
# 5 Conversion between reference frames
The positions and velocities calculated with the SGP4 and SDP4 models are in the TEME (True Equator, Mean Equinox) frame of reference, which is an Earth-centered inertial coordinate frame, where the origin is placed at the center of mass of Earth and the coordinate frame is fixed with respect to the stars (and therefore not fixed with respect to the Earth surface in its rotation).
asteRisk provides the TEMEtoITRF() function, which converts positions and velocities in TEME to the ITRF (International Terrestrial Reference Frame) frame of reference. The ITRF is an ECEF (Earth Centered, Earth Fixed) frame of reference, i.e., a non-inertial frame of reference where the origin is also placed at the center of mass of Earth, and the frame rotates with respect to the stars to remain fixed with respect to the Earth surface as it rotates.
Additionally, the TEMEtoLATLON() and ITRFtoLATLON() functions convert Cartesian coordinates to geodetic latitude, longitude and altitude values, from the TEME and ITRF frames respectively. This can be useful, for example, to visualize the ground track followed by a satellite.
Several of the functions for conversion of systems of coordinates require Earth orientation parameters, which are provided through the asteRiskData accessory package, which can be installed by running “install.packages(‘asteRiskData’, repos=‘https://rafael-ayala.github.io/drat/’)”.
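For reference, installation of the accessory data package and a subsequent data update look as follows. The repository URL is the one quoted above; getLatestSpaceData() is the update function described later in this vignette.

# Install the accessory data package from the drat repository
install.packages("asteRiskData", repos = "https://rafael-ayala.github.io/drat/")
# Update Earth orientation parameters, space weather and related data to the
# latest available versions
getLatestSpaceData()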
# Let us convert the last propagation previously calculated for the MOLNIYA
# 1-83 satellite into the ITRF frame. In order to do so, it is required to
# provide a date-time string indicating the time for the newly calculated
# position and velocity. Since this was 720 minutes after the epoch for the
# original state vector, we can just add 12 hours to it
molniya$dateTime

new_dateTime <- "2006-06-25 12:33:43"
ITRF_coordinates <- TEMEtoITRF(last_molniya_propagation$position,
                               last_molniya_propagation$velocity,
                               new_dateTime)

# Let us now convert the previously calculated set of TEME coordinates to
# geodetic latitude and longitude
geodetic_matrix <- matrix(nrow = nrow(results_position_matrix), ncol = 3)
for (i in 1:nrow(geodetic_matrix)) {
    new_dateTime <- as.character(as.POSIXct(molniya$dateTime, tz = "UTC") + 60 * targetTimes[i])
    new_geodetic <- TEMEtoLATLON(results_position_matrix[i, 1:3] * 1000, new_dateTime)
    geodetic_matrix[i, ] <- new_geodetic
}
colnames(geodetic_matrix) <- c("latitude", "longitude", "altitude")
# We can then visualize the ground track of the satellite
library(ggmap)
ggmap(get_map(c(left = -180, right = 180, bottom = -80, top = 80))) +
    geom_segment(data = as.data.frame(geodetic_matrix),
                 aes(x = longitude, y = latitude,
                     xend = c(tail(longitude, n = -1), NA),
                     yend = c(tail(latitude, n = -1), NA)),
                 na.rm = TRUE) +
    geom_point(data = as.data.frame(geodetic_matrix),
               aes(x = longitude, y = latitude),
               color = "blue", size = 0.3, alpha = 0.8)
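The ITRFtoLATLON() function mentioned earlier provides the same geodetic conversion starting from ITRF coordinates. A minimal sketch using the ITRF_coordinates object computed above, assuming that, like TEMEtoLATLON(), it expects a single Cartesian position vector in meters (no time argument is needed, since ITRF is an Earth-fixed frame):

# Geodetic latitude, longitude and altitude from the ITRF position computed above
# (multiplied by 1000 to convert from km to m, as done for TEMEtoLATLON above)
ITRFtoLATLON(ITRF_coordinates$position * 1000)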
# 6 High-precision orbital propagator
The SGP4/SDP4 models provide a good accuracy at a low computational cost. However, higher precision can be achieved in orbit propagation by calculating at each instant the acceleration of the satellite resulting from the set of forces that are exerted on it, and solving the second-order ODE that expresses acceleration as the second time-derivative of position through numerical integration. Such propagators are often referred to as high-precision orbital propagators (HPOP). The HPOP implemented in asteRisk takes into consideration Earth's gravitational attraction (using a geopotential model based on spherical harmonics); the effects of Earth's ocean and solid tides; the attraction of the Sun, Moon and planets; solar radiation pressure; atmospheric drag; and relativistic effects. The HPOP can be used through the hpop() function. However, it should be kept in mind that, while the HPOP can achieve a much higher precision, it also has a much higher computational cost. It should also be noted that the HPOP requires access to data such as Earth orientation parameters, space weather data and solar and geomagnetic storms. Such data is provided in the asteRiskData accessory package, which, as previously mentioned, can be installed by running “install.packages(‘asteRiskData’, repos=‘https://rafael-ayala.github.io/drat/’)”. After having installed the accessory package, it is possible to update the data and coefficients to the latest available versions with the getLatestSpaceData() function.
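In other words, the propagator numerically integrates an equation of motion of the form below; this is only a schematic summary of the force terms listed above, not a formula taken from the package documentation:

$$\ddot{\vec{r}} = \vec{a}_{\mathrm{geopotential}} + \vec{a}_{\mathrm{tides}} + \vec{a}_{\mathrm{third\,body}} + \vec{a}_{\mathrm{SRP}} + \vec{a}_{\mathrm{drag}} + \vec{a}_{\mathrm{rel}}$$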
# The HPOP requires as input the satellite mass, the effective areas subjected
# to solar radiation pressure and atmospheric drag, and the drag and
# reflectivity coefficients. The mass and cross-section of Molniya satellites
# are approximately 1600 kg and 15 m2, respectively. We will use the
# cross-section to approximate the effective area for both atmospheric drag and
# radiation pressure. Regarding the drag and reflectivity coefficients, while
# their values vary for each satellite and orbit, 2.2 and 1.2 respectively can
# be used as approximations.
molniyaMass <- 1600
molniyaCrossSection <- 15
molniyaCd <- 2.2
molniyaCr <- 1.2
# As initial conditions, we will use the initial conditions provided in the
# same TLE for MOLNIYA 1-83 used previously for the SGP4/SDP4 propagator. We
# first need to calculate the initial position and velocity in the GCRF ECI
# frame of reference from the provided orbital elements. As an approximation,
# we will use the results obtained for t = 0 with the SGP4/SDP4 propagator. We
# convert those into the GCRF frame of reference. It should be noted that such
# an approximation introduces an error due to a mismatch between the position
# derivative calculated at each propagation point through SGP4/SDP4 and the
# actual velocity of the satellite.
GCRF_coordinates <- TEMEtoGCRF(results_position_matrix[1, 1:3] * 1000,
                               results_velocity_matrix[1, 1:3] * 1000,
                               molniya$dateTime)
initialPosition <- GCRF_coordinates$position
initialVelocity <- GCRF_coordinates$velocity

# Let's use the HPOP to calculate the position each 2 minutes during a period
# of 3 hours
targetTimes <- seq(0, 10800, by = 120)
hpop_results <- hpop(initialPosition, initialVelocity, molniya$dateTime, targetTimes,
                     molniyaMass, molniyaCrossSection, molniyaCrossSection,
                     molniyaCr, molniyaCd)
# Now we can calculate and plot the corresponding geodetic coordinates
geodetic_matrix_hpop <- matrix(nrow = nrow(hpop_results), ncol = 3)
for (i in 1:nrow(geodetic_matrix_hpop)) {
new_dateTime <- as.character(as.POSIXct(molniya$dateTime, tz = "UTC") + targetTimes[i])
new_geodetic <- GCRFtoLATLON(hpop_results[i, 2:4], new_dateTime)
geodetic_matrix_hpop[i, ] <- new_geodetic
}
colnames(geodetic_matrix_hpop) <- c("latitude", "longitude", "altitude")
library(ggmap)
ggmap(get_map(c(left = -180, right = 180, bottom = -80, top = 80))) +
    geom_segment(data = as.data.frame(geodetic_matrix_hpop),
                 aes(x = longitude, y = latitude,
                     xend = c(tail(longitude, n = -1), NA),
                     yend = c(tail(latitude, n = -1), NA)),
                 na.rm = TRUE) +
    geom_point(data = as.data.frame(geodetic_matrix_hpop),
               aes(x = longitude, y = latitude),
               color = "blue", size = 0.3, alpha = 0.8)
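As a rough consistency check between the two propagators, the HPOP position at the last timepoint (t = 180 minutes) can be compared with the corresponding SGP4/SDP4 position after converting the latter to the GCRF frame. This sketch assumes the hpop() output layout used above (columns 2 to 4 holding the position in meters) and uses row 19 of results_position_matrix, which corresponds to t = 180 minutes:

# SGP4/SDP4 position and velocity at t = 180 min, converted from km to m
sgdp4_position_teme <- results_position_matrix[19, c("x", "y", "z")] * 1000
sgdp4_velocity_teme <- results_velocity_matrix[19, ] * 1000
checkTime <- as.character(as.POSIXct(molniya$dateTime, tz = "UTC") + 180 * 60)
sgdp4_gcrf <- TEMEtoGCRF(sgdp4_position_teme, sgdp4_velocity_teme, checkTime)
# Euclidean distance (in km) between the HPOP and SGP4/SDP4 positions
sqrt(sum((hpop_results[91, 2:4] - sgdp4_gcrf$position)^2))/1000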