| url (string, lengths 14–2.42k) | text (string, lengths 100–1.02M) | date (string, length 19) | metadata (string, lengths 1.06k–1.1k) |
|---|---|---|---|
https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/1466
|
## Reconsidering Mixed Integer Programming and MIP-based Hybrids for Scheduling
Please always quote using this URN: urn:nbn:de:0297-zib-14660
Despite the success of constraint programming (CP) for scheduling, the much wider penetration of mixed integer programming (MIP) technology into business applications means that many practical scheduling problems are being addressed with MIP, at least as an initial approach. Furthermore, there have been impressive and well-documented improvements in the power of generic MIP solvers over the past decade. We empirically demonstrate that, on an existing set of resource allocation and scheduling problems, standard MIP and CP models are now competitive with the state-of-the-art manual decomposition approach. Motivated by this result, we formulate two tightly coupled hybrid models based on constraint integer programming (CIP) and demonstrate that these models, which embody advances in CP and MIP, are able to outperform the CP, MIP, and decomposition models. We conclude that both MIP and CIP are technologies that should be considered along with CP for solving scheduling problems.
Author: Stefan Heinz, Christopher Beck
Document type: ZIB-Report
Keywords: constraint integer programming; constraint programming; cumulative constraint; mixed integer programming; optional activities
MSC classification: 60-XX Probability theory and stochastic processes; 65K05 Mathematical programming methods; 90C10 Integer programming
Date: 2012/02/11
Series: ZIB-Report (12-05)
ISSN: 1438-0064
Appeared in: Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems (CPAIOR 2012), Springer 2012. Lecture Notes in Computer Science, 7298, pp. 211-227
|
2016-10-24 03:22:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33986690640449524, "perplexity": 3903.0974467321457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719465.22/warc/CC-MAIN-20161020183839-00453-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.116.063005
|
# Synopsis: Deep Freezing Molecules
Researchers cooled trapped molecules well below $1\,\text{mK}$—a record temperature for molecules that have not been assembled from pre-cooled atoms.
Atom cooling is a mature field, and scientists hope that bringing molecules to ultracold temperatures, which is more challenging than cooling atoms, will lead to advances in precision measurement, ultracold chemistry, and quantum computing. Researchers have previously cooled molecules below $1\,\text{mK}$ indirectly, by merging two pre-cooled atomic gases into a gas of two-atom molecules. But this method only works for a limited set of molecular species. Now two teams of researchers, using different techniques, have cooled large numbers of trapped molecules down to $400\,\mu\text{K}$—a temperature nearly 10 times lower than previous direct-cooling experiments.
Gerhard Rempe and Martin Zeppenfeld of the Max Planck Institute for Quantum Optics, Germany, and their colleagues used the so-called Sisyphus technique with formaldehyde molecules. In their experiment, molecules repeatedly climb the potential “hills” at the edges of an electric field trap, losing kinetic energy each time they do. The team cooled 300,000 molecules, at least 30 times the previous record from indirect cooling.
David DeMille of Yale University and his colleagues cooled strontium monofluoride in a modified magneto-optical trap (MOT). One common difficulty in cooling molecules is that they can easily make transitions to quantum states associated with rotation and vibration that are dark, that is, not accessible to the cooling lasers. The Yale team solved this problem for rotational states by rapidly oscillating the MOT’s magnetic field and also periodically reversing the polarization of the MOT laser. These synchronized oscillations periodically turn dark rotational states into accessible bright states and vice versa. Dark vibrational states were instead addressed with additional lasers, a technique the team developed previously.
This research is published in Physical Review Letters.
–David Ehrenstein
## Subject Areas
Atomic and Molecular Physics, Photonics, Quantum Physics
|
2016-07-27 19:15:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18953558802604675, "perplexity": 3936.4040662920274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827077.13/warc/CC-MAIN-20160723071027-00285-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://biomechanical.asmedigitalcollection.asme.org/article.aspx?articleid=1475888
|
Research Papers
# Flow and Particle Dispersion in a Pulmonary Alveolus—Part I: Velocity Measurements and Convective Particle Transport
Author and Article Information
Sudhaker Chhabra
Department of Mechanical Engineering, University of Delaware, Newark, DE 19716
Department of Mechanical Engineering, University of Delaware, Newark, DE 19716; prasad@udel.edu (corresponding author)
J Biomech Eng 132(5), 051009 (Mar 30, 2010) (12 pages) doi:10.1115/1.4001112 History: Received June 15, 2009; Revised January 13, 2010; Posted January 27, 2010; Published March 30, 2010; Online March 30, 2010
## Abstract
The alveoli are the smallest units of the lung that participate in gas exchange. Although gas transport is governed primarily by diffusion due to the small length scales associated with the acinar region ($\sim 500\ \mu\text{m}$), the transport and deposition of inhaled aerosol particles are influenced by convective airflow patterns. Therefore, understanding alveolar fluid flow and mixing is a necessary first step toward predicting aerosol transport and deposition in the human acinar region. In this study, flow patterns and particle transport have been measured using a simplified in vitro alveolar model consisting of a single alveolus located on a bronchiole. The model comprises a transparent elastic 5/6 spherical cap (representing the alveolus) mounted over a circular hole on the side of a rigid circular tube (representing the bronchiole). The alveolus is capable of expanding and contracting in phase with the oscillatory flow through the tube. Realistic breathing conditions were achieved by exercising the model at physiologically relevant Reynolds and Womersley numbers. Particle image velocimetry was used to measure the resulting flow patterns in the alveolus. Data were acquired for five cases obtained as combinations of the alveolar-wall motion (nondeforming/oscillating) and the bronchiole flow (none/steady/oscillating). Detailed vector maps at discrete points within a given cycle revealed flow patterns, and transport and mixing of bronchiole fluid into the alveolar cavity. The time-dependent velocity vector fields were integrated over multiple cycles to estimate particle transport into the alveolar cavity and deposition on the alveolar wall. The key outcome of the study is that alveolar-wall motion enhances mixing between the bronchiole and the alveolar fluid. Particle transport and deposition into the alveolar cavity are maximized when the alveolar wall oscillates in tandem with the bronchiole fluid, which is the operating case in the human lung.
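For reference, the two dimensionless groups mentioned in the abstract have the standard definitions below (these are not stated on the page itself; $D$ denotes the bronchiole diameter, $U$ a characteristic flow velocity, $\nu$ the kinematic viscosity of the working fluid, and $\omega$ the angular frequency of the breathing cycle):

$$\mathrm{Re} = \frac{U D}{\nu}, \qquad \alpha = \frac{D}{2}\sqrt{\frac{\omega}{\nu}},$$

where $\alpha$ is the Womersley number, the ratio of unsteady inertial forces to viscous forces. Matching both numbers to physiological values is what makes a scaled-up bench model dynamically similar to a real alveolus.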
## Figures
Figure 1
Experimental model: (a) schematic of the single alveolus attached to bronchiole, (b) photograph of the model, and (c) raw PIV image of the alveolus
Figure 2
Experimental setup
Figure 3
Case A: velocity map for steady bronchiole flow over a non-deforming alveolus: (a) close-up of the upper half of the cavity highlighting the secondary vortex, and (b) schematic showing streamlines for the shear layer, and primary and secondary vortices
Figure 4
Case B: velocity fields for oscillating bronchiole flow over a nondeforming alveolus
Figure 5
Case C: velocity fields for an oscillating alveolus without an imposed bronchiole flow
Figure 6
Case D: velocity fields for oscillating alveolus with steady bronchiole flow
Figure 7
Case E: velocity fields for oscillating alveolus with oscillating bronchiole flow
Figure 8
Particle maps for case C after (a) 1, (b) 3, (c) 5, (d) 10, (e) 15, and (f) 20 breathing cycles
Figure 9
Particle transport statistics as a function of breathing cycle for case C
Figure 10
Particle maps for case D after (a) 1, (b) 3, (c) 5, (d) 10, (e) 15, and (f) 20 breathing cycles
Figure 11
Particle transport statistics as a function of breathing cycle for case D
Figure 12
Particle maps for case E after (a) 1, (b) 3, (c) 5, (d) 10, (e) 15, and (f) 20 breathing cycles
Figure 13
Particle transport statistics as a function of breathing cycle for case E
|
2018-09-25 12:06:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.247120663523674, "perplexity": 6709.172995752444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161501.96/warc/CC-MAIN-20180925103454-20180925123854-00083.warc.gz"}
|
https://papers-gamma.link/paper/125/Compressing%20Graphs%20and%20Indexes%20with%20Recursive%20Graph%20Bisection
|
Authors: Laxman Dhulipala
Domains: Graph compression
Tags: KDD2016
Nice paper building on top of [the WebGraph framework](https://papers-gamma.link/paper/31) and [Chierichetti et al.](https://papers-gamma.link/paper/126) to compress graphs.

### Approximation guarantee

I read: "our algorithm is inspired by a theoretical approach with provable guarantees on the final quality, and it is designed to directly optimize the resulting compression ratio." I misunderstood initially, but the proposed algorithm actually does not have any provable approximation guarantee other than the $\log(n)$ one (which is also obtained by a random ordering of the nodes). Designing an algorithm with (a better) approximation guarantee for minimizing "MLogA", "MLogGapA" or "BiMLogA" seems to be a nice open problem.

### Objectives

Is there any better objective than "MLogA", "MLogGapA" or "BiMLogA" to have a proxy of the compression obtained by the BV framework? Is it possible to directly look for an ordering that minimizes the size of the output of the BV compression algorithm?
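The comment names the ordering objectives without spelling them out. Purely as a rough, unofficial illustration of the idea (the exact definitions of MLogA, MLogGapA and BiMLogA are in the paper), the sketch below scores a node ordering by the logarithms of the gaps between consecutive neighbour ids, which is the kind of quantity a BV-style encoder ends up compressing. The function name and the gap convention are my own assumptions, not the paper's.

```python
# Illustrative only: a log-gap cost for a node ordering, in the spirit of the
# gap-based objectives discussed above (not the paper's exact definition).
import math

def log_gap_cost(adj, order):
    """adj: dict node -> iterable of neighbours; order: dict node -> new id."""
    cost = 0.0
    for u, nbrs in adj.items():
        ids = sorted(order[v] for v in nbrs)
        prev = order[u]                  # first gap taken relative to the source id
        for i in ids:
            cost += math.log2(abs(i - prev) + 1)
            prev = i
    return cost

# Tiny example: compare the identity ordering with a reversed ordering.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
identity = {v: v for v in adj}
reverse = {v: len(adj) - 1 - v for v in adj}
print(log_gap_cost(adj, identity), log_gap_cost(adj, reverse))
```

Lower cost means smaller gaps between neighbour ids, which is what recursive graph bisection is trying to achieve when it reorders the vertices.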
|
2021-01-27 01:05:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6750779151916504, "perplexity": 1817.2120168739852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704804187.81/warc/CC-MAIN-20210126233034-20210127023034-00175.warc.gz"}
|
https://gateoverflow.in/42522/how-to-prove-if-a-boolean-function-is-functionally-complete
|
A set of Boolean functions is functionally complete if every other Boolean function can be constructed from the functions in this set, given a set of input variables.
A set is also said to be functionally complete if, from it, we can derive a set that is already known to be functionally complete.
Examples of functionally complete sets: {AND, OR, NOT}, {OR, NOT}, {AND, NOT}.
So, to prove that a set of Boolean functions is functionally complete, derive any one of the above functionally complete sets from it.
### Why NAND and NOR?
You will sometimes see circuits implemented using only NANDs or using only NORs, and the reason for this is because they are functionally complete and minimal. You only have to build the circuit using one kind of gate. Of course, using a single gate is likely to make the circuit "larger", i.e., more gates than using many different kinds of gates, but usually it's worth the tradeoff, i.e., it's better to use more of one kind of gate than fewer of many different gates.
A function is functionally complete if it does not belong to any of the classes T0, T1, L, M, S, which are defined as follows:
Property 1: We say that boolean function f preserves zero, if on the 0-input it produces 0. By the 0-input we mean such an input, where every input variable is 0 (this input usually corresponds to the first row of the truth table). We denote the class of zero-preserving boolean functions as T0 and write f ∈ T0.
Property 2: Similarly to T0, we say that boolean function f preserves one, if on 1-input, it produces 1. The 1-input is the input where all the input variables are 1 (this input usually corresponds to the last row of the truth table). We denote the class of one-preserving boolean functions as T1 and write f ∈ T1.
Property 3: We say that boolean function f is linear if one of the following two statements holds for f:
For every 1-value of f, the number of 1’s in the corresponding input is odd, and for every 0-value of f, the number of 1’s in the corresponding input is even.
or
For every 1-value of f, the number of 1’s in the corresponding input is even, and for every 0-value of f, the number of 1’s in the corresponding input is odd.
If one of these statements holds for f, we say that f is linear. We denote the class of linear boolean functions with L and write f ∈ L.
Property 4: We say that boolean function f is monotone if for every input, switching any input variable from 0 to 1 can only result in the function’s switching its value from 0 to 1, and never from 1 to 0. We denote the class of monotone boolean functions with M and write f ∈ M.
Property 5: We say that boolean function f(x1,…,xn) is self-dual if f(x1,…,xn) = ¬f(¬x1,…,¬xn).
The function on the right in the equality above (the one with negations) is called the dual of f. We will call the class of self-dual boolean functions S and write f ∈ S.
Take this example :
Consider the operations
f(X, Y, Z) = X′YZ + XY′ + Y′Z′ and g(X, Y, Z) = X′YZ + X′YZ′ + XY
Which one of the following is correct?
(A) Both {f} and {g} are functionally complete
(B) Only {f} is functionally complete
(C) Only {g} is functionally complete
(D) Neither {f} nor {g} is functionally complete
Solution: In our case we can see that on setting all inputs to 0, g produces 0, so g preserves 0 and cannot be functionally complete.
But f preserves neither 0 nor 1.
f is not linear (see the definition of linear above).
f is not monotone (see the definition of monotone above).
f is not self-dual, as f(x, y, z) is not equal to ¬f(¬x, ¬y, ¬z).
So f is functionally complete.
Hence the answer is (B).
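As a cross-check of this reasoning, here is a small Python sketch (not part of the original answer; the helper name `post_classes` is my own) that builds the truth tables of f and g and tests membership in the five Post classes. A single function is functionally complete exactly when it lies in none of them.

```python
from itertools import product

def post_classes(fn, n):
    rows = list(product((0, 1), repeat=n))
    tt = {r: int(bool(fn(*r))) for r in rows}          # truth table

    t0 = tt[(0,) * n] == 0                              # preserves zero
    t1 = tt[(1,) * n] == 1                              # preserves one
    monotone = all(tt[a] <= tt[b]
                   for a in rows for b in rows
                   if all(x <= y for x, y in zip(a, b)))
    self_dual = all(tt[r] == 1 - tt[tuple(1 - x for x in r)] for r in rows)

    # Linear (affine over GF(2)): f must equal c0 XOR c1*x1 XOR ... XOR cn*xn,
    # where c0 = f(0,...,0) and ci = f(e_i) XOR c0.
    c0 = tt[(0,) * n]
    c = [tt[tuple(int(j == i) for j in range(n))] ^ c0 for i in range(n)]
    linear = all(tt[r] == (c0 + sum(ci & xi for ci, xi in zip(c, r))) % 2
                 for r in rows)

    return {"T0": t0, "T1": t1, "L": linear, "M": monotone, "S": self_dual}

def f(x, y, z):  # X'YZ + XY' + Y'Z'
    return ((1 - x) & y & z) | (x & (1 - y)) | ((1 - y) & (1 - z))

def g(x, y, z):  # X'YZ + X'YZ' + XY
    return ((1 - x) & y & z) | ((1 - x) & y & (1 - z)) | (x & y)

for name, fn in (("f", f), ("g", g)):
    classes = post_classes(fn, 3)
    print(name, classes, "functionally complete:", not any(classes.values()))
```

Running this reports that f lies outside all five classes while g preserves 0, matching answer (B). The linearity test in the middle also answers the question asked in the comment below.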
Reference :
https://cs.hse.ru/data/2015/05/28/1096847873/Lecture%2013.1.pdf
### 1 comment
@KULDEEP SINGH 2
Please, can you show how to check whether a function is linear or not (for both f and g)?
A set of Boolean functions is called functionally complete if all other Boolean functions can be constructed from it.
eg:- {AND,OR,NOT} is a functionally complete set.
From this set we can derive NOR, NAND, XOR, XNOR, and indeed any other Boolean function.
XNOR is not functionally complete. XOR is not functionally complete on its own either, since it is linear and preserves 0.
|
2023-02-01 19:46:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8630485534667969, "perplexity": 974.5897115704937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499949.24/warc/CC-MAIN-20230201180036-20230201210036-00364.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/conductor-maintained-potential-v-contains-spherical-cavity-radius-r-point-charge-q-placed--q969431
|
A conductor, maintained at a potential V, contains a spherical cavity of radius R.
A point charge q is placed at a distance a ( a < R)from the center of the cavity.
Find the potential of the electric field in the cavity.
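One standard route to the answer (a sketch based on the textbook image-charge method; this is not part of the original page): place an image charge $q' = -qR/a$ on the line through the centre and the real charge, at distance $R^2/a$ from the centre, and add the constant $V$. With $\mathbf{a}$ the position vector of the charge, the potential at a point $\mathbf{r}$ inside the cavity is then

$$\varphi(\mathbf{r}) = V + \frac{1}{4\pi\varepsilon_0}\left[\frac{q}{\lvert \mathbf{r}-\mathbf{a}\rvert} - \frac{qR/a}{\left\lvert \mathbf{r}-\frac{R^{2}}{a^{2}}\,\mathbf{a}\right\rvert}\right],$$

which solves Poisson's equation inside the cavity and reduces to the constant $V$ on the wall $\lvert\mathbf{r}\rvert = R$.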
Practice with similar questions
Q:
2. When a cavity is present inside a conductor in equilibrium:
(a) the potential in the cavity must be zero.
(b) the electric field in the cavity must be zero.
(c) the electric field in the cavity must have a constant non-zero value.
(d) the electric field in the cavity must decrease inversely as the square of the distance from the walls.
(e) no two points on the surface of the cavity can be at the same potential.
The answer is (b). Why?
|
2016-06-27 18:52:42
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8636069893836975, "perplexity": 253.45417698996317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00171-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://ncatlab.org/nlab/show/persistent+homotopy
|
Idea
Persistent homotopy studies homotopy types (of topological spaces) as a parameter varies, hence filtered homotopy types (of filtered topological spaces), with a focus on which elements of homotopy groups at a given stage persist how far through the filtering to later stages. A key application is to Vietoris-Rips complexes of discrete subsets in a metric space.
Notably in topological data analysis (TDA) these VR complexes arise as “point clouds” of datapoints, and the corresponding persistent homotopy is thought to detect relevant structure hidden in such data. As such, persistent homotopy refines the traditional use of persistent homology in TDA.
In general, persistent homotopy theory is to persistent homology as homotopy theory is to homology theory: homotopy is a finer invariant than homology, the former sees the full homotopy type of a topological space, the latter at most the underlying stable homotopy type.
In other words, homology involves a kind of linearization or abelianization which loses information that is retained in the homotopy type (see the Hurewicz theorem). Therefore persistent homotopy is in general a finer invariant of filtered topological spaces than persistent homology. In fact, traditional persistent homology considers only ordinary homology which is the coarsest of all generalized homology invariants. Hence in between the coarse invariant of persistent homology and the fine invariants of persistent homotopy will be intermediate invariants that would deserve to be called persistent generalized homology – but these have not yet found much attention, certainly not in the context of topological data analysis.
However, besides homology there is, dually, also cohomology, whose analogous homotopy theoretic refinement is (non-abelian cohomology theories, but in particular:) co-homotopy. The generalization of cohomotopy to the context of persistence lends itself to the analysis of persistence of level sets of continuous functions: see at persistent cohomotopy (and see the references below).
| | coarse | intermediate | fine |
|---|---|---|---|
| homology | ordinary homology | generalized homology | homotopy |
| cohomology | ordinary cohomology | generalized cohomology | cohomotopy |
| persistent homology | persistent ordinary homology | persistent generalized homology | persistent homotopy |
| persistent cohomology | persistent ordinary cohomology | persistent generalized cohomology | persistent cohomotopy |
References
General
Original articles with focus on establishing the homotopy-version of the stability theorem and the persistent version of Whitehead's theorem:
• Andrew J. Blumberg, Michael Lesnick, Universality of the Homotopy Interleaving Distance [arXiv:1705.01690]
• J. F. Jardine, Data and homotopy types [arXiv:1908.06323]
• Edoardo Lanari, Luis Scoccola, Rectification of interleavings and a persistent Whitehead theorem, Algebraic & Geometric Topology (to appear) [arXiv:2010.05378]
Review:
• J. F. Jardine, Persistent homotopy theory (2020) [pdf, pdf]
Further discussion:
Discussion with focus on the van Kampen theorem, excision and the Hurewicz theorem in persistent homotopy:
• Mehmet Ali Batan, Mehmetcik Pamuk, Hanife Varli, Persistent Homotopy [arXiv:1909.08865]
Cohomotopy in topological data analysis
Introducing persistent Cohomotopy as a tool in topological data analysis, improving on the use of well groups from persistent homology:
|
2022-12-07 17:06:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.806900680065155, "perplexity": 1971.3609047912034}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711200.6/warc/CC-MAIN-20221207153419-20221207183419-00827.warc.gz"}
|
https://cookieblues.github.io/machine%20learning/natural%20language%20processing/2021/01/11/pcc-and-cosine-similarity/
|
# Pearson Correlation Coefficient and Cosine Similarity in Word Embeddings
A friend of mine recently asked me about word embeddings and similarity. I remember, I learned that the typical way of calculating the similarity between a pair of word embeddings is to take the cosine of the angle between their vectors. This measure of similarity makes sense due to the way that these word embeddings are commonly constructed, where each dimension is supposed to represent some sort of semantic meaning. (These word embedding techniques have obvious flaws, such as words that are spelled the same way but have different meanings, called homographs, or sarcasm, which oftentimes says one thing but means the opposite.) Yet, my friend asked if you could calculate the correlation between word embeddings as an alternative to cosine similarity, and it turns out that it's almost the exact same thing.
Zhelezniak et al. (2019) explain this well. Given a vocabulary of $N$ words $\mathcal{V} = \{ w_1, \dots, w_N \}$ with a corresponding word embedding matrix $\mathbf{W} \in \mathbb{R}^{N \times D}$, each row in $\mathbf{W}$ corresponds to a word. Considering a pair of these, we can calculate their Pearson correlation coefficient (PCC). Let $(\mathbf{x}, \mathbf{y}) = \{ (x_1, y_1), \dots, (x_D, y_D) \}$ denote this pair, and we can compute the PCC as
$r_{xy} = \frac{ \sum_{i=1}^D (x_i - \bar{x})(y_i - \bar{y}) }{ \sqrt{\sum_{i=1}^D (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^D (y_i - \bar{y})^2} }, \quad \quad (1)$
where $\bar{x} = \frac{1}{D} \sum_{i=1}^D x_i$ is the sample mean; and analogously for $\bar{y}$.
The cosine similarity between vectors $\mathbf{x}, \mathbf{y}$ is
\begin{aligned} \cos \theta &= \frac{\mathbf{x} \cdot \mathbf{y}}{\| \mathbf{x} \| \| \mathbf{y} \|} \\ &= \frac{\sum_{i=1}^D x_i y_i}{\sqrt{\sum_{i=1}^D x_i^2} \sqrt{\sum_{i=1}^D y_i^2}} \quad \quad (2), \end{aligned}
where we see that equations $(1)$ and $(2)$ are the same if the sample means are 0. The question then becomes: is the mean of word vectors (across the $D$ dimensions) 0?
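A quick numeric sanity check of the equivalence (illustrative only, not from the original post): the PCC of two vectors equals the cosine similarity of their mean-centered versions.

```python
import numpy as np
from numpy.linalg import norm

rng = np.random.default_rng(0)
x, y = rng.normal(size=300), rng.normal(size=300)
xc, yc = x - x.mean(), y - y.mean()              # center both vectors

pcc = np.corrcoef(x, y)[0, 1]
cos_centered = xc @ yc / (norm(xc) * norm(yc))
print(pcc, cos_centered)                         # identical up to rounding
```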
GloVe is a popular algorithm for constructing word embeddings, and its pre-trained word embeddings are also commonly used. Let's download the pre-trained word embeddings, and see if the mean of their vectors equals 0.
The GloVe embeddings take up a little more than 800 MB. Depending on your connection, this might take a few minutes to download.
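A minimal sketch of such a download step (assuming Python, the standard glove.6B archive URL, and a local `data` folder; the post itself may have done this differently):

```python
import os
import urllib.request
import zipfile

GLOVE_URL = "http://nlp.stanford.edu/data/glove.6B.zip"   # assumed URL
os.makedirs("data", exist_ok=True)
archive_path = os.path.join("data", "glove.6B.zip")

if not os.path.exists(archive_path):
    urllib.request.urlretrieve(GLOVE_URL, archive_path)    # ~800 MB download

with zipfile.ZipFile(archive_path) as zf:
    zf.extractall("data")   # glove.6B.{50,100,200,300}d.txt
```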
This piece of code will give us a folder called data with 4 different GloVe embedding files of varying vector dimensionalities (50, 100, 200, and 300). I will use the file with dimensionality 300. We can now load the words and their corresponding vectors and calculate the mean of each vector.
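Again as a stand-in sketch (assuming numpy and the plain-text GloVe format of one word followed by its vector per line), loading glove.6B.300d.txt and computing the per-vector means might look like:

```python
import numpy as np

means = []
with open("data/glove.6B.300d.txt", encoding="utf-8") as fh:
    for line in fh:
        parts = line.rstrip().split(" ")
        vec = np.asarray(parts[1:], dtype=np.float32)   # parts[0] is the word
        means.append(vec.mean())                        # mean across the 300 dimensions

means = np.asarray(means)
print(means.mean(), means.std())
```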
Plotting these means in a histogram will give us insight into the distribution.
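A matplotlib sketch for that histogram (illustrative; it reuses the `means` array from the previous snippet, and the styling will differ from the figure shown below):

```python
import matplotlib.pyplot as plt

plt.hist(means, bins=100)
plt.xlabel("Mean of word vector (300 dimensions)")
plt.ylabel("Number of words")
plt.title("Distribution of means of GloVe word vectors")
plt.show()
```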
Distribution of means of GloVe word vectors from glove.6B.300d.txt.
As we can see, the means fall closely to 0, which means the PCC and the cosine similarity will be roughly the same when used to calculate similarity between pairs of word embeddings.
|
2022-06-29 18:38:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9912732839584351, "perplexity": 582.5277815484764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103642979.38/warc/CC-MAIN-20220629180939-20220629210939-00453.warc.gz"}
|
http://mathoverflow.net/revisions/19288/list
|
Yes. This is given in Kelley's General Topology. (Kelley was one of the main mathematicians who developed the theory of nets so that it would be useful in topology generally rather than just certain applications in analysis.)
In the section "Convergence Classes" at the end of Chapter 2 of his book, Kelley lists the following axioms for convergent nets in a topological space $X$
a) If $S$ is a net such that $S_n = s$ for each $n$ [i.e., a constant net], then $S$ converges to $s$.
b) If $S$ converges to $s$, so does each subnet.
c) If $S$ does not converge to $s$, then there is a subnet of $S$, no subnet of which converges to $s$.
d) (Theorem on iterated limits): Let $D$ be a directed set. For each $m \in D$, let $E_m$ be a directed set, let $F$ be the product $D \times \prod_{m \in D} E_m$ and for $(m,f)$ in $F$ let $R(m,f) = (m,f(m))$. If $S(m,n)$ is an element of $X$ for each $m \in D$ and $n \in E_m$ and $\lim_m \lim_n S(m,n) = s$, then $S \circ R$ converges to $s$.
He has previously shown that in any topological space, convergence of nets satisfies a) through d). (The first three are easy; part d) is, I believe, an original result of his.) In this section he proves the converse: given a set $S$ and a set $\mathcal{C}$ of pairs (net, point) satisfying the four axioms above, there exists a unique topology on $S$ such that a net $N$ converges to $s \in S$ iff $(N,s) \in \mathcal{C}$.
I have always found property d) to be unappealing bordering on completely opaque, but that's a purely personal statement.
Addendum: I would be very interested to know if anyone has ever put this characterization to any useful purpose. A couple of years ago I decided to relearn general topology and write notes this time. The flower of my efforts was an essay on convergence in topological spaces that seems to cover all the bases (especially, comparing nets and filters) more solidly than in any text I have seen.
http://math.uga.edu/~pete/convergence.pdf
But "even" in these notes I didn't talk about either the theorem on iterated limits or (consequently) Kelley's theorem above: I honestly just couldn't internalize it without putting a lot more thought into it. But I've always felt/worried that there must be some insight and content there...
|
2013-05-21 22:47:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9584899544715881, "perplexity": 159.25792356922358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700842908/warc/CC-MAIN-20130516104042-00043-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://www.smashcompany.com/
|
April 12th, 2014
That leaves maybe $25 billion for every content site in the world. Pathetic.
April 10th, 2014
# Blogosphere 2.0?
I have already mentioned that, like Chris Bertram, I am nostalgic for the early blogosphere, which died out somewhere between 2005 and 2010. I think the world lost something important then. But perhaps there is a Blogosphere 2.0 taking shape around the new mega-sites?
4. Wonkery creates astonishing loyalty. In an age where Facebook is everybody's homepage, consumers of news have never been more promiscuous in their reading affections. They go wherever they're sent; no one treats websites like they would a ...
April 10th, 2014
# Hacker School bans competitive feigned surprise regarding your ignorance
This is great:
If you have ability and a strong work ethic, people will notice. You will learn a lot from their reaction. If they react by treating you with respect, they have strong character. If they react by taking every opportunity to belittle and undermine you, they perceive you as a threat to them. If you aren't prone to petty jealousy and spiteful thinking, it will be difficult to empathize with people who are. Sadly, you must handle these ...
April 10th, 2014
# The fight against healthcare: almost awesome in its evilness
Very sad:
Gruber: "…I'm offended on two levels here. I'm offended because I believe we can help poor people get health insurance, but I'm almost more offended there's a principle of political economy that basically, if you'd told me, when the Supreme Court decision came down, I said, 'It's not a big deal. What state would turn down free money from the federal government to cover their poorest citizens?' The fact that half the states are is such a massive rejection of ...
April 9th, 2014
# Rage against Facebook
Facebook hate. I was late to join Facebook and I was early to quit. I think I joined in 2009 or early 2010, and then I quit in late 2011. I have been a huge skeptic of social media, though this month I have become a fan of Twitter. The rage in the comments is interesting:
Lady Di: I started noticing how bad I felt after logging into FB a few years ago… Yes it is a time suck. And everyone is ...
April 8th, 2014
# Forked processes, concurrency, and memory problems
Last week I expressed my doubts about Unicorn (and the idea that it uses processes, therefore it uses Unix, therefore it must be good). Here is another article that looks at Unicorn, and in particular the memory consumption that goes along with forked processes:
Unicorn uses forked processes to achieve concurrency. Since forked processes are essentially copies of each other, this means that the Rails application need not be thread safe. This is great because it is difficult to ensure that ...
April 8th, 2014
# Change fails most of the time
Interesting:
Consider the following scenario: The leadership at Company X announces a partial restructuring that will consolidate two levels of management, effectively demoting all Senior Managers to the position of Manager. The change management team sets to work: it identifies sponsors; conducts a change readiness assessment; develops and executes a change management plan that dedicates resources and time to manage communications, training, coaching, and resistance; and supports the project team through the roll out of the new org chart by training ...
April 7th, 2014
# Is college needed for tech?
Interesting:
Ms. Glen, in a statement, called the tech industry "our pipeline to the middle class" and added, "It's our job to develop the work force these fast-growing companies need so people from our schools and our neighborhoods have a real shot at these good-paying jobs." At least one other city official appears to share that view: The report was managed by Carl Weisbrod before he left HR&A, a real estate consulting firm, to accept Mr. de Blasio's appointment as chairman of ...
April 7th, 2014
# Cost overruns and the IBM 360
These are some serious cost overruns. $40 million is the estimate, $500 million is the final cost. And $5 billion for the overall 360 development. Consider that all Apollo missions from 1960 to 1975 collectively cost $25 billion, and no one trip to the moon cost anything like $5 billion.
IBM built its own circuits for S/360, Solid Logic Technology (SLT) – a set of transistors and diodes mounted on a circuit twenty-eight-thousandths of a square inch and protected by ...
April 7th, 2014
# Women moved into the work force from 1930s to 1970s
This is a big surprise:
The participation rate for women increased significantly from the mid 30s to the mid 70s and then flattened out.
And the chart shows no post-war decline:
There is the big question, what changed during the time 1930 to 1980, and why did it stop?
Women have always worked, though on the farm much of that work escaped any measurement that the government or historians have at their disposal. It’s likely the decline of farms drove some of ...
April 7th, 2014
# Yahoo has some very stupid programmers
Good lord, why is this developer at Yahoo so slow on the uptake?
Thank you for your submission to Yahoo! Unfortunately we are unable to reproduce the bug due to insufficient information. Please provide us with a proof of concept or any other additional evidence required to reproduce the issue.
** The attacker would have to know the invitation id correct?
One has the sense that the person reporting the bug is shocked by the lack of concern shown by Yahoo:
d4d1a179c0f3 changed ...
April 7th, 2014
# The Clojure workflow still suffers and the REPL is not a cure all
Stuff like this happens to me:
Here is a scenario that you might recognize. You’ve done a pretty substantial refactor, including new dependencies in project.clj. You need to bounce the REPL. Knowing that this will take forever you immediately switch to Prismatic. 15 minutes later you look at your Emacs again where you notice that there is a syntax error so the REPL didn’t launch. You parse the impossibly long stack trace and fix the bug. cider-jack-in again and switch back ...
April 6th, 2014
# People can rationalize any amount of greed: tech industry collusion
Incredible and sickening:
In the meantime, one of the most interesting misconceptions I've heard about the 'Techtopus' conspiracy is that, while these secret deals to fix recruiting were bad (and illegal), they were also needed to protect innovation by keeping teams together while avoiding spiraling costs.
That was said to me, almost verbatim, over dinner by an industry insider, who quickly understood he’d said something wrong— “But of course, it’s illegal, so it’s wrong,” he corrected himself.
The view that whatever Jobs and Google ...
April 6th, 2014
# Behavior driven development is broken
This is very good:
If it takes you ten lines to communicate the idea of adding subpages, then you’ve wasted my time. I’m not alone in thinking this. BDD expert Elizabeth Keogh tells us:
“If your scenario starts with ‘When the user enters ‘Smurf’ into ‘Search’ text box…’ then that’s far too low-level. However, even “When the user adds ‘Smurf’ to his basket, then goes to the checkout, then pays for the goods” is also too low-level. You’re looking for something ...
April 6th, 2014
# You don’t have to run faster than the bear
You just need to run faster than the guy next to you.
Source
April 6th, 2014
# It’s like – I don’t know
There is a large gap between spoken and written language, in particular, spoken language has an abundance of half sentences that never finish. Most quotes in newspapers clean up the grammar that the person used when speaking. So I like this, as it goes against the grain:
“I don’t understand what they were thinking to begin with. I’m sorry, I don’t even like to take my kids in a car ride that would be too dangerous, and it’s like taking ...
April 6th, 2014
# Where established companies might see risks or threats, startups see opportunity
Interesting:
The convergence of digital trends along with the rise of China and globalization has upended the rules for almost every business in every corner of the globe. It’s worth noting that everything from the Internet, to electric cars, genomic sequencing, mobile apps, and social media — were pioneered by startups, not existing companies. Perhaps that’s because where established companies might see risks or threats, startups see opportunity. As the venture capital business has come roaring back in the last 5 ...
April 6th, 2014
# Apple is secretive
Interesting:
“A fairly heavy corporate controlling hand.”
Richard Francis worked at Intel and got to know Apple employees when the two companies partnered on projects.
“There is a fairly heavy corporate controlling hand governing a lot of what Apple locally can / can’t ‘do’ as a business. That made for a fair degree of tension with some senior staff coming in from other parts of the technology industry.”
“I dreaded Sunday nights.” Designer Jordan Price hated the long, rigid hours he was expected to work.
“I ...
April 5th, 2014
# Why has Linux not seen more forks?
Strong language, and strong opinions, as always, from Linus Torvalds. Now that I think about it, isn’t it amazing that Linux remains stable, even after all these years. I remember someone predicting, years ago, that Linux would split apart into a million useless forks, just like Unix did a long time ago. But that never happened. There are a lot of distros, but the kernel remains 100% under the control of Torvalds. That must mean people trust him. And ...
April 4th, 2014
# Using Gloss to change bytes into Clojure data structures
Interesting:
I started creating a very simple protocol to allow clients to connect via telnet. So it is:
PUT LSA |*
We have two main commands, PUT and LSA. For PUT, author is the guy speaking, via is who noted it, and the fact is the statement itself. And for LSA command, you can pass the author’s name and the system will return all the facts spoken by the author. * means you want to read all the facts.
Any other command ...
April 4th, 2014
# Hashmaps versus btrees
Interesting:
Unsurprisingly, a hash map performs far and above the rest. This is to be expected, mapping is exactly what hash maps are for and, in most situations, they should perform insertions and lookups with amortized O(1) time complexity. However, for situations where you may wish to preserve order, a tree may be a better choice. For that, you can see that a well-tuned btree was outperforming a red/black tree by more than 2 times.
As memory architectures begin to behave more ...
April 4th, 2014
# Zero is a function
Is zero a number or a function? Probably a function:
I also wish to re-state zero is a function. It separates positive and negative numbers, real and imaginary numbers. So if smart people wish to argue 0^0 = 1 or NOT then same said people should arguably disagree that 1^(1/2) =1 … Or NOT Because -1 x -1 = 1
Source
April 4th, 2014
# Shareholders do not legally own a corporation
Interesting:
Did Carr have a choice? Was he truly beholden to his shareholders’ desire to take the deal? If not, how can directors act against the wishes of shareholders to preserve value for other stakeholders—value that is often less easily measured than a buyout price? In the wake of the scandals that caused the recession, the management world has been immersed in trying to answer such questions.
Oddly, no previous management research has looked at what the legal literature says about the ...
April 4th, 2014
# Germany versus America
Written by a German who has been living in America for a long time:
The German system gives more power to the parties, since they decide which candidates to place on the list from which the parliamentarians will later be drawn. Parties finance the election campaigns; the candidates themselves do not need to raise substantial amounts of money. In return, there is a very high party loyalty in the German parliament. Parliamentarians vote their conscience only on rare, very important ...
April 4th, 2014
# The end of Steve Blank’s Epiphany
Everyone who wants to be an entrepreneur should read Steve Blank’s book, The Four Steps to the Epiphany. But be aware that the era when this book was relevant is coming to an end, due to the high speed of innovation in some sectors:
This possibility allows the world to turn on its head very quickly, for Instagram to create a $1B company in 18 months with 30M users and for Whatsapp to amass a rabidly engaged mobile user base ...
April 4th, 2014
# Low expectations for sitcoms
I agree with "low expectations". Sitcoms are slowly dying out: in 2000 there were 36 in prime time on the major networks, by 2013 there were only 16. They are being replaced by reality shows. Sitcoms were invented to fill time while being low-cost, but reality shows are even cheaper and can draw just as much audience. The rise of unscripted reality shows (when they are unscripted, which is rare) suggests that Keith Johnstone might have been correct when he suggested that ...
April 4th, 2014
# The war on science
Interesting and sad:
Doesn't the Entire Earth Have the Same Climate? Dana Rohrabacher (R-CA) demonstrated his inability to grasp the idea that the world's climate varies across different regions (which, in fairness, is a sensible line of questioning—if we were living on the forest moon of Endor):
Rohrabacher: Do you believe that tornadoes and hurricanes today are more ferocious and more frequent than they were in the past?
Holdren: There is no evidence relating to tornadoes. None at all. And I don't know any ...
April 4th, 2014
# Is Hacker News bad for the tech world?
The implication is that Hacker News is dominant because the competition is weak, and there is some truth to that, mostly because Hacker News does not sell ads, whereas all the major tech blogs sell ads, and the ads get in the way of my ability to read the story, and the ads might also influence the editorial policy of the blog, and yet Hacker News has its own editorial policy, influenced by its economic concerns, and less obvious than ...
April 4th, 2014
# Just Libraries – the composition of small apps
Clojure favors the composition of small apps. The Clojure community has shown a resistance to monolithic frameworks like Rails. Now Immutant is moving further down the small app road.
For its second major release, Immutant will simply be a collection of libraries, one for each of the commodity services currently available to applications using an Immutant 1.x container: web, scheduling, messaging, caching, and transactions. These services will be provided by the following underlying components: Undertow, Quartz, HornetQ, Infinispan, and Narayana, ...
April 4th, 2014
# Work should be fun
Interesting:
It's also quite scary when you consider that we're entering an era of technological unemployment. More and more jobs are being automated: they aren't going to provide money, social validation, or occupation for anyone any longer. We saw this first with agriculture and the internal combustion engine and artificial fertilizers, which reduced the rural workforce from around 90% of the population in the 17th-18th century to around 1% today in the developed world. We've seen it in steel, coal, and ...
April 4th, 2014
# The good and the bad of Facebook advertising
Much of this thread is devoted to the problems with Facebook advertising. This was one of the few positive stories:
Like others have said, it really depends on what you're selling and who you're targeting.
Our example (country specific mobile app for doctors), spent 100 € on AdWords, end result was literally 0 app installs, 0 sign-ups, 0 everything. Medical keywords are expensive, no chance of sending them directly to the App Store/Play Store (that we saw at least), and no other ...
April 4th, 2014
# Woman takes a grant, is then called a hypocrite for criticizing university
If true, then this is a worrisome attitude for someone who offers scholarships to college students. Shouldn't college kids be encouraged to make thoughtful dissents against the institutions they find themselves in?
Even more damningly, the administration seems to conflate "promoting civility" with "quashing dissent." Over email, the current Coastal student told me, "I've been reluctant to write in the school newspaper and [be] critical thereof because students have warned me they've been called in by administration after publishing op-ed ...
April 1st, 2014
# The impact on gender relations of unpaid labor in open source?
Interesting:
A note on meritocracy
It's difficult to go much further without mentioning the undercurrent belief in meritocracy that is particularly pervasive in open source communities, especially around participation in GitHub. Meritocracy is the belief that those with merit float to the top – that they should be given more opportunities and be paid higher. We prize the idea of meritocracy and weigh merit on contribution to OSS. Those who contribute the most, goes the general belief, have the most merit and are ...
April 1st, 2014
# Can open-floor plans be useful in an office?
I only hear the negatives, so this positive argument is interesting:
Suffers from the same flaw as most critiques of open plan: it focuses on individual productivity while failing to understand how it contributes to team productivity. Cornell did a study of open plan awhile back that you should all read. I posted it here: https://news.ycombinator.com/item?id=7507404
The misunderstanding here is that it's just about serendipitously "overhearing" other conversations. 1. Open plan makes it easier to ask questions. Those are "disruptions", yes, but what the ...
March 31st, 2014
# OKCupid takes a stand against Brendan Eich
Interesting:
Hello there, Mozilla Firefox user. Pardon this interruption of your OkCupid experience. Mozilla's new CEO, Brendan Eich, is an opponent of equal rights for gay couples. We would therefore prefer that our users not use Mozilla software to access OkCupid. Politics is normally not the business of a website, and we all know there's a lot more wrong with the world than misguided CEOs. So you might wonder why we're asserting ourselves today. This is why: we've devoted the last ...
March 31st, 2014
# Changing ideas for a startup
I built a custom CRM for the private club Parlor New York, and I just discovered an article about their original application. The application has changed a lot in the last 3 years. It is now focused on more detailed questions that try to figure out what your profession is. For me, this is one more data point about how ideas change once you try to make them real. The Parlor Club first sent out invites in 2009 ...
March 31st, 2014
# The worst web site ever: healthcare.gov
What an incredible disaster. I say this as a professional who develops websites.
Several states, such as Kentucky, built their own web sites, which have worked great. But the Federal site, even 6 months after launch, remains a disaster. This is the error message I got when I just now tried to sign up: Today’s the last day to sign up for Obamacare if you’re planning on using the healthcare.gov website. Unfortunately for people who tried to log onto the ... Read More Source March 31st, 2014 No Comments # A genuinely new thought about the history of human expansion I thought I knew every theory of possible human expansion, but this was entirely new for me: Dr. Guidon remains defiant about her findings. At her home on the grounds of a museum she founded to focus on the discoveries in Serra da Capivara, she said she believed that humans had reached these plateaus even earlier, around 100,000 years ago, and might have come not overland from Asia but by boat from Africa. Humans traveled by boat from Africa to South America ... Read More Source March 31st, 2014 No Comments # Java 8 has an Optional to deal with NullPointerException I don’t think I am impressed with this. The idea is borrowed from Scala. I have no love for Java or Scala, and I only follow Java because it impacts Clojure. If this enables Clojure to do something clever with NullPointerException, then maybe I will reevaluate this. Source March 31st, 2014 No Comments # Content Security Policy and Ruby and Clojure Although I love Clojure, I must admit that Ruby and Rails have an impressive depth of gems to help with every aspect of web development, including security. John P Hackworth recently wrote of the weakness of the Clojure eco-system, although his criticism is also an attack on the whole of idea of “small libraries that compose well” which amounts to an attack on the idea of “small pieces, loosely joined”. Clearly, good security can be achieved with small libraries that ... Read More Source March 31st, 2014 No Comments # Using Clojure to build a microservices CMS Many of us become cynical about technologies that promise big breakthroughs in productivity, so we become overly careful in our choices, but this is a good question for managers to always be asking: “Why would a large organisation with a mix of technologies and legacy systems want to muddy the waters with a completely new language?” If you want to make the conservative choice, and stick with what you already have, you should be able to articulate the reasons as clearly as ... Read More Source March 31st, 2014 In Business No Comments # Declining wages for men Interesting: For reference: Here are changes in hourly real wages of men, 1973-2012, at different percentiles of the wage distribution, calculated from Census data by the Economic Policy Institute. As you can see, wages have fallen for 60 percent of men. Source March 30th, 2014 In Business No Comments # Bitcoin has a great future in crime I agree with this entirely: The IRS now treats bitcoin as a property asset, very much the same way they treat stocks, and not all stocks are prone to speculation. So this is expected news for those who hold bitcoin as an investment. The major impact of this decision will be on consumer adoption. Now, every time I want to make a transaction, I need to keep track of my taxes. I know that some startups are already developing ways to ... 
Read More Source March 30th, 2014 No Comments # eat food for food in foods when food isnt ‘chocolate’ Of the many attempts to re-invent Javascript, the mostly puzzling to me are those that do not fix any problems, and then invents some more. I realize there is a strong desire to borrow ideas from Ruby and bring them to Javascript, but where one can’t do that cleanly, one shouldn’t do it at all. It’s a tool, that is all. Ambiguous code is a poorly thought out contrived example with a simple solution. To me, this: eat food for ... Read More Source March 30th, 2014 No Comments # Debtors prisons raise the risk of corruption in the USA The problem with putting people in jail for debts is that the courts themselves get corrupted by the confluence of money and power. This is a step down a dark road: In the spring of 2009, Burdette was doing well. For a year she had worked at the Piggly Wiggly in Childersburg, where nearly a quarter of the 5,200 citizens live in poverty. Burdette’s cashier job didn’t pay much, but it helped her get by. One May afternoon, she was ringing up ... Read More Source March 30th, 2014 No Comments # Chris Granger: more problems with object oriented programming At this point the evidence against object oriented programming seems overwhelming. I’ve linked to many articles here on this blog. Chris Granger offers another take on this issue: Programming is unobservable We can’t see how our programs execute. We can’t see how our changes affect our programs. And we can’t see how our programs are connected together. That basically means we can’t observe anything. The state of the art in observability is a stepwise debugger, which forces us to stop the world ... Read More Source March 30th, 2014 In Business No Comments # Half the board of Mozilla resigns because of the new CEO This is a curious story, for sure. If the half the Board hates the new CEO, then how did he become CEO? Brendan Eich is apparently a homophobe, having donated money to Proposition 8. I understand why the Board members would resign, but why were they unable to stop the appointment? Source March 30th, 2014 No Comments # NoSQL is a new of doing things, not a drop-in replacement for SQL I like this: Both NoSQL and Erlang had a burst of use and interest but because they were seen as silver bullets. Soon people realized you couldn’t simply translate your imperative code to Erlang and see improvements but instead regressions. Additionally, throwing your relational data at a NoSQL databases caused the same. I feel the NoSQL culture and programmers haven’t retracted to the core yet as much as Erlang. Though Erlang may see another surge of misuse and misinterpretation now with the ... Read More Source March 28th, 2014 No Comments # Sensitivity training: I have a knife and you have a gun I am curious what Frances Hocutt believes sensitivity training can achieve? Is it an appropriate tool for changing a culture? I wanted to lead a research team and solve pressing problems in medicine, energy, or the environment while treating my employees fairly. I thought about being able to hire people like my incredibly competent but PhD-less co-worker into management roles. I thought about instituting management and diversity training for PhD-level chemists. I thought about inviting some of the women ... Read More Source March 27th, 2014 No Comments # The advantages of Ruby on Unicorn This is an interesting way to look at things. 
Since so much of Ruby code is not thread safe, the fact that Unicorn spins up processes that don’t talk to each other is the most safe way to get concurrency in Ruby. That is a good point, though it is equivalent to saying “Since the code is broken, the the application server to do something weird to compensate for the brokenness.” Clearly, some people have good results with this, though ... Read More Source March 27th, 2014 No Comments # More negative views about Rails Rails lacks a story for concurrency. This is written by a Go programmer. Their criticisms are similar to mine, though for me the answer is “use Clojure” and so I end up doing JVM tuning, which is brought up as something scary to keep people away from jRuby. My impression is that the case against jRuby is weaker than the case against MRI Ruby (the C version). Rails is fundamentally – and catastrophically – slow. This well-known set of webapp ... Read More Source March 25th, 2014 No Comments # If Unix is good for Unicorn, why can’t Unicorn handle slow connections? I wrote about this recently, but I want to add to what I said. In what I now think of as a famous essay, Ryan Tomayko said “I like Unicorn because it’s Unix“. There must be something to this because the essay has been widely quoted, and I remember it, and I have re-read 3 times in the last 4 years. It had an impact. And yet, nothing in it convinced me to adopt that model. I rejected it and went ... Read More Source March 24th, 2014 No Comments # Photon could save PHP I have been extremely critical of PHP for the last 2 years. See “Why PHP is obsolete“. However, I just stumbled across Photon, which seems to address some of the core problems I see with PHP (especially the lack of any tools for dealing with concurrency): Why targeting Mongrel2? Mongrel2 is a very well designed, high performance server developed by pragmatic users who do not like bloated software. The use of ZeroMQ as the communication hub makes it extremely flexible while keeping ... Read More Source March 24th, 2014 No Comments # The tremendous innovation in Javascript There is no question that tremendous innovation is happening in the vast extended eco-system that touches upon Javascript. Sadly, I am not much interested in it. Maybe that is because I am not focused on the frontend right now. But also because I’m interested in solving these issues in other ways. All the same, Sam Ruby’s walkthrough of Angular.js is interesting: We have a model, view, and controller on the client, seemlessly interacting with the model, view, and controller on ... Read More Source March 24th, 2014 In Business No Comments # Sexism in Silicon Valley An interesting look at the extent to which the investors/angels in Silicon Valley help promote stereotypes that in turn promote a backwards view of gender relations: Silicon Valley fetishizes a particular type of engineer — young, male, awkward, unattached. This fetish is so normalized in startup culture that it often goes unseen for what it is: the specific, narrow fantasy of venture capitalists, deployed to focus their investment and attention. The disproportionate success of a very few individuals who fit this ... Read More Source March 24th, 2014 No Comments # Radical workarounds for the limits of MongoDB Whoa. This gives me interesting ideas: To reduce lock contention, we decided to run multiple MongoDB instances on one machine and create more granular databases in each instance. 
Basically data is stored in different instances based on its usage and in every MongoDB instance one database is created for each partner. Some people hate the fact that MongoDB forces you to do more in your own app, but I prefer designing with those constraints in mind. This has similarities to ... Read More Source March 24th, 2014 In Business No Comments # Julie Ann Horvath struggles with Github I suspect this story will be one of those stories that we will talk about for many years, sort of in the same way some of us still reference the treatment that Blaine Cook got in the media, and how unfair it was to blame him for the technical problems in Rails, at a site that was growing 1,000% a year. Some stories reveal a lot about the mood of the tech community in a given year. The Blaine Cook ... Read More Source March 24th, 2014 No Comments # Strange facts about HTML I feel like I’ve been away from HTML for awhile. 10 years ago I thought of myself as having some design skill, and I did a lot of front-end work, but in 2009, I moved to New York City and worked in some big companies with strict divisions of labor. I was a backender, and backenders are never frontenders. So I’ve been away from the frontend for awhile. It is slowly becoming foreign territory to me. I was surprised to ... Read More Source March 24th, 2014 No Comments # Why is the technology for blogs so difficult? Back in 2005, David Heinemeier Hansson offered a Rails tutorial showing how you could create a blog in 15 minutes : This was a world changing moment. Everyone I knew watched that video and talked about it. Here was a huge shift away from the overly complex frameworks of the past, and yet here was a framework that really worked, something we could use instead of dealing with the chaos of writing everything ourselves. You could build blog software in 15 ... Read More Source March 24th, 2014 No Comments # Once again, the shift to “smart services, dumb pipes” Yesterday I linked to the article over at Martin Fowler’s website where he wrote about the shift away from complex routing frameworks, towards a system of “smart services, dumb pipes”. Here is one more data point: At Digg our SOA consisted of many Python backend services communicating with each other as well as being used by our PHP frontend servers and Tornado API servers. They used Apache Thrift for defining the interfaces, clients and as the underlying protocol. …Coming off the Digg ... Read More Source March 24th, 2014 In Business No Comments # What is the future of news? Consider the ambitions of Vox: Will Vox be a bunch of articles like this one? Our commitment to explaining the news is a commitment to an outcome not a commitment to any particular article format. We do think, however, that the traditional article format is ripe for reinvention. In journalism, you’ll sometimes hear articles about hard topics referred to as “vegetables” or “the spinach” — the idea being that readers don’t like those subjects but they should be reading about them anyway. Our ... Read More Source March 24th, 2014 No Comments # Often businesses handle a degree of inconsistency in order to respond quickly to demand Perfect consistency is too rigid for most businesses, and it is painful when technical teams try to enforce this on a company, out of some ideological commitment to doing things the “correct” computer science way. 
“Eventual consistency” has been the standard that businesses have striven after since the Arab-Hindu cultures first invented dual-entry accounting, more than 500 years ago, and this is the standard that tech teams should enable for the businesses they serve. Choosing to manage inconsistencies in this ... Read More Source March 24th, 2014 No Comments # What kind of standards are useful to your team? I love this: Its a bit of a diochotomy that microservice teams tend to eschew the kind of rigid enforced standards laid down by enterprise architecture groups but will happily use and even evangelise the use of open standards such as HTTP, ATOM and other microformats. The key difference is how the standards are developed and how they are enforced. Standards managed by groups such as the IETF only become standards when there are several live implementations of them in the wider ... Read More Source March 24th, 2014 No Comments # The conceptual model of the world will differ between systems There is nothing wrong or bad about this, but rather, this is healthy: Decentralization of data management presents in a number of different ways. At the most abstract level, it means that the conceptual model of the world will differ between systems. This is a common issue when integrating across a large enterprise, the sales view of a customer will differ from the support view. Some things that are called customers in the sales view may not appear at all in ... Read More Source March 23rd, 2014 No Comments # A complexity that is frankly breathtaking How can anyone possibly think this is a good idea? To quote James Lewis and Martin Fowler: Certainly, many of the techniques in use in the microservice community have grown from the experiences of developers integrating services in large organisations. The Tolerant Reader pattern is an example of this. Efforts to use the web have contributed, using simple protocols is another approach derived from these experiences – a reaction away from central standards that have reached a complexity that is, frankly, ... Read More Source March 23rd, 2014 No Comments # The pushback against the monolithic framework I am pleased to think that others are as ready as I am to abandon the concept of the monolithic framework: Monolithic applications can be successful, but increasingly people are feeling frustrations with them – especially as more applications are being deployed to the cloud . Change cycles are tied together – a change made to a small part of the application, requires the entire monolith to be rebuilt and deployed. Over time it’s often hard to keep a good modular ... Read More Source March 23rd, 2014 No Comments # Leave the error checking in your code I leave the asserts in my Clojure code. I see a similarity of spirit expressed in the sentiment of James Hague (I especially like the use of the word “reckless”): That error checking is great during development was not controversial, but opinions after that were divided. One side believed it wasteful to keep all that byte and cycle eating around when you knew it wasn’t needed. The other group claimed you could never guarantee an absence of bugs, and wouldn’t ... Read More Source March 23rd, 2014 No Comments # A defense of MongoDB I posted this on Hacker News and now re-post it here. MongoDB offers the greatest benefit to those who have an evolving concept of their schema, and that tends to be startups, though I have worked in large firms that entirely re-invented their schemas. 
I worry that I would seem tedious if I listed the places that I have worked, and yet, on Hacker News, when I speak in abstract terms, I tend to get downvoted, so I will name a ... Read More Source March 23rd, 2014 In Business No Comments # Millions at stake but programming mistakes everywhere How long does this go on? Tens of millions of dollars get traded in virtual currencies, yet the exchanges seem to be slapped together by amateurs, with none of the caution that a bank would use when building its exchange software. Vircurex had a computer programming bug that caused the loss of a huge amount of virtual currency, so they are now insolvent, and they are trying to offer their own solvency-resque, without going to the courts: Frozen Funds In preparation of ... Read More Source March 21st, 2014 No Comments # Announcing Humorus-MG I just released Humorus-MG. This is an admin CMS for managing a collection in MongoDB. The app is written in Clojure. The README contains an unintentional mini-manifesto of what I believe about creating web software. This part in particular comes close to summarizing the kind of software that I would like to create this year: —————- Things that will never change about this app 1.) This app will never have more than 2,000 lines of Clojure code. None of my apps will ... Read More Source March 21st, 2014 In Business No Comments # Making the VC process more tolerant of women Sam Altman seems serious about making it easier for female founders to find resources through the VC and incubator systems: I realize it’s always a bit ridiculous for a guy to talk about what it’s like for female founders, but I’m interested in doing whatever I can to help, because the venture business has definitely been unfair to women. The women on our team also care deeply about this issue, and in fact can probably do more than I can ... Read More Source March 20th, 2014 No Comments # To what extent can artists be political? An interesting bit, suggesting an unresolveable divide between art and politics: What gets in the way of artists’ making substantive political contributions? The collection’s title essay proposes that artists’ class position opposes their interests to those of typical protesters, even when both are concerned with economic survival. Because artists, unlike wage laborers, have a direct stake in what they produce and face no workplace discipline other than what they impose on themselves, their political attitudes are structurally different from those ... Read More Source March 20th, 2014 No Comments # Why I use MongoDB I posted this on HackerNews. I am in agreement with what Jun Xu wrote. I think this is true: “For a technology startup with limited resources, broadly adopting a new DBMS means betting its own future on the DBMS. ” It has become popular to attack MongoDB, but I think it is difficult to get an objective view of what people are doing with it. If you want to read a really scathing attack on MongoDB, consider this post: http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/ But I recall reading that ... Read More Source March 11th, 2014 In Business No Comments # The gentrification of San Francisco Interesting: City officials estimate that there are over 40,000 illegal in-law units attached to San Francisco properties, and they account for about ten percent of the city’s housing stock. 
Most of these units were constructed during World War II, when workers flooded to the Bay Area to take wartime industrial jobs; today, property owners often choose to rent them to lower-income tenants under the table. Historically, San Francisco has had a “don’t ask, don’t tell” policy regarding illegal in-law units; ... Read More Source March 11th, 2014 In Business No Comments # Assumptions to avoid when starting your web startup I have seen $1 million wasted in what turned out to be a year-long brainstorming session. I have seen self-doubts rationalized as doubts about a product, and honest doubts about a product transformed into tests of personal self-worth. I have seen smart people lose their bearings when they face the test of the market. Startups are hard, and the public’s reaction to your product is always personal.
Pasha Galbreath and I continue to give talks to first-time entrepreneurs ...
March 9th, 2014
# Criticker gives away all of its users’ passwords
Criticker gives away all of its users’ passwords in plain text. Of course the site is written in PHP. While you can make mistakes in any language, this kind of laziness is what you expect in PHP.
Every request contains the secret key in the url. So all I need to do is capture a single request sent by the app and I have the key. Easy.
My theory was that I’d get the list of users that the app had ...
March 9th, 2014
March 8th, 2014
# Working with images using Clojure
I am intrigued by Mike Anderson’s “imagez” library:
Source
March 8th, 2014
# How to monitor Clojure apps?
Interesting:
Powerful stream primitives
(where (or (service #"^api")
           (service #"^app"))
  (where (tagged "exception")
    (rollup 5 3600
      (email "dev@foo.com"))
    (else
      (changed-state
        (email "ops@foo.com")))))
Riemann streams are just functions which accept an event. Events are just structs with some common fields like :host and :service You can use dozens of ...
March 8th, 2014
# How to bankrupt a successful software company
Interesting:
Quark 5 and OS9 was what we were used to, but it was pretty miserable. The things that stick out:
Restarting your computer and losing your unsaved work over software freezes was a regular part of your day. Like, many times a day. We had all these crazy workarounds to achieve certain effects like drop shadows or change-and-repeat. It was all pretty rudimentary and hard to standardize across many designers in a department. Shapes were pretty much a non-issue, so we had to ...
March 7th, 2014
# The downside of Unit Testing
Interesting:
I’m back in Java-land these days, which is culturally very pro-unit testing. After getting exposed to it again for a few months again I’ve come to side with the author here. I’ve never really been comfortable with the amount of time certain people dedicate to unit testing, especially the TDD crowd, but in my hiatus something has arisen in popularity which has made it all the worse: mockito. Prior to mockito, unit testing was (more or less) limited to testing that ...
March 4th, 2014
# Darren Holloway walks through the philosophy of Ring/Clojure
Darren Holloway has written a post that should be added to the wiki on Github where Ring is hosted. He covers all the stuff that had me the most confused when I started doing web development with Clojure. He offers easy examples in pseudo-code to get the basic ideas across. I wish every project on Github had an introductory tutorial written in this style.
An excerpt:
Ring Conceptually
Technically, Ring isn’t a framework or an application, but rather a specification ...
March 4th, 2014
# What it is like to think you are talented when you are ignorant
Despite the “worst practices” approach, the thing worked.
I like this story very much. My own story is a bit different, circa 2000-2005 I built a CMS out of PHP, and I did eventually find good ways to structure it, and I remain an opponent of “object oriented programming”. But other than that, a lot of this story overlaps my own.
Despite what I now refer to as my “worst practices” approach, the thing worked. Every bad tutorial, every anti-PHP ...
March 3rd, 2014
# Emotional intelligence and success with Bitcoin
Or rather, maladaptive ways to deal with stress:
After Mt. Gox was hacked for the first time in summer of 2011, a friend asked Powell to help out, and soon, the San Francisco entrepreneur found himself on a plane to Tokyo. After landing, he rushed to Shibuya station, where he was met by his friend, Roger Ver, one of the world’s biggest bitcoin supporters who just happened to live across the street from Mt. Gox. Without bothering to drop off ...
March 3rd, 2014
# The difference between database indexes and database histograms
Several things occurred to me when I read this, some of them off-topic, including my use of MongoDB, and how I have been unthinkingly re-creating histograms without even giving them that name. I do not regard that as a problem with MongoDB: it gives flexibility by doing very little itself, and everyone using it is hopefully aware of the need to re-create database functionality within one’s own app.
Then I asked myself the question: how does Oracle estimate that there are ...
March 3rd, 2014
# I still don’t get PAAS
This is the problem for me:
In my ideal world, deploying my apps wouldn’t require any platform-specific code, or if it did, that code would be portable between platforms.
If I have to be aware of my servers, at all, then I’m still doing sysadmin, and if I have to do sysadmin, I want all the tools of sysadmin. I don’t want to do sysadmin on a crippled account that limits my options. Maybe someday there will be a real PAAS such ...
March 2nd, 2014
# The many problems with Bitcoin
Interesting:
I’m actually shocked that Mt. Gox did not lose money to a database screwup. There are so many flawed NoSQL databases out there that, if you adopt the technologies advertised as “hip” on techcrunch, you’ll most likely end up with a broken exchange (more on this in subsequent blog posts, because there are many funny examples that deserve their own discussion). It is quite easy for well-meaning developers to build an exchange on a database that loses transactions, or to ...
March 1st, 2014
# The new Formal blogging
Like Chris Bertram, I have a certain nostalgia for the world of blogging that existed during the years, roughly, 2000-2008. I am sad that the conversational aspects have moved to Twitter, and now the blogs are mostly op-eds, rather than conversations. I am also surprised to see it now being treated as something to be done formally.
To be fair, many of the new initiatives, such as The Conversation, Politics in Spires, and the LSE Blogs are great, content-wise. But they ...
March 1st, 2014
# Russia mobilizes troops to occupy parts of the Ukraine
Russia mobilizes troops to occupy parts of the Ukraine.
I am speculating. What could Putin really hope to accomplish? And at what expense?
I am looking at Wikipedia:
http://en.wikipedia.org/wiki/Ukraine
Ethnic groups (2001)
77.8% Ukrainians
17.3% Russians
4.9% others / unspecified
The Russians are concentrated in the eastern-most provinces, and also in the Crimea.
Russia has 145 million people, the Ukraine has 46 million people, so in terms of the ratio of people, Russia invading the Ukraine would be a bit like the USA invading Mexico. Russia also has a ...
March 1st, 2014
# What is correct HTML syntax?
Matias Meno of Colorglare asks the question “TO CLOSE OR NOT TO CLOSE?”
This is from Ian Hickson in 2006, regarding the emergence of HTML5:
Regarding your original suggestion: based on the arguments presented by the various people taking part in this discussion, I’ve now updated the specification to allow “/” characters at the end of void elements.
To which Sam Ruby responded:
This is big. PHP’s nl2br function is now HTML5 compliant. WordPress won’t have to completely convert to HTML4 before people who ...
February 25th, 2014
# Greedy bankers, the lazy poor: moralizing wealth
Interesting:
Sadly, Mr Rooney did not respond in the manner of one of his celebrated predecessors. But he should have, because the chant is wrong. Mr Rooney is not getting £300,000 a week because he is unusually greedy: in the improbable event of being offered such money, who among us would turn it down? He is getting it because he is unusually powerful – a power which is not entirely due merely to his exceptional skill.
Palace fans, then, are committing ...
February 25th, 2014
# Corporate welfare in the USA, from state and local numbers
$110 billion just from state and local governments. And of course the Federal government adds in a lot more. State and local governments have awarded at least $110 billion in taxpayer subsidies to business, with 3 of every 4 dollars going to fewer than 1,000 big corporations, the most thorough analysis to date of corporate welfare revealed today.
Boeing ranks first, with 137 subsidies totaling $13.2 billion, followed by Alcoa at $5.6 billion, Intel at $3.9 billion, General Motors at $3.5 ...
February 25th, 2014
# Assortative Mating plays no role in current income inequality
Rarely does one see such blatant lying. Here a group of economists post a graph that very clearly contradicts everything they say, yet they go ahead and say it anyway.
The authors conclude that “rising assortative mating together with increasing labour-force participation by married women [emphasis added by me] are important in order to account for the determinants of growth in household income inequality in the US.” So, right out of the gate, a key influence not trumpeted in the headline ...
February 11th, 2014
# Who should be in charge when policy actually matters?
Take the title of this post and change it so it is about technology:
Who should be in charge when technology actually matters?
I am intrigued by a Paul Krugman post in which he talks about policy mattering.
Change the word “policy” to “technology” and this gets at my complaint about many of the tech disasters I’ve seen in recent years, from the companies I worked for, to stuff I read about such as the roll out of the website for Obamacare.
So ...
February 9th, 2014
# The culture of girls and computers
Interesting:
It Really Is about Girls (and Boys)
Twelve-year-old girls today don’t generally get to have the experiences that I did. Parents are warned to keep kids off the computer lest they get lured away by child molesters or worse—become fat! That goes doubly for girls, who then grow up to be liberal arts majors. Then, in their late teens or early twenties, someone who feels the gender skew in technology communities is a problem drags them to a LUG meeting ...
February 9th, 2014
# Bloated software promises a stability which might be a liability
Interesting:
IT organizations are facing accelerating pressure to support companies’ growing need for business agility, innovation, customer responsiveness, and adaptability. This pressure doesn’t stop with so-called systems of engagement. It goes all the way back to systems of record. In fact, the distinction between the two is starting to erode. Enterprises are responding to this pressure by upgrading application architectures within and around the system-of-record tier. They are starting to view the “stability” of their legacy applications as a liability rather ...
February 9th, 2014
# Drupal is bloated software
Stuff like Drupal offers ease of use for standard operations, and yet, when I work with clients, I find they have very few “standard” operations. Everything needs to be customized, and that is where Drupal becomes difficult:
Drupal, much like many other CMSs, follows a development methodology that I call reverse development. It is the simple idea that the most fundamental moving parts of the technology have been already built for you, or are modifiable using a trivial UI, and ...
February 9th, 2014
# What is a Spruce Goose software project?
I have worked on software that was just like this:
https://collegephysicsanswers.com/openstax-solutions/gold-sold-troy-ounce-31103-g-what-volume-1-troy-ounce-pure-gold
Question
Gold is sold by the troy ounce (31.103 g). What is the volume of 1 troy ounce of pure gold?
$1.610 \textrm{ cm}^3$
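As a quick check (this working is added here; it assumes the standard density of gold, roughly $19.32 \textrm{ g/cm}^3$, which the question itself does not state):

$V = \dfrac{m}{\rho} = \dfrac{31.103 \textrm{ g}}{19.32 \textrm{ g/cm}^3} \approx 1.610 \textrm{ cm}^3$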
https://www.ti.inf.ethz.ch/ew/mise/mittagssem.html?action=show&what=abstract&id=688a1dfb6ca4a628664e57bd8f8b4407962378b1
## Theory of Combinatorial Algorithms
Prof. Emo Welzl and Prof. Bernd Gärtner
# Mittagsseminar (in cooperation with M. Ghaffari, A. Steger and B. Sudakov)
Mittagsseminar Talk Information
Date and Time: Tuesday, September 02, 2003, 12:15 pm
Speaker: Dirk Nowotka (Turku University, Finland)
## Periodicity and Unbordered Words
A relationship between the length of a word and the maximum length of its unbordered factors will be presented in this talk.
Consider a finite word w of length n. We call a word bordered if it has a proper prefix which is also a suffix of that word. Let f(w) denote the maximum length of all unbordered factors of w, and let p(w) denote the (shortest) period of w. Clearly, f(w) is less than or equal to p(w).
We establish that f(w) = p(w) if w has an unbordered prefix of length f(w) and n > 2 f(w) - 2. This bound is tight and solves the stronger version of a 21-year-old conjecture by Duval. It follows from this result that, in general, n > 3 f(w) - 3 implies f(w) = p(w), which gives an improved bound for a question asked by Ehrenfeucht and Silberger in 1979.
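The following is a small brute-force sketch (added for illustration, not part of the abstract) that checks the definitions of f(w) and p(w) on a short word; the test word "abaab" is an arbitrary choice.

```r
# Does w have a border, i.e. a proper prefix that is also a suffix?
is_bordered <- function(w) {
  n <- nchar(w)
  if (n < 2) return(FALSE)
  any(sapply(1:(n - 1), function(k) substr(w, 1, k) == substr(w, n - k + 1, n)))
}

# f(w): maximum length over all unbordered factors of w
f_max_unbordered <- function(w) {
  n <- nchar(w)
  facs <- unlist(lapply(1:n, function(i) sapply(i:n, function(j) substr(w, i, j))))
  max(nchar(facs[!sapply(facs, is_bordered)]))
}

# p(w): shortest period, i.e. the smallest p such that the prefix and the
# suffix of length n - p coincide
p_period <- function(w) {
  n <- nchar(w)
  for (p in 1:n) {
    if (p == n || substr(w, 1, n - p) == substr(w, p + 1, n)) return(p)
  }
}

w <- "abaab"
c(f = f_max_unbordered(w), p = p_period(w))  # both are 3 here; f(w) <= p(w) always holds
```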
https://encyclopediaofmath.org/wiki/Zonal_harmonics
Zonal harmonics
zonal harmonic polynomials
Zonal harmonic polynomials are spherical harmonic polynomials (cf. also Spherical harmonics) that assume constant values on circles centred on an axis of symmetry. They characterize single-valued harmonic functions on simply-connected domains with rotational symmetry.
To be specific, one introduces the spherical coordinates $( r , \theta , \varphi )$ as $x _ { 1 } = r \operatorname { sin } \theta \operatorname { cos } \varphi$, $x _ { 2 } = r \operatorname { sin } \theta \operatorname { sin } \varphi$, $x _ { 3 } = r \operatorname { cos } \theta$, where $( x _ { 1 } , x _ { 2 } , x _ { 3 } ) \in \mathbf{R} ^ { 3 }$. The zonal harmonics $H _ { n }$ are the polynomial solutions of the Laplace equation
\begin{equation*} \left[ \partial _ { r r } + \frac { 2 } { r } \partial _ { r } + \frac { 1 } { r ^ { 2 } } \partial _ { \theta \theta } + \frac { \operatorname { ctan } \theta } { r ^ { 2 } } \partial _ { \theta } + \frac { 1 } { r ^ { 2 } \operatorname { sin } ^ { 2 } \theta } \partial _ { \varphi \varphi } \right] H = 0 \end{equation*}
that are axially symmetric (i.e. independent of the angle $\varphi$). They can be expressed in terms of Legendre polynomials $P_n$ of degree $n$, as $H _ { n } ( r , \theta ) = r ^ { n } P _ { n } ( \operatorname { cos } \theta )$ for $n = 0,1 , \dots$, and form a complete orthogonal set of functions in $L ^ { 2 } [ D ]$, where $D$: $r \leq r_0$. The $H _ { n }$ vanish on cones that divide a sphere centred at the origin into $n$ zones, hence the name zonal harmonics. The $H _ { n }$ are sometimes referred to as solid zonal harmonics and the $P_n$ as surface zonal harmonics.
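As a concrete illustration (added here, not part of the original entry), the first few zonal harmonics are

\begin{equation*} H _ { 0 } = 1 , \quad H _ { 1 } = r \operatorname { cos } \theta = x _ { 3 } , \quad H _ { 2 } = \frac { r ^ { 2 } } { 2 } ( 3 \operatorname { cos } ^ { 2 } \theta - 1 ) = \frac { 1 } { 2 } ( 3 x _ { 3 } ^ { 2 } - r ^ { 2 } ) , \end{equation*}

each of which is a harmonic polynomial in $( x _ { 1 } , x _ { 2 } , x _ { 3 } )$ that is independent of $\varphi$.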
Applications.
Two types of applications arise in classical potential theory (see [a4], [a6], [a7]).
In the first, one determines the potential in a sphere from its boundary values $H ( r _ { 0 } , \theta )$. By specifying appropriate regularity conditions, the orthogonality of the Legendre polynomials is used to expand $H ( r _ { 0 } , \theta )$ as the Fourier–Legendre series $\sum _ { n = 0 } ^ { \infty } a _ { n } r _ { 0 } ^ { n } P _ { n } ( \operatorname { cos } \theta )$. The potential in the sphere is then recovered as $H ( r , \theta ) = \sum _ { n = 0 } ^ { \infty } a _ { n } r ^ { n } P _ { n } ( \operatorname { cos } \theta )$. The exterior boundary value problem is formulated by means of the Kelvin transformation $H ( r , \theta ) \rightarrow ( 1 / r ) H ( 1 / r ^ { 2 } , \theta )$. The potential between two concentric spheres is determined by combining solutions of the interior and the exterior problems.
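For reference (this formula is added here and is not part of the original article), the standard orthogonality relation $\int _ { - 1 } ^ { 1 } P _ { n } ( x ) ^ { 2 } d x = 2 / ( 2 n + 1 )$ gives the coefficients explicitly as

\begin{equation*} a _ { n } = \frac { 2 n + 1 } { 2 r _ { 0 } ^ { n } } \int _ { 0 } ^ { \pi } H ( r _ { 0 } , \theta ) P _ { n } ( \operatorname { cos } \theta ) \operatorname { sin } \theta d \theta . \end{equation*}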
In the second, one determines the potential at points in space from its values on a segment of the symmetry axis. The solution relies on the fact that along this axis the zonal harmonics $H _ { n } ( r , 0 ) = r ^ { n }$, $n = 0,1 , \dots$. Thus, if $H ( r , 0 ) = \sum _ { n = 0 } ^ { \infty } a _ { n } H _ { n } ( r , 0 )$, then $H ( r , \theta ) = \sum _ { n = 0 } ^ { \infty } a _ { n } H _ { n } ( r , \theta )$ for $r < r_{0}$, where $r_0$ is the radius of convergence of the Taylor series.
Relation with analytic functions.
There are many connections between the properties of the potentials $H$ and those of analytic functions $f$ of a complex variable (cf. also Analytic function; Harmonic function). One such connection, related to the previous example, concerns singularities and uses the generating function for zonal harmonics to construct reciprocal integral transforms connecting $H$ with $f$. The following fact is immediate (see [a3], [a8]). Let $\{ a _ { n } \} _ { n = 0 } ^ { \infty }$ be a sequence of real constants for which $\operatorname {lim} \operatorname {sup}_{n \rightarrow \infty} | a _ { n } | ^ { 1 / n } = 1$. Consider the associated harmonic and analytic functions $H ( r , \theta ) = \sum _ { n = 0 } ^ { \infty } a _ { n } H _ { n } ( r , \theta )$ and $f ( z ) = \sum _ { n = 0 } ^ { \infty } a _ { n } z ^ { n }$, which are regular for $r = | z | < 1$. Then the boundary point $( 1 , \theta _ { 0 } )$ is a singularity of $H ( r , \theta )$ if and only if the boundary point $z = \operatorname { exp } ( i \theta _ { 0 } )$ is a singularity of $f ( z )$. Thus, the singularities of solutions of a singular partial differential equation are characterized in terms of those of associated analytic functions and vice versa.
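The generating function in question is presumably the classical Legendre generating function (quoted here for convenience; it is not displayed in the original article):

\begin{equation*} \frac { 1 } { \sqrt { 1 - 2 z \operatorname { cos } \theta + z ^ { 2 } } } = \sum _ { n = 0 } ^ { \infty } z ^ { n } P _ { n } ( \operatorname { cos } \theta ) , \quad | z | < 1 . \end{equation*}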
From the 1950s onwards, an extensive literature has developed using integral transform methods to study solutions of large classes of multi-variable partial differential equations. The analysis is based on the theory of analytic and harmonic functions in several variables. Zonal harmonics play an important role in axially symmetric problems in $\mathbf{R} ^ { 3 }$ (see [a1], [a2], [a3], [a5]).
References
[a1] H. Begher, R.P. Gilbert, "Transmutations, transformations and kernel functions", Monographs and Surveys in Pure and Applied Math., 58–59, Pitman (1992)
[a2] S. Bergman, "Integral operators in the theory of linear partial differential equations", Springer (1963)
[a3] R.P. Gilbert, "Function theoretic methods in partial differential equations", Math. in Sci. and Engin., 54, Acad. Press (1969)
[a4] O.D. Kellogg, "Foundations of potential theory", F. Ungar (1929)
[a5] M. Kracht, E. Kreyszig, "Methods of complex analysis in partial differential equations with applications", Wiley/Interscience (1988)
[a6] W.D. MacMillan, "The theory of the potential", Dover (1958)
[a7] P.M. Morse, H. Feshbach, "Methods of theoretical physics", 1–2, McGraw-Hill (1953)
[a8] G. Szegö, "On the singularities of real zonal harmonic series", J. Rat. Mech. Anal., 3 (1954) pp. 561–564
This article was adapted from an original article by Peter A. McCoy (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
https://astronomy.stackexchange.com/questions/33424/how-to-calculate-phase-angle-of-a-satellite/36559
# How to calculate phase angle of a satellite?
I'm making a program for predicting satellite passes. I'm trying to find out if the satellite is illuminated by the Sun and not in Earth's shadow. I need to know its phase angle: the angle between the observer on Earth, the satellite and the Sun.
Please explain it in simple terms if possible.
(Inserted by reviewer, extension to question, posted originally as answer)
I have TLE data for the satellite (it contains the right ascension). From that I got the ECI position, azimuth, elevation, and altitude. For the observer I have latitude and longitude, and for the Sun I have elevation and azimuth.
• What do you have of available data? The right ascension and declination of the satellite? – Tosic Sep 18 '19 at 16:45
• If you're using CSPICE, naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/phaseq_c.html may help -- if not, you could try looking at the source. – user21 Sep 18 '19 at 17:26
Draw the triangle Sun-Earth-Satellite; we will first find the angle Sun-Earth-Sat.
The angle between the Sun, the observer, and the satellite will be the angular separation between the Sun and the satellite on the observer's sphere. Here is how you could calculate that:
1. You have horizontal coordinates for both bodies so the easiest way to go from there would be to look at the spherical triangle Zenith-Sun-Sat, the angle at zenith will be the difference between azimuths, and the Zenith-Sun and Zenith-Sat lengths will be $$90^{\circ}-h_{Sun}$$ and $$90^{\circ}-h_{Sat}$$, respectively.
2. Now using the cosine formula for spherical triangles, one may obtain the following formula:
$$\cos^{−1}(\sin(h_{Sun})\sin(h_{Sat})+\cos(h_{Sun})\cos(h_{Sat})\cos(A_{Sun}−A_{Sat})).$$
3. Now use the sine theorem to find the angle E-Sun-Sat (the sine of this angle divided by the sine of the one we calculated will be equal to the ratio of distances from the Earth to the satellite, and from the satellite to the Sun, respectively), and to find the third angle, subtract the two from $$180^{\circ}$$. (A short code sketch of steps 1–2 follows after the note below.)
Note: if you do not know the distance from the Sun to the satellite, I am certain you may use the distance from the Earth to the Sun as the error is probably negligible.
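A small R sketch of steps 1–2 above (my own illustration, not from the original answer); h and A denote altitude and azimuth in degrees, and the example values are made up:

```r
deg2rad <- function(d) d * pi / 180

# Angular separation between the Sun and the satellite as seen by the observer,
# computed from horizontal coordinates with the spherical cosine rule.
angular_separation <- function(h_sun, A_sun, h_sat, A_sat) {
  h1 <- deg2rad(h_sun); h2 <- deg2rad(h_sat); dA <- deg2rad(A_sun - A_sat)
  acos(sin(h1) * sin(h2) + cos(h1) * cos(h2) * cos(dA)) * 180 / pi
}

# Example: Sun 10 degrees below the western horizon, satellite 40 degrees up in the east.
angular_separation(h_sun = -10, A_sun = 270, h_sat = 40, A_sat = 90)  # about 150 degrees
```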
• For most satellites, Earth-Sun-Sat is 1 arcminute or less and can be neglected. – Mike G Oct 19 '19 at 4:27
• You can estimate sinx = x, but I wouldn't say you can completely disregard it, that would change the picture altogether ... – Tosic Oct 19 '19 at 6:06
• Assuming Earth-Sun-Sat = 0 adds an arcminute of error to Sun-Sat-Earth. Whether you go to the additional trouble depends on the precision you require. – Mike G Oct 19 '19 at 6:09
• You may be right, it was just way more convenient for me to explain it this way, with all the additional troubles :-) – Tosic Oct 20 '19 at 6:52
As long as you have the required ephemerides, which I assume you do given the problem at hand, doesn't it suffice to compute the dot product between the satellite-to-Sun and satellite-to-observer vectors, and take the arccosine of that (normalized) dot product to get the phase angle? In my view, this would be the most straightforward way.
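A sketch of that dot-product approach (added illustration; the position vectors here are hypothetical and would normally come from your ephemerides, all expressed in the same Cartesian frame):

```r
# Phase angle at the satellite: the angle between the satellite->Sun and
# satellite->observer vectors, obtained from the normalized dot product.
phase_angle <- function(r_sat, r_sun, r_obs) {
  u <- r_sun - r_sat
  v <- r_obs - r_sat
  cosang <- sum(u * v) / (sqrt(sum(u * u)) * sqrt(sum(v * v)))
  acos(pmin(pmax(cosang, -1), 1)) * 180 / pi  # clamp to [-1, 1] for numerical safety
}

# Toy example with made-up ECI-like positions in km
r_sat <- c(7000, 0, 0)
r_obs <- c(6371, 0, 0)
r_sun <- c(1.496e8, 0, 1e7)
phase_angle(r_sat, r_sun = r_sun, r_obs = r_obs)
```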
http://new.aquariofilia.net/ttf3fm/archive.php?id=ordered-probit-in-r-4b91f8
A widely used approach to estimating models of this type is an ordered response model, which almost allows employs the probit link function. The variable rank takes on the The default logistic case is proportional oddslogistic regression, after which the function is named. polr: Ordered Logistic or Probit Regression In MASS: Support Functions and Datasets for Venables and Ripley's MASS. significantly better than a model with just an intercept (i.e. At one point, however, I calculate marginal effects that seem to be unrealistically small. First, we use the setx() function to set values for the independent variables in the model to specific values in order to create profiles of interest. b Instead one relies on maximum likelihood estimation (MLE). Below we discuss how to use summaries of the deviance statistic to asses model fit. The default logistic case is proportional oddslogistic regression, after which the function is named. The test statistic is distributed Applied Logistic Regression (Second Edition). For our data analysis below, we are going to expand on Example 2 about getting The second argument tells the sim() function which profile to use for the values of the independent variables. The default logistic case is proportional odds logistic regression, ... (corresponding to a Cauchy latent variable and only available in R >= 2.1.0). Later we show an example of how you can use these values to help assess model fit. 11.3 Estimation and Inference in the Logit and Probit Models. Separation or quasi-separation (also called perfect prediction), a independent variables. 2. optionally, a data frame in which to look for variables with which to predict. Say you want to represent the status of five projects. In R, there is a special data type for ordinal data. This example uses a subset of data from the 2016 General Social Survey (http://gss.norc.org/). Arguments object. Empty cells or small cells: You should check for empty or small significantly better than an empty model. the values we want for the independent variables. The code below estimates a probit regression model using the glm (generalized linear model) function. These are stored as new variable in the data frame with the original data, so we can Predicted probabilities in a proportional odds model with categorical predictor. Responses for the dependent variable (WRKSTAT) are recorded on a 3-level scale that follows an order from not working to working full-time, making this example appropriate for ordered probit. The output produced by Hence, only two formulas (for $$\mu_1$$ and $$\mu_2$$) are required. order in which the coefficients are given in the table of coefficients is the describe conditional probabilities. The ordered probit and logit models have a dependent variable that are ordered categories. The terms “Parallel Lines Assumption” and Parallel Regressions Assumption” apply equally well for both the ordered logit and ordered probit models. Example 1: Suppose that we are interested in the factors that influencewhether a political candidate wins an election. a fitted object of class probit.. newdata. 72383, posted 06 Jul 2016 06:59 UTC. The generalization of probit analysis to the case of multiple responses. You will find links to the example dataset, and you are encouraged to replicate this example. A multivariate method for 1957. levels of rank. change in deviance distributed as chi square on the change in degrees Use the ordered() function. Specifying a probit model is similar to logistic regression, i.e. 
Ordered probit regression extends the probit model for binary outcomes to the case of an ordinal dependent variable, that is, a categorical variable whose values have a natural ordering (for example "hurt the economy", "make no difference", "help the economy", or an employment status of not working, working part-time, working full-time). Like binary probit, it is a generalized linear model estimated by maximum likelihood: a linear combination of the independent variables is linked to the outcome probabilities through the standard normal cumulative density function, rather than the logistic function used by (ordered) logit, so the exogenous variables do not determine the probability in a linear way. The choice between probit and logit rests largely on the assumed distribution of the latent errors (standard normal versus standard logistic). Ordered probit models are typically used when the dependent variable has three to seven ordered categories; with only two categories the model reduces to simple probit, and with many more categories researchers often turn to ordinary least squares. The proportional odds assumption (also called the parallel lines or parallel regressions assumption) is usually discussed for the ordered logit; the ordered probit model does not require it.

Ordered data in R. R has a special data type for ordinal variables, the ordered factor, created either with ordered() or with factor(..., ordered = TRUE). By default factor levels are sorted alphabetically, so the ordering normally has to be supplied through the levels argument. For example, for five projects with status status <- c("Lo", "Hi", "Med", "Med", "Hi"), an ordered factor with levels = c("Lo", "Med", "Hi") encodes low < medium < high. Ordered probit and ordered logit models can then be fitted with polr() in the MASS package (short for proportional odds logistic regression; the probit link is requested with method = "probit"), or with the zelig(), setx(), and sim() functions from the Zelig package, which must be installed and loaded first and which also provide simulation-based predicted probabilities. (For binary outcomes the same logic applies with the glm() function and a probit link; the standard illustration is admission to graduate school as a function of GRE score, GPA, and the prestige rank of the undergraduate institution, where institutions with a rank of 1 have the highest prestige and those with a rank of 4 the lowest.)

Example: employment status of women. The example asks whether having children influences the working status of women, using a subset of the 2016 General Social Survey read in with data = read.csv('dataset-gss-2016-subset1.csv'). The dependent variable WRKSTAT has three ordered categories, set with levels = c('Not working', 'Working parttime', 'Working fulltime'); a simple frequency distribution, table(data$WRKSTAT), is a good first step and shows 626 respondents working full-time, 231 part-time, and 332 not working. The independent variables are the number of children (CHILDS), age (AGE), and highest degree earned (DEGREE, coded 1 = less than high school, 2 = high school, 3 = junior college, 4 = bachelor, 5 = graduate). The model is estimated with

m1 = zelig(WRKSTAT ~ CHILDS + AGE + DEGREE, data = data, model = 'oprobit', cite = FALSE)

and summarized with summary(m1). Approximate two-sided p-values can then be computed from the coefficients and their standard errors:

pnorm(abs(m1$get_coef()[[1]] / m1$get_se()[[1]][1:3]), lower.tail = FALSE) * 2

Interpretation. Attention focuses first on the sign and statistical significance of each coefficient. Controlling for age and education level, the number of children is significantly and negatively associated with employment, suggesting that women with more children are less likely to be working full-time and more likely to be not working; controlling for the number of children and education, age is likewise significantly and negatively associated with employment. Because the raw coefficients are not directly interpretable as effect sizes, a complete presentation also reports predicted probabilities, with confidence intervals, for substantively interesting profiles, for example women with at most a high school degree versus at most a college degree, or across the range of the number of children; the predicted probability of "not working" increases with the number of children up to about five children and then flattens out. In Zelig this is done by defining profiles with setx() and simulating with sim(), which draws 1,000 sets of coefficients from their estimated sampling distribution and reports the mean of the implied predicted probabilities together with their 2.5th and 97.5th percentiles; the approach follows King, Tomz, and Wittenberg, "Making the most of statistical analyses: improving interpretation and presentation", American Journal of Political Science 44(2): 341-355. Average marginal effects can also be computed, but with three outcome categories there is one marginal effect per category rather than a single coefficient.

Model fit and diagnostics. Overall fit can be assessed with a likelihood-ratio (deviance) test of the fitted model against a null model, and with the AIC; several pseudo-R-squared measures exist, but none can be interpreted like the R-squared of OLS regression. Hypotheses about groups of coefficients can be tested with Wald tests, and confidence intervals for individual coefficients can be obtained by profiling the likelihood. As with binary probit and logit, these models need more cases than OLS because they are estimated by maximum likelihood, complete or quasi-complete separation can make estimation fail, and with very small samples exact methods may be preferable. For further reading see Long (1997), Regression Models for Categorical and Limited Dependent Variables (Thousand Oaks, CA: Sage), and Hosmer and Lemeshow (2000), Applied Logistic Regression (New York: John Wiley & Sons).
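If the Zelig package is unavailable, essentially the same model can be fitted with base R plus MASS. The following is a minimal sketch of that workflow rather than a drop-in reproduction of the Zelig output; the file name and variable names are the ones quoted above, while the illustrative profile at the end (age 40, high school degree, 0 to 8 children) is simply an assumption chosen for the example.

library(MASS)                          # provides polr()

data <- read.csv("dataset-gss-2016-subset1.csv")

# The outcome must be an ordered factor; the ordering is given explicitly
# because factor levels default to alphabetical order.
data$WRKSTAT <- factor(data$WRKSTAT,
                       levels  = c("Not working", "Working parttime", "Working fulltime"),
                       ordered = TRUE)

table(data$WRKSTAT)                    # frequency distribution of the outcome

# Ordered probit: polr() with the probit link (the default link is logistic).
m1 <- polr(WRKSTAT ~ CHILDS + AGE + DEGREE, data = data,
           method = "probit", Hess = TRUE)
summary(m1)

# Approximate two-sided p-values from the coefficient / standard-error ratios.
ctable <- coef(summary(m1))
pvals  <- 2 * pnorm(abs(ctable[, "t value"]), lower.tail = FALSE)
cbind(ctable, "p value" = pvals)

# Predicted probabilities for an illustrative profile: age 40, high school
# degree (DEGREE = 2), and 0 to 8 children.
profiles <- data.frame(CHILDS = 0:8, AGE = 40, DEGREE = 2)
predict(m1, newdata = profiles, type = "probs")

The polr() coefficients are on the same latent scale as the Zelig output, so the signs and significance patterns discussed above carry over directly; only the presentation of the cut points and of the predicted probabilities differs.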
|
2021-09-28 20:13:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5901808738708496, "perplexity": 1417.4181249883025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060882.17/warc/CC-MAIN-20210928184203-20210928214203-00558.warc.gz"}
|
https://www.physicsforums.com/threads/objects-move-into-gravity-shorter-time.552726/
|
# Objects move into gravity=shorter time?
1. Nov 21, 2011
### gabeeisenstei
Although I can follow many of the equations, given the Schwarzschild metric, I am stuck on what seems like a contradiction between two things.
1) Objects follow geodesics, which exhibit the principle of extremal aging, meaning most (or least) time along the path between given start/endpoints.
2) Decreasing distance from a center of gravitational mass corresponds to slower clocks (and increasing distances), as compared to far-away observers.
My understanding of these two ideas would imply that things move away from gravity, not toward it. What do I have backwards?
In trying to understand curvature at its most basic, I picture a particle released from a position at rest above the earth; then I imagine three nearby events in spacetime: 1) the particle is motionless during the time-interval, 2) the "next event" in the direction of the gravitational mass (down), and 3) the "next event" in the opposite direction (up). Someone show me why event 2) is the natural one because the spacetime interval to it exhibits extremal aging.
(Bonus: in what cases does "extremal aging" turn into minimal rather than maximal?)
2. Nov 21, 2011
### Staff: Mentor
If you read your principle #1 carefully, you will see that it contains the answer to your dilemma. I'll quote it and emphasize the key phrase:
In other words, the principle of extremal aging only tells you an object's trajectory if you already know the start and endpoints of the trajectory. It does not tell you what those start and endpoints are; you have to figure those out some other way.
So, for example, if I am on the surface of the Earth and I throw a ball up into the air, and I stipulate that it must return to me exactly ten seconds after I throw it, then that fixes the start and endpoints of the trajectory, and the principle of extremal aging, combined with your principle #2, which tells how the "rate of aging" varies with height, tells me that the ball's trajectory will be a parabola (well, strictly speaking it will be the arc of an ellipse which can be approximated very closely by a parabola, but we'll ignore fine points of that sort here).
However, if I am standing on the Earth and release an object and watch it fall, the principle of extremal aging by itself can't tell me its trajectory, because I only know its start point, not its end point. Only if I already know its end point by some other means can I apply the principle of extremal aging to calculate its trajectory. (Of course that invites the question, how do I figure out its end point? See below for further comments on that.)
In summary, the principle of extremal aging does not say, categorically, that "objects will move to where they age faster". It only says that, given specified start and end points, the geodesic trajectory between them will be the one with extremal aging.
Per the above, the principle of extremal aging does not say that "objects moving on geodesics will move to where they age faster". So that principle by itself cannot choose between these three alternatives. In order to see what *does* choose between them, we need a better way to think about geodesics.
Try to visualize the trajectory of the particle in the above scenario as a curve in spacetime; the usual term for such a curve is "worldline". We know one point on this worldline, the point where the particle is momentarily at rest relative to the Earth. At that point, the particle not only has a "position" in spacetime; it also has a "velocity" in spacetime, called a 4-velocity, which is a 4-vector (a vector with four components, one time and three spatial components) that is tangent to the worldline at the given point. (I realize you can't visualize 4 dimensions, but we can make do with three or even two by suppressing one or two of the spatial dimensions. The key is to see the worldline as a curve with a tangent vector at each point.)
The rule for a geodesic can now be stated as follows: a geodesic is a curve whose tangent vector does not change along itself. But we have to be careful about the meaning of "does not change", because we're in a curved spacetime. It turns out that, in the vicinity of a massive object, a curve whose tangent vector does not change along itself is a curve whose tangent vector bends inward towards the massive object. (Remember we're talking about curves in spacetime, so "bends inward" just means the curve's tangent vector is "pulled" in the spatial direction of the object as the curve goes forward in time.) That is what picks out alternative #2 of your three alternatives as the one that actually happens.
Actually, a better way to state this would be: the worldline that appears to bend inward towards the object is actually "straight"; it's only our skewed perceptions, which think of curves that stay at the same distance from the object as "straight", that make us think the actual geodesic worldline is "curved". If we were able to perceive the full 4-dimensional reality directly, we would see that the actual geodesic path is the straightest possible one given the curvature of spacetime in the vicinity, just as a great circle, though it looks "curved" to us, is the straightest possible path on the curved surface of a 2-sphere.
At this point you may be wondering, where does the principle of extremal aging come in at all, since it seems like I can determine an object's geodesic trajectory without ever using it? The answer is that yes, you can determine the object's trajectory without ever using the principle of extremal aging. That principle is, at least in my view, more of a restatement of a particular characteristic of geodesic worldlines, than a rule that you can always apply to calculate what those worldlines are. But the principle still holds; for example, if we take the geodesic trajectory that we determine by the criterion I gave above (the curve's tangent vector does not change along itself), and pick a point on it as the "endpoint" (for example, the point at which the object hits the ground), then we will find that the geodesic curve is the curve of maximal aging between the given start and end point. (You can kind of see how it goes by observing that the geodesic curve accelerates towards the object, so the object spends more time at higher heights, where aging is faster, than at lower heights, where aging is slower, since it moves faster the lower it gets.)
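A quick way to see this trade-off quantitatively is the standard weak-field sketch (this assumes speeds much less than c and weak gravity, with h the height above the ground and g the local gravitational acceleration, so it is only an approximation to the full Schwarzschild treatment). The proper time along a worldline h(t) is approximately

$$\tau \approx \int \left( 1 + \frac{g h}{c^{2}} - \frac{\dot{h}^{2}}{2 c^{2}} \right) dt$$

and requiring this to be extremal with the endpoints held fixed (the Euler-Lagrange equation in h) gives $\ddot{h} = -g$. The competition between "more aging higher up" and "less aging when moving fast" is exactly what produces the downward acceleration, once the endpoints (or equivalently the initial conditions) are fixed.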
I've only given a very brief overview of how this question is answered; there's obviously a lot more that could be said, but it's hard to know what further things to focus in on. If you have further questions, please feel free to ask them. (Also, I'm holding off on addressing your bonus question until we've had some more discussion on the above.)
Last edited: Nov 21, 2011
3. Nov 21, 2011
### Matterwave
I have 2 things to add to Peter's response.
1) The Lagrangian method (i.e. the method to extremize the proper time in this case) is not so "bad" as it seems. You don't really need to "know" the endpoint to use this method, you only must posit that a certain endpoint exists and that the endpoint is not varied. So, one need not state "I know that at the end of the trajectory, the particle will be at (t,x,y,z)", one only needs to state "I know that there IS SOME end of the trajectory in the space-time, I don't know what it is, but it's already determined because I know the initial conditions". The Lagrangian method (specifically the Euler-Lagrange equations) turns your global statement (minimization of action along the path) into a local statement (a second order differential equation).
2) Peter's description of a Geodesic as one on which the tangent vector "does not change" is what is most often referred to as an "affine geodesic". More mathematically precisely, the affine geodesic is the curve which parallel transports its own tangent vector. Parallel transport is defined by some affine connection defined on the tangent bundles of your manifold (just a fancy way of saying a rule which helps you pick out what "parallel" means on a curved space-time).
This description of the geodesic does not require a metric. The metric (time-like) geodesics are ones which extremize proper time. In order to define that, you obviously need a metric. In order that these 2 definitions do not contradict each other, you must limit your connection to be a metric connection.
But you should note that conversely you don't need to define an affine connection in order to define metric geodesics. Thus, you don't need to posit anything about parallel transport in order to determine the geodesics of your space-time.
4. Nov 21, 2011
### Staff: Mentor
This raises two good points. First, you mention minimization of *action*, as distinct from *maximization* of *aging*. In the case of geodesic motion in a gravitational field, minimizing the action turns out to be the same as maximizing aging along the path, subject to the given initial conditions; but it's worth making the distinction because extremizing the action generalizes to cases that extremizing aging doesn't cover. The second good point is that, given a completely deterministic system, specifying the initial conditions (in this case, the initial position and velocity of the particle) is equivalent to specifying the start and end points of the trajectory (in this case, the starting and ending positions and times). I should have made that clear, since I switched from doing the latter to doing the former in the course of my post.
5. Nov 21, 2011
### gabeeisenstei
PeterDonis: Thanks for your reply. I understand what you're saying about not knowing the endpoint, and I'm sure this is related to my confusion about how to understand the direction of curvature. But you've only alluded to the answer I'm looking for.
"It turns out that, in the vicinity of a massive object, a curve whose tangent vector does not change along itself is a curve whose tangent vector bends inward towards the massive object. (Remember we're talking about curves in spacetime, so "bends inward" just means the curve's tangent vector is "pulled" in the spatial direction of the object as the curve goes forward in time.) That is what picks out alternative #2 of your three alternatives as the one that actually happens."
So what I'm trying to understand is why, in terms of the Schwarzschild metric, the tangent vector is "pulled" in the spatial direction of decreasing radius. I thought that the right way to express the "pull" was in terms of extremizing proper time, so that it would be longer in time to the natural "next event" in that direction. Is that wrong?
I must admit I am a bit hazy as to how proper time plays the role of the Lagrangian, but I do know what a Lagrangian is. Also, feel free to talk Christoffel symbols if you have to. I should be able to follow a derivation involving tangent vectors. But in the end I am still hoping to boil it down to a "more time, less space" kind of an explanation that could be given without a detour through Euler-Lagrange.
I still find this bit tantalizing:
"…then we will find that the geodesic curve is the curve of maximal aging between the given start and end point. (You can kind of see how it goes by observing that the geodesic curve accelerates towards the object, so the object spends more time at higher heights, where aging is faster, than at lower heights, where aging is slower, since it moves faster the lower it gets.)"
I can certainly see that the falling object spends more time at greater heights where aging is faster. But this doesn't help me see why it first moves into a slower-aging altitude. Again, I focus on that first instant of motion, when the object decides to move down rather than up or stay in place. Wouldn't it age faster if it didn't fall at all?
6. Nov 21, 2011
### gabeeisenstei
(after reading #4) You guys both seem on top of it, so I have no doubt you'll get me to see the light. In the case of the apple in the instant before it leaves the tree, I'm picturing an initial velocity of zero. In flat spacetime, the tangent vector to the apple's worldline would point along the straight path to the same location at the next moment, but in the earth's curved spacetime the tangent vector no longer coincides with the line--it must be pointing down? (or maybe only in the next instant, after it already started to fall, does the tangent vector point down?)
7. Nov 21, 2011
### Staff: Mentor
In terms of trying to understand what determines the motion in a local sense, which is what you are trying to do, I don't think the principle of extremal aging is useful, because, as I said before, the principle does not say that objects moving on geodesics always move to where aging is faster. It only says that geodesics are worldlines of extremal aging, given the constraints (either start and end points, or initial conditions). But there may be multiple worldlines through a given event that satisfy the extremal aging property, just with different constraints.
It depends on the initial conditions. There is a set of initial conditions for which the object would age faster if it didn't fall at all: the initial conditions such that the object has exactly the right velocity to be in orbit about the Earth at its current height. But you specified a different set of initial conditions, that the object was at rest relative to the Earth at some instant of time. At any given event in spacetime, there are *many* geodesics passing through the event, corresponding to all the possible velocities that a freely moving particle could have there. So there isn't a unique "natural next event" for an object at that event; the "natural next event" depends on the object's velocity at the given event, i.e., on the initial conditions/constraints.
I'm afraid I don't have a handy way of explaining why minimizing the action is equivalent to maximizing proper time for this scenario. I normally think about this type of situation in terms of spacetime curvature and which lines are "straight" lines in the curved spacetime. For example, in response to your next post:
Yes.
In curved spacetime, the "line" of constant height (or constant radius r) is no longer a straight line. It looks straight to our skewed perceptions, but it's really curved. The "straight" line (in spacetime, remember) is the worldline that the falling apple follows.
A tangent vector always points "along" a curve at any given point; that's true for both geodesic and non-geodesic curves. The difference is that for a geodesic, the tangent vector doesn't have to "change" from point to point to keep pointing along it; for a non-geodesic curve, it does. In the case of the falling apple vs. another apple that stays on the tree, the apple that stays on the tree has to change its tangent vector from point to point to keep it pointing along its worldline. The falling apple does not.
8. Nov 21, 2011
### gabeeisenstei
I understand that the worldline of the object at constant height is not straight, and that the worldline of the falling apple is straight. What I don't understand is how the space and time coordinates are stretched in such a way as to make these statements true. I understand how the stretch factor (1-2M/r) works, but I'm missing something about how it gets applied in the right direction.
Maybe thinking about extremal aging was the wrong way to go. If I could just see how compressed time (and extended space?) leads to "straightness" being bent in the direction of the compression, I'd be happy.
(By the way, one thing that got me going in this confused direction was thinking about the rubber sheet that all popular explainers use to illustrate curved spacetime. In that picture, there is more space around the ball on the sheet; but the picture of the sheet leaves out the fact that there is less, rather than more time, around the ball.)
I read the first half of Taylor&Wheeler's book. I followed the derivation of initial acceleration for an object dropped from rest from a given radius. But at the end they said something like, "We choose the negative square root because we know the object is moving toward the gravitational mass." This seemed like cheating. Of course we know which direction the object will fall, but I want to see the direction emerge from the theory, or the metric, or something.
9. Nov 21, 2011
### Passionflower
I do not get that; if we ignore rotation and throw it straight up, it would come straight down. In that case we have a line up and a line down.
In solving this problem we have to take the 10-second proper time on the surface of the Earth and convert it to coordinate time. Then take half of this time to calculate the apogee. Then we can solve for the velocity needed to throw the ball. No need for any (part of a) parabola or an ellipse.
If we toss a clock in this way, we will find that this clock shows a later time than the clock that stayed on Earth when it eventually meets the other clock back on Earth.
Last edited: Nov 21, 2011
10. Nov 21, 2011
### Matterwave
I think you are still trying to visualize the vector in 3-D and that's what's throwing you off. The vectors in GR are 4-dimensional vectors. The 4-velocity vector of an object that is at fixed spatial coordinates is (1,0,0,0). This vector "points in the direction of time". It doesn't point ANYWHERE spatially (its 3 spatial components are 0!)
11. Nov 22, 2011
### gabeeisenstei
"The 4 velocity vector of an object that is at fixed spatial coordinates is (1,0,0,0). This vector "points in the direction of time". It doesn't point ANYWHERE spatially (it's 3 spatial components are 0!)"
I know. That's the apple on the tree. But the next tangent vector to the falling-apple worldline is something like (1,0,0,-1). Curved spacetime is equivalent to acceleration or change in velocity. My question is how the curvature translates into motion toward higher gravity, which is motion into slower aging. I'm still trying to understand Peter's "the curve's tangent vector is "pulled" in the spatial direction of the object as the curve goes forward in time". I want to see the calculation of this "pull" and how it relates to the metric.
12. Nov 22, 2011
### Matterwave
As the object acquires speed, its tangent vector is rotated from being purely in the time-like direction to having spatial components as well. The 4-velocity evolves according to the geodesic equation. The coordinates of the particle obey the differential equation:
$$\frac{d^2 x^\mu}{d\lambda^2}+\frac{dx^\rho}{d\lambda} \Gamma^\mu_{\rho\tau} \frac{dx^\tau}{d\lambda} = 0$$
You solve that second order differential equation. As a second order ODE, the differential equation requires 2 initial values, namely the initial coordinates, and the initial 4-velocity.
13. Nov 22, 2011
### Staff: Mentor
It would actually be something like (1, 0, 0, -v) where v is some small velocity (in units where c = 1, v << 1 to start with, since the object initially falls slowly). A tangent vector of (1, 0, 0, -1) would describe a light ray moving radially inward (assuming that the last spatial component is the radial one).
The "pull" is related to the Christoffel symbols; one way to see it is to calculate the 4-acceleration of a worldline that stays at a constant radius r above the gravitating body. For the Schwarzschild metric, this turns out to be (in units where G = c = 1)
$$a = \sqrt{g_{rr}} a^{r} = \sqrt{g_{rr}} \Gamma^{r}_{tt} u^{t} u^{t} = \frac{M}{r^{2}} \frac{1}{\sqrt{1 - \frac{2M}{r}}}$$
14. Nov 22, 2011
### Staff: Mentor
You're right, I was mixing up scenarios: the parabola (or ellipse if we're purists) applies to one person throwing a ball to another who is some distance away. For the case I actually described, the spatial trajectory is just a vertical line, ignoring rotation.
True, I was only describing the spatial trajectory (and for a slightly different case, as above), not the full description of the motion. I really should have described a worldline in spacetime, which would, as you say, require solving for the coordinate time and thus the initial velocity, and using the initial position and velocity to determine the geodesic followed.
Yes. In fact, the clock that follows the free-fall, geodesic trajectory will show a longer time elapsed than any other clock that travels between the same two events on some other worldline (which will, of course, be a non-geodesic worldline, like the worldline of the clock that stays on the surface of the Earth).
15. Nov 22, 2011
### Staff: Mentor
I should also note that these 4-velocities assume an orthonormal frame; 4-velocities referred to standard Schwarzschild coordinates will *not* look like this. For example, in standard Schwarzschild coordinates, the 4-velocity of a worldline that stays at a constant radius r is
$$u = (\frac{1}{\sqrt{1 - \frac{2M}{r}}}, 0, 0, 0)$$
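(For reference, this follows from the normalization of the 4-velocity: with only the t-component nonzero, $g_{tt} \left( u^{t} \right)^{2} = 1$, and in these coordinates $g_{tt} = 1 - 2M/r$, so $u^{t} = 1 / \sqrt{1 - 2M/r}$.)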
16. Nov 22, 2011
### gabeeisenstei
Well, I'm starting to think that either my question is asking too much, or I'm too dense. I appreciate that we've gotten as far as bringing in the Christoffel symbols; maybe we're almost there… But before I get lost in the Christoffel forest, can we try once more from a conceptual level?
Peter's result, $\frac{M}{r^{2}} \frac{1}{\sqrt{1 - 2M/r}}$, looks similar to but different from a result for acceleration from rest given by Taylor&Wheeler (p. 3-13): $-\left(1 - \frac{2M}{r}\right) \sqrt{\frac{2M}{r}}$
The biggest difference I see is that one is positive and one is negative! Maybe you can say that the minus sign is "understood", but this is my main point of interest: how does curvature tell you the correct sign to take? And can we relate this conceptually to the time stretch factor, that is, to there being "more time" in one direction and "less time" in the other?
I should admit that I'm unclear about the role of the $g_{rr}$ in your equation. Is this g already understood to be Schwarzschild, or is it what you're solving for?
17. Nov 22, 2011
### gabeeisenstei
I wrote my #16 before reading #15. I do understand that my (1,0,0,-1) would refer to a light ray, I was just being sloppy about the units. But I do not understand the point about orthonormal vs. Schwarzschild coordinates. Perhaps it would help, though, if we compared the vector given in #15 with the vector for the apple in its first instant of falling.
18. Nov 22, 2011
### Matterwave
The orthonormal (tetrad) basis versus the coordinate basis issue is somewhat technical. I don't think it's going to help you all that much conceptually.
Perhaps this will help you think about this problem. Suppose that I have my geodesic equation. Suppose it's given. I start off at a point a on my manifold with some 4-velocity, and I use the geodesic equation to get that after some proper time has ticked by on the clock I am carrying with me, I arrive at a point b on my manifold.
The statement of "extremal proper time" is simply that if I took ANY OTHER path from a to b than the path I took by going along my geodesic, I would register a shorter proper time on my clock. (Remember that points "a" and "b" are EVENTS in space-time, and not points in space).
Does that help at all?
19. Nov 22, 2011
### Staff: Mentor
Sure; at least, I think I can be more conceptual to some extent.
First, for any curve in spacetime, I can construct a mapping from points on the curve to real numbers; this is called "parametrizing" the curve. For timelike curves, which are possible worldlines for objects with nonzero rest mass, the obvious parameter to choose is proper time; thus, each point on the worldline has a unique proper time $\tau$ assigned to it, which uniquely identifies that point among all the points on the curve. Obviously I can pick the "origin" of $\tau$ wherever I like, and I can also change the units of $\tau$ however I like, without affecting the actual physics; so given one parametrization $\tau$, I can construct another parametrization $\tau ' = A \tau + B$, where A and B are arbitrary constants, and $\tau '$ will work as well as $\tau$ for describing points on the worldline. The technical term for all this is that $\tau$ is an "affine parameter". We'll assume that we've fixed the origin and units of $\tau$ in what follows.
Next, suppose I have some coordinate chart on a given spacetime. A "chart" is just a mapping of events in the spacetime to 4-tuples of real numbers, which are called "coordinates". Abstractly, the coordinates are given "indexes" from 0 to 3 (or sometimes 1 to 4, depending on whether you like your "time" index to be 0 or 4; I prefer 0), so a particular coordinate is written as $x^{a}$, where "a" is the index. The 4-tuple of coordinates at each event can be thought of as a vector; more precisely, it is a vector in a vector space that is "attached" to the particular event on the worldline at which we are evaluating the coordinates. This vector space is called the "tangent space", and there is a separate tangent space at each event.
Given my coordinate chart, I can construct a mapping between coordinate 4-tuples and points on my worldline, which is equivalent to a mapping between coordinate 4-tuples and values of $\tau$. We express this mapping by writing each coordinate as a function of $\tau$, thus: $x^{a} = x^{a} \left( \tau \right)$.
Now that I have the coordinates as functions of $\tau$, I can talk about derivatives of those functions, without having to talk about the specific form of the functions in terms of a particular coordinate chart or metric expression. The 4-velocity $u^{a}$ is just the derivative of the coordinate vector $x^{a}$ with respect to $\tau$, thus:
$$u^{a} = \frac{d x^{a}}{d \tau}$$
The 4-acceleration, or "proper acceleration" as it is often called, is then the covariant derivative of the 4-velocity. This is where the Christoffel symbols come in:
$$a^{a} = \frac{D u^{a}}{d \tau} = \frac{d^{2} x^{a}}{d \tau^{2}} + \Gamma^{a}_{bc} \frac{d x^{b}}{d \tau} \frac{d x^{c}}{d \tau} = \frac{d^{2} x^{a}}{d \tau^{2}} + \Gamma^{a}_{bc} u^{b} u^{c}$$
You will note, by the way, that the above expression looks very similar to the geodesic equation that matterwave wrote down; that's because the geodesic equation is just a special case of the above, where the 4-acceleration is zero:
$$a^{a} = \frac{d^{2} x^{a}}{d \tau^{2}} + \Gamma^{a}_{bc} u^{b} u^{c} = 0$$
Matterwave's version had $\lambda$ instead of $\tau$, but $\lambda$ is just an affine parameter and we've already seen that any affine parameter we pick will work equally well. The reason $\lambda$ is often used is that the above expressions are actually not limited to timelike curves; they apply to any curve in the spacetime. We're only considering timelike curves here so I'll stick with $\tau$.
You'll note that all of the above is perfectly general; I haven't made any assumptions at all about the coordinate chart (except for assumptions about continuity, differentiability, etc., that are necessary in order to meaningfully talk about this stuff at all). But suppose I also have an expression for the metric in my chosen coordinate chart; the metric is a tensor $g_{ab}$ (the tensor is symmetric, so it has ten independent components in 4-D spacetime), which acts on the coordinates to produce a "line element" that describes how $\tau$ changes along a small differential element of a curve that I have specified as above. The general form of the line element is:
$$d \tau^{2} = g_{ab} dx^{a} dx^{b}$$
where I have used the Einstein summation convention (repeated indexes in an expression are summed over). The Schwarzschild metric is a particular case of this general expression; looking at the line element, you should be able to read off the components $g_{ab}$ by looking at the coefficients of each combination of coordinate differentials. (The Schwarzschild line element is particularly simple because only the diagonal metric coefficients, where a = b, are nonzero.)
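For concreteness, the Schwarzschild line element in these coordinates (in units where G = c = 1, and with the same sign convention as the $d \tau^{2}$ expression above) is

$$d \tau^{2} = \left( 1 - \frac{2M}{r} \right) dt^{2} - \frac{dr^{2}}{1 - \frac{2M}{r}} - r^{2} \left( d \theta^{2} + \sin^{2} \theta \, d \phi^{2} \right)$$

so $g_{tt} = 1 - 2M/r$, $g_{rr} = - 1 / \left( 1 - 2M/r \right)$, and so on can be read off directly. (When taking the magnitude of a spacelike vector such as the 4-acceleration, the absolute value $\left| g_{rr} \right|$ is what enters the square root.)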
Hopefully the above will help to make more sense of some of the previous posts; I'll comment on one particular aspect further below.
Are you sure the latter expression is for proper acceleration? It looks like a coordinate velocity dr/dt for an infalling observer.
The full answer will require digging into those Christoffel symbols and such in more detail. But one general comment to make is that before interpreting any signs, you have to make sure that the sign conventions being used are consistent. There are a number of different ones in the literature, and many times what appears to be a difference in physics is actually just a difference in sign convention.
(In this particular case, as I said in my previous comment above, I think the expression you've taken from Taylor & Wheeler is for the coordinate velocity dr/dt of an infalling observer, which will be negative because r is decreasing with increasing t.)
The sign convention I was using for my expression for the 4-acceleration (or proper acceleration, as it is often called) of a worldline that stays at a constant radius r for all time, is that positive acceleration means the 4-velocity vector is being pushed "outward", i.e., in the positive r-direction, relative to what it would be if the worldline were a geodesic starting from the same event. However, we can see how this works in a bit more detail by looking specifically at the "r" component of the equation above for the 4-acceleration:
$$a^{r} = \frac{d^{2} r}{d \tau^{2}} + \Gamma^{r}_{bc} u^{b} u^{c}$$
Now for a worldline where r stays constant for all time, the first term on the RHS above is zero; so all we have is the second term. And, since for such a worldline the only nonzero component of the 4-velocity is the "t" component, all that survives of the second term on the RHS above is the b = c = t component. That gives most of the equation I wrote down in my earlier post; the only other piece we need is that the actual physical magnitude of a 4-vector is given by contracting it with the metric:
$$| a | = \sqrt{g_{ab} a^{a} a^{b}}$$
Since the only non-zero component of the 4-acceleration is the r-component (I haven't shown this, but it can be seen by working out the other Christoffel symbols and seeing that there are no other nonzero ones which have both "downstairs" indexes of t, which are the only ones that would survive after being contracted with the 4-velocity), the actual physical magnitude of the 4-acceleration, which is what an observer following the given worldline would actually measure with an accelerometer, is the expression I wrote down.
I should have clarified, though, that that acceleration is *not* quite what you were asking for. What you were asking for was the *coordinate acceleration* of a freely falling object (i.e., one traveling on a geodesic), relative to an observer who is at a constant height. But it should be apparent that that is just the inverse of the acceleration that I wrote down; i.e., if you put a minus sign in front of the expression I wrote, you have the expression for the acceleration that you were looking for. (Strictly speaking, it's only "apparent" for the particular event where the free-falling object is momentarily at rest at some radius r; once the free-falling object is moving inward, the $\Gamma^{r}_{rr}$ Christoffel symbol also comes into play since the r-component of the 4-velocity is now nonzero. But it only takes a little more work to see that the final answer still comes out the same; the t-component of the 4-velocity changes in just the right way to offset the additional term coming in from the r-component.)
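To make the sign explicit, here is a quick sketch using the standard Christoffel symbol of the Schwarzschild metric (same units and conventions as above):

$$\Gamma^{r}_{tt} = \frac{M}{r^{2}} \left( 1 - \frac{2M}{r} \right)$$

For the momentarily-at-rest apple the only nonzero 4-velocity component is $u^{t} = 1 / \sqrt{1 - 2M/r}$, so the geodesic equation reduces to

$$\frac{d^{2} r}{d \tau^{2}} = - \Gamma^{r}_{tt} \left( u^{t} \right)^{2} = - \frac{M}{r^{2}}$$

The minus sign, i.e. the fact that r decreases, is not put in by hand; it comes out of the metric through $\Gamma^{r}_{tt}$.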
I'm not sure. At any rate, I don't think we're ready to bring that factor in yet.
Hopefully the above clarifies that; $g_{rr}$ is one of the metric coefficients, and is given once you've determined what spacetime you're in and what coordinate chart you're using.
20. Nov 22, 2011
### Staff: Mentor
Answering this will expand somewhat on the last part of my previous post.
At the instant when the apple is momentarily at rest at radius r (i.e., it has *just* been released), its 4-velocity is the same as the vector I wrote down in #15. What is different is its rate of change:
(1) For the stationary object (say an apple still hanging on the tree, right next to the one that's just started to fall), the *coordinate* acceleration at this instant is zero; but the *proper* acceleration is positive (it's the expression I wrote down).
(2) For the freely falling object (the falling apple), the *proper* acceleration is zero, but the *coordinate* acceleration is the negative of the expression I wrote down.
(Putting these two statements together with the expression for proper acceleration I wrote down in my last post should make it clearer why the second part of #2 is true.)
So "an instant later", so to speak, the freely falling apple will have acquired a small r-component to its 4-velocity, in the *coordinate* sense, and the t-component will have changed so as to keep the overall length of the 4-velocity equal to 1 (since the 4-velocity is always a unit vector). But the apple still hanging on the tree will not have changed its 4-velocity at all, in a *coordinate* sense. I say "in a coordinate sense", because physically, the falling apple is the one whose 4-velocity has not changed, and the hanging apple is the one whose 4-velocity has changed. But the coordinates are skewed so that in coordinate terms it "looks" the other way around.
|
2018-02-20 20:16:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7214452028274536, "perplexity": 355.307679530513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813088.82/warc/CC-MAIN-20180220185145-20180220205145-00322.warc.gz"}
|
https://economics.stackexchange.com/questions/2938/deriving-the-modigliani-miller-theorem
|
# Deriving the Modigliani–Miller Theorem
In the Wikipedia article on the Modigliani–Miller theorem, two propositions are stated. (It gives the cases both with and without taxes; here I'll focus only on the case without taxes.) The first proposition is that the value of an unlevered firm is the same as that of a levered firm. Given the assumptions, this is clear from the discussion:
To see why this should be true, suppose an investor is considering buying one of the two firms U or L. Instead of purchasing the shares of the levered firm L, he could purchase the shares of firm U and borrow the same amount of money B that firm L does. The eventual returns to either of these investments would be the same. Therefore the price of L must be the same as the price of U minus the money borrowed B, which is the value of L's debt.
However, here I am asking about "Proposition II:"
$$r_E(Levered) = r_E(Unlevered) + \frac{D}{E} \left(r_E(Unlevered) - r_D\right),$$ where
• $r_E$ is the required rate of return on equity, or cost of equity,
• $r_D$ is the required rate of return on borrowings, or cost of debt,
• and $\frac{D}{E}$ is the debt-to-equity ratio.
The article states that the "formula is derived from the theory of weighted average cost of capital (WACC)." (See a related question here.) My question is this: how can we arrive at this result from WACC?
$$r_E(Levered) = \frac{E+D}{E}r_E(Unlevered) - \frac{D}{E}r_D$$
$$r_E(Unlevered) = \frac{E}{E+D}r_E(Levered) + \frac{D}{E+D}r_D$$
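Here is a sketch of how Proposition II falls out of the WACC identity above; the only extra ingredient is Proposition I (no taxes), which implies that the levered firm's WACC equals the unlevered cost of equity $r_E(Unlevered)$. Starting from the second equation and multiplying both sides by $\frac{E+D}{E}$:
$$\frac{E+D}{E}\,r_E(Unlevered) = r_E(Levered) + \frac{D}{E}\,r_D$$
Solving for $r_E(Levered)$:
$$r_E(Levered) = \frac{E+D}{E}\,r_E(Unlevered) - \frac{D}{E}\,r_D = r_E(Unlevered) + \frac{D}{E}\left(r_E(Unlevered) - r_D\right),$$
which is Proposition II. The first of the two equations above is just this intermediate rearrangement.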
|
2019-09-23 19:42:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7356141209602356, "perplexity": 899.9964553148861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514578201.99/warc/CC-MAIN-20190923193125-20190923215125-00049.warc.gz"}
|
http://www.koreascience.or.kr/article/JAKO198403041895286.page
|
# A Study on the Melting Properties of Cheese
• Park, Ji-Yong (Cheil Sugar Co., Ltd.) ;
• Rosenau, J.R. (Department of Food Engineering, University of Massachusetts)
• Published : 1984.06.30
#### Abstract
The traditional methods of testing the meltability of cheese, the Schreiber and the Arnott tests, were reviewed and compared. The limitations of these methods were examined. Different sensitivities were observed for the two tests. In the Schreiber test, sharp Cheddar showed the highest meltability, followed by process American, mild Cheddar, and Mozzarella. In the Arnott test, however, the ranking changed to Mozzarella, mild Cheddar, sharp Cheddar, and process American. Process cheese products showed very dispersed values. Meltability increased quickly until about 4 min and held constant after 5 min in the Schreiber test. In the Arnott test, it started to increase after 5 min and held constant after 15 min. The constant meltabilities observed after these times were caused by scorching or case hardening, which prevented further flow. The DSC thermogram showed endothermal peaks at about 14 and $30^{\circ}C$. These peaks can be accounted for by the fusion of butter fat during heating.
|
2019-11-22 15:21:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18886248767375946, "perplexity": 11902.421267050242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671363.79/warc/CC-MAIN-20191122143547-20191122172547-00072.warc.gz"}
|
https://www.percentagecal.com/answer/what-is-10.-percent-of-330000
|
#### Solution for What is 10 percent of 330000:

10 percent of 330000 =
(10 / 100) * 330000 =
(10 * 330000) / 100 =
3300000 / 100 = 33000

Now we have: 10 percent of 330000 = 33000

Question: What is 10 percent of 330000?

Percentage solution with steps:

Step 1: Our output value is 330000.
Step 2: We represent the unknown value with $x$.
Step 3: From step 1 above, $330000 = 100\%$.
Step 4: Similarly, $x = 10\%$.
Step 5: This results in a pair of simple equations:
$$330000 = 100\% \quad (1)$$
$$x = 10\% \quad (2)$$
Step 6: By dividing equation (1) by equation (2), and noting that both right-hand sides have the same unit (%), we have
$$\frac{330000}{x}=\frac{100\%}{10\%}$$
Step 7: Taking the reciprocal of both sides gives
$$\frac{x}{330000}=\frac{10}{100} \Rightarrow x = 33000$$
Therefore, $10\%$ of $330000$ is $33000$.
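For readers who want to check this programmatically, the same computation in two lines of Python (a hypothetical helper, not code from this site):

```python
# Hypothetical helper mirroring the steps above: p percent of a value.
def percent_of(p: float, value: float) -> float:
    return p / 100 * value

print(percent_of(10, 330000))  # 33000.0
```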
#### Solution for What is 330000 percent of 10:

330000 percent of 10 =
(330000 / 100) * 10 =
(330000 * 10) / 100 =
3300000 / 100 = 33000

Now we have: 330000 percent of 10 = 33000

Question: What is 330000 percent of 10?

Percentage solution with steps:

Step 1: Our output value is 10.
Step 2: We represent the unknown value with $x$.
Step 3: From step 1 above, $10 = 100\%$.
Step 4: Similarly, $x = 330000\%$.
Step 5: This results in a pair of simple equations:
$$10 = 100\% \quad (1)$$
$$x = 330000\% \quad (2)$$
Step 6: By dividing equation (1) by equation (2), and noting that both right-hand sides have the same unit (%), we have
$$\frac{10}{x}=\frac{100\%}{330000\%}$$
Step 7: Taking the reciprocal of both sides gives
$$\frac{x}{10}=\frac{330000}{100} \Rightarrow x = 33000$$
Therefore, $330000\%$ of $10$ is $33000$.
|
2021-06-25 09:16:57
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924346923828125, "perplexity": 5006.887265279001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630081.36/warc/CC-MAIN-20210625085140-20210625115140-00052.warc.gz"}
|
http://dhna.radiosisma.it/thinkorswim-standard-deviation-channel.html
|
# Thinkorswim Standard Deviation Channel
This is a range based indicator, when used right. In finance, standard deviation acts as a way of gauging volatility. TradingView India. In general, the higher power the less weight is given to extreme deviations between true and predicted targets. i have an excel data of thousands of sensors whose indicators are varying with time on a daily basis and i want to use the change point detection package in R and embed it in python to generate a list/panda/excel output for each sensor detecting the exact trend change date and type of change is it increase(+1) or decrease(-1) standard deviation change is it increase( +1) or decrease (-1) or. The upper and lower are two standard deviations below and above the moving average in the middle. We can now install pandas, statsmodels, and the data plotting package matplotlib. Thinkorswim Watchlist Importing Symbols 3. However, knowing how to calculate the standard deviation helps you better interpret this statistic and can help you figure out when the statistic may be wrong. 9 shorter timeframes. Standard Deviation. The deviation from it can be perceived as random fluctuations associated with the non-typical actions of buyers and sellers. The Breadth Thrust indicator is a market momentum indicator. 71%, compared to 0. Trend indicators are designed to identify and follow the trend of a currency pair. Best reversal indicator thinkorswim Best reversal indicator thinkorswim. In a sideways market, upper band acts as resistance and lower band acts as a support. Volume Indicator — Check out the trading ideas, strategies, opinions, analytics at absolutely no cost! — Indicators and Signals. Windows 8, Windows 10, Windows Server 2012 or later. Step 7: Divide the standard deviation by the square root of the sample size (n). Ask_Shadow; ATR Channels SnapShotI. There are 32-bit and 64-bit versions of the spark. Take the stdev of all the closes of that period (using yahoo finance data) you get a standard deviation of 3. $\sqrt{E It is easier to think about the confidence interval of a SD. A 14-day Wilder moving average is equivalent to a 27-day exponential moving average using the standard formula. As long as you are using the TD-Ameritrade ThinkOrSwim "Desktop Platform", this system can be applied to any charts of stocks, futures, or forex (does work on TOS mobile app with limitations). By generating two sets of Bollinger bands - one set using the parameter of "1 standard deviation" and the other using the typical setting of "2 standard deviation" - we can look at price in a whole new way. This indicator is similar to Bollinger Bands, which use the standard deviation to set the bands. Z-scores are the number of standard deviations above and below the mean that each value falls. reason that the standard deviation indicator in thinkorswim accepting constant length of time only; in week of holiday or exchange down the number goes off. Index performance for Bloomberg Commodity Index (BCOM) including value, chart, profile & other market data. Standard deviation channel for OB. 70# Price Action Channel Binary System; 71# Stochastic Oscillator 21, 8, 8, Binary System; 72# Harami Binary Options Strategy; 73# MACD Binary Strategy; 74# TMA Binary System; 75# The Power of Trend-Momentum; 76# Parabolic Sar and RSI: Fox Binary System; 77# Black-Scholes Binary Options System; 78# 2MA Standard Deviation Binary System; 79# Flat. 
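As a concrete illustration of "standard deviation as a volatility gauge" mentioned above, here is a minimal Python sketch: the standard deviation of a window of closes, and the annualized standard deviation of daily returns. The toy prices and the 252 trading-day annualization factor are assumptions of the example, not values taken from this page.

```python
# Minimal sketch: standard deviation of closes, and annualized volatility of returns.
# The price series and the 252 trading-day annualization factor are illustrative assumptions.
import numpy as np
import pandas as pd

close = pd.Series([101.2, 102.5, 100.8, 103.1, 104.0, 102.2, 105.3, 104.1])

stdev_of_closes = close.std(ddof=1)                         # "take the stdev of all the closes"
daily_returns = close.pct_change().dropna()
annualized_vol = daily_returns.std(ddof=1) * np.sqrt(252)   # volatility as annualized stdev of returns

print(round(stdev_of_closes, 4), round(annualized_vol, 4))
```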
Depending on the psychological and physical condition of the patient, the results can also vary by up Criticism: The IQ was developed by West Europeans for West Europeans according to West European standards. The Value Area Indicator provides an automated band representing the volume-weighted Value Area. Learn more about various applications of standard deviation, or explore hundreds of other calculators addressing topics such as finance, math, health, and fitness. Trend indicators are designed to identify and follow the trend of a currency pair. The following image is a comparision of TPR and BOLL. The trend is flat when the channelmoves sideways. It helps to provide reasoning to the question of how far away is something from where it can normally be found. Related Indicators. The Traders Dynamic Index uses trend direction, momentum and market volatility to determine market conditions. Thinkorswim happens to be one of the best of the bunch. What is standard deviation and how is it calculated? Looking for a simple standard deviation meaning? A simple standard deviation explanation is that it is a statistical measurement of how far data is spread out from the mean, or average. About Volume technical analysis and using Twiggs Money Flow as an alternative to Chaikin Money Flow technical indicator - index chart example of money flow analysis. 2020 10:46 365 days in a year 52 number of Saturdays 52 number of Sunday 9 number of Federal no trade. To move from discrete to continuous, we will simply replace the sums in the formulas by integrals. The first two steps are identical. As like the variance, if the data points are close to mean, there is a small variation whereas the data points are highly spread out. StrategyDesk is a trading tool TD I have been trading the equity markets with many different strategies for over 40 years. Including 2 standard deviation makes Bollinger Bands more dynamic and adaptive to volatility. Do the same with the divisor: Calculate V[21] as the sum of volume for the same 21 day period as in 3. We'll construct a table to calculate the values. We can now install pandas, statsmodels, and the data plotting package matplotlib. I highly recommend www. These technical indicators measure the strength of a trend based on volume of shares traded. Thinkorswim strategies Thinkorswim strategies. 12%, sB = 20. 1 channel out on the standard deviation represents 68% of returns, if they're normally distributed 1 channel out on the standard error represents 68% of the slopes, if they're normally distributed level 2 Justin534. The LinearRegCh50 and LinearRegCh100. Thinkorswim Watchlist Importing Symbols 3. This indicator is similar to Bollinger Bands, which use the standard deviation to set the bands. We start our 2020 Thinkorswim review with broker commissions on most popular investment products. To find it (and others in this article), click the Charts tab in thinkorswim. So the expected move is anywhere from$170-$230. How to Calculate a Sample Standard Deviation. For example, it’s possible to use 50 and 2. Keltner Channels are volatility-based envelopes set above and below an exponential moving average. Екілік Options көрсеткіштері - Жүктеу нұсқаулары. These can be located within the channel and outside it. StandardDeviationChannels. The lines are spaced x number of standard deviations above and below the Linear Regression Trendline. 
The distance between frame of the channel and regression line equals to the value of the standard deviation of the close price from the regression line. Standard Deviation Channel (SDC) WHAT IT IS. low dispersion or deviation). Besides being a fun statistics exercise, ACT standard deviation is a powerful tool to see how competitive you are in the college application and. For a more thorough explanation, check out our Bollinger Bands lesson. Find the latest Amazon. It is calculated as the square root of variance. This indicator is designed to draw a standard deviation channel. Regression Channel. - All plots can be colored according to their slope, or alternatively all plots can be colored according to the slope of the midband. Extend the channel lines two standard deviations away from each side of the line As the regression channel is formed using standard deviation as a volatility measure, you’ll find some traders referring to it a standard deviation channel. Other applications may vary this slightly. The standard deviation measures the spread of the data about the mean value. Build a simple, yet effective Anchored VWAP indicator for ThinkOrSwim in less than 10 minutes using just a few lines of code. Standard deviation channels are plotted at a set number of standard deviations around a linear regression line. I want a scan to populate / alert as the price of a symbol crosses either the upper or lower end of the standard deviation channel. In this lesson we're going to learn how to And you can also choose, before the standard candles, the "Monkey Bars," and the "Monkey Bars Most of you think that these lines by default on the thinkorswim platform cannot be eliminated. - The indicator has an option to display the channels or the midband individually. 69% the previous market day and 1. The first is simply a 10-week moving average of the VIX since 1990, and the second is a daily chart of the VIX with Keltner Channels plotted around a 10-day simple moving average. The normal random variable of a standard normal distribution is called a standard score or a z-score. After yield recovered off the uber low intraday in mid-October, it blew up through the 32%, 50% and 62% Fib retracements for the move down during September. RayL = true - ray for the Low channel. Trend Analysis and Statistical Probability with Standard Deviation Channels. [1] X Research source Once you know what numbers and equations to use, calculating standard deviation is simple! Steps. Standard Deviation is used in statistics and probability theory and is represented by the Greek letter sigma (σ). Prices will distribute around the simple moving average: Around 65% of price action is contained within 1. ; Press OK to close the Chart Studies window. The LizardTrader version has improved the code infrastructure to perform more efficiently, in particular when used on large datasets and with the CBOC = false setting. When thinkorswim burst onto the online options trading scene back in 1999, it redefined the standard for options trading platforms. The abnormal return CAR(1) has a mean of 0. The standard deviation on the porfolio equals the positive square root of the the variance. 5 / 5 ( 2 votes ) NinjaTrader Volatility Indicator Typically in technical analysis, traders will use the Standard Deviation as a measure of volatility. ** Custom Standard Deviation Channel ** Available until "Automatically Plots Regression and 12 Deviation Levels" 7of9 % COMPLETE$29 ** Custom Daily MA's with Daily. 
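The construction described above (a least-squares regression line with channel lines offset by the standard deviation of the closes around that line) can be sketched in a few lines of Python. This is an illustrative sketch, not thinkorswim's or MetaTrader's own implementation; the function name and defaults are made up.

```python
# Minimal sketch of a standard deviation (linear regression) channel.
# 'close' is assumed to be a pandas Series of closing prices; names are illustrative.
import numpy as np
import pandas as pd

def std_dev_channel(close: pd.Series, num_dev: float = 2.0) -> pd.DataFrame:
    x = np.arange(len(close))
    slope, intercept = np.polyfit(x, close.to_numpy(), 1)    # least-squares regression line
    midline = intercept + slope * x
    dev = np.std(close.to_numpy() - midline)                  # deviation of closes from the line
    return pd.DataFrame(
        {"mid": midline, "upper": midline + num_dev * dev, "lower": midline - num_dev * dev},
        index=close.index,
    )
```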
Tutorial About Derivative Oscillator and RSI technical analysis - how to generate signals on stock charts via derivative oscillator. The Standard Deviation Channel is composed of two lines parallel to the Linear Regression Trendline and distanced from it by specified number of standard deviations. Expected winnings: estimated winnings over the simulated Standard deviation after X hands: This number shows by how much your actual results will differ from the expected results on average. This is where the VWAP can add more value than your standard 10, 50, or 200 moving average indicators because the VWAP reacts to price movements based on the volume during a given period. If you want to download thinkorswim trading software then read this article & get official download link. This statistical analysis tool is normally overlaid on a price chart. ** Note Sinopsis dibuat berdasarkan Sinopsis 1 Episode Penayangan di India,, BERSAMBUNG KE EPISODE 136 SELANJUTNYA>> << SINOPSIS SARASWATICHANDRA EPISODE 134 SEBELUMNYA. The center of the channel is generally calculated using a simple moving average of the market price. Read our review to find out everything you need to know about Thinkorswim. Welcome to the first episode of “How to Thinkscript”. STANDARD DEVIATION 6. For example Results : Standard Deviation of the array (a scalar value if axis is none) or array with standard deviation values along specified axis. Should we get an actual crash where IEF would drop 2-standard deviations to the price of 94. Obserwujący: 30, obserwowani: 29, posty: 15 - zobacz zdjęcia i filmy zamieszczone przez Standard Deviation (@standard_deviation) na Instagramie. The code draws three plots that show the standard deviation for the close price of the current symbol on the defined period. low dispersion or deviation). Thinkorswim Alert TTM Trend The TTM Trend is a licensed Study listed in Thinkorswim under the category name John Carter. Since standard deviation data is not available for all listed countries, we use estimates from neighboring countries. Also, take a look at the percentiles to know how many of your data points fall below -0. Keltner Channel of Apple (AAPL). Click the “New Study” button 4. Timeframe: 1-day bars (Daily). The standard deviation is basically a number expressing how much the values of the price differ from the mean value. How to install Standard deviation channels in forex trading platform metatrader 4?. geek-speak, that range is one standard deviation wide, and theoretically covers 68% of the possible stock price changes. Right click on TOS chart 2. 071% and a standard deviation of 6. This default setting is NOT compatible with building a scan. To find it (and others in this article), click the Charts tab in thinkorswim. Now take the close of the last options ex period of 183. The daily chart of the VIX with two standard deviation Bollinger Bands plotted around a 10-day SMA. [] ~ (SDC) WHAT IT IS. But the agg() function in Pandas gives us the flexibility to perform several statistical computations all at once! If I wanted only those groups that have item weights within 3 standard deviations, I could use the filter function to do the job. The volatility cycle indicator measure differences between current standard deviation of the price and its highest and lowest within the last “InpBandsPeriod” bars. In plain English, it is a measure of the spread of the data, or how wide it spreads out. 
Standard Deviation Channels is a Metatrader 5 (MT5) индикатор және форекс индикаторы мәні жинақталған тарихы деректерді өзгертуді болып табылады. We recognize 2 kinds of volatility: historical volatility and implied volatility. The standard deviation channel is used to measure how much price deviates from the channel (i. Thinkorswim strategies. The 2 Standard Deviation move (a HUGE gap) produced the following statistics for this study, even smaller probabilities:. Tutorial About Derivative Oscillator and RSI technical analysis - how to generate signals on stock charts via derivative oscillator. Donchian Channels: A moving average indicator developed by Richard Donchian. The Breadth Thrust indicator is a market momentum indicator. Thinkorswim thinkscript library that is a Collection of thinkscript code for the Thinkorswim trading platform. Heikin-Ashi. The number of channels is denoted on top of the box. Thinkorswim relative volume indicator. It has been channeling very nicely for the past year. Using 3 standard deviations encloses about 99% of the selected data but the channel often appears too wide. As price action changes, the linear regression lines will automatically adjust and the channel will become up trending, down trending, or sideways depending on current. 2020 10:46 365 days in a year 52 number of Saturdays 52 number of Sunday 9 number of Federal no trade. week, day, hour, 15 minutes, etc) and then provide standard deviation levels for that time period. E-Bullion System rozlicze elektronicznych E-Bullion zarejestrowany jest w Panamie, binary options trading robot free account. It’s recommended to use periods from 13 to 24, while the deviation should be in the range between 2 and 5. A low standard deviation indicates that data points are generally close A high standard deviation indicates greater variability in data points, or higher dispersion from the mean. Belkhayate Cog For Thinkorswim. Function VOLATILITY: Returns the volatility value based on a standard deviation-type formula Function SUMM: Returns the summation value of the price array input (helper function for volatility). Traders earn whaling sums of money on trending markets. Thinkorswim Trend Reversal Indicator. zigzag channel breakout indicator mt4 download. Instead it plots price against changes in direction by plotting a column of Xs as the price rises and a column of Os as the price falls. the bollinger bands what we must do change only the deviation and make it 0. # Create a sequence of probability values incrementing by 0. g: 3 2 9 4) and press the Calculate button. Historically, differentials above one standard deviation have preceded 12-month forward SPX average price drops of 2. The following image is a comparision of TPR and BOLL. In technical analysis PMO (Price Momentum Oscillator) is used in the same way as MACD. Standard deviation is a measurement of how spread out the numbers are of a set of data. One big difference: the numbers at the bottom have changed. In contrast to the more standard uses of neural networks as regressors or classifiers, Variational Autoencoders (VAEs) are powerful generative models, now having applications as diverse as from…. By Tim Leonard 07 September 2020 Deciding which stocks will make a good addition to your portfolio is challenging, but with the help of a free stock screener, your options can quickly become much clearer. Because standard deviation measures volatility, the ~ s widen during volatile markets and contract during calmer periods. 
They can be used in swing trading and in detecting changes in momentum. 4% instead of a typical price rise of 8. Related Indicators. 100 out of 1000. FS 30-4 and FS 30-5 red color. Since the introduction of the native TOS Volume Profile, I've had numerous requests to publish "those lines" that were a part of my original Volume Profile study. 0 - deviation size for the CHANNEL High object. Belkhayate Cog For Thinkorswim. — Indicators and Signals. Build the VScore trading indicator which helps you understand and plot price behavior in relation to its standard deviation, using the VWAP bands. An outlier would be far away from the. I’ve been trying to create a dynamic alert / scan using the default STD_DevChannel study. TradingView UK. Maybe what you call the standard deviation of standard deviation is actually the square root of the variance of the standard deviation, i. Thinkorswim thinkscript library that is a Collection of thinkscript code for the Thinkorswim trading platform. O'Neil is the founder and chairman of Investor's Business Daily, a national business newspaper. Minimum PC Requirements. Hello, I was wondering if anyone could point me in the direction of an indicator, for any platform, MT4, thinkorswim, ninjatrader, etc, that can calculate the implied volatility for a given time period (i. This indicator is designed to draw a standard deviation channel. Probability. One channel is equal to one standard deviation from the moving average and two channels equals 2 standard deviations from the moving average. Defines which of the extra lines should be visible. STD_widthH = 1. This indicator will draw two Standard Deviation Channels based on Fibonacci 0. These can be located within the channel and outside it. Volatility is in finance represented by the standard deviation computed from the past (historical) prices. Standard Deviation (SD) is a measure of central tendency. This indicator not only plots the trend lines but also signals the potential take profit levels (or support/resistance levels) where prices are most likely going to reverse of bounce off after breaking the trend line. StockFetcher Forums · Indicators and Measures << >> · Start New Thread Subject Author Replies Last; Absolute Price Oscillator (APO) stockfetcher: 0: 11/25/2002 4:26:38 PM. MSCI FaCS is a standard method for evaluating and reporting the Factor characteristics of equity portfolios including ETFs. STANDARD DEVIATION 6. Statistical tests such as the t-test intrinsically depend on the concept of a standard deviation. c543t13701-t13701w66551-w66551d8871 つぶさに観察し、予断なく捉えること。 そうすることでしか、答えを導き出すことはできません。 私たちのスタンスはシンプルですが、それ故、時として賛否を呼ぶアプローチとなることもあります。. days after you enroll. Edit Subject. Bollinger Bands are envelopes plotted at a standard deviation level above and below a simple moving average of the price. Donchian Channel Indicator (Description, Installation, Configuration) Practical Heiken-Ashi Indicator - Trend Trade Intraday Forex Charts; Ichimoku Kinko Hyo Indicator – An All-Round Tool For Finding And Catching Profitable Trades; Williams Alligator MT4 Indicator – Know If There Is A Trend Or Not. input length = 50 ; We were much more impressed with thinkorswim’s equity research capability. ThinkOrSwim Script for Plotting Deviation and Value Area I know some people over here are still using the draw tool to plot their areas, but I combined bombafetts and some other guy who scripted the value area (sorry forgot the username) so that all you have to do is input five values each day and TOS will do the rest. 
The standard deviation is a summary measure of the differences of each observation from the mean. Standard deviation channels are plotted at a set number of standard deviations around a linear regression line. If you prefer to build your portfolio using individual stocks instead of ETFs, Thinkorswim charges the standard $0 commission for buys and sells. Right click on TOS chart 2. 01 qqnorm creates a Normal Q-Q plot. NEW HIGHS-NEW LOWS. The default values are 20. Here is a quick video on how to apply regression channels to your TOS charts! My Discord - https://discord. Keltner Channel vs Bollinger Bands. If the market is in. This section describes the general operation of the FFT, but skirts a key issue: the use of complex numbers. Deviation just means how far from the normal. Thinkorswim strategies. In a market with a strong trend, prices may trade beyond the channel, either above or below. Stick it on a chart and give it a go. Here’s the deal:. The setup occurs when both the Upper and Lower Bollinger band lines are contained within the Donchian Channel. In technical charting, this term refers to the tendency of an indicator to change its value, and thus its position on the chart, with any new data point that comes in. A standard deviation setting of 1 will result in bands containing 68% of the volume. Quando os spreads estáticos são exibidos, os valores são médias ponderadas pelo tempo derivadas de preços negociáveis na FXCM de 1 de outubro de 2016 a 31 de dezembro de 2016. MT4 provides a tool for standard deviation channel in the MetaTrader 4 menu (Insert -> Channel -> Standard Deviation). In general, the higher power the less weight is given to extreme deviations between true and predicted targets. I have the cointegration test and the beta weighted shares. Regression = true - Linear Regression Channel, false - Standard Deviation Channel. Standard deviation is a measure of how much variance there is in a set of numbers compared to the average (mean) of the numbers. free download zigzag indicator metatrader4 for android. Os spreads são variáveis e estão sujeitos a atrasos. One important modication in our architecture is that in the upsampling part we have also a large number of feature channels, which allow the network. I've been trying to figure this out for a while and have read lots of contradicting resources online, and I'm still. Standard Deviation Channels 05-21-2019, 02:51 PM. 1 12 customer reviews. Standard deviation channel is built on the basis of linear regression channel. Why should we care about variance and standard deviation? Well for all of your data, you will inevitably have variance in machine learning. The standard deviation channel is used to measure how much price deviates from the channel (i. Regression = true - Linear Regression Channel, false - Standard Deviation Channel. ", "mt4 indicator belkhayate ea, belkhayate center of gravity repaints, belkhayate centre of gravity review, belkhayate channel mt4, belkhayate cog for thinkorswim, belkhayate ea v2 download, belkhayate elliott waves free, BELKHAYATE Elliott Waves indicator download, belkhayate elliott waves indicator free ea, belkhayate elliott waves indicator. It’s the market’s collective wisdom. Standard deviation is a statistical measure of diversity or variability in a data set. hi, Can anyone help me to to write a thinkscript to plot the weekly standard deviation? chart time frame per bar less than weekly. 
The code draws three plots that show the standard deviation for the close price of the current symbol on the defined period. Thinkorswim is a desktop trading platform that is free for all TD Ameritrade customers. but it gives an idea. The following image is a comparision of TPR and BOLL. Average True Range (ATR). Standard deviation is something that is used quite often in statistical calculations. Best thinkorswim studies. png(file = "dnorm. Donchian Channels: A moving average indicator developed by Richard Donchian. Just noticed you posted the same thing. The Sharpe Ratio is a metric that measures the risk adjusted return of a portfolio or strategy. the spread is calculated on the close and when that is daily bars, I cant get the exact price entered when the difference is a standard deviation. Every normal random variable X can be transformed into a z score via the following equation. 94, and downward from the price is$538. Thinkscript class. This statistical analysis tool is normally overlaid on a price chart. What is standard deviation and how is it calculated? Looking for a simple standard deviation meaning? A simple standard deviation explanation is that it is a statistical measurement of how far data is spread out from the mean, or average. The Breadth Thrust indicator is a market momentum indicator. Auto Trend Lines Indicator for ThinkorSwim (Free Download) #Trend Line Plot Is there a script to place linear regression on charts automatically in various Nov 13, 2008 · thinkScript - Linear Regression Indicator - price based colors. Standard Deviation is used as part of other indicators such as Bollinger Bands. 72% last year. Thinkorswim offers many features that active day traders look for including level 2 data, stock scanners, and more. We breakdown the complete Thinkorswim pros and cons. Atr Bands Thinkorswim. All the three plots coincide, forming a single plot. In this tutorial, I will show you how to calculate the standard deviation in Excel (using simple formulas). CHAPTER 15 MIDAS and Standard Deviation Bands ( Andrew Coles). ChannelL = DeepSkyBlue - color of the Low channel. I have installed Thinkorswim. I use these bands in both my investing and trading as I even run it on the mutual funds in my 401(k). The middle line is an exponential moving average (EMA) of the price. Ppo indicator thinkorswim Ppo indicator thinkorswim. suppose i have 20 rose bushes in my garden and the number of roses on each bush are as follows. Because calculating the standard deviation involves many steps, in most cases you have a computer calculate it for you. 49%, and rAB = -1. Instead of using standard deviation, the Keltner Channels use Average True Range (ATR) to set channel distance. By generating two sets of Bollinger bands - one set using the parameter of "1 standard deviation" and the other using the typical setting of "2 standard deviation" - we can look at price in a whole new way. Low standard deviation means data are clustered around the mean, and high standard deviation indicates data are more spread out. The TSX index market rally is overdone. If you want to download thinkorswim trading software then read this article & get official download link. # Create a sequence of probability values incrementing by 0. You have plotted mean± 1 standard error (S. This causes the channels to be computed using the entire viewable area of the chart. The standard deviation is a summary measure of the differences of each observation from the mean. 
In finance, standard deviation acts as a way of gauging volatility. Best thinkorswim studies. Finding the sample standard deviation is an essential skill for any student using statistics, but it's easy to learn exactly what you need to do with your data. Thinkorswim strategies Thinkorswim strategies. This simple tool will calculate the variance and standard deviation of a set of data. A standard deviation setting of 1 will result in bands containing 68% of the volume. Stats Standard Deviation. 99 by increments of 0. Compare online fees, trading platforms, stock broker 2020 TradeStation versus thinkorswim - which is better? Compare online fees, trading platforms, stock The business news channel streams in standard definition. Index performance for Bloomberg Commodity Index (BCOM) including value, chart, profile & other market data. ThinkorSwim has their own default scans you can use. One line would be plotted +2 standard deviations above it and the other line would be plotted -2 standard deviations below. There is no standard deviation in TPR band calculation. Related Trading ArticlesBollinger band indicator strategy for intraday trading (in hindi) OPEN AN ACCOUNT WITH US , FOLLOW THIS LINK intraday trading strategies || Bollinger bands crude oil strategy || Strategy 1 || crude oil price Bollinger Bands® … Continue reading Jackpot Strategy of. Given the Keltner channels use average true range, the bands are less reactive to price relative to a standard deviation based envelope like Bollinger bands. They are easy to use and help traders to know whether to "buy the dips" or "sell th. When the bands contract, it tells us that volatility is LOW. These are based on standard deviations, and usable for our purposes. Related Trading ArticlesBollinger band indicator strategy for intraday trading (in hindi) OPEN AN ACCOUNT WITH US , FOLLOW THIS LINK intraday trading strategies || Bollinger bands crude oil strategy || Strategy 1 || crude oil price Bollinger Bands® … Continue reading Jackpot Strategy of. Visually identify a stable trend on the chart and fit standard deviation channels by dragging your mouse over the selected time period. 2% of price data. Post 008: "Are Channels Dynamic?"points out that ERRCs, TW-Channels, and all the other channels I use are definitely not Dynamic. title: nasdaq stocks 17446 symbols aaae aaa energy inc aaagy altana ag ads aaalf aareal bank ag aaaof aaa auto group n. Do the same with the divisor: Calculate V[21] as the sum of volume for the same 21 day period as in 3. com,Forex Factory,Deep discount electronic access broker offering online trading of Stocks, Options, Futures, Forex, Bonds. It works on longer term charts, such as 15 minute to daily charts. Slothtoss - tossing up random projects. Where t is the Average Type. A guide on the standard deviation including when and how to use the standard deviation and examples of its use. Use timetotrade to set up trading rules and receive alerts to your email or mobile phone as soon as your RSI investment conditions are met. STI Singapore's news are extracted from worldwide news agencies, search engines, financial stocks websites, companies reports and etc related to stocks. What chart/studies are people using on TD AMERITRADE (thinkorswim) ? The previous research I did led me to price channels, which I’ve been using relatively successfully, but am curious what other tools are available on those platforms. 760 views5 year ago. 
That’s because the width of the Bollinger Bands is based on the standard deviation, which is more volatile than the Average True Range. Bollinger Bands use 2 parameters, Period and Standard Deviations, StdDev. 5 / 5 ( 2 votes ) NinjaTrader Volatility Indicator Typically in technical analysis, traders will use the Standard Deviation as a measure of volatility. VWAP Standard Deviation Bands for ThinkorSwim: Indicators: 6: Dec 17, 2019: Auto Volatility Standard Deviation Levels for ThinkorSwim: Custom: 19: Jul 30, 2020: D: Add alerts to standard deviation study: Questions: 9: Jun 11, 2020: A: Standard deviation from linear regression curve? Questions: 3: May 29, 2020: W: Scan for stocks crossing the. The standard deviation of any SND always = 1. Metatrader 4 as well as Metatrader 5 are really working with this forex indicator. Free members have streaming access to the last seven days of public recordings. What chart/studies are people using on TD AMERITRADE (thinkorswim) ? The previous research I did led me to price channels, which I’ve been using relatively successfully, but am curious what other tools are available on those platforms. Take the stdev of all the closes of that period (using yahoo finance data) you get a standard deviation of 3. He also discussed it in managing basic option positions and using price spikes as triggers to enter. The standard deviation indicates a "typical" deviation from the mean. x <- seq(0, 1, by = 0. This function computes the standard deviation of the values in x. Because standard deviation measures volatility, the ~ s widen during volatile markets and contract during calmer periods. The lines are spaced x number of standard deviations above and below the Linear Regression Trendline. Various Custom Indicators for ThinkorSwim (TDA). Properties. Standard deviation channels are plotted at a set number of standard deviations around a linear regression line. Enter data values delimited with commas (e. We breakdown the complete Thinkorswim pros and cons. With options think in probabilities. 0 Watchers392 Page Views0 Deviations. 5 min read | Charting thinkorswim Charts That Rule the World: Become a Charting Ninja June 1, 2018 4:00 AM | Chesley Spencer. Standard deviation tells you, on average, how far off most people's scores were from SAT standard deviation is calculated so that 68% of students score within one standard deviation of the mean, 95% of students score within two. Forex woodies cci thinkorswim January 9, 2020 by proforexsignals Our custom developed Forex woodies cci thinkorswim The CCI was originally developed to spot long-term trend changes but has been adapted by traders for use on all markets or time frames. This is the currently selected item. Standard Deviation - Example. The channel follows thedirection of moving average. The outer channel is two standard deviations away and comprises 95% of all prices for the designated period. low dispersion or deviation). - The period for the standard deviation can be different from the period for the moving average. We recognize 2 kinds of volatility: historical volatility and implied volatility. I have used these well on thinkorswim. 9% during all periods. 260 is used on this site to annualize. Can you assist in helping me get these on NT8?. We will discuss the …. Best of all, it only takes a few seconds!. In most technical analysis charting software, you’ll find a linear regression channel drawing tool. Search the world's information, including webpages, images, videos and more. 
The Value Area bands represent the prices between which a certain percent of the volume was traded. Typically move more than 5% per day, based on a 50-day average—you can use any timeframe you want, but a 50-day average or more will help you find stocks that have moved significantly, and with regularity, over an extended time frame. The StDev1 plot is based on the built-in function, the StDev2 and StDev3 plots are based on its thinkScript® implementation, using two different mechanisms of calculation. ADX Indicators (33) Alert Indicators (76) Arrow Indicators (22) ATR Indicators (33) Band Indicators (55) Bar Indicators (58) Bear Indicators (7) Breakout Indicators (19) Bull Indicators (7) Candle Indicators (25) CCI Indicators (83) Channel Indicators (65) Chart Indicators (29) Clock Indicators (9) Crossover Indicators (30) Dashboard Indicators. One line would be plotted +2 standard deviations above it and the other line would be plotted -2 standard deviations below. ThinkOrSwim Live Realtime is the best platform for analyzing NYSE and NASDAQ stock markets, futures contracts and options CME, ETFs, FOREX. Vwap settings thinkorswim Vwap settings thinkorswim. Bollinger Bands are envelopes plotted at a standard deviation level above and below a simple moving average of the price. Background In Jeff Augen's Volatility edge, he often used a standard deviation plot to look for spikes. Good luck learning how to trade. They are easy to use and help traders to know whether to "buy the dips" or "sell th. Introduction. If most datapoints are close to the average, the SD will be low (i. If the data represents the entire population. ** Custom Standard Deviation Channel ** Available until "Automatically Plots Regression and 12 Deviation Levels" 7of9 % COMPLETE $29 ** Custom Daily MA's with Daily. Volume Indicator — Check out the trading ideas, strategies, opinions, analytics at absolutely no cost! — Indicators and Signals. Choose StandDevChannel from the S-S(1) menu. Thinkorswim Alert TTM Trend The TTM Trend is a licensed Study listed in Thinkorswim under the category name John Carter. - The period for the standard deviation can be different from the period for the moving average. MotiveWave's Trading Software is broker-neutral and equips active and professional traders with a leading edge trading platform for analysis of stocks, equities, futures and forex. The 2 Standard Deviation move (a HUGE gap) produced the following statistics for this study, even smaller probabilities:. This chart shows a standard deviation channel which is a default Thinkorswim indicator: Attached Image (click to enlarge) The white dashed line is the mean, the green line is the 2nd standard deviation below and the red line is the 2nd standard deviation above. Regression Channel. Dark pool indicator thinkorswim. The Breadth Thrust is calculated by dividing a 10-day exponential moving average of the number of advancing issues, by the number of advancing plus declining issues.$\sqrt{E It is easier to think about the confidence interval of a SD. e: how many standard deviations). About Price Momentum Oscillator. Choose the mean as 2 and standard deviation as 3. Chart Describer pulls up whatever technical indicators its analysis reveals as key in the current action. Keltner Channel uses Average True Range and Bollinger Bands uses Standard Deviation. A low standard deviation indicates that data points are generally close A high standard deviation indicates greater variability in data points, or higher dispersion from the mean. 
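The Bollinger Band construction described repeatedly on this page (a simple moving average with bands offset by a multiple of the rolling standard deviation) looks like this in a minimal pandas sketch; the 20-period / 2-deviation defaults are assumptions, and this is not the thinkscript study itself.

```python
# Minimal sketch of Bollinger Bands: SMA middle line, bands at +/- k rolling standard deviations.
# 'close' is assumed to be a pandas Series of closing prices; names are illustrative.
import pandas as pd

def bollinger_bands(close: pd.Series, period: int = 20, num_dev: float = 2.0) -> pd.DataFrame:
    middle = close.rolling(period).mean()        # simple moving average
    dev = close.rolling(period).std(ddof=0)      # population-style rolling standard deviation
    return pd.DataFrame({
        "middle": middle,
        "upper": middle + num_dev * dev,
        "lower": middle - num_dev * dev,
    })
```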
3% of the Gaussian or normal distribution of prices. The upper and lower bands are then calculated by adding and subtracting a multiple of the Standard Deviation (used to measure volatility) from the center of the channel. When the bands widen, it tells us that volatility is HIGH. More Advanced Statistics. We start our 2020 Thinkorswim review with broker commissions on most popular investment products. The task implementation should use the most natural programming style of those listed for the function in the implementation language. Donchian Channel Indicator (Description, Installation, Configuration) Practical Heiken-Ashi Indicator - Trend Trade Intraday Forex Charts; Ichimoku Kinko Hyo Indicator – An All-Round Tool For Finding And Catching Profitable Trades; Williams Alligator MT4 Indicator – Know If There Is A Trend Or Not. Standard deviation tells you, on average, how far off most people's scores were from SAT standard deviation is calculated so that 68% of students score within one standard deviation of the mean, 95% of students score within two. Click the “New Study” button 4. Figure 1: standard deviation channel. If that doesn't suit you, our users have ranked more than 50 alternatives to ProRealTime and nine of them are available for Linux so hopefully you can find a suitable replacement. Fundamental analysis for momentum traders is easier with the new Fundamentals page for thinkorswim®. See more ideas about Commitment of traders, Spread trading, Investing. Standard Deviation Channel Lines "Standard Deviation Channel Lines" EA draws Standard Deviation Channel Lines on chart and trades with its trend,has Trailing Stop Loss &Take Profit works with all time frames major forex pairs and stocks NASDAQ. Also, I have utilized the Thinkorswim platform (by TD Ameritrade) in my. The abnormal return CAR(1) has a mean of 0. Standard deviation is a number used to tell how measurements for a group are spread out from the average (mean or expected value). This causes the channels to be computed using the entire viewable area of the chart. The standard deviation is a statistic that measures the dispersion of a dataset relative to its mean and is calculated as the square root of the variance. Windows 8, Windows 10, Windows Server 2012 or later. mtf bressert. They can be usefully applied to swing trading (as well as for detecting changes in momentum). 57 based on June expiration, the hedged bond portfolio would have the same maximum loss while the bond. In this tutorial, I will show you how to calculate the standard deviation in Excel (using simple formulas). Bird’s Eye View. The standard deviation indicates a "typical" deviation from the mean. Edit Subject. Should we get an actual crash where IEF would drop 2-standard deviations to the price of 94. It's through this internet site that one can study Standard Deviation Channels forex indicator thoroughly. Cannot locate a plug-in for NT. free download zigzag indicator metatrader4 for android. Usually, we are interested in the standard deviation of a population. Now, imagine the same curve with the same percentages. Its symbol is σ (the greek letter sigma). Automatic Regression Channel V2; DAT ASB Indicator; DAT MA Indicator; SFX MA on ATR Indicator; DAT WPR Indicator; DT RSI Sig Indicator; COG RSI Indicator; Anything Indicator; QQE Indicator; AKF Indicator; SDA V 3. Choose the mean as 2. E-Bullion System rozlicze elektronicznych E-Bullion zarejestrowany jest w Panamie, binary options trading robot free account. 
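The "68% within one standard deviation" statements above are usually expressed through z-scores: how many standard deviations an observation sits from the mean (subtract the mean, divide by the standard deviation). A minimal sketch with toy data:

```python
# Minimal z-score sketch: how many standard deviations each value sits from the mean.
import numpy as np

values = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])   # illustrative data
z_scores = (values - values.mean()) / values.std()             # population standard deviation here
print(z_scores)
```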
If you don’t want to see one year’s move, but a shorter time frame like one month or one week, the Analyze tab on the thinkorswim platform has a couple tools that let you do that quickly. Because calculating the standard deviation involves many steps, in most cases you have a computer calculate it for you. - Donchian channels (High/Low Breakouts) - Bollinger Bands (Standard Deviation Breakouts) Other techniques exist. An extension of this trend line will project the reversal which will come at point 5. png(file = "dnorm. (Top Leaderboard, Sidebar Top Square, Sidebar Bottom Skyscraper). The inner channel represents one standard deviation and contains 68. Rex oscillator thinkorswim Rex oscillator thinkorswim. But that's just for starts. Why should we care about variance and standard deviation? Well for all of your data, you will inevitably have variance in machine learning. Point and figure (P&F) is a charting technique used in technical analysis. Keltner Channel uses Average True Range and Bollinger Bands uses Standard Deviation. In this video tutorial, viewers learn how to calculate the standard deviation of a data set. The trend is flat when the channelmoves sideways. Vol is simply quoted as an annualized standard deviation of price returns ex. I use the standard deviation channel to find entry and exit. Looking for 1 and 2 standard Deviation lines or labels based on current price. mtf bressert. com C/O Derived Data LLC PMB #610 2801 Centerville Road, 1st Floor Wilmington, Delaware 19808. ThinkorSwim, Ameritrade. Standard Deviation Channels. but it gives an idea. The standard deviation is a statistic that measures the dispersion of a dataset relative to its mean and is calculated as the square root of the variance. In his book, The Volatility Edge in Options Trading, Jeff Augen describes a method for recasting absolute price changes in terms of recent volatility using the standard deviation. Calculation. How To Calculate Standard Deviation using Standard Deviation Formula. If the differences themselves were added up, the positive would exactly balance the negative and so their sum would be zero. the spread is calculated on the close and when that is daily bars, I cant get the exact price entered when the difference is a standard deviation. Invest and trade in stocks, ETFs, mutual funds, bonds, options, futures, and forex. The daily chart of the VIX with two standard deviation Bollinger Bands plotted around a 10-day SMA. 91 and add 3. Bollinger Band is an indicator plotted on a price and is 2 standard deviations away from a simple moving average (usually 21 days). Deviation curves properties: This section allows you to add extra parallel lines to the regression channel. See full list on easycators. The channel type that I am referring to is the Raff Channel. There are quite a few types of channel trading techniques that can be applied. Also, I have utilized the Thinkorswim platform (by TD Ameritrade) in my. Our custom developed Forex Alligator Thinkorswim Indicator. 8 percent and 80. The CRA offers generous tax refunds and rebates. Their median is a regression line, which means that (if candle. Standard Deviation Channel is in a DOWNTREND. The maximum deviation between this ideal position and the actual position is called integral non linearity (INL). Pot / CBD Stock Catalyst Swing Trade Alert. Title: Staff Writer: Company: Technical Analysis, Inc. Standard Deviation is a way to measure price volatility by relating a price range to its moving average. 
Release Notes: This Indicator identifies the Demand(DZ) and Supply(SZ) zones automatically DZ ==> Demand exceeds Supply SZ ==> Supply exceeds Demand It only identifies the Zones, One has to do Multiple time Frame analysis to check the Quality and Probability to trade the Zones. It’s trend is incredibly several with standard Elliott Waves due to the fact supplies 3 indicators to produce selection available each time. The formula for the standard deviation of a sample is: where n is the sample size and x-bar is the sample mean. In a normal distribution, 68% of the values fall within 1 standard deviation of the mean. For Poisson distribution power=1 the deviance scales linearly, and for Normal distribution (power=0), quadratically. In technical analysis PMO (Price Momentum Oscillator) is used in the same way as MACD. The Exponential Standard Deviation (ESD) Bands are a volatility-band technical indicator proposed by Vitali Apirine. To calculate the Z-score for an observation, take the raw measurement, subtract the mean, and divide by the standard deviation. The upper and lower bands are then calculated by adding and subtracting a multiple of the Standard Deviation (used to measure volatility) from the center of the channel. com > wiki Explore:images videos games. The standard deviation is literally taking the square root of the variance, nothing more. Standard Deviation Channel. You can set up RSI alerts to notify you, or execute trades, when the RSI is overbought or oversold; if RSI forms a V with a sudden change in direction; combine RSI with other indicators such as the Moving Average to create an alert that will. The number of standard deviations that you want the band placed away from the moving average; The most common values are 2 or 2. Instead of using the standard deviation with Bollinger Bands, they use the ATR or Average True Range. The daily chart with the simple 10ma is what Pairs Trader generally uses for its analyses as the usual duration for our trades is 1-5 days. Standard Deviation is used as part of other indicators such as Bollinger Bands. Standard deviation is a number used to tell how measurements for a group are spread out from the average (mean or expected value). To find it (and others in this article), click the Charts tab in thinkorswim. Have you tried to build your own scans on Thinkorswim but were overwhelmed by all the choices? This video was made for you. Gianluca Hello, is it possible to use this channel in a strategy? i tried to apply in a code but the Nicolas Because this is indeed nothing more of what you describe. By taking into account both the rate of return vs a risk free rate, as well as the standard deviation of the return, it's a way to better measure the long term effectiveness of a strategy or portfolio. StandardDeviationChannels. 01 on a trade. ** Custom Standard Deviation Channel ** Available until "Automatically Plots Regression and 12 Deviation Levels" 7of9 % COMPLETE \$29 ** Custom Daily MA's with Daily. funwiththinkscript - Better Range Finder | This ThinkorSwim indicator is a better range finder that has the goal of setting reasonable (GMT+8) Singapore. Also, take a look at the percentiles to know how many of your data points fall below -0. Commodity Channel Index line below -50 level. Stats Standard Deviation. Standard deviation: Variance: Mean: Calculation. Point and figure (P&F) is a charting technique used in technical analysis. Timeframe: 1-day bars (Daily). 
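One sentence above refers to "the formula for the standard deviation of a sample", but the formula itself did not survive extraction; in the usual notation (sample size $n$, sample mean $\bar{x}$) it is
$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^{2}}.$$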
Channel Challenges for Technical Analysts - 10/12/2017 It has been a long time since I worked on this. ", "mt4 indicator belkhayate ea, belkhayate center of gravity repaints, belkhayate centre of gravity review, belkhayate channel mt4, belkhayate cog for thinkorswim, belkhayate ea v2 download, belkhayate elliott waves free, BELKHAYATE Elliott Waves indicator download, belkhayate elliott waves indicator free ea, belkhayate elliott waves indicator. In think or swim specifically you get: [IMG] This is with a I've already been calculating standard deviation of multiple values related to VWAP but none of them match up to thinkorswim. You can change these parameters if you want to. Bollinger Bands are envelopes plotted at a standard deviation level above and below a simple moving average of the price. And for the Bollinger Bands, we will also use the default setting of 20 periods, however for that standard deviation, we will use 2. When BB and KC widths are the same, the ratio (BBS_Ind)is equal to one (1). In statistics, standard deviation is a unit of measurement that quantifies certain outcomes relative to the average outcome. Example 10 Calculate the mean, variance and standard deviation for the following distribution :Finding Variance and Standard DeviationClass Frequency (fi) Example 10 - Chapter 15 Class 11 Statistics - NCERT Calculate the mean, variance and standard deviation for the following distribution : Finding. Is it good? Stock/option broker costs. The City Manager shall make all appointments, promotions and changesof status of the officers and members of the Police Department inaccordance with the provisions of the Civil Service Law of the state,except as otherwise herein provided. Choose the mean as 2 and standard deviation as 3. They can be used in swing trading and in detecting changes in momentum. Trade Stocks, ETFs, Options Or Futures online. Standard deviation channel on MainKeys. It is one of the measures of dispersion, that is a measure of by how much the values in the data set are likely to differ from the mean. funwiththinkscript - Better Range Finder | This ThinkorSwim indicator is a better range finder that has the goal of setting reasonable (GMT+8) Singapore. Rex oscillator thinkorswim Rex oscillator thinkorswim. Click Save 7. - The period for the standard deviation can be different from the period for the moving average. Translations of the phrase DEVIATION CHANNEL from english to french: the standard deviation channel is redrawn as this manages It is essential that when new data is added, the standard deviation channel is redrawn as this manages risk on your trade as it progresses. While this tends to be a fairly accurate, there are some flaws in the typical calculations which I have addressed in my NinjaTrader Volatility Indicator. They can be used in swing trading and in detecting changes in momentum. Standard deviation channel is built on the basis of linear regression channel. 9 shorter timeframes. Their scores on three IQ components are shown below. Standard deviation is a measure of how much variance there is in a set of numbers compared to the average (mean) of the numbers. A low standard deviation means that most of the numbers are close to the average. zigzag channel breakout indicator mt4 download. Thinkorswim volume. standard deviation channel. It results of a bounded oscillator that can be read as: oscillator is low, expect a volatility explosion in the next periods or the end of the current trend. 
Since the introduction of the native TOS Volume Profile, I've had numerous requests to publish "those lines" that were a part of my original Volume Profile study. The Relative Standard Deviation Calculator is used to calculate the relative standard deviation (RSD) of a set of numbers. But that's just for starts. Thinkorswim strategies Thinkorswim strategies. The standard deviation tells those interpreting the data, how reliable the data is or how much difference there is between the pieces of data by showing how close to the average all of the data is. Traders earn whaling sums of money on trending markets. Channel Challenges for Technical Analysts - 10/12/2017 It has been a long time since I worked on this. days after you enroll. 6 percent, 38. com The Modified TTM Squeeze Indicator is a modification of John Carter's TTM Squeeze volatility indicator, designed to give faster entry points than the original. High-definition charting, built-in indicators and strategies, one-click trading from chart and DOM, high-precision backtesting, brute-force and genetic optimization, automated execution and support for EasyLanguage scripts are all key tools at your disposal. In think or swim specifically you get: [IMG] This is with a I've already been calculating standard deviation of multiple values related to VWAP but none of them match up to thinkorswim. pooled standard deviation. com, home of the Volatility Box, the most robust ThinkOrSwim indicator based on statistical models built for large institutions and hedge funds. # Enhanced Standard Deviation Bands by Horserider 9/21/2019 # Two standard TD Ameritrade IP Company, Inc. At best, the market is. The Value Area bands represent the prices between which a certain percent of the volume was traded. Martin Zweig. CMSIS is delivered in CMSIS-Pack format which enables fast software delivery, simplifies updates, and enables consistent integration into development tools. I added color coding to make it easier for me to see when the ADX trend strength is getting stronger or weaker. Standard deviation is a measure of how much variance there is in a set of numbers compared to the average (mean) of the numbers. The channel set typically two Average True Range values above and below the 20-day EMA. I have installed Thinkorswim. Historically, differentials above one standard deviation have preceded 12-month forward SPX average price drops of 2. title: nasdaq stocks 17446 symbols aaae aaa energy inc aaagy altana ag ads aaalf aareal bank ag aaaof aaa auto group n. By Tim Leonard 07 September 2020 Deciding which stocks will make a good addition to your portfolio is challenging, but with the help of a free stock screener, your options can quickly become much clearer. CHAPTER 16 Nominal-On Balance Volume Curves (N-OBVs) and Volume-On Balance Curves (V-OBVs) ( Andrew Coles). Standard deviation is something that is used quite often in statistical calculations. ” People Searched For: Heiken Ashi Candlestick Oscillator script for thinkorswim. The Parameters dialog allows to control the array the channel is based upon, number of periods used for calculation, position and width of the channel. ThinkOrSwim Script for Plotting Deviation and Value Area I know some people over here are still using the draw tool to plot their areas, but I combined bombafetts and some other guy who scripted the value area (sorry forgot the username) so that all you have to do is input five values each day and TOS will do the rest. 
This code creates an indicator that plots the ratio of BB width to KC width. Standard deviation channels are a great way to identify the market trend. [1] X Research source Once you know what numbers and equations to use, calculating standard deviation is simple! Steps. fx zigzag non repaint free download. pooled standard deviation. Standard Deviation is used in statistics and probability theory and is represented by the Greek letter sigma (σ). 001677 and 0. The standard deviation of a particular stock can be quantified by examining the implied volatility of the stock's options. They can be used in swing trading and in detecting changes in momentum. Here is a sample coding solution showing how to code Standard Deviation based channel. I added color coding to make it easier for me to see when the ADX trend strength is getting stronger or weaker. In today’s lesson, we will discuss another important type of trading channel known as the Linear Regression Channel. Like with other Python packages, we can install these requirements with pip. Find the latest Amazon. All the three plots coincide, forming a single plot. Standard Deviation and Variance Calculator. Welles Wilder Volatility. This indicator will draw two Standard Deviation Channels based on Fibonacci 0. Extend the channel lines two standard deviations away from each side of the line As the regression channel is formed using standard deviation as a volatility measure, you’ll find some traders referring to it a standard deviation channel. Volume rate of change thinkorswim 15 years ago, I moved to Pune for my higher education. Super Trader Partners 8. Performance bumped back up to 9. ” People Searched For: Heiken Ashi Candlestick Oscillator script for thinkorswim. In the Analyze tab, go to Risk Profile.
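As a rough illustration of the arithmetic behind these bands — a minimal sketch, not code from thinkorswim or any other platform; the price series, 5-bar window and 2-standard-deviation multiplier are made-up example values:

import java.util.Arrays;

public class BollingerSketch {
    // Mean of a window of prices.
    static double mean(double[] w) {
        return Arrays.stream(w).average().orElse(0.0);
    }

    // Population standard deviation of a window of prices.
    static double stdDev(double[] w) {
        double m = mean(w);
        double sumSq = 0.0;
        for (double x : w) {
            sumSq += (x - m) * (x - m);
        }
        return Math.sqrt(sumSq / w.length);
    }

    public static void main(String[] args) {
        double[] closes = {101.2, 102.5, 101.8, 103.1, 104.0, 103.6, 105.2, 104.8};
        int window = 5;
        double k = 2.0; // band multiplier in standard deviations

        for (int i = window - 1; i < closes.length; i++) {
            double[] w = Arrays.copyOfRange(closes, i - window + 1, i + 1);
            double sma = mean(w);
            double sd = stdDev(w);
            System.out.printf("bar %d: SMA=%.2f upper=%.2f lower=%.2f%n",
                    i, sma, sma + k * sd, sma - k * sd);
        }
    }
}

A charting platform performs the same arithmetic on every bar; only the window length and the multiplier change.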
|
2020-10-20 16:41:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.508359968662262, "perplexity": 2467.1357597194246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874026.22/warc/CC-MAIN-20201020162922-20201020192922-00096.warc.gz"}
|
https://www.jamiebalfour.scot/articles/posts/inheritance-in-object-oriented-programming-languages/
|
# Inheritance in object oriented programming languages
## Abstract
This article discusses how inheritance is a very useful way of preventing the copying and pasting of code, and demonstrates how to use it in Visual Basic .NET and Java. Inheritance, described as simply as possible, is a way of reusing code without cluttering a class with copies of already implemented methods. The difference is that with inheritance you do not need to rewrite the code at all: all of the methods and fields are inherited from the mother (parent) class.
## Public, private and protected
Better known as public, private and friend in VB.NET, these are the modifiers, or as some call them, the "access modifiers". They are designed to show how much of a class is exposed to the outside world. From the lowest tier to the highest: private exposes nothing — users can read about the member in documentation, and the IDE will know it is there if it is explicitly typed into the editor, but it will point out that the member is private. Protected (friend in VB.NET) is the next stage: it is exposed to other classes in the same package or project, and it behaves like private apart from the fact that it is visible within that package or project. Public is totally in the open: the signature, parameters, values and so on are all accessible and may be modified.
## Inheriting a structure or class
Why would you want to inherit a class? The answer is simple. It saves rewriting complex methods and we can even extend this by adding more methods. For instance, take a look at a simple real-life inheritance tree.
In this tree, Cat and Rabbit inherit from Mammal, which also inherits from Animal.
With a structure like this, we can implement an application using inheritance. If we take a look at Animal at the top (i.e. the superclass or mother class) we can see that it has 2 methods, namely; "Act" and "Eat". Both of these are then inherited by Mammal. But on top of the default methods of the Animal, Mammal defines several fields, specifically "Warm blooded" and "Hair". As such, this inheritance goes on to Cat and Rabbit, who then both add their own fields and methods.
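A minimal Java sketch of that hierarchy might look like the following; the method bodies and the extra Rabbit and Cat members are invented here purely for illustration, since the tree above only names the methods and fields.

class Animal {
    void act() { System.out.println("The animal acts."); }
    void eat() { System.out.println("The animal eats."); }
}

class Mammal extends Animal {
    // Fields added on top of everything inherited from Animal.
    boolean warmBlooded = true;
    boolean hasHair = true;
}

class Rabbit extends Mammal {
    // Rabbit inherits act(), eat(), warmBlooded and hasHair without rewriting them.
    void hop() { System.out.println("The rabbit hops."); }
}

class Cat extends Mammal {
    void purr() { System.out.println("The cat purrs."); }
}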
A rabbit is an animal, but an animal is not necessarily a rabbit. This section is very important. Before you start to declare the following:
Rabbit r = new Animal();
you should note that this will not work: the declared type, Rabbit, has more methods and fields than Animal does, so calls on those extra members would have nothing to work with on the underlying Animal object. The compiler will refuse to compile this assignment anyway. Conversely, this will work:
Animal a = new Rabbit();
To see an example, look at the VB.NET tutorial here and select the article on inheritance.
Posted by jamiebalfour04 in Technology
oops
object
oriented
programming
|
2022-05-18 15:50:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19916261732578278, "perplexity": 1232.9382494954505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522284.20/warc/CC-MAIN-20220518151003-20220518181003-00270.warc.gz"}
|
https://www.physicsforums.com/threads/infinitesimal-surface-volume.508647/
|
# Infinitesimal surface / volume
1. Jun 21, 2011
### w.shockley
When I develop integrals and change coordinates (Cartesian -> polar, for example), I always forget how to write the infinitesimal surface or volume.
Is there a sort of rule to derive it?
I mean, an intuitive way to remember it, not the mathematical derivation.
(Another thing: I have the same problem with the Prosthaphaeresis formulas.)
Last edited: Jun 21, 2011
2. Jun 21, 2011
### HallsofIvy
Staff Emeritus
What do you mean "infinitesimal surface or volume"? The differential of surface or volume?
I believe the "rule" you are referring to is the Jacobian determinant.
If u= f(x,y,z), v= g(x,y,z), w= h(x,y,z).
Then the differential of volume, is given by
$$dudvdw= \left|\begin{array}{ccc}\frac{\partial u}{\partial x} & \frac{\partial v}{\partial x} & \frac{\partial w}{\partial x} \\ \frac{\partial u}{\partial y} & \frac{\partial v}{\partial y} & \frac{\partial w}{\partial y} \\ \frac{\partial u}{\partial z} & \frac{\partial v}{\partial z} & \frac{\partial w}{\partial z}\end{array}\right|dxdydz$$
3. Jun 21, 2011
### micromass
Staff Emeritus
Fixed LaTeX in Halls post...
4. Jun 21, 2011
### HallsofIvy
Staff Emeritus
Thanks. For some reason couldn't get it to work. What was wrong?
5. Jun 21, 2011
### micromass
Staff Emeritus
The very last \frac was of \frac{\partial w}. But you forgot a denominator
6. Jun 21, 2011
### I like Serena
The intuitive way to do it, is to look at the infinitesimal surface (or volume) and find out how long the sides of it are.
Typically all regular coordinate systems have an orthonormal basis at any point, which makes the infinitesimal surface a rectangle (or cube).
In cartesian an infinitesimal volume is:
$$dV = dx \ dy \ dz$$
In spherical we have sides:
$$dr, \ r d\theta, \ r \sin \theta \ d\phi$$
giving you an infinitesimal volume of:
$$dV = r^2 \sin \theta \ dr \ d\theta \ d\phi$$
If you look at it like this, in general you don't need the Jacobian (which will yield exactly this result).
The boundaries of the integral are always just the boundaries, such that you integrate over the entire surface (or volume).
Note that the needed absolute value of the Jacobian in spherical coordinates is:
$$|J| = |r^2 \sin \theta| = r^2 \sin \theta$$
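For completeness — this is the standard textbook check rather than part of the thread — plugging the spherical map x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ into the Jacobian gives the same volume element:
$$dx\,dy\,dz=\left|\begin{array}{ccc}\sin\theta\cos\phi & r\cos\theta\cos\phi & -r\sin\theta\sin\phi \\ \sin\theta\sin\phi & r\cos\theta\sin\phi & r\sin\theta\cos\phi \\ \cos\theta & -r\sin\theta & 0\end{array}\right|\,dr\,d\theta\,d\phi = r^2\sin\theta\,dr\,d\theta\,d\phi$$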
|
2017-11-21 12:51:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9591223001480103, "perplexity": 2865.332334804335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806353.62/warc/CC-MAIN-20171121113222-20171121133222-00686.warc.gz"}
|
https://www.idiap.ch/software/bob/docs/bob/docs/stable/bob.db.biosecure/README.html
|
# BioSecure Database Interface for Bob¶
This package is part of the signal-processing and machine learning toolbox Bob and it contains an interface for the evaluation protocol of the BioSecure_ database, particularly for the images from the BioSecure_ database. It is worth noting that this package does not contain the original BioSecure_ data files, which need to be obtained through the link above.
## Installation¶
Complete Bob’s installation instructions. Then, to install this package, run:
\$ conda install bob.db.biosecure
## Contact¶
For questions or reporting issues to this software package, contact our development mailing list.
|
2019-06-16 10:45:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32678359746932983, "perplexity": 3891.748897472889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998100.52/warc/CC-MAIN-20190616102719-20190616124719-00098.warc.gz"}
|
https://nl.mathworks.com/matlabcentral/answers/479737-minimization-of-a-function-using-fmincon-with-no-constraints-vs-using-fminbnd
|
# Minimization of a function using fmincon with no constraints vs. using fminbnd
18 views (last 30 days)
Schmidt_33 on 10 Sep 2019
Answered: Swatantra Mahato on 28 Aug 2020
I am trying to minimize an equation F(r), using both 'fmincon' and 'fminbnd'.
By using 'fminbnd', I got certain r values optimizing F(r) (r=r_opt) and had no special problem.
Trying to minimize the function with constraints, I used 'fmincon'. I added a constraint function [c,ceq] = heightconst(r), with nonlinear inequality constraints c(r)<=0, but it appeared to return solutions that do not agree well with the expected ones.
Therefore, in order to isolate the problem, I used 'fmincon', but this time using no constraints at all, relying on the following syntax:
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
with A = [], b = [], Aeq =[], beq = [], and appropriate values of lb, ub.
I expected to get similar values of r, such as the ones I got while using unconstrained minimization ('fminbnd'), but I did not. Instead, I got the very same values that were produced using the syntax:
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
i.e. the same values as those produced by the minimization with the non-linear inequality constraint.
Is there any explanation for this?
Attached is the script I used:
function single_bnd_con_comp
%Parameters for the system:
Rgc=100;
kappa=20;
a = linspace(0.0000001,4*pi*Rgc^2,200);
kappa_t = 50;
r0=0;
theta_0 = 0.5*pi;
lambda = 5;
%__________________________________________________________________________________________________________
%options for minimization functions:
opts = optimset('MaxIter',2e10,'MaxFunEvals',2e10,'Display','off');
options = optimoptions(@fmincon,'MaxIter',2e10,'MaxFunEvals', 2e10,'StepTolerance',1e-20,'Display','off');
%__________________________________________________________________________________________________________
%Preallocation of arrays:
r_opt_mat=zeros(length(kappa_t),length(a));
height_mat=zeros(length(kappa_t),length(a));
zmax_mat=zeros(length(kappa_t),length(a));
const_r_opt_mat=zeros(length(kappa_t),length(a));
const_height_mat=zeros(length(kappa_t),length(a));
%__________________________________________________________________________________________________________
function [c,ceq] = heightconst(r)% constraint function.
z_max_func=(Rgc+sqrt(Rgc^2-r^2));%
z_func=sqrt(A/pi-r^2);
c = z_func-z_max_func;%non linear inequality.
ceq = [];
end
%__________________________________________________________________________________________________________
% nested_f - A function used to minimize F(r).
%Returns normzlied variables which minimize F(r).
function [norm_r_opt,z,const_norm_r_opt,const_z,z_max_const]=nested_f(lambda, A , k_t , theta_0)
%The function to be minimized:
F=@(r,lambda, A , k_t , theta_0) 2.*pi.*(lambda/kappa).*r -8*pi*(pi*r^2 /A - 1)+(k_t/kappa)*pi*r*(acos(2*pi*r.^2 /(A)-1)-theta_0).^2;
func=@(r)F(r,lambda, A , k_t , theta_0);% function handle (in single variable).
%__________________________________________________________________________________________________________
r_disc=sqrt(A/pi);
%Setting lower and upper boundaries:
lb=r0;%an initial point satisfying all the constraints.
if r_disc<Rgc
ub=r_disc;
else
ub=Rgc;
end
%Minimization - without any constraints:
r_opt = fminbnd(func,lb,ub,opts);
z = sqrt(A/pi-r_opt^2);
%__________________________________________________________________________________________________________
%Minimization with constraints:
%There are no linear constraints so A,b, Aeq and beq are null arrays:
G = [];
b = [];
Geq = [];
beq = [];
nonlcon=@heightconst;
const_r_opt = fmincon(func,r0,G,b,Geq,beq,lb,ub,nonlcon,options);
const_z = sqrt(A/pi-const_r_opt^2);
z_max_const=Rgc+sqrt(Rgc^2-const_r_opt^2);
%normalized variables:
const_norm_r_opt=const_r_opt/r_disc;
norm_r_opt=r_opt/r_disc;
end
%__________________________________________________________________________________________________________
% Finding solutions for F(r) for each A:
for i=[1:1:length(a)]% an column index for the area.
k_t = kappa_t;
A =a(i);
%Calling the minimization function:
[norm_r_opt,z,const_norm_r_opt,const_z,z_max_const]=nested_f(lambda, A , k_t , theta_0);
%Assigning outputs of the non-constrained function:
height_mat(i)=z;
r_opt_mat(i)=norm_r_opt;%Calculate the free energy, using indices of each vector as described.
%Assigning outputs of the constrained function:
const_height_mat(i)=const_z;
zmax_mat(i)=z_max_const;
const_r_opt_mat(i)=const_norm_r_opt;%Calculate the free energy, using indices of each vector as described.
end
%__________________________________________________________________________________________________________
norm_a=a./(pi*Rgc^2);
%Plot z_opt:
figure(1)
plot(norm_a,zmax_mat,'+',norm_a,height_mat,norm_a,const_height_mat);
xlabel('A (\pi*r_{gc}^2 [nm^{2}])','FontWeight','bold');
ylabel('z [nm]','FontWeight','bold');
legend('z_{max}','z_{unconst.}','z_{const}');
title({...
['\fontsize{15} z'];...
['\fontsize{12}\color{black} \kappa_{m}=\color{red}' num2str(kappa) '\color{black} [kT] ,',...
' \color{black}\lambda=\color{red}' num2str(lambda),'\color{black} [kT/nm] ,',...
' \color{black}\theta_{0}=\color{red}' num2str(theta_0/pi),'\pi',...
]})
%Plot r_opt:
figure(2)
plot(norm_a,const_r_opt_mat,norm_a,r_opt_mat);
xlabel('A (\pi*r_{gc}^2 [nm^{2}])','FontWeight','bold');
ylabel('r_{opt} [nm]','FontWeight','bold');
legend('r_{opt}^{const.}','r_{opt}^{unconst.}');
title({...
['\fontsize{15} r_{opt}'];...
['\fontsize{12}\color{black} \kappa_{m}=\color{red}' num2str(kappa) '\color{black} [kT] ,',...
' \color{black}\lambda=\color{red}' num2str(lambda),'\color{black} [kT/nm] ,',...
' \color{black}\theta_{0}=\color{red}' num2str(theta_0/pi),'\pi',...
]})
end
Swatantra Mahato on 28 Aug 2020
Hi,
Kindly refer to the limitations of the fmincon function.
In particular, the first-order derivatives of both the objective and the constraint functions must be continuous.
It may be helpful to look into the Symbolic Math Toolbox to calculate the derivative of a function.
The points of discontinuity of a function f of the symbolic variable x can be calculated as
feval(symengine,'discont',f,x)
Additionally, I observed that varying r0 and lb gives different results for fmincon with and without constraints.
For example,
nonlcon=@heightconst; %with constraints
%nonlcon=[]; %without constraints
const_r_opt = fmincon(func,1e-7,G,b,Geq,beq,-Inf,ub,nonlcon,options);
gives different plots for const_r_opt_mat based on the value of nonlcon.
Hope this helps
|
2021-06-14 12:31:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8243855237960815, "perplexity": 10544.71723173769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612154.24/warc/CC-MAIN-20210614105241-20210614135241-00444.warc.gz"}
|
https://acm.ecnu.edu.cn/problem/2555/
|
# 2555. Trainsorting
Erin is an engineer. She drives trains. She also arranges the cars within each train. She prefers to put the cars in decreasing order of weight, with the heaviest car at the front of the train.
Unfortunately, sorting train cars is not easy. One cannot simply pick up a car and place it somewhere else. It is impractical to insert a car within an existing train. A car may only be added to the beginning and end of the train.
Cars arrive at the train station in a predetermined order. When each car arrives, Erin can add it to the beginning or end of her train, or refuse to add it at all. The resulting train should be as long as possible, but the cars within it must be ordered by weight.
Given the weights of the cars in the order in which they arrive, what is the longest train that Erin can make?
### Input format
The first line contains an integer 0 <= n <= 2000, the number of cars. Each of the following n lines contains a non-negative integer giving the weight of a car. No two cars have the same weight.
### Output format
Output a single integer giving the number of cars in the longest train that can be made with the given restrictions.
### Sample
Input
3
1
2
3
Output
3
Solved by 18 users; attempted by 67.
28 submissions accepted out of 226 total.
7.2 EMB reward.
|
2020-10-23 06:14:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25241780281066895, "perplexity": 769.9842160219184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880656.25/warc/CC-MAIN-20201023043931-20201023073931-00358.warc.gz"}
|
https://www.physicsforums.com/threads/vector-space-proof.17378/
|
# Vector space proof
1. Mar 30, 2004
### broegger
How do you prove that if v is an element of V (a vector space), r is a scalar, and rv = 0, then either r = 0 or v = 0? It seems obvious, but I have no idea how to prove it...
2. Mar 30, 2004
### Chen
If v is the vector (a, b, c), then the vector that results from the multiplication rv is (ra, rb, rc). If the result is equal to 0, the zero vector, then (ra, rb, rc) = (0, 0, 0). If we write this formally we get:
$$ra = 0$$
$$rb = 0$$
$$rc = 0$$
The solution is that either r equals 0, or a, b, and c all equal 0. And if a, b, and c are 0 then the vector v is (0, 0, 0) which is the zero vector.
Is this proof satisfactory? There are probably a lot of ways to prove this.
3. Mar 30, 2004
### matt grime
That proof requires you to pick a basis. If I pick a different basis, do you know that it still holds?
Here's the basis free proof. Suppose rv=0, then if r is zero we are done, if not multiply rv=0 by 1/r and we see v=0.
4. Mar 30, 2004
### Chen
Yes... if you pick your basis at (A, B, C) then the vector v becomes (a - A, b - B, c - C) and the zero vector is now (A, B, C).
$$r(a - A) = A$$
$$r(b - B) = B$$
$$r(c - C) = C$$
For this to be true for r <> 0, a must equal A, b must equal B and c must equal C, and thus the vector v becomes (A, B, C) which is again the zero vector.
5. Mar 30, 2004
### matt grime
No, that isn't how one does a change of basis (of a vector space: the origin isn't fixed.)
6. Mar 30, 2004
### matt grime
It's also dimension dependent. The result is true for every vector space, even those where picking a basis, never mind solving an uncountable set of linear equations requires the axiom of choice.
|
2017-06-23 08:34:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8091435432434082, "perplexity": 575.4917267764316}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320040.36/warc/CC-MAIN-20170623082050-20170623102050-00332.warc.gz"}
|
http://mathhelpforum.com/advanced-statistics/201249-unclear-equation-proof-kallenbergs-foundations-modern-probability-print.html
|
# Unclear equation in a proof in Kallenberg's Foundations of Modern Probability
Equation 15 is meant to define $f^{n}_{k}$, but what exactly goes on on the right hand side of the equation? It looks like a multiplication of the two functions: $\mu_{k+1}\otimes\cdots\otimes\mu_{n}$ and $1_{A_{n}}$ but the "types" don't match, since $(\mu_{k+1}\otimes\cdots\otimes\mu_{n})\in(S_{1} \times\cdots\times S_{k})\times(\mathcal{S}_{k+1}\otimes\cdots\otimes \mathcal{S}_{n})\rightarrow\mathbb{R}$, whereas $1_{A_{n}}\in\mathfrak{F}_n\rightarrow\mathbb{R}(=( S_1\times\cdots\times S_n)\rightarrow\mathbb{R})$
|
2014-09-01 13:48:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653922915458679, "perplexity": 161.98912553524107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535919066.8/warc/CC-MAIN-20140901014519-00248-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/205637/deriving-expression-for-variation-of-action-lagrangian-mechanics
|
Deriving Expression for variation of Action (Lagrangian Mechanics)
I am studying Lagrangian mechanics and I have come across something that I do not understand. Basically the text I am reading skipped steps and I do not know how to get from point A to point B. I believe it is relatively simple.
The text is trying to show that the action is invariant under infinitesimal transformations of the following form.
$$q\rightarrow q+\dot{q}\epsilon \\ t \rightarrow t+\epsilon$$
So the first thing is to write down the variation in the action up to first order.
$$S'-S=\int_{t_1}^{t_2}\left[ \frac{\partial L}{\partial t}\delta t + \frac{\partial L}{\partial q_i}\delta q_i + \frac{\partial L}{\partial \dot{q_i}}\delta \dot{q_i}\right]dt$$
$$S'-S=\int_{t_1}^{t_2}\left[ \frac{\partial L}{\partial t}\epsilon + \frac{\partial L}{\partial q_i}\dot{q}_i \epsilon + \frac{\partial L}{\partial \dot{q_i}}\frac{d}{dt}\left( \dot{q}_i \epsilon \right)\right]dt$$
This next part is what I do not understand. The next line writes the following.
$$S'-S=\int_{t_1}^{t_2}\left[ \frac{\partial L}{\partial t}\epsilon + \left( \frac{\partial L}{\partial \dot{q}_i}\dot{q}_i \right)\frac{d\epsilon}{dt} \right]dt$$
So somehow the last two terms turned into the last term in the previous expression. I have tried all kinds of things (identities, integration by parts, ect..) to obtain this term and I simply cannot figure it out. I know it is probably simple but I am at a loss.
If anyone can help me out I would greatly appreciate it!
• What text are you using? – Javier Sep 7 '15 at 0:54
• I just realized that the word "text" could be misunderstood for a textbook. I tend to say "text" for anything I am reading. It is actually a write up from MIT. Here is the link. Page 3 web.mit.edu/edbert/GR/gr5.pdf – user41178 Sep 7 '15 at 1:00
• I am studying basic Lagrangian mechanics not what is presented in the bulk of that write up. However the content on page 3 is directly related to what I am reading in "Classical Dynamics, a Contemporary Approach." – user41178 Sep 7 '15 at 1:03
If you take a closer look, you will see that the last line in fact says $dL/dt$, not $\partial L/ \partial t$. The derivation would be:
$$\frac{\partial L}{\partial t} \epsilon + \frac{\partial L }{\partial q_i} \dot{q_i} \epsilon + \frac{\partial L}{\partial \dot{q_i}} \frac{d}{dt}(\dot{q_i} \epsilon) \\ = \frac{\partial L}{\partial t} \epsilon + \frac{\partial L }{\partial q_i} \dot{q_i} \epsilon + \frac{\partial L}{\partial \dot{q}_i} \ddot{q_i} \epsilon + \frac{\partial L}{\partial \dot{q_i}} \dot{q_i} \frac{d\epsilon}{dt} \\ = \left( \frac{\partial L}{\partial t} + \frac{\partial L }{\partial q_i} \frac{dq_i}{dt} + \frac{\partial L}{\partial \dot{q}_i} \frac{d\dot{q_i}}{dt} \right) \epsilon + \frac{\partial L}{\partial \dot{q_i}} \dot{q_i} \frac{d\epsilon}{dt} \\ \\ = \frac{dL}{dt}\epsilon + \frac{\partial L}{\partial \dot{q_i}} \dot{q_i} \frac{d\epsilon}{dt}$$
|
2019-11-18 14:14:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099678754806519, "perplexity": 237.0873853893277}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00523.warc.gz"}
|
https://www.scala-algorithms.com/ComputeSumOfDigits/
|
# Scala algorithm: Compute single-digit sum of digits
Published
## Algorithm goal
$$909$$ sums to $$18$$, which sums to $$9$$.
Find the best way to compute it for any positive $$n$$, efficiently.
## Test cases in Scala
assert(sumOfDigits(0) == 0)
assert(sumOfDigits(2) == 2)
assert(sumOfDigits(8) == 8)
assert(sumOfDigits(10) == 1)
assert(sumOfDigits(16) == 7)
assert(sumOfDigits(32) == 5)
assert(sumOfDigits(64) == 1)
assert(sumOfDigits(101) == 2)
assert(sumOfDigits(109) == 1)
assert(sumOfDigits(128) == 2)
assert(sumOfDigits(256) == 4)
assert(sumOfDigits(512) == 8)
assert(sumOfDigits(999) == 9)
assert(sumOfDigits(1024) == 7)
assert(sumOfDigits(2048) == 5)
assert(sumOfDigits(4096) == 1)
## Algorithm in Scala
4 lines of Scala (compatible versions 2.13 & 3.0), showing how concise Scala can be!
## Explanation
This involves a bit of mathematics to figure out - but we can get an $$O(1)$$ solution here:
The repeated sum of digits of a number (its digital root) is determined by that number modulo 9: if we take a number composed of digits $$abcd$$, which equals $$10^3 \times a + 10^2 \times b + 10 \times c + d$$, then reducing modulo 9 gives $$a + b + c + d \pmod 9$$, which is also $$((a + b) \bmod 9 + (c + d) \bmod 9) \bmod 9$$. The only special cases are $$0$$, which maps to $$0$$, and non-zero multiples of $$9$$, which map to $$9$$ rather than $$0$$. Combining with a modulo table, you will notice that indeed, for example, $$8 + 7 = 15$$, and applying modulo 9 to $$15$$ gives us $$6$$, which is precisely the sum of the digits $$1$$ and $$5$$. (this is © from www.scala-algorithms.com)
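The congruence behind this — standard modular arithmetic, spelled out here for completeness — is that every power of ten is congruent to one modulo nine:
$$10 \equiv 1 \pmod 9 \ \Rightarrow\ 10^k \equiv 1 \pmod 9 \ \Rightarrow\ \sum_k d_k \, 10^k \equiv \sum_k d_k \pmod 9$$
So a number and its digit sum leave the same remainder modulo 9, and iterating the digit sum terminates at that remainder (with 9 reported instead of 0 for positive multiples of 9, and 0 only for the input 0).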
2022-01-18 13:12:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24279001355171204, "perplexity": 5512.549361672342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00116.warc.gz"}
|
https://electronics.stackexchange.com/questions/466791/how-to-measure-the-actual-length-of-a-long-twisted-pair/467051
|
How to measure the actual length of a long twisted pair?
Sometimes we might need to know the exact length of the wire pairs inside a cable. One can be interested in calculating voltage drop or some other calculation regarding the exact length.
Because the wires inside the cable are twisted, their actual (electrical) length is greater than the cable jacket length.
Is there any practical method to calculate the actual length?
I couldn't find a duplicate, but if there is I will delete this question.
• If you want to determine volt drop then measure the loop resistance. Nov 11, 2019 at 12:39
• loosely twisted wires will not be "as long" as tightly twisted pairs. I've seen this effect in using a drill to create tight twists, and during the drill's rotation the wire is continually CONTRACTING. Nov 11, 2019 at 13:23
• What method you chose should take into consideration why you need the answer. Don't forget obvious things like looking at building plans. Nov 11, 2019 at 18:49
• @joribama I have submitted a geometric math themed answer that you might find interesting. Nov 13, 2019 at 22:58
One easy and obvious way is to connect the cable at the end and measure the loop resistance with an ohmmeter. Of course you need to know the resistance of the cable.
The better and very precise way is to use a time-domain reflectometer (TDR). This device sends an impulse into the cable which is reflected at the (open) end. The time of the reflected signal is measured and because of constant wave propagation, the length of the cable is calculated.
• One limitation of using TDR is that you need to know the cable's relative permittivity in order to use the proper propagation speed used in the length calculation. Any inaccuracy in the permittivity will result in an inaccuracy in the length estimation, albeit to a smaller degree due to the fact that the propagation speed is inversely proportional to the square root of the permittivity. Nov 12, 2019 at 6:07
• "Of course you need to know the resistance of the cable." And you also need a very good meter. Cable conductors are usually rather low in resistance, so measuring takes considerable finesse. A Kelvin connection is usually called for. Nov 12, 2019 at 18:49
From a purely geometric perspective you could calculate the length using the helical length equation.
Here H is the length of twisted wire contained in one complete turn, R equals the radius of the turns in the wire — basically from the center of the twisted pair assembly to the center of one of the wires — and P is the axial length of cable over which the wire completes that turn. Unrolling one turn of the helix gives a right triangle with legs P and 2πR, so H = sqrt(P² + (2πR)²).
So if the wire makes a complete rotation around the center in 10 mm, and the distance between the center of the twisted pair and the center of a wire is 1 mm, then were you to untwist the wire and straighten it, each 10 mm of cable would contain sqrt(10² + (2π·1)²) ≈ 11.8 mm of wire.
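A tiny sketch of that arithmetic in Java (the 10 mm pitch and 1 mm radius are just the example numbers from above):

public class HelixLength {
    // Length of wire per one full twist of a helix with the given
    // pitch (axial advance per turn) and radius, both in millimetres.
    static double wirePerTurn(double pitchMm, double radiusMm) {
        double circumference = 2.0 * Math.PI * radiusMm;
        return Math.hypot(pitchMm, circumference);
    }

    public static void main(String[] args) {
        double pitch = 10.0;   // one full rotation every 10 mm of cable
        double radius = 1.0;   // centre of pair to centre of one wire

        double perTurn = wirePerTurn(pitch, radius);
        System.out.printf("Wire per 10 mm of cable: %.2f mm (%.1f%% longer)%n",
                perTurn, (perTurn / pitch - 1.0) * 100.0);
    }
}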
Cut off a 1 meter piece, remove and straighten one of its conductors, and measure the actual length per meter.
Update:
Straighten out one of the conductors and measure it. Say it's 1.05m long, or 5% (made-up number - I've no idea whether it's realistic) longer than the cable it came out of. Apply that extra 5% to the length of your cable to get the length of the conductors inside.
• Could you please expand the answer? Nov 12, 2019 at 16:16
I used the 2 probes of an oscilloscope to measure the propagation delay and found 16 ns for a 1.8 m long (external) Ethernet cable (and 26 ns for 3 m). Either vp = 1.1e8 m/s (not the expected 2/3 c0 ≈ 2e8 m/s, https://en.wikipedia.org/wiki/Velocity_factor) or the cables are (2/1.1 − 1) × 100 ≈ 80% longer. I was not expecting such a large difference.
|
2022-06-26 12:06:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5286327600479126, "perplexity": 766.5910272991479}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00776.warc.gz"}
|
https://math.stackexchange.com/questions/2777920/likelihood-ratio-test-of-two-sample-normal-distributed-with-known-meanswhich-is
|
Likelihood ratio test of two sample normal distributed with known means(which is 0) and unknown variance
Let $X_1,X_2,\cdots ,X_n$ be a random sample from $N(0,\theta_1)$ and let $Y_1,Y_2,\cdots ,Y_m$ be a random sample from $N(0,\theta_2)$. Determine $\lambda$, the likelihood ratio, for testing $H_0 : \theta_1=\theta_2$ against $H_1 : \theta_1 \neq \theta_2$. What F statistic is used in this test?
• I have already shown that $\lambda = (\frac{n+m}{\sum(x_i^2)+\sum(y_i^2)})^{\frac{n+m}{2}}$ $\times$ $\frac{\sum(x_i^2)^{n/2} \sum(y_i^2)^{m/2}}{n^{\frac{n}{2}}m^{\frac{m}{2}}}$ but I am having trouble determining what F statistic is used in this test. Can I assume that $\frac{\sum(x_i^2)}{\theta_1}$ is chi-square(n)? – cavvot May 12 '18 at 13:36
• and $\frac{\sum(x_i^2)}{\theta_1}$ divided by $\frac{\sum(y_i^2)}{\theta_1}$ is F(n, m)? – cavvot May 12 '18 at 13:41
|
2019-10-17 00:12:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336738348007202, "perplexity": 390.1726822341461}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00441.warc.gz"}
|
https://www.physicsforums.com/threads/continuous-mappings-part-2.111010/
|
# Continuous Mappings Part 2
1. Feb 16, 2006
### JasonRox
Can there exist a continuous function from the real numbers to the rationals?
Where we are considering the usual topology for the real numbers and the relative topology for the rationals with respect to the topology of the real numbers.
At first my intuition said no. Quickly going through random functions in my head, I couldn't create one.
Is my intuition right?
I've been thinking about a contradiction if a continuous function did exist, but can't find one that seals the deal.
Can someone atleast confirm or correct my intuition without giving a solution?
Hopefully I get a solution before tomorrow night. I'll have time at work to think about it some more, and try to use different approaches. I'll try looking at invariants and what not.
2. Feb 17, 2006
### AKG
I suppose you mean a continuous function from the reals onto the rationals. The answer is "no." What are some important topological properties that the reals have and that the rationals don't?
3. Feb 17, 2006
### JasonRox
Yeah, I got it using the limit point definition of continuity.
I'm reading two different books where one uses open sets as the main focus and the other using limit points.
I used the idea that atleast one of the points in the rationals will have an inverse image of an uncountable number of elements in R. It must exist. And since that one point is closed, then the inverse image must also be closed by definition of continuity. Then I went from there.
4. Feb 17, 2006
### AKG
Where did you go from there? I was thinking about using connectedness.
5. Feb 17, 2006
### matt grime
Yep, me too, connectedness seemed to be the important thing, nay the only thing that sprang to mind, and I too would like to see the sequential proof.
6. Feb 17, 2006
### JasonRox
Here is the proof, if it's valid.
Now assume a continuous function does exist.
We know the inverse image of an element in Q (call it p) will map to an uncountable number of elements.
Since f is continuous and p is closed, then f^-1(p) is closed. Now, f^-1(p) has uncountable many elements, and it is closed (call it set S in R). Now, we know that S will contain a limit point call it x. By definition of continuity, if x is a limit point of S, then f(x) is a limit point of f. But f(x)=p and f={p}, but p isn't a limit point to the set {p}, which contradicts the fact that f is continuous.
Note: If it's not making any sense, let me know. I'll write it more formally.
It could be false, but it seems fine.
Note: I guess what is left to prove is that a set with uncountable many elements in R has atleast one limit point. Now, if the set (call it S) did not have a limit point, then for some interval for every point in S, there is a distance e such that an interval about that point where it does not contain a point in S. Take the minimum e amongst all the elements. So, all the elements are separated by atleast that much. That makes it countable now, contradicting the fact that it was uncountable.
Last edited: Feb 18, 2006
7. Feb 18, 2006
### cogito²
Hey Jason about your proof that there is a limit point. You must take into account that the points may repeat. In fact you may have points repeating for an entire sequence without having limit points. For example
1,1,2,2,3,3,4,4,...
Then your distance between points would be 0, but you'd have no limit point. Also you wouldn't be able to throw away any finite number of points to make your process work.
What you're trying to prove though is still true. Remember all you need to prove is that there is some bounded subset of the real numbers which contains an infinite (countable suffices) number of elements from f^-1(p). Consider if there is no such bounded subset (ie ever bounded subset contains a finite number of elements from f^-1(p)) and what that would mean about f^-1(p).
8. Feb 18, 2006
### matt grime
Sorry it's not. In fact your proof indicates that there can never be a continuous map from the reals to any one point set, which surely you can see is wrong.
this is unnecessary, just let f be a map and show f cannot be continuous, there is no need to go for a contradiction here.
p is a limit point, the only one, of {p}, sequences are allowed to be constant. I think you're confusing this with subsequences where you can't pick the same element of the sequence repeatedly, ie x_1,x_1,x_1,... is not a subsequence of x_1,x_2,x_3,.....
Note also how you say '{p} is closed' and then that {p} has no limit points. I thought your preferred definition of closed was that it contained all its limit points.
Last edited: Feb 18, 2006
9. Feb 18, 2006
### JasonRox
Yeah, big mistake.
Something else now. Using connectedness mentionned earlier.
By construction of the topology of the rationals, it will contain open and closed sets. This is what I "see".
Let a and b be irrational.
The interval $(a,b) \cap Q$ is an open set in Q call it S.
The interval $[a,b] \cap Q$ is a closed set in Q, which is precisely the set S from above. Therefore, S is closed and open.
Is that right?
That means Q is not connected.
Last edited: Feb 18, 2006
10. Feb 18, 2006
### JasonRox
I'm not getting what you're saying here.
11. Feb 18, 2006
### matt grime
Q is not connected because, well, it isn't. The open sets S={ s in Q : s<sqrt(2)} and T={t in Q :t> sqrt(2)} disconnect it. As do uncountably many other choices of irrational number.
12. Feb 18, 2006
### JasonRox
Yes, but is my example valid?
13. Feb 19, 2006
### matt grime
What do you think? Have you produced two open non-empty disjoint sets?
14. Feb 19, 2006
### cogito²
You said to take "e" as being the minimum width of the intervals. I'm saying that that distance can be 0 even without having a limit point. Then you can't use that to conclude it's countable.
15. Feb 19, 2006
### matt grime
One more thing: you can't take the 'minimum of the e' since that will not exist. the inf will, but it is not guaranteed to be any of the e and there is nothing to suppose that the inf is not zero.
16. Feb 19, 2006
### JasonRox
Finding an open and closed set within the set other than Q and the null set is basically finding two open sets that are disjoint.
So, I would say yes I did find two open sets that are disjoint.
17. Feb 19, 2006
### JasonRox
I see what's going on now.
Good thing I posted the question and "proof".
18. Feb 21, 2006
### JasonRox
At first this got me confused, but in fact I am right.
{p} does not have a limit point.
Sequences can be constant, but cannot "finish" as a constant at the point p.
An equivalent statement is that every neighbourhood of p has a point in the set A other than p. For the set {p}, that is impossible.
http://mathworld.wolfram.com/LimitPoint.html
19. Feb 21, 2006
### JasonRox
I almost forgot.
As an added note, any finite subset of a T1 space does not contain any limit points.
20. Feb 21, 2006
### mathwonk
do you know the intermediate value theorem? it says that the continuous image of any interval is another interval. so a continuous map from the interval of all reals to the rationals would have as its image an interval of the reals which contains only rationals. what are such intervals?
http://stackoverflow.com/questions/10157972/in-linux-how-can-an-user-space-program-uses-the-kernel-function-i-really-need
# In Linux, how can a user-space program use kernel functions? I really need some inspiration
I am a beginner at kernel programming and just need some inspiration. I know I can write some functions in the kernel source, rebuild, and reboot the kernel; the code might be a hardware driver controlling some device. But how can a user-space program use those functions? I know that through syscalls a user-space program can communicate with kernel space, and that a loadable kernel module can also use the functions defined in the kernel source code. But how can a user program achieve this?
PS: I am currently learning qemu-kvm. I know qemu is a user-space program and kvm lives in the kernel. I just want to figure out how the qemu program uses kvm.
I know this is a very basic Linux kernel programming question, but it has confused me for a long time. Can someone give me a hint? :>
Means of kernel/userspace communication aside from syscalls are the /proc filesystem and device files in /dev. – Niklas B. Apr 14 '12 at 22:48
I guess that qemu-kvm uses netlink to communicate kernel <=> user space. – strkol Apr 14 '12 at 22:49
@strkol: What's netlink? – Niklas B. Apr 14 '12 at 22:50
Think about it as a socket communication between kernel and user space. Check people.ee.ethz.ch/~arkeller/linux/… for more information. – strkol Apr 14 '12 at 22:52
@strkol: Thanks for the link and the explanation :) – Niklas B. Apr 14 '12 at 22:56
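The comments above cover the general mechanisms (syscalls, the /proc filesystem, device files in /dev, netlink sockets). For the qemu/kvm case specifically, the kvm module exposes the character device /dev/kvm, and a user-space program drives it with ioctl() calls (plus mmap() for guest memory); qemu is essentially a large user-space program built around those ioctls. A minimal sketch that only opens the device and queries the API version; it assumes the kvm module is loaded and that you have permission to open /dev/kvm:

```c
/* Minimal user-space <-> kernel example: talk to the kvm module
 * through its device file using the ioctl() system call.        */
#include <fcntl.h>      /* open, O_RDWR */
#include <stdio.h>      /* printf, perror */
#include <sys/ioctl.h>  /* ioctl */
#include <unistd.h>     /* close */
#include <linux/kvm.h>  /* KVM_GET_API_VERSION */

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);   /* device file created by the kvm module */
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);  /* request handled in kernel space */
    if (version < 0) {
        perror("KVM_GET_API_VERSION");
        close(kvm);
        return 1;
    }

    printf("KVM API version: %d\n", version);  /* expected to print 12 on current kernels */
    close(kvm);
    return 0;
}
```

Creating an actual VM continues in the same style (KVM_CREATE_VM, KVM_CREATE_VCPU and KVM_RUN are further ioctls on the descriptors returned), which is exactly the pattern the comments describe: the kernel exports an interface (here a device file plus ioctl numbers) and user space reaches it through syscalls.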
http://forums.udacity.com/questions/1023253/unit-5-video-5-path-smoothing-are-smoothing-equations-right
# [closed] Unit 5 video 5 Path Smoothing - are smoothing equations right?
In the video the smoothing equations are shown (as screenshots, not reproduced here). However, in the code Andy gives for the solution (and code people have posted in the forum), the altered values are being added: notice those newpath += expressions. Is the video wrong, then? Should those equations be:

yi = yi + alpha*(xi - yi)
yi = yi + beta*(yi+1 + yi-1 - 2*yi)

Notice the + and not minus signs!

asked 19 Mar '12, 08:50 bhrgunatha
### The question has been closed for the following reason "Problem is outdated" by bhrgunatha 373 19 Mar '12, 10:12
Yes, if you watch the video to the end, there is an addendum where Andy corrects those and gives a hint on the solution. That is to say: the correct version is with the + sign.
answered 19 Mar '12, 08:53 Anna-Chiara
Yes, and at the end of that video Andy said exactly that: that there are errors, and what the correct formulas are. How did you manage to miss that? :-)
answered 19 Mar '12, 08:54 Gundega ♦♦
I download the videos from YouTube so I can watch them offline. The version I downloaded this morning didn't have Andy's comments; they must have re-uploaded that video. (19 Mar '12, 09:47) bhrgunatha
@bhrgunatha Aah, that explains it! They must have fixed it at the last moment (or actually after that). Can you close the question, as it's not relevant any more, please? (19 Mar '12, 09:49) Gundega ♦♦
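For reference, the corrected update (the version with the + signs, as confirmed in the answers above), written for an interior point $y_i$ of the smoothed path, with $x_i$ the corresponding point of the original path, is:

$$y_i \leftarrow y_i + \alpha\,(x_i - y_i), \qquad y_i \leftarrow y_i + \beta\,(y_{i+1} + y_{i-1} - 2\,y_i)$$

Both updates are applied to every interior point and repeated (hence the newpath += ... lines in the code) until the total change in a pass falls below some tolerance; the endpoints are normally held fixed so the smoothed path keeps the same start and goal. The tolerance and the fixed endpoints are the usual convention for this kind of exercise rather than something stated in this thread.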
http://www.basiccarpentrytechniques.com/Technology%204/The%20Sewerage%20of%20Sea%20Coast%20Towns/The%20Sewerage%20of%20Sea%20Coast%20Towns.html
THE SEWERAGE OF SEA COAST TOWNS
BY HENRY C. ADAMS

CONTENTS
CHAPTER I. THE FORMATION OF TIDES AND CURRENTS
II. OBSERVATIONS OF THE RISE AND FALL OF TIDES
III. CURRENT OBSERVATIONS
IV. SELECTION OF SITE FOR OUTFALL SEWER
V. VOLUME OF SEWAGE
VI. GAUGING FLOW IN SEWERS
VII. RAINFALL
VIII. STORM WATER IN SEWERS
IX. WIND AND WINDMILLS
X. THE DESIGN OF SEA OUTFALLS
XI. ACTION OF SEA WATER ON CEMENT
XII. DIVING
XIII. THE DISCHARGE OF SEA OUTFALL SEWERS
XIV. TRIGONOMETRICAL SURVEYING
XV. HYDROGRAPHICAL SURVEYING

PREFACE. These notes are intended primarily for those engineers who, having a general knowledge of sewerage, are called upon to prepare a scheme for a sea coast town, or are desirous of being able to meet such a call when made. Although many details of the subject have been dealt with separately in other volumes, the writer has a very vivid recollection of the difficulties he experienced in collecting the knowledge he required when he was first called on to prepare such a scheme, particularly with regard to taking and recording current and tidal observations, and it is in the hope that it might be helpful to others in a similar difficulty to have all the information then obtained, and that subsequently gained on other schemes, brought together within a small compass that this book has been written. 60, Queen Victoria St., London, E.C.

CHAPTER I. THE FORMATION OF TIDES AND CURRENTS.

It has often been stated that no two well-designed sewerage schemes are alike, and although this truism is usually applied to inland towns, it applies with far greater force to schemes for coastal towns and towns situated on the banks of our large rivers where the sewage is discharged into tidal waters. The essence of good designing is that every detail shall be carefully thought out with a view to meeting the special conditions of the case to the best advantage, and at the least possible expense, so that the maximum efficiency is combined with the minimum cost. It will therefore be desirable to consider the main conditions governing the design of schemes for sea-coast towns before describing a few typical cases of sea outfalls. Starting with the postulate that it is essential for the sewage to be effectually and permanently disposed of when it is discharged into tidal waters, we find that this result is largely dependent on the nature of the currents, which in their turn depend upon the rise and fall of the tide, caused chiefly by the attraction of the moon, but also to a less extent by the attraction of the sun. The subject of sewage disposal in tidal waters, therefore, divides itself naturally into two parts: first, the consideration of the tides and currents; and, secondly, the design of the works.

The tidal attraction is primarily due to the natural effect of gravity, whereby the attraction between two bodies is in direct proportion to the product of their respective masses and in inverse proportion to the square of their distance apart; but as the tide-producing effect of the sun and moon is a differential attraction, and not a direct one, their relative effect is inversely as the cube of their distances. The mass of the sun is about 324,000 times as great as that of the earth, and it is about 93 millions of miles away, while the mass of the moon is about 1-80th of that of the earth, but it averages only 240,000 miles away, varying between 220,000 miles when it is said to be in perigee, and 260,000 when in apogee.
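As a rough check (my working, using the round figures just quoted, and writing M and d for the masses and mean distances), the inverse-cube rule reproduces the proportion of 1 to 0.445 between the moon's and the sun's tide-producing effects cited in the next paragraph:

$$\frac{\text{moon's effect}}{\text{sun's effect}} = \frac{M_{\text{moon}}}{M_{\text{sun}}}\left(\frac{d_{\text{sun}}}{d_{\text{moon}}}\right)^{3} \approx \frac{1/80}{324{,}000} \times \left(\frac{93{,}000{,}000}{240{,}000}\right)^{3} \approx 2.25,$$

that is, the sun's effect is roughly 1/2.25, or about 0.445, of the moon's.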
The resultant effect of each of these bodies is a strong "pull" of the earth towards them, that of the moon being in excess of that of the sun as 1 is to 0.445, because, although its mass is much less than that of the sun, it is considerably nearer to the earth. About one-third of the surface of the globe is occupied by land, and the remaining two-thirds by water. The latter, being a mobile substance, is affected by this pull, which results in a banking up of the water in the form of the crest of a tidal wave. It has been asserted in recent years that this tidal action also takes place in a similar manner in the crust of the earth, though in a lesser degree, resulting in a heaving up and down amounting to one foot; but we are only concerned with the action of the sea at present. Now, although this pull is felt in all seas, it is only in the Southern Ocean that a sufficient expanse of water exists for the tidal action to be fully developed. This ocean has an average width of 1,500 miles, and completely encircles the earth on a circumferential line 13,500 miles long; in it the attraction of the sun and moon raises the water nearest to the centre of attraction into a crest which forms high water at that place. At the same time, the water is acted on by the centripetal effect of gravity, which, tending to draw it as near as possible to the centre of the earth, acts in opposition to the attraction of the sun and moon, so that at the sides of the earth 90 degrees away, where the attraction of the sun and moon is less, the centripetal force has more effect, and the water is drawn so as to form the trough of the wave, or low water, at those points. There is also the centrifugal force contained in the revolving globe, which has an equatorial diameter of about 8,000 miles and a circumference of 25,132 miles. As it takes 23 hr. 56 min 4 sec, or, say, twenty-four hours, to make a complete revolution, the surface at the equator travels at a speed of approximately 25,132/24 = 1,047 miles per hour. This centrifugal force is always constant, and tends to throw the water off from the surface of the globe in opposition to the centripetal force, which tends to retain the water in an even layer around the earth. It is asserted, however, as an explanation of the phenomenon which occurs, that the centripetal force acting at any point on the surface of the earth varies inversely as the square of the distance from that point to the moon, so that the centripetal force acting on the water at the side of the earth furthest removed from the moon is less effective than that on the side nearest to the moon, to the extent due to the length of the diameter of the earth. The result of this is that the centrifugal force overbalances the centripetal force, and the water tends to fly off, forming an anti-lunar wave crest at that point approximately equal, and opposite, to the wave crest at the point nearest to the moon. As the earth revolves, the crest of high water of the lunar tide remains opposite the centre of attraction of the sun and moon, so that a point on the surface will be carried from high water towards and past the trough of the wave, or low water, then past the crest of the anti-lunar tide, or high water again, and back to its original position under the moon. But while the earth is revolving the moon has traveled 13 degrees along the elliptical orbit in which she revolves around the earth, from west to east, once in 27 days 7 hr. 
43 min, so that the earth has to make a fraction over a complete revolution before the same point is brought under the centre of attraction again This occupies on an average 52 min, so that, although we are taught that the tide regularly ebbs and flows twice in twenty-four hours, it will be seen that the tidal day averages 24 hr. 52 min, the high water of each tide in the Southern Ocean being at 12 hr. 26 min intervals. As a matter of fact, the tidal day varies from 24 hr. 35 min at new and full moon to 25 hr. 25 min at the quarters. Although the moon revolves around the earth in approximately 27-1/3 days, the earth has moved 27 degrees on its elliptical orbit around the sun, which it completes once in 365± days, so that the period which elapses before the moon again occupies the same relative position to the sun is 29 days 12 hr. 43 min, which is the time occupied by the moon in completing her phases, and is known as a lunar month or a lunation. Considered from the point of view of a person on the earth, this primary tidal wave constantly travels round the Southern Ocean at a speed of 13,500 miles in 24 hr. 52 min, thus having a velocity of 543 miles per hour, and measuring a length of 13,500/2 = 6,750 miles from crest to crest. If a map of the world be examined it will be noticed that there are three large oceans branching off the Southern Ocean, namely, the Atlantic, Pacific, and Indian Oceans; and although there is the same tendency for the formation of tides in these oceans, they are too restricted for any very material tidal action to take place. As the crest of the primary tidal wave in its journey round the world passes these oceans, the surface of the water is raised in them, which results in secondary or derivative tidal waves being sent through each ocean to the furthermost parts of the globe; and as the trough of the primary wave passes the same points the surface of the water is lowered, and a reverse action takes place, so that the derivative waves oscillate backwards and forwards in the branch oceans, the complete cycle occupying on the average 12 hr. 26 min Every variation of the tides in the Southern Ocean is accurately reproduced in every sea connected with it. Wave motion consists only in a vertical movement of the particles of water by which a crest and trough is formed alternately, the crest being as much above the normal horizontal line as the trough is below it; and in the tidal waves this motion extends through the whole depth of the water from the surface to the bottom, but there is no horizontal movement except of form. The late Mr. J. Scott Russell described it as the transference of motion without the transference of matter; of form without the substance; of force without the agent. The action produced by the sun and moon jointly is practically the resultant of the effects which each would produce separately, and as the net tide-producing effect of the moon is to raise a crest of water 1.4 ft above the trough, and that of the sun is 0.6 ft (being in the proportion of I to 0.445), when the two forces are acting in conjunction a wave 1.4 + 0.6 = 2 ft high is produced in the Southern Ocean, and when acting in opposition a wave 1.4 - 0.6 = 0.8 ft high is formed. 
As the derivative wave, consisting of the large mass of water set in motion by the comparatively small rise and fall of the primary wave, is propagated through the branch oceans, it is affected by many circumstances, such as the continual variation in width between the opposite shores, the alterations in the depth of the channels, and the irregularity of the coast line. When obstruction occurs, as, for example, in the Bristol Channel, where there is a gradually rising bed with a converging channel, the velocity, and/or the amount of rise and fall of the derivative wave is increased to an enormous extent; in other places where the oceans widen out, the rise and/or velocity is diminished, and similarly where a narrow channel occurs between two pieces of land an increase in the velocity of the wave will take place, forming a race in that locality. Although the laws governing the production of tides are well understood, the irregularities in the depths of the oceans and the outlines of the coast, the geographical distribution of the water over the face of the globe and the position and declivity of the shores greatly modify the movements of the tides and give rise to so many complications that no general formulae can be used to give the time or height of the tides at any place by calculation alone. The average rate of travel and the course of the flood tide of the derivative waves around the shores of Great Britain are as follows:--150 miles per hour from Land's End to Lundy Island; 90 miles per hour from Lundy to St. David's Head; 22 miles per hour from St. David's Head to Holy head; 45-1/2 miles per hour from Holyhead to Solway Firth; 194 miles per hour from the North of Ireland to the North of Scotland; 52 miles per hour from the North of Scotland to the Wash; 20 miles per hour from the Wash to Yarmouth; 10 miles per hour from Yarmouth to Harwich. Along the south coast from Land's End to Beachy Head the average velocity is 40 miles per hour, the rate reducing as the wave approaches Dover, in the vicinity of which the tidal waves from the two different directions meet, one arriving approximately twelve hours later than the other, thus forming tides which are a result of the amalgamation of the two waves. On the ebb tide the direction of the waves is reversed. The mobility of the water around the earth causes it to be very sensitive to the varying attraction of the sun and moon, due to the alterations from time to time in the relative positions of the three bodies. Fig. [Footnote: Plate I] shows diagrammatically the condition of the water in the Southern Ocean when the sun and moon are in the positions occupied at the time of new moon. The tide at A is due to the sum of the attractions of the sun and moon less the effect due to the excess of the centripetal force over centrifugal force. The tide at C is due to the excess of the centrifugal force over the centripetal force. These tides are known as "spring" tides. Fig. 2 [Footnote: Plate I] shows the positions occupied at the time of full moon. The tide at A is due to the attraction of the sun plus the effect due to the excess of the centrifugal force over the centripetal force. The tide at C is due to the attraction of the moon less the effect due to the excess of the centripetal force over centrifugal force. These tides are also known as "spring" tides. Fig. 
3 [Footnote: Plate I] shows the positions occupied when the moon is in the first quarter; the position at the third quarter being similar, except that the moon would then be on the side of the earth nearest to B, The tide at A is compounded of high water of the solar tide superimposed upon low water of the lunar tide, so that the sea is at a higher level than in the case of the low water of spring tides. The tide at D is due to the attraction of the moon less the excess of centripetal force over centrifugal force, and the tide at B is due to the excess of centrifugal force over centripetal force. These are known as "neap" tides, and, as the sun is acting in opposition to the moon, the height of high water is considerably less than at the time of spring tides. The tides are continually varying between these extremes according to the alterations in the attracting forces, but the joint high tide lies nearer to the crest of the lunar than of the solar tide. It is obvious that, if the attracting force of the sun and moon were equal, the height of spring tides would be double that due to each body separately, and that there would be no variation in the height of the sea at the time of neap tides. It will now be of interest to consider the minor movements of the sun and moon, as they also affect the tides by reason of the alterations they cause in the attractive force. During the revolution of the earth round the sun the successive positions of the point on the earth which is nearest to the sun will form a diagonal line across the equator. At the vernal equinox (March 20) the equator is vertically under the sun, which then declines to the south until the summer solstice (June 21), when it reaches its maximum south declination. It then moves northwards, passing vertically over the equator again at the autumnal equinox (September 21), and reaches its maximum northern declination on the winter solstice (December 21). The declination varies from about 24 degrees above to 24 degrees below the equator. The sun is nearest to the Southern Ocean, where the tides are generated, when it is in its southern declination, and furthest away when in the north, but the sun is actually nearest to the earth on December 31 (perihelion) and furthest away on July I (aphelion), the difference between the maximum and minimum distance being one-thirtieth of the whole. The moon travels in a similar diagonal direction around the earth, varying between 18-1/2 degrees and 28-1/2 degreed above and below the equator. The change from north to south declination takes place every fourteen days, but these changes do not necessarily take place at the change in the phases of the moon. When the moon is south of the equator, she is nearer to the Southern Ocean, where the tides are generated. The new moon is nearest to the sun, and crosses the meridian at midday, while the full moon crosses it at midnight. The height of the afternoon tide varies from that of the morning tide; sometimes one is the higher and sometimes the other, according to the declination of the sun and moon. This is called the "diurnal inequality." The average difference between the night and morning tides is about 5 in on the east coast and about 8in on the west coast. When there is a considerable difference in the height of high water of two consecutive tides, the ebb which follows the higher tide is lower than that following the lower high water, and as a general rule the higher the tide rises the lower it will fall. 
The height of spring tides varies throughout the year, being at a maximum when the sun is over the equator at the equinoxes and at a minimum in June at the summer solstice when the sun is furthest away from the equator. In the Southern Ocean high water of spring tides occurs at mid-day on the meridian of Greenwich and at midnight on the 180° meridian, and is later on the coasts of other seas in proportion to the time taken for the derivative waves to reach them, the tide being about three-fourths of a day later at Land's End and one day and a half later at the mouth of the Thames. The spring tides around the coast of England are four inches higher on the average at the time of new moon than at full moon, the average rise being about 15 ft, while the average rise at neaps is 11 ft 6 in. The height from high to low water of spring tides is approximately double that of neap tides, while the maximum height to which spring tides rise is about 33 per cent. more than neaps, taking mean low water of spring tides as the datum. Extraordinarily high tides may be expected when the moon is new or full, and in her position nearest to the earth at the same time as her declination is near the equator, and they will be still further augmented if a strong gale has been blowing for some time in the same direction as the flood tide in the open sea, and then changes when the tide starts to rise, so as to blow straight on to the shore. The pressure of the air also affects the height of tides in so far as an increase will tend to depress the water in one place, and a reduction of pressure will facilitate its rising elsewhere, so that if there is a steep gradient in the barometrical pressure falling in the same direction as the flood tide the tides will be higher.

As exemplifying the effect of violent gales in the Atlantic on the tides of the Bristol Channel, the following extract from "The Surveyor, Engineer, and Architect" of 1840, dealing with observations taken on Mr. Bunt's self-registering tide gauge at Hotwell House, Clifton, may be of interest.

Date,            Times of High Water.            Difference in
Jan. 1840.     Tide Gauge.     Tide Table.       Tide Table.
                  H.M.            H.M.
27th, p.m.        0. 8            0. 7           1 min earlier.
28th, a.m.        0.47            0.34           13 min earlier.
28th, p.m.       11.41            1. 7           86 min later.
29th, a.m.        1.29            1.47           18 min later.
29th, p.m.        2.32            2.30           2 min earlier.

Although the times of the tides varied so considerably, their heights were exactly as predicted in the tide-table. The records during a storm on October 29, 1838, gave an entirely different result, as the time was retarded only ten or twelve minutes, but the height was increased by 8 ft. On another occasion the tide at Liverpool was increased 7 ft by a gale. The Bristol Channel holds the record for the greatest tide experienced around the shores of Great Britain, which occurred at Chepstow in 1883, and had a rise of 48 ft 6 in. The configuration of the Bristol Channel is, of course, conducive to large tides, but abnormally high tides do not generally occur on our shores more frequently than perhaps once in ten years, the last one occurring in the early part of 1904, although there may be many extra high ones during this period of ten years from on-shore gales.
Where tides approach a place from different directions there may be an interval between the times of arrival, which results in there being two periods of high and low water, as at Southampton, where the tides approach from each side of the Isle of Wight. The hour at which high water occurs at any place on the coast at the time of new or full moon is known as the establishment of that place, and when this, together with the height to which the tide rises above low water is ascertained by actual observation, it is possible with the aid of the nautical almanack to make calculations which will foretell the time and height of the daily tides at that place for all future time. By means of a tide-predicting machine, invented by Lord Kelvin, the tides for a whole year can be calculated in from three to four hours. This machine is fully described in the Minutes of Proceedings, Inst.C.E., Vol. LXV. The age of the tide at any place is the period of time between new or full moon and the occurrence of spring tides at that place. The range of a tide is the height between high and low water of that tide, and the rise of a tide is the height between high water of that tide and the mean low water level of spring tides. It follows, therefore, that for spring tides the range and rise are synonymous terms, but at neap tides the range is the total height between high and low water, while the rise is the difference between high water of the neap tide and the mean low water level of spring tides. Neither the total time occupied by the flood and ebb tides nor the rate of the rise and fall are equal, except in the open sea, where there are fewer disturbing conditions. In restricted areas of water the ebb lasts longer than the flood. Although the published tide-tables give much detailed information, it only applies to certain representative ports, and even then it is only correct in calm weather and with a very steady wind, so that in the majority of cases the engineer must take his own observations to obtain the necessary local information to guide him in the design of the works. It is impracticable for these observations to be continued over the lengthy period necessary to obtain the fullest and most accurate results, but, premising a general knowledge of the natural phenomena which affect the tides, as briefly described herein, he will be able to gauge the effect of the various disturbing causes, and interpret the records he obtains so as to arrive at a tolerably accurate estimate of what may be expected under any particular circumstances. Generally about 25 per cent. of the tides in a year are directly affected by the wind, etc., the majority varying from 6 in to 12 in in height and from five to fifteen minutes in time. The effect of a moderately stiff gale is approximately to raise a tide as many inches as it might be expected to rise in feet under normal conditions. The Liverpool tide-tables are based on observations spread over ten years, and even longer periods have been adopted in other places. Much valuable information on this subject is contained in the following books, among others--and the writer is indebted to the various authors for some of the data contained in this and subsequent chapters--"The Tides," by G. H. Darwin, 1886; Baird's Manual of Tidal Observations, 1886; and "Tides and Waves," by W. H. Wheeler, 1906, together with the articles in the "Encyclopaedia Britannica" and "Chambers's Encyclopaedia." Chapter II Observations of the rise and fall of tides. 
The first step in the practical design of the sewage works is to ascertain the level of high and low water of ordinary spring and neap tides and of equinoctial tides, as well as the rate of rise and fall of the various tides. This is done by means of a tide recording instrument similar to Fig. 4, which represents one made by Mr. J. H. Steward, of 457, West Strand, London, W.C. It consists of a drum about 5 in diameter and 10 in high, which revolves by clockwork once in twenty-four hours, the same mechanism also driving a small clock. A diagram paper divided with vertical lines into twenty-four primary spaces for the hours is fastened round the drum and a pen or pencil attached to a slide actuated by a rack or toothed wheel is free to work vertically up and down against the drum. A pinion working in this rack or wheel is connected with a pulley over which a flexible copper wire passes through the bottom of the case containing the gauge to a spherical copper float, 8 inches diameter, which rises and falls with the tide, so that every movement of the tide is reproduced moment by moment upon the chart as it revolves. The instrument is enclosed in an ebonized cabinet, having glazed doors in front and at both sides, giving convenient access to all parts. Inasmuch as the height and the time of the tide vary every day, it is practicable to read three days' tides on one chart, instead of changing it every day. When the diagrams are taken off, the lines representing the water levels should be traced on to a continuous strip of tracing linen, so that the variations can be seen at a glance; extra lines should be drawn on the tracing showing the times at which the changes of the moon occur. Fig. 5 is a reproduction to a small scale of actual records taken over a period of eighteen days, which shows the true appearance of the diagrams when traced on the continuous strip. These observations show very little difference between the spring and neap tides, and are interesting as indicating the unreliability of basing general deductions upon data obtained during a limited period only. At the time of the spring tides at the beginning of June the conditions were not favourable to big tides, as although the moon was approaching her perigee, her declination had nearly reached its northern limit and the declination of the sun was 22° N. The first quarter of the moon coincided very closely with the moon's passage over the equator, so that the neaps would be bigger than usual. At the period of the spring tides, about the middle of June, although the time of full moon corresponded with her southernmost declination, she was approaching her apogee, and the declination of the sun was 23° 16' N., so that the tides would be lower than usual.

In order to ensure accurate observations, the position chosen for the tide gauge should be in deep water in the immediate vicinity of the locus in quo, but so that it is not affected by the waves from passing vessels. Wave motion is most felt where the float is in shallow water. A pier or quay wall will probably be most convenient, but in order to obtain records of the whole range of the tides it is of course necessary that the float should not be left dry at low water. In some instances the float is fixed in a well sunk above high water mark to such a depth that the bottom of it is below the lowest low water level, and a small pipe is then laid under the beach from the well to, and below, low water, so that the water stands continuously in the well at the same level as the sea.
The gauge should be fixed on bearers, about 3 ft 6 in from the floor, in a wooden shed, similar to a watchman's box, but provided with a door, erected on the pier or other site fixed upon for the observations. A hole must be formed in the floor and a galvanized iron or timber tube about 10 in square reaching to below low water level fixed underneath, so that when the float is suspended from the recording instrument it shall hang vertically down the centre of the tube. The shed and tube must of course be fixed securely to withstand wind and waves. The inside of the tube must be free from all projections or floating matter which would interfere with the movements of the float, the bottom should be closed, and about four lin diameter holes should be cleanly formed in the sides near to the bottom for the ingress and egress of the water. With a larger number of holes the wave action will cause the diagram to be very indistinct, and probably lead to incorrectness in determining the actual levels of the tides; and if the tube is considerably larger than the float, the latter will swing laterally and give incorrect readings. A bench mark at some known height above ordnance datum should be set up in the hut, preferably on the top of the tube. At each visit the observer should pull the float wire down a short distance, and allow it to return slowly, thus making a vertical mark on the diagram, and should then measure the actual level of the surface of the water below the bench mark in the hut, so that the water line on the chart can be referred to ordnance datum. He should also note the correct time from his watch, so as to subsequently rectify any inaccuracy in the rate of revolution of the drum. The most suitable period for taking these observations is from about the middle of March to near the end of June, as this will include records of the high spring equinoctial tides and the low "bird" tides of June. A chart similar to Fig. 6 should be prepared from the diagrams, showing the rise and fall of the highest spring tides, the average spring tides, the average neap tides, and the lowest neap tides, which will be found extremely useful in considering the levels of, and the discharge from, the sea outfall pipe. The levels adopted for tide work vary in different ports. Trinity high-water mark is the datum adopted for the Port of London by the Thames Conservancy; it is the level of the lower edge of a stone fixed in the face of the river wall upon the east side of the Hermitage entrance of the London Docks, and is 12 48 ft above Ordnance datum. The Liverpool tide tables give the heights above the Old Dock Sill, which is now non-existent, but the level of it has been carefully preserved near the same position, on a stone built into the western wall of the Canning Half Tide Dock. This level is 40 ft below Ordnance datum. At Bristol the levels are referred to the Old Cumberland Basin (O.C.B.), which is an imaginary line 58 ft below Ordnance datum. It is very desirable that for sewage work all tide levels should be reduced to Ordnance datum. A critical examination of the charts obtained from the tide- recording instruments will show that the mean level of the sea does not agree with the level of Ordnance datum. Ordnance datum is officially described as the assumed mean water level at Liverpool, which was ascertained from observations made by the Ordnance Survey Department in March, 1844, but subsequent records taken in May and June, 1859, by a self-recording gauge on St. 
George's Pier, showed that the true mean level of the sea at Liverpool is 0.068 ft below the assumed level. The general mean level of the sea around the coast of England, as determined by elaborate records taken at 29 places during the years 1859-60, was originally said to be, and is still, officially recognised by the Ordnance Survey Department to be 0.65 ft, or 7.8 in, above Ordnance datum, but included in these 29 stations were 8 at which the records were admitted to be imperfectly taken. If these 8 stations are omitted from the calculations, the true general mean level of the sea would be 0.623 ft, or 7.476 in, above Ordnance datum, or 0.691 ft above the true mean level of the sea at Liverpool. The local mean sea level at various stations around the coast varies from 0.982 ft below the general mean sea level at Plymouth, to 1.260 ft above it at Harwich, the places nearest to the mean being Weymouth (0.089 ft below) and Hull (0.038 ft above). It may be of interest to mention that Ordnance datum for Ireland is the level of low water of spring tides in Dublin Bay, which is 21 ft below a mark on the base of Poolbeg Lighthouse, and 7.46 ft below English Ordnance datum.

The lines of "high and low water mark of ordinary tides" shown upon Ordnance maps represent mean tides; that is, tides halfway between the spring and the neap tides, and are generally surveyed at the fourth tide before new and full moon. The foreshore of tidal water below "mean high water" belongs to the Crown, except in those cases where the rights have been waived by special grants. Mean high water is, strictly speaking, the average height of all high waters, spring and neap, as ascertained over a long period. Mean low water of ordinary spring tides is the datum generally adopted for the soundings on the Admiralty Charts, although it is not universally adhered to; as, for instance, the soundings in Liverpool Bay and the river Mersey are reduced to a datum 20 ft below the old dock sill, which is 125 ft below the level of low water of ordinary spring tides. The datum of each chart varies as regards Ordnance datum, and in the case of charts embracing a large area the datum varies along the coast.

The following table gives the fall during each half-hour of the typical tides shown in Fig. 6 (see page 15), from which it will be seen that the maximum rate occurs at about half-tide, while very little movement takes place during the half-hour before and the half-hour after the turn of the tide:--

Table I. Rate of Fall of Tides.

State of Tide.     Equinoctial   Ordinary        Ordinary      Lowest
                   Tides.        Spring Tides.   Neap Tides.   Neap Tides.
High water            --            --              --            --
1/2 hour after       0.44          0.40            0.22          0.19
1     "    "         0.96          0.80            0.40          0.31
1-1/2 "    "         1.39          1.14            0.68          0.53
2     "    "         1.85          1.56            0.72          0.59
2-1/2 "    "         1.91          1.64            0.84          0.68
3     "    "         1.94          1.66            0.86          0.70
3-1/2 "    "         1.94          1.66            0.86          0.70
4     "    "         1.91          1.64            0.84          0.68
4-1/2 "    "         1.35          1.16            0.59          0.48
5     "    "         1.27          1.09            0.57          0.46
5-1/2 "    "         1.06          0.91            0.47          0.38
6     "    "         1.04          0.89            0.46          0.37
6-1/2 "    "         0.53          0.45            0.24          0.18
Totals            17 ft 6 in    15 ft 0 in       7 ft 9 in     6 ft 3 in

The extent to which the level of high water varies from tide to tide is shown in Fig. 7 [Footnote: Plate III.], which embraces a period of six months, and is compiled from calculated heights without taking account of possible wind disturbances.
The varying differences between the night and morning tides are shown very clearly on this diagram; in some cases the night tide is the higher one, and in others the morning tide; and while at one time each successive tide is higher than the preceding one, at another time the steps showing the set-back of the tide are very marked. During the earlier part of the year the spring-tides at new moon were higher than those at full moon, but towards June the condition became reversed. The influence of the position of the sun and moon on the height of the tide is apparent throughout, but is particularly marked during the exceptionally low spring tides in the early part of June, when the time of new moon practically coincides with the moon in apogee and in its most northerly position furthest removed from the equator.

Inasmuch as the tidal waves themselves have no horizontal motion, it is now necessary to consider by what means the movement of water along the shores is caused. The sea is, of course, subject to the usual law governing the flow of water, whereby it is constantly trying to find its own level. In a tidal wave the height of the crest is so small compared with the length that the surface gradient from crest to trough is practically flat, and does not lead to any appreciable movement; but as the tidal wave approaches within a few miles of the shore, it runs into shallow water, where its progress is checked, but as it is being pushed on from behind it banks up and forms a crest of sufficient height to form a more or less steep gradient, and to induce a horizontal movement of the particles of water throughout the whole depth in the form of a tidal current running parallel with the shore. The rate of this current depends upon the steepness of the gradient, and the momentum acquired will, in some instances, cause the current to continue to run in the same direction for some time after the tide has turned, i.e., after the direction of the gradient has been reversed; so that the tide may be making--or falling--in one direction, while the current is running the opposite way. It will be readily seen, then, that the flow of the current will be slack about the time of high and low water, so that its maximum rate will be at half-ebb and half-flood. If the tide were flowing into an enclosed or semi-enclosed space, the current could not run after the tide turned, and the reversal of both would be simultaneous, unless, indeed, the current turned before the tide.

Wind waves are only movements of the surface of the water, and do not generally extend for a greater depth below the trough of the wave than the crest is above it, but as they may affect the movement of the floating particles of sewage to a considerable extent it is necessary to record the direction and strength of the wind at the time of making any tidal observations. The strength of the wind is sometimes indicated by reference to the Beaufort Scale, which is a graduated classification adopted by Admiral Beaufort about the year 1805. The following table gives the general description, velocity, and pressure of the wind corresponding to the tabular numbers on the scale:--

[Illustration: PLATE III. PERIOD OF SIX MONTHS. To face page 20]

The figures indicating the pressure of the wind in the foregoing table are low compared with those given by other authorities. From Hutton's formula, the pressure against a plane surface normal to the wind would be 0.97 lb per sq.
foot, with an average velocity of 15 miles per hour (22 ft per sec.), compared with 0.67 lb given by Admiral Beaufort, and for a velocity of 50 miles per hour (73.3 ft per sec.) 10.75 lb, compared with 7.7 lb. Smeaton's formula, which is frequently used, gives the pressure as 0.005V^2 (V in miles per hour), so that for 15 miles per hour velocity the pressure would be 1.125 lb, and for 50 miles it would be 12.5 lb. It must not be forgotten, however, that, although over a period of one hour the wind may _average_ this velocity or pressure, it will vary considerably from moment to moment, being far in excess at one time, and practically calm at another. The velocity of the wind is usually taken by a cup anemometer having four 9 in cups on arms 2 ft long. The factor for reducing the records varies from 2 to 3, according to the friction and lubrication, the average being 2.2. The pressure is obtained by multiplying the Beaufort number cubed by 0.0105; and the velocity is found by multiplying the square root of the Beaufort number cubed by 1.87. A tidal wave will traverse the open sea in a straight line, but as it passes along the coast the progress of the line nearest the shore is retarded while the centre part continues at the same velocity, so that on plan the wave assumes a convex shape and the branch waves reaching the shore form an acute angle with the coast line.

CHAPTER III. CURRENT OBSERVATIONS.

There is considerable diversity in the design of floats employed in current observations, dependent to some extent upon whether it is desired to ascertain the direction of the surface drift or of a deep current; it does not by any means follow that they run in the same direction. There is also sometimes considerable difference in the velocity of the current at different depths--the surface current being more susceptible to the influence of wind. A good form of deep float is seen in Fig. 8. It consists of a rod 2 in by 2 in, or 4 sq in, to the lower end of which a hollow wooden box about 6 in by 6 in is fixed, into which pebbles are placed to overcome the buoyancy of the float and cause it to take and maintain an upright position in the water with a length of 9 in of the rod exposed above the surface. A small hole is formed in the top of the box for the insertion of the pebbles, which is stopped up with a cork when the float is adjusted. The length of the rod will vary according to the depth of water, but it will generally be found convenient to employ a float about 10 ft deep, and to have a spare one about 6 ft deep, but otherwise similar in all respects, for use in shallow water. A cheap float for gauging the surface drift can be made from an empty champagne bottle weighted with stones and partly filled with water. The top 12 in of the rods and the cord and neck of the bottle, as the case may be, should be painted red, as this colour renders floats more conspicuous when in the water and gives considerable assistance in locating their position, especially when they are at some distance from the observer. A deep-sea float designed by Mr. G. P. Bidder for ascertaining the set of the currents along the base of the ocean has recently been used by the North Sea Fisheries Investigation Committee. It consists of a bottle shaped like a soda-water bottle, made of strong glass to resist the pressure of the water, and partly filled with water, so that just sufficient air is left in it to cause it to float.
A length of copper wire heavy enough to cause it to sink is then attached to the bottle, which is then dropped into the sea at a defined place. When the end of the wire touches the bottom the bottle is relieved of some of its weight and travels along with the currents a short distance above the bed of the sea. About 20 per cent. of the bottles were recovered, either by being thrown up on the beach or by being fished up in trawl nets. [Illustration: FIG. 8.--DETAIL OF WOOD TIDAL FLOAT 10 FEET DEEP.] A double float, weighing about 10 lb complete, was used for the tidal observations for the Girdleness outfall sewer, Aberdeen. The surface portion consisted of two sheet-iron cups soldered together, making a float 9 in in diameter and 6 in deep. The lower or submerged portion was made of zinc, cylindrical in shape, 16 in diameter and 16 in long, perforated at intervals with lin diameter holes and suspended by means of a brass chain from a swivel formed on the underside of the surface float. In gauging the currents the float is placed in the water at a defined point and allowed to drift, its course being noted and afterwards transferred to a plan. The time of starting should be recorded and observations of its exact position taken regularly at every quarter of an hour, so that the time taken in covering any particular distance is known and the length of travel during any quarter-hour period multiplied by four gives the speed of the current at that time in miles per hour. The method to be employed in ascertaining the exact position of the float from time to time is a matter which requires careful consideration, and is dependent upon the degree of accuracy required according to the importance of the scheme and the situation of neighbouring towns, frequented shores, oyster beds, and other circumstances likely to be injuriously affected by any possible or probable pollution by sewage. One method is to follow the float in a small boat carrying a marine compass which has the card balanced to remain in a horizontal position, irrespective of the tipping and rolling of the boat, and to observe simultaneously the bearing of two prominent landmarks, the position of which on the plan is known, at each of the quarter-hour periods at which the observations are to be taken. This method only gives very approximate results, and after checking the value of the observations made by its use, with contemporary observations taken by means of theodolites on the shore, the writer abandoned the system in favour of the theodolite method, which, however, requires a larger staff, and is therefore more expensive. In every case it is necessary to employ a boat to follow the float, not only so as to recover it at the end of each day's work, but principally to assist in approximately locating the float, which can then be found more readily when searching through the telescope of the theodolite. The boat should be kept about 10 ft to 20 ft from the float on the side further removed from the observers, except when surface floats are being used to ascertain the effect of the wind, when the boat should be kept to leeward of the float. Although obviously with a large boat the observations can be pursued through rougher weather, which is an important point, still the difficulty of maintaining a large boat propelled by mechanical power, or sail, sufficiently near the float to assist the observers, prevents its use, and the best result will be obtained by employing a substantial, seaworthy rowing boat with a broad beam. 
The boatmen appreciate the inclusion of a mast, sails, and plenty of ballast in the equipment to facilitate their return home when the day's work is done, which may happen eight or nine miles away, with twilight fast passing into darkness. There should be two boatmen, or a man and a strong youth. In working with theodolites, it is as well before starting to select observation stations at intervals along the coast, drive pegs in the ground so that they can easily be found afterwards, and fix their position upon a 1/2500 ordnance map in the usual manner. It may, however, be found in practice that after leaving one station it is not possible to reach the next one before the time arrives for another sight to be taken. In this case the theodolite must be set up on magnetic north at an intermediate position, and sights taken to at least two landmarks, the positions of which are shown on the map, and the point of observation subsequently plotted as near as possible by the use of these readings. Inasmuch as the sights will be taken from points on the edge of the shore, which is, of course, shown on the map, it is possible, after setting up to magnetic north, to fix the position with approximate accuracy by a sight to one landmark only, but this should only be done in exceptional circumstances. The method of taking the observations with two theodolites, as adopted by the writer, can best be explained by a reference to Fig. 9, which represents an indented piece of the coast. The end of the proposed sea outfall sewer, from which point the observations would naturally start, is marked 1, the numerals 2, 3, 4, etc., indicating the positions of the float as observed from time to time. Many intermediate observations would be taken, but in order to render the diagram more clear, these have not been shown. The lines of sight are marked 1A, 1B, etc. The points marked A1, A2, etc., indicate the first, second, etc., and subsequent positions of observer A; the points B1, B2, etc., referring to observer B. The dot-and-dash line shows the course taken by the float, which is ascertained after plotting the various observations recorded. It is very desirable to have a horse and trap in waiting to move the observers and their instruments from place to place as required, and each observer should be provided with small flags about 2 ft square, one white and one blue, for signalling purposes. The instruments are first set up at A1 and B1 respectively, and adjusted to read on to the predetermined point 1 where the float is to be put in Then as soon as the boatmen have reached the vicinity of this point, the observers can, by means of the flags, direct them which way to row so as to bring the boat to the exact position required, and when this is done the anchor is dropped until it is time to start, which is signalled by the observers holding the flags straight above their heads. This is also the signal used to indicate to the men that the day's work is finished, and they can pick up the float and start for home. [Illustration: FIG. 9.--PLAN OF INDENTED COAST-LINE LLUSTRATING METHOD OF TAKING CURRENT OBSERVATIONS WITH TWO THEODOLITES.] Directly the float is put in the water, and at every even quarter of an hour afterwards, each observer takes a reading of its exact position, and notes the time. As soon as the readings are taken to the float in position 2, the observer A should take up his instrument and drive to A2, where he must set up ready to take reading 3 a quarter of an hour after reading 2. 
It will be noticed that he might possibly have been able to take the reading 3 from the position A1, but the angle made by the lines of sight from the two instruments would have been too acute for accurate work, and very probably the float would have been hidden by the headland, so that he could not take the reading at all. In order to be on the headland A4 at the proper time, A must be working towards it by getting to position A3 by the time reading 4 is due. Although the remainder of the course of the float can be followed from B1 and A4, the instruments would be reading too much in the same line, so that B must move to B2 and then after readings 5 and 6 he should move to B3. As the float returns towards the starting point, A can remain in the position A4 while B goes to B4 and then moves back along the shore as the float progresses. The foregoing description is sufficient to indicate the general method of working, but the details will of course vary according to the configuration of the shore and the course taken by the float. Good judgment is necessary in deciding when to move from one station to the next, and celerity in setting up, adjusting the instrument, and taking readings is essential. If the boatmen can be relied upon to keep their position near the float, very long sights can be taken with sufficient accuracy by observing the position of the boat, long after the float has ceased to be visible through the telescope. The lines of sight from each station should be subsequently plotted on the 1/2500 ordnance map; the intersection of each two corresponding sight lines giving the position of the float at that time. Then if a continuous line is drawn passing through all the points of intersection it will indicate the course taken by the float.

It is very desirable that the observers should be able to convey information to each other by signalling with the flags according to the Morse code, as follows. The dashes represent a movement of the flag from a position in front of the left shoulder to near the ground on the right side and the dots a movement from the left shoulder to the right shoulder.

TABLE 3. MORSE ALPHABET.

E .        T -
A .-       N -.
R .-.      K -.-
L .-..     C -.-.
W .--      Y -.--
P .--.     D -..
J .---     X -..-
I ..       B -...
U ..-      M --
F ..-.     G --.
S ...      Q --.-
V ...-     Z --..
H ....     O ---

The signal to attract attention at starting and to signify the end of the message is .. .. .. continued until it is acknowledged with a similar sign by the other observer; that for a repetition is .. -- .. which is signalled when any part of the message is not understood, otherwise after each word is signalled the receiver waves - to indicate he understands it. Until proficiency is attained, two copies of the alphabet should be kept by each observer for reference, one for dispatching a message arranged in alphabetical order and the other for reading a message arranged as set out above. The white flag should be used when standing against a dark background, and the blue one when on the skyline or against a light background.

The conditions in tidal rivers vary somewhat from those occurring on the coast. As the crest of the tidal wave passes the mouth of the river a branch wave is sent up the river. This wave has first to overcome the water flowing down the river, which is acting in opposition to it, and in so doing causes a banking up of the water to such a height that the inclination of the surface is reversed to an extent sufficient to cause a tidal current to run up the river.
The momentum acquired by the water passing up-stream carries it to a higher level towards the head of the river than at the mouth, and, similarly, in returning, the water flowing down the river gains sufficient impetus to scoop out the water at the mouth and form a low water below that in the sea adjoining. Owing to a flow of upland water down a river the ebb lasts longer than the flood tide by a period, increasing in length as the distance from the mouth of the river increases; and, similarly to the sea, the current may continue to run down a river after the tide has turned and the level of the water is rising. The momentum of the tide running up the centre of the river is in excess of that along the banks, so that the current changes near the shore before it does in the middle, and, as the sea water is of greater specific gravity than the fresh, weighing 64 lb per cubic foot against 62-1/2 lb, it flows up the bed of the river at the commencement of the tide, while the fresh water on the surface is running in the opposite direction. After a time the salt water becomes diffused in the fresh, so that the density of the water in a river decreases as the distance from the sea increases.

The disposal of sewage discharged into a river is due primarily to the mixing action which is taking place; inasmuch as the tidal current which is the transporting agent rarely flows more rapidly than from two to four miles per hour, or, say, twelve to fifteen miles per tide. The extent to which the suspended matter is carried back again up stream when the current turns depends upon the quantity of upland water which has flowed into the upper tidal part of the river during the ebb tide, as this water occupies a certain amount of space, according to the depth and width of the river, and thus prevents the sea water flowing back to the position it occupied on the previous tide, and carrying with it the matter in suspension. The permanent seaward movement of sewage discharged into the Thames at Barking when there is only a small quantity of upland water is at the rate of about one mile per day, taking thirty days to travel the thirty-one miles to the sea, while at the mouth of the river the rate does not exceed one-third of a mile per day.

CHAPTER IV. SELECTION OF SITE FOR OUTFALL SEWER.

The selection of the site for the sea outfall sewer is a matter requiring a most careful consideration of the many factors bearing on the point, and the permanent success of any scheme of sewage disposal depends primarily upon the skill shown in this matter. The first step is to obtain a general idea of the tidal conditions, and to examine the Admiralty charts of the locality, which will show the general set of the main currents into which it is desirable the sewage should get as quickly as possible. The main currents may be at some considerable distance from the shore, especially if the town is situated in a bay, when the main current will probably be found running across the mouth of it from headland to headland.

The sea outfall should not be in the vicinity of the bathing grounds, the pier, or parts of the shore where visitors mostly congregate; it should not be near oyster beds or lobster grounds. The prosperity--in fact, the very existence--of most seaside towns depends upon their capability of attracting visitors, whose susceptibilities must be studied before economic or engineering questions, and there are always sentimental objections to sewage works, however well designed and conducted they may be.
It is desirable that the sea outfall should be buried in the shore for the greater part of its length, not only on account of these sentimental feelings, but as a protection from the force of the waves, and so that it should not interfere with boating; and, further, where any part of the outfall between high and low water mark is above the shore, scouring of the beach will inevitably take place on each side of it. The extreme end of the outfall should be below low-water mark of equinoctial tides, as it is very objectionable to have sewage running across the beach from the pipe to the water, and if the foul matter is deposited at the edge of the water it will probably be brought inland by the rising tide. Several possible positions may present themselves for the sea outfall, and a few trial current observations should be made in these localities at various states of the tides and plotted on to a 1:2500 ordnance map. The results of these observations will probably reduce the choice of sites very considerably. Levels should be taken of the existing subsidiary sewers in the town, or, if there are none, the proposed arrangement of internal sewers should be sketched out with a view to their discharging their contents at one or other of the points under consideration. It may be that the levels of the sewers are such that by the time they reach the shore they are below the level of low water, when, obviously, pumping or other methods of raising the sewage must be resorted to; if they are above low water, but below high water, the sewage could be stored during high water and run off at or near low water; or, if they are above high water, the sewage could run off continuously, or at any particular time that might be decided. Observations of the currents should now be made from the selected points, giving special attention to those periods during which it is possible to discharge the sewage having regard to the levels of the sewers. These should be made with the greatest care and accuracy, as the final selection of the type of scheme to be adopted will depend very largely on the results obtained and the proper interpretation of them, by estimating, and mentally eliminating, any disturbing influences, such as wind, etc. Care must also be taken in noting the height of the tide and the relative positions of the sun, moon, and earth at the time of making the observations, and in estimating from such information the extent to which the tides and currents may vary at other times when those bodies are differently situated. It is obvious that if the levels of the sewers and other circumstances are such that the sewage can safely be discharged at low water, and the works are to be constructed accordingly, it is most important to have accurate information as to the level of the highest low water which may occur in any ordinary circumstances. If the level of a single low water, given by a casual observation, is adopted without consideration of the governing conditions, it may easily be that the tide in question is a low one, that may not be repeated for several years, and the result would be that, instead of having a free outlet at low water, the pipe would generally be submerged, and its discharging capacity very greatly reduced. 
The run of the currents will probably differ at each of the points under consideration, so that if one point were selected the best result would be obtained by discharging the sewage at high water and at another point at low water, whereas at a third point the results would show that to discharge there would not be satisfactory at any stage of the tide unless the sewage were first partially or even wholly purified. If these results are considered in conjunction with the levels of the sewers, definite alternative schemes, each of which would work satisfactorily, may be evolved, and after settling them in rough outline, comparative approximate estimates should be prepared, when a final scheme may be decided upon which, while giving the most efficient result at the minimum cost, will not arouse sentimental objections to a greater extent than is inherent to all schemes of sewage disposal.

Having thus selected the exact position of the outfall, the current observations from that point should be completed, so that the engineer may be in a position to state definitely the course which would be taken by sewage if discharged under any conditions of time or tide. This information is not particularly wanted by the engineer, but the scheme will have to receive the sanction of the Local Government Board or of Parliament, and probably considerable opposition will be raised by interested parties, which must be met at all points and overcome. In addition to this, it may be possible, and necessary, when heavy rain occurs, to allow the diluted sewage to escape into the sea at any stage of the tide; and, while it is easy to contend that it will not then be more impure than storm water which is permitted to be discharged into inland streams during heavy rainfall, the aforesaid sentimentalists may conjure up many possibilities of serious results.

As far as possible the records should indicate the course taken by floats starting from the outfall, at high water, and at each regular hour afterwards on the ebb tide, as well as at low water and every hour on the flood tide. It is not, however, by any means necessary that they should be taken in this or any particular order, because as the height of the tide varies each day an observation taken at high water one day is not directly comparable with one taken an hour after high water the next day, and while perhaps relatively the greatest amount of information can be gleaned from a series of observations taken at the same state of the tide, but on tides of differing heights, still, every observation tells its own story and serves a useful purpose.

Deep floats and surface floats should be used concurrently to show the effect of the wind, the direction and force of which should be noted. If it appears that with an on-shore wind floating particles would drift to the shore, screening will be necessary before the sewage is discharged. The floats should be followed as long as possible, but at least until the turn of the current--that is to say, a float put in at or near high water should be followed until the current has turned at or near low water, and one put in at low water should be followed until after high water. In all references to low water the height of the tide given is that of the preceding high water.

The time at which the current turns relative to high and low water at any place will be found to vary with the height of the tide, and all the information obtained on this point should be plotted on squared paper as shown on Fig.
10, which represents the result of observations taken near the estuary of a large river where the conditions would be somewhat different from those holding in the open sea. The vertical lines represent the time before high or low water at which the current turned, and the horizontal lines the height of the tide, but the data will, of course, vary in different localities. [Illustration: Hours before turn of tide. FIG 10] It will be noticed that certain of the points thus obtained can be joined up by a regular curve which can be utilised for ascertaining the probable time at which the current will turn on tides of height intermediate to those at which observations were actually taken. For instance, from the diagram given it can be seen that on a 20 ft tide the current will turn thirty minutes before the tide, or on a 15 ft tide the current will turn one hour before the tide. Some of the points lie at a considerable distance from the regular curve, showing that the currents on those occasions were affected by some disturbing influence which the observer will probably be able to explain by a reference to his notes, and therefore those particular observations must be used with caution. The rate of travel of the currents varies in accordance with the time they have been running. Directly after the turn there is scarcely any movement, but the speed increases until it reaches a maximum about three hours later and then it decreases until the next turn, when dead water occurs again. Those observations which were started at the turn of the current and continued through the whole tide should be plotted as shown in Fig. 11, which gives the curves relating to three different tides, but, provided a sufficiently large scale is adopted, there is no reason why curves relating to the whole range of the tides should not be plotted on one diagram. This chart shows the total distance that would be covered by a float according to the height of the tide; it also indicates the velocity of the current from time to time. It can be used in several ways, but as this necessitates the assumption that with tides of the same height the flow of the currents is absolutely identical along the coast in the vicinity of the outfall, the diagram should be checked as far as possible by any observations that may be taken at other states of tides of the same heights. Suppose we require to know how far a float will travel if started at two hours after high water on a 12 ft tide. From Fig. 10 we see that on a tide of this height the current turns two hours and a quarter before the tide; therefore two hours after high water will be four hours and a quarter after the turn of the current. If the float were started with the current, we see from Fig. 11 that it would have travelled three miles in four hours and a quarter; and subtracting this from four miles, which is its full travel on a whole tide, we see that it will only cover one mile in the two hours and a quarter remaining before the current turns to run back again. Although sewage discharged into the sea rapidly becomes so diffused as to lose its identity, still occasionally the extraneous substances in it, such as wooden matches, banana skins, etc., may be traced for a considerable distance; so that, as the sewage continues to be discharged into the sea moving past the outfall, there is formed what may be described as a body or column of water having possibilities of sewage contamination. 
If the time during which sewage is discharged is limited to two hours, and starts, say, at the turn of the current on a 12 ft tide, we see from Fig. 11 that the front of this body of water will have reached a point five-eighths of a mile away when the discharge ceases; so that there will be a virtual column of water of a total length of five-eighths of a mile, in which is contained all that remains of the noxious matters, travelling through the sea along the course of the current. We see, further, that at a distance of three miles away this column would only take thirty minutes to pass a given point. The extent of this column of water will vary considerably according to the tide and the time of discharge; for instance, on a 22 ft tide, if the discharge starts one hour after the turn of the current and continues for two hours, as in the previous example, it will form a column four miles long, whereas if it started two hours after the current, and continued for the same length of time, the column would be six miles and a half long, but the percentage of sewage in the water would be infinitesimal. [Illustration: Hours after turn of current FIG. 11] In some cases it may be essential that the sewage should be borne past a certain point before the current turns in order to ensure that it shall not be brought back on the return tide to the shore near the starting point. In other words, the sewage travelling along the line of a branch current must reach the junction on the line of the main current by a certain time in order to catch the connection. Assuming the period of discharge will be two hours, and that the point which it is necessary to clear is situated three miles and a half from the outfall, the permissible time to discharge the sewage according to the height of the tide can be obtained from Fig. 11. Taking the 22 ft tide first, it will be seen that if the float started with the current it would travel twelve miles in the tide; three and a half from twelve leaves eight and a half miles. A vertical line dropped from the intersection of the eight miles and a half line with the curve of the current gives the time two hours and a half before the end, or four hours after the start of the current at which the discharge of the sewage must cease at the outfall in order that the rear part of the column can reach the required point before the current turns. As on this tide high water is about fifteen minutes after the current, the latest time for the two hours of discharge must be from one hour and three-quarters to three hours and three-quarters after high water. Similarly with the 12 ft tide having a total travel of four miles: three and a half from four leaves half a mile, and a vertical line from the half-mile intersection gives one hour and three-quarters after the start of the current as the time for discharge to cease. High water is two hours and a quarter after the current; therefore the latest time for the period of discharge would be from two hours and a half to half an hour before high water, but, as during the first quarter of an hour the movement of the current, though slight, would be in the opposite direction, it would be advisable to curtail the time of discharge, and say that it should be limited to between two hours and a quarter and half an hour before high water. 
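The use of Figs. 10 and 11 described in the foregoing examples reduces to simple interpolation, and may be put into a short routine. The following Python sketch is offered only as an illustration of the method: the function names are invented, and the curve points are rough stand-ins chosen so as to reproduce the figures quoted in the worked examples; they are not the actual observations.

    def interpolate(x, points):
        """Straight-line interpolation between the plotted points of a curve."""
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("outside the range of the observations")

    # Stand-in for Fig. 10: height of tide (ft) against the interval (hours)
    # by which the current turns before the tide.
    TURN_BEFORE_TIDE = [(12, 2.25), (15, 1.0), (20, 0.5), (22, 0.25)]

    # Stand-ins for Fig. 11: hours after the turn of the current against miles
    # travelled by a float, for a 12 ft and a 22 ft tide.
    TRAVEL = {
        12: [(0, 0.0), (1, 0.2), (1.75, 0.5), (3, 1.8), (4.25, 3.0), (6.5, 4.0)],
        22: [(0, 0.0), (1, 1.5), (2, 4.0), (3, 6.5), (4, 8.5), (5, 10.5), (6.5, 12.0)],
    }

    def float_travel(tide_ft, hours_after_high_water):
        """Miles already covered by a float put in at the stated time, and the
        miles still to run before the current turns (the first worked example)."""
        curve = TRAVEL[tide_ft]
        turn = interpolate(tide_ft, TURN_BEFORE_TIDE)
        used = interpolate(hours_after_high_water + turn, curve)
        return used, curve[-1][1] - used

    def latest_discharge(tide_ft, clear_miles, discharge_hours):
        """Latest period of discharge, in hours relative to high water (negative
        values are before high water), so that the rear of the sewage column
        passes a point clear_miles away before the current turns."""
        curve = TRAVEL[tide_ft]
        needed = curve[-1][1] - clear_miles
        inverse = [(m, t) for t, m in curve]          # miles -> hours after turn
        cease_after_turn = interpolate(needed, inverse)
        hw_after_turn = interpolate(tide_ft, TURN_BEFORE_TIDE)
        end = cease_after_turn - hw_after_turn
        return end - discharge_hours, end

    print(float_travel(12, 2.0))            # (3.0, 1.0) miles
    print(latest_discharge(22, 3.5, 2.0))   # (1.75, 3.75) hours after high water
    print(latest_discharge(12, 3.5, 2.0))   # (-2.5, -0.5), i.e. before high water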
It is obvious that if sewage is discharged about two hours after high water the current will be nearing its maximum speed, but it will only have about three hours to run before it turns; so that, although the sewage may be removed with the maximum rapidity from the vicinity of the sea outfall, it will not be carried to any very great distance, and, of course, the greater the distance it is carried the more it will be diffused. It must be remembered that the foregoing data are only applicable to the locality they relate to, although after obtaining the necessary information similar diagrams can be made and used for other places; but enough has been said to show that when it is necessary to utilise the full effect of the currents the sewage should be discharged at a varying time before high or low water, as the case may be, according to the height of the tide.

CHAPTER V. VOLUME OF SEWAGE.

The total quantity of sewage to be dealt with per day can be ascertained by gauging the flow in those cases where the sewers are already constructed, but where the scheme is an entirely new one the quantity must be estimated. If there is a water supply system the amount of water consumed per day, after making due allowance for the quantity used for trade purposes and street watering, will be a useful guide. The average amount of water used per head per day for domestic purposes only may be taken as follows:--

DAILY WATER SUPPLY.
(Gallons per head per day.)

    Dietetic purposes (cooking, drinking, &c.) ................  1
    Cleansing purposes (washing house utensils, clothes, &c.) .  6
    If water-closets are in general use, add ..................  3
    If baths are in general use, add ..........................  5
                                                         Total  15

It therefore follows that the quantity of domestic sewage to be expected will vary from 7 to 15 gallons per head per day, according to the extent of the sanitary conveniences installed in the town; but with the advent of an up-to-date sewage scheme, probably accompanied by a proper water supply, a very large increase in the number of water-closets and baths may confidently be anticipated, and it will rarely be advisable to provide for a less quantity of domestic sewage than 15 gallons per head per day for each of the resident inhabitants.

The problem is complicated in sea coast towns by the large influx of visitors during certain short periods of the year, for whom the sewerage system must be sufficient, and yet it must not be so large compared with the requirements of the residential population that it cannot be kept in an efficient state during that part of the year when the visitors are absent. The visitors are of two types--the daily trippers and those who spend several days or weeks in the town. The daily tripper may not directly contribute much sewage to the sewers, but he does indirectly through those who cater for his wants. The resident visitor will spend most of the day out of doors, and therefore cause less than the average quantity of water to be used for house-cleansing purposes, in addition to which the bulk of the soiled linen will not be washed in the town. An allowance of 10 gallons per head per day for the resident visitor and 5 gallons per head per day for the trippers will usually be found a sufficient provision.
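The provision so indicated may be put into a few lines. The following Python sketch is merely illustrative: the allowances are those given above, but the populations assumed for the example, and the function name, are invented for the purpose of illustration.

    GALLONS_PER_HEAD = {"resident": 15, "resident_visitor": 10, "tripper": 5}

    def daily_domestic_flow(residents, resident_visitors, trippers):
        """Domestic sewage provision in gallons per day (trade wastes,
        infiltration and storm water are excluded)."""
        return (residents * GALLONS_PER_HEAD["resident"]
                + resident_visitors * GALLONS_PER_HEAD["resident_visitor"]
                + trippers * GALLONS_PER_HEAD["tripper"])

    # Purely illustrative figures for a small seaside town in the season.
    print(daily_domestic_flow(10000, 4000, 6000))   # 220,000 gallons per day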
It is, of course, well known that the flow of sewage varies from day to day as well as from hour to hour, and while there is no necessity to consider the daily variation--calculations being based on the flow of the maximum day--the hourly variation plays a most important part where storage of the sewage for any length of time is an integral part of the scheme. There are many important factors governing this variation, and even if the most elaborate calculations are made they are liable to be upset at any time by the unexpected discharge of large quantities of trade wastes. With a small population the hourly fluctuation in the quantity of sewage flowing into the sewers is very great, but it reduces as the population increases, owing to the diversity of the occupations and habits of the inhabitants. In all cases where the residential portions of the district are straggling, and the outfall works are situated at a long distance from the centre of the town, the flow becomes steadier, and the inequalities are not so prominently marked at the outlet end of the sewer. The rate of flow increases more or less gradually to the maximum about midday, and falls off in the afternoon in the same gradual manner. The following table, based on numerous gaugings, represents approximately the hourly variations in the dry weather flow of the sewage proper from populations numbering from 1,000 to 10,000, and is prepared after deducting all water which may be present in the sewers resulting from the infiltration of subsoil water through leaky joints in the pipes, and from defective water supply fittings as ascertained from the night gaugings. Larger towns have not been included in the table because the hourly rates of flow are generally complicated by the discharge of the trade wastes previously referred to, which must be the subject of special investigation in each case. [TABLE NO. 4. APPROXIMATE HOURLY VARIATION IN THE FLOW OF SEWAGE. Percentage of Total Flow Passing Off in each Hour. -----------+------------------------------------------------ | Population. Hour. +-----+-----+-----+-----+-----+-----+-----+------ |1,000|2,000|3,000|4,000|5,000|6,000|8,000|10,000 -----------+-----+-----+-----+-----+-----+-----+-----+------ Midnight | 1.0 | 1.0 | 1.2 | 1.3 | 1.5 | 1.5 | 1.8 | 2.0 1.0 a.m. | 0.7 | 0.7 | 0.7 | 0.8 | 0.8 | 1.0 | 1.0 | 1.0 2.0 " | nil | nil | nil | nil | 0.2 | 0.2 | 0.3 | 0.5 3.0 " | nil | nil | nil | nil | nil | nil | nil | 0.2 4.0 " | nil | nil | nil | nil | nil | nil | nil | nil 5.0 " | nil | nil | nil | nil | nil | nil | nil | 0.2 6.0 " | 0.2 | 0.2 | 0.3 | 0.5 | 0.6 | 0.5 | 0.7 | 0.8 7.0 " | 0.5 | 0.5 | 1.0 | 1.5 | 1.6 | 1.7 | 2.0 | 2.5 8.0 " | 1.0 | 1.5 | 2.0 | 2.5 | 3.0 | 3.5 | 4.0 | 5.0 9.0 " | 3.5 | 4.5 | 4.5 | 4.8 | 5.5 | 5.8 | 6.0 | 6.5 10.0 " | 6.5 | 6.5 | 6.8 | 7.0 | 7.5 | 7.7 | 8.0 | 8.0 11.0 " |10.5 |11.0 |10.5 |10.0 | 9.6 | 9.3 | 9.0 | 8.8 Noon |11.0 |11.3 |10.8 |10.3 | 9.3 | 9.5 | 9.2 | 9.0 1.0 p.m. 
| 6.0 | 5.5 | 6.0 | 6.7 | 7.0 | 7.2 | 7.3 | 7.5 2.0 " | 7.0 | 7.3 | 7.0 | 7.0 | 6.5 | 6.5 | 6.2 | 6.0 3.0 " | 6.8 | 6.5 | 6.5 | 6.5 | 6.5 | 6.3 | 6.3 | 6.0 4.0 " | 7.5 | 7.5 | 7.3 | 7.0 | 6.7 | 6.5 | 6.2 | 6.7 5.0 " | 6.5 | 6.5 | 6.5 | 6.3 | 6.0 | 6.0 | 6.0 | 5.8 6.0 " | 4.5 | 4.5 | 4.7 | 4.8 | 5.0 | 5.0 | 5.0 | 5.2 7.0 " | 6.5 | 6.2 | 6.0 | 5.8 | 5.5 | 5.5 | 5.5 | 4.7 8.0 " | 6.2 | 6.0 | 5.8 | 5.5 | 5.5 | 5.3 | 5.0 | 4.8 9.0 " | 5.0 | 4.8 | 4.7 | 4.5 | 4.5 | 4.5 | 4.5 | 4.0 10.0 " | 4.8 | 4.6 | 4.2 | 4.0 | 3.8 | 3.5 | 3.0 | 3.0 11.0 " | 4.3 | 3.5 | 3.5 | 3.2 | 3.2 | 3.0 | 3.0 | 2.8 -----------+-----+-----+-----+-----+-----+-----+-----+------ Total |100.0|100.0|100.0|100.0|100.0|100.0|100.0|100.0 -----------+-----+-----+-----+-----+-----+-----+-----+------ ANALYSIS OF FLOW] Percentage of total flow passing off during period named. ---------------------+----------------------------------------------------------------+ | Population. | +-------+-------+-------+-------+-------+-------+-------+--------+ | 1,000 | 2,000 | 3,000 | 4,000 | 5,000 | 6,000 | 8,000 | 10,000 | ---------------------+-------+-------+-------+-------+-------+-------+-------+--------+ 7.0 a.m. to 7.0 p.m | 77.3 | 78.8 | 78.6 | 78.7 | 78.5 | 78.8 | 78.7 | 75.2 | 7.0 p.m. to 7.0 a.m | 22.7 | 21.2 | 21.4 | 21.3 | 21.5 | 21.2 | 21.3 | 21.8 | Maximum 12 hrs. | 84.0 | 83.6 | 82.6 | 81.7 | 81.0 | 80.6 | 79.7 | 78.2 | " 10 " | 72.8 | 72.8 | 72.1 | 71.4 | 70.0 | 69.8 | 69.2 | 68.5 | " 9 " | 66.3 | 66.6 | 66.1 | 65.6 | 64.5 | 64.8 | 64.2 | 63.3 | " 8 " | 61.8 | 62.1 | 61.4 | 60.8 | 59.5 | 59.0 | 58.2 | 57.5 | " 6 " | 48.8 | 49.1 | 43.1 | 47.5 | 46.8 | 46.5 | 46.0 | 45.8 | " 3 " | 23.0 | 28.8 | 27.11| 27.3 | 26.8 | 26.5 | 26.2 | 25.8 | " 2 " | 21.5 | 22.3 | 21.3 | 20.3 | 19.3 | 18.5 | 18.2 | 17.3 | " 1 " | 11.0 | 11.3 | 10.8 | 10.3 | 9.8 | 9.5 | 9.2 | 9.0 | Minimum 9 " | 3.4 | 3.9 | 5.2 | 6.6 | 7.5 | 6.9 | 8.8 | 10.0 | " 10 " | 6.9 | 7.4 | 8.7 | 9.8 | 10.7 | 10.4 | 11.8 | 13.0 | ---------------------+-------+-------+-------+-------+-------+-------+-------+--------+ The data in the foregoing table, so far as they relate to populations of one, five, and ten thousand respectively, are reproduced graphically in Fig. 12. This table and diagram relate only to the flow of sewage--that is, water which is intentionally fouled; but unfortunately it is almost invariably found that the flow in the sewers is greater than is thus indicated, and due allowance must be made accordingly. The greater the amount of extra liquid flowing in the sewers as a permanent constant stream, the less marked will be the hourly variations; and in one set of gaugings which came under the writer's notice the quantity of extraneous liquid in the sewers was so greatly in excess of the ordinary sewage flow that, taken as a percentage of the total daily flow, the hourly variation was almost imperceptible. [Illustration: Fig 12 Hourly Variation in Flow of Sewage.] Provision must be made in the scheme for the leakage from the water fittings, and for the subsoil water, which will inevitably find its way into the sewers. The quantity will vary very considerably, and is difficult of estimation. If the water is cheap, and the supply plentiful, the water authority may not seriously attempt to curtail the leakage; but in other cases it will be reduced to a minimum by frequent house to house inspection; some authorities going so far as to gratuitously fix new washers to taps when they are required. 
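Returning for a moment to Table No. 4, its application to the storage question may be put into figures. The following Python sketch is a minimal illustration only: the percentages transcribed are those of the column for a population of 10,000, the daily flow assumed is arbitrary, and the function name is invented. The three-hour and six-hour results agree with the "Maximum 3 hrs." and "6 hrs." entries of the analysis of flow.

    # Hourly percentages of the daily dry-weather flow for a population of
    # 10,000, from Table No. 4 (midnight first).
    HOURLY_PERCENT_10000 = [
        2.0, 1.0, 0.5, 0.2, 0.0, 0.2, 0.8, 2.5, 5.0, 6.5, 8.0, 8.8,
        9.0, 7.5, 6.0, 6.0, 6.7, 5.8, 5.2, 4.7, 4.8, 4.0, 3.0, 2.8,
    ]

    def worst_consecutive_hours(percentages, hours):
        """Largest share of the daily flow arriving in any run of consecutive
        hours; the run is allowed to pass through midnight."""
        doubled = percentages + percentages
        return round(max(sum(doubled[i:i + hours]) for i in range(len(percentages))), 1)

    daily_flow_gallons = 150000           # an assumed total dry-weather flow
    for tide_locked in (3, 6):
        share = worst_consecutive_hours(HOURLY_PERCENT_10000, tide_locked)
        print(tide_locked, share, daily_flow_gallons * share / 100.0)
    # 3 hours -> 25.8 per cent; 6 hours -> 45.8 per cent of the day's flow.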
Theoretically, there should be no infiltration of subsoil water, as in nearly all modern sewerage schemes the pipes are tested and proved to be watertight before the trenches are filled in; but in practice this happy state is not obtainable. The pipes may not all be bedded as solidly as they should be, and when the pressure of the earth comes upon them settlement takes place and the joints are broken. Joints may also be broken by careless filling of trenches, or by men walking upon the pipes before they are sufficiently covered.

Some engineers specify that all sewers shall be tested and proved to be absolutely water-tight before they are "passed" and covered in, but make a proviso that if, after the completion of the works, the leakage into any section exceeds 1/2 cubic foot per minute per mile of sewer, that length shall be taken up and relaid. Even if the greatest vigilance is exercised to obtain water-tight sewers, the numerous house connections are each potential sources of leakage, and when the scheme is complete there may be a large quantity of infiltration water to be dealt with.

Where there are existing systems of old sewers the quantity of infiltration water can be ascertained by gauging the night flow; and if it is proved to be excessive, a careful examination of the course of the sewers should be made with a view to locating the places where the greater part of the leakage occurs, and then to take such steps as may be practicable to reduce the quantity.

CHAPTER VI. GAUGING FLOW IN SEWERS.

A method frequently adopted to gauge the flow of the sewage is to fix a weir board with a single rectangular notch across the sewer in a convenient manhole, which will pond up the sewage; and then to ascertain the depth of water passing over the notch by measurements from the surface of the water to a peg fixed level with the bottom of the notch and at a distance of two or three feet away on the upstream side. The extreme variation in the flow of the sewage is so great, however, that if the notch is of a convenient width to take the maximum flow, the hourly variation at the time of minimum flow will affect the depth of the sewage on the notch to such a small extent that difficulty may be experienced in taking the readings with sufficient accuracy to show such variations in the flow, and there will be great probability of incorrect results being obtained by reason of solid sewage matter lodging on the notch. When the depth on a 12 in notch is about 6 in, a variation of only 1-16th inch in the vertical measurement will represent a difference in the rate of the flow of approximately 405 gallons per hour, or about 9,700 gallons per day. When the flow is about 1 in deep the same variation of 1-16th in will represent about 162 gallons per hour, or 3,900 gallons per day.

Greater accuracy will be obtained if a properly-formed gauging pond is constructed independently of the manhole and a double rectangular notch, similar to Fig. 13, or a triangular or V-shaped notch, as shown in Fig. 14, used in lieu of the simpler form. In calculating the discharge of weirs there are several formulæ to choose from, all of which will give different results, though comparative accuracy has been claimed for each. Taking first a single rectangular notch and reducing the formulae to the common form:

    Discharge per foot in width of weir = C √(H^3),

where H = depth from the surface of still water above the weir to the level of the bottom of the notch, the value of C will be as set out in the following table:--
TABLE No. 5. RECTANGULAR NOTCHES.

    Discharge per foot in width of notch = C √(H^3).

                              Values of C.
    ---------------+-----------------------+-----------------------
    H measured in  |         Feet.         |        Inches.
    ---------------+-----------+-----------+-----------+-----------
    Discharge in   | Gallons   | C. ft     | Gallons   | C. ft
                   | per hour. | per min.  | per hour. | per min.
    ---------------+-----------+-----------+-----------+-----------
    Box            |  79,895   |  213.6    |  1,922    |  5.13
    Cotterill      |  74,296   |  198.6    |  1,787    |  4.78
    Francis        |  74,820   |  200.0    |  1,800    |  4.81
    Molesworth     |  80,057   |  214.0    |  1,926    |  5.15
    Santo Crimp    |  72,949   |  195.0    |  1,755    |  4.69
    ---------------+-----------+-----------+-----------+-----------

In the foregoing table Francis' short formula is used, which does not take into account the end contractions and therefore gives a slightly higher result than would otherwise be the case, and in Cotterill's formula the notch is taken as being half the width of the weir, or of the stream above the weir. If a cubic foot is taken as being equal to 6-1/4 gallons instead of 6.235 gallons, then, cubic feet per minute multiplied by 9,000 equals gallons per day.

This table can be applied to ascertain the flow through the notch shown in Fig. 13 in the following way. Suppose it is required to find the discharge in cubic feet per minute when the depth of water measured in the middle of the notch is 4 in. Using Santo Crimp's formula the result will be C √(H^3) = 4.69 √(4^3) = 4.69 x 8 = 37.52 cubic feet per foot in width of weir, but as the weir is only 6 in wide, we must divide this figure by 2, then 37.52/2 = 18.76 cubic feet, which is the discharge per minute.

[Illustration: FIG. 13.--ELEVATION OF DOUBLE RECTANGULAR NOTCHED GAUGING WEIR.]

[Illustration: FIG. 14.--ELEVATION OF TRIANGULAR NOTCHED GAUGING WEIR.]

[Illustration: FIG. 15.--LONGITUDINAL SECTION, SHOWING WEIR, GAUGE-PEG, AND HOOK-GAUGE.]

If it is required to find the discharge in similar terms with a depth of water of 10 in, two sets of calculations are required: first, 10 in depth on the notch 6 in wide, and then 4 in depth on the notch, 18 in minus 6 in, or 1 ft wide.

    (1) C √(H^3) = (4.69/2) √(10^3) = 2.345 x 31.62   =  74.15
    (2) C √(H^3) = 1.0 x 4.69 √(4^3) = 1.0 x 4.69 x 8 =  37.52
                               Total in c. ft per min = 111.67

The actual discharge would be slightly in excess of this.

In addition to the circumstances already enumerated which affect the accuracy of gaugings taken by means of a weir fixed in a sewer, there is also the fact that the sewage approaches the weir with a velocity which varies considerably from time to time. In order to make allowance for this, the head calculated to produce the velocity must be added to the actual head. This can be embodied in the formula, as, for example, Santo Crimp's formula for discharge in cubic feet per minute, with H measured in feet, is written 195 √(H^3 + .035V x H^2) instead of the usual form of 195 √(H^3), which is used when there is no velocity to take into account. The V represents the velocity in feet per second.
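The worked examples above lend themselves to a short routine. The following Python sketch is a minimal illustration of the calculation for the double rectangular notch, using Santo Crimp's constant from Table No. 5; the function names are invented, and no allowance is made for the velocity of approach.

    SANTO_CRIMP_RECT = 4.69   # cubic feet per minute per foot of width, H in inches

    def rect_notch_cfm(width_ft, head_in, c=SANTO_CRIMP_RECT):
        """Discharge of a rectangular notch in cubic feet per minute."""
        return c * width_ft * head_in ** 1.5

    def double_notch_cfm(lower_width_ft, lower_head_in, upper_width_ft, upper_head_in):
        """Approximate discharge of the double rectangular notch of Fig. 13:
        the lower notch is taken with the full head, and the flanking portion
        of the upper notch with the head above its own sill."""
        return (rect_notch_cfm(lower_width_ft, lower_head_in)
                + rect_notch_cfm(upper_width_ft, upper_head_in))

    print(rect_notch_cfm(0.5, 4))             # 18.76 -- the first example above
    print(double_notch_cfm(0.5, 10, 1.0, 4))  # 111.67 -- the second example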
Triangular or V notches are usually formed so that the angle between the two sides is 90°, when the breadth at any point will always be twice the vertical height measured at the centre. The discharge in this case varies as the square root of the fifth power of the height instead of the third power as with the rectangular notch. The reason for the alteration of the power is that _approximately_ the discharge over a notch with any given head varies as the cross-sectional area of the body of water passing over it. The area of the 90° notch is half that of a circumscribing rectangular notch, so that the discharge of a V notch is approximately equal to that of a rectangular notch having a width equal to half the width of the V notch at water level; and as the total width is equal to double the depth of water passing over the notch, the half width is equal to the full depth, and the discharge is equal to that of a rectangular notch having a width equal to the depth of water flowing over the V notch from time to time, both being measured in the same unit; therefore C √(H^3) becomes C x H x √(H^3), which equals C √(H^5). The constant C will, however, vary from that for the rectangular notch to give an accurate result.

TABLE No. 6. TRIANGULAR OR V NOTCHES.

    Discharge = C x √(H^5).

                             Values of C.
    --------------+-----------------------+-----------------------
    H measured in |         Feet.         |        Inches.
    --------------+-----------+-----------+-----------+-----------
    Discharge in  | Gallons   | C. ft     | Gallons   | C. ft
                  | per hour. | per min.  | per hour. | per min.
    --------------+-----------+-----------+-----------+-----------
    Alexander     |  59,856   |  160      |  120.0    |  0.321
    Cotterill     |  57,013   |  152.4    |  114.3    |  0.306
    Molesworth    |  59,201   |  158.2    |  118.7    |  0.317
    Thomson       |  57,166   |  152.8    |  114.6    |  0.306
    --------------+-----------+-----------+-----------+-----------

Cotterill's formula for the discharge in cubic feet per minute is 16 x C x B x √(2g H^3), when B = breadth of notch in feet and H = height of water in feet, and can be applied to any proportion of notch. When B = 2H, that is, a 90° notch, C = .595 and the formula becomes 152.4 √(H^5); and when B = 4H, that is, a notch containing an angle of 126° 51' 36", C = .62 and the formula is then written 318 √(H^5).

The measurements of the depth of the water above the notch should be taken by a hook-gauge, as when a rule or gauge-slate is used the velocity of the water causes the latter to rise as it comes in contact with the edge of the measuring instrument and an accurate reading is not easily obtainable, and, further, capillary attraction causes the water to rise up the rule above the actual surface, and thus to show a still greater depth. When using a hook-gauge the top of the weir, as well as the notch, should be fixed level, and a peg or stake fixed as far back as possible on the upstream side of the weir, so that the top of the peg is level with the top of the weir, instead of with the notch, as is the case when a rule or gauge-slate is used. The hook-gauge consists of a square rod of, say, 1 in side, with a metal hook at the bottom, as shown in Fig. 15, and is so proportioned that the distance from the top of the hook to the top of the rod is equal to the difference in level of the top of the weir and the sill of the notch. In using it the rod of the hook-gauge is held against the side of the gauge-peg and lowered into the water until the point of the hook is submerged.
The gauge is then gently raised until the point of the hook breaks the surface of the water, when the distance from the top of the gauge-peg to the top of the rod of the hook-gauge will correspond with the depth of the water flowing over the weir. CHAPTER VII. RAINFALL. The next consideration is the amount of rain-water for which provision should be made. This depends on two factors: first, the amount of rain which may be expected to fall; and, secondly, the proportion of this rainfall which will reach the sewers. The maximum rate at which the rain-water will reach the outfall sewer will determine the size of the sewer and capacity of the pumping plant, if any, while if the sewage is to be stored during certain periods of the tide the capacity of the reservoir will depend upon the total quantity of rain-water entering it during such periods, irrespective of the rate of flow. Some very complete and valuable investigations of the flow of rain-water in the Birmingham sewers were carried out between 1900 and 1904 by Mr. D. E. Lloyd-Davies, M.Inst.C. E., the results of which are published in Vol. CLXIV., Min Proc. Inst.C.E. He showed that the quantity reaching the sewer at any point was proportional to the time of concentration at that point and the percentage of impermeable area in the district. The time of concentration was arrived at by calculating the time which the rain-water would take to flow through the longest line of sewers from the extreme boundaries of the district to the point of observation, assuming the sewers to be flowing half full; and adding to the time so obtained the period required for the rain to get into the sewers, which varied from one minute where the roofs were connected directly with the sewers to three minutes where the rain had first to flow along the road gutters. With an average velocity of 3 ft per second the time of concentration will be thirty minutes for each mile of sewer. The total volume of rain-water passing into the sewers was found to bear the same relation to the total volume of rain falling as the maximum flow in the sewers bore to the maximum intensity of rainfall during a period equal to the time of concentration. He stated further that while the flow in the sewers was proportional to the aggregate rainfall during the time of concentration, it was also directly proportional to the impermeable area. Putting this into figures, we see that in a district where the whole area is impermeable, if a point is taken on the main sewers which is so placed that rain falling at the head of the branch sewer furthest removed takes ten minutes to reach it, then the maximum flow of storm water past that point will be approximately equal to the total quantity of rain falling over the whole drainage area during a period of ten minutes, and further, that the total quantity of rainfall reaching the sewers will approximately equal the total quantity falling. If, however, the impermeable area is 25 per cent. of the whole, then the maximum flow of storm water will be 25 per cent. of the rain falling during the time of concentration, viz., ten minutes, and the total quantity of storm water will be 25 per cent. of the total rainfall. If the quantity of storm water is gauged throughout the year it will probably be found that, on the average, only from 70 per cent. to 80 per cent. of the rain falling on the impermeable areas will reach the sewers instead of 100 per cent., as suggested by Mr. 
Lloyd-Davies, the difference being accounted for by the rain which is required to wet the surfaces before any flow off can take place, in addition to the rain-water collected in tanks for domestic use, rain required to fill up gullies the water level of which has been lowered by evaporation, and rain-water absorbed in the joints of the paving.

The intensity of the rainfall decreases as the period over which the rainfall is taken is increased. For instance, a rainfall of 1 in may occur in a period of twenty minutes, being at the rate of 3 in per hour, but if a period of one hour is taken the fall during such lengthened time will be considerably less than 3 in. In towns where automatic rain gauges are installed and records kept, the required data can be abstracted, but in other cases it is necessary to estimate the quantity of rain which may have to be dealt with. It is impracticable to provide sewers to deal with the maximum quantity of rain which may possibly fall either in the form of waterspouts or abnormally heavy torrential rains, and the amount of risk which it is desirable to run must be settled after consideration of the details of each particular case. The following table, based principally upon observations taken at the Birmingham Observatory, shows the approximate rainfall which may be taken according to the time of concentration.

TABLE No. 7. INTENSITY OF RAINFALL DURING LIMITED PERIODS.

    Time of              Equivalent rate in inches per hour of aggregate
    Concentration.       rainfall during period of concentration.
                            A       B       C       D       E
     5 minutes ........   1.75    2.00    3.00     --      --
    10    "    ........   1.25    1.50    2.00     --      --
    15    "    ........   1.05    1.25    1.50     --      --
    20    "    ........   0.95    1.05    1.30    1.20    3.00
    25    "    ........   0.85    0.95    1.15     --      --
    30    "    ........   0.80    0.90    1.05    1.00    2.50
    35    "    ........   0.75    0.85    0.95     --      --
    40    "    ........   0.70    0.80    0.90     --      --
    45    "    ........   0.65    0.75    0.85     --      --
     1 hour    ........   0.50    0.60    0.70    0.75    1.80
     1-1/2 "   ........   0.40    0.50    0.60     --     1.40
     2    "    ........   0.30    0.40    0.50    0.50    1.10

The figures in column A will not probably be exceeded more than once in each year, those in column B will not probably be exceeded more than once in three years, while those in column C will rarely be exceeded at all. Columns D and E refer to the records tabulated by the Meteorological Office, the rainfall given in column D being described in their publication as "falls too numerous to require insertion," and those in column E as "extreme falls rarely exceeded." It must, however, be borne in mind that the Meteorological Office figures relate to records derived from all parts of the country, and although the falls mentioned may occur at several towns in any one year it may be many years before the same towns are again visited by storms of equal magnitude.

While it is convenient to consider the quantity of rainfall for which provision is to be made in terms of the rate of fall in inches per hour, it will be useful for the practical application of the figures to know the actual rate of flow of the storm water in the sewers at the point of concentration in cubic feet per minute per acre. This information is given in the following Table No. 8, which is prepared from the figures given in Table No. 7, and is applicable in the same manner.

TABLE No. 8. MAXIMUM FLOWS OF STORM WATER.

    Time of              Maximum storm water flow in cubic feet per min
    Concentration.       per acre of impervious area.
                            A       B       C       D       E
     5 minutes ........    106     121     181     --      --
    10    "    ........     75      91     121     --      --
    15    "    ........     64      75      91     --      --
    20    "    ........     57      64      79      73     181
    25    "    ........     51      57      70     --      --
    30    "    ........     48      54      64      61     151
    35    "    ........     45      51      57     --      --
    40    "    ........     42      48      54     --      --
    45    "    ........     39      45      51     --      --
     1 hour    ........     30      36      42      45     109
     1-1/2 "   ........     24      30      36     --       85
     2    "    ........     18      24      30      30      67

    1 inch of rain = 3,630 cub. feet per acre.

The amount of rainfall for which storage has to be provided is a difficult matter to determine; it depends on the frequency and efficiency of the overflows and the length of time during which the storm water has to be held up for tidal reasons. It is found that on the average the whole of the rain on a rainy day falls within a period of 2-1/2 hours; therefore, ignoring the relief which may be afforded by overflows, if the sewers are tide-locked for a period of 2-1/2 hours or over it would appear to be necessary to provide storage for the rainfall of a whole day; but in this case again it is permissible to run a certain amount of risk, varying with the length of time the sewers are tide-locked, because, first of all, it only rains on the average on about 160 days in the year, and, secondly, when it does rain, it may not be at the time when the sewers are tide-locked, although it is frequently found that the heaviest storms occur just at the most inconvenient time, namely, about high water.

Table No. 9 shows the frequency of heavy rain recorded during a period of ten years at the Birmingham Observatory, which, being in the centre of England, may be taken as an approximate average of the country.

TABLE No. 9. FREQUENCY OF HEAVY RAIN.

    Total Daily Rainfall.        Average Frequency of Rainfall.
    0.4 inches and over          155 times each year
    0.5    "                      93    "
    0.6    "                      68    "
    0.7    "                      50    "
    0.8    "                      33    "
    0.9    "                      22    "
    1.0    "                      17    "
    1.1    "                     Once each year
    1.2    "                     Once in 17 months
    1.25   "                       "   2 years
    1.3    "                       "   2-1/2 years
    1.4    "                       "   3-1/3 years
    1.5    "                       "   5 years
    1.6    "                       "   5 years
    1.7    "                       "   5 years
    1.8    "                       "   10 years
    1.9    "                       "   10 years
    2.0    "                       "   10 years

It will be interesting and useful to consider the records for the year 1903, which was one of the wettest years on record, and to compare those taken in Birmingham with the mean of those given in "Symons' Rainfall," taken at thirty-seven different stations distributed over the rest of the country.

TABLE No. 10. RAINFALL FOR 1903.

                                                          Mean of 37 stations in
                                          Birmingham.     England and Wales.
    Daily rainfall of 2 in and over ....     None              1 day
    Daily rainfall of 1 in and over ....    3 days             6 days
    Daily rainfall of 1/2 in and over ..   17 days            25 days
    Number of rainy days ...............  177 days           211 days
    Total rainfall .....................  33.86 in           44.89 in
    Amount per rainy day ...............   0.19 in            0.21 in

The year 1903 was an exceptional one, but the difference existing between the figures in the above table and the average figures in Table 9 is very marked, and serves to emphasise the necessity for close investigation in each individual case. It must be further remembered that the wettest year is not necessarily the year of the heaviest rainfalls, and it is the heavy rainfalls only which affect the design of sewerage works.

CHAPTER VIII. STORM WATER IN SEWERS.
If the whole area of the district is not impermeable the percentage which is so must be carefully estimated, and will naturally vary in each case. The means of arriving at an estimate will also probably vary considerably according to circumstances, but the following figures, which relate to investigations recently made by the writer, may be of interest.

In the town, which has a population of 10,000 and an area of 2,037 acres, the total length of roads constructed was 74,550 lineal feet, and their average width was 36 ft, including two footpaths. The average density of the population was 4.9 people per acre. Houses were erected adjoining a length of 43,784 lineal feet of roads, leaving 30,766 lineal feet, which for distinction may be called "undeveloped"--that is, the land adjoining them was not built over. Dividing the length of road occupied by houses by the total number of the inhabitants of the town, the average length of road per head was 4.37 ft, and assuming five people per house and one house on each side of the road we get ten people per two houses opposite each other. Then 10 x 4.37 = 43.7 lineal feet of road frontage to each pair of opposite houses.

After a very careful inspection of the whole town, the average area of the impermeable surfaces appertaining to each house was estimated at 675 sq. ft, of which 300 sq. ft was apportioned to the front roof and garden paths and 375 sq. ft to the back roof and paved yards. Dividing these figures by 43.7 lin ft of road frontage per house, we find that the effective width of the impermeable roadway is increased by 6 ft 10 in for the front portions of each house, and by a width of 8 ft 7 in for the back portions, making a total width of 36 ft + 2(6 ft 10 in) + 2(8 ft 7 in) = 66 ft 10 in, say 67 ft. On this basis the impermeable area in the town therefore equals: 43,784 lin ft x 67 ft = 2,933,528; and 30,766 lin ft x 36 ft = 1,107,576. Total, 4,041,104 sq. ft, or 92.77 acres. As the population is 10,000 the impermeable area equals 404, say 400, sq. ft per head, or (92.77 x 100)/2,037 = 4.5 per cent. of the whole area of the town.

It must be remembered that when rain continues for long periods, ground which in the ordinary way would generally be considered permeable becomes soaked and eventually becomes more or less impermeable. Mr. D. E. Lloyd-Davies, M.Inst.C.E., gives two very interesting diagrams in the paper previously referred to, which show the average percentage of effective impermeable area according to the population per acre. This information, which is applicable more to large towns, has been embodied in Fig. 16, from which it will be seen that, for storms of short duration, the proportion of impervious areas equals 5 per cent. with a population of 4.9 per acre, which is a very close approximation to the 4.5 per cent. obtained in the example just described.

Where the houses are scattered at long intervals along a road the better way to arrive at an estimate of the quantity of storm water which may be expected is to ascertain the average impervious area of, or appertaining to, each house, and divide it by five, so as to get the area per head. Then the flow off from any section of road is directly obtained from the sum of the impervious area due to the length of the road, and that due to the population distributed along it.

[Illustration: FIG. 16.--VARIATION IN AVERAGE PERCENTAGE OF EFFECTIVE IMPERMEABLE AREA ACCORDING TO DENSITY OF POPULATION.]
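The arithmetic of the foregoing estimate, and the maximum storm-water flow which follows from it by the method described in the last chapter, may be illustrated by a short sketch. The following Python lines use the figures of the worked example; the function names, and the choice of a thirty-minute time of concentration with the column A intensity, are assumptions made only for the purpose of illustration.

    SQ_FT_PER_ACRE = 43560
    CUBIC_FT_PER_ACRE_INCH = 3630    # 1 inch of rain over 1 acre

    def impermeable_acres(developed_ft, undeveloped_ft, road_width_ft, added_width_ft):
        """Impermeable area in acres: every road at its full width, plus an
        equivalent added width of roofs, paths and yards along developed roads."""
        sq_ft = (developed_ft * (road_width_ft + added_width_ft)
                 + undeveloped_ft * road_width_ft)
        return sq_ft / SQ_FT_PER_ACRE

    def storm_flow_cfm(impermeable_fraction, district_acres, intensity_in_per_hr):
        """Maximum storm-water flow in cubic feet per minute, taking the
        rainfall intensity appropriate to the time of concentration."""
        return (impermeable_fraction * district_acres * intensity_in_per_hr
                * CUBIC_FT_PER_ACRE_INCH / 60.0)

    # 43,784 ft of developed and 30,766 ft of undeveloped road, 36 ft wide,
    # with roofs, paths and yards equivalent to an added width of 31 ft
    # (2 x 6 ft 10 in + 2 x 8 ft 7 in, making 67 ft in all).
    area = impermeable_acres(43784, 30766, 36, 31)
    fraction = area / 2037.0
    print(area, fraction * 100)           # about 92.8 acres, 4.5 per cent
    print(area * SQ_FT_PER_ACRE / 10000)  # about 404 sq. ft per head

    # With a 30-minute time of concentration, column A of Table No. 7 gives
    # 0.80 in. per hour; compare 48 cub. ft per min per acre in Table No. 8.
    print(storm_flow_cfm(fraction, 2037, 0.80))   # roughly 4,500 cub. ft per min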
In addition to being undesirable from a sanitary point of view, it is rarely economical to construct special storm water drains, but in all cases where they exist, allowance must be made for any rain that may be intercepted by them. Short branch sewers constructed for the conveyance of foul water alone are usually 9 in or 12 in in diameter, not because those sizes are necessary to convey the quantity of liquid which may be expected, but because it is frequently undesirable to provide smaller public sewers, and there is generally sufficient room for the storm water without increasing the size of the sewer. If this storm water were conveyed in separate sewers the cost would be double, as two sewers would be required in the place of one. In the main sewers the difference is not so great, but generally one large sewer will be more economical than two smaller ones.

Where duplicate sewers are provided and arranged, so that the storm water sewer takes the rain-water from the roads, front roofs and gardens of the houses, and the foul water sewer takes the rain-water from the back roofs and paved yards, it was found in the case previously worked out in detail that in built-up roads a width of 36 ft + 2 (8 ft 7 in) = 53 ft 2 in, or, say, 160 sq. ft per lineal yard of road would drain to the storm water sewer, and a width of 2 (6 ft 10 in) = 13 ft 8 in, or, say, 41 sq. ft per lineal yard of road to the foul water sewer. This shows that even if the whole of the rain which falls on the impervious areas flows off, only just under 80 per cent. of it would be intercepted by the special storm water sewers. Taking an average annual rainfall of 30 in, of which 75 per cent. flows off, the quantity reaching the storm water sewer in the course of a year from each lineal yard of road would be (30/12) x 160 x (75/100) = 300 cubic feet = 1,875 gallons.

[Illustration: FIG. 17.--SECTION OF "LEAP WEIR" OVERFLOW]

The cost of constructing a separate surface water system will vary, but may be taken at an average of, approximately, 15s. 0d. per lineal yard of road. To repay this amount in thirty years at 4 per cent. would require a sum of 10.42d., say 10-1/2d., per annum; that is to say, the cost of taking the surface water into special sewers is (10-1/2d. x 1,000)/1,875 = 5.6d., say 6d., per 1,000 gallons.

If the sewage has to be pumped, the extra cost of pumping by reason of the increased quantity of surface water can be looked at from two different points of view:--

1. The net cost of the gas or other fuel or electric current consumed in lifting the water.

2. The cost of the fuel consumed plus wages, stores, etc., and a proportion of the sum required to repay the capital cost of the pumping station and machinery.

The extra cost of the sewers to carry the additional quantity of storm water might also be taken into account by working out and preparing estimates for the alternative schemes. The actual cost of the fuel may be taken at approximately 1/4 d. per 1,000 gallons. The annual works and capital charges, exclusive of fuel, should be divided by the normal quantity of sewage pumped per annum, rather than by the maximum quantity which the pumps would lift if they were able to run continuously during the whole time. For a town of about 10,000 inhabitants these charges may be taken at 1-1/4 d. per 1,000 gallons, which makes the total cost of pumping, inclusive of capital charges, 1-1/2 d. per 1,000 gallons.
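The repayment and cost figures just quoted follow from the ordinary annuity calculation, and may be put into a few lines. The following Python sketch is illustrative only; the function names are invented, and the figures are those given above.

    def annual_repayment(principal, rate, years):
        """Annuity required to repay a loan in equal yearly instalments."""
        return principal * rate / (1 - (1 + rate) ** -years)

    # Separate storm-water sewer: 15s. (180d.) per lineal yard of road, repaid
    # over thirty years at 4 per cent; 1,875 gallons of rain reach it per yard
    # of built-up road per year (see the worked figures above).
    pence_per_yard_per_year = annual_repayment(180.0, 0.04, 30)
    cost_per_1000_gal = pence_per_yard_per_year * 1000 / 1875
    print(pence_per_yard_per_year)   # about 10.4d. per yard per annum
    print(cost_per_1000_gal)         # about 5.6d. per 1,000 gallons

    # Pumping the same water with the sewage is put above at about 1-1/2d.
    # per 1,000 gallons inclusive of capital charges.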
Even if the extra cost of enlarging the sewers is added to this sum it will still be considerably below the sum of 6 d., which represents the cost of providing a separate system for the surface water. Unless it is permissible for the sewage to have a free outlet to the sea at all states of the tide, the provision of effective storm overflows is a matter of supreme importance. Not only is it necessary for them to be constructed in well- considered positions, but they must be effective in action. A weir constructed along one side of a manhole and parallel to the sewer is rarely efficient, as in times of storm the liquid in the sewer travels at a considerable velocity, and the greater portion of it, which should be diverted, rushes past the weir and continues to flow in the sewer; and if, as is frequently the case, it is desirable that the overflowing liquid should be screened, and vertical bars are fixed on the weir for the purpose, they block the outlet and render the overflow practically useless. Leap weir overflows are theoretically most suitable for separating the excess flow during times of storm, but in practice they rarely prove satisfactory. This is not the fault of the system, but is, in the majority of the cases, if not all, due to defective designing. The general arrangement of a leap weir overflow is shown in Fig. 17. In normal circumstances the sewage flowing along the pipe A falls down the ramp, and thence along the sewer B; when the flow is increased during storms the sewage from A shoots out from the end of the pipe into the trough C, and thence along the storm-water sewer D. In order that it should be effective the first step is to ascertain accurately the gradient of the sewer above the proposed overflow, then, the size being known, it is easy to calculate the velocity of flow for the varying depths of sewage corresponding with minimum flow, average dry weather flow, maximum dry weather flow, and six times the dry weather flow. The natural curve which the sewage would follow in its downward path as it flowed out from the end of the sewer can then be drawn out for the various depths, taking into account the fact that the velocity at the invert and sides of the sewer is less than the average velocity of flow. The ramp should be built in accordance with the calculated curves so as to avoid splashing as far as possible, and the level of the trough C fixed so that when it is placed sufficiently far from A to allow the dry weather flow to pass down the ramp it will at the same time catch the storm water when the required dilution has taken place. Due regard must be had to the altered circumstances which will arise when the growth of population occurs, for which provision is made in the scheme, so that the overflow will remain efficient. The trough C is movable, so that the width of the leap weir may be adjusted from time to time as required. The overflow should be frequently inspected, and the accumulated rubbish removed from the trough, because sticks and similar matters brought down by the sewer will probably leap the weir instead of flowing down the ramp with the sewage. It is undesirable to fix a screen in conjunction with this overflow, but if screening is essential the operation should be carried out in a special manhole built lower down the course of the storm-water sewer. Considerable wear takes place on the ramp, which should, therefore, be constructed of blue Staffordshire or other hard bricks. 
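Returning to the setting out of the ramp, the natural curve referred to above is simply the path of a falling jet leaving the end of the pipe, and can be plotted from the velocity of flow. The following Python sketch is a minimal illustration under that assumption; the function name and the specimen velocities are invented, and in practice allowance would also be made for the lower velocity at the invert and sides of the sewer, as noted above.

    import math

    G = 32.2   # feet per second per second

    def nappe_curve(velocity_fps, drop_ft, steps=10):
        """Points (horizontal distance, vertical drop) in feet on the path of
        sewage leaving the end of the pipe horizontally at the given velocity."""
        total_time = math.sqrt(2 * drop_ft / G)
        points = []
        for i in range(steps + 1):
            t = total_time * i / steps
            points.append((velocity_fps * t, 0.5 * G * t * t))
        return points

    # Dry-weather flow at, say, 2 ft per second and storm flow at 5 ft per
    # second over a 3 ft drop: the storm jet reaches out two and a half times
    # as far, which is what lets it leap the weir into the trough.
    for v in (2.0, 5.0):
        print(v, nappe_curve(v, 3.0)[-1])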
The ramp should terminate in a stone block to resist the impact of the falling water, and of the stones which may be brought with it, which would crack stoneware pipes if such were used.

In cases where it is not convenient to arrange a sudden drop in the invert of the sewer, as is required for a leap weir overflow, the excess flow of storm-water may be diverted by an arrangement similar to that shown in Fig. 18. [Footnote: PLATE IV] In this case calculations must be made to ascertain the depth at which the sewage will flow in the pipes at the time it is diluted to the required extent; this gives the level of the lip of the diverting plate. The ordinary sewage flow will pass steadily along the invert of the sewer under the plate until it rises up to that height, when the opening becomes a submerged orifice, and its discharging capacity becomes less than when the sewage was flowing freely. This restricts the flow of the sewage, and causes it to head up on the upper side of the overflow in an endeavour to force through the orifice the same quantity as is flowing in the sewer; but as it rises the velocity carries the upper layer of the water forward up the diverting plate and thence into the storm overflow drain. A deep channel is desirable, so as to govern the direction of flow at the time the overflow is in action. The diverting trough is movable, and its height above the invert can be increased easily, as may be necessary from time to time.

With this arrangement the storm-water can easily be screened before it is allowed to pass out by fixing an inclined screen in the position shown in Fig. 18. [Footnote: PLATE IV] It is loose, as is the trough, and both can be lifted out when it is desired to have access to the invert of the sewer. The screen is self-cleansing, as any floating matter which may be washed against it does not stop on it and reduce its discharging capacity, but is gradually drawn down by the flow of the sewage towards the diverting plate, under which it will be carried. The heavier matter in the sewage which flows along the invert will pass under the plate and be carried through to the outfall works, instead of escaping by the overflow, and perhaps creating a nuisance at that point.

CHAPTER IX. WIND AND WINDMILLS.

In small sewerage schemes where pumping is necessary, the amount expended in the wages of an attendant who must give his whole attention to the pumping station is so much in excess of the cost of power and the sum required for the repayment of the loan for the plant and buildings that it is desirable, for the economical working of the scheme, to curtail the wages bill as far as possible. If oil or gas engines are employed the man cannot be absent for many minutes together while the machinery is running, and when it is not running, as for instance during the night, he must be prepared to start the pumps at very short notice, should a heavy rain storm increase the flow in the sewers to such an extent that the pump well or storage tank becomes filled up.
It is a simple matter to arrange floats whereby the pump may be connected to or disconnected from a running engine by means of a friction clutch, so that when the level of the sewage in the pump well reaches the highest point desired the pump may be started, and when it is lowered to a predetermined low water level the pump will stop; but it is impracticable to control the engine in the same way, so that although the floats are a useful accessory to the plant during the temporary absence of the man in charge they will not obviate his more or less constant attendance. An electric motor may be controlled by a float, but in many cases trouble is experienced with the switch gear, probably caused by its exposure to the damp air. In all cases an alarm float should be fixed, which would rise as the depth of the sewage in the pump well increased, until the top water level was reached, when the float would make an electrical contact and start a continuous ringing warning bell, which could be placed either at the pumping station or at the man's residence. On hearing the bell the man would know the pump well was full, and that he must immediately repair to the pumping station and start the pumps, otherwise the building would be flooded. If compressed air is available a hooter could be fixed, which would be heard for a considerable distance from the station.

[Illustration: PLATE IV. "DIVERTING PLATE" OVERFLOW. To face page 66.]

It is apparent, therefore, that a pumping machine is wanted which will work continuously without attention, and will not waste money when there is nothing to pump. There are two sources of power in nature which might be harnessed to give this result--water and wind. The use of water on such a small scale is rarely economically practicable, as even if the water is available in the vicinity of the pumping station, considerable work has generally to be executed at the point of supply, not only to store the water in sufficient bulk at such a level that it can be usefully employed, but also to lead it to the power-house, and then to provide for its escape after it has done its work. The power-house, with its turbines and other machinery, involves a comparatively large outlay, but if the pump can be directly driven from the turbines, so that the cost of attendance is reduced to a minimum, the system should certainly receive consideration.

Although the wind is always available in every district, it is more frequent and powerful on the coast than inland. The velocity of the wind is ever varying within wide limits, and although the records usually give the average hourly velocity, it is not constant even for one minute. Windmills of the modern type, consisting of a wheel composed of a number of short sails fixed to a steel framework upon a braced steel tower, have been used for many years for driving machinery on farms, and less frequently for pumping water for domestic use. In a very few cases it has been utilised for pumping sewage, but there is no reason why, under proper conditions, it should not be employed to a greater extent.

The reliability of the wind for pumping purposes may be gauged from the figures in the following table, No. 11, which were observed in Birmingham, and comprise a period of ten years; they are arranged in order corresponding with the magnitude of the annual rainfall:--

TABLE No. 11. MEAN HOURLY VELOCITY OF WIND.
Reference | Rainfall | Number of days in year during which the mean hourly
 Number   | for year |         velocity of the wind was below
          | (inches) | 6 m.p.h. | 10 m.p.h. | 15 m.p.h. | 20 m.p.h.
----------+----------+----------+-----------+-----------+-----------
    1     |  33.86   |    16    |     88    |    220    |    314
    2     |  29.12   |    15    |    120    |    260    |    334
    3     |  28.86   |    39    |    133    |    263    |    336
    4     |  26.56   |    36    |    126    |    247    |    323
    5     |  26.51   |    34    |    149    |    258    |    330
    6     |  26.02   |    34    |    132    |    262    |    333
    7     |  25.16   |    33    |    151    |    276    |    332
    8     |  22.67   |    46    |    155    |    272    |    329
    9     |  22.30   |    26    |    130    |    253    |    337
   10     |  21.94   |    37    |    133    |    276    |    330
----------+----------+----------+-----------+-----------+-----------
 Average  |          |   31.4   |   131.7   |   250.7   |   330.8

It may be of interest to examine the monthly figures for the two years included in the foregoing table, which had the least and the most wind respectively, such figures being set out in the following table:

TABLE No. 12. MONTHLY ANALYSIS OF WIND.

Number of days in each month during which the mean velocity of the wind was respectively below the value mentioned hereunder.

Month |  Year of least wind (No. 8)   |   Year of most wind (No. 1)
      |   6      10      15      20   |   6      10      15      20
      | m.p.h. m.p.h.  m.p.h.  m.p.h. | m.p.h. m.p.h.  m.p.h.  m.p.h.
------+-------------------------------+------------------------------
Jan.  |   5      11      23      27   |   3       6      15      23
Feb.  |   5      19      23      28   |   0       2       8      16
Mar.  |   5      10      20      23   |   0       1      11      18
April |   6      16      23      28   |   1       7      16      26
May   |   1      14      24      30   |   3      11      24      31
June  |   1      12      22      26   |   1      10      21      27
July  |   8      18      29      31   |   1      12      25      29
Aug.  |   2       9      23      30   |   1       9      18      30
Sept. |   1      13      25      30   |   1      12      24      28
Oct.  |   5      17      21      26   |   0       4      16      29
Nov.  |   6      11      20      26   |   3       7      19      28
Dec.  |   1       5      19      24   |   2       7      23      29
------+-------------------------------+------------------------------
Total |  46     155     272     329   |  16      88     220     314

During the year of least wind there were only eight separate occasions upon which the average hourly velocity of the wind was less than six miles per hour for two consecutive days, and on two occasions only was it less than six miles per hour on three consecutive days. It must be remembered, however, that this does not by any means imply that during such days the wind did not rise above six miles per hour, and the probability is that a mill which could be actuated by a six-mile wind would have been at work during part of the time. It will further be observed that the greatest differences between these two years occur in the figures relating to the light winds. The number of days upon which the mean hourly velocity of the wind exceeds twenty miles per hour remains fairly constant year after year.

As the greatest difficulty in connection with pumping sewage is the influx of storm water in times of rain, it will be useful to notice the rainfall at those times when the wind is at a minimum. From the following figures (Table No. 13) it will be seen that, generally speaking, when there is very little wind there is very little rain. Taking the ten years enumerated in Table No. 11, we find that out of the 314 days on which the wind averaged less than six miles per hour only forty-eight of them were wet, and then the rainfall only averaged .13 in on those days.

TABLE No. 13. WIND LESS THAN 6 M.P.H.

 Ref. No.  |  Total No.  |  Days on   |        | Rainfall on each
from Table |  of days in |  which no  | Rainy  | rainy day in
  No. 11.  |  each year. | rain fell. | days.  | inches.
-----------+-------------+------------+--------+----------------------------------
     1     |     16      |     14     |   2    | .63 and .245
     2     |     15      |     13     |   2    | .02 and .02
     3     |     39      |     34     |   5    | .025, .01, .26, .02 and .03
     4     |     36      |     29     |   7    | .02, .08, .135, .10, .345, .18 and .02
     5     |     34      |     28     |   6    | .10, .43, .01, .07, .175 and .07
     6     |     32      |     27     |   5    | .10, .11, .085, .04 and .135
     7     |     33      |     31     |   2    | .415 and .70
     8     |     46      |     40     |   6    | .07, .035, .02, .06, .13 and .02
     9     |     26      |     20     |   6    | .145, .20, .33, .125, .015 and .075
    10     |     37      |     30     |   7    | .03, .23, .165, .02, .095, .045 and .02
-----------+-------------+------------+--------+----------------------------------
   Total   |    314      |    266     |  48    | Average rainfall on each of the 48 days = .13 in

The greater the height of the tower which carries the mill the greater will be the amount of effective wind obtained to drive the mill, but at the same time there are practical considerations which limit the height. In America many towers are as much as 100 ft high, but ordinary workmen do not voluntarily climb to such a height, with the result that the mill is not properly oiled. About 40 ft is the usual height in this country, and 60 ft should be used as a maximum.

Mr. George Phelps, in a paper read by him in 1906 before the Association of Water Engineers, stated that it was safe to assume that on an average a fifteen miles per hour wind was available for eight hours per day, and from this he gave the following figures as representing the approximate average duty with a lift of 100 ft, including friction:--

TABLE No. 14. DUTY OF WINDMILL.

Diameter of Wheel (feet): 10, 12, 14, 16, 18, 20, 25, 30, 35, 40.

The following table gives the result of tests carried out by the United States Department of Agriculture at Cheyenne, Wyo., with a 14 ft diameter windmill under differing wind velocities:--

TABLE No. 15. POWER OF 14-FT WINDMILL IN VARYING WINDS.

Velocity of Wind (miles per hour): 0-5, 6-10, 11-15, 16-20, 21-25, 26-30, 31-35.

It will be apparent from the foregoing figures that practically the whole of the pumping for a small sewerage works may be done by means of a windmill, but it is undesirable to rely entirely upon such a system, even if two mills are erected so that the plant will be in duplicate, because there is always the possibility, although it may be remote, of a lengthened period of calm, when the sewage would accumulate; and, further, the Local Government Board would not approve the scheme unless it included an engine, driven by gas, oil, or other mechanical power, for emergencies. In the case of water supply the difficulty may be overcome by providing large storage capacity, but this cannot be done for sewage without creating an intolerable nuisance. In the latter case the storage should not be less than twelve hours' dry weather flow, nor more than twenty-four. With a well-designed mill, as has already been indicated, the wind will, for the greater part of the year, be sufficient to lift the whole of the sewage and storm-water, but, if it is allowed to do so, the standby engine will deteriorate for want of use to such an extent that when urgently needed it will not be effective. It is, therefore, desirable that the attendant should run the engine at least once in every three days to keep it in working order. If it can be conveniently arranged, it is a good plan for the attendant to run the engine for a few minutes to entirely empty the pump well about six o'clock each evening.
The bulk of the day's sewage will then have been delivered, and can be disposed of when it is fresh, while at the same time the whole storage capacity is available for the night flow, and any rainfall which may occur, thus reducing the chances of the man being called up during the night. About 22 per cent. of the total daily dry weather flow of sewage is delivered between 7 p.m. and 7 a.m.

The first cost of installing a small windmill is practically the same as for an equivalent gas or oil engine plant, so that the only advantage to be looked for will be in the maintenance, which in the case of a windmill is a very small matter, and the saving which may be obtained by the reduction of the amount of attendance necessary. Generally speaking, a mill 20 ft in diameter is the largest which should be used, as when this size is exceeded it will be found that the capital cost involved is incompatible with the value of the work done by the mill, as compared with that done by a modern internal combustion engine. Mills smaller than 8 ft in diameter are rarely employed, and then only for small work, such as a 2 1/2 in pump and a 3-ft lift. The efficiency of a windmill, measured by the number of square feet of annular sail area, decreases with the size of the mill, the 8 ft, 10 ft, and 12 ft mills being the most efficient sizes. When the diameter exceeds 12 ft the efficiency rapidly falls off, because the peripheral velocity remains constant for any particular velocity or pressure of the wind, and as every foot increase in the diameter of the wheel makes an increase of over 3 ft in the length of the circumference, the greater the diameter the less the number of revolutions in any given time; and consequently the kinetic flywheel action which is so valuable in the smaller sizes is to a great extent lost in the larger mills.

Any type of pump can be used, but the greatest efficiency will be obtained by adopting a single-acting pump with a short stroke, thus avoiding the liability, inherent in a long pump rod, to buckle under compression, and obviating the use of a large number of guides, which absorb a large part of the power given out by the mill.

Although some of the older mills in this country are of foreign origin, there are several British manufacturers turning out well-designed and strongly-built machines in large numbers. Fig. 19 represents the general appearance and Fig. 20 the details of the type of mill made by the well-known firm of Duke and Ockenden, of Ferry Wharf, Littlehampton, Sussex. This firm has erected over 400 windmills, which, after the test of time, have proved thoroughly efficient. From Fig. 20 it will be seen that the power applied by the wheel is transmitted through spur and pinion gearing of 2 1/2 ratio to a crank shaft, the gear wheel having internal annular teeth of the involute type, giving a greater number of teeth always in contact than is the case with external gears. This minimises wear, which is an important matter, as it is difficult to properly lubricate these appliances, and they are exposed to and have to work in all sorts of weather.

[Illustration: Fig. 19.--General View of Modern Windmill.]

[Illustration: Fig. 20.--Details of Windmill Manufactured by Messrs. Duke and Ockenden, Littlehampton.]

It will be seen that the strain on the crank shaft is taken by a bent crank which disposes the load centrally on the casting, and avoids an overhanging crank disc, which has been an objectionable feature in some other types.
The position of the crank shaft relative to the rocker pin holes is studied to give a slow upward motion to the rocker with a more rapid downward stroke, the difference in speed being most marked in the longest stroke, where it is most required. In order to transmit the circular internal motion a vertical connecting rod in compression is used, which permits of a simple method of changing the length of stroke by merely altering the pin in the rocking lever, the result being that the pump rod travels in a vertical line.

The governing is entirely automatic. If the pressure on the wind wheel, which it will be seen is set off the centre line of the mill and tower, exceeds that found desirable--and this can be regulated by means of a spring on the fantail--the windmill automatically turns on the turn-table and presents an ellipse to the wind instead of a circular face, thus decreasing the area exposed to the wind gradually until the wheel reaches its final position, or is hauled out of gear, when the edges only are opposed to the full force of the wind. The whole weight of the mill is taken upon a ball-bearing turn-table to facilitate instant "hunting" of the mill to the wind, to enable it to take advantage of all changes of direction. The pump rod in the windmill tower is provided with a swivel coupling, enabling the mill head to turn completely round without altering the position of the rod.

CHAPTER X. THE DESIGN OF SEA OUTFALLS.

The detail design of a sea outfall will depend upon the level of the conduit with reference to the present surface of the shore, whether the beach is being eroded or made up, and, if any part of the structure is to be constructed above the level of the shore, whether it is likely to be subject to serious attack by waves in times of heavy gales. If there is a probability of the direction of currents being affected by the construction of a solid structure, or of any serious scour being caused, the design must be prepared accordingly.

While there are examples of outfalls constructed of glazed stoneware socketed pipes surrounded with concrete, as shown in Fig. 21, cast iron pipes are used in the majority of cases. There is considerable variation in the design of the joints for the latter class of pipes, some of which are shown in Figs. 22, 23, and 24. Spigot and socket joints (Fig. 22), with lead run in, or even with rod lead or any of the patent forms caulked in cold, are unsuitable for use below high-water mark on account of the water which will most probably be found in the trench. Pipes having plain turned and bored joints are liable to be displaced if exposed to the action of the waves, but if such joints are also flanged, as Fig. 24, or provided with lugs, as Fig. 23, great rigidity is obtained when they are bolted up; in addition to which the joints are easily made watertight.

When a flange is formed all round the joint, it is necessary, in order that its thickness may be kept within reasonable limits, to provide bolts at frequent intervals. A gusset piece to stiffen the flange should be formed between each hole and the next, and the bolt holes should be arranged so that when the pipes are laid there will not be a hole at the bottom on the vertical axis of the pipe, as when the pipes are laid in a trench below water level it is not only difficult to insert the bolt, but almost impracticable to tighten up the nut afterwards. The pipes should be laid so that the two lowest bolt holes are placed equidistant on each side of the centre line, as shown in the end views of Figs.
Nos. 23 and 24.

[Illustration: Fig. 21.--Stoneware Pipe and Concrete Sea Outfall.]

With lug pipes, fewer bolts are used, and the lugs are made specially strong to withstand the strain put upon them in bolting up the pipes. These pipes are easier and quicker to joint under water than are the flanged pipes, so that their use is a distinct advantage when the hours of working are limited. In some cases gun-metal bolts are used, as they resist the action of sea water better than steel, but they add considerably to the cost of the outfall sewer, and the principal advantage appears to be that they are possibly easier to remove than iron or steel ones would be if at any time it was required to take out any pipe which may have been accidentally broken. On the other hand, there is a liability of severe corrosion of the metal taking place by reason of galvanic action between the gun-metal and the iron, set up by the sea water in which they are immersed. If the pipes are not to be covered with concrete, and are thus exposed to the action of the sea water, particular care should be taken to see that the coating by Dr. Angus Smith's process is perfectly applied to them.

[Illustration: Fig. 22.--Spigot and Socket Joint for Cast Iron Pipes.]

[Illustration: Fig. 23.--Lug Joint for Cast Iron Pipes.]

[Illustration: Fig. 24.--Turned, Bored, and Flanged Joint for Cast Iron Pipes.]

Steel pipes are, on the whole, not so suitable as cast iron. They are, of course, obtainable in long lengths and are easily jointed, but their lightness compared with cast iron pipes, which is their great advantage in transport, is a disadvantage in a sea outfall, where the weight of the structure adds to its stability. The extra length of steel pipes necessitates a greater extent of trench being excavated at one time, which must be well timbered to prevent the sides falling in. On the other hand, cast iron pipes are more liable to fracture by heavy stones being thrown upon them by the waves, but this is a contingency which does not frequently occur in practice. According to Trautwine, the cast iron for pipes to resist sea water should be close-grained, hard, white metal. In such metal the small quantity of contained carbon is chemically combined with the iron, but in the darker or mottled metals it is mechanically combined, and such iron soon becomes soft, like plumbago, under the influence of sea water. Hard white iron has been proved to resist sea water for forty years without deterioration, whether it is continually under water or alternately wet and dry.

Several types of sea outfalls are shown in Figs. 25 to 31.[1] In the example shown in Fig. 25 a solid rock bed occurred a short distance below the sand, which was excavated so as to allow the outfall to be constructed on the rock. Anchor bolts with clevis heads were fixed into the rock, and then, after a portion of the concrete was laid, iron bands, passing around the cast iron pipes, were fastened to the anchors. This construction would not be suitable below low-water mark. Fig. 26 represents the Aberdeen sea outfall, consisting of cast iron pipes 7 ft in diameter, which are embedded in a heavy concrete breakwater 24 ft in width, except at the extreme end, where it is 30 ft wide. The 4 in wrought iron rods are only used to the last few pipes, which were in 6 ft lengths instead of 9 ft, as were the remainder. Fig.
27 shows an inexpensive method of carrying small pipes, the slotted holes in the head of the pile allowing the pipes to be laid in a straight line, even if the pile is not driven quite true, and if the level of the latter is not correct it can be adjusted by inserting a packing piece between the cradle and the head. Great Crosby outfall sewer into the Mersey is illustrated in Fig. 28. The piles are of greenheart, and were driven to a solid foundation. The 1 3/4 in sheeting was driven to support the sides of the excavation, and was left in when the concrete was laid. Light steel rails were laid under the sewer, in continuous lengths, on steel sleepers and to 2 ft gauge. The invert blocks were of concrete, and the pipes were made of the same material, but were reinforced with steel ribs. The Waterloo (near Liverpool) sea outfall is shown in Fig. 31. [Footnote 1: Plate V.]

Piling may be necessary either to support the pipes or to keep them secure in their proper position, but where there is a substratum of rock the pipes may be anchored, as shown in Figs. 25 and 26. The nature of the piling to be adopted will vary according to the character of the beach. Figs. 27, 29, 30, and 31 show various types. With steel piling and bearers, as shown in Fig. 29, it is generally difficult to drive the piles with such accuracy that the bearers may be easily bolted up through the holes provided in the piles, and, if the holes are not drilled in the piles until after they are driven to their final position, considerable time is occupied, and perhaps a tide lost, in the attempt to drill them below water. There is also the difficulty of tightening up the bolts when the sewer is partly below the surface of the shore, as shown. In both the types shown in Figs. 29 and 30 it is essential that the piles and the bearers should abut closely against the pipes; otherwise the shock of the waves will cause the pipes to move and hammer against the framing, and thus lead to failure of the structure.

Piles similar to Fig. 31 can only be fixed in sand, as was the case at Waterloo, because they must be absolutely true to line and level, otherwise the pipes cannot be laid in the cradles. The method of fixing these piles is described by Mr. Ben Howarth (Minutes of Proceedings of Inst.C.E., Vol. CLXXV.) as follows:--"The pile was slung vertically into position from a four-legged derrick, two legs of which were on each side of the trench; a small winch attached to one pair of the legs lifted and lowered the pile, through a block and tackle. When the pile was ready to be sunk, a 2 in iron pipe was let down the centre, and coupled to a force-pump by means of a hose; a jet of water was then forced down this pipe, driving the sand and silt away from below the pile. The pile was then rotated backwards and forwards about a quarter of a turn, by men pulling on the arms; the pile, of course, sank by its own weight, the water-jet driving the sand up through the hollow centre and into the trench, and it was always kept vertical by the sling from the derrick. As soon as the pile was down to its final level the ground was filled in round the arms, and in this running sand the pile became perfectly fast and immovable a few minutes after the sinking was completed. The whole process, from the first slinging of the pile to the final setting, did not take more than 20 or 25 minutes."

[Illustration: PLATE V. Fig. 25--OUTFALL ON ROCK BED. Fig. 26--ABERDEEN SEA OUTFALL. Fig. 27--SMALL PIPES ON PILES. Fig. 28--GREAT CROSBY SEA OUTFALL. Fig. 29--CAST IRON PIPE ON STEEL PILES AND BEARERS. Fig.
31--WATERLOO (LIVERPOOL) SEA OUTFALL.] (_To face page 80_.) Screw piles may be used if the ground is suitable, but, if it is boulder clay or similar material, the best results will probably be obtained by employing rolled steel joists as piles. CHAPTER XI. THE ACTION OF SEA WATER ON CEMENT. Questions are frequently raised in connection with sea-coast works as to whether any deleterious effect will result from using sea-water for mixing the concrete or from using sand and shingle off the beach; and, further, whether the concrete, after it is mixed, will withstand the action of the elements, exposed, as it will be, to air and sea-water, rain, hot sun, and frosts. Some concrete structures have failed by decay of the material, principally between high and low water mark, and in order to ascertain the probable causes and to learn the precautions which it is necessary to take, some elaborate experiments have been carried out. To appreciate the chemical actions which may occur, it will be as well to examine analyses of sea-water and cement. The water of the Irish Channel is composed of Sodium chloride.................... 2.6439 per cent. Magnesium chloride................. 0.3150 " " Magnesium sulphate................. 0.2066 " " Calcium sulphate................... 0.1331 " " Potassium chloride................. 0.0746 " " Magnesium bromide.................. 0.0070 " " Calcium carbonate.................. 0.0047 " " Iron carbonate..................... 0.0005 " " Magnesium nitrate.................. 0.0002 " " Lithium chloride................... Traces. Ammonium chloride.................. Traces. Silica chloride.................... Traces. Water.............................. 96.6144 -------- 100.0000 An average analysis of a Thames cement may be taken to be as follows:-- Silica................................ 23.54 per cent. Insoluble residue (sand, clay, etc.)............................ 0.40 " Alumina and ferric oxide............... 9.86 " Lime.................................. 62.08 " Magnesia............................... 1.20 " Sulphuric anhydride.................... 1.08 " Carbonic anhydride and water........... 1.34 " Alkalies and loss on analysis.......... 0.50 " ----- 100.00 The following figures give the analysis of a sample of cement expressed in terms of the complex compounds that are found:-- Sodium silicate (Na2SiO3)........ 3.43 per cent. Calcium sulphate (CaSO4)......... 2.45 " Dicalcium silicate (Ca2SiO4).... 61.89 " Dicalcium aluminate (Ca2Al2O5).. 12.14 " Dicalcium ferrate (Ca2Fe2O5)..... 4.35 " Magnesium oxide (MgO)............ 0.97 " Calcium oxide (CaO)............. 14.22 " Loss on analysis, &c............. 0.55 " ----- 100.00 Dr. W. Michaelis, the German cement specialist, gave much consideration to this matter in 1906, and formed the opinion that the free lime in the Portland cement, or the lime freed in hardening, combines with the sulphuric acid of the sea-water, which causes the mortar or cement to expand, resulting in its destruction. He proposed to neutralise this action by adding to the mortar materials rich in silica, such as trass, which would combine with the lime. Mr. J. M. O'Hara, of the Southern Pacific Laboratory, San Francisco, Cal., made a series of tests with sets of pats 4 in diameter and 1/2 in thick at the centre, tapering to a thin edge on the circumference, and also with briquettes for ascertaining the tensile strength, all of which were placed in water twenty-four hours after mixing. 
At first some of the pats were immersed in a "five-strength solution" of sea-water having a chemical analysis as follows:--

Sodium chloride.................... 11.5 per cent.
Magnesium chloride.................  1.4  "   "
Magnesium sulphate.................  0.9  "   "
Calcium sulphate...................  0.6  "   "
Water.............................. 85.6  "   "
                                   -----
                                   100.0

This strong solution was employed in order that the probable effect of immersing the cement in sea-water might be ascertained very much quicker than could be done by observing samples actually placed in ordinary sea-water, and it is worthy of note that the various mixtures which failed in this accelerated test also subsequently failed in ordinary sea-water within a period of twelve months.

Strong solutions were next made of the individual salts contained in sea-water, and pats were immersed as before, when it was found that the magnesium sulphate present in the water acted upon the calcium hydrate in the cement, forming calcium sulphate, and leaving the magnesium hydrate free. The calcium sulphate combines with the alumina of the cement, forming calcium sulpho-aluminate, which causes swelling and cracking of the concrete, and in cements containing a high proportion of alumina, leads to total destruction of all cohesion. The magnesium hydrate has a tendency to fill the pores of the concrete so as to make it more impervious to the destructive action of the sea-water, and disintegration may be retarded or checked. A high proportion of magnesia has been found in samples of cement which have failed under the action of sea water, but the disastrous result cannot be attributed to this substance having been in excess in the original cement, as it was probably due to the deposition of the magnesia salts from the sea-water; although, if magnesia were present in the cement in large quantities, it would cause it to expand and crack, still with the small proportion in which it occurs in ordinary cements it is probably inert.

The setting of cement under the action of water always frees a portion of the lime which was combined, but over twice as much is freed when the cement sets in sea-water as in fresh water. The setting qualities of cement are due to the iron and alumina combined with calcium, so that for sea-coast work it is desirable for the alumina to be replaced by iron as far as possible. The final hardening and strength of cement is due in a great degree to the tri-calcium silicate (3CaO, SiO2), which is soluble by the sodium chloride found in sea-water, so that the resultant effect of the action of these two compounds is to enable the sea-water to gradually penetrate the mortar and rot the concrete. The concrete is softened, when there is an abnormal amount of sulphuric acid present, as a result of the reaction of the sulphuric acid of the salt dissolved by the water upon a part of the lime in the cement. The ferric oxide of the cement is unaffected by sea-water.

The neat cement briquette tests showed that those immersed in sea-water attained a high degree of strength at a much quicker rate than those immersed in fresh water, but the 1 to 3 cement and sand briquette tests gave an opposite result. At the end of twelve months, however, practically all the cements set in fresh water showed greater strength than those set in sea-water.
When briquettes which have been immersed in fresh water and have thoroughly hardened are broken, the cores are found to be quite dry, and if briquettes immersed in sea-water show a similar dryness there need be no hesitation in using the cement; but if, on the other hand, the briquette shows that the sea-water has permeated to the interior, the cement will lose strength by rotting until it has no cohesion at all. It must be remembered that it is only necessary for the water to penetrate to a depth of 1/2 in on each side of a briquette to render it damp all through, whereas in practical work, if the water only penetrated to the same depth, very little ill-effect would be experienced, although by successive removals of a skin 1/2 in deep the structure might in time be imperilled. The average strength in pounds per square inch of six different well-known brands of cement tested by Mr. O'Hara was as follows:--

TABLE No. 16. EFFECT OF SEA WATER ON STRENGTH OF CEMENT.

             |       Neat cement       |   1 cement to 3 sand
             |  set in    |  set in    |  set in    |  set in
             | Sea Water  |Fresh Water | Sea Water  |Fresh Water
-------------+------------+------------+------------+------------
  7 days     |    682     |    548     |    214     |    224
 28 days     |    836     |    643     |    293     |    319
 2 months    |    913     |    668     |    313     |    359
 3 months    |    861     |    667     |    301     |    387
 6 months    |    634     |    654     |    309     |    428
 9 months    |    542     |    687     |    317     |    417
12 months    |    372     |    706     |    325     |    432

Some tests were also made by Messrs. Westinghouse, Church, Kerr, and Co., of New York, to ascertain the effect of sea-water on the tensile strength of cement mortar. Three sets of briquettes were made, having a minimum section of one square inch. The first were mixed with fresh water and kept in fresh water; the second were mixed with fresh water, but kept immersed in pans containing salt water; while the third were mixed with sea-water and kept in sea-water. In the experiments the proportion of cement and sand varied from 1 to 1 to 1 to 6. The results of the tests on the stronger mixtures are shown in Fig. 32.

The Scandinavian Portland cement manufacturers have in hand tests on cubes of cement mortar and cement concrete, which were started in 1896, and are to extend over a period of twenty years. A report upon the tests of the first ten years was submitted at the end of 1909 to the International Association of Testing Materials at Copenhagen, and particulars of them are published in "Cement and Sea-Water," by A. Poulsen (chairman of the committee), J. Jorsen and Co., Copenhagen, 1909, price 3s.

[Illustration: FIG. 32.--Tests of the Tensile Strength of Cement and Sand Briquettes, Showing the Effect of Sea Water.]

Cements from representative firms in different countries were obtained for use in making the blocks, which had coloured glass beads and coloured crushed glass incorporated to facilitate identification. Each block of concrete was provided with a number plate and a lifting bolt, and was kept moist for one month before being placed in position. The sand and gravel were obtained from the beach on the west coast of Jutland. The mortar blocks were mixed in the proportion of 1 to 1, 1 to 2, and 1 to 3, and were placed in various positions, some between high and low water, so as to be exposed twice in every twenty-four hours, and others below low water, so as to be always submerged. The blocks were also deposited under these conditions in various localities, the mortar ones being placed at Esbjerg at the south of Denmark, at Vardo in the Arctic Ocean, and at Degerhamm on the Baltic, where the water is only one-seventh as salt as the North Sea, while the concrete blocks were built up in the form of a breakwater or groyne at Thyboron on the west coast of Jutland.
At intervals of three, six, and twelve months, and two, four, six, ten, and twenty years, some of the blocks have been, or will be, taken up and subjected to chemical tests, the material being also examined to ascertain the effect of exposure upon them. The blocks tested at intervals of less than one year after being placed in position gave very variable results, and the tests were not of much value. The mortar blocks between high and low water mark of the Arctic Ocean at Vardo suffered the worst, and only those made with the strongest mixture of cement, 1 to 1, withstood the severe frost experienced. The best results were obtained when the mortar was made compact, as such a mixture only allowed diffusion to take place so slowly that its effect was negligible; but when, on the other hand, the mortar was loose, the salts rapidly penetrated to the interior of the mass, where chemical changes took place, and caused it to disintegrate. The concrete blocks made with 1 to 3 mortar disintegrated in nearly every case, while the stronger ones remained in fairly good condition. The best results were given by concrete containing an excess of very fine sand. Mixing very finely-ground silica, or trass, with the cement proved an advantage where a weak mixture was employed, but in the other cases no benefit was observed.

The Association of German Portland Cement Manufacturers carried out a series of tests, extending over ten years, at their testing station at Gross Lichterfeld, near Berlin, the results of which were tabulated by Mr. C. Schneider and Professor Gary. In these tests the mortar blocks were made 3 in cube and the concrete blocks 12 in cube; they were deposited in two tanks, one containing fresh water and the other sea-water, so that the effect under both conditions might be noted. In addition, concrete blocks were made, allowed to remain in moist sand for three months, and were then placed in the form of a groyne in the sea between high and low-water mark. Some of the blocks were allowed to harden for twelve months in sand before being placed, and these gave better results than the others. Two brands of German Portland cement were used in these tests, one, from which the best results were obtained, containing 65.9 per cent. of lime, and the other 62.0 per cent. of lime, together with a high percentage of alumina. In this case, also, the addition of finely-ground silica, or trass, improved the resisting power of blocks made with poor mortars, but did not have any appreciable effect on the stronger mixtures.

Professor M. Möller, of Brunswick, Germany, reported to the International Association for Testing Materials, at the Copenhagen Congress previously referred to, the result of his tests on a small hollow, trapezium-shaped, reinforced concrete structure, which was erected in the North Sea, the interior being filled with sandy mud, which would be easily removable by flowing water. The sides were 7 cm. thick, formed of cement concrete 1:2 1/2:2, moulded elsewhere, and placed in the structure forty days after they were made, while the top and bottom were 5 cm. thick, and consisted of concrete 1:3:3, moulded _in situ_ and covered by the tide within twenty-four hours of being laid. The concrete moulded _in situ_ hardened a little at first, and then became soft when damp, and friable when dry, and white efflorescence appeared on the surface.
In a short time the waves broke this concrete away, and exposed the reinforcement, which rusted and disappeared, with the result that in less than four years holes were made right through the concrete. The sides, which were formed of slabs allowed to harden before being placed in the structure, were unaffected except for a slight roughening of the surface after being exposed alternately to the sea and air for a period of thirteen years. Professor Möller referred also to several cases which had come under his notice where cement mortar or concrete became soft and showed white efflorescence when it had been brought into contact with sea-water shortly after being made.

In experiments in Atlantic City samples of dry cement in powder form were put with sea-water in a vessel which was rapidly rotated for a short time, after which the cement and the sea-water were analysed, and it was found that the sea-water had taken up the lime from the cement, and the cement had absorbed the magnesia salts from the sea-water. Some tests were carried out in 1908-9 at the Navy Yard, Charlestown, Mass., by the Aberthaw Construction Company of Boston, in conjunction with the Navy Department. The cement concrete was placed so that the lower portions of the surfaces of the specimens were always below water, the upper portions were always exposed to the air, and the middle portions were alternately exposed to each. Although the specimens were exposed to several months of winter frost as well as to the heat of the summer, no change was visible in any part of the concrete at the end of six months.

Mons. R. Feret, Chief of the Laboratory of Bridges and Roads, Boulogne-sur-Mer, France, has given expression to the following opinions:--

1. No cement or other hydraulic product has yet been found which presents absolute security against the decomposing action of sea-water.

2. The most injurious compound of sea-water is the acid of the dissolved sulphates, sulphuric acid being the principal agent in the decomposition of cement.

3. Portland cement for sea-water should be low in aluminium and as low as possible in lime.

4. Puzzolanic material is a valuable addition to cement for sea-water construction.

5. As little gypsum as possible should be added for regulating the time of setting to cements which are to be used in sea-water.

6. Sand containing a large proportion of fine grains must never be used in concrete or mortar for sea-water construction.

7. The proportions of the cement and aggregate for sea-water construction must be such as will produce a dense and impervious concrete.

On the whole, sea-water has very little chemical effect on good Portland cements, such as are now easily obtainable, and, provided the proportion of aluminates is not too high, the varying composition of the several well-known commercial cements is of little moment. For this reason tests on blocks immersed in still salt water are of very little use in determining the probable behaviour of concrete when exposed to damage by physical and mechanical means, such as occurs in practical work. The destruction of concrete works on the sea coast is due to the alternate exposure to air and water, frost, and heat, and takes the form of cracking or scaling, the latter being the most usual when severe frosts are experienced.
When concrete blocks are employed in the construction of works, they should be made as long as possible before they are required to be built into the structure, and allowed to harden in moist sand, or, if this is impracticable, the blocks should be kept in the air and thoroughly wetted each day. On placing cement or concrete blocks in sea water a white precipitate is formed on their surfaces, which shows that there is some slight chemical action, but if the mixture is dense this action is restricted to the outside, and does not harm the block. Cement mixed with sea water takes longer to harden than if mixed with fresh water, the time varying in proportion to the amount of salinity in the water. Sand and gravel from the beach, even though dry, have their surfaces covered with saline matters, which retard the setting of the cement, even when fresh water is used, as they become mixed with such water, and thus permeate the whole mass. If sea water and aggregate from the shore are used, care must be taken to see that no decaying seaweed or other organic matter is mixed with it, as every such piece will cause a weak place in the concrete. If loam, clay, or other earthy matters from the cliffs have fallen down on to the beach, the shingle must be washed before it is used in concrete.

Exposure to damp air, such as is unavoidable on the coast, considerably retards the setting of cement, so that it is desirable that it should not be further retarded by the addition of gypsum, or calcium sulphate, especially if it is to be used with sea water or sea-washed sand and gravel. The percentage of gypsum found in cement is, however, generally considerably below the maximum allowed by the British Standard Specification, viz., 2 per cent., and is so small that, for practical purposes, it makes very little difference in sea coast work, although, of course, within reasonable limits, the quicker the cement sets the better. When cement is used to joint stoneware pipe sewers near the coast, allowance must be made for this retardation of the setting, and any internal water tests which may be specified to be applied must not be made until a longer period has elapsed after the laying of the pipes than would otherwise be necessary. A high proportion of aluminates tends to cause disintegration when exposed to sea water. The most appreciable change which takes place in a good sound cement after exposure to the sea is an increase in the chlorides, while a slight increase in the magnesia and the sulphates also takes place, so that the proportion of sulphates and magnesia in the cement should be kept fairly low. Hydraulic lime exposed to the sea rapidly loses the lime and takes up magnesia and sulphates.

To summarise the information upon this point, it appears that it is better to use fresh water for all purposes, but if, for the sake of economy, saline matters are introduced into the concrete, either by using sea water for mixing or by using sand and shingle from the beach, the principal effect will be to delay the time of setting to some extent, but the ultimate strength of the concrete will probably not be seriously affected. When the concrete is placed in position the portion most liable to be destroyed is that between high and low water mark, which is alternately exposed to the action of the sea and the air, but if the concrete has a well-graded aggregate, is densely mixed, and contains not more than two parts of sand to one part of cement, no ill-effect need be anticipated.

CHAPTER XII. DIVING.
The engineer is not directly concerned with the various methods employed in constructing a sea outfall, such matters being left to the discretion of the contractor. It may, however, be briefly stated that the work frequently involves the erection of temporary steel gantries, which must be very carefully designed and solidly built if they are to escape destruction by the heavy seas. It is amazing to observe the ease with which a rough sea will twist into most fantastic shapes steel joists 10 in by 8 in, or even larger in size. Any extra cost incurred in strengthening the gantries is well repaid if it avoids damage, because otherwise there is not only the expense of rebuilding the structure to be faced, but the construction of the work will be delayed, possibly into another season.

In order to ensure that the works below water are constructed in a substantial manner, it is absolutely necessary that the resident engineer, at least, should be able to don a diving dress and inspect the work personally. The particular points to which attention must be given include the proper laying of the pipes, so that the spigot of one is forced home into the socket of the other, the provision and tightening up of all the bolts required to be fixed, the proper driving of the piles and fixing the bracing, the dredging of a clear space in the bed of the sea in front of the outlet pipe, and other matters dependent upon the special form of construction adopted. If a plug is inserted in the open end of the pipes as laid, the rising of the tide will press on the plugged end and be of considerable assistance in pushing the pipes home; it will therefore be necessary to re-examine the joints to see if the bolts can be tightened up any more.

Messrs. Siebe, Gorman, and Co., the well-known makers of submarine appliances, have fitted up at their works at Westminster Bridge-road, London, S.E., an experimental tank, in which engineers may make a few preliminary descents and be instructed in the art of diving; and it is distinctly more advantageous to acquire the knowledge in this way from experts than to depend solely upon the guidance of the divers engaged upon the work which the engineer desires to inspect. Only a nominal charge of one guinea for two descents is made, which sum, less out-of-pocket expenses, is remitted to the Benevolent Fund of the Institution of Civil Engineers. It is generally desirable that a complete outfit, including the air pump, should be provided for the sole use of the resident engineer, and special men should be told off to assist him in dressing and to attend to his wants while he is below water. He is then able to inspect the work while it is actually in progress, and he will not hinder or delay the divers.

It is a wise precaution to be medically examined before undertaking diving work, although, with the short time which will generally be spent below water, and the shallow depths usual in this class of work, there is practically no danger; but, generally speaking, a diver should be of good physique, not unduly stout, free from heart or lung trouble and varicose veins, and should not drink or smoke to excess. It is necessary, however, to have acquaintance with the physical principles involved, and to know what to do in emergencies. A considerable amount of useful information is given by Mr. R. H. Davis in his "Diving Manual" (Siebe, Gorman, and Co., 5s.), from which many of the following notes are taken.
A diving dress and equipment weighs about 175 lb, including a 40 lb lead weight carried by the diver on his chest, a similar weight on his back, and 16 lb of lead on each boot. Upon entering the water the superfluous air in the dress is driven out through the outlet valve in the helmet by the pressure of the water on the legs and body, and by the time the top of the diver's head reaches the surface his breathing becomes laboured, because the pressure of air in his lungs equals the atmospheric pressure, while the pressure upon his chest and abdomen is greater by the weight of the water thereon. He is thus breathing against a pressure, and if he has to breathe deeply, as during exertion, the effect becomes serious; so that the first thing he has to learn is to adjust the pressure of the spring on the outlet valve, so that the amount of air pumped in under pressure and retained in the diving dress counterbalances the pressure of the water outside, which is equal to a little under 1/2 lb per square inch for every foot in depth. If the diver be 6 ft tall, and stands in an upright position, the pressure on his helmet will be about 3 lb per square inch less than on his boots. The breathing is easier if the dress is kept inflated down to the abdomen, but in this case there is danger of the diver being capsized and floating feet upwards, in which position he is helpless, and the air cannot escape by the outlet valve.

Air is supplied to the diver under pressure by an air pump through a flexible tube called the air pipe; and a light rope called a life line, which is used for signalling, connects the man with the surface. The descent is made by a 3 in "shot-rope," which has a heavy sinker weighing about 50 lb attached, and is previously lowered to the bottom. A 1-1/4 in rope about 15 ft long, called a "distance-line," is attached to the shot-rope about 3 ft above the sinker, and on reaching the bottom the diver takes this line with him to enable him to find his way back to the shot-rope, and thus reach the surface comfortably, instead of being hauled up by his life line.

The diver must be careful in his movements that he does not fall so as suddenly to increase the depth of water in which he is immersed, because at the normal higher level the air pressure in the dress will be properly balanced against the water pressure; but if he falls, say 30 ft, the pressure of the water on his body will be increased by about 15 lb per square inch, and as the air pump cannot immediately increase the pressure in the dress to a corresponding extent, the man's body in the unresisting dress will be forced into the rigid helmet, and he will certainly be severely injured, and perhaps even killed.

When descending under water the air pressure in the dress is increased, and acts upon the outside of the drum of the ear, causing pain, until the air passing through the nose and up the Eustachian tube inside the head reaches the back of the drum and balances the pressure. This may be delayed, or prevented, if the tube is partially stopped up by reason of a cold or other cause, but the balance can generally be brought about if the diver pauses in his descent and swallows his saliva; or blocks up his nose as much as possible by pressing it against the front of the helmet, closing the mouth and then making a strong effort at expiration so as to produce temporarily an extra pressure inside the throat, and so blow open the tubes; or by yawning or going through the motions thereof.
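The pressure figures quoted above follow directly from the weight of sea water (64.14 lb per cubic foot, the figure given in Chapter XIII below); with the round value of 1/2 lb per square inch per foot the 15 lb quoted for a sudden fall of 30 ft follows at once. The short Python sketch below merely restates that arithmetic; the names are assumptions of the illustration.

    # A minimal sketch of the pressure arithmetic quoted above, taking sea water
    # at 64.14 lb per cubic foot; the text's round figure of 1/2 lb per square
    # inch per foot of depth is a convenient approximation of the same value.

    PSI_PER_FOOT = 64.14 / 144.0          # lb per square inch per foot of depth (about 0.445)

    def water_pressure(depth_ft):
        """Gauge pressure of sea water, in lb per square inch, at the given depth."""
        return PSI_PER_FOOT * depth_ft

    print(f"Per foot of depth: {PSI_PER_FOOT:.3f} lb per square inch (a little under 1/2 lb)")
    print(f"Helmet to boots on a 6 ft diver: {water_pressure(6):.1f} lb per square inch (about 3 lb)")
    print(f"Extra pressure after a sudden fall of 30 ft: {water_pressure(30):.1f} lb per square inch")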
If this does not act he must come up again. Provided his ears are "open," and the air pumps can keep the pressure of air equal to that of the depth of the water in which the diver may be, there is nothing to limit the rate of his descent.

Now in breathing, carbonic acid gas is exhaled, the quantity varying in accordance with the amount of work done, from .014 cubic feet per minute when at rest to a maximum of about .045, and this gas must be removed by dilution with fresh air so as not to inconvenience the diver. This is not a matter of much difficulty, as the proportion in fresh air is about .03 per cent., and no effect is felt until the proportion is increased to about 0.3 per cent., which causes one to breathe twice as deeply as usual; at 0.6 per cent. there is severe panting; and at a little over 1.0 per cent. unconsciousness occurs. The effect of the carbonic acid on the diver, however, increases the deeper he descends; and at a depth of 33 ft 1 per cent. of carbonic acid will have the same effect as 2 per cent. at the surface. If the diver feels bad while under water he should signal for more air, stop moving about, and rest quietly for a minute or two, when the fresh air will revive him. The volume of air required by the diver for respiration is about 1.5 cubic feet per minute, and there is a non-return valve on the air inlet, so that in the event of the air pipe being broken, or the pump failing, the air would not escape backwards, but by closing the outlet valve the diver could retain sufficient air to enable him to reach the surface.

During the time that a diver is under pressure nitrogen gas from the air is absorbed by his blood and the tissues of his body. This does not inconvenience him at the time, but when he rises the gas is given off, so that if he has been at a great depth for some considerable time, and comes up quickly, bubbles form in the blood and fill the right side of the heart with air, causing death in a few minutes. In less sudden cases the bubbles form in the brain or spinal cord, causing paralysis of the legs, which is called divers' palsy, or the only trouble which is experienced may be severe pains in the joints and muscles. It is necessary, therefore, that he shall come up by stages so as to decompress himself gradually and avoid danger. The blood can hold about twice as much gas in solution as an equal quantity of water, and when the diver is working in shallow depths, up to, say, 30 ft, the amount of nitrogen absorbed is so small that he can stop down as long as is necessary for the purposes of the work, and can come up to the surface as quickly as he likes without any danger. At greater depths approximately the first half of the upward journey may be done in one stage, and the remainder done by degrees, the longest rest being made at a few feet below the surface.

The following table shows the time limits in accordance with the latest British Admiralty practice; the time under the water being that from leaving the surface to the beginning of the ascent:--

TABLE No. 17.--DIVING DATA.
  Depth in feet.   | Time under water.    | Stoppages in minutes at | Total time for ascent
                   |                      |   20 ft   |   10 ft     | in minutes.
  Up to 36         | No limit             |     -     |     -       |   0 to 1
  36 to 42         | Up to 3 hours        |     -     |     -       |   1 to 1-1/2
                   | Over 3 hours         |     -     |     5       |   6
  42 to 48         | Up to 1 hour         |     -     |     -       |   1-1/2
                   | 1 to 3 hours         |     -     |     5       |   6-1/2
                   | Over 3 hours         |     -     |    10       |  11-1/2
  48 to 54         | Up to 1/2 hour       |     -     |     -       |   2
                   | 1/2 to 1-1/2 hour    |     -     |     5       |   7
                   | 1-1/2 to 3 hours     |     -     |    10       |  12
                   | Over 3 hours         |     -     |    20       |  22
  54 to 60         | Up to 20 minutes     |     -     |     -       |   2
                   | 20 to 45 minutes     |     -     |     5       |   7
                   | 3/4 to 1-1/2 hour    |     -     |    10       |  12
                   | 1-1/2 to 3 hours     |     5     |    15       |  22
                   | Over 3 hours         |    10     |    20       |  32

When preparing to ascend the diver must tighten the air valve in his helmet to increase his buoyancy; if the valve is closed too much to allow the excess air to escape, his ascent will at first be gradual, but as the pressure of the water reduces, the air in the dress expands, making it so stiff that he cannot move his arms to reach the valve, and he is blown up, with ever-increasing velocity, to the surface. While ascending he should exercise his muscles freely during the period of waiting at each stopping place, so as to increase the circulation, and consequently the rate of desaturation. During the progress of the works the location of the sea outfall will be clearly indicated by temporary features visible by day and lighted by night; but when completed its position must be marked in a permanent manner. The extreme end of the outfall should be indicated by a can buoy similar to that shown in Fig. 33, made by Messrs. Brown, Lenox, and Co. (Limited), Milwall, London, E., which costs about £75, including a 20 cwt. sinker and 10 fathoms of chain, and is approved for the purpose by the Board of Trade. [Illustration: FIG 33 CAN BUOY FOR MARKING OUTFALL SEWER.] It is not desirable to fasten the chain to any part of the outfall instead of using a sinker, because at low water the slack of the chain may become entangled, which, by preventing the buoy from rising with the tide, will lead to damage; but a special pile may be driven for the purpose of securing the buoy, at such a distance from the outlet that the chain will not foul it. The buoy should be painted with alternate vertical stripes of yellow and green, and lettered "Sewer Outfall" in white letters 12 in deep. It must be remembered that it is necessary for the plans and sections of outfall sewers and other obstructions proposed to be placed in tidal waters to be submitted to the Harbour and Fisheries Department of the Board of Trade for their approval, and no subsequent alteration in the works may be made without their consent being first obtained.

CHAPTER XIII. THE DISCHARGE OF SEA OUTFALL SEWERS.

The head which governs the discharge of a sea outfall pipe is measured from the surface of the sewage in the tank, sewer, or reservoir at the head of the outfall to the level of the sea. As the sewage is run off the level of its surface is lowered, and at the same time the level of the sea is constantly varying as the tide rises and falls, so that the head is a variable factor, and consequently the rate of discharge varies. A curve of discharge may be plotted from calculations according to these varying conditions, but it is not necessary; and all requirements will be met if the discharges under certain stated conditions are ascertained. The most important condition, because it is the worst, is that when the level of the sea is at high water of equinoctial spring tides and the reservoir is practically empty. Sea water has a specific gravity of 1.027, and is usually taken as weighing 64.14 lb per cubic foot, while sewage may be taken as weighing 62.45 lb per cubic foot, which is the weight of fresh water at its maximum density.
Now the ratio of weight between sewage and sea water is as 1 to 1.027, so that a column of sea water 12 inches in height requires a column of fresh water 12.324 in, or say 12-1/3 in, to balance it; therefore, in order to ascertain the effective head producing discharge it will be necessary to add on 1/3 in for every foot in depth of the sea water over the centre of the outlet. The sea outfall should be of such diameter that the contents of the reservoir can be emptied in the specified time--say, three hours--while the pumps are working to their greatest power in pouring sewage into the reservoir during the whole of the period; so that when the valves are closed the reservoir will be empty, and its entire capacity available for storage until the valves are again opened. To take a concrete example, assume that the reservoir and outfall are constructed as shown in Fig. 34, and that it is required to know the diameter of outfall pipe when the reservoir holds 1,000,000 gallons and the whole of the pumps together, including any that may be laid down to cope with any increase of the population in the future, can deliver 600,000 gallons per hour. When the reservoir is full the top water level will be 43.00 O.D., but in order to have a margin for contingencies and to allow for the loss in head due to entry of sewage into the pipe, for friction in passing around bends, and for a slight reduction in discharging capacity of the pipe by reason of incrustation, it will be desirable to take the reservoir as full, but assume that the sewage is at the level 31.00. The head of water in the sea measured above the centre of the pipe will be 21 ft, so that $21 \times \frac{1}{3}$ in, or 7 in--say, 0.58 ft--must be added to the height of high water, thus reducing the effective head from 31.00 - 10.00 = 21.00 ft to 20.42 ft. The quantity to be discharged will be $\frac{1,000,000 + (3 \times 600,000)}{3} = 933,333$ gallons per hour = 15,555 gallons per minute, or, taking 6.23 gallons as equal to 1 cubic foot, the quantity equals 2,497 cubic feet per minute. Assume the required diameter to be 30 in; then, by Hawksley's formula, the head necessary to produce velocity $= \frac{(\text{gallons per minute})^2}{215 \times (\text{diameter in inches})^4} = \frac{15,555^2}{215 \times 30^4} = 1.389$ ft, and the head to overcome friction $= \frac{(\text{gallons per minute})^2 \times \text{length in yards}}{240 \times (\text{diameter in inches})^5} = \frac{15,555^2 \times 2042}{240 \times 30^5} = 84.719$ ft. Then 1.389 + 84.719 = 86.108--say, 86.11 ft; but the actual head is 20.42 ft, and the flow varies approximately as the square root of the head, so that the true flow will be about $15,555 \times \sqrt{\frac{20.42}{86.11}} = 7,574.8$--say, 7,575 gallons. But a flow of 15,555 gallons per minute is required, and as the flow varies approximately as the fifth power of the diameter, the requisite diameter will be about $\sqrt[5]{\frac{30^5 \times 15,555}{7,575}} = 34.64$ inches. Now assume a diameter of 40 in, and repeat the calculations. Then the head necessary to produce velocity $= \frac{15,555^2}{215 \times 40^4} = 0.044$ ft, and the head to overcome friction $= \frac{15,555^2 \times 2042}{240 \times 40^5} = 20.104$ ft. Then 0.044 + 20.104 = 20.148--say, 20.15 ft--and the true flow will therefore be about $15,555 \times \sqrt{\frac{20.42}{20.15}} = 15,659$ gallons, and the requisite diameter about $\sqrt[5]{\frac{40^5 \times 15,555}{15,659}} = 39.94$ inches.

[Illustration: FIG 34 DIAGRAM ILLUSTRATING CALCULATIONS FOR THE DISCHARGE OF SEA OUTFALLS]
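The density correction and the two Hawksley trials just worked through lend themselves to a quick numerical check. The short Python sketch below is an editorial illustration only (imperial units throughout, Hawksley's formula exactly as quoted above, and the function names are not from the original text); it reproduces the 20.42 ft effective head, the 86.11 ft demanded by a 30 in pipe, and the resulting estimate of about 34.6 in:

```python
from math import sqrt

def effective_head_ft(sewage_level_ft, high_water_level_ft, sea_depth_over_outlet_ft):
    # Add 1/3 in of fresh-water head per foot of sea water over the outlet
    # (specific gravities 1.027 against 1.000), then take the difference of levels.
    correction_ft = sea_depth_over_outlet_ft * (1.0 / 3.0) / 12.0
    return (sewage_level_ft - high_water_level_ft) - correction_ft

def hawksley_head_ft(gallons_per_min, diameter_in, length_yd):
    velocity_head = gallons_per_min ** 2 / (215.0 * diameter_in ** 4)
    friction_head = gallons_per_min ** 2 * length_yd / (240.0 * diameter_in ** 5)
    return velocity_head + friction_head

head = effective_head_ft(31.0, 10.0, 21.0)          # ~20.42 ft
required = 15_555.0                                  # gallons per minute
needed = hawksley_head_ft(required, 30.0, 2042.0)    # ~86.11 ft for a 30 in pipe
flow_30in = required * sqrt(head / needed)           # ~7,575 gal/min actually passed
estimate = 30.0 * (required / flow_30in) ** 0.2      # ~34.6 in, since flow varies as d^5
print(round(head, 2), round(needed, 2), round(flow_30in), round(estimate, 2))
```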
When, therefore, a 30 in diameter pipe is assumed, a diameter of 34.64 in is shown to be required, and when 40 in is assumed 39.94 in is indicated. Let a = the difference between the two assumed diameters, b = the increase found over the lower diameter, c = the decrease found under the greater diameter, and d = the lower assumed diameter. Then true diameter $= d + \frac{ab}{b+c} = 30 + \frac{10 \times 4.64}{4.64 + 0.06} = 30 + \frac{46.4}{4.7} = 39.872$, or, say, 40 in, which equals the required diameter. A simpler way of arriving at the size would be to calculate it by Santo Crimp's formula for sewer discharge, namely, velocity in feet per second $= 124\sqrt[3]{R^2}\sqrt{S}$, where R equals hydraulic mean depth in feet, and S = the ratio of fall to length; the fall being taken as the difference in level between the sewage and the sea after allowance has been made for the differing densities. In this case the fall is 20.42 ft in a length of 6,126 ft, which gives a gradient of 1 in 300. The hydraulic mean depth equals $\frac{d}{4}$; the required discharge, 2,497 cubic feet per minute, equals the area $\left(\frac{\pi d^2}{4}\right)$ multiplied by the velocity, therefore the velocity in feet per second $= \frac{4}{\pi d^2} \times \frac{2497}{60} = \frac{2497}{15\pi d^2}$, and the formula then becomes $\frac{2497}{15\pi d^2} = 124 \times \frac{\sqrt[3]{d^2}}{\sqrt[3]{4^2}} \times \frac{\sqrt{1}}{\sqrt{300}}$, or $d^2\sqrt[3]{d^2} = \sqrt[3]{d^8} = \frac{2497 \times \sqrt[3]{16} \times \sqrt{300}}{124 \times 15 \times 3.14159}$, or $\frac{8\log d}{3} = \log 2497 + \frac{1}{3}\log 16 + \frac{1}{2}\log 300 - \log 124 - \log 15 - \log 3.14159$; or $\log d = \frac{3}{8}(3.397419 + 0.401373 + 1.238561 - 2.093422 - 1.176091 - 0.497150) = \frac{3}{8}(1.270690) = 0.476509$. Therefore d = 2.9958 feet = 35.9496 in, say 36 inches. As it happens, this could have been obtained direct from the tables, where the discharge of a 36 in pipe at a gradient of 1 in 300 = 2,506 cubic feet per minute, as against 2,497 cubic feet required, but the above shows the method of working when the figures in the tables do not agree with those relating to the particular case in hand. This result differs somewhat from the one previously obtained, but there remains a third method, which we can now make trial of--namely, Saph and Schoder's formula for the discharge of water mains, $V = 174\sqrt[3]{R^2} \times S^{0.54}$. Substituting values similar to those taken previously, this formula can be written $\frac{2497}{15\pi d^2} = 174 \times \frac{\sqrt[3]{d^2}}{\sqrt[3]{4^2}} \times \frac{1^{0.54}}{300^{0.54}}$, or $d^2\sqrt[3]{d^2} = \sqrt[3]{d^8} = \frac{2497 \times \sqrt[3]{16} \times 300^{0.54}}{174 \times 15 \times 3.14159}$, or $\log d = \frac{3}{8}(3.397419 + 0.401373 + (0.54 \times 2.477121) - 2.240549 - 1.176091 - 0.497150) = \frac{3}{8}(1.222647) = 0.458493$. Therefore d = 2.874 feet = 34.488 in, say 34-1/2 inches. By Neville's general formula the velocity in feet per second $= 140\sqrt{RS} - 11\sqrt[3]{RS}$, or, assuming a diameter of 37 inches, $V = 140\sqrt{\frac{37}{12 \times 4} \times \frac{1}{300}} - 11\left(\frac{37}{12 \times 4 \times 300}\right)^{\frac{1}{3}} = 140\sqrt{\frac{37}{14400}} - 11\left(\frac{37}{14400}\right)^{\frac{1}{3}} = 7.09660 - 1.50656 = 5.59$ feet per second. Discharge = area x velocity; therefore, the discharge in cubic feet per minute $= 5.59 \times 60 \times \frac{3.14159 \times 37^2}{4 \times 12^2} = 2,504$, compared with 2,497 cubic feet per minute required, showing that if this formula is used the pipe should be 37 in diameter. The four formulæ, therefore, give different results, as follows:--

Hawksley = 40 in
Neville = 37 in
Santo Crimp = 36 in
Saph and Schoder = 34-1/2 in

The circumstances of the case would probably be met by constructing the outfall 36 in in diameter.
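For comparison, the Santo Crimp and Neville figures above can be reproduced in a few lines of code. The sketch below is an illustration only, under the formulae exactly as quoted (foot-second units, pipe running full, R = d/4); the crude step search for the required diameter is an editorial device, not part of the original method:

```python
from math import pi

def santo_crimp_velocity(d_ft, slope):
    # v = 124 * R^(2/3) * S^(1/2), with R = d/4 for a pipe running full (ft/s).
    r = d_ft / 4.0
    return 124.0 * r ** (2.0 / 3.0) * slope ** 0.5

def neville_velocity(d_ft, slope):
    # v = 140 * sqrt(RS) - 11 * (RS)^(1/3) (ft/s).
    rs = (d_ft / 4.0) * slope
    return 140.0 * rs ** 0.5 - 11.0 * rs ** (1.0 / 3.0)

def discharge_cfm(d_ft, v_fps):
    # Discharge = area x velocity, converted to cubic feet per minute.
    return v_fps * 60.0 * pi * d_ft ** 2 / 4.0

target, slope = 2497.0, 1.0 / 300.0   # cubic feet per minute; gradient of 1 in 300

for name, vel in (("Santo Crimp", santo_crimp_velocity), ("Neville", neville_velocity)):
    d = 2.0                                           # start the search at 24 in
    while discharge_cfm(d, vel(d, slope)) < target:   # step up in 1/8 in increments
        d += 1.0 / 96.0
    print(name, round(d * 12.0, 1), "in")             # ~36 in and ~37 in respectively
```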
It is very rarely desirable to fix a flap-valve at the end of a sea outfall pipe, as it forms a serious obstruction to the flow of the sewage, amounting, in one case the writer investigated, to a loss of eight-ninths of the available head; the head was exceptionally small, and the flap valve practically absorbed it all. The only advantage in using a flap valve occurs when the pipe is directly connected with a tank sewer below the level of high water, in which case, if the sea water were allowed to enter, it would not only occupy space required for storing sewage, but it would act on the sewage and speedily start decomposition, with the consequent emission of objectionable odours. If there is any probability of sand drifting over the mouth of the outfall pipe, the latter will keep free much better if there is no valve. Schemes have been suggested in which it was proposed to utilise a flap valve on the outlet so as to render the discharge of the sewage automatic. That is to say, the sewage was proposed to be collected in a reservoir at the head of, and directly connected to, the outfall pipe, at the outlet end of which a flap valve was to be fixed. During high water the mouth of the outfall would be closed, so that sewage would collect in the pipes, and in the reservoir beyond; then when the tide had fallen such a distance that its level was below the level of the sewage, the flap valve would open, and the sewage flow out until the tide rose and closed the valve. There are several objections to this arrangement. First of all, a flap valve under such conditions would not remain watertight, unless it were attended to almost every day, which is, of course, impracticable when the outlet is below water. As the valve would open when the sea fell to a certain level and remain open during the time it was below that level, the period of discharge would vary from, say, two hours at neap tides to about four hours at springs; and if the two hours were sufficient, the four hours would be unnecessary. Then the sewage would not only be running out and hanging about during dead water at low tide, but before that time it would be carried in one direction, and after that time in the other direction; so that it would be spread out in all quarters around the outfall, instead of being carried direct out to sea beyond chance of return, as would be the case in a well- designed scheme. When opening the valve in the reservoir, or other chamber, to allow the sewage to flow through the outfall pipe, care should be taken to open it at a slow rate so as to prevent damage by concussion when the escaping sewage meets the sea water standing in the lower portion of the pipes. When there is considerable difference of level between the reservoir and the sea, and the valve is opened somewhat quickly, the sewage as it enters the sea will create a "water-spout," which may reach to a considerable height, and which draws undesirable attention to the fact that the sewage is then being turned into the sea. Chapter XIV TRIGONOMETRICAL SURVEYING. In the surveying work necessary to fix the positions of the various stations, and of the float, a few elementary trigonometrical problems are involved which can be advantageously explained by taking practical examples. Having selected the main station A, as shown in Fig. 35, and measured the length of any line A B on a convenient piece of level ground, the next step will be to fix its position upon the plan. 
Two prominent landmarks, C and D, such as church steeples, flag-staffs, etc., the positions of which are shown upon the ordnance map, are selected and the angles read from each of the stations A and B. Assume the line A B measures 117 ft, and the angular measurements reading from zero on that line are, from A to point C, 29° 23' and to point D 88° 43', and from B to point C 212° 43', and to point D 272° 18' 30". The actual readings can be noted, and then the arrangement of the lines and angles sketched out as shown in Fig. 35, from which it will be necessary to find the lengths A C and A D. As the three angles of a triangle equal 180°, the angle B C A = 180° - 147° 17' - 29° 23' = 3° 20', and the angle B D A = 180° - 87° 41' 30" - 88° 43' = 3° 35' 30". In any triangle the sides are proportionate to the sines of the opposite angles, and vice versa; therefore, A B : A C :: sin B C A : sin A B C, or sin B C A : A B :: sin A B C : A C, or A C = (A B sin A B C) / sin B C A = (117 x sin 147° 17') / sin 3° 20', or log A C = log 117 + L sin 147° 17' - L sin 3° 20'. The sine of an angle is equal to the sine of its supplement, so that sin 147° 17' = sin 32° 43', whence log A C = 2.0681859 + 9.7327837 - 8.7645111 = 3.0364585. Therefore A C = 1087.6 feet. Similarly, sin B D A : A B :: sin A B D : A D, therefore A D = (A B sin A B D) / sin B D A = (117 x sin 87° 41' 30") / sin 3° 35' 30", whence log A D = log 117 + L sin 87° 41' 30" - L sin 3° 35' 30" = 2.0681859 + 9.99964745 - 8.79688775 = 3.2709456. Therefore A D = 1866.15 feet. The length of two of the sides and all three angles of each of the two triangles A C B and A D B are now known, so that the triangles can be drawn upon the base A B by setting off the sides at the known angles, and the draughtsmanship can be checked by measuring the other known side of each triangle. The points C and D will then represent the positions of the two landmarks to which the observations were taken, and if the triangles are drawn upon a piece of tracing paper, and then superimposed upon the ordnance map so that the points C and D correspond with the landmarks, the points A and B can be pricked through on to the map, and the base line A B drawn in its correct position. If it is desired to draw the base line on the map direct from the two known points, it will be necessary to ascertain the magnitude of the angle A D C. Now, in any triangle the tangent of half the difference of two angles is to the tangent of half their sum as the difference of the two opposite sides is to their sum; that is:-- tan 1/2 (A C D - A D C) : tan 1/2 (A C D + A D C) :: A D - A C : A D + A C, but A C D + A D C = 180° - C A D = 120° 40', therefore tan 1/2 (A C D - A D C) : tan 1/2 (120° 40') :: (1866.15 - 1087.6) : (1866.15 + 1087.6); therefore tan 1/2 (A C D - A D C) = (778.55 tan 60° 20') / 2953.75, or L tan 1/2 (A C D - A D C) = log 778.55 + L tan 60° 20' - log 2953.75 = 2.8912865 + 10.2444154 - 3.4703738 = 9.6653281; ∴ 1/2 (A C D - A D C) = 24° 49' 53"; ∴ A C D - A D C = 49° 39' 46". Then algebraically A D C = [(A C D + A D C) - (A C D - A D C)] / 2 = (120° 40' - 49° 39' 46") / 2 = 71° 0' 14" / 2 = 35° 30' 7", and A C D = 180° - 35° 30' 7" - 59° 20' = 85° 9' 53".

[Illustration: Fig. 35.--Arrangement of Lines and Angles Showing Theodolite Readings and Dimensions.]
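The two sine-rule computations above can be checked numerically. The short Python sketch below is an illustration only (it uses decimal degrees rather than seven-figure log tables, so the last figures differ very slightly from the tabled values):

```python
from math import sin, radians

def sine_rule_side(known_side, angle_opp_known_deg, angle_opp_wanted_deg):
    # a / sin A = b / sin B  =>  b = a * sin B / sin A
    return known_side * sin(radians(angle_opp_wanted_deg)) / sin(radians(angle_opp_known_deg))

AB = 117.0
AC = sine_rule_side(AB, 3 + 20 / 60, 147 + 17 / 60)      # angles B C A and A B C -> ~1087.5 ft
AD = sine_rule_side(AB, 3 + 35.5 / 60, 87 + 41.5 / 60)   # angles B D A and A B D -> ~1866 ft
print(round(AC, 1), round(AD, 1))
```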
Now join up points C and D on the plan, and from point D set off the line D A, making an angle of 35° 30' 7" with C D, and having a length of 1866.15 ft, and from point C set off the angle A C D equal to 85° 9' 53". Then the line A C should measure 1087.6 ft long, and meet the line A D at the point A, making an angle of 59° 20'. From point A draw a line A B, 117 ft long, making an angle of 29° 23' with the line A C; join B C, then the angle A B C should measure 147° 17', and the angle B C A 3° 20'. If the lines and angles are accurately drawn, which can be proved by checking as indicated, the line A B will represent the base line in its correct position on the plan. The positions of the other stations can be calculated from the readings of the angles taken from such stations. Take stations E, F, G, and H as shown in Fig. 36, the angles which are observed being marked with an arc. It will be observed that two of the angles of each triangle are recorded, so that the third is always known. The full lines represent those sides, the lengths of which are calculated, so that the dimensions of two sides and the three angles of each triangle are known. Starting with station E, sin A E D : A D :: sin D A E : D E, or D E = (A D sin D A E) / sin A E D, or log D E = log A D + L sin D A E - L sin A E D. From station F, E and G are visible, but the landmark D cannot be seen; therefore, as the latter can be seen from G, it will be necessary to fix the position of G first. Then, sin E G D : D E :: sin E D G : E G, or E G = (D E sin E D G) / sin E G D. Now, sin E F G : E G :: sin F E G : F G, or F G = (E G sin F E G) / sin E F G, thus allowing the position of F to be fixed, and then sin F H G : F G :: sin F G H : F H, or F H = (F G sin F G H) / sin F H G. [Illustration: FIG 36.--DIAGRAM ILLUSTRATING TRIGONOMETRICAL SURVEY OF OBSERVATION STATIONS.] In triangles such as E F G and F G H all three angles can be directly read, so that any inaccuracy in the readings is at once apparent. The station H and further stations along the coast being out of sight of landmark D, it will be as well to connect the survey up with another landmark K, which can be utilised in the forward work; the line K H being equal to (F H sin K F H) / sin F K H. The distance between C and D in Fig. 35 is calculated in a similar manner, because sin A C D : A D :: sin C A D : C D, or C D = (A D sin C A D) / sin A C D = (1866.15 x sin 59° 20') / sin 85° 9' 53", or log C D = log 1866.15 + L sin 59° 20' - L sin 85° 9' 53" = 3.2709456 + 9.9345738 - 9.9984516 = 3.2070678. ∴ C D = 1610.90 ft. The distance between any two positions of the float can be obtained by calculation in a similar way to that in which the length C D was obtained, but this is a lengthy process, and is not necessary in practical work. It is desirable, of course, that the positions of all the stations be fixed with the greatest accuracy and plotted on the map; then the position of the float can be located with sufficient correctness, if the lines of sight obtained from the angles read with the theodolites are plotted, and their point of intersection marked on the plan. The distance between any two positions of the float can be scaled from the plan.
The reason why close measurement is unnecessary in connection with the positions of the float is that it represents a single point, whereas the sewage escaping with considerable velocity from the outfall sewer spreads itself over a wide expanse of sea in front of the outlet, and thus has a tangible area. The velocity of any current is greatest in the centre, and reduces as the distance from the centre increases, until the edges of the current are lost in comparative still water; so that observations taken of the course of one particle, such as the float represents, only approximately indicate the travel of the sewage through the sea. Another point to bear in mind is that the dilution of the sewage in the sea is so great that it is generally only by reason of the unbroken fæcal, or other matter, that it can be traced for any considerable distance beyond the outfall. It is unlikely that such matters would reach the outlet, except in a very finely divided state, when they would be rapidly acted upon by the sea water, which is a strong oxidising agent.

CHAPTER XV. HYDROGRAPHICAL SURVEYING.

Hydrographical surveying is that branch of surveying which deals with the complete preparation of charts, the survey of coast lines, currents, soundings, etc., and it is applied in connection with the sewerage of sea coast towns when it is necessary to determine the course of the currents, or a float, by observations taken from a boat to fixed points on shore, the boat closely following the float. It has already been pointed out that it is preferable to take the observations from the shore rather than the boat, but circumstances may arise which render it necessary to adopt the latter course. In the simplest case the position of the boat may be found by taking the compass bearings of two known objects on shore. For example, A and B in Fig. 37 may represent the positions of two prominent objects whose position is marked upon an ordnance map of the neighbourhood, or they may be flagstaffs specially set up and noted on the map; and let C represent the boat from which the bearings of A and B are taken by a prismatic compass, which is marked from 0 to 360°. Let the magnetic variation be N. 15° W., and the observed bearings A 290, B 320, then the position stands as in Fig. 38, or, correcting for magnetic variation, as in Fig. 39, from which it will be seen that the true bearing of C from A will be 275-180=95° East of North, or 5° below the horizontal, and the true bearing of C from B will be 305-180=125° East of North, or 35° below the horizontal. These directions being plotted will give the position of C by their intersection. Fig. 40 shows the prismatic compass in plan and section. It consists practically of an ordinary compass box with a prism and sight-hole at one side, and a corresponding sight-vane on the opposite side. When being used it is held horizontally in the left hand with the prism turned up in the position shown, and the sight-vane raised. When looking through the sight-hole the face of the compass-card can be seen by reflection from the back of the prism, and at the same time the direction of any required point may be sighted with the wire in the opposite sight vane, so that the bearing of the line between the boat and the required point may be read. If necessary, the compass-card may be steadied by pressing the stop at the base of the sight vane. In recording the bearings allowance must in all cases be made for the magnetic pole.
The magnetic variation for the year 1910 was about 15 1/2° West of North, and it is moving nearer to true North at the rate of about seven minutes per annum.

[Illustration: FIG. 37.--POSITION OF BOAT FOUND BY COMPASS BEARINGS.]

[Illustration: FIG. 38.--REDUCTION OF BEARINGS TO MAGNETIC NORTH.]

[Illustration: FIG. 39.--REDUCTION OF BEARINGS TO TRUE NORTH.]

There are three of Euclid's propositions that bear very closely upon the problems involved in locating the position of a floating object with regard to the coast, by observations taken from the object. They are Euclid I. (32), "The three interior angles of every triangle are together equal to two right angles"; Euclid III. (20), "The angle at the centre of a circle is double that of the angle at the circumference upon the same base--that is, upon the same part of the circumference," or in other words, on a given chord the angle subtended by it at the centre of the circle is double the angle subtended by it at the circumference; and Euclid III. (21), "The angles in the same segment of a circle are equal to one another."

[Illustration: Fig. 40.--Section and Plan of Prismatic Compass.]

Having regard to this last proposition (Euclid III., 21), it will be observed that in the case of Fig. 37 it would not have been possible to locate the point C by reading the angle A C B alone, as such point might be anywhere on the circumference of a circle of which A B was the chord. The usual and more accurate method of determining the position of a floating object from the object itself, or from a boat alongside, is by taking angles with a sextant, or box-sextant, between three fixed points on shore in two operations. Let A B C, Fig. 41, be the three fixed points on shore, the positions of which are measured and recorded upon an ordnance map, or checked if they are already there. Let D be the floating object, the position of which is required to be located, and let the observed angles from the object be A D B 30° and B D C 45°. Then on the map join A B and B C, from A and B set off angles = 90 - 30 = 60°, and they will intersect at point E, which will be the centre of a circle, which must be drawn, with radius E A. The circle will pass through A B, and the point D will be somewhere on its circumference. Then from B and C set off angles = 90 - 45 = 45°, which will intersect at point F, which will be the centre of a circle of radius F B, which will pass through points B C, and point D will be somewhere on the circumference of this circle also; therefore the intersection of the two circles at D fixes that point on the map. It will be observed that the three interior angles in the triangle A B E are together equal to two right angles (Euclid I. 32), therefore the angle A E B = 180 - 2 x (90 - 30) = 60°, so that the angle A E B is double the angle A D B (Euclid III., 20), and that as the angles subtending a given chord from any point of the circumference are equal (Euclid III., 21), the point that is common to the two circumferences is the required point. When point D is inked in, the construction lines are rubbed out ready for plotting the observations from the next position. When the floating point is out of range of A, a new fixed point will be required on shore beyond C, so that B, C, and the new point will be used together. Another approximate method which may sometimes be employed is to take a point on a piece of tracing paper and draw from it three lines of unlimited length, which shall form the two observed angles.
If, now, this piece of paper is moved about on top of the ordnance map until each of the three lines passes through the corresponding fixed points on shore, then the point from which the lines radiate will represent the position of the boat. [Illustration: Fig. 41. Geometrical Diagram for Locating Observation Point Afloat.] The general appearance of a box-sextant is as shown in Fig. 42, and an enlarged diagrammatic plan of it is shown in Fig. 43. It is about 3 in in diameter, and is made with or without the telescope; it is used for measuring approximately the angle between any two lines by observing poles at their extremities from the point of intersection. In Fig. 43, A is the sight- hole, B is a fixed mirror having one-half silvered and the other half plain; C is a mirror attached to the same pivot as the vernier arm D. The side of the case is open to admit rays of light from the observed objects. In making an observation of the angle formed by lines to two poles, one pole would be seen through the clear part of mirror B, and at the same time rays of light from the other pole would fall on to mirror C, which should be moved until the pole is reflected on the silvered part of mirror B, exactly in line, vertically, with the pole seen by direct vision, then the angle between the two poles would be indicated on the vernier. Take the case of a single pole, then the angle indicated should be zero, but whether it would actually be so depends upon circumstances which may be explained as follows: Suppose the pole to be fixed at E, which is extremely close, it will be found that the arrow on the vernier arm falls short of the zero of the scale owing to what may be called the width of the base line of the instrument. If the pole is placed farther off, as at F, the rays of light from the pole will take the course of the stroke-and-dot line, and the vernier arm will require to be shifted nearer the zero of the scale. After a distance of two chains between the pole and sextant is reached, the rays of light from the pole to B and C are so nearly parallel that the error is under one minute, and the instrument can be used under such conditions without difficulty occurring by reason of error. To adjust the box-sextant the smoked glass slide should be drawn over the eyepiece, and then, if the sun is sighted, it should appear as a perfect sphere when the vernier is at zero, in whatever position the sextant may be held. When reading the angle formed by the lines from two stations, the nearer station should be sighted through the plain glass, which may necessitate holding the instrument upside down. When the angle to be read between two stations exceeds 90°, an intermediate station should be fixed, and the angle taken in two parts, as in viewing large angles the mirror C is turned round to such an extent that its own reflection, and that of the image upon it, is viewed almost edgeways in the mirror B. [Illustration: Fig. 42.--Box-Sextant.] It should be noted that the box-sextant only reads angles in the plane of the instrument, so that if one object sighted is lower than the other, the angle read will be the direct angle between them, and not the horizontal angle, as given by a theodolite. The same principles may be adopted for locating the position of an object in the water when the observations have to be taken at some distance from it. 
To illustrate this, use may be made of an examination question in hydrographical surveying given at the Royal Naval College, Incidentally, it shows one method of recording the observations. The question was as follows:-- [Illustration: Fig. 43.--Diagram Showing Principle of Box- Sextant] "From Coastguard, Mound bore N. 77° W. (true) 0.45 of a mile, and Mill bore, N. 88° E, 0.56 of a mile, the following stations were taken to fix a shoal on which the sea breaks too heavily to risk the boat near:-- Mound 60° C.G. 47° Mill. [Greek: phi] Centre of shoal Mound 55° C.G. 57° 30' Mill. [Greek: phi] Centre of shoal. Project the positions on a scale of 5 in = a mile, giving the centre of the shoal." It should be noted that the sign [Greek: phi] signifies stations in one line or "in transit," and C G indicates coastguard station. The order of lettering in Fig. 44 shows the order of working. [Illustration: Fig. 44.--Method of Locating Point in Water When Observations Have to Be Taken Beyond It.] The base lines A B and A C are set out from the lengths and directions given; then, when the boat at D is "in transit" with the centre of the shoal and the coastguard station, the angle formed at D by lines from that point to B and A is 60°, and the angle formed by lines to A and C is 47°. If angles of 90° - 60° are set up at A and B, their intersection at E will, as has already been explained, give the centre of a circle which will pass through points A, B, and D. Similarly, by setting up angles of 90°-47° at A and C, a circle is found which will pass through A C and D. The intersection of these circles gives the position of the boat D, and it is known that the shoal is situated somewhere in the straight line from D to A. The boat was then moved to G, so as to be "in transit" with the centre of the shoal and the mound, and the angle B G A was found to be 55°, and the angle A G C 57° 30'. By a similar construction to that just described, the intersection of the circles will give the position of G, and as the shoal is situated somewhere in the line G B and also in the line A D, the intersection of these two lines at K will give its exact position. 
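The two-angle fix described above can also be carried out numerically instead of by geometrical construction. The Python sketch below is an illustration only: the shore coordinates and the boat position are invented for the example, SciPy's least-squares routine stands in for the circle construction, and the initial guess must lie on the seaward side of the stations, since, as noted above, a single angle only confines the observer to a circle:

```python
import numpy as np
from scipy.optimize import least_squares

def angle_at(p, a, b):
    # Angle, in degrees, subtended at point p by the segment a-b.
    u, v = np.asarray(a, float) - p, np.asarray(b, float) - p
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def three_point_fix(A, B, C, angle_adb, angle_bdc, guess):
    # Find D such that A-B subtends angle_adb at D and B-C subtends angle_bdc at D.
    res = lambda p: [angle_at(p, A, B) - angle_adb, angle_at(p, B, C) - angle_bdc]
    return least_squares(res, guess).x

# Invented shore stations (ft) and a known boat position, to show the round trip.
A, B, C = (0.0, 0.0), (1000.0, 150.0), (2000.0, 0.0)
D_true = np.array([900.0, -1200.0])
observed = (angle_at(D_true, A, B), angle_at(D_true, B, C))
print(three_point_fix(A, B, C, *observed, guess=(800.0, -1000.0)))   # ~[900, -1200]
```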
Aberdeen Sea Outfall Admiralty, Diving Regulations of --Charts, Datums for Soundings on --Main Currents Shown on Age of Tide Air Pressure on Tides, Effect of Almanac, Nautical Analysis of Cement --Sea Water Anchor Bolts for Sea Outfalls Anemometer for Measuring Wind Aphelion Apogee Atlantic Ocean, Tides in Autumnal Equinox Barometric Pressure, Effect on Tides of Beach Material, Use in Concrete of Beaufort Scale for Wind Bench Mark for Tide Gauge "Bird" Tides Board of Trade, Approval of Outfall by Bolts for Sea Outfall Pipes Box Sextant Bristol Channel --Datum for Tides at Buoy for Marking Position of Outfall Can Buoy to Mark Position of Outfall Cast Iron, Resistance to Sea Water of Cement, Action of Sea Water on --Analysis of --Characteristics Causing Hardening of --Setting of --Effect of Saline Matters on Strength of --Sea Water on Setting Time of --Physical Changes Due to Action of Sea Water on --Precautions in Marine Use of --Retardation of Setting Time of --Tests for Marine Use of Centrifugal Force, Effect on Tides of Centripetal Force, Effect on Tides of --Variations in Intensity of Charts, Datum for Soundings on --Main Currents Shown on Chepstow, Greatest Tide at Clifton, Tides at Compass, Magnetic Variation --Marine --Prismatic Concentration of Storm Water in Sewers Concrete, Action of Sea Water on --Composition to Withstand Sea Water --Destruction in Sea Water of Crown, Foreshore owned by Currents and Tides. Lack of co-ordination in change of --Formation of --in Rivers, 30 --Observations of --Variation of Surface and Deep --Variations in Velocity of Current Observations by Marine Compass --Theodolites --Floats for --Hydrographical Surveying for --Method of making --Plotting on Plans, The --Selecting Stations for --Special points for consideration in making --Suitable Boat for --Trigonometrical Surveying for Datum Levels for Tides Declination of Sun and Moon Decompression after Diving Density of Sea Water Derivative Waves Design of Schemes, Conditions governing Diffusion of Sewage in Sea Discharge from Sea Outfalls, Calculations for --Precautions necessary for --Time of Disposal of Sewage by Diffusion --dependent on time of Discharge Diurnal Inequality of Tides Diverting-plate Storm Overflow Diving --Illnesses caused by --Instruction in --Medical Examination previous to --Physical Principles involved in --Equipment Diving Equipment, Weight of Dublin, Datum for Tides at Earth, Distance from Moon --Sun --Orbit around Sun of --Size of --Time and Speed of Revolution of Equinox Erosion of Shore caused by Sea Outfalls Establishment Flap Valves on Sea Outfall Pipes Floats, Deep and Surface --to govern Pumping Plant Foreshore owned by Crown Gauges, Measuring flow over Weirs by Gauging flow of Sewage --, Formula: for Gradient, Effect on Currents of Surface --Tides of Barometric Gravity, Specific, of Sea Water --Tides caused by Great Crosby Sea Outfall Harbour and Fisheries Dept., Approval of Outfall by Harwich, Mean Level of Sea at High Water Mark of Ordinary Tides Hook-Gauge, for Measuring flow over Weirs Hull, Mean Level of Sea at Hydrographical Surveying Problems in Current Observations Impermeable Areas, Flow of Rain off --Percentage of --per Head of Population Indian Ocean, Tides in Infiltration Water Irish Channel, Analysis of Water in Iron, Effect of Sea Water on Cast June, Low Spring: Tides in Kelvin's Tide Predicting Machine Land, Area of Globe Occupied by Leap-weir Storm Overflow Liverpool, Datum for Tides at --Soundings on Charts of --Tide Tables Lloyd-Davies, Investigations by 
Local Government Board, Current Observations Required for London, Datum for Port of Low Water Mark of Ordinary Tides Lunar Month Lunation Magnetic Variation of Compass Marine Compass Mean High Water Mersey, Soundings on Charts of Mixing Action of Sewage and Water Moon, Declination of --Distance from Earth of --Effect on Tides of --Mass of --Minor Movements of --Orbit around Earth of --Perigee and Apogee Morse Code for Signalling Nautical Almanac Neap Tides --Average Rise of Orbit of Earth around Sun --Moon around Earth Ordinary Tides, lines on Ordnance Maps of Ordnance Datum for England --Ireland, 17 --Records made to fix --Maps, lines of High and Low Water on Outfall Sewers, Approval by Board of Trade of --Calculations for Discharge of --Construction of --Detail Designs for --Details of cast-iron Pipe Joints for --Flap Valves on end of --Inspection during Construction of --Marking position by Buoy of --Selection of Site for Overflows for Storm Water Pacific Ocean, Tides in Parliament, Current Observations Required for Perigee Perihelion Piling for Sea Outfalls Pipes, Joints of Cast Iron --Steel Plymouth, Mean Sea Level at Predicting Tides Primary Waves Prismatic Compass Pumping --Cost of --Plant --Management of --Utilisation of Windmills for Pumps for Use with Windmills Quantity of Rainfall to Provide for --Sewage to Provide for Rainfall --at Times of Light Winds --Frequency of Heavy --in Sewers --Intensity of --Storage Capacity to be Provided for --To Provide for Range of Tides Rise of Tides Screening Sewage before Discharge --Storm Water before Overflow Sea, Mean Level of Sea Outfalls, Calculations for Discharge of --Construction of --Design of --Lights and Buoy to mark position of --Selection of Site for Seashore Material used in Concrete Sea, Variation around Coast in level of --Water, Analysis of --Effect on Cast-Iron of --Effect on Cement --Galvanic action in --Weight of Secondary Waves Separate System of Sewerage Sewage, Effect of Sea Water on --Gauging flow of --Calculations for --Hourly and daily variation in flow of, 42 --Quantity to provide for Sewers, Economic considerations in provision of Surface Water --Effect on Design of Scheme of Subsidiary --Storm Water in Sextant, Box Signalling, Flags for --Morse Code for Solstice, Summer and Winter Soundings on Charts, Datum for Southampton, Tides at Southern Ocean, High Water in --Origin of Tides in --Width and Length of Specific Gravity of Sea Water Spring Tides --Average Rise of --Variation in Height of Storage Tanks, Automatic High Water Alarms for --Determination of Capacity of --For Windmill Pumps Storm Water in Sewers --Overflows Subsidiary Sewers, Effect on Design of Scheme of Summer Solstice Sun, Aphelion and Perihelion --Declination of --Distance from Earth --Effect on Tides of --Mass of --Minor Movements of Surface Water Sewers, Average Cost of --Economic Considerations in Provision of Surveying, Problems in Hydrographical --Trigonometrical Thames Conservancy Datum --Flow of Sewage in Tidal Action in Crust of Earth --Attraction --Day, Length of --Flap Valves on Sea Outfall Pipes --Observations, Best Time to Make --Records, Diagram of --Rivers, Tides and Currents in --Waves, Length of Primary --Secondary or Derivative --Speed of Primary --Velocity of Tide Gauge, Method of Erecting --Selecting Position of Tide, Observations of Rise and Fall of Tide-Predicting Machine --Recording Instrument --Tables Tides, Abnormally High --Age of Tides and Currents, Lack of Co-ordination in Change of --Diagrammatic Representation of 
Principal --Diurnal Inequality --Double, 9 --Effect of Barometric Pressure on --Centripetal and Centrifugal Force --Storms on, --Extraordinary High --Formation of --in Rivers --lines on Ordnance Maps of High and Low Water of --Propagation to Branch Oceans of --Proportionate Effect of Sun and Moon on --Range of --Rate of Rise and Fall of --Rise of --Spring and Neap --Variations in Height of Towers for Windmills Trade Wastes, Effect on flow of Sewage of Trass in Cement for Marine Vork Trigonometrical Surveying for Cuirent Observations Trinity High Water Mark Upland Water, Effect on Rivers of Valves on Sea Outfall Pipes Velocity of Currents Vernal Equinox Visitors, Quantity of Sewage from Volume of Sewage Water, Area of Globe occupied by --Fittings, Leakage from --Power for Pumping --Supply, Quantity per Head for --Weight of Waterloo Sea Outfall Waves, Horizontal Movement of --Motion of --Primary and Secondary --Tidal --Wind Weight of Fresh Water --Sea Water --Sewage Weirs for Gauging Sewage, Design of --Storm Overflow by Parallel Weymouth, Mean Level of Sea at Wind --Beaufort Scale for Wind, Mean Hourly Velocity of --Measuring Velocity of --Monthly Analysis of --Power of Windmills According to Velocity of --Rainfall at Time of Light --Velocity and Pressure of --Waves Windmills --Comparative Cost of --Details of Construction of --Effective Duty of --Efficient Sizes of --For Pumping Sewage --Height of Towers for --Power in Varying Winds of Winter Solstice
|
2017-03-30 10:47:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5368903279304504, "perplexity": 1757.4128026899205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218193716.70/warc/CC-MAIN-20170322212953-00348-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://scicomp.stackexchange.com/questions/30229/memory-speed-tradeoff-for-many-small-matrix-inverses/30323
|
# Memory/speed tradeoff for many small matrix inverses
Problem
In the case of a finite element code, I have many small (order of 30x30) matrix inverses (or LU factorizations), one per finite element. These matrix inverses never change and must be applied to vectors repeatedly in the context of a matrix-free solver.
Storing all these inverses is the memory bottleneck of the code, because they have to be applied for each finite element in the mesh. On the other hand, computing the inverses 'on the fly' (using LAPACK or Numpy) is not practical, because these inverses must be applied at each iteration of a sparse iterative solve, so each time-step would require
(number of CG iterations) * (number of elements) * (time of inverse)
which is too expensive, even for problems far smaller than those I am interested in.
(possible) solution
I am considering performing the matrix inversions or LU factorizations once, and then storing them to disk, rather than keeping them in memory. Then the idea is to read the inverses into memory either one at a time or by chunks, and apply them, avoiding the repeated inversions. But I know that reading from disk can be expensive.
Are there good or accepted ways of doing this? Most out-of-core papers I've looked at are considering direct solves or much more complicated problems, so I would appreciate any pointers to relevant references as well.
EDIT: The code is a mix of Python and C, and I've found some posts indicating that perhaps the HDF5 library is a good alternative to rolling my own cache, as suggested above.
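For reference, the kind of thing I am considering looks roughly like this (a sketch only; it assumes h5py and SciPy, and the dataset names, block size and the 30x30 shape are placeholders):

```python
import h5py
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def write_factorizations(filename, element_matrices):
    # One LU factorization per element, computed once, stored on disk and reused.
    n = len(element_matrices)
    with h5py.File(filename, "w") as f:
        lu_ds = f.create_dataset("lu", (n, 30, 30), dtype="f8", chunks=True)
        piv_ds = f.create_dataset("piv", (n, 30), dtype="i4", chunks=True)
        for i, A in enumerate(element_matrices):
            lu, piv = lu_factor(A)
            lu_ds[i], piv_ds[i] = lu, piv

def apply_all(filename, local_vectors, block=64):
    # Stream the stored factors back in blocks and apply them to the local vectors.
    out = np.empty_like(local_vectors)
    with h5py.File(filename, "r") as f:
        n = f["lu"].shape[0]
        for start in range(0, n, block):
            lu_block = f["lu"][start:start + block]
            piv_block = f["piv"][start:start + block]
            for j in range(lu_block.shape[0]):
                out[start + j] = lu_solve((lu_block[j], piv_block[j]),
                                          local_vectors[start + j])
    return out
```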
• When you say it's the memory bottleneck, do you mean bandwidth/speed or capacity/size? If it's bandwidth, moving it to disk is going to make things worse. If it's a size thing.. are you sure this is really an issue? Storing a single matrix per element, of fixed size, is not really asymptotically worse than storing the state vector/iterate vector itself (though certainly worse by a large constant factor). If storing them really is using too much memory, I think you would be better served by looking at distributed memory parallelism than out-of-core techniques. – rchilton1980 Sep 18 '18 at 14:33
• @rchilton1980 I mean capacity/size. As for the matrix vs. vector, yes, the large constant factor is a problem, and only gets worse with higher polynomial order. I have written a distributed version of the solver that splits the problem between nodes. But on each node, the storage of these inverses or their computation is still the limiting factor. Of course, the distributed approach also introduces a communication overhead between nodes. – user3482876 Sep 18 '18 at 14:48
• It might end up happening that doing the same inverses on the fly is faster than storing them. One keyword you might be looking for regarding the out-of-core solver is io latency hiding. Its efficiency will depend on how your other operations during the iterative solve allow you to incorporate the disk IO without significant slowdown. – Anton Menshov Sep 18 '18 at 15:39
• Any repetition in the grid/elements that could be exploited? Can you maybe use a mostly-structured grid? – rchilton1980 Sep 18 '18 at 17:04
• @rchilton1980 in general, no; the large problems in which I am interested involve unstructured grids. It's a good suggestion, though. – user3482876 Sep 18 '18 at 18:38
One idea would be to store a low-rank approximation of the inverse of the matrix, $$A^{-1}\approx UV$$, computed with the SVD. If $$A\in \mathbb{R}^{30\times 30}$$, then $$U\in \mathbb{R}^{30\times k}$$ and $$V\in \mathbb{R}^{k\times 30}$$, where $$k\approx 4$$ or $$5$$ is the rank. Use the $$UV$$ approximation as a preconditioner for solving the system $$Ax=b$$ using GMRES. You can vary $$k$$ as a tuning parameter that trades off memory against the convergence rate.
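A minimal NumPy sketch of this idea (the element matrix below is a random stand-in, $$k$$ is just an illustrative choice, and how well the truncation works depends on how quickly the singular values of $$A^{-1}$$ decay):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30)) + 30.0 * np.eye(30)   # stand-in for one element matrix
k = 5                                                     # rank of the stored approximation

U, s, Vt = np.linalg.svd(np.linalg.inv(A))                # SVD of the small dense inverse
U_k = U[:, :k] * s[:k]                                    # fold the singular values into U
V_k = Vt[:k, :]

def apply_approx_inverse(x):
    # Rank-k approximation of A^{-1} applied to a vector, as a preconditioner would be.
    return U_k @ (V_k @ x)

x = rng.standard_normal(30)
print(np.linalg.norm(np.linalg.solve(A, x) - apply_approx_inverse(x)))  # truncation error
# Storage per element: 2 * 30 * k numbers instead of 30 * 30.
```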
|
2020-01-18 15:03:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5596218109130859, "perplexity": 753.1942504035541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00019.warc.gz"}
|
https://arxiver.wordpress.com/2015/11/17/low-noise-titanium-nitride-kids-for-superspec-a-millimeter-wave-on-chip-spectrometer-ima/
|
Low Noise Titanium Nitride KIDs for SuperSpec: A Millimeter-Wave On-Chip Spectrometer [IMA]
SuperSpec is a novel on-chip spectrometer we are developing for multi-object, moderate resolution (R = 100 – 500), large bandwidth (~1.65:1) submillimeter and millimeter survey spectroscopy of high-redshift galaxies. The spectrometer employs a filter bank architecture, and consists of a series of half-wave resonators formed by lithographically-patterned superconducting transmission lines. The signal power admitted by each resonator is detected by a lumped element titanium nitride (TiN) kinetic inductance detector (KID) operating at 100 – 200 MHz. We have tested a new prototype device that achieves the targeted R = 100 resolving power, and has better detector sensitivity and optical efficiency than previous devices. We employ a new method for measuring photon noise using both coherent and thermal sources of radiation to cleanly separate the contributions of shot and wave noise. We report an upper limit to the detector NEP of $1.4\times10^{-17}$ W Hz$^{-1/2}$, within 10% of the photon noise limited NEP for a ground-based R=100 spectrometer.
S. Hailey-Dunsheath, E. Shirokoff, P. Barry, et al.
Tue, 17 Nov 15
34/87
Comments: 8 pages, 4 embedded figures, accepted for publication in the Journal of Low Temperature Physics
|
2021-10-22 13:24:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.584433913230896, "perplexity": 3910.3090902446384}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00096.warc.gz"}
|
https://quant.stackexchange.com/questions/54873/under-what-conditions-will-both-european-and-american-put-options-worth-the-same/54888
|
Under what conditions will both European and American put options worth the same?
It is well-known that for a non-dividend-paying stock, it is suboptimal to exercise an American call option early. In other words, European and American call options on the same non-dividend-paying stock are worth the same. This can be proven using put-call parity.
However, I am not sure about European and American put options. I know that, because an American option can be exercised at any time prior to maturity, it should be worth at least as much as the corresponding European put option. I am interested to know the conditions that give equality. In particular,
Question: Under what conditions will both European and American put options worth the same?
My motivation behind asking this question is so that I can ensure that my binomial tree implementation to price American put option is correct.
When the dividend is zero, my binomial tree outputs the same price for both European and American call options, which is a good sign. However, I have no alternative way to check whether the tree gives the correct price for an American put option or not.
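For reference, here is the kind of cross-check I am after: a small Cox-Ross-Rubinstein tree (a rough sketch with made-up parameters, not my actual implementation) that prices European and American puts side by side. With a deep in-the-money put and a high interest rate the two prices separate clearly, which gives a concrete test case:

```python
import numpy as np

def binomial_put(S0, K, r, sigma, T, steps, american):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = np.exp(-r * dt)

    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    V = np.maximum(K - S, 0.0)                   # payoff at maturity
    for i in range(steps - 1, -1, -1):
        S = S0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
        V = disc * (p * V[:-1] + (1.0 - p) * V[1:])
        if american:
            V = np.maximum(V, K - S)             # early-exercise check at every node
    return V[0]

# Illustrative parameters only: deep in-the-money put, double-digit interest rate.
S0, K, r, sigma, T = 20.0, 100.0, 0.15, 0.2, 1.0
print(binomial_put(S0, K, r, sigma, T, 500, american=False))   # well below K - S0
print(binomial_put(S0, K, r, sigma, T, 500, american=True))    # close to K - S0 = 80
```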
• Well, trivial examples would be zero strike ($K=0$) and at maturity ($t=T$). Also, the early exercise premium equals zero if $r=0$. – Kevin Jun 13 '20 at 5:08
• Perhaps you could explain your second statement? – Idonknow Jun 13 '20 at 10:38
• Early exercise is about receving your cash early and not waiting until the option expires. If there's no interest rate (no discounting, no time value of money), then there's no point of early exercise. More mathematically, look at the early exercise premium derived in Carr, Jarrow, Myneni (1992). It clearly equals zero if $r=0$. – Kevin Jun 13 '20 at 11:19
• To test your code you need an example where American and European prices are different. For an interesting test case try a put option that is deeply in the money (S very low compared to K) and a very high interest rate (double digit). This is a case where the American should be exercised early. – noob2 Jun 14 '20 at 8:03
• ... i.e. the american should have a price = Intrinsic Value, while the European price should be lower than this. – noob2 Jun 14 '20 at 15:41
|
2021-05-08 11:00:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3159608244895935, "perplexity": 682.603661868046}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00267.warc.gz"}
|
http://pyxplot.org.uk/examples/03td/03surface_sinc/index.html
|
# Pyxplot
## Examples - Sinc function
An example of the surface plotting style
Pyxplot's surface plotting style evaluates a function at a grid of points in the x-y plane, and draws a 3D surface showing how the function varies across the plane. For added prettiness, an expression is also given for the color of the line, which varies from point to point. As in expressions passed to the using modifier, the columns of data are referred to as $1 for the first column, i.e. x; $2 for the second column, y; etc. The expression given here uses the built-in function hsb() to produce a color object with the specified hue, saturation and brightness.
### Script
set numerics complex
set xlabel r"$x$"
set ylabel r"$y$"
set zlabel r"$z$"
set xformat r"%s$\pi$"%(x/pi)
set yformat r"%s$\pi$"%(y/pi)
set xtics 3*pi ; set mxtics pi
set ytics 3*pi ; set mytics pi
set ztics
set key below
set size 6 square
set grid
plot 3d [-6*pi:6*pi][-6*pi:6*pi][-0.3:1] sinc(hypot(x,y)) \
with surface col black \
fillcol hsb(atan2($1,$2)/(2*pi)+0.5,hypot($1,$2)/30+0.2,$3*0.5+0.5)
|
2018-01-16 17:33:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.594407856464386, "perplexity": 8315.327542754048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886476.31/warc/CC-MAIN-20180116164812-20180116184812-00480.warc.gz"}
|
https://www.physicsforums.com/threads/derivative-of-a-fractional-function-without-quotient-rule.842185/
|
# Homework Help: Derivative of a fractional function without quotient rule
1. Nov 9, 2015
### Saracen Rue
1. The problem statement, all variables and given/known data
The displacement of a particle can be modeled by the function $x(t)=\frac{2x-5}{4x^2+2x}$, where $t$ is in seconds, $x$ is in meters, and $t ∈ [1,10]$
a) Determine the derivative of the function without using the quotient rule.
b) Hence, find exactly when the particle is stationary.
c) Determine when the particle is moving at a constant velocity. You can use your calculator to assist you.
d) A jolt is defined as being a change in acceleration over time. With the help of your calculator, determine the time for which the jolt is equal to the displacement.
2. Relevant equations
I'm not sure, besides the differentiation rules. However, I can't use the quotient rule.
3. The attempt at a solution
I'm honestly not sure how to even begin here. I've never been taught how to solve this sort of question without the quotient rule. Can anyone please give me an idea on where I should start? Thank you for your time.
2. Nov 9, 2015
### Krylov
Use the product rule and the chain rule instead?
3. Nov 9, 2015
### Saracen Rue
I have never learnt the chain rule method. Also, I'm not sure how the product rule would work here: the $(4x^2+2x)$ term would be to the power of negative 1, and I'm not sure how to apply to product rule then. Sorry for being stupid, but I'm really new (not to mention bad) when it comes to calculus...
4. Nov 9, 2015
### Krylov
Lack of knowledge does not imply stupidity. Do you have a good calculus book? I recommend that you make a few hundred exercises from such a book (not all at once!) differentiating all kinds of functions using the chain rule, product rule and quotient rule. This is essential material and it should not be an obstruction.
Also, now that I take a better look, I think your function should read $x(t) = \frac{2 t - 5}{4 t^2 + 2 t}$ instead, so $t$ instead of $x$ in the right-hand side.
5. Nov 9, 2015
### Ray Vickson
[QUOTE="Saracen Rue, post: 5283378, member: 521193"]1. The problem statement, all variables and given/known data
The displacement of a particle can be modeled by the function $x(t)=\frac{2x-5}{4x^2+2x}$, where $t$ is in seconds, $x$ is in meters, and $t ∈ [1,10]$
a) Determine the derivative of the function without using the quotient rule.
b) Hence, find exactly when the particle is stationary.
c) Determine when the particle is moving at a constant velocity. You can use your calculator to assist you.
d) A jolt is defined as being a change in acceleration over time. With the help of your calculator, determine the time for which the jolt is equal to the displacement.
2. Relevant equations
I'm not sure, besides the differentiation rules. However, I can't use the quotient rule.
3. The attempt at a solution
I'm honestly not sure how to even begin here. I've never been taught how to solve this sort of question without the quotient rule. Can anyone please give me an idea on where I should start? Thank you for your time.[/QUOTE]
The product rule gives
$$\frac{d}{dx} (2x-5)(4x^2 +2x)^{-1} = \left[ \frac{d}{dx} (2x-5) \right] (4x^2 + 2x)^{-1} + (2x - 5) \left[ \frac{d}{dx} (4x^2 + 2x)^{-1} \right]$$
Getting $d(2x-5)/dx$ is easy; where you need to use the chain rule is in the second differentiation:
$$\frac{d(4x^2 + 2x)^{-1}}{dx} = \frac{d(4x^2 + 2x)^{-1}}{d (4x^2 + 2x)} \cdot \frac{d(4x^2 + 2x)}{dx}$$
This means that if we have a function of the form $f(x) = g(h(x))$ we can let $u = h(x)$ and get
$$\frac{df}{dx} = \left. \frac{dg}{du} \right|_{u = h(x)} \cdot \frac{dh}{dx}$$
For f = g(h) (with h = h(x)), a way of remembering this is to think of
$$\frac{df}{dx} = \frac{dg}{dh} \cdot \frac{dh}{dx}$$
(cancelling the dh's).
Anyway, if $h = 4x^2 + 2x$ and $g(h) = h^{-1}$, can you calculate $dg/dh$ and $dh/dx$? That's all there is to it.
6. Nov 9, 2015
### Staff: Mentor
If your function is $x(t) = \frac{2t - 5}{4t^2 + 2t}$ (with change as noted by Krylov), and the objective is to find x'(t) without using the quotient rule, there are three other possibilities.
1) Use the definition of the derivative as the limit of a quotient.
2) Use the product rule and chain rule, writing the function as $x(t) = (2t - 5) (4t^2 + 2t)^{-1}$
3) Use the product rule and a rule you might have learned about the derivative of the reciprocal of a function, with $x(t) = (2t - 5) \left(\frac{1}{4t^2 + 2t}\right)$ .
Choice 1 above would be pretty tricky, so it seems to me that one of the other two choices is what you're expected to use.
7. Nov 9, 2015
### epenguin
Or you could express the fraction as
$\frac{(2x + 1) - 6}{2x (2x + 1)}$
and I hope easy to see ways to proceed.
8. Nov 9, 2015
### SammyS
Staff Emeritus
You could use partial fraction decomposition. Then you only need to deal with $\displaystyle \frac{A}{t}+ \frac{B}{2t+1}$.
Alternatively, you could multiply both sides by the denominator, then do implicit differentiation.
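Whichever route is taken, the end result is easy to check by machine. A minimal SymPy sketch (not from the thread itself; it uses the corrected function, with $t$ rather than $x$ on the right-hand side):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x = (2*t - 5) / (4*t**2 + 2*t)

# Product rule on (2t - 5)*(4t^2 + 2t)**(-1), with the chain rule for the second factor
by_hand = 2*(4*t**2 + 2*t)**(-1) + (2*t - 5)*(-(4*t**2 + 2*t)**(-2))*(8*t + 2)

print(sp.simplify(sp.diff(x, t) - by_hand))   # 0 -> the hand computation agrees
print(sp.solve(sp.Eq(sp.diff(x, t), 0), t))   # part (b): where the particle is stationary
```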
|
2018-07-23 01:03:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8127821087837219, "perplexity": 270.28214929671185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594675.66/warc/CC-MAIN-20180722233159-20180723013159-00481.warc.gz"}
|
https://scitools.org.uk/iris/docs/latest/whatsnew/1.12.html
|
# What’s New in Iris 1.12¶
Release: 1.12 2017-01-30
This document explains the new/changed features of Iris in version 1.12 (View all changes.)
## Iris 1.12 Features¶
Showcase Feature: New regridding schemes
A new regridding scheme, iris.analysis.UnstructuredNearest, performs nearest-neighbour regridding from “unstructured” onto “structured” grids. Here, “unstructured” means that the data has X and Y coordinate values defined at each horizontal location, instead of the independent X and Y dimensions that constitute a structured grid. For example, data sampled on a trajectory or a tripolar ocean grid would be unstructured.
In addition, added experimental ProjectedUnstructured regridders which use scipy.interpolate.griddata to regrid unstructured data (see iris.experimental.regrid.ProjectedUnstructuredLinear and iris.experimental.regrid.ProjectedUnstructuredNearest). The essential purpose is the same as iris.analysis.UnstructuredNearest. This scheme, by comparison, is generally faster, but less accurate.
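As a rough sketch of how such a scheme is used (the file names below are placeholders, not from the release notes), the scheme object is simply passed to the ordinary cube regrid call:

```python
import iris
from iris.analysis import UnstructuredNearest

# src_cube has per-point X and Y coordinates (e.g. a trajectory or tripolar ocean grid);
# grid_cube is any cube defined on the target structured grid.
src_cube = iris.load_cube('unstructured_source.nc')   # placeholder file name
grid_cube = iris.load_cube('structured_target.nc')    # placeholder file name

result = src_cube.regrid(grid_cube, UnstructuredNearest())
```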
Support has been added for accelerated loading of UM files (PP and Fieldsfile), when these have a suitable regular “structured” form.
A context manager is used to enable fast um loading in all the regular Iris load functions, such as iris.load() and iris.load_cube(), when loading data from UM file types. For example:
>>> import iris
>>> from iris.fileformats.um import structured_um_loading
>>> with structured_um_loading():
...     cubes = iris.load(iris.sample_data_path('uk_hires.pp'))
This approach can deliver loading which is 10 times faster or more. For example:
• a 78 Gb fieldsfile of 51,840 fields loads in about 13 rather than 190 seconds.
• a set of 25 800Mb PP files loads in about 21 rather than 220 seconds.
• The results will normally differ, if at all, only in having dimensions in a different order or a different choice of dimension coordinates. In these cases, structured loading can be used with confidence.
• Ordinary Fieldsfiles (i.e. model outputs) are generally suitable for structured loading. Many PP files also are, especially if produced directly from Fieldsfiles, and retaining the same field ordering.
• Some inputs however (generally PP) will be unsuitable for structured loading : For instance if a particular combination of vertical levels and time has been omitted, or some fields appear out of order.
• There are also some known unsupported cases, including data which is produced on pseudo-levels. See the detail documentation on this.
It is the user’s responsibility to use structured loading only with suitable inputs. Otherwise, odd behaviour and even incorrect loading can result, as input files are not checked as fully as in a normal load.
• results are often somewhat different, especially regarding the order of dimensions and the choice of dimension coordinates.
• although both constraints and user callbacks are supported, callback routines will generally need to be re-written. This is because a ‘raw’ cube in structured loading generally covers multiple PPfields, which therefore need to be handled as a collection : A grouping object containing them is passed to the callback ‘field’ argument. An example showing callbacks suitable for both normal and structured loading can be seen here.
For full details, see : iris.fileformats.um.structured_um_loading().
• A skip pattern is introduced to the fields file loader, such that fields which cannot be turned into iris PPField instances are skipped and the remaining fields are loaded. This especially applies to certain types of files that can contain fields with a non-standard LBREL value : Iris can now load such a file, skipping the unreadable field and printing a warning message.
• Iris can now load PP files containing a PP field whose LBLREC value does not match the field length recorded in the file. A warning message is printed, and all fields up to the offending one are loaded and returned. Previously, this simply resulted in an unrecoverable error.
• The transpose method of a Cube now results in a lazy transposed view of the original rather than realising the data then transposing it.
• The iris.analysis.cartography.area_weights() function is now more accurate for single precision input bounds.
• Iris is now able to read seconds in datetimes provided in NAME trajectory files.
• Optimisations to trajectory interpolations have resulted in a significant speed improvement.
• Many new and updated translations between CF spec and STASH codes.
|
2018-10-20 23:37:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3468455970287323, "perplexity": 3713.340826429956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513508.42/warc/CC-MAIN-20181020225938-20181021011438-00506.warc.gz"}
|
http://mathv.chapman.edu/~jipsen/structures/doku.php/algebraic_semilattices?rev=1280441541&do=diff
|
# Differences
This shows you the differences between two versions of the page.
algebraic_semilattices [2010/07/29 15:12]
jipsen created
algebraic_semilattices [2010/09/04 16:55] (current)
jipsen delete hyperbaseurl
Line 1: Line 1:
-f
+=====Algebraic semilattices=====
+
+Abbreviation: **ASlat**
+
+====Definition====
+An \emph{algebraic semilattice} is a [[complete semilattice]] $\mathbf{P}=\langle P,\leq \rangle$
+such that
+
+the set of compact elements below any element is directed and
+
+every element is the join of all compact elements below it.
+
+An element $c\in P$ is \emph{compact} if for every subset $S\subseteq P$ such that $c\le\bigvee S$, there exists
+a finite subset $S_0$ of $S$ such that $c\le\bigvee S_0$.
+
+The set of compact elements of $P$ is denoted by $K(P)$.
+
+==Morphisms==
+Let $\mathbf{P}$ and $\mathbf{Q}$ be algebraic semilattices. A morphism from $\mathbf{P}$ to
+$\mathbf{Q}$ is a function $f:P\rightarrow Q$ that is \emph{Scott-continuous}, which means that $f$ preserves all directed joins:
+
+$z=\bigvee D\Longrightarrow f(z)= \bigvee f[D]$
+
+
+====Examples====
+Example 1:
+
+====Basic results====
+
+
+====Properties====
+^[[Classtype]] |second-order |
+^[[Amalgamation property]] | |
+^[[Strong amalgamation property]] | |
+^[[Epimorphisms are surjective]] | |
+====Finite members====
+
+$\begin{array}{lr}
+f(1)= &1\\
+\end{array}$
+
+
+====Subclasses====
+[[Algebraic lattices]]
+
+
+====Superclasses====
+[[Algebraic posets]]
+
+
+====References====
+
+[(Ln19xx>
+)]
|
2020-09-19 13:11:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8096700310707092, "perplexity": 8727.080835486195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191780.21/warc/CC-MAIN-20200919110805-20200919140805-00611.warc.gz"}
|
http://cheaptalk.org/category/uncategorized/page/2/
|
A firm has a basic goal: maximize profits. And then it has day-to-day decisions. It is far too complicated to every day try to trace through the consequences of those basic decisions on the fundamental objective of maximizing profits. A manager who tried to do that would spend so much time thinking that by the time he figured it out the day would be over and he’d have to start thinking again about tomorrow’s decision.
So firms don’t hire managers like that. Managers cling to intermediate goals, like say maximize market share. The best intermediate goals are the ones that are easy to monitor and which do a pretty good job of proxying for the underlying goal. These intermediate goals eventually become part of the culture of the firm and knowledge of their connection to the underlying goal can get lost. The manager can’t distinguish between intermediate goals and fundamental goals.
Now a consultant comes in to advise the manager. A consultant’s job is to show the manager how best to pursue his goals. So the very first thing a consultant should do is find out what the manager’s goals are. And here’s where the dilemma arises. The consultant might actually be smart enough to figure out that the manager’s goals are just intermediate goals. Does he say “De Gustibus” and advise the manager on how to pursue his goals even if he can see that in this particular instance it works against what the manager should really be maximizing?
Or does he have enough ambition in his job as advisor to try to convince the manager that his goals are all wrong, that he should really be maximizing something else? I honestly wonder what the smart consultant does in these situations.
More generally, in everyday life we have arguments about what’s the right thing to do. A lot of the time these arguments are confounded by the inability to distinguish whether we are arguing about the right course of action given our common goals (an argument that can be settled) or whether we have really just chosen different intermediate goals (loggerheads.)
Presh Talwalker tells us about this study of parking strategies:
They observed two distinct strategies: “cycling” and “pick a row, closest space.” They compared the results. “What was interesting,” [Professor Andrew Velkey found], “was although the individual cycling were spending more time driving looking for a parking space, on average they were no closer to the door, time-wise or distance-wise, than people using ‘pick a row, closest space.’”
And commenters are inferring that hunting for the best spot is a sub-optimal strategy. But those that are searching for the best parking spot are not interested in reducing their expected parking time, rather they care about the second moment. When we have an appointment there is a deadline effect: our payoff drops precipitously if we arrive past the deadline. Faced with such a payoff function we are typically willing to increase our expected parking time if in return we can at least increase the probability of getting lucky with a really good spot. “Pick a row, closest space” guarantees we will be a bit late. “Cycling” may increase the average searching time but at least gives us a chance of being on time.
Dr. Doom did not get approval for the tub from the Department of Buildings, which resulted in the violation. He’s been ordered to remove not only the tub, but also the deck and party room (replete with a bar and bathroom) which he had constructed on the roof of his $5.5 million East First Street pad. The ruling apparently came about after a complaint levied in February. If you’re still grappling to get a sense of what will be lost now that these parties—at least, in the form they previously inhabited—will cease to exist, here’s a wonderful quote Roubini gave to New York which paints quite the picture of both his allure and the nature of the shindigs. “[The models who attend my parties] love my beautiful mind. I am ugly, but they’re attracted to the brains. I’m a rock star among geeks, wonks and nerds,” he said. “[What makes the parties so great are] fun people and beautiful girls. I look for 10 girls to one guy.”

1. Exercise: find a name such that when you sing The Name Game (“banana fana fo …”) all three words you get are insults.
2. The Northwestern Women’s Lacrosse team has won the NCAA championship like every year but two in the past decade. The two losses make the overall dynasty more impressive. Discuss.
3. Why do fat people slide farther when they reach the bottom of a water slide?
4. When hiking in a group, if an accurate measure of (changes in) elevation is unavailable but you have a watch and a GPS it’s better to share the work of carrying the backpack by dividing the time rather than the distance.

You are walking back to your office in the rain and your path is lined by a row of trees. You could walk under the trees or you could walk in the open. Which will keep you drier? If it just started raining you can stay dry by walking under the trees. On the other hand, when the rain stops you will be drier walking in the open. Because water will be falling off the leaves of the tree even though it has stopped raining. Indeed when the rain is tapering off you are better off out in the open. And when the rain is increasing you are better off under the tree.

What about in steady state? Suppose it has been raining steadily for some time, neither increasing nor tapering off. The rain that falls onto the top of the tree gets trapped by leaves. But the leaves can hold only so much water. When they reach capacity water begins to fall off the leaves onto you below. In equilibrium the rate at which water falls onto the top of the tree, which is the same rate it would fall on you if you were out in the open, equals the rate at which water falls off the leaves onto you. Still you are not indifferent: you will stay drier out in the open. Under the tree the water that falls onto you, while constituting an equal total volume as the water that would hit you out in the open, is concentrated in larger drops. (The water pools as it sits on the leaves waiting to be pushed off onto you.) Your clothes will be dotted with fewer but larger water deposits and an equal volume of water spread over a smaller surface area will dry slower.

It is important in all this that you are walking along a line of trees and not just standing in one place. Because although the rain lands uniformly across the top of the tree, it is probably channeled outward away from the trunk as it falls from leaf to leaf and eventually below. (I have heard that this is true of Louisiana Oaks.) So the rainfall is uniform out in the open but not uniform under the tree.
This means that no matter where you stand out in the open you will be equally wet, but there will be spots under the tree in which the rainfall will be greater than and less than that average. You can stand at the local minimum and be drier than you would out in the open.

Why are conditional probabilities so rarely used in court, and sometimes even prohibited? Here’s one more good reason: prosecution bias. Suppose that a piece of evidence X is correlated with guilt. The prosecutor might say, “Conditional on evidence X, the likelihood ratio for guilt versus innocence is Y, update your priors accordingly.” Even if the prosecutor is correct in his statistics his claim is dubious. Because the prosecutor sees the evidence for all suspects before deciding which ones to bring to trial. And the jurors know this. So the fact that evidence like X exists against this defendant is already partially reflected in the fact that it was this guy they brought charges against and not someone else. If jurors were truly Bayesian (a necessary presumption if we are to consider using probabilities in court at all) then they would already have accounted for this and updated their priors accordingly before even learning that evidence X exists. When they are actually told it would necessarily move their priors less than what the statistics imply, perhaps hardly at all, maybe even in the opposite direction.

A balanced take in the New Yorker. Here is an excerpt.

A core objection is that neuroscientific “explanations” of behavior often simply re-state what’s already obvious. Neuro-enthusiasts are always declaring that an MRI of the brain in action demonstrates that some mental state is not just happening but is really, truly, is-so happening. We’ll be informed, say, that when a teen-age boy leafs through the Sports Illustrated swimsuit issue areas in his brain associated with sexual desire light up. Yet asserting that an emotion is really real because you can somehow see it happening in the brain adds nothing to our understanding. Any thought, from Kiss the baby! to Kill the Jews!, must have some route around the brain. If you couldn’t locate the emotion, or watch it light up in your brain, you’d still be feeling it. Just because you can’t see it doesn’t mean you don’t have it. Satel and Lilienfeld like the term “neuroredundancy” to “denote things we already knew without brain scanning,” mockingly citing a researcher who insists that “brain imaging tells us that post-traumatic stress disorder (PTSD) is a ‘real disorder.’ ”

And

It’s perfectly possible, in other words, to have an explanation that is at once trivial and profound, depending on what kind of question you’re asking. The strength of neuroscience, Churchland suggests, lies not so much in what it explains as in the older explanations it dissolves. She gives a lovely example of the panic that we feel in dreams when our legs refuse to move as we flee the monster. This turns out to be a straightforward neurological phenomenon: when we’re asleep, we turn off our motor controls, but when we dream we still send out signals to them. We really are trying to run, and can’t.

He died earlier this week.
If you grew up in Southern California, and you watched TV, you may have forgotten Cal Worthington but his dog Spot, the acres and acres of cars, the “Go See Cal”, the giant selection of cars and trucks on sale, the “open every day til midnight” and the music in the way “nineteen” springboarded the cars vintage out of your set and into your ears are all still stored away in some synapses somewhere in there and they’re all gonna come flowing out when you watch this video and probably bring with them a whole bunch of other stuff lost in there that you are gonna be pretty tickled to find again. RIP Cal Worthington.

Should restaurants put salt shakers on the table? A variety of food writers weigh in on the question here. The naive argument is that salt shakers give diners more control. They know their own tastes and can fine tune the salt to their liking. The problem with this argument is that salt shaken over prepared food is not the same as salt added to food as it is cooked. A chef adds salt numerous times through the cooking process to different items on the plate because some need more salt than others. So the benefit of control comes at the cost of excess uniformity in the flavor.

But beyond that, there is an interesting strategic issue. When there is no salt shaker on the table the chef chooses the level of saltiness to meet some median or average diner’s taste for salt. All diners get equally salty food independent of their taste. Diners to the left of the median find their dish too salty and diners to the right wish they had a salt shaker. A reduction in the level of saltiness benefits those just to the left of the median at the expense of those far to the right and at an optimum those costs outweigh the benefits. But when there is a salt shaker, the chef can reduce the level of saltiness at a lower cost because those to the right can compensate (albeit imperfectly) by adding back the salt. So in fact the optimal level of salt added by a chef whose restaurant puts salt shakers on the table is lower. So the interesting observation is that salt shakers on the table benefit diners who like less salt (and also those that like a lot of salt) at the expense of the average diner (who would otherwise be getting his salt bliss point but is now getting too little).

Imagine that the President convenes his top economic advisors to get a recommendation on a pressing policy issue. They say unequivocally “do X.” The President asks why and they say “its complicated. Do X.” The President, not happy with that, decides he is going to read the economic literature on the pros and cons of doing X. After a thorough study he comes back to his advisors and says “You economists don’t understand your own science. I read the literature and I should do Y.” I think we would agree that’s a bad outcome. For probably exactly the same reason that Doctors don’t seem to be happy with economist Emily Oster’s apparent advice to pregnant women to drink alcohol “like a European adult.”

But let’s assume that Emily truly can interpret the published statistical literature better than her Obstetrician. There is another reason to question her recommendations. An advisor’s job is to advise on the risks of an activity. Because the advisor is the expert on that. The decision-maker is the expert on her own preferences. The correct decision is based on weighing both of these. A recommendation to have up to a glass of wine per day while pregnant confounds the two sides. What it really means is “I like wine a lot.
I also read about the risks and decided that my taste for wine was strong enough that I am willing to live with the risks.” Thus her recommendation amounts to “If you like wine as much as I do you should drink up to a glass per day when you are pregnant.” When I asked my doctor about drinking wine, she said that one or two glasses a week was “probably fine.” But “probably fine” isn’t a number. The problem is that there is no way to quantify how much she likes wine and so no way for her readers to know whether they like wine as much as she does. Likewise it is too much for Emily to demand her doctors to say much more than “probably fine.” The doctors’ advice is based on some assumption about the patient’s taste for wine weighed against the risks. Emily’s advice is based on a different assumption. As for the risks, when Emily reads the literature and concludes that the evidence is weak of the danger of drinking alcohol she then jumps to the conclusion that it is weaker than what the doctors thought. She makes the identifying assumption that their recommendation was conservative because they overestimated the risks and not because they underestimated her taste for wine. But there does not seem to be any basis for that assumption because her doctors never told her what they believed the risks to be and they never asked her how much she likes wine.

One of my favorite subjects is woven through the highly entertaining This American Life segment about Emir Kamenica. I highly recommend listening. Unrelated to the main story, at the very beginning Michael Lewis details a very nice defense of plagiarism:

The concept was alien to me, I mean I just thought it was an odd concept because you repeat what other people say all the time, I was just repeating what someone else said, it was a very intelligent thing to repeat. And I thought I was saving us a lot of trouble… save him the trouble of having to read something really awful and I wouldn’t have to write a boring book report or even read the boring book and so I was doing both of us a favor. And it seemed sortof counterintuitive to have to generate a thing that had already been done.

Roy is coming to plant flowers in Zoe’s garden. Zoe loves flowers, her utility for a garden with $x$ flowers is $z(x) = x.$ Roy plants a unit mass of seeds and the fraction of these that will bloom into flowers depends on how attentive Roy is as a gardener. Roy’s attentiveness is his type $\theta$. In particular when Roy’s type is $\theta$, absent any sabotage by Zoe, there will be $\theta$ flowers in Zoe’s garden in Spring. Roy’s attentiveness is unknown to everyone and it is believed by all to be uniformly distributed on the unit interval.

Jane, Zoe’s neighbor, is looking for a gardener for the following Spring. Jane has high standards, she will hire Roy if and only if he is sufficiently attentive. In particular, Jane’s utility for hiring Roy when his true type is $\theta$ is given by $j(\theta) = \theta - 2/3.$ (Her utility is zero if she does not hire Roy.) Roy tends to one and only one garden per year. Therefore Roy will continue to plant flowers in Zoe’s garden for a second year if and only if Jane does not hire him away from her. Consequently, Zoe is contemplating sabotaging Roy’s flowers this year. If Zoe destroys a fraction $1 - \alpha$ of Roy’s seeds then the total number of flowers in Zoe’s garden when Spring arrives will be $x = \alpha\theta$. Of course sabotage is costly for Zoe because she loves flowers.
There will be no sabotage in the second year because after two years of gardening Roy goes into retirement. Therefore, if Zoe destroys $1-\alpha$ in the first year and Roy continues to work for Zoe in the second year, Zoe’s total payoff will be $z(\alpha\theta) + z(\theta)$ whereas if Roy is hired away by Jane, then Zoe’s total payoff is just $z(\alpha\theta)$.

This is a two-player (Zoe and Jane) extensive-form game with incomplete information. The timing is as follows. First, Roy’s type is realized. Nobody observes Roy’s type. Zoe moves first and chooses $\alpha \in [0,1]$. Then Spring arrives and the flowers bloom. Jane does not observe $\alpha$ but does observe the number of flowers in Zoe’s garden. Then Jane chooses whether or not to hire Roy away from Zoe. Then the game ends. Describe the set of all Perfect Bayesian Equilibria.

I haven’t decided yet and I can’t figure out which side this is evidence for:

Me: Oh I have to remember to set up your desk today because I promised that I would and that if I didn’t I would give you $2.
7 year old: I was hoping you would forget.
Me: Are you saying you would rather have $2 than your desk?
7yo: No, I am saying I would rather have $2 today and my desk tomorrow.
Me: Hold on, what would you rather have: $2 today and your desk tomorrow or $2 today, another $2 tomorrow and then your desk the next day?
7yo: $2 today, another $2 tomorrow and then my desk the next day.
Me: $2 today, $2 tomorrow, $2 the next day, and then your desk the day after that?
7yo: Yep.
Me: And what is the number of days you would like to have $2 before you finally get your desk?
7yo: Infinity.
Here’s the preview:
Students all over are starting college this month, and some of them still have a nagging question: what, exactly, got me in? An admissions officer tells us the most wrongheaded things applicants try. And Michael Lewis has the incredible story of how a stolen library book got one man — Emir Kamenica — into his dream school. (Photo: Emir as a Harvard undergrad. Credit Terri Wang.)
From The New Yorker
Now, imagine an animal that emerges every twelve years, like a cicada. According to the paleontologist Stephen J. Gould, in his essay “Of Bamboo, Cicadas, and the Economy of Adam Smith,” these kind of boom-and-bust population cycles can be devastating to creatures with a long development phase. Since most predators have a two-to-ten-year population cycle, the twelve-year cicadas would be a feast for any predator with a two-, three-, four-, or six-year cycle. By this reasoning, any cicada with a development span that is easily divisible by the smaller numbers of a predator’s population cycle is vulnerable.
Prime numbers, however, can only be divided by themselves and one; they cannot be evenly divided into smaller integers. Cicadas that emerge at prime-numbered year intervals, like the seventeen-year Brood II set to swarm the East Coast, would find themselves relatively immune to predator population cycles, since it is mathematically unlikely for a short-cycled predator to exist on the same cycle. In Gould’s example, a cicada that emerges every seventeen years and has a predator with a five-year life cycle will only face a peak predator population once every eighty-five (5 x 17) years, giving it an enormous advantage over less well-adapted cicadas.
We were interviewed by the excellent Jessica Love for Kellogg Insight. Its about 12 minutes long. Here’s one bit I liked:
We go around in our lives and we collect information about what we should do, what we should believe and really all that matters after we collect that information is the beliefs that we derive from them and its hard to keep track of all the things we learn in our lives and most of them are irrelevant once we have accounted for them in our beliefs, the particular pieces of information we can forget as long as we remember what beliefs we should have. And so a lot of times what we are left with after this is done are beliefs that we feel very strongly about and someone comes and interrogates us about what’s the basis of our beliefs and we can’t really explain it and we probably can’t convince them and they say, well you have these irrational beliefs. But its really just an optimization that we’re doing, collecting information, forming our beliefs and then saving our precious memory by discarding all of the details.
I wish I could formalize that.
When you over-inflate a kid’s self-esteem you achieve a short-run gain (boost in confidence) at the expense of a long-run cost (jaded kids who learn that praise is just noise.) For that reason, emphasis on managing self-esteem gets a lot of scorn.
But what is the cost of jaded kids? They learn to see through your lies. All that means is that their credence is a scarce resource that parents must manage. In a first-best world you are honest with your kids right up until the stage in their lives when a false boost of self-confidence has maximal payoff. Probably when they are taking the SAT.
Unfortunately it’s not a first-best world: even if you don’t lie to them, other people will and eventually they will learn to be appropriately skeptical. Which means that a child’s trust is an exogenously depreciating resource. It’s just a matter of time before they are relieved of it.
Given the inevitability of that process you have two alternatives. Deplete their credence yourself and choose what lies they get told in the process, or be always truthful and allow their trust to be violated by outside forces.
Doing it yourself at least gives them the admittedly transient benefit that comes from an artificial boost of self-confidence. And the sooner the better.
Why does it seem like the other queue is more often moving faster than yours? Here’s MindHacks:
So here we have a mechanism which might explain my queuing woes. The other lanes or queues moving faster is one salient event, and my intuition wrongly associates it with the most salient thing in my environment – me. What, after all, is more important to my world than me. Which brings me back to the universe-victim theory. When my lane is moving along I’m focusing on where I’m going, ignoring the traffic I’m overtaking. When my lane is stuck I’m thinking about me and my hard luck, looking at the other lane. No wonder the association between me and being overtaken sticks in memory more.
Which is one theory. But how about this theory: because it is in fact more often moving faster than yours. It’s true by definition because out of the total time in your life you spent in queues, the time spent in the slow queues is necessarily longer than the time spent in the fast queues.
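A toy simulation of that accounting (exponential waiting times are an arbitrary modeling choice, not anything claimed in the post) makes the time-weighting visible:

```python
import random

random.seed(0)
slower_time = faster_time = 0.0

for _ in range(100_000):
    mine = random.expovariate(1.0)     # how long my queue takes on this visit
    other = random.expovariate(1.0)    # how long the adjacent queue takes
    if mine > other:
        slower_time += mine            # minutes spent while mine is the slow queue
    else:
        faster_time += mine

# Each visit is the slow one with probability 1/2, but slow visits last longer,
# so well over half of all queueing time is spent in the slower queue.
print(slower_time / (slower_time + faster_time))
```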
Buy tickets starting at 10AM at NUSports.com for the games against Ohio State on Oct 5 and Michigan on Nov 16. We have added bidding this season: you may submit a bid below the current auction price and you will receive tickets if and when the price falls to your bid level. Here is an older post about Purple Pricing with some more information.
Its called Chhota Pegs and so far has been mostly a chronicle of a visit to Calcutta (his hometown?) where food seems to be the star attraction:
Now this is the hard part. The damn thing must cook above and below but it is thick, and hard to turn over. On the other hand if you don’t turn it over the potatoes will burn. So what you do is cook it for a while (covered if needed) until you can move the pan and have the entire omelette wobble in it. The top will still be uncooked (if it is cooked, I’m guessing the bottom is burnt).
Then (and here you must take a large swig of what’s left of the Bloody Mary / Laphroaig and if you are married and male, remove spouse from kitchen) cover the pan with the snug plate, put on the oven mitts, and turn the whole contraption over until the omelette is on the plate. Or at least, try turning it.
Do not forget the oven mitts as you will have to grab the bottom of the pan. Exhortations such as Allah ho Akbar, Joy Ma Kali or milder (Hare Krishna!) or even secular variants, such as Bande Mataram, are useful here. Indeed, I encourage them.
With a sprinkling of Peter Hammond:
Thanks to Diego Garcia (uninhabited except temporarily by various U.K. and U.S. military personnel) and to Pitcairn (population now about 50), the British Empire appears safe from sunsets for the time being. (Both these territories have websites, by the way, though that for Diego Garcia is maintained by the U.S. Navy at www.nctsdg.navy.mil.) But the sun will be getting very low over the British Empire at around 01:40 GMT in late June each year….
Also, it seems that the sun could finally set over the British Empire if the sea level were to rise high enough because of global warming. It turns out that Diego Garcia has a mean elevation of only 4 feet above sea level, and a maximum elevation of only 22 feet. Perhaps the U.S. Navy will erect dikes around their strategically located communication facilities…
And Paul Dirac:
“[W]e have an economic system which tries to maintain an equality of value between two things, which it would be better to recognize from the beginning as of unequal value. These two things are the receipt of a single payment (say 100 crowns) and the receipt of a regular income (say 3 crowns a year) all through eternity…May I ask you to trace out for yourselves how all the obscurities become clear, if one assumes from the beginning that a regular income is worth incomparably more, in fact infinitely more, in the mathematical sense, than any single payment?” (From Dirac’s biography, The Strangest Man, by Graham Farmelo. Highly recommended btw.)
Coming from a physics genius, this is quite stunning in its stupidity. The most charitable thing I can say about the bloke is that he certainly wasn’t a hyperbolic discounter. (Never mind.) I find particularly telling the following observations: (a) how winning the Nobel prize appears to confer intellectual “rights” over other disciplines that one just doesn’t have the ability to exercise, but more importantly (b) how fundamentally “intuition” differs from field to field, so that a genius in one area can be a blithering idiot in another.
This promises to be a good one. I have already put it in my Safari Top Sites.
Homemade ravioli with truffles, Rovinj, Istria, Croatia.
Martin Osborne, the first Managing Editor of Theoretical Economics has just completed his term and handed the reins to George Mailath. Martin is the last of the original co-editors to complete his final term and on the (private) TE Editorial Message Board he offers this reflection on the making of the journal.
TE has been—and continues to be—very much a collaborative project. With a few days to go before my term as editor ends, I want to acknowledge the contributors and outline their role in the development of the journal for those of you who may not be aware of the journal’s history.
TE emerged from a project initiated by Manfredi La Manna. In the late 1990s (or maybe the early 2000s—I have been unable to find a record), Manfredi proposed starting low-cost economics journals to compete one-on-one with a long list of high-priced Elsevier offerings. His plans were extremely (even absurdly) ambitious, and although many economists signaled their support, none was willing to commit to work on the project. None, that is, until Manfredi found David Levine and George Mailath. Manfredi had been looking for someone to act as an editor of a journal he hoped would compete with JET. As I understand it, David and George suggested instead that they form an Executive Board with the aim of searching for an editor. David and George soon recruited Drew Fudenberg, Patrick Bolton, and Ariel Rubinstein (with whom Manfredi had been in touch previously) to join the Board.
In October 2002, after no doubt getting turned down by their top picks, they asked me if I would become the editor. We quickly recruited Bart Lipman, Narayana Kocherlakota, and Georg Nöldeke to serve as coeditors, and for two years worked with Manfredi on the “Review of Economic Theory”. For a variety of reasons, we parted ways with Manfredi in mid-2004 and started designing and setting up an Open Access journal. The team initially consisted of Patrick Bolton, Drew Fudenberg, David Levine, George Mailath, Bart Lipman, Georg Nöldeke, and myself, and was soon augmented by Ted Bergstrom, Jeff Ely, Preston McAfee, and Ariel Rubinstein (who re-joined the group, having left the La Manna project before the rest of us). A huge amount of work was involved; everyone played an important role.
One of the major tasks in starting TE was setting up a nonprofit corporation to run it. This task was handled by Bart and David. David had previously set up such a corporation to run his “Not A Journal”, but even with the benefit of that experience, a huge amount of work was involved. In Bart’s role as Treasurer, he also conducted the tricky negotiations necessary to get a credit card authorizer to deal with us. Eventually he found a company that specialized in small operations. Although that company appeared a bit disorganized—at some point it has us down as “Theological Economics”—it served us flawlessly during our pre-ES days.
Another major task was soliciting papers. That involved finding papers, reading them, discussing them, and selecting the ones that were potentially publishable. And then, usually, finding out that the authors had submitted them to Econometrica. Bart, Jeff, and Drew were particularly active in evaluating papers. Collectively, we read over 250 papers; Bart alone posted comments on more than 200 of them.
A final task that proved to be especially difficult was the choice of a name. The list we considered was long; it was very hard to find a reasonable name that was not close to the name of another journal or book series and also had an acronym not close to the acronym of another journal. Dozens of creative titles were proposed. Three of the more colorful were Theoretical Economics Arsenal (acronym TEA, proposed by Jeff Ely), Ecotheoretica, and Ekonomiko (both proposed by Preston McAfee; Ekonomiko is Esperanto for economics). One thousand three hundred and thirty messages after I opened a thread to decide on a name, we voted for “Theoretical Economics” (which, incidentally, was one of the two options I suggested in my original message).
We were extraordinarily fortunate that just as the journal was starting, a new version of the Open Journal Systems software became (freely) available. This superb system allowed us to automate almost all the the “administrative” tasks of running a journal (tasks that, I might add, are still performed manually at many other journals). You may think that software quality is an incidental issue that has little bearing on the activities of a journal. I disagree. In fact, because it obviates the need for an editorial assistant, superb software like the OJS system allows Open Access journals to be financially viable—even ones without a rich aunt like the Econometric Society.
Once we started publishing, the generosity of another group of people came into play—authors. We owe the success of the journal in no small part to the generosity of the authors who submitted to TE papers that could have been published in Econometrica. Submitting a paper to TE now is, I hope, a natural step for the author of a outstanding paper. But despite the commitment of the coeditors that the project would succeed, submitting a paper to TE in 2005 entailed some risk, and we are certainly grateful to the authors who took that risk in order to support Open Access. Submitting a paper to TE in 2004, when we were not yet certain we would go ahead, entailed a much greater risk; we were certainly encouraged that Bill Zame was willing to take that step.
A key determinant of the reputation that the journal has earned is the quality of its editorial work. The decision letters written by the first group of coeditors—Jeff Ely, Ed Green, and Bart Lipman, joined by Debraj Ray in 2008—were better than any others I have seen. Even rejected papers received very close attention. The efforts of this initial group of brilliant coeditors were critical in establishing a reputation for the journal.
One measure of the extent to which the initial group worked together is the number of postings on this Message Board. Between 2004 and July 2009 (when the journal was taken over by the Econometric Society), the coeditors (Jeff Ely, Ed Green, Bart Lipman, Debraj Ray, and myself) and the other members of the Executive Board (Ted Bergstrom, Drew Fudenberg, David Levine, George Mailath, Preston McAfee, Ariel Rubinstein, and Joel Sobel) posted over 5,000 messages here.
Finally, the coeditors who have served since TE’s takeover by the Econometric Society—Gadi Barlevy, Faruk Gul, Johannes Hörner, and Nicola Persico—have upheld TE’s standards of editorial excellence with enviable energy, and our outstanding Associate Editors have provided us with high-quality evaluations within timeframes matched by few other journals.
It has been a great pleasure to coordinate this very hard-working group, which has transformed TE from an idea to the success it is today. I am delighted to hand over my role to George Mailath, who will surely lead the journal to new heights!
Martin Osborne was a truly outstanding Editor. He vastly understates his own role in building the editorial software that makes the journal run so smoothly. In my opinion the open source software that we use for free and that Martin painstakingly customized is far superior to the commercial systems used by all of the major journals in economics. Do not believe it when it is said that open access journals can only survive on large fees by authors. TE is a top-class journal and it is incredibly cheap to run. Do not believe it when it is said that a new open access journal faces a chicken and egg problem. The people behind TE believed in it and made it happen. People wishing to start open access journals would do well to copy what TE did.
(photograph taken by Ariel Rubinstein. More photos here.)
I just had one of my worst travel experiences. On United.
I was flying with my two kids and we got to O’Hare at 9 am in plenty of time for our 10.30 am flight to Seattle. The plane was delayed for one hour initially but then, after the airplane arrived, it turned out there was some malfunction so we had to wait for another plane. That one was due to leave at 2 pm.
My kids are pretty good but they were getting a bit restless so I decided to let them pick a treat for every delay. They opted to have lunch at Wolfgang Puck’s in the other of the two United terminals. They got to veto Frontera Fresca. So far so good.
The next bit of news – easy to forecast – further delay till 2.25. Peanut M&Ms. But then things got interesting. The pilots on the incoming flight had timed out given the additional 25 minute delay and we had to wait for new pilots to turn up. Ice cream for the kids. But no-one was insuring me so I was getting more and more pissed off. This pilot time out was news to me but surely eminently foreseeable for United? We left at 4.30 pm. Kids were on a sugar high and I was on a United low.
As far as I know. Anyway I always assumed that the Ely Lecture at the AEA meetings was named after me.
But changing the subject, Adriana Lleras-Muney writes to me:
From Henry Miller
“To be intelligent may be a boon, but to be completely trusting, gullible to the point of idiocy, to surrender without reservation is of of the supreme joys of life”
Agree?
I think Henry Miller is confusing correlation with causation. Its probably true that in our happiest moments (among those moments we are with other people–I might even dispute that those moments are the happiest unconditionally) we are trusting, gullible and idiotically surrendering. But that’s likely because we are with a certain person and in a certain blissful state that we respond by surrendering. Its the person and the state that brings us the supreme joy and our surrender is just a symptom of that joy. I might go so far as to say that the surrender is a complementary good but its enough to think about surrendering to the very next person who knocks on your office door to convince you that the surrender is not itself the source of joy.
Would it be possible to make a statistical model of a jazz solo and use it to create new ones? Take a standard, and let’s focus on the saxaphone, say. Go to the solo and estimate a Markov transition kernel which tells you the probability distribution over the next tone conditional on the previous tone. In particular you want the joint probability distribution over the following note (or just the interval) and the note’s value (eighth, quarter, etc.) Feed it tons and tons of recordings of sax solos for the same tune (that’s why you want a standard.)
Once you have estimated your kernel, simulate it. Will it be music? How much of an improvement do you get if the state variable is the last two notes instead of just one? If your state variable is the last n notes, at what n are improvements no longer noticeable?
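A minimal sketch of the estimate-then-simulate idea (all names and the toy corpus below are made up; it treats a solo as a plain list of pitch strings and fits a first-order chain, ignoring note durations for brevity):

```python
import random
from collections import defaultdict

def fit_markov(solos):
    """Count note-to-note transitions across a list of solos (lists of pitch names)."""
    counts = defaultdict(lambda: defaultdict(int))
    for solo in solos:
        for prev, nxt in zip(solo, solo[1:]):
            counts[prev][nxt] += 1
    return counts

def sample_solo(counts, start, length=32):
    """Generate a new 'solo' by walking the estimated transition kernel."""
    note, out = start, [start]
    for _ in range(length - 1):
        nxts = counts[note]
        if not nxts:                 # dead end: restart from the opening note
            note = start
        else:
            note = random.choices(list(nxts), weights=list(nxts.values()))[0]
        out.append(note)
    return out

# toy corpus standing in for transcribed sax solos on one standard
solos = [["C4", "E4", "G4", "E4", "C4"], ["C4", "E4", "A4", "G4", "E4"]]
kernel = fit_markov(solos)
print(sample_solo(kernel, "C4", length=12))
```

Using the last two notes as the state is the same code with pairs of pitches as dictionary keys.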
In my kids’ tennis class they are getting good enough to have actual rallies. The coach feeds them a ball and has them play out points. Each rally is worth 1 point and they play to 10. To stop them from trying to hit winners on the first shot and in attempt to get them to play longer rallies, the coaches tried out an interesting rule. “The ball must cross the net four times before the point begins. If your shot goes out before that, its 2 points for the other side.”
One form of mental accounting is where you give yourself separate budgets for things like food, entertainment, gas, etc. It’s suboptimal because these separate budgets make you less flexible in your consumption plans. For example in a month where there are many attractive entertainment offerings, you are unable to reallocate spending away from other goods in favor of entertainment.
But it could be understood as a second-best solution when you have memory limitations. Suppose that when you decide how much to spend on groceries, you often forget or even fail to think of how much you have been spending on gas this month. If so, then its not really possible to be as flexible as you would be in the first-best because there’s no way to reduce your grocery expenditures in tandem with the increased spending on gas.
That means that you should not increase your spending on gas. In other words you should stick to a fixed gas budget.
Now memory is associative, i.e. current experiences stimulate memories of related experiences. This can give some structure to the theory. It makes sense to have a budget for entertainment overall rather than separate budgets for movies and concerts because when you are thinking of one you are likely to recall your spending on the other. So the boundaries of budget categories should be determined by an optimal grouping of expenditures based on how closely associated they are in memory.
(Discussion with Asher Wolinsky and Simone Galperti)
The NRA successfully lobbied to stop gun control legislation. Several Democrats sided with Republicans to defeat it. But the NRA seems to have spent more than necessary to defeat the measures because they failed by more than a one-vote margin. It would have been enough to buy exactly the number of Senators necessary to prevent the bill from progressing through the Senate, no more than that.
But in fact the cost of defeating legislation is decreasing in the number of excess votes purchased. If the NRA has already secured enough votes to win, the next vote cannot be pivotal and so the Senator casting that vote takes less blame for the defeat. Indeed if enough Senators are bought so that the bill goes down by at least two votes, no Senator is pivotal.
Here’s a simple model. Suppose that the political cost of failing to pass gun control is c. If the NRA buys the minimum number of votes needed to halt the legislation it must pay c to each Senator it buys. That’s because each of those Senators could refuse to vote for the NRA and avoid the cost c. But if the NRA buys one extra vote, each Senator incurs the cost c whether or not he goes along with the NRA and his vote has just become cheaper by the amount c.
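To make the comparison explicit (notation added here, not in the original post): if $k$ votes are needed to block the bill, then

$$\text{cost of buying exactly } k \text{ votes} = k\,c, \qquad \text{cost of buying } k+1 \text{ votes} \approx (k+1)\cdot 0 = 0,$$

since with a spare vote no individual Senator is pivotal, each bears the blame $c$ however he votes, and so none needs to be compensated for it.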
For the Vapor Mill: What is the voting rule that maximizes the cost of defeating popular legislation?
Amnesty –forgiving all of the current and previous violators but renewing a threat to punish future violators– always seems like a reputation fail. If we are granting amnesty today then doesn’t that signal that we will eventually be granting amnesty again in the future?
But there is at least one environment in which a once-only amnesty is incentive compatible and effective: when crime has bandwagon effects. For example, suppose there’s a stash of candy in the pantry and my kids have taken to raiding it. I catch one red-handed but I can’t punish her because she rightly points out that since everybody’s doing it she assumed we were looking the other way. A culture of candy crime had taken hold.
An amnesty (bring me your private stash and you will be forgiven) moves us from the everyone’s a criminal because everyone’s a criminal equilibrium to the one in which nobody’s a criminal. The latter is potentially stable if its easier to single out and punish a lone offender than one of many.
|
2014-11-26 05:10:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4049654006958008, "perplexity": 2541.7483932419127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931005799.21/warc/CC-MAIN-20141125155645-00170-ip-10-235-23-156.ec2.internal.warc.gz"}
|
https://quantiki.org/wiki/no-cloning-theorem
|
# The no-cloning theorem
The **no cloning theorem** is a result of quantum mechanics which forbids the creation of identical copies of an arbitrary unknown quantum state. It was stated by Wootters, Zurek, and Dieks in 1982, and has profound implications in quantum computing and related fields. The theorem follows from the fact that all quantum operations must be unitary linear transformations on the state (and potentially an ancilla).

## Proof

Suppose the state of a quantum system A is a qubit, which we wish to copy. The state can be written (see bra-ket notation) as

$|\psi\rangle_A = a |0\rangle_A + b |1\rangle_A$.

The complex coefficients *a* and *b* are unknown to us. In order to make a copy, we take a system B with an identical Hilbert space and initial state $|e\rangle_B$ (which must be independent of $|\psi\rangle_A$, of which we have no prior knowledge). The composite system is then described by the tensor product, and its state is

$|\psi\rangle_A |e\rangle_B$.

There are only two ways to manipulate the composite system. We could perform an observation, which irreversibly collapses the system into some eigenstate of the observable, corrupting the information contained in the qubit. This is obviously not what we want. Alternatively, we could control the Hamiltonian of the system, and thus the time evolution operator U(Δt), which is linear. We must fix a time interval Δt, again independent of $|\psi\rangle_A$. Then U(Δt) acts as a copier provided

$U(\Delta t)\, |\psi\rangle_A |e\rangle_B = |\psi\rangle_A |\psi\rangle_B = \left(a |0\rangle_A + b |1\rangle_A\right)\left(a |0\rangle_B + b |1\rangle_B\right) = a^2 |0\rangle_A |0\rangle_B + a b |0\rangle_A |1\rangle_B + b a |1\rangle_A |0\rangle_B + b^2 |1\rangle_A |1\rangle_B$

for all ψ. This must then be true for the basis states as well, so

$U(\Delta t) |0\rangle_A |e\rangle_B = |0\rangle_A |0\rangle_B$
$U(\Delta t) |1\rangle_A |e\rangle_B = |1\rangle_A |1\rangle_B$.

Then the linearity of U(Δt) implies

$U(\Delta t) |\psi\rangle_A |e\rangle_B = U(\Delta t) \left(a |0\rangle_A + b |1\rangle_A\right)|e\rangle_B = a |0\rangle_A |0\rangle_B + b |1\rangle_A |1\rangle_B \ne a^2 |0\rangle_A |0\rangle_B + a b |0\rangle_A |1\rangle_B + b a |1\rangle_A |0\rangle_B + b^2 |1\rangle_A |1\rangle_B$.

Thus, $U(\Delta t) |\psi\rangle_A |e\rangle_B$ is generally not equal to $|\psi\rangle_A |\psi\rangle_B$, as may be verified by plugging in $a = b = 2^{-1/2}$, so U(Δt) cannot act as a general copier. Q.E.D.

## Consequences

The no cloning theorem prevents us from using classical error correction techniques on quantum states. For example, we cannot create backup copies of a state in the middle of a quantum computation, and use them to correct subsequent errors. Error correction is vital for practical quantum computing, and for some time this was thought to be a fatal limitation. In 1995, Shor and Steane revived the prospects of quantum computing by independently devising the first quantum error correcting codes, which circumvent the no cloning theorem.

In contrast, the no cloning theorem is a vital ingredient in quantum cryptography, as it forbids eavesdroppers from creating copies of a transmitted quantum cryptographic key. Fundamentally, the no-cloning theorem protects the uncertainty principle in quantum mechanics. If one could clone an *unknown* state, then one could make as many copies of it as one wished, and measure each dynamical variable with arbitrary precision, thereby bypassing the uncertainty principle. This is prevented by the no-cloning theorem.

More fundamentally, the no cloning theorem prevents superluminal communication via quantum entanglement. Consider the EPR thought experiment, and suppose quantum states could be cloned. Alice could send bits to Bob in the following way: if Alice wishes to transmit a "0", she measures the spin of her electron in the **z** direction, collapsing Bob's state to either $|z+\rangle_B$ or $|z-\rangle_B$. Bob creates many copies of his electron's state, and measures the spin of each copy in the **z** direction. Bob will know that Alice has transmitted a "0" if all his measurements produce the same result; otherwise, his measurements will be split evenly between +1/2 and -1/2. This would allow Alice and Bob to communicate across space-like separations, potentially violating causality.

## Imperfect cloning

Even though it is impossible to make perfect copies of an unknown quantum state, it is possible to produce imperfect copies. This can be done by coupling a larger auxiliary system to the system that is to be cloned, and applying a unitary transformation to the combined system. If the unitary transformation is chosen correctly, several components of the combined system will evolve into approximate copies of the original system. Imperfect cloning can be used as an eavesdropping attack on quantum cryptography protocols, among other uses in quantum information science.

## See also

* Quantum teleportation
* Quantum entanglement
* Quantum information
* Uncertainty principle
* Time travel

## References

* Wootters, W.K. and Zurek, W.H.: *A Single Quantum Cannot be Cloned*. Nature 299 (1982), pp. 802-803
* Dieks, D.: *Communication by EPR devices*. Physics Letters A, vol. 92(6) (1982), pp. 271-272
* Buzek, V. and Hillery, M.: "Quantum cloning". Physics World 14 (11) (2001), pp. 25-29
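As a quick numerical companion to the linearity argument in the Proof section above (my addition, not part of the original article), the sketch below implements the would-be copier on the basis states as a CNOT gate acting on $|\psi\rangle_A|0\rangle_B$ and checks that it fails on a superposition; taking $|e\rangle = |0\rangle$ and $a = b = 2^{-1/2}$ are assumptions made only for this example.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# CNOT copies the basis states when the target starts in |0>: |x>|0> -> |x>|x> for x in {0, 1}
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)        # "unknown" amplitudes of |psi>
psi = a * ket0 + b * ket1                    # |psi>_A
state_in = np.kron(psi, ket0)                # |psi>_A |e>_B with |e> = |0>
cloned = CNOT @ state_in                     # what the linear "copier" actually produces: a|00> + b|11>
target = np.kron(psi, psi)                   # |psi>_A |psi>_B, what perfect cloning would require

print(np.allclose(cloned, target))           # False: linearity forbids cloning the superposition
```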
|
2019-10-22 08:43:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 19, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8026511669158936, "perplexity": 818.3289344912043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987813307.73/warc/CC-MAIN-20191022081307-20191022104807-00487.warc.gz"}
|
http://math.stackexchange.com/questions/204920/how-to-prove-the-sum-of-2-linearly-independent-vectors-is-also-linearly-independ?answertab=oldest
|
# How to prove the sum of 2 linearly independent vectors is also linearly independent?
Suppose $a,b$ and $c$ are linearly independent vectors in a vector space $V$. How can I prove that $a+b$ or $b+c$ are also linearly independent?
Linearly independent w.r.t which vectors? Obviously not $a,b$ and $b,c$. – Jacob Sep 30 '12 at 16:31
I think what is intended is that you show that the set $\{a+b,b+c\}$ is a linearly independent set. – André Nicolas Sep 30 '12 at 16:32
Well, that's the problem: the question just gives a, b and c, with no actual values. – Nima Sep 30 '12 at 16:33
You don't need actual values-the problem is that $a+b$ and $b+c$ are just single vectors, and linear independence of one vector on its own is trivial. So as @AndréNicolas said a more reasonable question is to prove that $a+b$ and $b+c$ are linearly independent, as was proved below. – Kevin Carlson Sep 30 '12 at 16:36
Thanks a lot guys. – Nima Sep 30 '12 at 16:37
If I understand your question correctly, you want to show that if $a$, $b$, $c$ are linearly independent, then $a+b$ and $b+c$ are linearly independent.
Just look at the definitions.
You know that $x_1a+x_2b+x_3c=0$ implies $x_1=x_2=x_3=0$. (This is the definition of linear independence for three vectors.)
You ask whether $y_1(a+b)+y_2(b+c)=0$ implies $y_1=y_2=0$.
Just simplify this to get: $y_1 a + (y_1+y_2)b +y_2c=0$. This implies that $y_1=y_1+y_2=y_2=0$. The condition $y_1+y_2=0$ is redundant there, but we have shown that $y_1=y_2=0$.
This means that the vectors $a+b$, $b+c$ are linearly independent.
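For readers who want to double-check the algebra, here is a short sympy sketch (my addition, not part of the original thread) that works in coordinates with respect to the independent set $\{a, b, c\}$, where $a+b = (1,1,0)$ and $b+c = (0,1,1)$:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
# coordinates of y1*(a+b) + y2*(b+c) with respect to the linearly independent set {a, b, c}
coords = [y1, y1 + y2, y2]
solution = sp.solve([sp.Eq(expr, 0) for expr in coords], [y1, y2], dict=True)
print(solution)   # [{y1: 0, y2: 0}] -- only the trivial solution, so a+b and b+c are independent
```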
|
2014-07-26 01:12:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475569128990173, "perplexity": 170.94784463771853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894931.59/warc/CC-MAIN-20140722025814-00207-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/draw-triangle-abc-side-bc-7-cm-b-45-a-105-then-construct-triangle-whose-sides-are-4-3-times-corresponding-side-abc-give-justification-construction-division-of-a-line-segment_7190
|
# Draw a triangle ABC with side BC = 7 cm, ∠B = 45°, ∠A = 105°. Then, construct a triangle whose sides are 4/3 times the corresponding side of ΔABC. Give the justification of the construction. - Mathematics
Draw a triangle ABC with side BC = 7 cm, ∠B = 45°, ∠A = 105°. Then, construct a triangle whose sides are 4/3 times the corresponding side of ΔABC. Give the justification of the construction.
#### Solution
∠B = 45°, ∠A = 105°
Sum of all interior angles in a triangle is 180°.
∠A + ∠B + ∠C = 180°
105° + 45° + ∠C = 180°
∠C = 180° − 150°
∠C = 30°
The required triangle can be drawn as follows.
Step 1
Draw a ΔABC with side BC = 7 cm, ∠B = 45°, ∠C = 30°.
Step 2
Draw a ray BX making an acute angle with BC on the opposite side of vertex A.
Step 3
Locate 4 points (as 4 is greater in 4 and 3), B1, B2, B3, B4, on BX.
Step 4
Join B3C. Draw a line through B4 parallel to B3C intersecting extended BC at C'.
Step 5
Through C', draw a line parallel to AC intersecting the extended line segment BA at A'. ΔA'BC' is the required triangle.
Justification
The construction can be justified by proving that
A'B = 4/3 AB, BC' = 4/3 BC, A'C' = 4/3 AC
In ΔABC and ΔA'BC',
∠ABC = ∠A'BC' (Common)
∠ACB = ∠A'C'B (Corresponding angles)
∴ ΔABC ∼ ΔA'BC' (AA similarity criterion)
=> (AB)/(A'B) = (BC)/(BC') = (AC)/(A'C') ...(1)
In ΔBB3C and ΔBB4C',
∠B3BC = ∠B4BC' (Common)
∠BB3C = ∠BB4C' (Corresponding angles)
∴ ΔBB3C ∼ ΔBB4C' (AA similarity criterion)
=> (BC)/(BC') = (BB3)/(BB4)
=> (BC)/(BC') = 3/4 ...(2)
On comparing equations (1) and (2), we obtain
(AB)/(A'B) = (BC)/(BC')=(AC)/(A'C') = 3/4
=> A'B = 4/3 AB, BC' = 4/3 BC, A'C' = 4/3 AC
This justifies the construction.
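A coordinate computation can also be used to check the claimed ratios; the sketch below (an illustration added here, with coordinates chosen for convenience) places B at the origin and C at (7, 0), builds A from the given angles, scales about B by 4/3, and verifies that each side of ΔA'BC' is 4/3 of the corresponding side of ΔABC.

```python
import numpy as np

B = np.array([0.0, 0.0])
C = np.array([7.0, 0.0])
angle_B, angle_C, angle_A = np.radians(45), np.radians(30), np.radians(105)

# sine rule: BA / sin C = BC / sin A, and BA makes a 45 degree angle with BC at B
BA = 7 * np.sin(angle_C) / np.sin(angle_A)
A = B + BA * np.array([np.cos(angle_B), np.sin(angle_B)])

k = 4 / 3
A_p, C_p = B + k * (A - B), B + k * (C - B)        # scale triangle about B by 4/3

def side(P, Q): return np.linalg.norm(P - Q)

print(side(A_p, B) / side(A, B),
      side(C_p, B) / side(C, B),
      side(A_p, C_p) / side(A, C))                 # all three ratios print 1.333... = 4/3
```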
Concept: Division of a Line Segment
#### APPEARS IN
NCERT Class 10 Maths
Chapter 11 Constructions
Exercise 11.1 | Q 5 | Page 220
|
2021-11-30 14:46:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6381928324699402, "perplexity": 7434.256616400066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359037.96/warc/CC-MAIN-20211130141247-20211130171247-00455.warc.gz"}
|
http://www.like2do.com/learn?s=Stable_distributions
|
Stable Distributions
Probability density function: symmetric α-stable distributions with unit scale factor; skewed centered stable distributions with unit scale factor.
Cumulative distribution function: CDFs for symmetric α-stable distributions; CDFs for skewed centered stable distributions.
Parameters: α ∈ (0, 2] -- stability parameter; β ∈ [-1, 1] -- skewness parameter (note that skewness is undefined); c ∈ (0, ∞) -- scale parameter; μ ∈ (-∞, ∞) -- location parameter.
Support: x ∈ R, or x ∈ [μ, +∞) if α < 1 and β = 1, or x ∈ (-∞, μ] if α < 1 and β = -1.
PDF: not analytically expressible, except for some parameter values.
CDF: not analytically expressible, except for certain parameter values.
Mean: μ when α > 1, otherwise undefined.
Median: μ when β = 0, otherwise not analytically expressible.
Mode: μ when β = 0, otherwise not analytically expressible.
Variance: 2c² when α = 2, otherwise infinite.
Skewness: 0 when α = 2, otherwise undefined.
Excess kurtosis: 0 when α = 2, otherwise undefined.
Entropy: not analytically expressible, except for certain parameter values.
MGF: undefined.
Characteristic function: ${\displaystyle \exp \!{\Big [}\;it\mu -|c\,t|^{\alpha }\,(1-i\beta \operatorname {sgn}(t)\Phi )\;{\Big ]},}$ where ${\displaystyle \Phi ={\begin{cases}\tan {\tfrac {\pi \alpha }{2}}&{\text{if }}\alpha \neq 1\\-{\tfrac {2}{\pi }}\log |t|&{\text{if }}\alpha =1\end{cases}}}$
In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stable if its distribution is stable. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it.[1][2]
Of the four parameters defining the family, most attention has been focused on the stability parameter, α (see panel). Stable distributions have 0 < α ≤ 2, with the upper bound corresponding to the normal distribution, and α = 1 to the Cauchy distribution. The distributions have undefined variance for α < 2, and undefined mean for α ≤ 1. The importance of stable probability distributions is that they are attractors for properly normed sums of independent and identically distributed (iid) random variables. The normal distribution defines a family of stable distributions. By the classical central limit theorem the properly normed sum of a set of random variables, each with finite variance, will tend towards a normal distribution as the number of variables increases. Without the finite variance assumption, the limit may be a stable distribution that is not normal. Mandelbrot referred to such distributions as "stable Paretian distributions",[3][4][5] after Vilfredo Pareto. In particular, he referred to those maximally skewed in the positive direction with 1 < α < 2 as "Pareto-Lévy distributions",[1] which he regarded as better descriptions of stock and commodity prices than normal distributions.[6]
## Definition
A non-degenerate distribution is a stable distribution if it satisfies the following property:
Let X1 and X2 be independent copies of a random variable X. Then X is said to be stable if for any constants a > 0 and b > 0 the random variable aX1 + bX2 has the same distribution as cX + d for some constants c > 0 and d. The distribution is said to be strictly stable if this holds with d = 0.[7]
Since the normal distribution, the Cauchy distribution, and the Lévy distribution all have the above property, it follows that they are special cases of stable distributions.
Such distributions form a four-parameter family of continuous probability distributions parametrized by location and scale parameters μ and c, respectively, and two shape parameters α and β, roughly corresponding to measures of asymmetry and concentration, respectively (see the figures).
Although the probability density function for a general stable distribution cannot be written analytically, the general characteristic function can be. Any probability distribution is given by the Fourier transform of its characteristic function φ(t) by:
${\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\varphi (t)e^{-ixt}\,dt}$
A random variable X is called stable if its characteristic function can be written as[7][8]
${\displaystyle \varphi (t;\alpha ,\beta ,c,\mu )=\exp \left(it\mu -|ct|^{\alpha }\left(1-i\beta \operatorname {sgn}(t)\Phi \right)\right)}$
where sgn(t) is just the sign of t and
${\displaystyle \Phi ={\begin{cases}\tan \left({\frac {\pi \alpha }{2}}\right)&\alpha \neq 1\\-{\frac {2}{\pi }}\log |t|&\alpha =1\end{cases}}}$
μ ∈ R is a shift parameter, β ∈ [-1, 1], called the skewness parameter, is a measure of asymmetry. Notice that in this context the usual skewness is not well defined, as for α < 2 the distribution does not admit 2nd or higher moments, and the usual skewness definition is the 3rd central moment.
The reason this gives a stable distribution is that the characteristic function for the sum of two random variables equals the product of the two corresponding characteristic functions. Adding two random variables from a stable distribution gives something with the same values of α and β, but possibly different values of μ and c.
Not every function is the characteristic function of a legitimate probability distribution (that is, one whose cumulative distribution function is real and goes from 0 to 1 without decreasing), but the characteristic functions given above will be legitimate so long as the parameters are in their ranges. The value of the characteristic function at some value t is the complex conjugate of its value at -t as it should be so that the probability distribution function will be real.
In the simplest case β = 0, the characteristic function is just a stretched exponential function; the distribution is symmetric about μ and is referred to as a (Lévy) symmetric alpha-stable distribution, often abbreviated SαS.
When α < 1 and β = 1, the distribution is supported by [μ, ∞).
The parameter c > 0 is a scale factor which is a measure of the width of the distribution while α is the exponent or index of the distribution and specifies the asymptotic behavior of the distribution.
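The characteristic function defined above can be transcribed directly into code; the following numpy sketch (an illustration, with arbitrary parameter values as assumptions) evaluates φ(t; α, β, c, μ) in the first parametrization.

```python
import numpy as np

def stable_cf(t, alpha, beta, c, mu):
    t = np.asarray(t, dtype=float)
    if alpha != 1:
        Phi = np.tan(np.pi * alpha / 2)
    else:
        Phi = -(2 / np.pi) * np.log(np.abs(t))     # the alpha = 1 branch of the formula above
    return np.exp(1j * t * mu - np.abs(c * t) ** alpha * (1 - 1j * beta * np.sign(t) * Phi))

t = np.linspace(-5.0, 5.0, 5)
print(stable_cf(t, alpha=1.5, beta=0.5, c=1.0, mu=0.0))
```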
### Parametrizations
The above definition is only one of the parametrizations in use for stable distributions; it is the most common but is not continuous in the parameters at α = 1.
A continuous parametrization is[7]
${\displaystyle \varphi (t;\alpha ,\beta ,\gamma ,\delta )=\exp \left(it\delta -|\gamma t|^{\alpha }\left(1-i\beta \operatorname {sgn}(t)\Phi \right)\right)}$
where:
${\displaystyle \Phi ={\begin{cases}\left(|\gamma t|^{1-\alpha }-1\right)\tan \left({\tfrac {\pi \alpha }{2}}\right)&\alpha \neq 1\\-{\frac {2}{\pi }}\log |\gamma t|&\alpha =1\end{cases}}}$
The ranges of α and β are the same as before, γ (like c) should be positive, and δ (like μ) should be real.
In either parametrization one can make a linear transformation of the random variable to get a random variable whose density is ${\displaystyle f(y;\alpha ,\beta ,1,0)}$. In the first parametrization, this is done by defining the new variable:
${\displaystyle y={\begin{cases}{\frac {x-\mu }{\gamma }}&\alpha \neq 1\\{\frac {x-\mu }{\gamma }}-\beta {\frac {2}{\pi }}\ln \gamma &\alpha =1\end{cases}}}$
For the second parametrization, we simply use
${\displaystyle y={\frac {x-\delta }{\gamma }}.}$
no matter what α is. In the first parametrization, if the mean exists (that is, α > 1) then it is equal to μ, whereas in the second parametrization when the mean exists it is equal to ${\displaystyle \delta -\beta \gamma \tan \left({\tfrac {\pi \alpha }{2}}\right).}$
### The distribution
A stable distribution is therefore specified by the above four parameters. It can be shown that any non-degenerate stable distribution has a smooth (infinitely differentiable) density function.[7] If ${\displaystyle f(x;\alpha ,\beta ,c,\mu )}$ denotes the density of X and Y is the sum of independent copies of X:
${\displaystyle Y=\sum _{i=1}^{N}k_{i}(X_{i}-\mu )\,}$
then Y has the density ${\displaystyle s^{-1}f(y/s;\alpha ,\beta ,c,0)}$ with
${\displaystyle s=\left(\sum _{i=1}^{N}|k_{i}|^{\alpha }\right)^{\frac {1}{\alpha }}.}$
The asymptotic behavior is described, for α < 2, by:[7]
${\displaystyle f(x)\sim {\frac {1}{|x|^{1+\alpha }}}\left(c^{\alpha }(1+\operatorname {sgn}(x)\beta )\sin \left({\frac {\pi \alpha }{2}}\right){\frac {\Gamma (\alpha +1)}{\pi }}\right)}$
where Γ is the Gamma function (except that when α < 1 and β = ±1, the tail vanishes to the left or right, resp., of μ). This "heavy tail" behavior causes the variance of stable distributions to be infinite for all α < 2. This property is illustrated in the log-log plots below.
When α = 2, the distribution is Gaussian (see below), with tails asymptotic to exp(-x²/(4c²))/(2c√π).
## Properties
Stable distributions are closed under convolution for a fixed value of α. Since convolution is equivalent to multiplication of the Fourier-transformed function, it follows that the product of two stable characteristic functions with the same α will yield another such characteristic function. The product of two stable characteristic functions is given by:
${\displaystyle \exp \left(it\mu _{1}+it\mu _{2}-|c_{1}t|^{\alpha }-|c_{2}t|^{\alpha }+i\beta _{1}|c_{1}t|^{\alpha }\operatorname {sgn}(t)\Phi +i\beta _{2}|c_{2}t|^{\alpha }\operatorname {sgn}(t)\Phi \right)}$
Since Φ is not a function of the μ, c or β variables it follows that these parameters for the convolved function are given by:
{\displaystyle {\begin{aligned}\mu &=\mu _{1}+\mu _{2}\\|c|&=\left(|c_{1}|^{\alpha }+|c_{2}|^{\alpha }\right)^{\frac {1}{\alpha }}\\[6pt]\beta &={\frac {\beta _{1}|c_{1}|^{\alpha }+\beta _{2}|c_{2}|^{\alpha }}{|c_{1}|^{\alpha }+|c_{2}|^{\alpha }}}\end{aligned}}}
In each case, it can be shown that the resulting parameters lie within the required intervals for a stable distribution.
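These closure formulas can be checked numerically; the sketch below (an illustration, with arbitrary parameter values as assumptions) multiplies two characteristic functions with the same α and compares the product against the characteristic function with the combined parameters.

```python
import numpy as np

def stable_cf(t, alpha, beta, c, mu):
    Phi = np.tan(np.pi * alpha / 2) if alpha != 1 else -(2 / np.pi) * np.log(np.abs(t))
    return np.exp(1j * t * mu - np.abs(c * t) ** alpha * (1 - 1j * beta * np.sign(t) * Phi))

t = np.linspace(-4.0, 4.0, 401)
alpha = 1.5
(beta1, c1, mu1), (beta2, c2, mu2) = (0.3, 1.0, -1.0), (-0.6, 2.0, 0.5)

# combined parameters predicted by the formulas above
mu = mu1 + mu2
c = (c1 ** alpha + c2 ** alpha) ** (1 / alpha)
beta = (beta1 * c1 ** alpha + beta2 * c2 ** alpha) / (c1 ** alpha + c2 ** alpha)

lhs = stable_cf(t, alpha, beta1, c1, mu1) * stable_cf(t, alpha, beta2, c2, mu2)
rhs = stable_cf(t, alpha, beta, c, mu)
print(np.allclose(lhs, rhs))   # True: the product is again a stable characteristic function
```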
## A generalized central limit theorem
Another important property of stable distributions is the role that they play in a generalized central limit theorem. The central limit theorem states that the sum of a number of independent and identically distributed (i.i.d.) random variables with finite non-zero variances will tend to a normal distribution as the number of variables grows.
A generalization due to Gnedenko and Kolmogorov states that the sum of a number of random variables with symmetric distributions having power-law tails (Paretian tails), decreasing as ${\displaystyle |x|^{-\alpha -1}}$ where ${\displaystyle 0<\alpha \leqslant 2}$ (and therefore having infinite variance), will tend to a stable distribution ${\displaystyle f(x;\alpha ,0,c,0)}$ as the number of summands grows.[9] If ${\displaystyle \alpha >2}$ then the sum converges to a stable distribution with stability parameter equal to 2, i.e. a Gaussian distribution.[10]
There are other possibilities as well. For example, if the characteristic function of the random variable is asymptotic to ${\displaystyle 1+a|t|^{\alpha }\ln |t|}$ for small t (positive or negative), then we may ask how t varies with n when the value of the characteristic function for the sum of n such random variables equals a given value u:
${\displaystyle \varphi _{\text{sum}}=\varphi ^{n}=u}$
Assuming for the moment that t → 0, we take the limit of the above as n → ∞:
${\displaystyle \ln u=\lim _{n\to \infty }n\ln \varphi =\lim _{n\to \infty }na|t|^{\alpha }\ln |t|.}$
Therefore:
{\displaystyle {\begin{aligned}\ln(\ln u)&=\ln \left(\lim _{n\to \infty }na|t|^{\alpha }\ln |t|\right)\\[5pt]&=\lim _{n\to \infty }\ln \left(na|t|^{\alpha }\ln |t|\right)=\lim _{n\to \infty }\left\{\ln(na)+\alpha \ln |t|+\ln(\ln |t|)\right\}\end{aligned}}}
This shows that ${\displaystyle \ln |t|}$ is asymptotic to ${\displaystyle {\tfrac {-1}{\alpha }}\ln n,}$ so using the previous equation we have
${\displaystyle |t|\sim \left({\frac {-\alpha \ln u}{na\ln n}}\right)^{1/\alpha }.}$
This implies that the sum divided by
${\displaystyle \left({\frac {na\ln n}{\alpha }}\right)^{\frac {1}{\alpha }}}$
has a characteristic function whose value at some t′ goes to u (as n increases) when ${\displaystyle t'=(-\ln u)^{\frac {1}{\alpha }}.}$ In other words, the characteristic function converges pointwise to ${\displaystyle \exp(-(t')^{\alpha })}$ and therefore by Lévy's continuity theorem the sum divided by
${\displaystyle \left({\frac {na\ln n}{\alpha }}\right)^{\frac {1}{\alpha }}}$
converges in distribution to the symmetric alpha-stable distribution with stability parameter ${\displaystyle \alpha }$ and scale parameter 1.
This can be applied to a random variable whose tails decrease as ${\displaystyle |x|^{-3}}$. This random variable has a mean but the variance is infinite. Let us take the following distribution:
${\displaystyle f(x)={\begin{cases}{\frac {1}{3}}&|x|\leqslant 1\\{\frac {1}{3}}x^{-3}&|x|>1\end{cases}}}$
We can write this as
${\displaystyle f(x)=\int _{1}^{\infty }{\frac {2}{w^{4}}}h\left({\frac {x}{w}}\right)dw}$
where
${\displaystyle h\left({\frac {x}{w}}\right)={\begin{cases}{\frac {1}{2}}&\left|{\frac {x}{w}}\right|<1,\\0&\left|{\frac {x}{w}}\right|>1.\end{cases}}}$
We want to find the leading terms of the asymptotic expansion of the characteristic function. The characteristic function of the probability distribution ${\displaystyle {\tfrac {1}{w}}h\left({\tfrac {x}{w}}\right)}$ is ${\displaystyle {\tfrac {\sin(tw)}{tw}},}$ so the characteristic function for f(x) is
${\displaystyle \varphi (t)=\int _{1}^{\infty }{\frac {2\sin(tw)}{tw^{4}}}dw}$
and we can calculate:
{\displaystyle {\begin{aligned}\varphi (t)-1&=\int _{1}^{\infty }{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1\right]\,dw\\&=\int _{1}^{\frac {1}{|t|}}{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1\right]\,dw+\int _{\frac {1}{|t|}}^{\infty }{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1\right]\,dw\\&=\int _{1}^{\frac {1}{|t|}}{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1+\left\{-{\frac {t^{2}w^{2}}{3!}}+{\frac {t^{2}w^{2}}{3!}}\right\}\right]\,dw+\int _{\frac {1}{|t|}}^{\infty }{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1\right]\,dw\\&=\int _{1}^{\frac {1}{|t|}}-{\frac {t^{2}dw}{3w}}+\int _{1}^{\frac {1}{|t|}}{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1+{\frac {t^{2}w^{2}}{3!}}\right]dw+\int _{\frac {1}{|t|}}^{\infty }{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1\right]dw\\&=\int _{1}^{\frac {1}{|t|}}-{\frac {t^{2}dw}{3w}}+\left\{\int _{0}^{\frac {1}{|t|}}{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1+{\frac {t^{2}w^{2}}{3!}}\right]dw-\int _{0}^{1}{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1+{\frac {t^{2}w^{2}}{3!}}\right]dw\right\}+\int _{\frac {1}{|t|}}^{\infty }{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1\right]dw\\&=\int _{1}^{\frac {1}{|t|}}-{\frac {t^{2}dw}{3w}}+t^{2}\int _{0}^{1}{\frac {2}{y^{3}}}\left[{\frac {\sin(y)}{y}}-1+{\frac {y^{2}}{6}}\right]dy-\int _{0}^{1}{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1+{\frac {t^{2}w^{2}}{6}}\right]dw+t^{2}\int _{1}^{\infty }{\frac {2}{y^{3}}}\left[{\frac {\sin(y)}{y}}-1\right]dy\\&=-{\frac {t^{2}}{3}}\int _{1}^{\frac {1}{|t|}}{\frac {dw}{w}}+t^{2}C_{1}-\int _{0}^{1}{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1+{\frac {t^{2}w^{2}}{6}}\right]dw+t^{2}C_{2}\\&={\frac {t^{2}}{3}}\ln |t|+t^{2}C_{3}-\int _{0}^{1}{\frac {2}{w^{3}}}\left[{\frac {\sin(tw)}{tw}}-1+{\frac {t^{2}w^{2}}{6}}\right]dw\\&={\frac {t^{2}}{3}}\ln |t|+t^{2}C_{3}-\int _{0}^{1}{\frac {2}{w^{3}}}\left[{\frac {t^{4}w^{4}}{5!}}+\cdots \right]dw\\&={\frac {t^{2}}{3}}\ln |t|+t^{2}C_{3}-{\mathcal {O}}\left(t^{4}\right)\end{aligned}}}
where ${\displaystyle C_{1},C_{2}}$ and ${\displaystyle C_{3}}$ are constants. Therefore,
${\displaystyle \varphi (t)\sim 1+{\frac {t^{2}}{3}}\ln |t|}$
and according to what was said above (and the fact that the variance of f(x;2,0,1,0) is 2), the sum of n instances of this random variable, divided by ${\displaystyle {\sqrt {n(\ln n)/12}},}$ will converge in distribution to a Gaussian distribution with variance 1. But the variance at any particular n will still be infinite. Note that the width of the limiting distribution grows faster than in the case where the random variable has a finite variance (in which case the width grows as the square root of n). The average, obtained by dividing the sum by n, tends toward a Gaussian whose width approaches zero as n increases, in accordance with the Law of large numbers.
## Special cases
Log-log plot of symmetric centered stable distribution PDF's showing the power law behavior for large x. The power law behavior is evidenced by the straight-line appearance of the PDF for large x, with the slope equal to -(α+1). (The only exception is for α = 2, in black, which is a normal distribution.)
Log-log plot of skewed centered stable distribution PDF's showing the power law behavior for large x. Again the slope of the linear portions is equal to -(α+1)
There is no general analytic solution for the form of p(x). There are, however, three special cases which can be expressed in terms of elementary functions as can be seen by inspection of the characteristic function:[7][8][11]
• For α = 2 the distribution reduces to a Gaussian distribution with variance σ² = 2c² and mean μ; the skewness parameter β has no effect.
• For α = 1 and β = 0 the distribution reduces to a Cauchy distribution with scale parameter c and shift parameter μ.
• For α = 1/2 and β = 1 the distribution reduces to a Lévy distribution with scale parameter c and shift parameter μ.
Note that the above three distributions are also connected, in the following way: A standard Cauchy random variable can be viewed as a mixture of Gaussian random variables (all with mean zero), with the variance being drawn from a standard Lévy distribution. And in fact this is a special case of a more general theorem [12] which allows any symmetric alpha-stable distribution to be viewed in this way (with the alpha parameter of the mixture distribution equal to twice the alpha parameter of the mixing distribution--and the beta parameter of the mixing distribution always equal to one).
A general closed form expression for stable PDF's with rational values of α is available in terms of Meijer G-functions.[13] Fox H-Functions can also be used to express the stable probability density functions. For simple rational numbers, the closed form expression is often in terms of less complicated special functions. Several closed form expressions having rather simple expressions in terms of special functions are available. In the table below, PDF's expressible by elementary functions are indicated by an E and those that are expressible by special functions are indicated by an s.[12]
α:     1/3  1/2  2/3  1    4/3  3/2  2
β = 0: s    s    s    E    s    s    E
β = 1: s    E    s    s    s
Some of the special cases are known by particular names:
• For α = 1 and β = 1, the distribution is a Landau distribution which has a specific usage in physics under this name.
• For α = 3/2 and β = 0 the distribution reduces to a Holtsmark distribution with scale parameter c and shift parameter μ.
Also, in the limit as c approaches zero or as α approaches zero the distribution will approach a Dirac delta function δ(x - μ).
## Series representation
The stable distribution can be restated as the real part of a simpler integral:[14]
${\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\int _{0}^{\infty }e^{it(x-\mu )}e^{-(ct)^{\alpha }(1-i\beta \Phi )}\,dt\right].}$
Expressing the second exponential as a Taylor series, we have:
${\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\int _{0}^{\infty }e^{it(x-\mu )}\sum _{n=0}^{\infty }{\frac {(-qt^{\alpha })^{n}}{n!}}\,dt\right]}$
where ${\displaystyle q=c^{\alpha }(1-i\beta \Phi )}$. Reversing the order of integration and summation, and carrying out the integration yields:
${\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\sum _{n=1}^{\infty }{\frac {(-q)^{n}}{n!}}\left({\frac {i}{x-\mu }}\right)^{\alpha n+1}\Gamma (\alpha n+1)\right]}$
which will be valid for x ≠ μ and will converge for appropriate values of the parameters. (Note that the n = 0 term which yields a delta function in x - μ has therefore been dropped.) Expressing the first exponential as a series will yield another series in positive powers of x - μ which is generally less useful.
## Simulation of stable variables
Simulating sequences of stable random variables is not straightforward, since there are no analytic expressions for the inverse ${\displaystyle F^{-1}(x)}$ nor the CDF ${\displaystyle F(x)}$ itself.[15][16] All standard approaches like the rejection or the inversion methods would require tedious computations. A much more elegant and efficient solution was proposed by Chambers, Mallows and Stuck (CMS),[17] who noticed that a certain integral formula[18] yielded the following algorithm:[19]
• generate a random variable ${\displaystyle U}$ uniformly distributed on ${\displaystyle \left(-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}}\right)}$ and an independent exponential random variable ${\displaystyle W}$ with mean 1;
• for ${\displaystyle \alpha \neq 1}$ compute:
${\displaystyle X=\left(1+\zeta ^{2}\right)^{\frac {1}{2\alpha }}{\frac {\sin(\alpha (U+\xi ))}{(\cos(U))^{\frac {1}{\alpha }}}}\left({\frac {\cos(U-\alpha (U+\xi ))}{W}}\right)^{\frac {1-\alpha }{\alpha }},}$
• for ${\displaystyle \alpha =1}$ compute:
${\displaystyle X={\frac {1}{\xi }}\left\{\left({\frac {\pi }{2}}+\beta U\right)\tan U-\beta \log \left({\frac {{\frac {\pi }{2}}W\cos U}{{\frac {\pi }{2}}+\beta U}}\right)\right\},}$
where
${\displaystyle \zeta =-\beta \tan {\frac {\pi \alpha }{2}},\qquad \xi ={\begin{cases}{\frac {1}{\alpha }}\arctan(-\zeta )&\alpha \neq 1\\{\frac {\pi }{2}}&\alpha =1\end{cases}}}$
This algorithm yields a random variable ${\displaystyle X\sim S_{\alpha }(\beta ,1,0)}$. For a detailed proof see.[20]
Given the formulas for simulation of a standard stable random variable, we can easily simulate a stable random variable for all admissible values of the parameters ${\displaystyle \alpha }$, ${\displaystyle c}$, ${\displaystyle \beta }$ and ${\displaystyle \mu }$ using the following property. If ${\displaystyle X\sim S_{\alpha }(\beta ,1,0)}$ then
${\displaystyle Y={\begin{cases}cX+\mu &\alpha \neq 1\\cX+{\frac {2}{\pi }}\beta c\log c+\mu &\alpha =1\end{cases}}}$
is ${\displaystyle S_{\alpha }(\beta ,c,\mu )}$. It is interesting to note that for ${\displaystyle \alpha =2}$ (and ${\displaystyle \beta =0}$) the CMS method reduces to the well known Box-Muller transform for generating Gaussian random variables.[21] Many other approaches have been proposed in the literature, including application of Bergström and LePage series expansions, see [22] and,[23] respectively. However, the CMS method is regarded as the fastest and the most accurate.
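The CMS recipe above translates almost line-for-line into code. The following numpy sketch (an illustration; the demo parameters and the fixed seed are assumptions, not part of the article) generates samples from S_α(β, c, μ) and checks the α = 2 special case against the Gaussian moments quoted earlier.

```python
import numpy as np

def sample_stable(alpha, beta, c, mu, size, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform on (-pi/2, pi/2)
    W = rng.exponential(1.0, size)                 # exponential with mean 1
    if alpha != 1:
        zeta = -beta * np.tan(np.pi * alpha / 2)
        xi = np.arctan(-zeta) / alpha
        X = ((1 + zeta ** 2) ** (1 / (2 * alpha))
             * np.sin(alpha * (U + xi)) / np.cos(U) ** (1 / alpha)
             * (np.cos(U - alpha * (U + xi)) / W) ** ((1 - alpha) / alpha))
        return c * X + mu                                        # Y = cX + mu for alpha != 1
    X = (2 / np.pi) * ((np.pi / 2 + beta * U) * np.tan(U)
         - beta * np.log((np.pi / 2) * W * np.cos(U) / (np.pi / 2 + beta * U)))
    return c * X + (2 / np.pi) * beta * c * np.log(c) + mu       # extra drift term for alpha = 1

samples = sample_stable(alpha=2.0, beta=0.0, c=1.0, mu=0.0, size=200_000)
print(samples.mean(), samples.std())   # close to 0 and sqrt(2): alpha = 2 is Gaussian with variance 2c^2
```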
## Applications
Stable distributions owe their importance in both theory and practice to the generalization of the central limit theorem to random variables without second (and possibly first) order moments and the accompanying self-similarity of the stable family. It was the seeming departure from normality along with the demand for a self-similar model for financial data (i.e. the shape of the distribution for yearly asset price changes should resemble that of the constituent daily or monthly price changes) that led Benoît Mandelbrot to propose that cotton prices follow an alpha-stable distribution with α equal to 1.7.[6] Lévy distributions are frequently found in analysis of critical behavior and financial data.[8][24]
They are also found in spectroscopy as a general expression for a quasistatically pressure broadened spectral line.[14]
The Lévy distribution of solar flare waiting time events (time between flare events) was demonstrated for CGRO BATSE hard x-ray solar flares in December 2001. Analysis of the Lévy statistical signature revealed that two different memory signatures were evident; one related to the solar cycle and the second whose origin appears to be associated with a localized or combination of localized solar active region effects.[25]
## Other analytic cases
A number of cases of analytically expressible stable distributions are known. Let the stable distribution be expressed by ${\displaystyle f(x;\alpha ,\beta ,c,\mu )}$ then we know:
• The Cauchy Distribution is given by ${\displaystyle f(x;1,0,1,0).}$
• The Lévy distribution is given by ${\displaystyle f(x;{\tfrac {1}{2}},-1,1,0).}$
• The Normal distribution is given by ${\displaystyle f(x;2,0,1,0).}$
• Let ${\displaystyle S_{\mu ,\nu }(z)}$ be a Lommel function, then:[26]
${\displaystyle f\left(x;{\tfrac {1}{3}},0,1,0\right)=\Re \left({\frac {2e^{-{\frac {i\pi }{4}}}}{3{\sqrt {3}}\pi }}{\frac {1}{\sqrt {x^{3}}}}S_{0,{\frac {1}{3}}}\left({\frac {2e^{\frac {i\pi }{4}}}{3{\sqrt {3}}}}{\frac {1}{\sqrt {x}}}\right)\right)}$
• Let ${\displaystyle S(x)}$ and ${\displaystyle C(x)}$ denote the Fresnel Integrals then:[27]
${\displaystyle f\left(x;{\tfrac {1}{2}},0,1,0\right)={\frac {1}{\sqrt {2\pi |x|^{3}}}}\left(\sin \left({\tfrac {1}{4|x|}}\right)\left[{\frac {1}{2}}-S\left({\tfrac {1}{\sqrt {2\pi |x|}}}\right)\right]+\cos \left({\tfrac {1}{4|x|}}\right)\left[{\frac {1}{2}}-C\left({\tfrac {1}{\sqrt {2\pi |x|}}}\right)\right]\right)}$
${\displaystyle f\left(x;{\tfrac {1}{3}},1,1,0\right)={\frac {1}{\pi }}{\frac {2{\sqrt {2}}}{3^{\frac {7}{4}}}}{\frac {1}{\sqrt {x^{3}}}}K_{\frac {1}{3}}\left({\frac {4{\sqrt {2}}}{3^{\frac {9}{4}}}}{\frac {1}{\sqrt {x}}}\right)}$
{\displaystyle {\begin{aligned}f\left(x;{\tfrac {4}{3}},0,1,0\right)&={\frac {3^{\frac {5}{4}}}{4{\sqrt {2\pi }}}}{\frac {\Gamma \left({\tfrac {7}{12}}\right)\Gamma \left({\tfrac {11}{12}}\right)}{\Gamma \left({\tfrac {6}{12}}\right)\Gamma \left({\tfrac {8}{12}}\right)}}{}_{2}F_{2}\left({\tfrac {7}{12}},{\tfrac {11}{12}};{\tfrac {6}{12}},{\tfrac {8}{12}};{\tfrac {3^{3}x^{4}}{4^{4}}}\right)-{\frac {3^{\frac {11}{4}}x^{3}}{4^{3}{\sqrt {2\pi }}}}{\frac {\Gamma \left({\tfrac {13}{12}}\right)\Gamma \left({\tfrac {17}{12}}\right)}{\Gamma \left({\tfrac {18}{12}}\right)\Gamma \left({\tfrac {15}{12}}\right)}}{}_{2}F_{2}\left({\tfrac {13}{12}},{\tfrac {17}{12}};{\tfrac {18}{12}},{\tfrac {15}{12}};{\tfrac {3^{3}x^{4}}{4^{4}}}\right)\\[6pt]f\left(x;{\tfrac {3}{2}},0,1,0\right)&={\frac {\Gamma \left({\tfrac {5}{3}}\right)}{\pi }}{}_{2}F_{3}\left({\tfrac {5}{12}},{\tfrac {11}{12}};{\tfrac {1}{3}},{\tfrac {1}{2}},{\tfrac {5}{6}};-{\tfrac {2^{2}x^{6}}{3^{6}}}\right)-{\frac {x^{2}}{3\pi }}{}_{3}F_{4}\left({\tfrac {3}{4}},1,{\tfrac {5}{4}};{\tfrac {2}{3}},{\tfrac {5}{6}},{\tfrac {7}{6}},{\tfrac {4}{3}};-{\tfrac {2^{2}x^{6}}{3^{6}}}\right)+{\frac {7x^{4}\Gamma \left({\tfrac {4}{3}}\right)}{3^{4}\pi ^{2}}}{}_{2}F_{3}\left({\tfrac {13}{12}},{\tfrac {19}{12}};{\tfrac {7}{6}},{\tfrac {3}{2}},{\tfrac {5}{3}};-{\tfrac {2^{2}x^{6}}{3^{6}}}\right)\end{aligned}}}
with the latter being the Holtsmark distribution.
{\displaystyle {\begin{aligned}f\left(x;{\tfrac {2}{3}},0,1,0\right)&={\frac {\sqrt {3}}{6{\sqrt {\pi }}|x|}}\exp \left({\tfrac {2}{27}}x^{-2}\right)W_{-{\frac {1}{2}},{\frac {1}{6}}}\left({\tfrac {4}{27}}x^{-2}\right)\\[8pt]f\left(x;{\tfrac {2}{3}},1,1,0\right)&={\frac {\sqrt {3}}{{\sqrt {\pi }}|x|}}\exp \left(-{\tfrac {16}{27}}x^{-2}\right)W_{{\frac {1}{2}},{\frac {1}{6}}}\left({\tfrac {32}{27}}x^{-2}\right)\\[8pt]f\left(x;{\tfrac {3}{2}},1,1,0\right)&={\begin{cases}{\frac {\sqrt {3}}{{\sqrt {\pi }}|x|}}\exp \left({\frac {1}{27}}x^{3}\right)W_{{\frac {1}{2}},{\frac {1}{6}}}\left(-{\frac {2}{27}}x^{3}\right)&x<0\\{}\\{\frac {\sqrt {3}}{6{\sqrt {\pi }}|x|}}\exp \left({\frac {1}{27}}x^{3}\right)W_{-{\frac {1}{2}},{\frac {1}{6}}}\left({\frac {2}{27}}x^{3}\right)&x\geq 0\end{cases}}\end{aligned}}}
## Notes
• The STABLE program for Windows is available from John Nolan's stable webpage: http://academic2.american.edu/~jpnolan/stable/stable.html. It calculates the density (pdf), cumulative distribution function (cdf) and quantiles for a general stable distribution, and performs maximum likelihood estimation of stable parameters and some exploratory data analysis techniques for assessing the fit of a data set.
• Matlab codes by Rafal Weron and collaborators for simulation of stable variables and estimation of stable parameters are available from RePEc: https://ideas.repec.org/e/pwe42.html#software
• Versions R2016b+ of the Matlab Statistics Toolbox now include functions for the stable density and cumulative probability functions.
• Matlab File Exchange routines by Mark Veillette to compute the stable PDF, CDF and inverse CDF, to simulate stable random variables, and to fit univariate stable distributions.
• R Package 'stabledist' by Diethelm Wuertz, Martin Maechler and Rmetrics core team members. Computes stable density, probability, quantiles, and random numbers. Updated Sept. 12, 2016.
## References
1. ^ a b B. Mandelbrot, The Pareto-Lévy Law and the Distribution of Income, International Economic Review 1960 https://www.jstor.org/stable/2525289
2. ^ Paul Lévy, Calcul des probabilités 1925
3. ^ B.Mandelbrot, Stable Paretian Random Functions and the Multiplicative Variation of Income, Econometrica 1961 https://www.jstor.org/stable/pdfplus/1911802.pdf
4. ^ B. Mandelbrot, The variation of certain Speculative Prices, The Journal of Business 1963 [1]
5. ^ Eugene F. Fama, Mandelbrot and the Stable Paretian Hypothesis, The Journal of Business 1963
6. ^ a b Mandelbrot, B., New methods in statistical economics The Journal of Political Economy, 71 #5, 421-440 (1963).
7. Nolan, John P. "Stable Distributions - Models for Heavy Tailed Data" (PDF). Retrieved .
8. ^ a b c Voit, Johannes (2005). The Statistical Mechanics of Financial Markets - Springer. Springer. doi:10.1007/b137351.
9. ^ B.V. Gnedenko, A.N. Kolmogorov. Limit distributions for sums of independent random variables, Cambridge, Addison-Wesley 1954 https://books.google.com/books/about/Limit_distributions_for_sums_of_independ.html?id=rYsZAQAAIAAJ&redir_esc=y
11. ^ Samorodnitsky, G.; Taqqu, M.S. (1994). Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. CRC Press. ISBN 9780412051715.
12. ^ a b Lee, Wai Ha (2010). Continuous and discrete properties of stochastic processes. PhD thesis, University of Nottingham.
13. ^ Zolotarev, V. (1995). "On Representation of Densities of Stable Laws by Special Functions". Theory of Probability & Its Applications. 39 (2): 354-362. ISSN 0040-585X. doi:10.1137/1139025.
14. ^ a b Peach, G. (1981). "Theory of the pressure broadening and shift of spectral lines". Advances in Physics. 30 (3): 367-474. ISSN 0001-8732. doi:10.1080/00018738100101467.
15. ^ Nolan, John P. (1997). "Numerical calculation of stable densities and distribution functions". Communications in Statistics. Stochastic Models. 13 (4): 759-774. ISSN 0882-0287. doi:10.1080/15326349708807450.
16. ^ Matsui, Muneya; Takemura, Akimichi (2006). "Some Improvements in Numerical Evaluation of Symmetric Stable Density and Its Derivatives". Communications in Statistics - Theory and Methods. 35 (1): 149-172. ISSN 0361-0926. doi:10.1080/03610920500439729.
17. ^ Chambers, J. M.; Mallows, C. L.; Stuck, B. W. (1976). "A Method for Simulating Stable Random Variables". Journal of the American Statistical Association. 71 (354): 340-344. ISSN 0162-1459. doi:10.1080/01621459.1976.10480344.
18. ^ Zolotarev, V. M. (1986). One-Dimensional Stable Distributions. American Mathematical Society. ISBN 978-0-8218-4519-6.
19. ^ Misiorek, Adam; Weron, Rafał (2012). Gentle, James E.; Härdle, Wolfgang Karl; Mori, Yuichi, eds. Heavy-Tailed Distributions in VaR Calculations. Springer Handbooks of Computational Statistics. Springer Berlin Heidelberg. pp. 1025-1059. ISBN 978-3-642-21550-6. doi:10.1007/978-3-642-21551-3_34.
20. ^ Weron, Rafał (1996). "On the Chambers-Mallows-Stuck method for simulating skewed stable random variables". Statistics & Probability Letters. 28 (2): 165-171. doi:10.1016/0167-7152(95)00113-1.
21. ^ Janicki, Aleksander; Weron, Aleksander (1994). Simulation and Chaotic Behavior of Alpha-stable Stochastic Processes. CRC Press. ISBN 9780824788827.
22. ^ Mantegna, Rosario Nunzio (1994). "Fast, accurate algorithm for numerical simulation of Lévy stable stochastic processes". Physical Review E. 49 (5): 4677-4683. doi:10.1103/PhysRevE.49.4677.
23. ^ Janicki, Aleksander; Kokoszka, Piotr (1992). "Computer investigation of the Rate of Convergence of Lepage Type Series to α-Stable Random Variables". Statistics. 23 (4): 365-373. ISSN 0233-1888. doi:10.1080/02331889208802383.
24. ^ Rachev, Svetlozar T.; Mittnik, Stefan (2000). Stable Paretian Models in Finance. Wiley. ISBN 978-0-471-95314-2.
25. ^ Leddon, D., A statistical Study of Hard X-Ray Solar Flares
26. ^ a b Garoni, T. M.; Frankel, N. E. (2002). "Lévy flights: Exact results and asymptotics beyond all orders". Journal of Mathematical Physic. 43 (5): 2670-2689. doi:10.1063/1.1467095.
27. ^ a b Hopcraft, K. I.; Jakeman, E.; Tanner, R. M. J. (1999). "Lévy random walks with fluctuating step number and multiscale behavior". Physical Review E. 60 (5): 5327-5343. doi:10.1103/physreve.60.5327.
28. ^ Uchaikin, V. V.; Zolotarev, V. M. (1999). "Chance And Stability - Stable Distributions And Their Applications". VSP. Utrecht, Netherlands.
29. ^ Zlotarev, V. M. (1961). "Expression of the density of a stable distribution with exponent alpha greater than one by means of a frequency with exponent 1/alpha". Selected Translations in Mathematical Statistics and Probability (Translated from the Russian article: Dokl. Akad. Nauk SSSR. 98, 735-738 (1954)). 1: 163-167.
30. ^ Zaliapin, I. V.; Kagan, Y. Y.; Schoenberg, F. P. (2005). "Approximating the Distribution of Pareto Sums". Pure and Applied Geophysics. 162 (6): 1187-1228. doi:10.1007/s00024-004-2666-3.
|
2017-11-22 05:36:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 85, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731716871261597, "perplexity": 1614.9826036047946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806465.90/warc/CC-MAIN-20171122050455-20171122070455-00046.warc.gz"}
|
https://www.csauthors.net/jean-auriol/
|
# Jean Auriol
According to our database, Jean Auriol authored at least 21 papers between 2016 and 2021.
Collaborative distances:
• Dijkstra number of four.
• Erdős number of five.
## Bibliography
2021
Output-feedback control of an underactuated network of interconnected hyperbolic PDE-ODE systems.
Syst. Control. Lett., 2021
Robust Control Design of Underactuated 2 × 2 PDE-ODE-PDE Systems.
IEEE Control. Syst. Lett., 2021
2020
A Sensing and Computational Framework for Estimating the Seismic Velocities of Rocks Interacting With the Drill Bit.
IEEE Trans. Geosci. Remote. Sens., 2020
Stability Analysis for a Class of Linear $2\times 2$ Hyperbolic PDEs Using a Backstepping Transform.
IEEE Trans. Autom. Control., 2020
Corrigendum to "Robust output feedback stabilization for two heterodirectional linear coupled hyperbolic PDEs" [Automatica 115].
Autom., 2020
Robust output feedback stabilization for two heterodirectional linear coupled hyperbolic PDEs.
Autom., 2020
Output feedback stabilization of an underactuated cascade network of interconnected linear PDE systems using a backstepping approach.
Autom., 2020
A differential-delay estimator for thermoacoustic oscillations in a Rijke tube using in-domain pressure measurements.
Proceedings of the 59th IEEE Conference on Decision and Control, 2020
Simultaneous Stabilization of Traffic Flow on Two Connected Roads.
Proceedings of the 2020 American Control Conference, 2020
Combining Formation Seismic Velocities while Drilling and a PDE-ODE observer to improve the Drill-String Dynamics Estimation.
Proceedings of the 2020 American Control Conference, 2020
Self-Tuning Torsional Drilling Model for Real-Time Applications.
Proceedings of the 2020 American Control Conference, 2020
2019
An explicit mapping from linear first order hyperbolic PDEs to difference systems.
Syst. Control. Lett., 2019
Late-lumping backstepping control of partial differential equations.
Autom., 2019
Delay-robust stabilization of an n+m hyperbolic PDE-ODE system.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019
Delay robust state feedback stabilization of an underactuated network of two interconnected PDE systems.
Proceedings of the 2019 American Control Conference, 2019
2018
Two-Sided Boundary Stabilization of Heterodirectional Linear Coupled Hyperbolic PDEs.
IEEE Trans. Autom. Control., 2018
Delay-Robust Control Design for Two Heterodirectional Linear Coupled Hyperbolic PDEs.
IEEE Trans. Autom. Control., 2018
Delay-robust stabilization of a hyperbolic PDE-ODE system.
Autom., 2018
Robust output regulation of 2×2 hyperbolic systems: Control law and Input-to-State Stability.
Proceedings of the 2018 Annual American Control Conference, 2018
2016
Minimum time control of heterodirectional linear coupled hyperbolic PDEs.
Autom., 2016
Two-sided boundary stabilization of two linear hyperbolic PDEs in minimum time.
Proceedings of the 55th IEEE Conference on Decision and Control, 2016
|
2021-10-28 15:27:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.377690851688385, "perplexity": 14044.626293676325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00673.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-4-review-page-321/83
|
Prealgebra (7th Edition)
$\frac{8}{13}$
Divide and multiply working left to right. $\frac{5}{13}\div\frac{1}{2}\times\frac{4}{5}$ =$\frac{5}{13}\times\frac{2}{1}\times\frac{4}{5}$ =$\frac{10}{13}\times\frac{4}{5}$ =$\frac{40}{65}$ =$\frac{40\div5}{65\div5}$ =$\frac{8}{13}$
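The same computation can be checked with Python's fractions module (a verification added here, not part of the original answer):

```python
from fractions import Fraction

# 5/13 divided by 1/2, then multiplied by 4/5, working left to right
print(Fraction(5, 13) / Fraction(1, 2) * Fraction(4, 5))   # 8/13
```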
|
2019-11-19 02:00:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595015645027161, "perplexity": 1916.3845277135754}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00009.warc.gz"}
|
https://www.physicsforums.com/threads/more-electric-energy-questions.59296/
|
# Homework Help: More Electric Energy questions
1. Jan 10, 2005
### thursdaytbs
The way to solve this, i tried is by saying F = k(q1)(q2) / r^2. So, the force between the two charges is (9x10^9)(2.6x10^-8)(5.5x10^-8) / (1.4^2). Although, from there I'm not sure where to go because that just solves for the force inbetween the two charges and not the EPE or even more specifically the EPE midway between them.
I started by saying that Electric Force = k(q1)(q2) / r^2,
Then I figured I would apply that to all 3 charges with the +8charge as q2. Next I would add up all the three forces together to find the total amount of force needed. Except, theres where I get stuck, since that single equation can't be applied to all 3 charges seperately since - don't they all effect one charge?
Any help appreciated.
2. Jan 10, 2005
### vincentchan
the formula for electric potential is V=kq/r, and potential is a form of energy. If you have two sources of potential, just find the individual V and add them up....
work done is $$qV_{total}$$... find the potential for each charge and add them all up to get $$V_{total}$$
I think you have already fallen behind your class. In this chapter you are doing the potential of point charges; the force formula k(q1)(q2) / r^2 is outdated... do some reading b4 posting next time
3. Jan 10, 2005
### HallsofIvy
"The way to solve this, i tried is by saying F = k(q1)(q2) / r^2. So, the force between the two charges is (9x10^9)(2.6x10^-8)(5.5x10^-8) / (1.4^2)."
This is the force each exerts on the other. The problem asked for the electric potential half way between them. Imagine a "test" charge q at distance 0.7 m from each charge. What is the force on that test charge due to each (be careful about the directions). What is the total force on that test charge? The potential is that total force divided by the charge q.
4. Jan 10, 2005
### vincentchan
That is the electric field, not potential....
5. Jan 10, 2005
### thursdaytbs
I thought W = q(Vb - Va)? Or is that only valid when it's one charge moving from one place to another, and W = qVtotal, when theres more than one charge?
And yeah, I think I've fallen behind because I can fully grasp the idea of F=k(q1)(q2) / r^2, and E = F/q, but I don't fully understand the work done to a charge, or the Electric Potential Energy.
6. Jan 10, 2005
### vincentchan
yes, you are right. However, if the charge comes from infinitely far away, Va = kq/r and Va goes to zero as r goes to infinity, so W=qV works perfectly fine in your problem (2). Hope this answers your question
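Putting the V = kq/r superposition recipe from this thread into numbers (the two charges and the 1.4 m separation are taken from post #1; the test charge is an assumed value used only for illustration):

```python
k = 8.99e9                      # Coulomb constant, N m^2 / C^2
q1, q2 = 2.6e-8, 5.5e-8         # the two fixed charges, C
r = 1.4                         # their separation, m

# potential midway between the charges: add the scalar potentials kq/r from each charge
V_mid = k * q1 / (r / 2) + k * q2 / (r / 2)
print(V_mid)                    # about 1.0e3 V

# work to bring a test charge q from infinity to the midpoint: W = q * V_mid
q = 8e-6                        # assumed test charge, C
print(q * V_mid)
```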
|
2018-08-17 03:55:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.612984299659729, "perplexity": 886.0788587792345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211664.49/warc/CC-MAIN-20180817025907-20180817045907-00274.warc.gz"}
|
https://socratic.org/questions/what-is-the-horizontal-asymptote-of-y-y-2x-3-5x-2-7x-2-3
|
# What is the horizontal asymptote of y=((2x+3)(5x-2))/(7x^2-3) ?
Sep 11, 2014
Its horizontal asymptote is $y = \frac{10}{7}$.
By taking the limits at infinity,
${\lim}_{x \to \infty} \frac{\left(2 x + 3\right) \left(5 x - 2\right)}{7 {x}^{2} - 3}$
by dividing the numerator and the denominator by ${x}^{2}$,
$= {\lim}_{x \to + \infty} \frac{\left(2 + \frac{3}{x}\right) \left(5 - \frac{2}{x}\right)}{7 - \frac{3}{x} ^ 2} = \frac{\left(2 + 0\right) \left(5 - 0\right)}{7 - 0} = \frac{10}{7}$
Similarly, you can find
${\lim}_{x \to - \infty} \frac{\left(2 x + 3\right) \left(5 x - 2\right)}{7 {x}^{2} - 3} = \frac{10}{7}$
Hence, there is only one horizontal asymptote $y = \frac{10}{7}$.
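For readers who want to double-check the limit symbolically, a small SymPy sketch (assuming SymPy is available; this is not part of the original answer):

```python
from sympy import symbols, limit, oo

x = symbols('x')
f = (2*x + 3)*(5*x - 2) / (7*x**2 - 3)
print(limit(f, x, oo), limit(f, x, -oo))   # 10/7 10/7
```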
|
2022-08-19 11:44:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977630972862244, "perplexity": 1050.0843173575845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573667.83/warc/CC-MAIN-20220819100644-20220819130644-00311.warc.gz"}
|
https://www.sarthaks.com/109686/a-5-m-60-cm-high-vertical-pole-casts-a-shadow-3-m-20-cm-long-find-at-the-same-time
|
# A 5 m 60 cm high vertical pole casts a shadow 3 m 20 cm long. Find at the same time −
(i) the length of the shadow cast by another pole 10 m 50 cm high
(ii) the height of a pole which casts a shadow 5 m long.
## 1 Answer
(i) Let the length of the shadow of the other pole be x m. 1 m = 100 cm
The given information in the form of a table is as follows.
| | First pole | Second pole |
| --- | --- | --- |
| Height of pole (in m) | 5.6 | 10.50 |
| Length of shadow (in m) | 3.2 | x |
The greater the height of an object, the longer its shadow will be.
Thus, the height of an object and length of its shadow are directly proportional to each other. Therefore, we obtain
5.60/3.20 = 10.50/x
x = (10.50 × 3.20)/5.60 = 6
Hence, the length of the shadow will be 6 m.
(ii) Let the height of the pole be y m.
The given information in the form of a table is as follows.
| | First pole | Second pole |
| --- | --- | --- |
| Height of pole (in m) | 5.6 | y |
| Length of shadow (in m) | 3.2 | 5 |
The height of the pole and the length of the shadow are directly proportional to each other. Therefore,
5.60/3.20 = y/5
y = (5 × 5.60)/3.20 = 8.75
Thus, the height of the pole is 8.75 m or 8 m 75 cm.
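The same direct-proportion rule as a quick, illustrative Python computation (not part of the original answer):

```python
height_1, shadow_1 = 5.60, 3.20       # metres, for the first pole
print(10.50 * shadow_1 / height_1)    # (i) shadow of a 10.50 m pole: 6.0 m
print(5 * height_1 / shadow_1)        # (ii) height of a pole with a 5 m shadow: 8.75 m
```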
|
2021-02-27 19:10:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8489954471588135, "perplexity": 1054.8382050971948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359082.48/warc/CC-MAIN-20210227174711-20210227204711-00179.warc.gz"}
|
https://brilliant.org/problems/15-puzzle-timesaver-3/
|
# 15 Puzzle
Algebra Level 4
Let A be the square matrix represented by the 15 puzzle shown above (the puzzle image is not reproduced here). The empty slot is 0. Find
$\lfloor \det (e^A) \rfloor.$
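A hint that does not depend on the exact (unshown) arrangement of the tiles: for any square matrix, $\det(e^A) = e^{\operatorname{tr}(A)},$ so only the diagonal entries of the puzzle matrix matter.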
|
2018-04-21 21:24:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4847370684146881, "perplexity": 2198.9592701049846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945448.38/warc/CC-MAIN-20180421203546-20180421223546-00496.warc.gz"}
|
https://collegephysicsanswers.com/openstax-solutions/radar-used-detect-presence-aircraft-receives-pulse-has-reflected-object-6-times
|
Question
A radar used to detect the presence of aircraft receives a pulse that has reflected off an object $6 \times 10^{-5} \textrm{ s}$ after it was transmitted. What is the distance from the radar station to the reflecting object?
$9 \textrm{ km}$
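A hedged sketch of the arithmetic behind this result, assuming the pulse travels at the speed of light and the quoted time covers the round trip: $d = \dfrac{ct}{2} = \dfrac{(3\times 10^{8}\textrm{ m/s})(6\times 10^{-5}\textrm{ s})}{2} = 9\times 10^{3}\textrm{ m} = 9\textrm{ km}$.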
|
2020-01-19 11:09:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2329782098531723, "perplexity": 804.2197585531686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594391.21/warc/CC-MAIN-20200119093733-20200119121733-00327.warc.gz"}
|
https://www.yaclass.in/p/mathematics-cbse/class-8/playing-with-numbers-2468/re-baa0f80d-4ee9-4a76-9523-56fce89c42cd
|
Theory:
Similar to the logics on two-digit numbers, three-digit numbers can also be subjected to a few tricks.
Trick $$1$$: What happens to the difference between three-digit numbers and their reverse?
The difference between three digit numbers and their reverse is always a multiple of $$99$$.
Consider the three-digit number '$$abc$$'.
The general form of '$$abc$$' is $$(100 \times a) + (10 \times b) + c$$ or $$100a + 10b + c$$.
The reverse of '$$abc$$' is '$$cba$$'.
The general form of '$$cba$$' is $$(100 \times c) + (10 \times b) + a$$ or $$100c + 10b + a$$.
Now, let us find the difference between '$$abc$$' and '$$cba$$'.
If $$a > c$$:
$$abc - cba = (100a + 10b + c) - (100c + 10b + a)$$
$$= 100a + 10b + c - 100c - 10b - a$$
$$= 99a - 99c$$
$$= 99(a - c)$$
If $$c > a$$:
$$cba - abc = (100c + 10b + a) - (100a + 10b + c)$$
$$= 100c + 10b + a - 100c - 10b - a$$
$$= 99c - 99a$$
$$= 99(c - a)$$
Example:
Let us impose this logic on $$325$$.
The reverse of $$325$$ is $$523$$.
Hence, to find the difference, we should subtract $$325$$ from $$523$$.
$$523 - 325 = 198$$
$$= 99 \times 2$$
Hence, it is proved that the difference between three-digit numbers and their reverse is a multiple of $$99$$.
Trick $$2$$: What happens when $$3$$ forms of a three-digit number are summed up?
Consider the number '$$abc$$'.
Form $$1$$: $$abc = 100a + 10b + c \longrightarrow (1)$$
To find the other forms of the number, shift the ONES digit to the number's left end.
Therefore, the number '$$abc$$' becomes '$$cab$$'.
The digit '$$c$$' is shifted to the left end of the number.
Form $$2$$: $$cab = 100c + 10a + b \longrightarrow (2)$$
To form the third number, shift '$$b$$' from '$$cab$$' to the left end.
Form $$3$$: $$bca = 100b + 10c + a \longrightarrow (3)$$
On adding $$(1)$$, $$(2)$$, and $$(3)$$, we get:
$$abc + cab + bca = (100a + 10b + c) + (100c + 10a + b) + (100b + 10c + a)$$
$$= 111(a + b + c)$$
$$= 37 \times 3(a + b + c)$$
Therefore, the sum of $$3$$ forms of a three-digit number is always a multiple of $$37$$.
Example:
Let us consider the number $$128$$.
Form $$1$$: $$128$$
Form $$2$$: $$812$$
Form $$3$$: $$281$$
On adding the $$3$$ numbers, we get:
$$128 + 812 + 281 = 1221$$
$$= 37 \times 33$$
Thus, the sum of three forms of a three-digit number is always a multiple of $$37$$.
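A small Python spot-check of both tricks (illustrative only, not part of the original lesson):

```python
def reverse_diff(n):
    """Difference between a three-digit number and its reverse."""
    return abs(n - int(str(n)[::-1]))

def rotation_sum(n):
    """Sum of the three forms abc + cab + bca."""
    s = str(n)
    # i = 0 gives the number itself; i = 1, 2 give the rotations cab and bca
    return sum(int(s[-i:] + s[:-i]) for i in range(3))

for n in (325, 128, 907):
    assert reverse_diff(n) % 99 == 0
    assert rotation_sum(n) % 37 == 0
print("both tricks hold for the sampled numbers")
```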
|
2021-04-18 23:08:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7348012924194336, "perplexity": 891.5503324719847}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038862159.64/warc/CC-MAIN-20210418224306-20210419014306-00515.warc.gz"}
|
https://www.esaral.com/q/write-the-number-significant-digits-in-a-1001-78602/
|
Write the number of significant digits in (a) 1001,
Question:
Write the number of significant digits in (a) 1001, (b) $100.1$, (c) $100.10$, (d) $0.001001$.
Solution:
The number of significant digits is as follows:
(a) 4 (zeros between non-zero digits are significant)
(b) 4 (again, the zero digits lie between non-zero digits)
(c) 5 (a trailing zero written after the decimal point is also significant)
(d) 4 (leading zeros only fix the position of the decimal point and are not significant)
|
2022-05-17 00:28:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7663843035697937, "perplexity": 910.2313155919265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515466.5/warc/CC-MAIN-20220516235937-20220517025937-00525.warc.gz"}
|
https://math.stackexchange.com/questions/2389734/smart-enumeration-of-a-subset-of-graphs-obtained-from-a-parent-graph
|
# Smart enumeration of a subset of graphs obtained from a parent graph
Suppose I have a graph $G$ of $n$ nodes. For each node someone has given us a recipe $R$ for how to replace the node with a graph. So for node $i$, I have $m_i$ choices of graphs to replace it with. Thus, given $G$, I can obtain $\prod\limits_{i=1}^n m_i$ graphs from the recipe $R$. When the number of graphs obtainable from $G$ using $R$ is huge, exhaustive enumeration is a poor choice due to computational infeasibility.
To avoid doing exhaustive enumeration, because computers cannot run for days, we need to do selective enumeration of those graphs. One idea I could think was that we define biases based on user/operator input. Thus some graphs are more relevant/important than others. Using this as a guiding heuristic we can do some kind of pruning to avoid enumerating all possible graphs.
I have done a lot of hand-waving without anything concrete. That is why I posed this question: could someone show some work where this is actually done in a concrete computational way rather than hand-waving, or suggest concrete computational steps to take in this direction?
EDIT:
Suppose we have a directed acyclic graph with weighted edges, and the cost of the graph is a non-linear function of the edge weights. We want to see if changing the graph using $R$ leads to graphs with costs below some threshold range. We want to avoid exhaustive enumeration and use some smart algorithmic method instead. What previous work has been done in this direction? What keywords should I search in Google Scholar to find such methods?
• Handwaving is OK in moderation but you need to give some idea of what you are evaluating - it seems like you want to choose the best set of transforms for the nodes of $G$? For some method of deciding "best". Are these perhaps directed acyclic graphs with weighted edges/nodes? – Joffan Aug 11 '17 at 0:39
• Yes i had directed acyclic graphs with weighted edges in mind. I agree with your statement wholly. To make discussion concrete based on your comment i have made an addendum in my original question. – user_1_1_1 Aug 11 '17 at 18:44
• For a computational example of pruning nodes from a graph with Python ya might want to check the Point toy class I wrote as an answer for a somewhat related question; specifically the cheapest of routes method demonstrates one way of returning a sub-set of nodes/Points who are the lowest or tied for lowest in cost to a calling process for further evaluation. As for smart algorithms it might be worth looking into sorting algos for bubbling up the most important points for preferential pre-filtering. – S0AndS0 Apr 1 '19 at 6:40
Note from the future; I've condensed this answer and added this answer's source code to a GitHub repo, and more content on the pages site related to graphs, because it was getting rather verbose.
I'll attempt to assist with the following bits...
One idea I could think was that we define biases based on user/operator input. Thus some graphs are more relevant/important than others. Using this as a guiding heuristic we can do some kind of pruning to avoid enumerating all possible graphs.
... but first it's probably a good idea to get comfortable (snack and/or drink in other-words), it's going to be one of those posts ;-D
Your idea sounds smart for prioritizing computation of unordered (and possibly mutating) structured data sets, keeping humans in the loop allows for the illusion of control and given the strides being made in machine learning... enough hand-waving though time to focus on a portion of this problem; a prioritizing iterator.
The best way that I know how to express this is in Python, I'll try to keep it to a minimum as far as script size while also only using/inheriting built-ins so that the technical-debt (a measure of having to understand sets of $$library_{dependency}$$ before even beginning to code), for readers is kept to a minimum too. A balancing act but at least what I share is very generalizable.
### Link to hybrid_iterator/__init__.py
a dependency of the following class that'll be used soon, it's mainly to keep code clean and mostly reliable between Python versions.
Below is a sketch of Hybrid_Iterator's super relationships with the dict and Iterator classes
$$\color{#CD8C00}{\fbox{ dict }{ \color{#2E8B57}{\xleftarrow[]{ \color{#000}{\text{super}_{\left(key\_word\_args\right)}} }} \over{\color{#CD8C00}{ \xrightarrow[\color{#000}{\text{returned value}}]{} }} }} \color{#2E8B57}{\fbox{ Hybrid~Iterator }{ \color{#2E8B57}{\xrightarrow[]{ \color{#000}{\text{super}_{\left(key\_word\_args\right)}} }} \over{\color{#00A}{ \xleftarrow[\color{#000}{\text{returned value}}]{} }} }} \color{#00A}{\fbox{ Iterator }}$$
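A minimal sketch of what `hybrid_iterator/__init__.py` might contain (the file itself is only linked above, not shown here); every name and behaviour in it is an assumption inferred from the usage further down.

```python
class Hybrid_Iterator(dict):
    """A dict that also acts as its own iterator; sub-classes supply next()."""

    def __init__(self, **kwargs):
        super(Hybrid_Iterator, self).__init__(**kwargs)

    def __iter__(self):
        return self

    def __next__(self):
        # Python 3 iterator protocol; delegates to the Python 2 style hook
        return self.next()

    def next(self):
        raise NotImplementedError("sub-classes should implement next()")
```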
Above is nothing overly special on its own; in fact, without modification it's in some ways a worse dictionary than it was before. Hint: compare the lists of methods available for each via dir(), e.g. dir(Hybrid_Iterator), to see its inner bits that can be called upon... But let's not get lost in the details; instead let's get past the prereqs and on to the proof-of-concept code and its usage.
### Link to hybrid_iterator/priority_buffer.py
$$\color{#2E8B57}{\fbox{ Hybrid~Iterator }{ \color{#8C0073}{\xleftarrow[]{ \color{#000}{\text{super}_{\left(key\_word\_args\right)}} }} \over{\color{#2E8B57}{ \xrightarrow[\color{#000}{\text{returned value}}]{} }} }} \color{#8C0073}{\fbox{ Priority~Buffer }}$$
Above is a sketch of the relationships being built between the code blocks above and below.
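As with the previous file, only a link to `hybrid_iterator/priority_buffer.py` is given, so the following is a guess at a compatible implementation, inferred from the usage examples and the GE_bound/step behaviour described later; it is not the author's original code.

```python
try:
    from hybrid_iterator import Hybrid_Iterator   # imported as a package
except ImportError:
    from __init__ import Hybrid_Iterator          # run from inside the directory


class Priority_Buffer(Hybrid_Iterator):
    """
    Destructively pops sub-graphs whose priority value meets the current bound
    into a fixed-size buffer, widening the bound by step['amount'] whenever too
    few matches are found. Only the GE_ (greater-or-equal) variant is sketched.
    """

    def __init__(self, graph=None, priority=None, step=None, buffer_size=5, **kwargs):
        super(Priority_Buffer, self).__init__(
            graph=graph, priority=priority, step=step,
            buffer_size=buffer_size, buffer={}, **kwargs)

    def next(self):
        if not self['graph']:
            raise StopIteration("graph is empty")
        self['buffer'] = {}
        key_name = self['priority']['key_name']
        bound = self['priority']['GE_bound']
        while len(self['buffer']) < self['buffer_size']:
            # Move the first-found matches out of the graph and into the buffer
            for key in list(self['graph'].keys()):
                if len(self['buffer']) >= self['buffer_size']:
                    break
                if self['graph'][key].get(key_name, 0) >= bound:
                    self['buffer'][key] = self['graph'].pop(key)
            if len(self['buffer']) >= self['buffer_size'] or not self['graph']:
                break
            if bound <= self['step']['GE_min']:
                break   # search space fully expanded; rely on the caller's counter guard
            bound += self['step']['amount']
            self['priority']['GE_bound'] = bound
        return self
```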
Indeed that was some code so to cover the what's, hows, and whys I'll try to put it into some context with usage examples.
## Usage Examples
Provided that the first script was saved to hybrid_iterator/__init__.py and the second to hybrid_iterator/priority_buffer.py (within what-ever sub-directory you've made and changed working directories to), it should be possible to run the following within a Python shell...
from priority_buffer import Priority_Buffer
... which should produce no output just a new line; thrilling for sure.
I'm going to ask Python to generate some toy data to play with, priorities will be randomly set within upper and lower bounds between 0 and 9 respectively...
from random import randint
graph = {}
for i in range(0, 21, 1):
graph.update({
"sub_graph_{0}".format(i): {
'points': {},
'first_to_compute': randint(0, 9),
}
})
Above should generate a graph dictionary that'll look sorta like the following...
for k, v in graph.items():
print("{0} -> {1}".format(k, v))
# ...
# Graph_4 -> {'points': {}, 'first_to_compute': 0}
# Graph_7 -> {'points': {}, 'first_to_compute': 4}
# Graph_5 -> {'points': {}, 'first_to_compute': 8}
# ...
The first_to_compute keys are what are going to be used soon and I hope readers like it; it doesn't really matter what you call this key so long as you're consistent between the above and below code blocks. The sub_graph key names are unimportant for these examples, as dictionaries are unordered, only serving as a hash for look-ups. The points are just place-holders to show that more than one key-value pair is allowed in a dictionary, well, so long as the keys are unique. Note that nesting dicts is often easier than getting the data back out without forethought, so this should not be used in production without significant modifications.
Readers who really want something more substantial in complexity to substitute in for points empty dictionary can find a link within the comments of this OP's question to another class written for a different graph related question. However, for the following set of examples it'll not be important.
With all that set-up out of the way it is time to initialize the Priority_Buffer class!...
buffer = Priority_Buffer(
graph = graph,
priority = {'key_name': 'first_to_compute',
'GE_bound': 7},
step = {'amount': -2,
'GE_min': -1},
buffer_size = 5,
)
... then to loop it, safely...
counter = 0
c_max = int(len(graph.keys()) / buffer['buffer_size'] + 1)
# ... (21 / 5) + 1 -> int -> 5
for chunk in buffer:
print("Chunk {count} of ~ {max}".format(
count = counter, max = c_max - 1))
for key, val in chunk['buffer'].items():
print("\t{k} -> {v}".format(**{
'k': key, 'v': val}))
counter += 1
if counter > c_max:
raise Exception("Hunt for bugs!")
That business above with counter and c_max is to ensure that an initialized Priority_Buffer with inputs that result in contemplating $$\infty$$ the wrong way is not disastrous.
... which should output something that looks like...
Chunk 0 of ~ 4
Graph_18 -> {'points': {}, 'first_to_compute': 5}
Graph_13 -> {'points': {}, 'first_to_compute': 7}
Graph_5 -> {'points': {}, 'first_to_compute': 8}
Graph_8 -> {'points': {}, 'first_to_compute': 9}
Graph_9 -> {'points': {}, 'first_to_compute': 6}
# ... Trimmed for brevity...
Chunk 3 of ~ 4
Graph_6 -> {'points': {}, 'first_to_compute': 0}
Graph_4 -> {'points': {}, 'first_to_compute': 0}
Graph_3 -> {'points': {}, 'first_to_compute': 0}
Graph_16 -> {'points': {}, 'first_to_compute': 3}
Graph_1 -> {'points': {}, 'first_to_compute': 2}
Chunk 4 of ~ 4
Graph_0 -> {'points': {}, 'first_to_compute': 0}
The original graph dictionary should now be empty, destructive reads with pop() are a memory vs computation optimization feature as well as an attempt to mitigate looking over the same priorities ranges' worth of data too redundantly before expanding the search space.
If consuming the graph is not a desired behavior I believe Priority_Buffer(graph = dict(graph),...) copies the source data during initialization, this means you'd have two copies of the same data, however, Priority_Buffer's will on average always be decreasing... well that is unless you refill buffer['graph'] with an update({'sub_graph_<n>': ...}) before the main loop exits, which hint hint, allowing for such shenanigans on a loop is kinda why it's written the way it is ;-) though you'll want to refresh the buffer['GE_bound'] (or LE_bound) to ensure things are bubbling up again like they should.
Using the currently scripted methods you'll always get chunks size of buffer['buffer_size'] or less. The first found that are greater or equal to buffer['priority']['GE_bound'] (inverse for LE_bound) are returned as a chunk. If for example it didn't have enough on a given call of buffer.next() (implicitly called by loops), the search space would expand by buffer['step']['amount'] till either it reaches (or crosses) buffer['step']['GE_min'] or the buffer size is satisfied. There's a few other bits-n-bobs for exiting when graph is empty but that's the gist of it.
While it'll happily make mistakes on your behalf at the speed of "ohh sh...", I think it's a fair start (in that those within a range are prioritized, first-found $$=$$ first-popped), and close enough to what you're asking for that hacking it into something better is totally likely. This is probably a good point to pause and allow things to steep. When you're ready, here are a few extra tips:
• be careful with step['amount']'s direction; decrement (use negative numbers) when using GE_ related configurations, and increment when using LE_,
• _bounds should be -+1 (less or more by 1) than the total target priority['key_name'] priority value max/min to avoid having a bad time.
• the priority['GE_bound'], buffer_size, and other values should be played with because ideal settings will depend upon system resources available.
• for applying $$R$$ at some level in the execution stack check the other answer I linked in your question's comments for hints on adding first class function calls, I used Python lambdas there (quick-n-dirty), but functions and methods work similarly.
• try running the file as a script, eg. python priority_buffer.py, a few times to observe the behavior of the current sorting methods.
• If new to Python try, adding print("something -> {0}".format('value')) (replacing 'value' with something), lines anywhere that you want to dump some info during execution, or use a fancy IDE such as Atom (with a few plugins) to enable setting break points and step through parts that don't quite make sense.
Hope this was somewhat helpful in getting ya up-to speed with how one can develop an approach for solving such problems.
For even more on this subject, here are some links to Q&A and Advanced Usage; the content has been split out into posts hosted by GitHub Pages.
• You must write books and blogs. Your style is awesome!!! – user_1_1_1 Apr 3 '19 at 19:36
• The business with counter and c_max was not clear. Could you explain more? – user_1_1_1 Apr 3 '19 at 20:00
• Thanks, I'm glad I come across as mostly coherent, I'm not currently cool enough to keep a regular blog; tried it once but it felt like the good ship lifestyle on repeat for a month solid, which regardless of initial opinion I'm pretty sure anyone would be up for verity even if it's Jingle Cats... I've updated the answer with a bit more on c_max and the whys of it's importance within the last section. – S0AndS0 Apr 4 '19 at 6:48
• thanks, lastly the mechanics of priority buffer is a bit unclear. is it just a straightforward extension of priority queue or you have coded this data structure on your own. I could not find a mention of this data structure in any standard cs book. – user_1_1_1 Apr 4 '19 at 21:50
• It's two nested dictionaries, one with methods for prioritizing the other. I chose to use dictionaries because it's possible to print(buffer) and dump it's state (easy inspection) at any point. They're what I consider to be one of the more friendly Python objects to organize the data structure it seemed like you where describing. Near the end of the recent update you'll find I get into what the code is doing a bit more within the Q&A links, and with the Advanced Usage link I think I'm very close to a complete (though non-optimal) answer... which future revisions might sort out. – S0AndS0 Apr 5 '19 at 23:24
|
2020-01-22 11:52:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2905122935771942, "perplexity": 2168.4029922680297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606975.49/warc/CC-MAIN-20200122101729-20200122130729-00008.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/105606/is-there-a-habitability-zone-between-the-primary-and-secondary-stars-of-a-binary
|
# Is there a habitability zone between the primary and secondary stars of a binary star system?
Is there a habitability zone between the primary and secondary stars in a binary star system for a planet orbiting only the primary star at a distance less than that of the secondary star? If so, what is the greatest energy output of the secondary star (in terms of energy received by the planet) that would still permit the habitability zone?
• Assume the primary star is equivalent to Sol.
• Assume the planet is equivalent size and mass of the Earth.
Consider a 3D chart:
• The X-axis is distance of the secondary from the primary.
• The Y-axis is the energy output of the secondary
• The Z-axis is habitability (0-100% likelihood of a habitability zone)
I can easily assume that as X approaches infinity habitability approaches 100%. Further, I can assume that as Y approaches zero, habitability approaches 100%. It's the space in between that I'm having trouble understanding.
• You haven't provided enough information/said to ignore the stellar mechanics that would be necessary to determine the answer. – rek Feb 23 '18 at 4:54
• @rek Can you reprise your comment. because the phrase(s) "You haven't provided enough information/said to ignore the stellar mechanics" doesn't seem to make sense. I think there is an excellent point buried deep within it. – a4android Feb 23 '18 at 5:51
• I closed the question, so it could get re-opened with a clean slate. Perhaps you should edit your repose to comments into the main question? Either go with the last sentence of the 'response to comments' as the main question, or otherwise I was thinking go for a reality check where you say 'there are two sol-sized stars, is there any orbital configuration where an Earth-sized planet orbits in a habitable zone between them.' Edit it to re-open and I will vote to do so. – kingledion Feb 23 '18 at 11:40
• Also, check out this answer. – kingledion Feb 23 '18 at 11:41
• Related (I think; I'm having a little trouble understanding what you're asking for): worldbuilding.stackexchange.com/q/25166/28 – Monica Cellio Feb 23 '18 at 22:24
# Yes, there is a habitable zone.
## Known examples of possibly-habitable binaries
I have to disagree with StephenG's answer; we have data that indicates that this is possible for similar, Sun-like stars. I talked about this in an answer I wrote a few months ago; searching this exoplanet catalog, I found several systems that might be of interest:
• WASP-94, a pair of F-type main sequence stars, each with a close hot Jupiter orbiting it.
• HD 20781/HD 20782, two G-type stars, each with 1-2 planets (one at 1.3 AU orbiting HD 20782, two at <1 AU orbiting HD 20781).
• Kepler-132, a pair of G-type main sequence stars, although the structure of the system has been disputed.
• XO-2, two cool K-type stars, with a confirmed hot Jupiter around one star and two possible planets around the other.
The HD 20781/HD 20782 system has me quite excited. Both are G-type stars, and each star has at least one planet within 2 AU of it. The planets are all more massive than Earth, but that's immaterial; the important thing is that the binary stars have a separation of 9080 AU! That's enormous, and it's absolutely enough for there to be relatively little effect on each planet from the other star in the system.
Some things to note:
• HD 20782 b has a large eccentricity, and HD 20782 b and HD 20781 c also have larger eccentricities than normal, which could be due to the binarity of the system. The XO-2 system's planets seem to have smaller eccentricities, even though their separation is a mere 4600 AU.
• In most of these systems, both stars are extremely similar in spectral type, which seems like a good thing. They're relatively Sun-like, not active red dwarfs or hot massive stars.
• StephenG's requirement of a large separation is easily satisfied in several of these cases, by an order of magnitude or two. Given the inverse square law for flux, I would expect the contribution from the second star to be many orders of magnitude lower than the primary's; it's essentially zero.
## Calculating the habitable zone
I did some modeling (Python 3 code on Github) to give some numerical support to this answer, so I wrote a program that generates habitable zones around binary systems, with certain assumptions:
• Both stars are on the main sequence
• The stars' orbits have zero eccentricity
• Any exoplanet orbiting the stars won't be massive enough to influence the stars' orbits, and the system is stable
I defined the habitable zone as the region where water is liquid on the surface of a planet. In other words, the planet's effective temperature - not taking into account greenhouse effects - must be between 273.15 K and 373.15 K. The formula for the effective temperature of a planet in a binary system is $$T=\left(\frac{1-a}{4\sigma}(F_1+F_2)\right)^{1/4}$$ where $a$ is the albedo of the planet and $F_1$ and $F_2$ are the fluxes from the stars.
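The author's actual modeling code lives on GitHub; as a rough stand-in, here is a minimal sketch of the formula above (the constants and the function name are my own, chosen for illustration):

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.495978707e11      # astronomical unit, m

def effective_temperature(d1_au, d2_au, L1=L_SUN, L2=L_SUN, albedo=0.0):
    """T_eff of a planet at distances d1, d2 (AU) from stars of luminosity L1, L2."""
    F1 = L1 / (4.0 * np.pi * (d1_au * AU) ** 2)
    F2 = L2 / (4.0 * np.pi * (d2_au * AU) ** 2)
    return ((1.0 - albedo) * (F1 + F2) / (4.0 * SIGMA)) ** 0.25

print(effective_temperature(1.0, 1e9))   # lone Sun-like star at 1 AU: ~278 K
print(effective_temperature(1.0, 4.0))   # a twin star 5 AU from the primary adds only ~4 K
```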
Here are three basic plots, of a single Sun-like (G2V) star, two Sun-like stars separated by 2 AU, and two Sun-like stars separated by 5 AU. All assume a planetary albedo of 0. The habitable zone is shaded in black (the precise temperature is not shown):
The effects from the second star are apparent with the 2 AU separation, but not with the 5 AU separation (although I can confirm that they're there). The form of the effective temperature formula means temperature only varies as $T\propto F^{1/4}$, where $F$ is flux, and thanks to the inverse-square law, even a separation of 5 AU produces minor results.
Here's a plot where the separation is 9080 AU, as in the HD 20781/HD 20782 system:
The other star is far off my screen. Zooming out makes it impossible to see. For the purposes of habitability calculations, each star is on its own.
## Orbital stability
Now, a class of binaries I'm curious about are late-type dwarfs with relatively small separations (1-2 AU). Two M-type red dwarfs can orbit close together and still have their own individual habitable zones:
What I don't know is what the stable orbits are around these stars. I assume some stable orbits are possible for the above case, but I don't know the ranges, and would be interested to find out. Another interesting scenario is two K5V stars at 1 AU; their habitable zones are connected, but the zones of stable orbits may be much smaller:
# Yes, and right next door
Weigert and Holman, 1997 concludes that
The habitable zone for planets, as defined by Hart (1979), lies about 1.2–1.3 AU (1′′) from α Cen A. A similar zone may exist 0.73–0.74 AU (0.6 ′′) from α Cen B. From our investigations, it appears that planets in this habitable zone would be stable in the sense used here, at least for certain inclinations.
This is confirmed more recently and with more powerful software in Quarles and Lissauer, 2016:
Our simulations show that circumstellar planets (test particles), within the habitable zone of either α Cen A or α Cen B, remain in circumstellar orbit even with moderately high values of initial eccentricity or mutual inclination relative to the binary orbital plane
Reading through their paper, they simulated stability for > 1 billion years for particles in the habitable zones of both $\alpha$ Cen A and $\alpha$ Cen B (and also circumbinary orbits that would not be habitable).
So, as far as we know, there exists a habitable zone between the primary and secondary of the nearest (non-red dwarf) star to us. $\alpha$ Cen A is very similar to our sun (1.1 solar masses, 1.5 solar luminosities, spectral type G2V just like Sol). The luminosity of $\alpha$ Cen B is 0.5 times that of Sol, so the companion can be pretty bright in these circumstances.
# How close can they be?
To answer HDE's addendum/bounty, I fired up my trusty Rebound tool to find the closest orbits that suggest stability. I did a bunch of grid searches over various eccentricities, and the finding was that for the stellar masses below, eccentricity has relatively little effect (at least for low eccentricities < 0.1).
The computational demands of this problem proved to be much higher than in previous questions. I tried to test 10s of thousands of cases over 1 million years, integrating with a time step of 0.001 years (about 8 hours). I found some interesting cases and some generalizations about behavior, but take these answers with a grain of salt. 1 million years isn't enough to prove anything.
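For context, an assumed minimal REBOUND setup for a single grid point of such a search might look like the sketch below (this is not the author's script; the masses and distances are taken from the 0.5-solar-mass case that follows, and the short integration span is only for demonstration):

```python
import rebound

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.add(m=1.0)                                               # primary star
sim.add(m=0.5, a=5.5, e=0.0)                                 # companion star
sim.add(m=3.0e-6, a=1.0, e=0.0, primary=sim.particles[0])    # Earth-mass planet around the primary
sim.move_to_com()
sim.integrator = 'whfast'
sim.dt = 1e-3            # years, as in the text

sim.integrate(1.0e4)     # short demo span; the searches in the text ran to 1e6 years

# Crude survival check: is the planet still within a few AU of the primary?
p = sim.particles
r = ((p[2].x - p[0].x) ** 2 + (p[2].y - p[0].y) ** 2) ** 0.5
print("planet-primary separation after the run: {:.2f} AU".format(r))
```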
### Case: Two stars, both of 1 solar mass
Here we have some very interesting behaviour. Some planets will break out of their orbit and orbit the barycenter of the system. Starting with the companion 3.5 AU from the primary, the planet 1 AU from the primary, and all orbits with 0 eccentricity, the planet did a horseshoe orbit at ~0.87 AU from either star for a million years. It was actually very close to the setup in this question.
For the case of the companion star being n AU away from the primary, the effects on the planet are:
0 - 3 AU Planet is quickly ejected
3 - 4 AU Planet achieves an eccentric but stable orbit near the habitable zone
4 AU + Planet achieves a stable orbit outside the habitable zone
The real finding here is that for suns of equal size, a planet is likely to end up near the barycenter of the two suns. The planet also very quickly achieves stability in the equal-mass sun setup, whereas in the following examples, the orbits are chaotic for more than a million years. I would suggest that in order to get the planet in an orbit in the habitable zone of the one of the suns, you would need to add other planets to the mix.
### Case: Two stars, one of 1 solar mass, one a large red dwarf (M1V, m = 0.5 Sols)
In this case, there are a good variety of stable orbits once the companion is at least 5.5 AU away from the primary. I didn't find any orbits that were stable in the habitable zone, though. Stable orbits for the planet tended to start about 2.5 AU away from the main, in some sort of resonance with the companion. Unfortunately, my 1 year old powered off my computer before I could read the final results of the 8 hour grid search for stable outer orbits. That is what you get for writing to the console and not a file. Whose idea was it to make power buttons have LEDs anyways? Those things are toddler magnets.
For the case of the companion star being n AU away from the primary, the effects on the planet are:
0 - 3.5 AU Planet is quickly ejected
3.5 - 5.5 AU Planet enters eccentric orbit in vicinity of habitable zone. May be eventually ejected, unlikely to be stable in habitable zone.
5.5 AU + Planets in the habitable zone are pulled outwards into resonances with the companion star
As with the last simulation, planets tend to be pulled towards the barycenter. This suggests that additional planets maybe necessary to straighten out eccentricities. However, it is also worth noticing that this simulation is close to the ones cited above related to Alpha Centauri, and it does not replicate the results. So, perhaps an extra big grain of salt needs to be taken with this entire endeavor.
### Case: Two stars, one of 1 solar mass, one a small red dwarf (M6V, m = 0.1 Sols)
Beyond 2 AU from the primary star, the companion star is too small to immediately eject the planet from the system. However, almost all of the orbits I plotted remained unstable for 1 million years, implying that they will eventually lead to ejection (or collision with the primary star! which did happen in one case). The two important relationships appeared to be the distance of the barycenter of of the system from the main star, and orbital resonances between the companion star and the planet.
In general there were the following zones of interest based on distance between the primary and companion:
< 2 AU The planet is quickly ejected
2 - 4.5 AU The planet finds a somewhat stable, but highly eccentric orbit
4.5 - 5 AU The planet quickly enters a stable orbit at ~0.55 AU
5 - 9 AU The planet enters an eccentric orbit, and may stabilize in a resonant orbit with the companion
9 - 12 AU Same as above, but the barycenter is in the habitable zone, so a stable orbit there is probably impossible
12 + AU The planet enters an eccentric orbit, and may stabilize later (none of these did within 1 million years)
I did not find a single orbit of the tens of thousands tried that ended up stable in the habitable zone within 1 million years. However, the 5-9 AU and 12 + AU cases both contained eccentric orbits with roughly the correct semi-major axis, so it would be possible for these to stabilize out given enough time.
# In progress
• Thank you! I was really hoping you'd end up working on this. – HDE 226868 Aug 22 '18 at 1:28
Not for two large, similarly sized stars, maybe for one large star and one dwarf star (like a brown dwarf).
The former case would not allow a stable orbit to form for long enough for the planet to develop a reasonably consistent climate (essentially driven by a nearly constant level of solar energy) over a period large enough to develop life.
The case with the brown dwarf (a very dim type of star) can be thought of as a system with a very large Jupiter that's still much smaller than the Sun, and with a much, much lower energy output. Such a brown dwarf could, in principle, be far enough from the planet to not greatly affect it.
There is a quantity known as the effective temperature which lets us estimate the approximate temperature effect of a star on a planet. We can use this to relate the effect of the smaller star (the secondary) on the temperature of the planet (dominated by the primary, for stability).
We get a formula like this :
$$\frac {T_{sec}}{T_{prim}} = \left( \frac{L_{sec}D_{prim}^2}{L_{prim}D_{sec}^2} \right)^{\frac 1 4}$$
where the $L$ values are luminosity of the stars and the $D$ values are distance from the planet.
We want this to be a small number, like a percent or two at most for a reasonably stable climate.
So some very rough order of magnitude calculations :
Now a possible value for $L_{sec} \approx L_{prim} \times 10^{-3}$, and if we set $\frac {T_{sec}}{T_{prim}} \approx 2 \times 10^{-2}$ (about a 4% variation in temperature due to the secondary), we get:
$$D_{sec} \approx 79 D_{prim}$$
So the secondary has to be about 80 times further from the primary than the planet is. Both would be in roughly circular orbits at these ranges, and the brown dwarf could be of the order of about 50 Jupiter masses.
These are, of course, ballpark figures.
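A quick numeric check of that roughly 80-fold figure, using the formula above (my own arithmetic sketch, not StephenG's working):

```python
L_ratio = 1e-3                               # L_sec / L_prim
T_ratio = 2e-2                               # target T_sec / T_prim
D_ratio = (L_ratio ** 0.5) / T_ratio ** 2    # D_sec / D_prim from the formula above
print(D_ratio)                               # ~79
```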
• How would you define "large"? For two Sun-like stars - or maybe F-type stars, a bit more massive - there should be plenty of time for life to arise. It's only when you get to the massive B- and O-type stars that trouble begins. – HDE 226868 Feb 23 '18 at 5:44
• The OP is talking about two sun-like stars in a binary system with a planet in between (which I'd call large, although strictly speaking the Sun is not an unusually large star). My view (and I'm happy to be corrected), is that no arrangement of that sort is going to make a stable orbit (for the planet) while keeping temperatures stable as well. The OP doesn't seem to mean circumbinary systems which makes it difficult I think. – StephenG Feb 23 '18 at 6:40
• Question was edited. If anything, I think your answer fits it more, not less, but you may wish to review it. – Mołot Feb 23 '18 at 16:16
• This doesn’t seem right, stars larger than brown dwarves should be possible simply by placing them farther way. Proxima Centauri is larger than a brown dwarf, yet it’s orbit around Alpha Centauri does not preclude planets with a stable orbit, AFAIK. – RBarryYoung Aug 20 '18 at 19:42
• So apparently, even little Proxima Centauri has a planet in its habitable zone, even though it is in a ternary system (Alpha Centauri) with two other stars that are much larger than a single brown dwarf (they are approximately sun-sized), space.com/33834-discovery-of-planet-proxima-b.html – RBarryYoung Aug 20 '18 at 19:47
First, to help clarify the orbit in the question posted, according to arXiv:0705.3444 that planets orbiting a single star within a binary system, is called an ‘S-type’ orbit (while a circumbinary orbit is of ‘P-type). This paper S-Type and P-Type Habitability in Stellar Binary Systems: A Comprehensive Approach explores various scenarios and their effect and limitation on possible habitable zones for said S-Type planets, though in heavily math-based, less conceptual-based terms.
However, regarding the orbital stability of an S-Type planet, Solstation.org on this page referred to other papers when indicating that planets with orbits less than 1/5 of the closest approach of the secondary star are generally stable. It also mentioned that there are observed binaries in which dust rings and possibly planets appear to circle only one star, though far fewer exist in binaries with intermediate separations between 3-50 AU (below 3 AU there were some circumstellar P-Type rings, and it's above 50 AU that the S-type rings around a single star were observed).
So, the limitation on habitability is primarily set by the requirement of a steady insolation falling on the planet, not by the stability of its orbit. Also, if you use StephenG's equation as a ballpark estimate, the closest the secondary star can be to the primary for various luminosities can be determined (with equally ballpark accuracy) by keeping the value in parentheses constant. Therefore, as you change the luminosity of the secondary star L$_{sec}$ by a given factor, you have to change its distance (~80$D_{prim}$) by the square root of that factor (i.e. quadruple the luminosity of the secondary and it now has to be twice the original distance of about 80$D_{prim}$ to maintain the same relative insolation - up to some maximum limit below the primary’s luminosity, which I was not able to determine from the paper and is not stated directly in it). The paper did, though, show mathematically why equal stars could not support a habitable planet.
|
2020-02-26 20:19:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6801499724388123, "perplexity": 1132.4245784705663}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146485.15/warc/CC-MAIN-20200226181001-20200226211001-00072.warc.gz"}
|
http://physics.stackexchange.com/tags/phase-transition/hot
|
# Tag Info
## Hot answers tagged phase-transition
149
The premise is wrong. Not all materials exist in exactly three different states; this is just the simplest schema and is applicable for some simple molecular or ionic substances. Let's picture what happens to a substance if you start at low temperature, and add ever more heat. Solid At very low temperatures, there is virtually no thermal motion that ...
102
Energy is needed to convert water to steam. This is called the latent heat of vapourisation and for water it is 2.26MJ/kg. So to boil away 1kg (about a litre) of water at 100ºC the kettle would need to supply 2.26MJ. Assuming the kettle has a power of 1kW this would take 2260 seconds. Given the unexpected interest in this question let me expand a bit on ...
32
The ultimate answer to a "why" physics question is "because". Physics is about observing and measuring nature and then finding mathematical models that fit the measurements and predict new behaviors under different conditions. Because we have observed these four states of matter. we have formulated mathematical theories called thermodynamics and quantum ...
30
There are three phenomena that occur before vigorous boiling of water that produce sound. 1) Air dissolved in water on heating forms small air bubbles at the bottom of the container. These air bubbles get released from the bottom of the container on reaching a sufficient size. The process of release produces a sound of frequency ~ 100Hz. 2) On boiling, ...
24
I have read that true steam is clear (transparent) water vapor. According to this theory, the white "steam" you see is really a small cloud of condensed water vapor droplets, a fine mist in effect. So what you are seeing is not more steam, but more condensation and more mist. The speed with which the steam/vapor/mist rises and disperses may also change.
20
I'll give a very qualitative answer / overview. The classification 'first-order phase transition vs. second-order phase transition' is an old one, now replaced by the classification 'first-order phase transition vs. continuous phase transition'. The difference is that the latter includes divergences in 2nd derivatives of $F$ and above - so to answer your ...
17
Basically the existence of different states of matter has to do with Inter-molecular forces, Temperature of its surroundings and itself and the Density of the substance. This image below shows you how the transition between each states occur (called Phase transitions). These transitions occur based on the change in temperature of the substance Now if ...
15
Your description of critical temperature isn't quite right. If you increase the temperature of a liquid beyond the critical point, the atoms are moving so quickly that persistent structure fails to form and so you have something that behaves a lot like a very dense gas. Similarly, if you increase the pressure of a gas beyond the critical point, it becomes ...
14
The most immediate answer would seem to be that a great variety of different crystal phases can exist because their long-range order makes it possible to classify them based on the different symmetries of their lattice structure. Since the liquid (or amorphous solid) phase only has short-range order and the gaseous phase doesn't even have that, it seems ...
14
I will try to answer these questions from different views. Macroscopic view The "quantitative" rather than qualitative difference in a liquid-gas phase transition is due to the fact that the molecules arrangement does not change so much (there is no qualitative difference) but the value of the compressibility changes a lot (quantitative difference). This ...
12
This is one of those funny questions where the cart gets put before the horse. Matter doesn't "exist" in any state. It simply does what it does, in the way it does it. Humans, wishing to understand how different types of matter behave chose to create a system of three states. This choice is the key: the reason "matter exists in 3 states" is because we ...
12
Different people have different definitions of dynamical phase transition. At present, a widely accepted one is by Heyl et al. See their original paper Dynamical Quantum Phase Transitions in the Transverse Field Ising Model. Basically, it means some quantity (e.g., the fidelity) as a function of time is non-analytical at some critical times. See the cusps ...
11
Yes, of course, the freezing point will decrease by the pressure developed while part of the water freezes. But do not underestimate the pressures! In such an experiment easily some thousand bars may be developed. (Depends on the rigidity of the vessel and the volume of water) Here is a video showing how freezing water cracks a cast iron sphere. (...
11
Since neither of the answers given so far really answers the question, here's my $0.02: between convection (the flow of water of various temperatures around the kettle), and the fact that the heating element is at the bottom, the water is at various temperatures at various parts of the kettle at any time. Usually, the hottest is at the bottom, if the kettle ...
11
Not quite sure what you are asking, but I can explain the difference between the three common states of matter on a qualitative scale: Solid: molecules form bonds with neighboring molecules, very little of these bonds are broken at any given time. Liquid: molecules form bonds with neighboring molecules for most of the time, but there are enough energy for ...
10
If the metal pan was cool then you would expect to see water droplets staying in the same place once any original movement had dissipated. You would have a combination of cohesive forces within each water droplet and adhesive forces between the water and metal surfaces. With the metal having a temperature well above the boiling point of water, the water ...
9
Generalities on Conformal Invariance In two dimensions, a lot is known / conjectured about statistical models at criticality. For instance, at $T_c$, the spin configuration that you see will not only be self-similar (what others here have been calling "fractal") but actually fully conformally invariant (in the continum limit); that is, the probability ...
9
It's certainly possible for ice to sink in water under the right conditions. The diagram this section of Wikipedia's ice page will show you the conditions under which the various types of ice can form. Most of the "exotic" ones such as XII will form only at pressures greater than around 200MPa. These high-pressure forms are all denser than water, so they ...
9
A simple material will not undergo a liquid to solid transition as the temperature is raised. When you see this it means somthing more complicated than a simple phase transition is going on. In the example of egg white, what you are seeing is denaturation of the protein albumin. The heat causes the protein to lose its tertiary structure then form cross ...
9
Let's define temperature to be a measure the kinetic energy of the atom. A single atom has limited numbers of ways it can store energy. It can translate in X, Y or Z. It can't really rotate (well it does rotate, but it takes so little energy to make it rotate that we can ignore it). It can't vibrate. It does have electronic modes where adding energy can ...
9
As mentioned in the comments, this is an instance of supercooling. When you cool a liquid below its freezing point, the molecules are still moving around quite a lot and any two that stick together are likely to be broken up by a subsequent impact. Liquids freeze better when the molecules have something to latch onto -- either a block of the same ice they ...
9
Boiling is clearly not a surface phenomenon. But vaporising is. Boiling happens at all the points inside the liquid whereas when vaporising only the molecules at the surface escape into the space above. And it is true that a liquid boils when its saturated vapour pressure equals external (room) pressure. But it is not to be confused with vaporising. ...
9
Temperature is a measure of average kinetic energy. When you have a kettle of water at 100˚C, some of the water molecules will have more-than-average energy, and some will have less. The more-than-average molecules are the ones that will turn to steam, carrying off their energy and lowering the average (and thus the temperature) for the remaining water. ...
9
For a pure substance that can exist in the solid, liquid, and vapor states (i.e., wood is not in this category), let's assume that a closed container is half full of liquid and half full of vapor. As the temperature rises, the liquid expands and the liquid density falls. Also, as the temperature rises, the pressure in the container rises due to the vapor ...
9
The Earth has a liquid outer core, a solid mantle exterior to that, and a solid core interior to it! So that’s how come the Earth has the heaviest, densest elements at its core, and how we know its outer core is a liquid layer. Like all elements, whether iron is solid, liquid, gas or “other” depends on both the pressure and temperature of the iron. Iron, ...
The more-than-average molecules are the ones that will turn to steam, carrying off their energy and lowering the average (and thus the temperature) for the remaining water. ... 9 For a pure substance that can exist in the solid, liquid, and vapor states (i.e., wood is not in this category), let's assume that a closed container is half full of liquid and half full of vapor. As the temperature rises, the liquid expands and the liquid density falls. Also, as the temperature rises, the pressure in the container rises due to the vapor ... 9 The Earth has a liquid outer core, a solid mantle exterior to that, and a solid core interior to it! So that’s how come the Earth has the heaviest, densest elements at its core, and how we know its outer core is a liquid layer. Like all elements, whether iron is solid, liquid, gas or “other” depends on both the pressure and temperature of the iron. Iron, ... 8 In physics, critical behavior means the behavior in which there are no localized boundaries between phases. More quantitatively, the correlation length diverges (is infinite). For example, at the critical point of water, one sees clouds of vapor at all possible length scales. This is only possible because the relevant laws of physics around this point ... 8 Wikipedia quotes Other substances that expand on freezing are silicon, gallium, germanium, antimony, bismuth, plutonium and also chemical compounds that form spacious crystal lattices with tetrahedral coordination. EDIT:The same paragraph says silicon dioxide also exhibits this property. 8 In vacuum and with only the particles we know about the answer is no. Let's look at the symmetries we know exist in nature:$SU(3)$colour: confined, only colourless states exist below the QCD phase transition$SU(2)\times U(1)_Y$electroweak: Higgsed to$U(1)_{EM}$electromagnetism$U(1)_{EM}$: Here we have opportunity. See below...$U(1)_{B-L}\$: Global ...
8
Of course the name implies that time is involved somehow. People talk about dynamical thermal and quantum phase transitions and in one case you will rapidly change temperature, while in the other state defining parameter (say pressure or field etc.). We will consider thermal PT. Now what does it mean rapidly? Let us consider 2-d order phase transition as ...
7
No, the boundary doesn't suddenly "end" or "fade away", as the liquid-gas boundary fades away near the critical point. Instead, the sudden end indicates that many other things may happen in the region of these extremely high pressures and the diagram doesn't want to discuss those because they're outside the limits of interest of the author of the diagram. ...
|
2016-07-26 12:10:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5709378719329834, "perplexity": 727.356012146298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824853.47/warc/CC-MAIN-20160723071024-00130-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://3dprinting.stackexchange.com/questions/7556/first-time-assemble-of-anet-a6-printer-only-fan-works
|
# First time assemble of Anet A6 printer. Only fan works
I bought an Anet A6-L printer (after some research I discovered that this is not a genuine Anet A6) and assembled it. The first time I plugged it in, the fan (fan 2, I believe) is the only thing that turns on. Also, the LCD screen (LCD2004 5 button display) only had 1 cord, and 1 jack for the cord to plug in on the back of the screen, but 2 openings on the motherboard. The fan turns on for around 30 seconds, then turns off. I have the anet A6 printer, with the V1-5 motherboard. I have everything else (fans, motors, hotbed, power) plugged into the right ports. I haven't plugged in the micro SD (TF) card yet, I turned it on to test it.
I am using a modified part that replaces the windpipe/fan duct (white) and redirects the airflow around it and inwards from a ring around the tip, but that is the only modification I have made. I have not messed with the power supply (other than wiring it in). Same with the motherboard. I tested it also without the modification for the fan duct but it still has the same problem.
• Welcome to 3dPrinting.SE! – Pᴀᴜʟsᴛᴇʀ2 Dec 1 '18 at 18:15
• I am having almost the same problem w/ an Anet A6-L,I ordered an A8 but this is what came to me. I am positive it is missing one larger ribbon cable that connects between the 16 pin terminals on the display board & the main board has an empty 10 pin connection point. The instructions supplied via PDF show two ribbon cables side by side but that is clearly not this unit. It came from LOEE in Lexington Ky via Amazon. It came with the Level sensor and no instructions on where to hook it up.Amazon is sending me another unit. If it comes with the cable then fine but if not they are both going back. – tim kieffer Dec 29 '18 at 21:31
This definitely sounds like a problem with your wiring if you have a genuine Anet A6, the genuine A6 comes with a 12864 full graphic display. For sure, you are missing 1 flat ribbon cable (see below). Maybe this is causing the LCD not to light up and the SD card not functioning. As the "fan 2" is working, the board is powered by the power supply (this fan is using the constant power feed of the supply of the board). What you are actually describing as a boot sequence is the actual boot sequence of a printer. Once you power the printer, the fan that is cooling the cold end of the extruder should start spinning and keep spinning while the part cooling fan usually spins up but then powers down to standstill. While this is happening, the LCD should come alive and show the boot screen and finally the printer menu. If your screen is not showing any light, this implies that your screen is either broken, not powered or wrongly connected.
You could connect the printer over USB and control the printer from an external program, e.g. Pronterface, OctoPrint, Repetier-host, etc. and see if the printer works (then you know that the display is broken).
From a search on AliExpress I found that there are auto leveling printers sold with the Anet A6 branding that differ from the standard Anet A6 as written in Chinglish:
Different Auto leveling A6 and Normal A6:
1.The auto-leveling version uses a proximity sensor to detect the aluminum print bed where the normal version of the printer uses a micro-switch to detect the end of travel for the Z-Axis movement (vertical limits).
2.Auto leveling A6-L work with LCD2004 screen, A6 work with LCD12864 screen
The second remark from the quote above suggests that there is a 2004 LCD version that is only used by the Anet A6-L version (probably because they need a free pin for the auto leveling sensor). Such a display only has a single connection socket and needs to be connected to a single socket on the Anet printer board (named "LCD", not "J3")
Note that automatic bed levelling is not magic, and a little more complicated to start with, if you order a printer without an auto bed levelling sensor, you will be able to update to one later. E.g. from here:
It uses the "LED pin" which is an unused pin on the A8 (using the stock 5 button 2004 LCD). That is the third wire counted from below (where the red marker is on the cable). I simply spliced the cable and cut that wire. This will be the servo signal (yellow).
### If you have an Anet A6 adapted for auto levelling with a 2004 display
When the LCD does not light up, this could be caused by incorrect placement of the flat cable; be sure to use the correct socket on the printer board and take care of the orientation. Once you have checked this, and it does not work and you are in the possession of a multimeter, you could measure the voltage over the "VSS" and the "VDD" pin; also look into the voltage over the "VSS" and the "VE" (see pin layout below). If there is power, but no light, the LCD is probably defective. You could try to hook up a computer to the board using a USB cable and use a program like Pronterface to interface with the printer to see if it works at all; the display is not required for printing (i.e. if you can access the printer over USB).
### If you have a genuine Anet A6
It is advised to install an extra flat ribbon cable and check all the wires, please do check for correct polarity and correct installment.
Please do note that the installation of the Anet A6 LCD display (see this movie and the screenshot from this video below) requires 2 flat ribbon cables to function properly.
Sidemark on fan ducts:
Both the stock and most ring type ducts are not aerodynamically designed fan ducts. The stock fan converges too much, this narrowing of the duct causes extra pressure build-up which these fans are not able to handle, so they stall, causing a reduced flow output of the fan. The (semi) circular fan ducts usually also have a design problem. The (semi) circular ones all (but one that I have seen) have the same deficiency that the main passage area does not decrease when the duct loses air through a slot/ejector; this means that the velocity in the main ring decreases after each bleed slot! Note that these fans move air and do not build up a high pressure difference that is large enough to overcome the friction of those designs.
• Oscar, My screen only came with one flat ribbon cable. It also only has one port for it on the screen, but 2 ports on the motherboard. The image shows a different screen than I have. The screen I have has 5 buttons in a + shape. The one in the image has 2 buttons and a selection disc. My printer is an A6-L printer with auto leveling if that helps. – Jase Kaeberlein Dec 1 '18 at 21:35
• @JaseKaeberlein Please update your question to include such vital information, even add pictures. It sounds that you have an Anet A6 clone with a 5 button display. To be sure add if you have a 2004 LCD or a full graphical one with the jog dial, I suspect that you have the first. – 0scar Dec 1 '18 at 23:12
• I'll update my answer tomorrow, thanks for the additions! It appears you do not have a genuine Anet A6. – 0scar Dec 1 '18 at 23:55
• odd: on Aliexpress, there is a version as "2017 Anet A6-L" that claims to exist in 2 versions with the LCD 2004 screen as 5-button, the LCD 12864 as button dial. LCD 2004 is the cheaper model. – Trish Dec 2 '18 at 10:05
• I have done some research, and have confirmed that it is not a genuine A6. The instructions and motherboard say A6, as does the amazon recipt and seller. However, the frame is an A8 frame, as is the screen. I belive we are going to return the printer, and get it from a different seller. Thank you for your help. – Jase Kaeberlein Dec 3 '18 at 2:35
I also ordered an A8, but was sent an A6-L; after doing my homework, I am very happy. The A6-L is an upgraded A8. It uses the same board as the A6 but has an A8 display (5 button), which is why people have an extra LCD spot. The A6-L has a better frame (full front) and a better carriage setup (horizontal vs vertical, resulting in better weight distribution and allowing for higher builds). As far as fans go, the cooling fan only comes on when printing and told to (there is a setting in the menu; make sure the fan is on, and check the Cura settings for when and how much fan should come on). The auto leveler that comes with it says to plug into the Z limit switch (basically it replaces the Z switch altogether). Thingiverse has better fan/sensor holders you can print. Remember, Google/YouTube is your friend. I found out after lots of searching.
A simple google search answers everything. The A8 was the 1st printer released by Anet, followed by the A6. Check out links. Look at pictures and description and comparison. Or just Google it.
• A8= \$ cheapest, flimsy frame, off-balanced extruder carriage, uses belt to drive Z.
• A6= \$\$\$ MOST EXPENSIVE of the 3. Full frame, better balanced carriage, LCD screen w/ knob, no Z belt.
• A6-L= \$\$ Middle Cost, A6 frame, carriage, no Z belt, but A8 screen and buttons
### ANET A6-L
https://www.banggood.com/Anet-A6-L5-DIY-3D-Printer-Kit-With-Auto-leveling-220220250mm-Printing-Size-1_75mm-0_4mm-Nozzle-p-1209606.html?cur_warehouse=CN
### COMPARISION
https://pevly.com/anet-a8-vs-a6-vs-a3-vs-a2/
### AUTO LEVEL
• please cite your sources. – Trish Jan 3 '19 at 15:23
• Please edit your answer, and include the links that your searches revealed, as they could prove useful in backing up your claims, as it sounds a bit strange that the A6-L is an upgraded A8, as an A8-L would be the logical model number, for an upgrade. Also, please quote (or summarise) the relevant parts, in case of link death. Thanks. – Greenonline Jan 3 '19 at 16:48
|
2020-10-23 05:42:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27713051438331604, "perplexity": 2338.0975314840143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880656.25/warc/CC-MAIN-20201023043931-20201023073931-00138.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Bounded_Below_Real_Sequence
|
# Definition:Bounded Below Sequence/Real
This page is about Bounded Below Real Sequence. For other uses, see Bounded Below.
## Definition
Let $\sequence {x_n}$ be a real sequence.
Then $\sequence {x_n}$ is bounded below if and only if:
$\exists m \in \R: \forall i \in \N: m \le x_i$
### Unbounded Below
$\sequence {x_n}$ is unbounded below if and only if there exists no $m$ in $\R$ such that:
$\forall i \in \N: m \le x_i$
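For example, the real sequence $\sequence {x_n}$ defined by $x_n = \dfrac 1 n$ is bounded below: taking $m = 0$ gives $m \le x_i$ for all $i \in \N$. The sequence defined by $x_n = -n$, on the other hand, is unbounded below.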
|
2021-09-21 10:23:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.939628541469574, "perplexity": 681.3214922132245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00709.warc.gz"}
|
https://www.projecteuclid.org/euclid.aos/1176348265
|
## The Annals of Statistics
### Optimal Weights for Experimental Designs on Linearly Independent Support Points
#### Abstract
An explicit formula is derived to compute the $A$-optimal design weights on linearly independent regression vectors, for the mean parameters in a linear model with homoscedastic variances. The formula emerges as a special case of a general result which holds for a wide class of optimality criteria. There are close links to iterative algorithms for computing optimal weights.
#### Article information
Source
Ann. Statist., Volume 19, Number 3 (1991), 1614-1625.
Dates
First available in Project Euclid: 12 April 2007
https://projecteuclid.org/euclid.aos/1176348265
Digital Object Identifier
doi:10.1214/aos/1176348265
Mathematical Reviews number (MathSciNet)
MR1126341
Zentralblatt MATH identifier
0729.62063
|
2019-11-22 16:39:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3004477620124817, "perplexity": 1681.9468823580398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671363.79/warc/CC-MAIN-20191122143547-20191122172547-00362.warc.gz"}
|
https://stats.stackexchange.com/questions/126613/is-it-possible-for-x-and-y-to-be-marginally-normally-distributed-and-have
|
# Is it possible for $X$ and $Y$ to be marginally normally distributed and have $E[Y|X]$ be a nonlinear function of $X$? [duplicate]
Is this at all possible? What is the intuition for this?
• After writing an answer I noticed that I'd read "E(Y|X) be a nonlinear function of X" instead of "of X" - was my misreading actually the meaning that was intended? – Silverfish Dec 4 '14 at 14:11
• The bottom right image in Cardinal's answer to the duplicate provides a clear counterexample. The answer itself gives a constructive way to create such counterexamples (via copulas), giving some intuition about the situation. NB: $E[Y|X]$ is a function of $X$, not $Y$. – whuber Dec 4 '14 at 14:15
• See the example in the later part of the answer to this question. – Glen_b -Reinstate Monica Dec 4 '14 at 16:35
• Another dup: stats.stackexchange.com/questions/308775/… – kjetil b halvorsen Apr 28 '19 at 20:23
This is possible, here's one example that comes to mind.
Let $X \sim \mathcal{N}(0,1)$ and let $Y$ have a truncated standard normal distribution on $[0, \infty)$ if $X \geq 0$ and on $(-\infty, 0)$ if $X < 0$. Then the marginal distributions of $X$ and $Y$ are both standard normal, but $\mathbb{E}(Y|X)$ is a non-linear function of $X$.
In particular, if $X \geq 0$ then $Y$ has a half-normal distribution so has mean $\sqrt{\frac{2}{\pi}}$, and by symmetry if $X < 0$ then $Y$ has mean $-\sqrt{\frac{2}{\pi}}$. Overall $\mathbb{E}(Y|X)=\sqrt{\frac{2}{\pi}}$ for $X \geq 0$ and $-\sqrt{\frac{2}{\pi}}$ for $X < 0$ so is indeed a non-linear function of $X$.
My personal intuition for this choice was to think about "chunks of probability" of $X$ - then on each chunk, what conditional distribution could I set for $Y$ that would ensure both that (i) the marginal distribution of $Y$ is normal, (ii) the conditional mean of $Y$ is not a linear function $X$. The latter can be achieved easily if a discrete chunk of $X$'s probability distribution is taken and the same conditional distribution of $Y$ is used for all $X$ in that range. This means that $\mathbb{E}(Y|X)$ is the same for all $X$ in that interval, i.e. is just a constant function of $X$. So long as it is a different constant function of $X$ over a different interval of $X$, then the conditional mean must be a non-linear function of $X$ overall.
I went for the easiest case of splitting the distribution of $X$ up into two parts with probability of $\frac{1}{2}$ each. After that, it was clear that the half-normal distribution would be a good choice for the conditional distribution of $Y$ - it's just a matter of gluing two sides back together to get the entire normal distribution as the marginal. If I'd wanted to do something fancier, I could have cut $X$ up into many such chunks, possibly irregular, and selected appropriate truncated normals for the conditional distributions of $Y$ over each chunk. The question asks for the underlying intuition, and this is the thought process that resulted in my solution, but if you want to generalise more widely you are best to think about the copula.
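A quick Monte Carlo sketch of this construction (a sketch only, not part of the original answer; it assumes NumPy, and the seed and sample size are arbitrary) shows both marginals looking standard normal while the conditional mean of $Y$ flips between $\pm\sqrt{\frac{2}{\pi}} \approx \pm 0.798$:
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_normal(n)
# Y is half-normal on [0, inf) when X >= 0, and its mirror image when X < 0
y = np.abs(rng.standard_normal(n)) * np.where(x >= 0, 1.0, -1.0)

print(x.mean(), x.std(), y.mean(), y.std())   # all close to 0, 1, 0, 1
print(y[x >= 0].mean(), y[x < 0].mean())      # close to +0.798 and -0.798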
• This is Example 3 in Moderator cardinal's answer (attributed to an answer of mine though I lay no claim to being the first one to come up with the idea) to the duplicate question pointed out by Moderator whuber. Also, may I suggest that you amend your last paragraph to clarify that it is $E[Y\mid X]$ that has value $\pm\sqrt{\frac{2}{\pi}}$ according as $X$ is positive or negative and so is a nonlinear function of $X$? – Dilip Sarwate Dec 4 '14 at 14:46
• @dilip Yes I noticed it matched Example 3 when whuber marked the duplicate. I have tried to clarify as you suggested. Since the question asked for "intuition" I have attempted to explain the thought process, though I'm unsure how successful this has been! – Silverfish Dec 4 '14 at 15:24
• It is interesting to compare $E[Y\mid X]$, the optimum (not necessarily linear) minimum-mean-square-error estimator for $Y$ given $X$ with the linear minimum-mean-square-error estimator for $Y$ given $X$. For jointly normal random variables, the two estimators are the same, while in this instance they are not. – Dilip Sarwate Dec 4 '14 at 16:13
|
2020-06-02 11:25:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7859281301498413, "perplexity": 256.9921126655615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347424174.72/warc/CC-MAIN-20200602100039-20200602130039-00568.warc.gz"}
|
http://mathhelpforum.com/number-theory/126342-mod-proof-primes-print.html
|
# Mod proof with primes
• January 30th 2010, 05:37 PM
Mod proof with primes
If p is greater than or equal to 5 and p is prime, prove that [p]=[1] or [p]=[5] in Z6.
I think that we will have to consider cases, that is,
p=6q, p=6q+1, p=6q+2, p=6q+3, p=6q+4, p=6q+5....
Since p is prime, we have p divides p and 1 divides p....Also, since p is greater than or equal to 5, it must be odd, i.e. like 2k+1, but I don't know if this is useful.
I really don't know what I need to show this....
• January 30th 2010, 06:10 PM
tonio
Quote:
If p is greater than or equal to 5 and p is prime, prove that [p]=[1] or [p]=[5] in Z6.
I think that we will have to consider cases, that is,
p=6q, p=6q+1, p=6q+2, p=6q+3, p=6q+4, p=6q+5....
Since p is prime, we have p divides p and 1 divides p....Also, since p is greater than or equal to 5, it must be odd, i.e. like 2k+1, but I don't know if this is useful.
I really don't know what I need to show this....
If $[p]=0,2,4\!\!\!\pmod 6$ then p is even, if...etc.
Tonio
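To see the hint play out numerically, here is a tiny check (a sketch, not from the thread; it assumes Python with SymPy installed): classes 0, 2 and 4 mod 6 are even, and class 3 mod 6 is divisible by 3, so every prime p greater than or equal to 5 must land in class 1 or 5.
from sympy import primerange

# Every prime from 5 up to the bound is congruent to 1 or 5 modulo 6.
assert all(p % 6 in (1, 5) for p in primerange(5, 100_000))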
• February 1st 2010, 06:32 PM
Deepu
Quote:
|
2016-08-28 07:43:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7432106733322144, "perplexity": 848.0308212584774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982935857.56/warc/CC-MAIN-20160823200855-00093-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/differential-equation.290955/
|
# Differential Equation
1. Feb 8, 2009
### ganondorf29
1. The problem statement, all variables and given/known data
Solve the inital value first order linear differential equation
y'=y+x y(0) = 2
2. Relevant equations
3. The attempt at a solution
y'-y=x
That's as far as I got. I'm not sure how to approach this. I've looked through my notes and book, and I don't have any examples that are similar to this. I looked online and it said something about raising the coefficient of y to e, but I'm not sure what to do after.
2. Feb 8, 2009
### MathematicalPhysicist
You should multiply by an exponential factor.
y'-y=x
exp(-x)y'-exp(-x)y=exp(-x)x
(exp(-x)y)'=exp(-x)x
Usually the exponential factor is of the form:
if y'+f(x)y=g(x)
then the exponential factor is:
u=exp($$\int f(x)dx$$)
Take the integral of xexp(-x) by parts, and then your'e done.
3. Feb 8, 2009
### bsodmike
Hi; I tried using an integration factor, and using the initial value provided (y(0)=2), I arrived at a particular solution of $$y=x-1+3e^{-x}$$.
I just saw loop's reply above. The integration factor I obtained was $$e^x$$.
4. Feb 9, 2009
### HallsofIvy
Staff Emeritus
For the linear equation y'+ f(x)y= g(x), an integrating factor is a function m(x) such that multiplying by it, m(x)y'+ m(x)f(x)y= m(x)g(x), reduces the left side to a single derivative: (m(x)y)'. Since, by the product rule, (m(x)y)'= my'+ m'y, this requires my'+ m'y= my'+ mfy, so we must have m'= m(x)f(x) which is, itself, a separable differential equation: dm/m= f(x)dx so $\ln(m)= \int f(x)dx$ and so $m(x)= e^{\int f(x)dx}$. For this particular problem f(x) is the constant -1 so your integrating factor is $e^{-x}$, not $e^x$. Multiplying the equation by $e^{-x}$, we have $e^{-x}y'- e^{-x}y= (e^{-x}y)'= xe^{-x}$. Integrating both sides of that (the left side by parts, as loop quantum gravity said) you get $e^{-x}y= -(x+1)e^{-x}+ C$ or $y= -x- 1+ Ce^x$ and, since y(0)= 2, 2= -1+ C and C= 3.
$y= -x-1+ 3e^x$
Notice that if $y= x- 1+ 3e^{-x}$ then $y'= 1- 3e^{-x}$ while $y+ x= x- 1+ 3e^{-x}+ x= 2x- 1+ 3e^{-x}$ so your y does NOT satisfy the differential equation.
Last edited: Feb 9, 2009
5. Feb 9, 2009
### bsodmike
Thanks for pointing out that mistake Ivy...
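For anyone wanting to double-check the closed form, a short verification sketch (not from the thread; it assumes SymPy) confirms $y = -x - 1 + 3e^x$ solves $y' = y + x$ with $y(0) = 2$:
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y' = y + x with y(0) = 2 and compare against the hand-derived answer.
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x) + x), y(x), ics={y(0): 2}).rhs
assert sp.simplify(sol - (-x - 1 + 3 * sp.exp(x))) == 0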
|
2017-10-23 20:58:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9389969706535339, "perplexity": 1076.5983167906584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826642.70/warc/CC-MAIN-20171023202120-20171023222120-00451.warc.gz"}
|
https://mht.technology/post/content-aware-resize/
|
## mht.technology
A blog about computer science, programming, and whatnot.
# Content Aware Image Resize
February 13, 2017
Content aware image resizing, liquid image resizing, retargeting, or seam carving, refers to an image resizing technique where one can insert or remove seams, or “paths of least importance”, in order to shrink or grow the image. I was introduced to the concept by a YouTube video by Shai Avidan and Ariel Shamir.
In this blog post, I’ll go through a simple proof-of-concept implementation of content aware image resizing, naturally in Rust :)
For our sample image, I simply searched1 for "sample image", and got back this2:
# Sketching out a top down approach
Let’s start with some brainstorming. I imagine the library to be used like this:
/// caller.rs
// Resize to a known size?
image.resize_to(car::Dimensions::Absolute(800, 580));
// or remove 20 rows?
image.resize_to(car::Dimensions::Relative(0, -20));
// Maybe show the image in a window?
car::show_image(&image);
// or save to disk?
image.save("resized.jpeg");
The most important functions in lib.rs could look something like this:
/// lib.rs
pub fn load_image(path: Path) -> Image {
// We'll forget about error handling for now :)
Image {
}
}
impl Image {
pub fn resize_to(&mut self, dimens: Dimensions) {
// How many columns and rows do we need to insert/remove?
let (mut xs, mut ys) = self.size_diffs(dimens);
// When we want to add columns and rows, we would like
// to always pick the path with the lowest score, no
// matter if it's a row or a column.
while xs != 0 && ys != 0 {
let best_horizontal = image.best_horizontal_path();
let best_vertical = image.best_vertical_path();
// Insert the best
if best_horizontal.score < best_vertical.score {
self.handle_path(best_horizontal, &mut xs);
} else {
self.handle_path(best_vertical, &mut ys);
}
}
// Insert the rest in either direction.
while xs != 0 {
let path = image.best_horizontal_path();
self.handle_path(path, &mut xs);
}
while ys != 0 {
let path = image.best_vertical_path();
self.handle_path(path, &mut ys);
}
}
}
This gives us some idea of how to approach writing the system. We need to load an image, we need to find these seams, or paths, and we need to handle removing such a path from the image. In addition, we would perhaps like to be able to see our result.
Let’s do the image loading first, so we know what kind of API we’re working with.
## image
The image library from the Piston developers seems useful, so we’ll add image = "0.12" to our Cargo.toml. A quick search in the docs is all that it takes for us to write the image loading:
struct Image {
inner: image::DynamicImage,
}
impl Image {
pub fn load_image(path: &Path) -> Image {
Image {
inner: image::open(path).unwrap()
}
}
}
A natural next step is figuring out how to get the gradient magnitudes from a image::DynamicImage. The image crate doesn’t provide a way to do this directly, but the imageproc crate does: imageproc::gradients::sobel_gradients. Here however, we run into trouble3. The sobel_gradient function takes an 8-bit grayscale image, and returns a 16-bit grayscale image. The image we have loaded is an RGB image with 8-bits per channel, so we’ll have to decompose the channels, convert the three channels into separate grayscale images, compute the gradients of the three component images, and then merge the gradients together into one image, in which we will do the path searching.
Is this elegant? No. Does it work? Maybe :)
type GradientBuffer = image::ImageBuffer<image::Luma<u16>, Vec<u16>>;
impl Image {
pub fn load_image(path: &Path) -> Image {
Image {
inner: image::open(path).unwrap()
}
}
fn gradient_magnitude(&self) -> GradientBuffer {
// We'll assume RGB
// (the helper calls below are assumed: Sobel each 8-bit channel, then sum the 16-bit results)
let (red, green, blue) = decompose(&self.inner);
let red = imageproc::gradients::sobel_gradients(red.as_luma8().unwrap());
let green = imageproc::gradients::sobel_gradients(green.as_luma8().unwrap());
let blue = imageproc::gradients::sobel_gradients(blue.as_luma8().unwrap());
let (w, h) = (red.width(), red.height());
let mut container = Vec::with_capacity((w * h) as usize);
for ((r, g), b) in red.pixels().zip(green.pixels()).zip(blue.pixels()) {
container.push(r[0] + g[0] + b[0]);
}
image::ImageBuffer::from_raw(w, h, container).unwrap()
}
}
fn decompose(image: &image::DynamicImage) -> (image::DynamicImage,
image::DynamicImage,
image::DynamicImage) {
let w = image.width();
let h = image.height();
let mut red = image::DynamicImage::new_luma8(w, h);
let mut green = image::DynamicImage::new_luma8(w, h);
let mut blue = image::DynamicImage::new_luma8(w, h);
for (x, y, pixel) in image.pixels() {
let r = pixel[0];
let g = pixel[1];
let b = pixel[2];
red.put_pixel(x, y, *image::Rgba::from_slice(&[r, r, r, 255]));
green.put_pixel(x, y, *image::Rgba::from_slice(&[g, g, g, 255]));
blue.put_pixel(x, y, *image::Rgba::from_slice(&[b, b, b, 255]));
}
(red, green, blue)
}
When run, Image::gradient_magnitude takes our bird image, and returns this:
## The path of least resistance
Now we have to implement the arguably hardest part of the program: the DP algorithm to find the path of least resistance. Let’s take a quick look at how this will work out. For simplicity's sake, we’ll only look at the case where we find a vertical path. Imagine the table below being the gradient image of a 6x6 image.
$$G = \begin{bmatrix} 1 & 4 & 3 & 4 & 2 & 1\\ 2 & 2 & 3 & 5 & 3 & 2\\ 1 & 4 & 5 & 5 & 1 & 2\\ 4 & 4 & 3 & 1 & 5 & 3\\ 5 & 3 & 2 & 2 & 3 & 1\\ 3 & 1 & 4 & 4 & 1 & 1 \end{bmatrix}$$
The point of the algorithm is to find a path $P=p_1 \dots\ p_6$ from one of the top cells $G_{1i}$ to one of the bottom cells $G_{6j}$, such that we minimize $\sum_{1 \leq i \leq 6} p_i$. This can be done by creating a new table $S$ using the following recurrence relation (ignoring boundaries):
$$S_{6i} = G_{6i}\\ S_{ji} = G_{ji} + \min(S_{j + 1, i - 1}, S_{j + 1, i}, S_{j + 1, i + 1})$$
That is, each cell in $S$ is the minimum sum from that cell to a cell on the bottom. Every cell selects the smallest of the three cells below it in the table to be the next cell in the path. When we have completed $S$, we simply select the smallest number in the top row to be our start.
Let’s find $S$:
$$S^{(1)} = \begin{bmatrix} - & - & - & - & - & -\\ - & - & - & - & - & -\\ - & - & - & - & - & -\\ - & - & - & - & - & -\\ - & - & - & - & - & -\\ 3 & 1 & 4 & 4 & 1 & 1 \end{bmatrix} \hspace{1cm} S^{(2)} = \begin{bmatrix} - & - & - & - & - & -\\ - & - & - & - & - & -\\ - & - & - & - & - & -\\ - & - & - & - & - & -\\ 6 & 4 & 3 & 3 & 4 & 2\\ 3 & 1 & 4 & 4 & 1 & 1 \end{bmatrix}$$ $$S^{(3)} = \begin{bmatrix} - & - & - & - & - & -\\ - & - & - & - & - & -\\ - & - & - & - & - & -\\ 8 & 7 & 6 & 4 & 7 & 5\\ 6 & 4 & 3 & 3 & 4 & 2\\ 3 & 1 & 4 & 4 & 1 & 1 \end{bmatrix} \hspace{1cm} S^{(4)} = \begin{bmatrix} - & - & - & - & - & -\\ - & - & - & - & - & -\\ 8 & 10 & 9 & 9 & 5 & 7\\ 8 & 7 & 6 & 4 & 7 & 5\\ 6 & 4 & 3 & 3 & 4 & 2\\ 3 & 1 & 4 & 4 & 1 & 1 \end{bmatrix}$$ $$S^{(5)} = \begin{bmatrix} - & - & - & - & - & -\\ 10 & 10 & 12 & 10 & 8 & 7\\ 8 & 10 & 9 & 9 & 5 & 7\\ 8 & 7 & 6 & 4 & 7 & 5\\ 6 & 4 & 3 & 3 & 4 & 2\\ 3 & 1 & 4 & 4 & 1 & 1 \end{bmatrix} \hspace{1cm} S^{(6)} = \begin{bmatrix} 11 & 14 & 13 & 12 & 9 & \textbf{8}\\ 10 & 10 & 12 & 10 & 8 & \textbf{7}\\ 8 & 10 & 9 & 9 & \textbf{5} & 7\\ 8 & 7 & 6 & \textbf{4} & 7 & 5\\ 6 & 4 & 3 & \textbf{3} & 4 & 2\\ 3 & 1 & 4 & 4 & \textbf{1} & 1 \end{bmatrix}$$
And there it is! We can see that there is a path which sums to only 8, and that the path starts in the upper right corner. In order to find the path, we could have saved which way we went for each cell (left, down, or right), but we don’t have to: we can simply choose the minimum child of each cell, because the cells in $S$ say how cheap the best path from that cell to a bottom cell is.
Also note that there are two paths that sum to 8 (the two bottom cells differ in the two paths).
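As a quick cross-check of the worked example, here is the same recurrence in a few lines of plain Python (a sketch, not part of the original post):
# The 6x6 gradient table G from above; S is filled bottom-up.
G = [[1, 4, 3, 4, 2, 1],
     [2, 2, 3, 5, 3, 2],
     [1, 4, 5, 5, 1, 2],
     [4, 4, 3, 1, 5, 3],
     [5, 3, 2, 2, 3, 1],
     [3, 1, 4, 4, 1, 1]]
S = [row[:] for row in G]
for j in range(len(G) - 2, -1, -1):
    for i in range(len(G[0])):
        lo, hi = max(i - 1, 0), min(i + 2, len(G[0]))
        S[j][i] += min(S[j + 1][lo:hi])
print(min(S[0]))  # 8: the cost of the cheapest vertical seam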
### Implementation
Since we are just prototyping we will do the simplest thing. We’ll make a struct with an array for the table, and just for loop our way through the algorithm.
struct DPTable {
width: usize,
height: usize,
table: Vec<u16>,
}
impl DPTable {
// Assumed constructor name: build the DP table from a gradient buffer.
fn from_gradient_buffer(gradient: &GradientBuffer) -> DPTable {
let dims = gradient.dimensions();
let w = dims.0 as usize;
let h = dims.1 as usize;
let mut table = DPTable {
width: w,
height: h,
table: vec![0; w * h],
};
// return gradient[h][w], save us some typing
let get = |w, h| gradient.get_pixel(w as u32, h as u32)[0];
// Initialize bottom row
for i in 0..w {
let px = get(i, h - 1);
table.set(i, h - 1, px)
}
// For each cell in row j, add the smallest of the three cells in the
// row below. Special case the end columns
for row in (0..h - 1).rev() {
for col in 1..w - 1 {
let l = table.get(col - 1, row + 1);
let m = table.get(col , row + 1);
let r = table.get(col + 1, row + 1);
table.set(col, row, get(col, row) + min(min(l, m), r));
}
// special case far left and far right:
let left = get(0, row) + min(table.get(0, row + 1), table.get(1, row + 1));
table.set(0, row, left);
let right = get(w - 1, row) + min(table.get(w - 1, row + 1), table.get(w - 2, row + 1));
table.set(w - 1, row, right);
}
table
}
}
After running, we can convert the DPTable back to a GradientBuffer, and write it to a file. The pixels in the image below are the path weights divided by 128.
The image can be interpreted as follows: white pixels are cells that have a large sum from it to the bottom. These pixels has so much detail (change of color) around it (which we would like to preserve) so the gradient, which tells something about the rate of change, is large. Since the path finding algorithm will search for the smallest sum, which here is the “darkest path”, the algorithm will try its best to avoid these pixels. That is, the white parts in the gradient image are the most distinct parts.
## Finding the path
Now that we have the entire table, finding the best path is easy: it’s just a matter of searching through the upper row and creating a vec of indices, by always choosing the smallest child:
impl DPTable {
fn path_start_index(&self) -> usize {
// Has FP Gone Too Far?!
self.table.iter()
.take(self.width)
.enumerate()
.map(|(i, n)| (n, i))
.min()
.map(|(_, i)| i)
.unwrap()
}
}
struct Path {
indices: Vec<usize>,
}
impl Path {
pub fn from_dp_table(table: &DPTable) -> Self {
let mut v = Vec::with_capacity(table.height);
let mut col: usize = table.path_start_index();
v.push(col);
for row in 1..table.height {
// Leftmost, no child to the left
if col == 0 {
let m = table.get(col, row);
let r = table.get(col + 1, row);
if m > r {
col += 1;
}
// Rightmost, no child to the right
} else if col == table.width - 1 {
let l = table.get(col - 1, row);
let m = table.get(col, row);
if l < m {
col -= 1;
}
} else {
let l = table.get(col - 1, row);
let m = table.get(col, row);
let r = table.get(col + 1, row);
let minimum = min(min(l, m), r);
if minimum == l {
col -= 1;
} else if minimum == r {
col += 1;
}
}
v.push(col + row * table.width);
}
Path {
indices: v
}
}
}
In order to see if the paths selected are at least plausible, I generated 10 paths, and colored them yellow:
Looks plausible to me!
## Removal
The only thing remaining now is to remove the path instead of coloring it yellow. Since we simply want to get something to work, we could do this in a pretty simple way: get the raw bytes from the image, and copy the intervals between the indices we want to remove into a new array, from which we create a new image.
impl Image {
fn remove_path(&mut self, path: Path) {
let image_buffer = self.inner.to_rgb();
let (w, h) = image_buffer.dimensions();
let container = image_buffer.into_raw();
let mut new_pixels = vec![];
let mut path = path.indices.iter();
let mut i = 0;
while let Some(&index) = path.next() {
new_pixels.extend(&container[i..index * 3]);
i = (index + 1) * 3;
}
new_pixels.extend(&container[i..]);
let ib = image::ImageBuffer::from_raw(w - 1, h, new_pixels).unwrap();
self.inner = image::DynamicImage::ImageRgb8(ib);
}
}
Finally, the time has come. Now we can remove a line from an image, or we could loop, and remove, say, 200 lines:
let mut image = Image::load_image(path::Path::new("sample-image.jpg"));
for _ in 0..200 {
// Assumed: rebuild the DP table from the current gradient before each removal.
let table = DPTable::from_gradient_buffer(&image.gradient_magnitude());
let path = Path::from_dp_table(&table);
image.remove_path(path);
}
However, we can see that the algorithm has removed quite a lot of the right side of the image, that is, the image is more or less cropped, which was exactly one of the problems that we would like to solve! A quick and somewhat dirty fix to this is to simply alter the gradient a little, by explicitly setting the borders to some large number, say 100.
There are quite a few artifacts here, which makes the end result a little less satisfactory. The bird however is almost untouched, and still looks great (to me). You could also argue that we have destroyed all sense of image composition in the process of making this image only slightly smaller. To this I will say …. uum…. yes.
## Seeing is believing
Saving the images to a file and looking at it is kind of cool, but it isn’t resize-window-live-update cool! As a final effort, let’s try to hack something together.
First, we need to be able to load, get, and resize an image outside of the crate. We’ll try to make something like our initial plan:
extern crate content_aware_resize;
use content_aware_resize as car;
fn main() {
image.resize_to(car::Dimensions::Relative(-1, 0));
let data: &[u8] = image.get_image_data();
// Somehow show this data in a window
}
We start simple, by only adding exactly what we need, and taking shortcuts where we can.
pub enum Dimensions {
Relative(isize, isize),
}
...
impl Image {
fn size_difference(&self, dims: Dimensions) -> (isize, isize) {
match dims {
Dimensions::Relative(x, y) => {
// How many columns and rows to remove (positive when shrinking),
// which is how resize_to below counts them down.
(-x, -y)
}
}
}
pub fn resize_to(&mut self, dimensions: Dimensions) {
let (mut xs, mut _ys) = self.size_difference(dimensions);
// Only horizontal downsize for now
if xs < 0 { panic!("Only downsizing is supported.") }
if _ys != 0 { panic!("Only horizontal resizing is supported.") }
while xs > 0 {
// Assumed: recompute the DP table from the current gradient each iteration.
let table = DPTable::from_gradient_buffer(&self.gradient_magnitude());
let path = Path::from_dp_table(&table);
self.remove_path(path);
xs -= 1;
}
}
pub fn get_image_data(&self) -> &[u8] {
self.inner.as_rgb8().unwrap()
}
}
Just a little copy-paste!
Now, maybe we want the resizable window. We can start a new project, include the library crate, and use, say, sdl2 to get something up fast.
extern crate content_aware_resize;
extern crate sdl2;
use content_aware_resize as car;
use sdl2::rect::Rect;
use sdl2::event::{Event, WindowEvent};
use sdl2::keyboard::Keycode;
use std::path::Path;
fn main() {
// Assumed: load the image just like before.
let mut image = car::Image::load_image(Path::new("sample-image.jpg"));
let (mut w, h) = image.dimmensions();
// Setup sdl2 stuff, and get a window
let sdl_ctx = sdl2::init().unwrap();
let video = sdl_ctx.video().unwrap();
let window = video.window("Context Aware Resize", w, h)
.position_centered()
.opengl()
.resizable()
.build()
.unwrap();
let mut renderer = window.renderer().build().unwrap();
// Convenience function to update texture with a resized image
let update_texture = |renderer: &mut sdl2::render::Renderer, image: &car::Image| {
let (w, h) = image.dimmensions();
let pixel_format = sdl2::pixels::PixelFormatEnum::RGB24;
let mut tex = renderer.create_texture_static(pixel_format, w, h).unwrap();
let data = image.get_image_data();
let pitch = w * 3;
tex.update(None, data, pitch as usize).unwrap();
tex
};
let mut texture = update_texture(&mut renderer, &image);
let mut event_pump = sdl_ctx.event_pump().unwrap();
'running: loop {
for event in event_pump.poll_iter() {
// Handle exit and resize events
match event {
Event::Quit {..}
| Event::KeyDown { keycode: Some(Keycode::Escape), .. } => { break 'running },
Event::Window {win_event: WindowEvent::Resized(new_w, _h), .. } => {
// Find out how many pixels we sized down, and scale down
// the image accordingly
let x_diff = new_w as isize - w as isize;
if x_diff < 0 {
image.resize_to(car::Dimensions::Relative(x_diff, 0));
}
w = new_w as u32;
texture = update_texture(&mut renderer, &image);
},
_ => {}
}
}
// Clear, copy, and present.
renderer.clear();
renderer.copy(&texture, None, Some(Rect::new(0, 0, w, h))).unwrap();
renderer.present();
}
}
And that’s it. A day's work, with only very little knowledge of sdl2, image, and blog post writing. I hope you enjoyed it, if only just a little bit :)
1. Somehow, duckduckgoed doesn’t work as well as googled when used as a verb.
2. http://imgsv.imaging.nikon.com/lineup/lens/zoom/normalzoom/af-s_dx_18-140mmf_35-56g_ed_vr/img/sample/sample1_l.jpg
3. I’d like to know if there is an easier way to do this! In addition, saving the resulting gradient is seemingly not possible at the moment, as the function returns an ImageBuffer over u16, while ImageBuffer::save requires the underlying data to be u8. I also couldn’t figure out how to create a DynamicImage (which also has a ::save, with a slightly cleaner interface) from an ImageBuffer, but this might be possible.
|
2018-12-15 15:00:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37772268056869507, "perplexity": 5859.522639092862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.91/warc/CC-MAIN-20181215131038-20181215153038-00295.warc.gz"}
|
https://www.physicsforums.com/threads/even-the-conservatives-have-turned-on-bush.90511/
|
# News Even the conservatives have turned on Bush
1. Sep 23, 2005
### Ivan Seeking
Staff Emeritus
A massive federal budget deficit
http://www.pbs.org/newshour/bb/political_wrap/july-dec05/sb_9-23.html [Broken]
Last edited by a moderator: May 2, 2017
2. Sep 23, 2005
### pattylou
But... but.... but.... he gave me a tax break!
Last edited by a moderator: May 2, 2017
3. Sep 23, 2005
### SOS2008
http://www.washingtonpost.com/wp-dyn/content/article/2005/09/20/AR2005092001704.html
I've seen conflicting news reports on this, such as what percent of Americans are fiscal conservatives (I understand they are all here in Arizona) and how much clout they really have. What bothers me is not the spending of money on relief efforts--that's obviously needed, but rather the existing debt due to tax cuts, the invasion of Iraq, and pork spending in bills such as the highway bill.
In the meantime Bush supporters are accusing Dems of the "big government" spin, while claiming Katrina relief is the conservatives' vision of how to fight the war on poverty. Right.
4. Sep 23, 2005
### pattylou
There were lots of headlines on newsgoogle, I found the statistic quoted above amazing. I didn't realize party affiliation could change so radically in the course of a year.
Same source:
Last edited by a moderator: May 2, 2017
5. Sep 23, 2005
### Ivan Seeking
Staff Emeritus
A quote from Washington Week, tonight.
From a source close to the Bush Admin: " We are now in the post Bush Republican era"
But we still have all of those hypocrites in Congress...
6. Sep 23, 2005
### SOS2008
I don't see Bush recovering unless things really turn around in Iraq. At the same time, Republicans may swing in upcoming elections (like Dems did regarding terrorism), but I don't see people changing their party affiliation in large number. As for the increased interest in jobs and the economy--high time! These issues were so over-shadowed by fear mongering about terrorism and gay marriage before now (I can't imagine why).
7. Sep 23, 2005
### Ivan Seeking
Staff Emeritus
Yes, my brother-in-law had his $600 check stuck to his refrigerator for months. It's so nice to see his eight and five year old kids already supporting their parents. What great kids! 8. Sep 23, 2005 ### Ivan Seeking Staff Emeritus Another interesting point: 9. Sep 24, 2005 ### MaxS It didn't change radically. http://www.blackboxvoting.org/ There are numerous statistics that show the exit-polls had FAR FAR FAR higher democratic votes in the past two presidential elections than the official results demonstrated. http://www.crisispapers.org/topics/election-fraud.htm [Broken] In fact there were more voters voting in swing states than the number of eligible voters living in those states. There are simply an insane amount of voting anomalies that have been absolutely downplayed and ignored by the press. Sadly, the democratic election process in this nation has been compromised. Last edited by a moderator: May 2, 2017 10. Sep 24, 2005 ### sid_galt What nonsense! http://www.heritage.org/Research/Taxes/bg1844.cfm [Broken] Besides do you really advocate taking people's money away by force as a means to an end? Bush should be cutting spending and freeing up the market from regulations, not taking more money from the people. Last edited by a moderator: May 2, 2017 11. Sep 25, 2005 ### Ivan Seeking Staff Emeritus PEGGY NOONAN - Wall Street Journal http://www.opinionjournal.com/columnists/pnoonan/ 12. Sep 25, 2005 ### Gokul43201 Staff Emeritus Peegy Noonan !? Wasn't she papa's speechwriter (and Reagan's before that) ? Why, I remember her singing the praises on GW on Scarborough Country Good, for her, I guess ! That's just sickeningly unbelievable ! Does someone have a source or an explanation for this ? 13. Sep 25, 2005 ### Ivan Seeking Staff Emeritus He said it again this morning on Meet the Press. :rofl: He also said that in his darkest moments, he imagines "Bush as some kind of Manchurian Candidate who takes all of his best ideas and ruins them". :rofl: :rofl: :rofl: This is just too strange after listening to all of his excuses made for Bush for five years. 14. Sep 25, 2005 ### SOS2008 Here's another quote from that link: We never hear this one in PF do we? 15. Sep 25, 2005 ### faust9 Yeah, replace Kerry with Clinton and you have the typical apologist response... 16. Sep 25, 2005 ### MaxS Russ's world is collapsing. 17. Sep 25, 2005 ### Gokul43201 Staff Emeritus Sadly, Russ is only a fiscal conservative...and is almost a social lefty. Too bad Bush is just the opposite. There's someone else here that's a real social conservative...can't recall who...only remember he wasn't accepting applications from "women of the left". Last edited: Sep 25, 2005 18. Sep 26, 2005 ### Astronuc Staff Emeritus As for a bridge to nowhere - http://www.salon.com/news/feature/2005/08/09/bridges/index_np.html [Broken] http://www.taxpayer.net/Transportation/gravinabridge.htm [Broken] Rep. Don Young (R-AK) is trying to sell America's taxpayers a$315 million "bridge to nowhere" in rural Alaska. As Chairman of the House Transportation and Infrastructure Committee, he is in a very good position to get his way. But Rep. Young should be stopped from using his political clout to force federal taxpayers to pay for a bridge that is ridiculous in its scope, unjustified on its merits, and far too expensive for taxpayers to swallow at a time of record federal deficits.
If Rep. Young succeeds, tiny Ketchikan, Alaska, a town with less than 8,000 residents (about 13,000 if the entire county is included) will receive hundreds of millions of federal dollars to build a bridge to Gravina Island (population: 50). This bridge will be nearly as long as the Golden Gate Bridge and taller than the Brooklyn Bridge.
- but isn't that the duty of the president? Checks and balances?
http://www.washingtonpost.com/wp-dyn/content/article/2005/08/10/AR2005081000223.html
:rofl:
http://www.fhwa.dot.gov/reauthorization/safetea.htm
http://www.fhwa.dot.gov/reauthorization/index.htm
========================================
CNN
You mean Homeland Security doesn't have one? On what are they spending those \$ billions? Why do we need HS?
Seems to be a slippery slope.
Last edited by a moderator: May 2, 2017
19. Sep 26, 2005
### pattylou
Which is why one might wonder why Russ supports the man!
20. Sep 26, 2005
### pattylou
I certainly do!!
Last edited by a moderator: May 2, 2017
|
2017-12-12 06:48:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2401074767112732, "perplexity": 10061.86553107265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00795.warc.gz"}
|
https://mathoverflow.net/questions/78077/function-fields-of-characteristic-p-modular-curves-and-mod-p-reductions-of-the
|
Function fields of characteristic p modular curves, and mod p reductions of the classical modular equation
Let l and p be distinct primes, l>2. There are "characteristic p modular curves" X_0(l) and X(l), defined over an algebraic closure, K, of Z/p, solving moduli problems for elliptic curves with some additional level-l structure. Each of these curves has the same genus as the corresponding characteristic 0 object; in particular the genus of X(l) is (l-3)(l-5)(l+2)/24.
There is also an irreducible symmetric f in Z[x,y] with f(j(lz),j(z))=0, where j is the elliptic modular function. This is the "classical modular equation". Let f* be the mod p reduction of f. I'm looking for a proof that certain well-known relations between f and the function fields of the characteristic zero X_0(l) and X(l) persist when f is replaced by f*, and X_0 and X are replaced by their characteristic p counterparts. I'd like an argument showing:
1) f* is irreducible in K[x,y]
2) The Galois group of f* over K(y) identifies with PSL_2(Z/l).
3) The function field (over K) of the curve defined by f* identifies with the function field of the characteristic p X_0.
4) If L is the splitting field (over K(y)) of f*, then L identifies with the function field (over K) of the characteristic p X.
Remarks:
a) I would guess that 1)---4) somehow follow from the existence of moduli schemes defined over Z[1/l]. But can someone provide a reference and details?
b) A weaker form of 4) whose statement doesn't involve the theory of modular forms in characteristic p, is that the genus of L/K equals the genus of the classical X(l). As an old dog who has trouble with new tricks, I'd be happiest with a classical proof of this result.
c) I'm mostly interested in the case p=2, where I can prove 1) and 2). This is all related to an MO question of mine about the genus of a curve coming from the theory of characteristic 2 theta functions.
-
I think most of these things can be found in a paper of Igusa from the 50's (I think). I don't have the reference handy but should be easy to locate with mathscinet. Here is some to prettify your question. – Felipe Voloch Oct 14 '11 at 9:46
Thanks, Felipe! I found the papers in Amer. J. of Math., v.81, (1959). They're not easy for me to read, but I get the drift, and they seem to be exactly what I was looking for. – paul Monsky Oct 16 '11 at 6:23
|
2014-03-08 05:21:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7646099328994751, "perplexity": 593.8991113590671}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999653325/warc/CC-MAIN-20140305060733-00012-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://lists.gnu.org/archive/html/lilypond-user/2010-05/msg00212.html
|
lilypond-user
## vertical spacing problem
From: Marek Klein
Subject: vertical spacing problem
Date: Sun, 16 May 2010 23:19:22 +0200
Hi,
I have problem with the attached vocal score and I couldn't reproduce it with tiny example...
I get a 2-page layout with the next-to-last row (the \markup) commented out.
There seems to be a lot of space at the bottom of the second page, but if I add the \markup, the output is on 3 pages.
https://mathhelpforum.com/threads/solved-function.144078/
# [SOLVED] function
#### dwsmith
MHF Hall of Honor
Let f be a function such that $$\displaystyle f(n+1)=1-[f(n)]^2, \forall n\in\mathbb{Z}, n\geq0$$. How would f(n+2) be expressed in terms of f(n)?
#### Anonymous1
Let f be a function such that $$\displaystyle f(n+1)=1-[f(n)]^2, \forall n\in\mathbb{Z}, n\geq0$$. How would f(n+2) be expressed in terms of f(n)?
$$\displaystyle f(n+1)=1-[f(n)]^2$$
$$\displaystyle \implies f(n+2) = f\left\{(n+1)+1\right\}=1-[f(n+1)]^2 = 1 - \left\{ 1-[f(n)]^2 \right\}^2$$
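As a quick numerical sanity check of the composition above (the starting value f(0) = 0.3 is an arbitrary choice):

```python
# Check that applying the recurrence twice matches the closed form
def step(x):
    return 1 - x**2              # f(n+1) = 1 - [f(n)]^2

f0 = 0.3                         # arbitrary starting value f(0)
by_iteration = step(step(f0))    # f(2) obtained by applying the recurrence twice
by_formula = 1 - (1 - f0**2)**2  # f(2) from the closed form in terms of f(0)
print(by_iteration, by_formula)  # both print 0.1719 (up to floating-point noise)
assert abs(by_iteration - by_formula) < 1e-12
```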
#### dwsmith
MHF Hall of Honor
Ok that trumps the ... answer first posed.
#### Anonymous1
I am almost positive that isn't the answer but I could be wrong.
Which step do you disagree with, particularly?
#### dwsmith
MHF Hall of Honor
Which step do you disagree with, particularly?
I don't disagree with that but when your response was first .... I was in disagreement.
#### Anonymous1
I don't disagree with that but when your response was first .... I was in disagreement.
Oh yeah that, haha.
When I am fairly certain I know the answer, I like to secure my position. (Evilgrin)
Cheers!
https://control.com/forums/threads/test-modbus-application.47574/
# Test Modbus Application
#### goVoyager
I have an inventory application (in C#.net) that uses the Modbus protocol to control the locks on a separate cabinet. The application controls the lock and reads the state of the lock. The problem is that I need to connect my computer to the cabinet in order to properly test and potentially debug the application. Is there any way to obtain or develop another application that emulates the cabinet hardware? I was thinking that I could open a cabinet application that would display when a door is unlocked or locked successfully while I am debugging my inventory application. Or is there a hardware device available that could do the same thing?
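One way to do this, sketched below, is to run a small software Modbus/TCP slave that stands in for the cabinet while the C# application is being debugged. The sketch uses the Python pymodbus package (2.x API; the import paths differ in 3.x), and the register layout for the locks is purely hypothetical, so the real cabinet's register map would need to be substituted.

```python
# Minimal Modbus/TCP slave emulating the lock cabinet (pymodbus 2.x API).
# Hypothetical layout: coils 0-9 are lock commands, discrete inputs 0-9 lock states.
from pymodbus.server.sync import StartTcpServer
from pymodbus.datastore import (ModbusSequentialDataBlock, ModbusSlaveContext,
                                ModbusServerContext)

store = ModbusSlaveContext(
    co=ModbusSequentialDataBlock(0, [0] * 10),  # coils: lock commands
    di=ModbusSequentialDataBlock(0, [0] * 10),  # discrete inputs: lock feedback
    hr=ModbusSequentialDataBlock(0, [0] * 10),  # holding registers (unused here)
    ir=ModbusSequentialDataBlock(0, [0] * 10),  # input registers (unused here)
)
context = ModbusServerContext(slaves=store, single=True)

# Point the inventory application at this endpoint instead of the cabinet.
# Port 5020 avoids needing admin rights for the standard Modbus port 502.
StartTcpServer(context, address=("0.0.0.0", 5020))
```

While this runs, writing a coil from the inventory application and reading it back gives a way to step through the lock/unlock logic without the physical cabinet.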
#### goVoyager
Thanks. This seems to be interesting. However, I was not sure about spending $200 to $300. I will have to think about this. Likewise, 3 days is a small window of time for a trial.
Again, thank you for the help.
#### goVoyager
Thanks for the response. I visited the website, but I was unable to find the mBTCP slave .net class. Let me know.
https://www.earth-syst-dynam.net/10/233/2019/
Earth System Dynamics: an interactive open-access journal of the European Geosciences Union
Earth Syst. Dynam., 10, 233–255, 2019
https://doi.org/10.5194/esd-10-233-2019
Research article | 24 Apr 2019
# Evaluation of terrestrial pan-Arctic carbon cycling using a data-assimilation system
Efrén López-Blanco1,2, Jean-François Exbrayat2,3, Magnus Lund1, Torben R. Christensen1,4, Mikkel P. Tamstorf1, Darren Slevin2, Gustaf Hugelius5, Anthony A. Bloom6, and Mathew Williams2,3
• 1Department of Biosciences, Arctic Research Center, Aarhus University, Frederiksborgvej 399, 4000 Roskilde, Denmark
• 2School of GeoSciences, University of Edinburgh, Edinburgh, EH9 3FF, UK
• 3National Centre for Earth Observation, University of Edinburgh, Edinburgh, EH9 3FF, UK
• 4Department of Physical Geography and Ecosystem Science, Lund University, Sölvegatan 12, 223 62 Lund, Sweden
• 5Department of Physical Geography and Bolin Centre for Climate Research, Stockholm University, 106 91 Stockholm, Sweden
• 6Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
Abstract
There is a significant knowledge gap in the current state of the terrestrial carbon (C) budget. Recent studies have highlighted a poor understanding particularly of C pool transit times and of whether productivity or biomass dominate these biases. The Arctic, accounting for approximately 50 % of the global soil organic C stocks, has an important role in the global C cycle. Here, we use the CARbon DAta MOdel (CARDAMOM) data-assimilation system to produce pan-Arctic terrestrial C cycle analyses for 2000–2015. This approach avoids using traditional plant functional type or steady-state assumptions. We integrate a range of data (soil organic C, leaf area index, biomass, and climate) to determine the most likely state of the high-latitude C cycle at a 1° × 1° resolution and also to provide general guidance about the controlling biases in transit times. On average, CARDAMOM estimates regional mean rates of photosynthesis of 565 g C m−2 yr−1 (90 % confidence interval between the 5th and 95th percentiles: 428, 741), autotrophic respiration of 270 g C m−2 yr−1 (182, 397) and heterotrophic respiration of 219 g C m−2 yr−1 (31, 1458), suggesting a pan-Arctic sink of −67 (−287, 1160) g C m−2 yr−1, weaker in tundra and stronger in taiga. However, our confidence intervals remain large (and so the region could be a source of C), reflecting uncertainty assigned to the regional data products. We show a clear spatial and temporal agreement between CARDAMOM analyses and different sources of assimilated and independent data at both pan-Arctic and local scales but also identify consistent biases between CARDAMOM and validation data. The assimilation process requires clearer error quantification for leaf area index (LAI) and biomass products to resolve these biases. Mapping of vegetation C stocks and change over time and soil C ages linked to soil C stocks is required for better analytical constraint. Comparing CARDAMOM analyses to global vegetation models (GVMs) for the same period, we conclude that transit times of vegetation C are inconsistently simulated in GVMs due to a combination of uncertainties from productivity and biomass calculations. Our findings highlight that GVMs need to focus on constraining both current vegetation C stocks and net primary production to improve a process-based understanding of C cycle dynamics in the Arctic.
1 Introduction
Arctic ecosystems play a significant role in the global carbon (C) cycle (Hobbie et al., 2000; McGuire et al., 2012). Slow organic matter decomposition rates due to cold and poorly drained soils in combination with cryogenic soil processes have led to an accumulation of large stocks of C stored in the soils, much of which is currently held in permafrost (Tarnocai et al., 2009). The permafrost region soil organic C (SOC) stock is more than twice the size of the atmospheric C stock and accounts for approximately half of the global SOC stock (Hugelius et al., 2014; Jackson et al., 2017). High-latitude ecosystems are experiencing a temperature increase that is nearly twice the global average (AMAP, 2017). The expected future increase in temperature (IPCC, 2013) and precipitation (Bintanja and Andry, 2017) will likely have consequences for the Arctic net C balance. As high latitudes warm, C cycle dynamics may lead to an increase in carbon dioxide (CO2) emissions through ecosystem respiration (Reco) driven by, for example, larger heterotrophic respiration (Commane et al., 2017; Schuur et al., 2015; Zona et al., 2016), drought stress on plant productivity (Goetz et al., 2005), and episodic disturbances (Lund et al., 2017; Mack et al., 2011). Alternatively, temperature-induced vegetation changes (Forkel et al., 2016; Graven et al., 2013; Lucht et al., 2002) may increase gross primary productivity (GPP) due to extended growing seasons (Zeng et al., 2011), CO2 fertilisation (Zhuang et al., 2006), and shifts in vegetation cover such as greening (Myneni et al., 1997; Zhu et al., 2016) and shrub expansion (Myers-Smith et al., 2011). Consequently, ecosystem responses may feed back on climate with unclear magnitude and sign (Anav et al., 2013; Murray-Tortarolo et al., 2013; Peñuelas et al., 2009). As a result of the significant changes that are already affecting the structure and function of Arctic ecosystems, it is critical to understand and quantify the historical C dynamics of the terrestrial tundra and taiga and their sensitivity to climate (McGuire et al., 2012).
Although the land surface is estimated to offset ∼30 % of anthropogenic emissions of CO2 (Canadell et al., 2007; Le Quéré et al., 2018), the terrestrial C cycle is currently the least constrained component of the global C budget and large uncertainties remain (Bloom et al., 2016). Despite the importance of Arctic tundra and taiga biomes in the global land C cycle, our understanding of interactions between the allocation of C from net primary productivity (NPP), C stocks (Cstock), and transit times (TTs) is deficient (Carvalhais et al., 2014; Friend et al., 2014; Hobbie et al., 2000). The TT is a concept that represents the time it takes for a particle of C to persist in a specific C stock, and it is defined by the C stock and its outgoing flux, here addressed as TT = Cstock/NPP. According to a recent study by Sierra et al. (2017), TT is an important diagnostic metric of the C cycle and a concept that is independent of model-internal structure and theoretical assumptions for its calculation. Terms such as residence time (Bloom et al., 2016; Friend et al., 2014), turnover time (Carvalhais et al., 2014), and turnover rate (Thurner et al., 2016; TT = 1/turnover rate) are used in the literature to represent the concept of TT (Sierra et al., 2017). Studies have focused more on the spatial variability of ecosystem productivity with climate than on C transit times (Friend et al., 2014; Nishina et al., 2015; Thurner et al., 2016, 2017). Friend et al. (2014) detailed that transit time dominates uncertainty in terrestrial vegetation responses to future climate and atmospheric CO2. They found a 30 % larger variation in modelled vegetation C change than in the response of NPP. Nishina et al. (2015) also suggested that long-term C dynamics within ecosystems (vegetation turnover and soil decomposition) are more critical factors than photosynthetic processes (i.e. GPP or NPP). The respective contribution of bias from biomass and NPP to biases in transit times remains unquantified. Without an appropriate understanding of the current state and dynamics of the C cycle, its feedbacks to climate change remain highly uncertain (Hobbie et al., 2000; Koven et al., 2015).
There are currently efforts to incorporate both in situ and satellite-based datasets to assess C cycle retrievals and to reduce their uncertainties. At a local scale, the net ecosystem exchange (NEE) of CO2 between the land surface and the atmosphere is usually measured using eddy covariance (EC) techniques (Baldocchi, 2003). International efforts have led to the creation of global networks such as FLUXNET (http://fluxnet.fluxdata.org/, last access: 9 April 2019) and ICOS (https://www.icos-ri.eu/, last access: 9 April 2019), to harmonise data and support the reduction of uncertainties around the C cycle and its driving mechanisms. However, upscaling field observations to estimate the regional to global C budget presents important challenges due to insufficient spatial coverage of measurements and heterogeneous landscape mosaics (McGuire et al., 2012). Furthermore, harsh environmental conditions in high-latitude ecosystems and their remoteness complicates the collection of high-quality data (Lafleur et al., 2012). Given the lack of continuous, spatially distributed in situ observations of NEE in the Arctic, it remains a challenging task to calculate with certainty whether or not the Arctic is a net C sink or a net C source and how the net C balance will evolve in the future (Fisher et al., 2014). Over the past decade, regional to global products generated from in situ networks and/or satellite observations have improved our understanding of the terrestrial C dynamics. These range from machine-learning-based upscaling of FLUXNET data (Jung et al., 2017), remotely sensed biomass products (Carvalhais et al., 2014; Thurner et al., 2014), and the creation of a global soil database (FAO/IIASA/ISRIC/ISSCAS/JRC, 2012). Due to a reliance on interpolation and upscaling with other spatial data, it is challenging to evaluate these products for inherent biases.
Global vegetation models (GVMs) have been developed to determine global terrestrial C cycling, through representing vegetation and soil processes, including vegetation dynamics (i.e. growth, competition, and turnover) and biogeochemical (i.e. water, carbon, and nutrients cycling) responses to climate variability (Koven et al., 2011; Sitch et al., 2003; Woodward et al., 1995). The advantage of using process-based models to characterise C dynamics is that processes which drive ecosystem–atmosphere interactions can be simulated and reconstructed when data are scarce. However, C cycle modelling in GVMs typically relies on parameters retrieved from literature, prescribed plant functional type (PFT), and a spin-up process ensuring C stocks (biomass and SOC) reach steady state. Further, inherent differences of model structure contribute more significantly to GVM uncertainties (Exbrayat et al., 2018; Nishina et al., 2014) than do differences in climate projections (Ahlström et al., 2012). Many model inter-comparison projects have demonstrated a lack of coherence in future projections of terrestrial C cycling (Ahlström et al., 2012; Friedlingstein et al., 2014). Recent studies have used simulations from the first phase of the Inter-Sectoral Impact Model Inter-comparison Project (ISI-MIP) (Warszawski et al., 2014) to evaluate the importance of key elements regulating vegetation C dynamics but also the estimated magnitude of their associated uncertainties (Exbrayat et al., 2018; Friend et al., 2014; Nishina et al., 2015; Thurner et al., 2017). An important insight is that TTs in GVMs are a key uncertain feature of the global C cycle simulation. Further, GVMs tend not to report uncertainties in their estimates of stocks and fluxes, which weakens their analytical value.
To address these issues we integrate model and data more formally. We apply data assimilation (DA), defined as a Bayesian calibration process for a model of a dynamic system. DA, through probabilistic parameterisation, supports robust model estimates of C stocks and fluxes consistent with multiple observations and their errors (Fox et al., 2009; Luo et al., 2009; Williams et al., 2005). By following Bayesian methods, the uncertainty in observations weights the degree of data constraint, and the outcome is a set of acceptable parameterisations for a given model structure linked to likelihoods. Overall, this approach determines whether model structure, observations, and forcing are (in)consistent and thus assesses the validity of model structure. By assimilating co-located climatic, ecological, and biogeochemical data from remote-sensing observations at a specific grid scale across landscapes and regions, DA can map parameter estimation and uncertainties.
Here, we use the CARbon DAta MOdel framework (CARDAMOM) (Bloom et al., 2016; Bloom and Williams, 2015; Smallman et al., 2017) to analyse the pan-Arctic terrestrial carbon cycle at 1° resolution for the 2000–2015 period. We assimilate gridded observations of leaf area index (LAI), biomass, and SOC stocks at these spatio-temporal scales into an intermediate-complexity C model (DALEC2, which is less complex than GVMs). We compare analyses of C dynamics of Arctic tundra and taiga with (a) global products of GPP (Jung et al., 2017) and heterotrophic respiration (Rh) (Hashimoto et al., 2015); (b) NEE, GPP, and Reco field observations from eight high-latitude sites included in the FLUXNET2015 dataset; and (c) six GVMs from the ISI-MIP2a comparison project (Akihiko et al., 2017). Our objectives are to (1) present and evaluate the analyses and uncertainties of the current state of pan-Arctic terrestrial C cycling using a DA system, (2) quantify the degree of agreement between the CARDAMOM product and local- to global-scale sources of available data to assess analytical bias, and (3) use CARDAMOM as a benchmarking tool for the ISI-MIP2a models to provide general guidance towards GVM improvements in transit time simulation. Finally, we suggest future work to be done in the context of advancing pan-Arctic C cycle modelling.
2 Data and methods
## 2.1 Pan-Arctic region
The spatial domain we considered in this study (Fig. S1 in the Supplement) corresponds to the extent of the Northern Circumpolar Soil Carbon Database version 2 (NCSCDv2) dataset (Hugelius et al., 2013a, b), bounded by 42–80° N and 180° W–180° E and at a spatial resolution of 1° × 1°. This area of study totals 18 000 000 km2 of land area. We used the GlobCover vegetation map product developed by the European Space Agency (Bontemps et al., 2011) to separate regions dominated by non-forested (tundra) and forested (taiga) land cover types. A complete description of the classes included in each domain can be found in Fig. S1 and its caption. The differentiation between tundra and taiga grid cells is in agreement with the tree line delimited by Brown et al. (1997) together with the tundra domain defined from the Regional Carbon Cycle Assessment and Processes Activity reported by McGuire et al. (2012). The extensive treeless grasslands in some areas, such as southern Russia, Mongolia, and Kazakhstan, were excluded to focus on higher latitudes. This classification of tundra and taiga totals 8 100 000 and 9 900 000 km2 of land area, respectively.
## 2.2 The CARbon DAta MOdel framework
Here we use CARDAMOM (Bloom et al., 2016) (a list of acronyms can be found in Table S1 in the Supplement) to retrieve terrestrial C cycle dynamics, including explicit confidence intervals, in the pan-Arctic region. CARDAMOM consists of two key components: (1) an ecosystem model, the Data Assimilation Linked Ecosystem Carbon version 2 (DALEC2) (Bloom and Williams, 2015; Williams et al., 2005), constrained by observations, and (2) a data-assimilation system (Bloom et al., 2016). This framework reconciles observational datasets as part of a representation of the terrestrial C cycle in agreement with ecological theory.
### 2.2.1 DALEC2
The DALEC2 ecosystem model simulates monthly land–atmosphere C fluxes and the evolution of six C stocks (foliage, labile, wood, roots, soil organic matter – SOM – and surface litter) and corresponding fluxes. DALEC2 includes 17 parameters controlling the processes of plant phenology, photosynthesis (GPP), allocation of primary production to respiration and vegetation carbon stocks, and plant and organic matter turnover rates, all established within specific prior ranges based on ecologically viable limits (Table S2; most priors are uniform with broad ranges). DALEC2 simulates canopy-level GPP via the Aggregated Canopy Model (ACM; Williams et al., 1997), and the most sensitive ACM parameter, related to canopy photosynthetic efficiency, is included in the CARDAMOM calibration. DALEC2 allocates net primary production to the four plant stocks (foliage, labile, wood, and roots) and autotrophic respiration (Ra) as time-invariant fractions of GPP. Plant C decays into litter and soil stocks, where microbial decomposition generates heterotrophic respiration (Rh). Turnover of litter and soil stocks is simulated using temperature-dependent first-order kinetics. For practical purposes we aggregated the different C stocks into photosynthetic (Cphoto; leaf and labile), vegetation (Cveg; leaf, labile, wood, and roots), soil (Cdom; litter and SOM), and total ($C_\mathrm{tot} = C_\mathrm{photo} + C_\mathrm{veg} + C_\mathrm{dom}$) C stocks. The NEE is calculated as the difference between GPP and the sum of the respiration fluxes ($R_\mathrm{eco} = R_\mathrm{a} + R_\mathrm{h}$), while NPP is the difference between GPP and Ra. Only NEE follows the standard micrometeorological sign convention, presenting the uptake of C as negative (sink) and the release of C as positive (source); both GPP and Reco are reported as positive fluxes. In this study, we addressed C turnover rates and decomposition processes as their inverse rates; this is the C transit time (TTphoto, TTveg and TTdom), represented as the ratio between the mean C stock and the mean C input into that stock during the simulation period.
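As a rough illustration of the bookkeeping described above (this is not the DALEC2 code, and the numbers are invented for the example), the stock aggregation and the stock-over-input definition of the transit time can be written as follows:

```python
# Hypothetical multi-year mean C stocks (kg C m-2) and vegetation input (kg C m-2 yr-1)
stocks = {"leaf": 0.05, "labile": 0.05, "wood": 1.0, "roots": 0.4,
          "litter": 0.3, "som": 24.0}
npp = 0.29  # mean C input into the vegetation stocks

c_photo = stocks["leaf"] + stocks["labile"]
c_veg = c_photo + stocks["wood"] + stocks["roots"]
c_dom = stocks["litter"] + stocks["som"]
c_tot = c_veg + c_dom

# Transit time = mean stock / mean input into that stock (shown here for vegetation)
tt_veg = c_veg / npp
print(f"C_veg = {c_veg:.2f} kg C m-2, TT_veg = {tt_veg:.1f} years")
```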
### 2.2.2 Data-assimilation system
The intermediate complexity of the DALEC2 model compared to typical GVMs facilitates computationally intense Monte Carlo (MC) data-assimilation to optimise the initial stock conditions and the 17 process parameters that shape C dynamics. CARDAMOM is forced with climate data from the European Centre for Medium-Range Weather Forecast Reanalysis interim (ERA-interim) dataset (Dee et al., 2011) monthly for the 2000–2015 period. A Bayesian Metropolis-Hastings Markov chain MC (MHMCMC) algorithm is used to retrieve the posterior distributions of the process parameters according to observational constraints and ecological and dynamic constraints (EDCs; Bloom and Williams, 2015). EDCs ensure that DALEC2 simulations of the terrestrial carbon cycle are realistic and ecologically viable and help to reduce the uncertainty in the model parameters by rejecting estimations that do not satisfy different conditions applied to C allocation and turnover rates as well as trajectories of C stocks.
Observational constraints include monthly time series of LAI from the MOD15A2 product (Myneni et al., 2002), estimates of vegetation biomass (Carvalhais et al., 2014), and soil organic carbon content (Hugelius et al., 2013a, b) (Table S3). We aggregated ∼130 000 1 km resolution MODIS LAI values monthly within each 1° × 1° pixel. Biomass based on remotely sensed forest biomass (Thurner et al., 2014) and upscaled GPP (Jung et al., 2011) covering the pan-Arctic domain was aggregated to 1° resolution (Carvalhais et al., 2014). We used the NCSCD spatially explicit product (Hugelius et al., 2013a, b), which was generated from 1778 soil sample locations interpolated to a 1° grid.
We apply the set-up described above to 3304 1° × 1° pixels (1686 in tundra; 1618 in taiga) using a monthly time step. Each pixel is treated independently without assuming a prior land cover or plant functional type, and we assume no spatial correlation between uncertainties in all pixels. In each 1° × 1° pixel, we applied the MHMCMC algorithm to determine the probability distribution of the optimal parameter set and initial conditions (xi; Table S2) given observational constraints (Oi; LAI, SOC, and biomass; Table S3) using the same Bayesian inference approach described in Bloom et al. (2016):
$$p(x_i \mid O_i) \propto p(x_i) \times p(O_i \mid x_i). \qquad (1)$$
First, in Eq. (1), p(xi) represents the prior probability distribution of each DALEC2 parameter (xi) and is expressed as
$$p(x_i) = p_{\mathrm{EDC}}(x_i) \times e^{-0.5\left(\frac{\log(f_{\mathrm{auto}})-\log(0.5)}{\log(1.2)}\right)^{2}} \times e^{-0.5\left(\frac{\log(C_{\mathrm{eff}})-\log(17.5)}{\log(1.2)}\right)^{2}}, \qquad (2)$$
where pEDC(xi) is the prior parameter probability according to the EDCs included in Table S2 and described in Bloom and Williams (2015). In addition, prior values for two parameters and their uncertainties (canopy efficiency, Ceff, and fraction of GPP respired, fauto) are imposed with a log-normal distribution following Bloom et al. (2016) to be consistent with the global GPP range estimated in Beer et al. (2010) and fauto ranges specified by DeLucia et al. (2007), respectively.
Second, p(Oi|xi) from Eq. (1) represents the likelihood of xi with respect to Oi, and it is calculated based on the ability of DALEC2 to reproduce (1) biomass (Carvalhais et al., 2014), (2) SOC (Hugelius et al., 2013a, b), and (3) MODIS LAI (Myneni et al., 2002). The reported uncertainty in biomass data from Thurner et al. (2014) was ±37 % at pixel scale. Because of undetermined errors related to tree cover thresholds used in the upscaling and to reflect unknown model structural error, we slightly inflate the error estimate and use a log-transformed error factor of ×/÷ 1.5 (i.e. ×/÷ 1.5 spans 67 % of the expected error). We use the same proportional error for SOC. For MODIS LAI we inflate the proportional error further to log(2) based on well-reported biases in this product for evergreen forests (De Kauwe et al., 2011) and the estimated measurement and aggregation uncertainty for boreal forest LAI of 1 m2 m−2 reported by Goulden et al. (2011). The uncertainty assumptions in Eq. (3) are chosen due to a lack of better knowledge about the combined uncertainties arising from model representation errors and observation errors:
$$p(O_i \mid x_i) = e^{-0.5\left(\frac{\log(O_{\mathrm{biomass}})-\log(M_{\mathrm{biomass},0})}{\log(1.5)}\right)^{2}} \times e^{-0.5\left(\frac{\log(O_{\mathrm{SOC}})-\log(M_{\mathrm{SOC},0})}{\log(1.5)}\right)^{2}} \times e^{-0.5\left(\frac{\log(O_{\mathrm{LAI},t})-\log(M_{\mathrm{LAI},t})}{\log(2)}\right)^{2}}. \qquad (3)$$
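A minimal sketch of how one likelihood factor of Eq. (3) can be evaluated, assuming the observation O and model value M are plain numbers and that the proportional error factor (1.5 for biomass and SOC, 2 for LAI in the text above) is passed in explicitly:

```python
import numpy as np

def lognormal_likelihood(obs, mod, factor):
    """One factor of Eq. (3): a log-normal mismatch term with a proportional
    (x/÷ factor) observation error."""
    z = (np.log(obs) - np.log(mod)) / np.log(factor)
    return np.exp(-0.5 * z ** 2)

# Hypothetical observed vs. modelled biomass (kg C m-2) with a x/÷1.5 error factor
print(lognormal_likelihood(obs=2.0, mod=1.6, factor=1.5))
```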
For each 1° × 1° pixel we run three MHMCMC chains with $10^7$ accepted simulations each until convergence of at least two chains. We use 500 parameter sets sampled from the second half of each chain to describe the posterior distribution of parameter sets. We produce confidence intervals of terrestrial C fluxes and stocks from the selected parameter sets. In the following we report the highest confidence results (median; P50) and the uncertainty represented by the 90 % confidence interval (5th percentile to 95th percentile – P05–P95). We calculate the transit time for C pools using the approach for non-steady-state pools described in Bloom et al. (2016), Sect. S3.
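The sampler itself is part of the CARDAMOM code base and is not reproduced here; purely as a schematic, the accept/reject step of a Metropolis–Hastings chain and the extraction of P05/P50/P95 from the second half of the chain look like the following, with a toy Gaussian standing in for the DALEC2 log-posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(x):
    # Toy stand-in for log p(x) + log p(O|x); CARDAMOM evaluates DALEC2 here.
    return -0.5 * np.sum((x - 1.0) ** 2)

def metropolis_hastings(n_steps, step_size=0.1, n_params=17):
    x = np.ones(n_params)              # current parameter vector
    samples = []
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal(n_params)
        # Accept with probability min(1, posterior(proposal) / posterior(x))
        if np.log(rng.random()) < log_posterior(proposal) - log_posterior(x):
            x = proposal
        samples.append(x.copy())
    return np.array(samples)

chain = metropolis_hastings(5000)
second_half = chain[len(chain) // 2:]
p05, p50, p95 = np.percentile(second_half, [5, 50, 95], axis=0)
```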
## 2.3 Model evaluation against independent in situ and pan-Arctic datasets
At the pan-Arctic scale, we compared CARDAMOM GPP with the FLUXCOM dataset from Jung et al. (2017). We also compared our CARDAMOM Rh with the global spatio-temporal distribution of soil respiration from Hashimoto et al. (2015) calculated by a climate-driven empirical model. To assess the degree of statistical agreement we calculated linear goodness of fit (slope, intercept, R2) between CARDAMOM and the two independent datasets and determined RMSE and bias from direct comparison on model–data residuals. The mapping includes stipples representing locations where the independent datasets are within CARDAMOM's 90 % confidence interval.
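For reference, the comparison statistics used throughout (slope, intercept, R2, RMSE, and bias) can be computed along the following lines; the two arrays below are invented stand-ins for any pair of CARDAMOM and independent estimates:

```python
import numpy as np

def comparison_stats(reference, estimate):
    """Linear fit (slope, intercept, R2) plus RMSE and bias of estimate vs. reference."""
    slope, intercept = np.polyfit(reference, estimate, 1)
    r = np.corrcoef(reference, estimate)[0, 1]
    residuals = estimate - reference
    return {"slope": slope, "intercept": intercept, "R2": r ** 2,
            "RMSE": float(np.sqrt(np.mean(residuals ** 2))),
            "bias": float(np.mean(residuals))}

# Hypothetical GPP values (g C m-2 yr-1) flattened over grid cells
cardamom = np.array([420.0, 510.0, 610.0, 700.0])
fluxcom = np.array([450.0, 560.0, 650.0, 760.0])
print(comparison_stats(cardamom, fluxcom))
```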
At a local scale, we compare CARDAMOM NEE and its partitioned components GPP and Reco estimates with monthly aggregated values from the FLUXNET2015 sites. We selected eight sites (Belelli Marchesini et al., 2007; Bond-Lamberty et al., 2004; Goulden et al., 1996; Ikawa et al., 2015; Kutzbach et al., 2007; López-Blanco et al., 2017; Lund et al., 2012; Sari et al., 2017) located across sub- and high-Arctic latitudes, covering locations with different climatic conditions and dominating ecotypes (Table S4). For this evaluation, we compared the same years for both observations and CARDAMOM, and we selected data using the daytime method (Lasslop et al., 2010) due to the absence of a true night-time period during Arctic summers in some locations. Additionally, we selected a variable u* threshold to identify insufficient turbulence wind conditions from year to year similar to López-Blanco et al. (2017). In this data–model comparison we included the median (P50) ± the 90 % confidence interval (5th to 95th percentiles, (P05P95)) including both random and u* filtering uncertainty following the method described in Papale et al. (2006). Some of the sites lack wintertime measurements, and we filtered out data for months with less than 10 % observations. We performed a point-to-grid-cell comparison to assess the degree of agreement between each flux magnitude and seasonality calculating the statistics of linear fit (slope, intercept, R2) per flux and site between CARDAMOM and FLUXNET2015 datasets and determined RMSE and bias from model–data residuals comparison.
## 2.4 Benchmark of global vegetation models from ISI-MIP2a
We compared CARDAMOM analyses of pan-Arctic NPP, vegetation biomass carbon stocks (Cveg) and vegetation transit times (TTveg) with six participating GVMs in the ISI-MIP2a comparison project (Akihiko et al., 2017). In this study we have considered DLEM (Tian et al., 2015), LPJmL (Schaphoff et al., 2013; Sitch et al., 2003), LPJ-GUESS (Smith et al., 2014), ORCHIDEE (Guimberteau et al., 2018), VEGAS (Zeng et al., 2005), and VISIT (Ito and Inatomi, 2012). The specific properties and degree of complexity of each ISI-MIP2a model are summarised in Table S5. The comparisons were performed at the same spatial resolution as CARDAMOM (1° × 1°) for the 2000–2010 period. Also, the chosen GVMs from the ISI-MIP2a phase have their forcing based on ERA-Interim climate data, similar to the forcing used in CARDAMOM. We estimated the degree of agreement using the statistics of linear fit (slope, intercept, R2, RMSE, and bias) per variable and model between CARDAMOM and GVMs but also their spatial variability, including stipples where the GVM datasets are within CARDAMOM's 90 % confidence interval.
To understand the sources of errors in TTveg calculations, we used CARDAMOM to calculate two hypothetical TTveg (i.e. EXPERIMENT A TTveg = ISI-MIP2a Cveg / CARDAMOM NPP and EXPERIMENT B TTveg = CARDAMOM Cveg / ISI-MIP2a NPP) and then assessed the largest difference with CARDAMOM's CONTROL TTveg. We estimated the hypothetical TTveg for each pixel in each model and derived a pixel-wise measure of the contribution of biases in NPP and Cveg to biases in TTveg by overlapping their distribution functions.
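The text does not spell out the overlap metric, so the sketch below uses a simple histogram-overlap coefficient (the shared area of two normalised histograms) as one plausible way to quantify how closely an experimental TTveg distribution agrees with the control:

```python
import numpy as np

def histogram_overlap(a, b, bins=50, value_range=(0.0, 20.0)):
    """Shared area of two normalised histograms: 0 = disjoint, 1 = identical."""
    ha, edges = np.histogram(a, bins=bins, range=value_range, density=True)
    hb, _ = np.histogram(b, bins=bins, range=value_range, density=True)
    return float(np.sum(np.minimum(ha, hb) * np.diff(edges)))

# Hypothetical TT_veg samples (years): CONTROL vs. one of the experiments
rng = np.random.default_rng(1)
control = rng.lognormal(mean=1.5, sigma=0.4, size=10_000)
experiment = rng.lognormal(mean=1.7, sigma=0.5, size=10_000)
print(f"overlap = {histogram_overlap(control, experiment):.2f}")
```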
3 Results
## 3.1 Pan-Arctic retrievals of C cycle
Overall, we found that the pan-Arctic region (Fig. 1 and Table 1) acted as a small sink of C (area-weighted P50) over the 2000–2015 period with an average of −67.4 (−286.7, 1159.9) g C m−2 yr−1, P50 (P05–P95), although the 90 % confidence intervals remain large (and so the region could be a source of C). Tundra region NEE was estimated at −14.9 (−163.4, 1116.1) g C m−2 yr−1, a weaker sink compared to taiga regions, which were established at −110.4 (−387.7, 1195.8) g C m−2 yr−1. The photosynthetic inputs exceeded the respiratory outputs (GPP > Reco; Table 1), although the much larger uncertainties stemming from Reco, and more specifically from Rh, compared with GPP complicate the net C sink–source estimate beyond the median's average ensembles. In the pan-Arctic region approximately half of GPP is autotrophically respired, resulting in an NPP of 290.3 (196.4, 410.7) g C m−2 yr−1. Carbon use efficiency (NPP/GPP) averages 0.51 (0.46, 0.55), and varied marginally across tundra 0.51 (0.46, 0.54) and taiga 0.52 (0.46, 0.56). Despite these apparently small variations, tundra photosynthesised and respired (respectively, 327.2 (236.8, 463.3) and 310.0 (124.3, 1536.8) g C m−2 yr−1) approximately half as much as the taiga region (759.8 (584.1, 967.9) and 635.3 (285.3, 2114.0) g C m−2 yr−1).
Figure 1Schematic diagram of the terrestrial C processes modelled in CARDAMOM for the pan-Arctic (black values), tundra (yellow values), and taiga (green values) domains. The values characterise the median for the 2000–2015 period and the parentheses delimit the 90 % confidence interval. C processes represented include flows for C fluxes in white (NEE, net ecosystem exchange; GPP, gross primary production; NPP, net primary production; Reco, ecosystem respiration; Ra, autotrophic respiration; Rh, heterotrophic respiration), C allocation in dark green (to labile, leaf, stem and root), and C turnover in cyan (from leaf, wood, roots, and litter). C stocks are represented in dark blue boxes (labile, leaf, stem, root, litter, and SOM, soil organic matter) and aggregated into photosynthetic (Cphoto= leaf + labile), vegetation (Cveg= leaf + labile + wood + roots), soil (Cdom= litter + SOM), and total (Ctot=Cphoto+Cveg+Cdom) C stocks in red boxes. Analogy, transit times (TT) are also aggregated into photosynthetic (TTphoto= leaf + labile), vegetation (TTveg= leaf + labile + wood + roots), soil (TTdom= litter + SOM), and total (TTtot= TTphoto+ TTveg+ TTdom) C transit times.
Table 1Multi-year (2000–2015) annual average of main ecosystem C fluxes (NEE, GPP, NPP, Reco, Ra, Rh; g C m−2 yr−1), C stocks (Cphoto, Cveg, Cdom, Ctot; kg C m−2) and transit times (TTphoto, TTveg, TTdom, TTtot; years) for the pan-Arctic, tundra (non-forested), and taiga (forested) domain. The averages contain the median in bold (50th percentile) and the uncertainty estimations across the 90 % confidence range between the 5th and 95th percentiles assuming no spatial correlation between uncertainties in all pixels. We assume spatial correlation between pixels: P50, P05, and P95 represent the area-weighted aggregate of all pixels' median, P05, and P95.
The total size of the pan-Arctic soil C stock (Cdom) averaged 24.4 (10.3, 47.5) kg C m−2, 16-fold greater than the vegetation C stock (Cveg), 1.5 (0.5, 5.8) kg C m−2. The soil C stock (fresh litter and SOM) is dominated by Csom, accounting for the 99 % which also dominates the total terrestrial C stock in the pan-Arctic. Among the living C stocks, 93 % of C (88 % in tundra and 90 % in taiga) is allocated to the structural stocks (wood and roots; 1.4 (0.4, 5.6) kg C m−2) compared to 7 % (12 % in tundra and 10 % in taiga) to the photosynthetic stock (leaves and labile; 0.1 (0.1, 0.2) kg C m−2). On average, the total ecosystem C stock is 26.3 (11.8, 51.0) kg C m−2 in the pan-Arctic region, with slightly lower stocks in tundra (24.6 (10.8, 50.6) kg C m−2) than taiga (27.7 (12.7, 51.2) kg C m−2). In general, the taiga region holds on average ∼100 % more photosynthetic tissues, ∼160 % more structural tissue, and ∼9 % more soil C stocks than tundra. In other words, taiga holds ∼12 % more total C than tundra. The greater living stock of C in taiga (2.1 (0.8, 5.1) kg C m−2) than tundra (0.8 (0.3, 6.8) kg C m−2) means that the relative size of Ra and Rh in the two regions differs. Thus in tundra Ra accounts for 51 % of total ecosystem respiration, while in taiga this fraction is 57 %. Ra is 4 % larger than Rh in tundra but 24 % greater in taiga, reflecting the greater rates of C cycling in taiga. Uncertainties in estimates of soil C stock are notably higher than for living C stocks, highlighting the lack of observational and mechanistic constraint on heterotrophic respiration.
The global mean C transit time is 1.3 (0.8, 2.1) years in leaves and labile plant tissue (TTphoto), 4.5 (1.7, 15.7) years in stems and roots (TTveg), and 120.5 (9.8, 822.6) years in litter and SOM (TTdom). The total C transit time (TTtot) (133.1 (11.5, 1013.6) years) is clearly dominated by the soil C stock, highlighting the very long periods of times that C persists in Arctic soils. CARDAMOM calculated 62 % longer TTdom in tundra compared to taiga, likely linked to lower temperatures, but uncertainties are large due to the limitations of data constraints.
Figure 2Original soil organic carbon (SOC; Hugelius et al., 2013a), biomass (Carvalhais et al., 2014), and leaf area index (LAI; Myneni et al., 2002) datasets used in the data-assimilation process within the CARDAMOM framework (a–c), assimilated SOC, biomass, and LAI integrated into CARDAMOM (d–f) and their respective goodness-of-fit statistics between original and assimilated datasets (g–i). The error bars represent the 90 % confidence interval of the assimilated variable in CARDAMOM.
## 3.2 Data assimilation and uncertainty reduction
The CARDAMOM framework generated an analysis broadly consistent with the combination of SOC, biomass, and LAI in each grid cell (Fig. 2) and the errors assigned to these data products. The agreement for the SOC dataset by Hugelius et al. (2013a) is a 1:1 relationship (R2 = 1.0; RMSE = 0.95 kg C m−2), reflecting a straightforward model parameterisation. The biomass product from Carvalhais et al. (2014) was well correlated (R2 = 0.97; RMSE = 0.46 kg C m−2), but CARDAMOM was consistently biased ∼28 % low. MODIS LAI data were also well correlated (R2 = 0.79; RMSE = 0.42 kg C m−2) but ∼28 % higher than CARDAMOM analyses. These biases (Fig. 2) likely arise due to a low estimate in the photosynthesis model (ACM) used in CARDAMOM, which propagates through the C cycle. CARDAMOM balances uncertainty in data products and the models (ACM photosynthesis model and DALEC2) to generate a weighted analysis typical of Bayesian approaches. The CARDAMOM analysis 90 % confidence interval (CI) includes the 1:1 line for biomass and LAI (Fig. 2), indicating that the likelihoods on C cycle analyses include the expected value of the observations.
Figure 3Original gross primary productivity (GPP; Jung et al., 2017) and heterotrophic respiration (Rh; Hashimoto et al., 2015) datasets used in the data validation process (a, b), estimated GPP and Rh by CARDAMOM (c, d), and their respective goodness-of-fit statistics between original and assimilated datasets (e, f). Stippling indicates locations where the independent datasets are within CARDAMOM's 5th and 95th percentiles.
The degree to which posterior distributions were constrained from the prior distributions in each of the 17 model parameters and 6 initial stock sizes (Table S2) varied considerably depending on the parameters in question and their related processes (Table 2 and Fig. S2). The 90 % CI posterior range of foliar, wood, labile, and SOM C stocks (Cfoliar, Cwood, Clabile, and Csom) as well as parameters such as allocation to foliage (ffol) and lifespan (L) were considerably reduced (>80 % uncertainty reduction compared to priors) most likely controlled by the information on LAI, biomass, and SOC constraints. Contrarily, parameters that have not been regulated in any way in the MHMCMC algorithm, i.e. turnover processes such as litter mineralisation (MRlitter), root turnover (TORroots), wood turnover (TORwood), decomposition rates (Drate), and initial C stock such as litter (Clitter) were found to be poorly constrained (<20 % uncertainty reduction). Overall, the uncertainty reduction classified by processes and ranked from most to least constrained estimated a 71 % reduction for C stocks, a 67 % reduction for C allocation, 59 % for plant phenology, and 31 % for C turnover related parameters. Although there are no substantial differences between tundra and taiga, Croots was better constrained in tundra regions (42 %), while leaf onset day (Bday), leaf fall day (Fday), and leaf fall duration (Lf) were better constrained in taiga regions (>18 % or more).
Table 2Parameter uncertainty reduction in percentage ranked from least (red) to most (blue) constrained in the pan-Arctic, tundra, and taiga domains. The reduction percentage is calculated based on the difference between the 90 % CI prior range and the 90 % CI posterior range.
## 3.3 Independent evaluation: from global to local scale
We compared our estimates of GPP and Rh with independent datasets to evaluate the model performance (Fig. 3). We found GPP to be well correlated (R2=0.81; RMSE = 0.43 kg C m−2) but biased lower (∼53 %) compared to Jung et al.'s (2017) GPP estimates. There are in general very few pixels where the FLUXCOM product falls within CARDAMOM's 90 % confidence interval. Additionally, the Rh product from Hashimoto et al. (2015) is less consistent with our estimates (R2=0.40; RMSE = 0.09 kg C m−2), presenting a tendency towards lower values in tundra pixels and higher values in taiga pixels. The spatial variability of Rh is considerably smaller in Hashimoto et al. (2015) compared to our CARDAMOM estimates. Rh falls within the 90 % confidence interval of CARDAMOM in most of the pan-Arctic region due to the fact that the Rh uncertainties are significant (Fig. 3). This finding confirms the uncertainties previously noted in modelled respiratory processes (Table 1) where the upper P95 in Rh dominated NEE's uncertainties but also the soil C stocks and transit times.
Figure 4Monthly aggregated seasonal variability of observed (FLUXNET2015) and modelled (CARDAMOM) C fluxes (NEE, net ecosystem exchange; GPP, gross primary production; Reco, ecosystem respiration) across eight low- and high-Arctic sites (Hakasia, Kobbefjord, Manitoba, Poker Flat, Samoylov, Tiksi, UCI-1998, and Zackenberg). Each of these sites, located in different countries (RU – Russia; GL – Greenland; CA – Canada; US – United States) feature different meteorological conditions and vegetation types (Table S4). Uncertainties represent the 25th and 50th percentiles (darker shade) and the 5th and 95th percentiles (lighter shade) of both field observations and the CARDAMOM framework.
For comparison with direct ground observations from the FLUXNET2015 dataset, we report here monthly aggregated P50 ± P05–P95 estimates of NEE, GPP, and Reco to show timing and magnitudes but also to diagnose whether CARDAMOM is in general agreement with flux tower data. Overall, CARDAMOM performed well in simulating observed NEE (R2 = 0.66; RMSE = 0.51 g C m−2 per month; bias = 0.16 g C m−2 per month), GPP (R2 = 0.85; RMSE = 0.89 g C m−2 per month; bias = 0.5 g C m−2 per month) and Reco (R2 = 0.82; RMSE = 0.63 g C m−2 per month; bias = 0.35 g C m−2 per month) across eight sub-Arctic and high-Arctic sites from the FLUXNET2015 dataset (Fig. 4; Table S6). CARDAMOM NEE is ∼25 % lower than FLUXNET2015, while GPP and Reco are 30 % and ∼10 % higher, respectively. This mismatch is important in the context of the FLUXCOM GPP upscaling, 50 % higher than CARDAMOM GPP. At some sites, such as Hakasia, Samoylov, Poker Flat, and Manitoba (NEE R2 = 0.73; GPP R2 = 0.92; Reco R2 = 0.88), CARDAMOM better matches the seasonality and the magnitude of the C fluxes than at the rest, i.e. Tiksi, Kobbefjord, Zackenberg, and UCI-1998 (NEE R2 = 0.58; GPP R2 = 0.67; Reco R2 = 0.67). In general, CARDAMOM captured the beginning and the end of the growing season well (Fig. 4), although the assimilation system has some bias due to (1) a difference in timing (e.g. earlier shifts of the peak of the growing season in Manitoba GPP and Reco and an earlier end of the growing season in Poker Flat NEE) and (2) differences in flux magnitudes (such as in Hakasia GPP and Reco and Kobbefjord NEE).
## 3.4 Benchmarking ISI-MIP2a models with CARDAMOM
We used our highest confidence retrievals of NPP, Cveg, and TTveg (i.e. retrievals including assimilated LAI, biomass, and SOC) to benchmark the performance of the GVMs from the ISI-MIP2a project. In this assessment we compared not only their spatial variability across the pan-Arctic, tundra, and taiga regions (Fig. 5) but also the degree of agreement of their mean model ensemble within the 90 % confidence interval of our assimilation framework (Fig. 6, Table 3). NPP estimates (RMSE = 0.1 kg C m−2 yr−1; R2 = 0.44) are in better agreement than Cveg (RMSE = 1.8 kg C m−2; R2 = 0.22) and TTveg (RMSE = 4.1 years; R2 = 0.12). The assessed GVMs estimated on average 8 % lower NPP, 16 % higher Cveg, and 22 % longer TTveg than CARDAMOM across the entire pan-Arctic domain (Figs. 5 and 6). Thus, at regional aggregation CARDAMOM analyses agreed more closely with ISI-MIP2a models than with FLUXCOM (51 % difference) and with the Carvalhais et al. (2014) biomass data (28 % bias).
Figure 5Central tendency and variability of NPP (net primary production), Cveg (vegetation C stock), TTveg (vegetation transit time) estimated by CARDAMOM (orange), and ISI-MIP2a models (grey) in the pan-Arctic, tundra, and taiga regions. The box–whisker plots comprise the estimations between the 5th and 95th percentiles, and the box encompasses the 25th to 75th percentiles. The line in each box marks the median of studied variables in each region.
Figure 6NPP (net primary production), Cveg (vegetation C stock), and TTveg (vegetation transit time) ratios between ISI-MIP2a model ensembles (DLEM, LPJmL, LPJ-GUESS, ORCHIDEE, VEGAS, and VISIT) and CARDAMOM. Stippling indicates locations where the ISI-MIP2a model mean is within CARDAMOM's 5th and 95th percentiles.
Table 3 Statistics of linear fit between the CARDAMOM framework (independent) and the ISI-MIP2a models (dependent) per individual model and per NPP (net primary production; kg C m−2 yr−1), Cveg (vegetation C stock; kg C m−2), and TTveg (vegetation transit time; years). The units for RMSE and bias are kg C m−2 yr−1 for NPP, kg C m−2 for Cveg, and years for TTveg.
The poor spatial agreement regarding TTveg between CARDAMOM and ISI-MIP2a (Table 3) is indicative of uncertainties in the internal C dynamics of these models. For instance, the slopes in Table 3 are steep and the R2 are poor – so there is substantial disagreement in the spatial pattern, not just a large bias. For the ISI-MIP2a comparison, R2 values ranged from 0.03 to 0.52 for NPP, from 0.00 to 0.31 for Cveg, and from 0.00 to 0.24 for TTveg. Spatially, the stippling in Fig. 6 indicates areas where the GVMs are within the 90 % CI of CARDAMOM; agreement is best over the taiga domain rather than in tundra for TTveg. The benchmark area of consistency (stippling) is more extensive for Cveg and TTveg than for NPP. Thus, while there is a stronger spatial correlation for NPP between CARDAMOM and GVMs (Table 3), there is a clearer bias for NPP. Some models (LPJ-GUESS and ORCHIDEE) systematically calculate lower values in all the assessed variables, while others (LPJmL and VISIT) calculate higher estimates. The models in closer agreement with CARDAMOM were DLEM (5 % difference) and LPJ-GUESS (17 %), while VEGAS (44 %) and ORCHIDEE (56 %) were the models with larger discrepancies (Table 3; Figs. 5 and 6).
The attribution analysis to identify the origin of bias from ISI-MIP2a models indicated a joint split between NPP and Cveg for TTveg error simulated in GVMs (Fig. 7). The distribution of the differences relative to CARDAMOM revealed that the higher error (i.e. the lower overlapped area, and by extension the largest contributor to TTveg biases) comes from ISI-MIP2a NPP with a 69 % agreement in the distribution, while Cveg agrees to 72 %. In fact, the TTveg R2 for each model (Table 3) is very close to the product of the NPP R2 and Cveg R2 for that model, i.e. the uncertainty in the TTveg is a direct interaction of NPP and Cveg uncertainty (R2 of the correlation: 0.71). This finding supports Fig. 6, which shows TTveg error derives equally from both NPP and Cveg.
Figure 7Distribution functions derived from the attribution analysis used to estimate the origin of vegetation transit time (TTveg) bias from ISI-MIP2a models. The CONTROL TT (grey) includes both biomass (Cveg) and net primary production (NPP) estimated by CARDAMOM. EXPERIMENT A TT (dark red) incorporates Cveg from ISI-MIP2a and NPP from CARDAMOM, while EXPERIMENT B TT (dark green) includes NPP from ISI-MIP2a and Cveg from CARDAMOM. The lower the overlapped area between control and experimental TT, the larger the contribution for TT biases. For readability purposes, the scale on the x axis is limited to 20 years.
4 Discussion
## 4.1 Pan-Arctic retrievals of C cycle
The CARDAMOM framework has been used to evaluate the terrestrial pan-Arctic C cycle in tundra and taiga at coarse spatio-temporal scale (at monthly and annual time steps for the 2000–2015 period and at 1× 1 grid cells). Overall, we found that the pan-Arctic region was most likely a consistent sink of C (weaker in tundra and stronger in taiga), although the large uncertainties derived from respiratory processes (Table 1) strongly increase the 90 % confidence interval uncertainty. We estimate that tundra experienced 62 % longer transit times in litter and SOM C stocks than taiga ecosystems. Further, the contribution of Ra and Rh to total ecosystem respiration was similar in tundra (51 %, 49 %) but dominated by Ra in taiga (57 % compared to 43 %).
CARDAMOM retrievals are consistent with outcomes from relevant papers such as the (i) C flux observations and model estimates reported in McGuire et al. (2012), (ii) C stocks and transit times described by Carvalhais et al. (2014), and (iii) NPP, C stocks, and turnover rates stated in Thurner et al. (2017).
• i.
The CARDAMOM NEE estimates reported in this study for the tundra domain fall within the range of values compiled by McGuire et al. (2012) considering field observations, regional-process-based models, global-process-based models, and inversion models. The authors reported that Arctic tundra was a sink of CO2 of −150 Tg C yr−1 (SD = 45.9) across the 2000–2006 period over an area of 9.16 × $10^6$ km2. Here, CARDAMOM NEE estimated −129 Tg C yr−1 over an area of 8.1 × $10^6$ km2 for the same period. This exhaustive assessment of the C balance in Arctic tundra included approximately 250 estimates using the chamber and eddy covariance method from 120 published papers (McGuire et al., 2012; Supplement 1) with an area-weighted mean of means of −202 Tg C yr−1. The regional models, including runs from LPJ-WHyMe (Wania et al., 2009a, b), ORCHIDEE (Koven et al., 2011), TEM6 (McGuire et al., 2010), and the TCF model (Kimball et al., 2009), reported an NEE of −187 Tg C yr−1 and GPP, NPP, Ra, and Rh of 350, 199, 151, and 182 g C m−2 yr−1, respectively. GVM applications such as CLM4C (Lawrence et al., 2011), CLM4CN (Thornton et al., 2009), Hyland (Levy et al., 2004), LPJ (Sitch et al., 2003), LPJ-GUESS (Smith et al., 2001), O-CN (Zaehle and Friend, 2010), SDGVM (Woodward et al., 1995), and TRIFFID (Cox, 2001) estimated an NEE of −93 Tg C yr−1 and GPP, NPP, Ra, and Rh of 272, 162, 83, and 144 g C m−2 yr−1. For the same period, CARDAMOM has estimated 330, 167, 160, and 154 g C m−2 yr−1, respectively, for the same gross C fluxes.
• ii.
Carvalhais et al. (2014) estimated a total ecosystem carbon (Ctot) of 20.5 (8.0, 52.5) kg C m−2 for tundra and 24.8 (15.2, 58.0) kg C m−2 for taiga, while values from CARDAMOM were 24.6 (10.8, 50.6) kg C m−2 for tundra and 27.7 (12.7, 51.2) kg C m−2 in taiga (Fig. 5; Table 1) for the same area. Thus, Carvalhais et al.'s (2014) Ctot product stored 20 % and 12 % less carbon in tundra and taiga, respectively, than CARDAMOM. Overall, CARDAMOM calculated 20 % and 6 % longer transit times for tundra and taiga, respectively, with average values of 80.8 (21.8, 195.2) years in tundra and 51.2 (22.1, 109.3) years in taiga (Table 1) compared to the 64.4 (25.7, 259.8) years in tundra and 48.2 (111.6, 24.9) years in taiga in Carvalhais et al. (2014). These numbers have been retrieved from the same biome classification and they include the 90 % confidence interval of the assessed spatial variability. Also, we applied a correction factor of TTgpp = TTnpp × (1 − fraction of GPP respired) to be comparable with the Carvalhais et al. (2014) TT. Both datasets agree on the fact that at high (cold) latitudes, first tundra and secondly taiga have the longest transit times on the entire globe (Bloom et al., 2016; Carvalhais et al., 2014).
• iii.
A recent study from Thurner et al. (2017) assessed temperate and taiga-related TTs, presenting a 5-year average NPP dataset applying both MODIS (Running et al., 2004; Zhao et al., 2005) and BETHY/DLR (Tum et al., 2016) products and an innovative biomass product (Thurner et al., 2014) accounting for both forest and non-forest vegetation. Our estimate of TTveg for the exact same period is 5.3 (1.9, 18.2) years, compared to Thurner et al.'s (2017) TT, 8.2 (5.5, 11.5) years using MODIS, and 6.5 (4.2, 8.7) years using BETHY/DLR. A note of caution here: the numbers reported by the authors are turnover rates, which are converted to transit times by applying the inverse of the turnover rates (TTveg = 1/turnover rate). Additionally, their NPP estimates, 0.35 and 0.45 kg C m−2 yr−1 from MODIS and BETHY/DLR, are only 5 % more productive on average than the CARDAMOM NPP estimate (0.4 (0.3, 0.6) kg C m−2 yr−1). The biomass derived from Thurner et al. (2014) (3.0±1.1 kg C m−2) is ∼30 % lower than CARDAMOM Cveg (2.2 (1.1, 5.0) kg C m−2), calculated for the same period and for the same taiga domain.
In general, we found a reasonable agreement between CARDAMOM and assimilated and independent data at pan-Arctic scale. CARDAMOM retrievals of assimilated data are in good agreement with the SOC (Fig. 2). The simulation of TTdom is weakly constrained (Table 1) – our analysis adjusts TT to match mapped stocks, hence the strong match of modelled to mapped SOC. So, independent data on TTdom data (e.g. 14C) are required across the pan-Arctic region to provide a stronger constraint on process parameters and reduce the very broad confidence intervals of CARDAMOM analyses. The low bias in mean estimates of LAI and biomass (Fig. 2) likely relates to the strong prior on photosynthesis estimates from the ACM model, which lacks a temperature acclimation for high latitudes in this implementation. However, the uncertainty in the biomass and LAI analyses spans the magnitude of the bias. So, CARDAMOM generates some parameters sets that are consistent with observations. CARDAMOM produces analyses that reproduce the pattern of LAI, GPP, biomass, and SOC (Figs. 2 and 3) – this demonstrates that the DALEC model structure can be calibrated to simulate the links between these variables as a function of mass balance constraints and realistic process interactions and climate sensitivities.
There are clear biases in CARDAMOM analyses compared to independent global upscaled GPP (Jung et al., 2017) and Rh products (Hashimoto et al., 2015) (Fig. 3). However, CARDAMOM resolves the spatial pattern in GPP effectively, while the spatial mismatch in Rh estimates is marked (Fig. 3), echoing the large uncertainty found in NEE (Fig. 1, Table 1). One difference with Hashimoto et al.'s (2015) Rh model is the lack of moisture limitation on respiration in CARDAMOM. Conversely, GPP is relatively well-constrained in space through the assimilation of LAI and a prior for productivity (Bloom et al., 2016), although an important mismatch has been found: CARDAMOM GPP is 50 % lower than FLUXCOM but 30 % higher than FLUXNET2015 EC data.
The agreement between CARDAMOM analyses and EC data is high given the scale difference. A direct point-to-grid-cell comparison with local observations derived from the FLUXNET2015 dataset (Fig. 4, Table S6) is inherently challenging. CARDAMOM outputs cover 1° × 1° grid cells, whereas local eddy covariance flux measurements are of the order of 1–10 ha. Thus, for observational sites located in areas with complex terrain, such as Kobbefjord in coastal Greenland, the agreement can be expected to be low. For inland forest sites, such as Poker Flat in Alaska, there may be fewer differences in vegetation characteristics and local climatology between the local-scale measurement footprint and the corresponding CARDAMOM grid cell. This scaling issue is likely to have a larger impact on flux magnitudes than on seasonal dynamics. In general, CARDAMOM captured the seasonal dynamics in NEE, GPP, and Reco well (Fig. 4, Table S6), although the monthly model time step does reduce skill in the shoulder seasons. There was a consistent timing mismatch in the early season flux increase, with CARDAMOM predicting earlier growing season onset compared with observations. This is likely due to the impact of snow cover, which is not explicitly included in the CARDAMOM framework.
For a further independent evaluation of CARDAMOM outputs, we compare the tundra and boreal estimates to plot-scale flux and stock information. For tundra, Street et al. (2012) calculate growing season GPP estimates of 263–380 g C m−2 for Empetrum nigrum communities and 295–386 g C m−2 for Betula nana communities, which is consistent with the ranges in Fig. 1 for tundra. Biomass stocks for Arctic tundra recorded in the Arctic LTER (Long Term Ecological Research) at Toolik Lake range from 105 to 1160 g C m−2 (Hobbie and Kling, 2014), which are consistent with the estimates from CARDAMOM, albeit at the lower end of the model estimates. For boreal forests, Goulden et al. (2011) report annual GPP estimates across a chronosequence of stands, and thus across a range of canopy densities, which varied from 450 to 720 g C m−2 yr−1. These data are consistent with the span of GPP in CARDAMOM (Fig. 1), again best matching the lower end of the model estimates. In the same study, the vegetation C stock estimates varied from 100 to 5000 g C m−2, consistent with CARDAMOM, with measurements of 10- to 40-year-old boreal stands best matching the CARDAMOM median estimate of ∼1500 g C m−2. We conclude from comparisons with site data that CARDAMOM analyses are broadly consistent, with some tendency for CARDAMOM to have a high bias. This comparison is similar to the FLUXNET2015 evaluation of CARDAMOM, but it conflicts with the low bias estimated from the comparison of CARDAMOM with FLUXCOM GPP and the Carvalhais et al. (2014) biomass stock maps. It is possible that the scale differences between site-level products and landscape estimates are confounding these comparisons, but there is clearly a need to better understand these inconsistencies in C cycle estimates.
## 4.2 CARDAMOM as a model benchmarking tool
An ideal benchmarking tool for GVMs would compare model state variables and fluxes with multiple, independent, unbiased, error-characterised measurements collected repeatedly at the same temporal and spatial resolution. Of course, direct measurements of key C cycle variables like these are not available. Even at FLUXNET sites, GPP and Reco must be inferred, and NEE data are often gap-filled. Satellite data can provide continuous fields but do not directly measure ecological variables like biomass or LAI, so calibrated models are required to generate ecological products. Atmospheric conditions can introduce poorly quantified biases and data gaps into optical data. Upscaling of FLUXNET data requires other spatial data, e.g. MODIS LAI, which challenges the characterisation of error and generates complex hybrid products. We suggest that CARDAMOM provides some of the requirements of the ideal benchmark system – an error-characterised, complete analysis of the C cycle that is based on a range of observational products. CARDAMOM includes its own C cycle model; this has the advantage of evaluating the observational data for consistency (e.g. with mass balance), propagating error across the C cycle, and generating internal model variables such as TT. Further, the model is of intermediate complexity and independent of the benchmarked models.
Using CARDAMOM as a benchmarking tool for six GVMs, we found disagreements that varied among models for spatial estimates of NPP, Cveg, and TTveg across the pan-Arctic (Fig. 6) in comparison with the CARDAMOM confidence intervals. GVM NPP estimates had a higher correlation with the CARDAMOM analyses than TTveg and Cveg (Table 3), but because the CARDAMOM confidence intervals on NPP were relatively narrow (Fig. 1), the benchmarking scores for GVM NPP were relatively poor (Fig. 6). Consequently, we used CARDAMOM to calculate the relative contributions of productivity and biomass to the transit time bias by applying a simple attribution analysis (Fig. 7; sketched below). We concluded that the largest bias in transit times originated not from a deficient understanding of one single component, but from a roughly equal combination of productivity and biomass errors. Therefore, this study partially agrees with previous studies (Friend et al., 2014; Nishina et al., 2014; Thurner et al., 2017) highlighting the deficient representation of transit times or turnover dynamics, but we further suggest that global vegetation and Earth system modellers need to focus on productivity and vegetation C stock dynamics to improve internal C dynamics. A major challenge for GVMs is the spin-up problem (Exbrayat et al., 2014). GVMs need to find a way to ensure that the spin-up process produces biomass estimates consistent with the growing availability of biomass maps from Earth observations. CARDAMOM solves this problem by avoiding spin-up: its fast run time allows the biomass maps to act as a constraint on the probability distribution of model parameters. There may be opportunities to use CARDAMOM-style approaches to help the GVM community address this problem.
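As an illustration of how such an attribution can be decomposed – a first-order sketch, not necessarily the exact procedure behind Fig. 7 – the definition of vegetation transit time as stock divided by flux splits the relative TTveg bias of a model (m) against the benchmark (b) additively into a biomass term and a productivity term:

```latex
% Since TT_veg = C_veg / NPP, taking logarithms gives
\ln \mathrm{TT}_{\mathrm{veg}} = \ln C_{\mathrm{veg}} - \ln \mathrm{NPP}
% so the relative (logarithmic) bias decomposes as
\Delta \ln \mathrm{TT}_{\mathrm{veg}}
   = \ln\!\frac{C_{\mathrm{veg}}^{(m)}}{C_{\mathrm{veg}}^{(b)}}
   - \ln\!\frac{\mathrm{NPP}^{(m)}}{\mathrm{NPP}^{(b)}}
```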
## 4.3 Outlook
Although CARDAMOM estimates of pan-Arctic C cycling are in moderately good agreement with observations and data constraints, we have not included important components controlling ecosystem processes that could potentially improve our understanding of C feedbacks, with an emphasis on high-latitude ecosystems. For example, permafrost thaw and the release of permafrost C are not represented in CARDAMOM, but their influence on vegetation dynamics, permafrost degradation, and soil respiration is critical at high latitudes (Koven et al., 2015; Parazoo et al., 2018). Also, Koven et al. (2017) have shown that soil thermal regimes are key to resolving the long-term vulnerability of soil C. Moreover, we have characterised neither snow dynamics nor the insulating effect of snow on respiratory losses during wintertime (López-Blanco et al., 2018). Further, methane emissions, another important contributor to the total C budget (Mastepanov et al., 2008; Zona et al., 2016), were excluded from this modelling exercise, since they are difficult to model owing to their three complex transport mechanisms (Walter et al., 2001).
However, our approach of using an intermediate-complexity model has the strong advantage of allowing very large (10^7) model ensembles per pixel and thus a thorough exploration of model–parameter interactions that is not feasible with typical GVMs. Other viable options include using emulators (Fer et al., 2018) and particle filters (Arulampalam et al., 2002), but MCMC methods provide the most detailed description of error distributions. There remains a strong argument to utilise intermediate-complexity models like DALEC2 to evaluate the minimum level of detail required to represent ecosystem processes consistent with local observations and to allow testing of alternative model structures. Assimilating further data products, for instance patterns in soil hydrology and snow states across the pan-Arctic from Earth observation, could provide useful information on spatio-temporal controls on soil activity and microbial metabolism to constrain below-ground processes. This information would need to be tied to process-level information on SOM turnover generated from experimental studies and included in updated versions of DALEC. Thus, more field observations are crucial across the pan-Arctic, specifically on the decomposition and TT of SOC (He et al., 2016) and on respiratory processes such as the partitioning of Reco into Ra and Rh (Hobbie et al., 2000; McGuire et al., 2000) across the growing season and also during wintertime (Commane et al., 2017; Zona et al., 2016).
Our approach has used estimated observation error and inflated this to include unknown errors associated with model process representation. We currently lack any better knowledge of the combined uncertainties arising from model representation errors and observation errors. We acknowledge that all models are an imperfect representation of C dynamics, which generates irreconcilable model–data errors due to the inherent assumptions in model structure. Future analyses should investigate model structural error, using, for example, error-explicit Bayesian approaches (Xu et al., 2017) or comparing the likelihoods of alternative model structures of varying complexity. Using multiple sources of data, we have highlighted systematic errors in the model at the landscape scale (Figs. 2 and 3) for LAI, GPP, and biomass. However, these biases are not consistent across the site-scale evaluations. Thus, a next step would be to include explicitly both random and systematic process errors for C fluxes in the data assimilation. These errors could be determined from field-scale evaluation of model process representation (Table 2) using, e.g., FLUXNET2015 data. We also need to better understand the error associated with the landscape heterogeneity of C stocks and fluxes in order to upscale from flux tower observations or direct measurements of LAI to the landscape pixel scale. This could be achieved by constructing robust observation error models (Dietze, 2017) from field to pixel scale for, e.g., GPP, LAI, and foliar N. The evaluation of the sensitivity of C cycling DA analyses to observation error has shown relatively low sensitivity to data gaps and random error in net ecosystem flux data (Hill et al., 2012), but further analyses of error sensitivity are required for multiple streams of stock data.
## 5 Conclusions
The Arctic is experiencing rapid environmental changes, which will influence the global C cycle. Using a data-assimilation framework, we have evaluated the current state of key C fluxes, stocks, and transit times for the pan-Arctic region for 2000–2015. We found that the pan-Arctic was a likely sink of C, weaker in tundra and stronger in taiga, but uncertainties around the respiration losses are still large, and so the region could be a source of C. Comparisons with global- and local-scale datasets demonstrate the capabilities of CARDAMOM for analysing the C cycle in the Arctic domain. CARDAMOM is a data-constrained and data-integrated analysis, evaluated for internal consistency, and is therefore a good candidate for benchmarking the performance of global vegetation models. We conclude that the GVM bias found in the transit time of vegetation C results from a combination of uncertainties in productivity processes and biomass in GVMs, and thus these are a major component of error in their forecasts. While spatial patterns in GVM predictions of NPP are reasonable, particularly in taiga, they have significant biases against the CARDAMOM benchmark. Improved mapping of vegetation and soil C stocks, and of their change over time, is required for better analytical constraint. Moreover, future work is required on assimilating data on soil hydrology, permafrost, and snow dynamics to improve accuracy and decrease uncertainties in below-ground processes. This work establishes the baseline for further process-based ecological analyses using the CARDAMOM data-assimilation system as a technique to constrain the pan-Arctic C cycle.
Code and data availability
CARDAMOM output used in this study is available from Exbrayat and Williams (2018) from the University of Edinburgh's DataShare service at https://doi.org/10.7488/ds/2334. The DALEC2 code is also available on Edinburgh DataShare at https://doi.org/10.7488/ds/2504 (Williams, 2019). Contact Mathew Williams for access to the CARDAMOM software.
Supplement
Author contributions
ELB, JFE, TRC, ML, and MW designed the research. JFE performed the model simulations. ELB, JFE, and MW analysed the data. ELB and MW prepared the paper with contributions from all co-authors.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
This work was supported in part by a scholarship from the Aarhus-Edinburgh Excellence in European Doctoral Education Project and by the eSTICC (eScience tools for investigating Climate Change in Northern High Latitudes) project, part of the Nordic Center of Excellence. This work was also supported by the Natural Environment Research Council (NERC) through the National Center for Earth Observation. The authors would like to thank Matthias Forkel, and the anonymous reviewer and editor for their valuable comments that improved the quality of the paper. We thank Nuno Carvalhais for discussion that helped to focus our ideas. Thanks are also due to Isabel de Andrés Velasco for contributing to the artistic design in Fig. 1. Data-assimilation procedures were performed using the Edinburgh Compute and Data Facility resources. This work used eddy covariance data acquired and shared by the FLUXNET community, including the following networks: AmeriFlux, AfriFlux, AsiaFlux, CarboAfrica, CarboEuropeIP, CarboItaly, CarboMont, ChinaFlux, Fluxnet-Canada, GreenGrass, ICOS, KoFlux, LBA, NECC, OzFlux-TERN, TCOS-Siberia, and USCCC. The ERA-Interim reanalysis data are provided by ECMWF and processed by LSCE. The FLUXNET eddy covariance data processing and harmonisation was carried out by the European Fluxes Database Cluster, AmeriFlux Management Project, and Fluxdata project of FLUXNET, with the support of CDIAC and ICOS Ecosystem Thematic Center and the OzFlux, ChinaFlux, and AsiaFlux offices. We acknowledge the modelling groups and the ISI-MIP coordination team for their roles in producing and making the ISI-MIP model output available.
Review statement
This paper was edited by Ning Zeng and reviewed by Matthias Forkel and one anonymous referee.
References
Ahlström, A., Schurgers, G., Arneth, A., and Smith, B.: Robustness and uncertainty in terrestrial ecosystem carbon response to CMIP5 climate change projections, Environ. Res. Lett., 7, 044008, https://doi.org/10.1088/1748-9326/7/4/044008, 2012.
Akihiko, I., Kazuya, N., Christopher, P. O. R., Louis, F., Alexandra-Jane, H., Guy, M., Ingrid, J., Hanqin, T., Jia, Y., Shufen, P., Catherine, M., Richard, B., Thomas, H., Jörg, S., Sebastian, O., Sibyll, S., Philippe, C., Jinfeng, C., Rashid, R., Ning, Z., and Fang, Z.: Photosynthetic productivity and its efficiencies in ISIMIP2a biome models: benchmarking for impact assessment studies, Environ. Res. Lett., 12, 085001, https://doi.org/10.1088/1748-9326/aa7a19, 2017.
AMAP: Snow, water, ice and permafrost in the Arctic (SWIPA) 2017, Arctic Monitoring and Assessment Programme (AMAP) Oslo, Norway, xiv + 269 pp., 2017.
Anav, A., Murray-Tortarolo, G., Friedlingstein, P., Sitch, S., Piao, S., and Zhu, Z.: Evaluation of Land Surface Models in Reproducing Satellite Derived Leaf Area Index over the High-Latitude Northern Hemisphere. Part II: Earth System Models, Remote Sensing, 5, 3637, https://doi.org/10.3390/rs5083637, 2013.
Arulampalam, M. S., Maskell, S., Gordon, N., and Clapp, T.: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking, IEEE T. Signal Process., 50, 174–188, https://doi.org/10.1109/78.978374, 2002.
Baldocchi, D. D.: Assessing the eddy covariance technique for evaluating carbon dioxide exchange rates of ecosystems: past, present and future, Global Change Biol., 9, 479–492, https://doi.org/10.1046/j.1365-2486.2003.00629.x, 2003.
Beer, C., Reichstein, M., Tomelleri, E., Ciais, P., Jung, M., Carvalhais, N., Rödenbeck, C., Arain, M. A., Baldocchi, D., Bonan, G. B., Bondeau, A., Cescatti, A., Lasslop, G., Lindroth, A., Lomas, M., Luyssaert, S., Margolis, H., Oleson, K. W., Roupsard, O., Veenendaal, E., Viovy, N., Williams, C., Woodward, F. I., and Papale, D.: Terrestrial Gross Carbon Dioxide Uptake: Global Distribution and Covariation with Climate, Science, 329, 5993, https://doi.org/10.1126/science.1184984, 2010.
Belelli Marchesini, L., Papale, D., Reichstein, M., Vuichard, N., Tchebakova, N., and Valentini, R.: Carbon balance assessment of a natural steppe of southern Siberia by multiple constraint approach, Biogeosciences, 4, 581–595, https://doi.org/10.5194/bg-4-581-2007, 2007.
Bintanja, R. and Andry, O.: Towards a rain-dominated Arctic, Nature Clim. Change, 7, 263–267, https://doi.org/10.1038/nclimate3240, 2017.
Bloom, A. A. and Williams, M.: Constraining ecosystem carbon dynamics in a data-limited world: integrating ecological “common sense” in a model–data fusion framework, Biogeosciences, 12, 1299–1315, https://doi.org/10.5194/bg-12-1299-2015, 2015.
Bloom, A. A., Exbrayat, J.-F., van der Velde, I. R., Feng, L., and Williams, M.: The decadal state of the terrestrial carbon cycle: Global retrievals of terrestrial carbon allocation, pools, and residence times, P. Natl. Acad. Sci. USA, 113, 1285–1290, https://doi.org/10.1073/pnas.1515160113, 2016.
Bond-Lamberty, B., Wang, C., and Gower, S. T.: Net primary production and net ecosystem production of a boreal black spruce wildfire chronosequence, Global Change Biol., 10, 473–487, https://doi.org/10.1111/j.1529-8817.2003.0742.x, 2004.
Bontemps, S., Defourny, P., Bogaert, E., Arino, O., Kalogirou, V., and Perez, J.: GLOBCOVER 2009 – Products description and validation report, UCLouvain & ESA Team, 2011.
Brown, J., Ferrians Jr., O. J., Heginbottom, J. A., and Melnikov, E. S.: Circum-Arctic map of permafrost and ground-ice conditions, Report 45, US Geological Survey, 1997.
Canadell, J. G., Le Quéré, C., Raupach, M. R., Field, C. B., Buitenhuis, E. T., Ciais, P., Conway, T. J., Gillett, N. P., Houghton, R. A., and Marland, G.: Contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks, P. Natl. Acad. Sci. USA, 104, 18866–18870, https://doi.org/10.1073/pnas.0702737104, 2007.
Carvalhais, N., Forkel, M., Khomik, M., Bellarby, J., Jung, M., Migliavacca, M., Saatchi, S., Santoro, M., Thurner, M., Weber, U., Ahrens, B., Beer, C., Cescatti, A., Randerson, J. T., and Reichstein, M.: Global covariation of carbon turnover times with climate in terrestrial ecosystems, Nature, 514, 213–217, https://doi.org/10.1038/nature13731, 2014.
Commane, R., Lindaas, J., Benmergui, J., Luus, K. A., Chang, R. Y.-W., Daube, B. C., Euskirchen, E. S., Henderson, J. M., Karion, A., Miller, J. B., Miller, S. M., Parazoo, N. C., Randerson, J. T., Sweeney, C., Tans, P., Thoning, K., Veraverbeke, S., Miller, C. E., and Wofsy, S. C.: Carbon dioxide sources from Alaska driven by increasing early winter respiration from Arctic tundra, P. Natl. Acad. Sci. USA, 114, 5361–5366, https://doi.org/10.1073/pnas.1618567114, 2017.
Cox, P. M.: Description of the “TRIFFID” Dynamic Global Vegetation Model, Hadley Centre technical note 24, Met Office, UK, 2001.
Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P., Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, P., Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bidlot, J., Bormann, N., Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S. B., Hersbach, H., Hólm, E. V., Isaksen, L., Kållberg, P., Köhler, M., Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J. J., Park, B. K., Peubey, C., de Rosnay, P., Tavolato, C., Thépaut, J. N., and Vitart, F.: The ERA-Interim reanalysis: configuration and performance of the data assimilation system, Q. J. Roy. Meteorol. Soc., 137, 553–597, https://doi.org/10.1002/qj.828, 2011.
De Kauwe, M. G., Disney, M. I., Quaife, T., Lewis, P., and Williams, M.: An assessment of the MODIS collection 5 leaf area index product for a region of mixed coniferous forest, Remote Sens. Environ., 115, 767–780, https://doi.org/10.1016/j.rse.2010.11.004, 2011.
DeLucia, E. H., Drake, J. E., Thomas, R. B., and Gonzalez-Meler, M.: Forest carbon use efficiency: is respiration a constant fraction of gross primary production?, Global Change Biol., 13, 1157–1167, https://doi.org/10.1111/j.1365-2486.2007.01365.x, 2007.
Dietze, M. C.: Ecological Forecasting, Princeton University Press, Princeton, 2017.
Exbrayat, J.-F., Pitman, A. J., and Abramowitz, G.: Response of microbial decomposition to spin-up explains CMIP5 soil carbon range until 2100, Geosci. Model Dev., 7, 2683–2692, https://doi.org/10.5194/gmd-7-2683-2014, 2014.
Exbrayat, J.-F. and Williams, M.: CARDAMOM panarctic retrievals 2000–2015, National Centre for Earth Observation and School of GeoSciences, University of Edinburgh, https://doi.org/10.7488/ds/2334, 2018.
Exbrayat, J. F., Bloom, A. A., Falloon, P., Ito, A., Smallman, T. L., and Williams, M.: Reliability ensemble averaging of 21st century projections of terrestrial net primary productivity reduces global and regional uncertainties, Earth Syst. Dynam., 9, 153–165, https://doi.org/10.5194/esd-9-153-2018, 2018.
FAO/IIASA/ISRIC/ISSCAS/JRC: Harmonized World Soil Database (version 1.21), FAO, Rome, Italy and IIASA, Laxenburg, Austria, 2012.
Fer, I., Kelly, R., Moorcroft, P. R., Richardson, A. D., Cowdery, E. M., and Dietze, M. C.: Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation, Biogeosciences, 15, 5801–5830, https://doi.org/10.5194/bg-15-5801-2018, 2018.
Fisher, J. B., Sikka, M., Oechel, W. C., Huntzinger, D. N., Melton, J. R., Koven, C. D., Ahlström, A., Arain, M. A., Baker, I., Chen, J. M., Ciais, P., Davidson, C., Dietze, M., El-Masri, B., Hayes, D., Huntingford, C., Jain, A. K., Levy, P. E., Lomas, M. R., Poulter, B., Price, D., Sahoo, A. K., Schaefer, K., Tian, H., Tomelleri, E., Verbeeck, H., Viovy, N., Wania, R., Zeng, N., and Miller, C. E.: Carbon cycle uncertainty in the Alaskan Arctic, Biogeosciences, 11, 4271–4288, https://doi.org/10.5194/bg-11-4271-2014, 2014.
Forkel, M., Carvalhais, N., Rödenbeck, C., Keeling, R., Heimann, M., Thonicke, K., Zaehle, S., and Reichstein, M.: Enhanced seasonal CO2 exchange caused by amplified plant productivity in northern ecosystems, Science, 351, 696–699, https://doi.org/10.1126/science.aac4971, 2016.
Fox, A., Williams, M., Richardson, A. D., Cameron, D., Gove, J. H., Quaife, T., Ricciuto, D., Reichstein, M., Tomelleri, E., Trudinger, C. M., and Van Wijk, M. T.: The REFLEX project: Comparing different algorithms and implementations for the inversion of a terrestrial ecosystem model against eddy covariance data, Agr. Forest Meteorol., 149, 1597–1615, https://doi.org/10.1016/j.agrformet.2009.05.002, 2009.
Friedlingstein, P., Meinshausen, M., Arora, V. K., Jones, C. D., Anav, A., Liddicoat, S. K., and Knutti, R.: Uncertainties in CMIP5 Climate Projections due to Carbon Cycle Feedbacks, J. Climate, 27, 511–526, https://doi.org/10.1175/jcli-d-12-00579.1, 2014.
Friend, A. D., Lucht, W., Rademacher, T. T., Keribin, R., Betts, R., Cadule, P., Ciais, P., Clark, D. B., Dankers, R., Falloon, P. D., Ito, A., Kahana, R., Kleidon, A., Lomas, M. R., Nishina, K., Ostberg, S., Pavlick, R., Peylin, P., Schaphoff, S., Vuichard, N., Warszawski, L., Wiltshire, A., and Woodward, F. I.: Carbon residence time dominates uncertainty in terrestrial vegetation responses to future climate and atmospheric CO2, P. Natl. Acad. Sci. USA, 111, 3280–3285, https://doi.org/10.1073/pnas.1222477110, 2014.
Goetz, S. J., Bunn, A. G., Fiske, G. J., and Houghton, R. A.: Satellite-observed photosynthetic trends across boreal North America associated with climate and fire disturbance, P. Natl. Acad. Sci. USA, 102, 13521–13525, https://doi.org/10.1073/pnas.0506179102, 2005.
Goulden, M. L., Munger, J. W., Fan, S.-M., Daube, B. C., and Wofsy, S. C.: Exchange of Carbon Dioxide by a Deciduous Forest: Response to Interannual Climate Variability, Science, 271, 1576–1578, https://doi.org/10.1126/science.271.5255.1576, 1996.
Goulden, M. L., McMillan, A. M. S., Winston, G. C., Rocha, A. V., Manies, K. L., Harden, J. W., and Bond-Lamberty, B. P.: Patterns of NPP, GPP, respiration, and NEP during boreal forest succession, Global Change Biol., 17, 855–871, https://doi.org/10.1111/j.1365-2486.2010.02274.x, 2011.
Graven, H. D., Keeling, R. F., Piper, S. C., Patra, P. K., Stephens, B. B., Wofsy, S. C., Welp, L. R., Sweeney, C., Tans, P. P., Kelley, J. J., Daube, B. C., Kort, E. A., Santoni, G. W., and Bent, J. D.: Enhanced Seasonal Exchange of CO2 by Northern Ecosystems Since 1960, Science, 341, 1085–1089, https://doi.org/10.1126/science.1239207, 2013.
Guimberteau, M., Zhu, D., Maignan, F., Huang, Y., Yue, C., Dantec-Nédélec, S., Ottlé, C., Jornet-Puig, A., Bastos, A., Laurent, P., Goll, D., Bowring, S., Chang, J., Guenet, B., Tifafi, M., Peng, S., Krinner, G., Ducharne, A., Wang, F., Wang, T., Wang, X., Wang, Y., Yin, Z., Lauerwald, R., Joetzjer, E., Qiu, C., Kim, H., and Ciais, P.: ORCHIDEE-MICT (v8.4.1), a land surface model for the high latitudes: model description and validation, Geosci. Model Dev., 11, 121–163, https://doi.org/10.5194/gmd-11-121-2018, 2018.
Hashimoto, S., Carvalhais, N., Ito, A., Migliavacca, M., Nishina, K., and Reichstein, M.: Global spatiotemporal distribution of soil respiration modeled using a global database, Biogeosciences, 12, 4121–4132, https://doi.org/10.5194/bg-12-4121-2015, 2015.
He, Y., Trumbore, S. E., Torn, M. S., Harden, J. W., Vaughn, L. J. S., Allison, S. D., and Randerson, J. T.: Radiocarbon constraints imply reduced carbon uptake by soils during the 21st century, Science, 353, 1419–1424, https://doi.org/10.1126/science.aad4273, 2016.
Hill, T. C., Ryan, E., and Williams, M.: The use of CO2 flux time series for parameter and carbon stock estimation in carbon cycle research, Global Change Biol., 18, 179–193, https://doi.org/10.1111/j.1365-2486.2011.02511.x, 2012.
Hobbie, J. E. and Kling, G. W.: Alaska's changing Arctic: Ecological consequences for tundra, streams, and lakes, Oxford University Press, Oxford, 2014.
Hobbie, S. E., Schimel, J. P., Trumbore, S. E., and Randerson, J. R.: Controls over carbon storage and turnover in high-latitude soils, Global Change Biol., 6, 196–210, https://doi.org/10.1046/j.1365-2486.2000.06021.x, 2000.
Hugelius, G., Bockheim, J. G., Camill, P., Elberling, B., Grosse, G., Harden, J. W., Johnson, K., Jorgenson, T., Koven, C. D., Kuhry, P., Michaelson, G., Mishra, U., Palmtag, J., Ping, C. L., O'Donnell, J., Schirrmeister, L., Schuur, E. A. G., Sheng, Y., Smith, L. C., Strauss, J., and Yu, Z.: A new data set for estimating organic carbon storage to 3 m depth in soils of the northern circumpolar permafrost region, Earth Syst. Sci. Data, 5, 393–402, https://doi.org/10.5194/essd-5-393-2013, 2013a.
Hugelius, G., Tarnocai, C., Broll, G., Canadell, J. G., Kuhry, P., and Swanson, D. K.: The Northern Circumpolar Soil Carbon Database: spatially distributed datasets of soil coverage and soil carbon storage in the northern permafrost regions, Earth Syst. Sci. Data, 5, 3–13, https://doi.org/10.5194/essd-5-3-2013, 2013b.
Hugelius, G., Strauss, J., Zubrzycki, S., Harden, J. W., Schuur, E. A. G., Ping, C. L., Schirrmeister, L., Grosse, G., Michaelson, G. J., Koven, C. D., O'Donnell, J. A., Elberling, B., Mishra, U., Camill, P., Yu, Z., Palmtag, J., and Kuhry, P.: Estimated stocks of circumpolar permafrost carbon with quantified uncertainty ranges and identified data gaps, Biogeosciences, 11, 6573–6593, https://doi.org/10.5194/bg-11-6573-2014, 2014.
Ikawa, H., Nakai, T., Busey, R. C., Kim, Y., Kobayashi, H., Nagai, S., Ueyama, M., Saito, K., Nagano, H., Suzuki, R., and Hinzman, L.: Understory CO2, sensible heat, and latent heat fluxes in a black spruce forest in interior Alaska, Agr. Forest Meteorol., 214–215, 80–90, https://doi.org/10.1016/j.agrformet.2015.08.247, 2015.
IPCC: Climate Change 2013: The Physical Science Basis, in: Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK and New York, NY, USA, 2013.
Ito, A. and Inatomi, M.: Water-Use Efficiency of the Terrestrial Biosphere: A Model Analysis Focusing on Interactions between the Global Carbon and Water Cycles, J. Hydrometeorol., 13, 681–694, https://doi.org/10.1175/jhm-d-10-05034.1, 2012.
Jackson, R. B., Lajtha, K., Crow, S. E., Hugelius, G., Kramer, M. G., and Piñeiro, G.: The Ecology of Soil Carbon: Pools, Vulnerabilities, and Biotic and Abiotic Controls, Annu. Rev. Ecol. Evol. Syst., 48, 419–445, https://doi.org/10.1146/annurev-ecolsys-112414-054234, 2017.
Jung, M., Reichstein, M., Margolis, H. A., Cescatti, A., Richardson, A. D., Arain, M. A., Arneth, A., Bernhofer, C., Bonal, D., Chen, J., Gianelle, D., Gobron, N., Kiely, G., Kutsch, W., Lasslop, G., Law, B. E., Lindroth, A., Merbold, L., Montagnani, L., Moors, E. J., Papale, D., Sottocornola, M., Vaccari, F., and Williams, C.: Global patterns of land-atmosphere fluxes of carbon dioxide, latent heat, and sensible heat derived from eddy covariance, satellite, and meteorological observations, J. Geophys. Res.-Biogeo., 116, G00J07, https://doi.org/10.1029/2010JG001566, 2011.
Jung, M., Reichstein, M., Schwalm, C. R., Huntingford, C., Sitch, S., Ahlström, A., Arneth, A., Camps-Valls, G., Ciais, P., Friedlingstein, P., Gans, F., Ichii, K., Jain, A. K., Kato, E., Papale, D., Poulter, B., Raduly, B., Rödenbeck, C., Tramontana, G., Viovy, N., Wang, Y.-P., Weber, U., Zaehle, S., and Zeng, N.: Compensatory water effects link yearly global land CO2 sink changes to temperature, Nature, 541, 516–520, https://doi.org/10.1038/nature20780, 2017.
Kimball, J. S., Jones, L. A., Zhang, K., Heinsch, F. A., McDonald, K. C., and Oechel, W.: A Satellite Approach to Estimate Land CO2 Exchange for Boreal and Arctic Biomes Using MODIS and AMSR-E, IEEE T. Geosci. Remote, 47, 569–587, https://doi.org/10.1109/TGRS.2008.2003248, 2009.
Koven, C. D., Ringeval, B., Friedlingstein, P., Ciais, P., Cadule, P., Khvorostyanov, D., Krinner, G., and Tarnocai, C.: Permafrost carbon-climate feedbacks accelerate global warming, P. Natl. Acad. Sci. USA, 108, 14769–14774, https://doi.org/10.1073/pnas.1103910108, 2011.
Koven, C. D., Schuur, E. A. G., Schädel, C., Bohn, T. J., Burke, E. J., Chen, G., Chen, X., Ciais, P., Grosse, G., Harden, J. W., Hayes, D. J., Hugelius, G., Jafarov, E. E., Krinner, G., Kuhry, P., Lawrence, D. M., MacDougall, A. H., Marchenko, S. S., McGuire, A. D., Natali, S. M., Nicolsky, D. J., Olefeldt, D., Peng, S., Romanovsky, V. E., Schaefer, K. M., Strauss, J., Treat, C. C., and Turetsky, M.: A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback, Philos. T. Roy. Soc. A, 373, 20140423, https://doi.org/10.1098/rsta.2014.0423, 2015.
Koven, C. D., Hugelius, G., Lawrence, D. M., and Wieder, W. R.: Higher climatological temperature sensitivity of soil carbon in cold than warm climates, Nat. Clim. Change, 7, 817–822, https://doi.org/10.1038/nclimate3421, 2017.
Kutzbach, L., Wille, C., and Pfeiffer, E.-M.: The exchange of carbon dioxide between wet arctic tundra and the atmosphere at the Lena River Delta, Northern Siberia, Biogeosciences, 4, 869–890, https://doi.org/10.5194/bg-4-869-2007, 2007.
Lafleur, P. M., Humphreys, E. R., St. Louis, V. L., Myklebust, M. C., Papakyriakou, T., Poissant, L., Barker, J. D., Pilote, M., and Swystun, K. A.: Variation in Peak Growing Season Net Ecosystem Production Across the Canadian Arctic, Environ. Sci. Technol., 46, 7971–7977, https://doi.org/10.1021/es300500m, 2012.
Lasslop, G., Reichstein, M., Papale, D., Richardson, A. D., Arneth, A., Barr, A., Stoy, P., and Wohlfahrt, G.: Separation of net ecosystem exchange into assimilation and respiration using a light response curve approach: critical issues and global evaluation, Global Change Biol., 16, 187–208, https://doi.org/10.1111/j.1365-2486.2009.02041.x, 2010.
Lawrence, D. M., Oleson, K. W., Flanner, M. G., Thornton, P. E., Swenson, S. C., Lawrence, P. J., Zeng, X., Yang, Z. L., Levis, S., Sakaguchi, K., Bonan, G. B., and Slater, A. G.: Parameterization improvements and functional and structural advances in Version 4 of the Community Land Model, J. Adv. Model. Earth Syst., 3, M03001, https://doi.org/10.1029/2011MS00045, 2011.
Le Quéré, C., Andrew, R. M., Friedlingstein, P., Sitch, S., Pongratz, J., Manning, A. C., Korsbakken, J. I., Peters, G. P., Canadell, J. G., Jackson, R. B., Boden, T. A., Tans, P. P., Andrews, O. D., Arora, V. K., Bakker, D. C. E., Barbero, L., Becker, M., Betts, R. A., Bopp, L., Chevallier, F., Chini, L. P., Ciais, P., Cosca, C. E., Cross, J., Currie, K., Gasser, T., Harris, I., Hauck, J., Haverd, V., Houghton, R. A., Hunt, C. W., Hurtt, G., Ilyina, T., Jain, A. K., Kato, E., Kautz, M., Keeling, R. F., Klein Goldewijk, K., Körtzinger, A., Landschützer, P., Lefèvre, N., Lenton, A., Lienert, S., Lima, I., Lombardozzi, D., Metzl, N., Millero, F., Monteiro, P. M. S., Munro, D. R., Nabel, J. E. M. S., Nakaoka, S.-I., Nojiri, Y., Padin, X. A., Peregon, A., Pfeil, B., Pierrot, D., Poulter, B., Rehder, G., Reimer, J., Rödenbeck, C., Schwinger, J., Séférian, R., Skjelvan, I., Stocker, B. D., Tian, H., Tilbrook, B., Tubiello, F. N., van der Laan-Luijkx, I. T., van der Werf, G. R., van Heuven, S., Viovy, N., Vuichard, N., Walker, A. P., Watson, A. J., Wiltshire, A. J., Zaehle, S., and Zhu, D.: Global Carbon Budget 2017, Earth Syst. Sci. Data, 10, 405–448, https://doi.org/10.5194/essd-10-405-2018, 2018.
Levy, P. E., Friend, A. D., White, A., and Cannell, M. G. R.: The Influence of Land Use Change On Global-Scale Fluxes of Carbon from Terrestrial Ecosystems, Climatic Change, 67, 185–209, https://doi.org/10.1007/s10584-004-2849-z, 2004.
López-Blanco, E., Lund, M., Williams, M., Tamstorf, M. P., Westergaard-Nielsen, A., Exbrayat, J. F., Hansen, B. U., and Christensen, T. R.: Exchange of CO2 in Arctic tundra: impacts of meteorological variations and biological disturbance, Biogeosciences, 14, 4467–4483, https://doi.org/10.5194/bg-14-4467-2017, 2017.
López-Blanco, E., Lund, M., Christensen, T. R., Tamstorf, M. P., Smallman, T. L., Slevin, D., Westergaard-Nielsen, A., Hansen, B. U., Abermann, J., and Williams, M.: Plant Traits are Key Determinants in Buffering the Meteorological Sensitivity of Net Carbon Exchanges of Arctic Tundra, J. Geophys. Res.-Biogeo., 123, 2675–2694, https://doi.org/10.1029/2018JG004386, 2018.
Lucht, W., Prentice, I. C., Myneni, R. B., Sitch, S., Friedlingstein, P., Cramer, W., Bousquet, P., Buermann, W., and Smith, B.: Climatic Control of the High-Latitude Vegetation Greening Trend and Pinatubo Effect, Science, 296, 1687–1689, https://doi.org/10.1126/science.1071828, 2002.
Lund, M., Falk, J. M., Friborg, T., Mbufong, H. N., Sigsgaard, C., Soegaard, H., and Tamstorf, M. P.: Trends in CO2 exchange in a high Arctic tundra heath, 2000–2010, J. Geophys. Res.-Biogeo., 117, G02001, https://doi.org/10.1029/2011JG001901, 2012.
Lund, M., Raundrup, K., Westergaard-Nielsen, A., López-Blanco, E., Nymand, J., and Aastrup, P.: Larval outbreaks in West Greenland: Instant and subsequent effects on tundra ecosystem productivity and CO2 exchange, Ambio, 46, 26–38, https://doi.org/10.1007/s13280-016-0863-9, 2017.
Luo, Y., Weng, E., Wu, X., Gao, C., Zhou, X., and Zhang, L.: Parameter identifiability, constraint, and equifinality in data assimilation with ecosystem models, Ecol. Appl., 19, 571–574, https://doi.org/10.1890/08-0561.1, 2009.
Mack, M. C., Bret-Harte, M. S., Hollingsworth, T. N., Jandt, R. R., Schuur, E. A. G., Shaver, G. R., and Verbyla, D. L.: Carbon loss from an unprecedented Arctic tundra wildfire, Nature, 475, 489–492, https://doi.org/10.1038/nature10283, 2011.
Mastepanov, M., Sigsgaard, C., Dlugokencky, E. J., Houweling, S., Strom, L., Tamstorf, M. P., and Christensen, T. R.: Large tundra methane burst during onset of freezing, Nature, 456, 628–630, https://doi.org/10.1038/nature07464, 2008.
McGuire, A. D., Melillo, J. M., Randerson, J. T., Parton, W. J., Heimann, M., Meier, R. A., Clein, J. S., Kicklighter, D. W., and Sauf, W.: Modeling the effects of snowpack on heterotrophic respiration across northern temperate and high latitude regions: Comparison with measurements of atmospheric carbon dioxide in high latitudes, Biogeochemistry, 48, 91–114, https://doi.org/10.1023/a:1006286804351, 2000.
McGuire, A. D., Hayes, D., Kicklighter, D. W., Manizza, M., Zhuang, Q., Chen, M., Follows, M. J., Gurney, K. R., Mcclelland, J. W., Melillo, J. M., Peterson, B. J., and Prinn, R. G.: An analysis of the carbon balance of the Arctic Basin from 1997 to 2006, Tellus B, 62, 455–474, https://doi.org/10.1111/j.1600-0889.2010.00497.x, 2010.
McGuire, A. D., Christensen, T. R., Hayes, D., Heroult, A., Euskirchen, E., Kimball, J. S., Koven, C., Lafleur, P., Miller, P. A., Oechel, W., Peylin, P., Williams, M., and Yi, Y.: An assessment of the carbon balance of Arctic tundra: comparisons among observations, process models, and atmospheric inversions, Biogeosciences, 9, 3185–3204, https://doi.org/10.5194/bg-9-3185-2012, 2012.
Murray-Tortarolo, G., Anav, A., Friedlingstein, P., Sitch, S., Piao, S., Zhu, Z., Poulter, B., Zaehle, S., Ahlström, A., Lomas, M., Levis, S., Viovy, N., and Zeng, N.: Evaluation of Land Surface Models in Reproducing Satellite-Derived LAI over the High-Latitude Northern Hemisphere. Part I: Uncoupled DGVMs, Remote Sensing, 5, 4819, https://doi.org/10.3390/rs5104819, 2013.
Myers-Smith, I. H., Forbes, B. C., Wilmking, M., Hallinger, M., Lantz, T., Blok, D., Tape, K. D., Macias-Fauria, M., Sass-Klaassen, U., Lévesque, E., Boudreau, S., Ropars, P., Hermanutz, L., Trant, A., Siegwart, C. L., Weijers, S., Rozema, J., Rayback, S. A., Schmidt, N. M., Schaepman-Strub, G., Wipf, S., Rixen, C., Ménard, C. B., Venn, S., Goetz, S., Andreu-Hayles, L., Elmendorf, S., Ravolainen, V., Welker, J., Grogan, P., Epstein, H. E., and Hik, D. S.: Shrub expansion in tundra ecosystems: dynamics, impacts and research priorities, Environ. Res. Lett., 6, 045509, https://doi.org/10.1088/1748-9326/6/4/045509, 2011.
Myneni, R. B., Keeling, C. D., Tucker, C. J., Asrar, G., and Nemani, R. R.: Increased plant growth in the northern high latitudes from 1981 to 1991, Nature, 386, 698–702, https://doi.org/10.1038/386698a0, 1997.
Myneni, R. B., Hoffman, S., Knyazikhin, Y., Privette, J. L., Glassy, J., Tian, Y., Wang, Y., Song, X., Zhang, Y., Smith, G. R., Lotsch, A., Friedl, M., Morisette, J. T., Votava, P., Nemani, R. R., and Running, S. W.: Global products of vegetation leaf area and fraction absorbed PAR from year one of MODIS data, Remote Sens. Environ., 83, 214–231, https://doi.org/10.1016/S0034-4257(02)00074-3, 2002.
Nishina, K., Ito, A., Beerling, D. J., Cadule, P., Ciais, P., Clark, D. B., Falloon, P., Friend, A. D., Kahana, R., Kato, E., Keribin, R., Lucht, W., Lomas, M., Rademacher, T. T., Pavlick, R., Schaphoff, S., Vuichard, N., Warszawaski, L., and Yokohata, T.: Quantifying uncertainties in soil carbon responses to changes in global mean temperature and precipitation, Earth Syst. Dynam., 5, 197–209, https://doi.org/10.5194/esd-5-197-2014, 2014.
Nishina, K., Ito, A., Falloon, P., Friend, A. D., Beerling, D. J., Ciais, P., Clark, D. B., Kahana, R., Kato, E., Lucht, W., Lomas, M., Pavlick, R., Schaphoff, S., Warszawaski, L., and Yokohata, T.: Decomposing uncertainties in the future terrestrial carbon budget associated with emission scenarios, climate projections, and ecosystem simulations using the ISI-MIP results, Earth Syst. Dynam., 6, 435–445, https://doi.org/10.5194/esd-6-435-2015, 2015.
Papale, D., Reichstein, M., Aubinet, M., Canfora, E., Bernhofer, C., Kutsch, W., Longdoz, B., Rambal, S., Valentini, R., Vesala, T., and Yakir, D.: Towards a standardized processing of Net Ecosystem Exchange measured with eddy covariance technique: algorithms and uncertainty estimation, Biogeosciences, 3, 571–583, https://doi.org/10.5194/bg-3-571-2006, 2006.
Parazoo, N. C., Koven, C. D., Lawrence, D. M., Romanovsky, V., and Miller, C. E.: Detecting the permafrost carbon feedback: talik formation and increased cold-season respiration as precursors to sink-to-source transitions, The Cryosphere, 12, 123–144, https://doi.org/10.5194/tc-12-123-2018, 2018.
Peñuelas, J., Rutishauser, T., and Filella, I.: Phenology Feedbacks on Climate Change, Science, 324, 887–888, https://doi.org/10.1126/science.1173004, 2009.
Running, S. W., Nemani, R. R., Heinsch, F. A., Zhao, M., Reeves, M., and Hashimoto, H.: A Continuous Satellite-Derived Measure of Global Terrestrial Primary Production, BioScience, 54, 547–560, https://doi.org/10.1641/0006-3568(2004)054[0547:ACSMOG]2.0.CO;2, 2004.
Sari, J., Tarmo, V., Vladimir, K., Tuomas, L., Maiju, L., Juha, M., Johanna, N., Aleksi, R., Juha-Pekka, T., and Mika, A.: Spatial variation and seasonal dynamics of leaf-area index in the arctic tundra-implications for linking ground observations and satellite images, Environ. Res. Lett., 12, 095002, https://doi.org/10.1088/1748-9326/aa7f85, 2017.
Schaphoff, S., Heyder, U., Ostberg, S., Gerten, D., Heinke, J., and Lucht, W.: Contribution of permafrost soils to the global carbon budget, Environ. Res. Lett., 8, 014026, https://doi.org/10.1088/1748-9326/8/1/014026, 2013.
Schuur, E. A. G., McGuire, A. D., Schadel, C., Grosse, G., Harden, J. W., Hayes, D. J., Hugelius, G., Koven, C. D., Kuhry, P., Lawrence, D. M., Natali, S. M., Olefeldt, D., Romanovsky, V. E., Schaefer, K., Turetsky, M. R., Treat, C. C., and Vonk, J. E.: Climate change and the permafrost carbon feedback, Nature, 520, 171–179, https://doi.org/10.1038/nature14338, 2015.
Sierra, C. A., Müller, M., Metzler, H., Manzoni, S., and Trumbore, S. E.: The muddle of ages, turnover, transit, and residence times in the carbon cycle, Global Change Biol., 23, 1763–1773, https://doi.org/10.1111/gcb.13556, 2017.
Sitch, S., Smith, B., Prentice, I. C., Arneth, A., Bondeau, A., Cramer, W., Kaplan, J. O., Levis, S., Lucht, W., Sykes, M. T., Thonicke, K., and Venevsky, S.: Evaluation of ecosystem dynamics, plant geography and terrestrial carbon cycling in the LPJ dynamic global vegetation model, Global Change Biol., 9, 161–185, https://doi.org/10.1046/j.1365-2486.2003.00569.x, 2003.
Smallman, T. L., Exbrayat, J.-F., Mencuccini, M., Bloom, A. A., and Williams, M.: Assimilation of repeated woody biomass observations constrains decadal ecosystem carbon cycle uncertainty in aggrading forests, J. Geophys. Res.-Biogeo., 122, 528–545, https://doi.org/10.1002/2016JG003520, 2017.
Smith, B., Prentice, I. C., and Sykes, M. T.: Representation of vegetation dynamics in the modelling of terrestrial ecosystems: comparing two contrasting approaches within European climate space, Global Ecol. Biogeogr., 10, 621–637, https://doi.org/10.1046/j.1466-822X.2001.t01-1-00256.x, 2001.
Smith, B., Wårlind, D., Arneth, A., Hickler, T., Leadley, P., Siltberg, J., and Zaehle, S.: Implications of incorporating N cycling and N limitations on primary production in an individual-based dynamic vegetation model, Biogeosciences, 11, 2027–2054, https://doi.org/10.5194/bg-11-2027-2014, 2014.
Street, L. E., Stoy, P. C., Sommerkorn, M., Fletcher, B. J., Sloan, V. L., Hill, T. C., and Williams, M.: Seasonal bryophyte productivity in the sub-Arctic: a comparison with vascular plants, Funct. Ecol., 26, 365–378, https://doi.org/10.1111/j.1365-2435.2011.01954.x, 2012.
Tarnocai, C., Canadell, J. G., Schuur, E. A. G., Kuhry, P., Mazhitova, G., and Zimov, S.: Soil organic carbon pools in the northern circumpolar permafrost region, Global Biogeochem. Cy., 23, GB2023, https://doi.org/10.1029/2008GB003327, 2009.
Thornton, P. E., Doney, S. C., Lindsay, K., Moore, J. K., Mahowald, N., Randerson, J. T., Fung, I., Lamarque, J. F., Feddema, J. J., and Lee, Y. H.: Carbon-nitrogen interactions regulate climate-carbon cycle feedbacks: results from an atmosphere-ocean general circulation model, Biogeosciences, 6, 2099–2120, https://doi.org/10.5194/bg-6-2099-2009, 2009.
Thurner, M., Beer, C., Santoro, M., Carvalhais, N., Wutzler, T., Schepaschenko, D., Shvidenko, A., Kompter, E., Ahrens, B., Levick, S. R., and Schmullius, C.: Carbon stock and density of northern boreal and temperate forests, Global Ecol. Biogeogr., 23, 297–310, https://doi.org/10.1111/geb.12125, 2014.
Thurner, M., Beer, C., Carvalhais, N., Forkel, M., Santoro, M., Tum, M., and Schmullius, C.: Large-scale variation in boreal and temperate forest carbon turnover rate related to climate, Geophys. Res. Lett., 43, 4576–4585, https://doi.org/10.1002/2016GL068794, 2016.
Thurner, M., Beer, C., Ciais, P., Friend, A. D., Ito, A., Kleidon, A., Lomas, M. R., Quegan, S., Rademacher, T. T., Schaphoff, S., Tum, M., Wiltshire, A., and Carvalhais, N.: Evaluation of climate-related carbon turnover processes in global vegetation models for boreal and temperate forests, Global Change Biol., 23, 3076–3091, https://doi.org/10.1111/gcb.13660, 2017.
Tian, H., Chen, G., Lu, C., Xu, X., Hayes, D. J., Ren, W., Pan, S., Huntzinger, D. N., and Wofsy, S. C.: North American terrestrial CO2 uptake largely offset by CH4 and N2O emissions: toward a full accounting of the greenhouse gas budget, Climatic Change, 129, 413–426, https://doi.org/10.1007/s10584-014-1072-9, 2015.
Tum, M., Zeidler, J. N., Günther, K. P., and Esch, T.: Global NPP and straw bioenergy trends for 2000–2014, Biomass Bioenergy, 90, 230–236, https://doi.org/10.1016/j.biombioe.2016.03.040, 2016.
Walter, B. P., Heimann, M., and Matthews, E.: Modeling modern methane emissions from natural wetlands: 1. Model description and results, J. Geophys. Res., 106, 34189–34206, https://doi.org/10.1029/2001JD900165, 2001.
Wania, R., Ross, I., and Prentice, I. C.: Integrating peatlands and permafrost into a dynamic global vegetation model: 1. Evaluation and sensitivity of physical land surface processes, Global Biogeochem. Cy., 23, GB3014, https://doi.org/10.1029/2008GB003412, 2009a.
Wania, R., Ross, I., and Prentice, I. C.: Integrating peatlands and permafrost into a dynamic global vegetation model: 2. Evaluation and sensitivity of vegetation and carbon cycle processes, Global Biogeochem. Cy., 23, GB3015, https://doi.org/10.1029/2008GB003413, 2009b.
Warszawski, L., Frieler, K., Huber, V., Piontek, F., Serdeczny, O., and Schewe, J.: The Inter-Sectoral Impact Model Intercomparison Project (ISI–MIP): Project framework, P. Natl. Acad. Sci. USA, 111, 3228–3232, https://doi.org/10.1073/pnas.1312330110, 2014.
Williams, M.: DALEC2, software, University of Edinburgh, https://doi.org/10.7488/ds/2504, 2019.
Williams, M., Rastetter, E. B., Fernandes, D. N., Goulden, M. L., Shaver, G. R., and Johnson, L. C.: Predicting Gross Primary Productivity in Terrestrial Ecosystems, Ecol. Appl., 7, 882–894, https://doi.org/10.2307/2269440, 1997.
Williams, M., Schwarz, P. A., Law, B. E., Irvine, J., and Kurpius, M. R.: An improved analysis of forest carbon dynamics using data assimilation, Global Change Biol., 11, 89–105, https://doi.org/10.1111/j.1365-2486.2004.00891.x, 2005.
Woodward, F. I., Smith, T. M., and Emanuel, W. R.: A global land primary productivity and phytogeography model, Global Biogeochem. Cy., 9, 471–490, https://doi.org/10.1029/95GB02432, 1995.
Xu, T., Valocchi, A. J., Ye, M., and Liang, F.: Quantifying model structural error: Efficient Bayesian calibration of a regional groundwater flow model using surrogates and a data-driven error model, Water Resour. Res., 53, 4084–4105, https://doi.org/10.1002/2016WR019831, 2017.
Zaehle, S. and Friend, A. D.: Carbon and nitrogen cycle dynamics in the O-CN land surface model: 1. Model description, site-scale evaluation, and sensitivity to parameter estimates, Global Biogeochem. Cy., 24, GB1005, https://doi.org/10.1029/2009GB003521, 2010.
Zeng, H., Jia, G., and Epstein, H.: Recent changes in phenology over the northern high latitudes detected from multi-satellite data, Environ. Res. Lett., 6, 045508, https://doi.org/10.1088/1748-9326/6/4/045508, 2011.
Zeng, N., Mariotti, A., and Wetzel, P.: Terrestrial mechanisms of interannual CO2 variability, Global Biogeochem. Cy., 19, GB1016, https://doi.org/10.1029/2004GB002273, 2005.
Zhao, M., Heinsch, F. A., Nemani, R. R., and Running, S. W.: Improvements of the MODIS terrestrial gross and net primary production global data set, Remote Sens. Environ., 95, 164–176, https://doi.org/10.1016/j.rse.2004.12.011, 2005.
Zhu, Z., Piao, S., Myneni, R. B., Huang, M., Zeng, Z., Canadell, J. G., Ciais, P., Sitch, S., Friedlingstein, P., Arneth, A., Cao, C., Cheng, L., Kato, E., Koven, C., Li, Y., Lian, X., Liu, Y., Liu, R., Mao, J., Pan, Y., Peng, S., Peñuelas, J., Poulter, B., Pugh, T. A. M., Stocker, B. D., Viovy, N., Wang, X., Wang, Y., Xiao, Z., Yang, H., Zaehle, S., and Zeng, N.: Greening of the Earth and its drivers, Nat. Clim. Change, 6, 791–795, https://doi.org/10.1038/nclimate3004, 2016.
Zhuang, Q., Melillo, J. M., Sarofim, M. C., Kicklighter, D. W., McGuire, A. D., Felzer, B. S., Sokolov, A., Prinn, R. G., Steudler, P. A., and Hu, S.: CO2 and CH4 exchanges between land ecosystems and the atmosphere in northern high latitudes over the 21st century, Geophys. Res. Lett., 33, L17403, https://doi.org/10.1029/2006GL026972, 2006.
Zona, D., Gioli, B., Commane, R., Lindaas, J., Wofsy, S. C., Miller, C. E., Dinardo, S. J., Dengel, S., Sweeney, C., Karion, A., Chang, R. Y.-W., Henderson, J. M., Murphy, P. C., Goodrich, J. P., Moreaux, V., Liljedahl, A., Watts, J. D., Kimball, J. S., Lipson, D. A., and Oechel, W. C.: Cold season emissions dominate the Arctic tundra methane budget, P. Natl. Acad. Sci. USA, 113, 40–45, https://doi.org/10.1073/pnas.1516017113, 2016.
https://nl.mathworks.com/help/stats/incrementalclassificationkernel.html
# incrementalClassificationKernel
Binary classification kernel model for incremental learning
## Description
The `incrementalClassificationKernel` function creates an `incrementalClassificationKernel` model object, which represents a binary Gaussian kernel classification model for incremental learning. The kernel model maps data in a low-dimensional space into a high-dimensional space, then fits a linear model in the high-dimensional space. Supported linear models include support vector machine (SVM) and logistic regression.
Unlike other Statistics and Machine Learning Toolbox™ model objects, `incrementalClassificationKernel` can be called directly. Also, you can specify learning options, such as performance metrics configurations and the objective solver, before fitting the model to data. After you create an `incrementalClassificationKernel` object, it is prepared for incremental learning.
`incrementalClassificationKernel` is best suited for incremental learning. For a traditional approach to training a kernel model for binary classification (such as creating a model by fitting it to data, performing cross-validation, tuning hyperparameters, and so on), see `fitckernel`. For multiclass incremental learning, see `incrementalClassificationECOC` and `incrementalClassificationNaiveBayes`.
## Creation
You can create an `incrementalClassificationKernel` model object in several ways:
• Call the function directly — Configure incremental learning options, or specify learner-specific options, by calling `incrementalClassificationKernel` directly. This approach is best when you do not have data yet or you want to start incremental learning immediately.
• Convert a traditionally trained model — To initialize a model for incremental learning using the model parameters and hyperparameters of a trained model object, you can convert the traditionally trained model (`ClassificationKernel`) to an `incrementalClassificationKernel` model object by passing it to the `incrementalLearner` function.
• Convert a template object — You can convert a template object (`templateKernel`) to an `incrementalClassificationKernel` model object by passing it to the `incrementalLearner` function.
• Call an incremental learning function — `fit`, `updateMetrics`, and `updateMetricsAndFit` accept a configured `incrementalClassificationKernel` model object and data as input, and return an `incrementalClassificationKernel` model object updated with information learned from the input model and data.
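As an informal illustration of the first two creation routes, the sketch below creates one model by calling `incrementalClassificationKernel` directly and another by converting a traditionally trained kernel model with `incrementalLearner`. The synthetic data and variable names are assumptions made for this example, not part of the documented interface.

```matlab
% Hypothetical data: 200 observations, 4 predictors, binary labels.
rng(1)                                    % for reproducibility
X = randn(200,4);
y = double(X(:,1) + 0.5*randn(200,1) > 0);

% Route 1: call the function directly (no data needed yet).
MdlDirect = incrementalClassificationKernel();

% Route 2: traditionally train a kernel model, then convert it for
% incremental learning so it starts from the fitted parameters.
TraditionalMdl = fitckernel(X,y);
MdlConverted   = incrementalLearner(TraditionalMdl);
```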
### Syntax
``Mdl = incrementalClassificationKernel()``
``Mdl = incrementalClassificationKernel(Name=Value)``
### Description
`Mdl = incrementalClassificationKernel()` returns a default incremental learning model object for binary Gaussian kernel classification, `Mdl`. Properties of a default model contain placeholders for unknown model parameters. You must train a default model before you can track its performance or generate predictions from it.
`Mdl = incrementalClassificationKernel(Name=Value)` sets properties and additional options using name-value arguments. For example, `incrementalClassificationKernel(Solver="sgd",LearnRateSchedule="constant")` specifies to use the stochastic gradient descent (SGD) solver with a constant learning rate.
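A minimal usage sketch of the second syntax follows: the model is configured with a few name-value options and then updated chunk by chunk with `updateMetricsAndFit`. The chunk size, option values, and synthetic data are illustrative assumptions, not defaults taken from this page.

```matlab
% Configure an incremental SGD-based kernel classifier up front.
Mdl = incrementalClassificationKernel(Solver="sgd",LearnRate=0.01, ...
    LearnRateSchedule="constant",Metrics="classiferror");

% Hypothetical stream: 10 chunks of 50 observations with 4 predictors.
rng(2)
for k = 1:10
    X = randn(50,4);
    y = double(X(:,1) - X(:,2) + 0.3*randn(50,1) > 0);
    Mdl = updateMetricsAndFit(Mdl,X,y);   % track metrics, then fit
end

Mdl.Metrics   % performance metrics (entries may be NaN until the model is warm)
```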
### Input Arguments
Name-Value Arguments
Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: `Metrics="logit",MetricsWarmupPeriod=100` sets the model performance metric to the logistic loss and the metrics warm-up period to `100`.
Classification Options
Random number stream for reproducibility of data transformation, specified as a random stream object. For details, see Random Feature Expansion.
Use `RandomStream` to reproduce the random basis functions used by `incrementalClassificationKernel` to transform the predictor data to a high-dimensional space. For details, see Managing the Global Stream Using RandStream and Creating and Controlling a Random Number Stream.
Example: `RandomStream=RandStream("mlfg6331_64")`
SGD and ASGD (Average SGD) Solver Options
Mini-batch size, specified as a positive integer. At each learning cycle during training, `incrementalClassificationKernel` uses `BatchSize` observations to compute the subgradient.
The number of observations in the last mini-batch (last learning cycle in each function call of `fit` or `updateMetricsAndFit`) can be smaller than `BatchSize`. For example, if you supply 25 observations to `fit` or `updateMetricsAndFit` and `BatchSize` is `10`, the function uses 10 observations for the first two learning cycles and 5 observations for the last learning cycle.
Example: `BatchSize=5`
Data Types: `single` | `double`
Ridge (L2) regularization term strength, specified as a nonnegative scalar.
Example: `Lambda=0.01`
Data Types: `single` | `double`
Initial learning rate, specified as `"auto"` or a positive scalar.
The learning rate controls the optimization step size by scaling the objective subgradient. `LearnRate` specifies an initial value for the learning rate, and `LearnRateSchedule` determines the learning rate for subsequent learning cycles.
When you specify `"auto"`:
• The initial learning rate is `0.7`.
• If `EstimationPeriod` > `0`, `fit` and `updateMetricsAndFit` change the rate to `1/sqrt(1+max(sum(X.^2,2)))` at the end of `EstimationPeriod`.
Example: `LearnRate=0.001`
Data Types: `single` | `double` | `char` | `string`
Learning rate schedule, specified as one of the following values, where `LearnRate` specifies the initial learning rate γ0.
• `"constant"`: The learning rate is γ0 for all learning cycles.
• `"decaying"`: The learning rate at learning cycle t is
$\gamma_t = \frac{\gamma_0}{(1 + \lambda \gamma_0 t)^c}.$
• λ is the value of `Lambda`.
• If `Solver` is `"sgd"`, then c = 1.
• If `Solver` is `"asgd"`, then c = 0.75 [4].
Example: `LearnRateSchedule="constant"`
Data Types: `char` | `string`
Adaptive Scale-Invariant Solver Options
Flag for shuffling the observations at each iteration, specified as logical `1` (`true`) or `0` (`false`).
• logical `1` (`true`): The software shuffles the observations in an incoming chunk of data before the `fit` function fits the model. This action reduces bias induced by the sampling scheme.
• logical `0` (`false`): The software processes the data in the order received.
Example: `Shuffle=false`
Data Types: `logical`
Performance Metrics Options
Model performance metrics to track during incremental learning, specified as a built-in loss function name, string vector of names, function handle (`@metricName`), structure array of function handles, or cell vector of names, function handles, or structure arrays.
When `Mdl` is warm (see `IsWarm`), `updateMetrics` and `updateMetricsAndFit` track performance metrics in the `Metrics` property of `Mdl`.
The following table lists the built-in loss function names. You can specify more than one by using a string vector.
• `"binodeviance"`: Binomial deviance
• `"classiferror"`: Classification error
• `"exponential"`: Exponential loss
• `"hinge"`: Hinge loss
• `"logit"`: Logistic loss
• `"quadratic"`: Quadratic loss
For more details on the built-in loss functions, see `loss`.
Example: `Metrics=["classiferror","hinge"]`
To specify a custom function that returns a performance metric, use function handle notation. The function must have this form:
`metric = customMetric(C,S)`
• The output argument `metric` is an n-by-`1` numeric vector, where each element is the loss of the corresponding observation in the data processed by the incremental learning functions during a learning cycle.
• You specify the function name (`customMetric`).
• `C` is an n-by-`2` logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in the `ClassNames` property. Create `C` by setting `C(p,q)` = `1`, if observation `p` is in class `q`, for each observation in the specified data. Set the other element in row `p` to `0`.
• `S` is an n-by-`2` numeric matrix of predicted classification scores. `S` is similar to the `score` output of `predict`, where rows correspond to observations in the data and the column order corresponds to the class order in the `ClassNames` property. `S(p,q)` is the classification score of observation `p` being classified in class `q`.
To specify multiple custom metrics and assign a custom name to each, use a structure array. To specify a combination of built-in and custom metrics, use a cell vector.
Example: `Metrics=struct(Metric1=@customMetric1,Metric2=@customMetric2)`
Example: `Metrics={@customMetric1,@customMetric2,"logit",struct(Metric3=@customMetric3)}`
`updateMetrics` and `updateMetricsAndFit` store specified metrics in a table in the `Metrics` property. The data type of `Metrics` determines the row names of the table.
• String or character vector: The row name is the name of the corresponding built-in metric. For example, the row name for `"classiferror"` is `"ClassificationError"`.
• Structure array: The row name is the field name. For example, the row name for `struct(Metric1=@customMetric1)` is `"Metric1"`.
• Function handle to a function stored in a program file: The row name is the name of the function. For example, the row name for `@customMetric` is `"customMetric"`.
• Anonymous function: The row name is `CustomMetric_j`, where `j` is metric `j` in `Metrics`. For example, the row name for `@(C,S)customMetric(C,S)...` is `CustomMetric_1`.
For more details on performance metrics options, see Performance Metrics.
Data Types: `char` | `string` | `struct` | `cell` | `function_handle`
## Properties
You can set most properties by using name-value argument syntax when you call `incrementalClassificationKernel` directly. You can set some properties when you call `incrementalLearner` to convert a traditionally trained model object or model template object. You cannot set the properties `FittedLoss`, `NumTrainingObservations`, `SolverOptions`, and `IsWarm`.
### Classification Model Parameters
This property is read-only.
Unique class labels used in training the model, specified as a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. `ClassNames` and the response data must have the same data type. (The software treats string arrays as cell arrays of character vectors.)
The default `ClassNames` value depends on how you create the model:
• If you convert a traditionally trained model to create `Mdl`, `ClassNames` is specified by the corresponding property of the traditionally trained model.
• Otherwise, incremental fitting functions infer `ClassNames` during training.
Data Types: `single` | `double` | `logical` | `char` | `string` | `cell` | `categorical`
This property is read-only.
Loss function used to fit the linear model, specified as `'hinge'` or `'logit'`.
• `'hinge'`: Support vector machine, with hinge loss $\ell[y,f(x)] = \max[0, 1 - y f(x)]$; the corresponding `Learner` value is `'svm'`.
• `'logit'`: Logistic regression, with deviance (logistic) loss $\ell[y,f(x)] = \log\{1 + \exp[-y f(x)]\}$; the corresponding `Learner` value is `'logistic'`.
This property is read-only.
Kernel scale parameter, specified as `"auto"` or a positive scalar. `incrementalClassificationKernel` stores the `KernelScale` value as a numeric scalar. The software obtains a random basis for feature expansion by using the kernel scale parameter. For details, see Random Feature Expansion.
If you specify `"auto"` when creating the model object, the software selects an appropriate kernel scale parameter using a heuristic procedure. This procedure uses subsampling, so estimates can vary from one call to another. Therefore, to reproduce results, set a random number seed by using `rng` before training.
The default `KernelScale` value depends on how you create the model:
• If you convert a traditionally trained model object or template model object to create `Mdl`, `KernelScale` is specified by the corresponding property of the object.
• Otherwise, the default value is `1`.
Data Types: `char` | `string` | `single` | `double`
This property is read-only.
Linear classification model type, specified as `"svm"` or `"logistic"`. `incrementalClassificationKernel` stores the `Learner` value as a character vector.
In the following descriptions, $f(x) = T(x)\beta + b$.
• x is an observation (row vector) from p predictor variables.
• $T(\cdot)$ is a transformation of an observation (row vector) for feature expansion. $T(x)$ maps $x$ in $\mathbb{R}^p$ to a high-dimensional space ($\mathbb{R}^m$).
• β is a vector of coefficients.
• b is the scalar bias.
• `"svm"`: Support vector machine, with hinge loss $\ell[y,f(x)] = \max[0, 1 - y f(x)]$; the corresponding `FittedLoss` value is `'hinge'`.
• `"logistic"`: Logistic regression, with deviance (logistic loss) $\ell[y,f(x)] = \log\{1 + \exp[-y f(x)]\}$; the corresponding `FittedLoss` value is `'logit'`.
The default `Learner` value depends on how you create the model:
• If you convert a traditionally trained model object or template model object to create `Mdl`, `Learner` is specified by the corresponding property of the object.
• Otherwise, the default value is `"svm"`.
Data Types: `char` | `string`
This property is read-only.
Number of dimensions of the expanded space, specified as `"auto"` or a positive integer. `incrementalClassificationKernel` stores the `NumExpansionDimensions` value as a numeric scalar.
For `"auto"`, the software selects the number of dimensions using `2.^ceil(min(log2(p)+5,15))`, where `p` is the number of predictors. For details, see Random Feature Expansion.
The default `NumExpansionDimensions` value depends on how you create the model:
• If you convert a traditionally trained model object or template model object to create `Mdl`, `NumExpansionDimensions` is specified by the corresponding property of the object.
• Otherwise, the default value is `"auto"`.
Data Types: `char` | `string` | `single` | `double`
This property is read-only.
Number of predictor variables, specified as a nonnegative numeric scalar.
The default `NumPredictors` value depends on how you create the model:
• If you convert a traditionally trained model to create `Mdl`, `NumPredictors` is specified by the corresponding property of the traditionally trained model.
• If you create `Mdl` by calling `incrementalClassificationKernel` directly, you can specify `NumPredictors` by using name-value argument syntax.
• Otherwise, the default value is `0`, and incremental fitting functions infer `NumPredictors` from the predictor data during training.
Data Types: `double`
This property is read-only.
Number of observations fit to the incremental model `Mdl`, specified as a nonnegative numeric scalar. `NumTrainingObservations` increases when you pass `Mdl` and training data to `fit` or `updateMetricsAndFit`.
Note
If you convert a traditionally trained model to create `Mdl`, `incrementalClassificationKernel` does not add the number of observations fit to the traditionally trained model to `NumTrainingObservations`.
Data Types: `double`
This property is read-only.
Prior class probabilities, specified as `"empirical"`, `"uniform"`, or a numeric vector. `incrementalClassificationKernel` stores the `Prior` value as a numeric vector.
• `"empirical"`: Incremental learning functions infer prior class probabilities from the observed class relative frequencies in the response data during incremental training (after the estimation period `EstimationPeriod`).
• `"uniform"`: For each class, the prior probability is 1/2.
• Numeric vector: Custom, normalized prior probabilities. The order of the elements of `Prior` corresponds to the elements of the `ClassNames` property.
The default `Prior` value depends on how you create the model:
• If you convert a traditionally trained model to create `Mdl`, `Prior` is specified by the corresponding property of the traditionally trained model.
• Otherwise, the default value is `"empirical"`.
Data Types: `single` | `double` | `char` | `string`
This property is read-only.
Score transformation function describing how incremental learning functions transform raw response values, specified as a character vector, string scalar, or function handle. `incrementalClassificationKernel` stores the `ScoreTransform` value as a character vector or function handle.
The available built-in functions for score transformation are:
• `"doublelogit"`: 1/(1 + e^(–2x))
• `"invlogit"`: log(x / (1 – x))
• `"ismax"`: Sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0
• `"logit"`: 1/(1 + e^(–x))
• `"none"` or `"identity"`: x (no transformation)
• `"sign"`: –1 for x < 0, 0 for x = 0, and 1 for x > 0
• `"symmetric"`: 2x – 1
• `"symmetricismax"`: Sets the score for the class with the largest score to 1, and sets the scores for all other classes to –1
• `"symmetriclogit"`: 2/(1 + e^(–x)) – 1
For a MATLAB® function or a function that you define, enter its function handle; for example, `ScoreTransform=@function`, where:
• `function` accepts an n-by-`2` matrix (the original scores) and returns a matrix of the same size (the transformed scores). The column order corresponds to the class order in the `ClassNames` property.
• n is the number of observations, and row j of the matrix contains the class scores of observation j.
The default `ScoreTransform` value depends on how you create the model:
• If you convert a traditionally trained model to create `Mdl`, `ScoreTransform` is specified by the corresponding property of the traditionally trained model.
• Otherwise, the default value is `"none"` (when `Learner` is `"svm"`) or `"logit"` (when `Learner` is `"logistic"`).
Data Types: `char` | `string` | `function_handle`
### Training Parameters
This property is read-only.
Number of observations processed by the incremental model to estimate hyperparameters before training or tracking performance metrics, specified as a nonnegative integer.
Note
• If `Mdl` is prepared for incremental learning (all hyperparameters required for training are specified), `incrementalClassificationKernel` forces `EstimationPeriod` to `0`.
• If `Mdl` is not prepared for incremental learning, `incrementalClassificationKernel` sets `EstimationPeriod` to `1000`.
For more details, see Estimation Period.
Data Types: `single` | `double`
This property is read-only.
Objective function minimization technique, specified as `"scale-invariant"`, `"sgd"`, or `"asgd"`. `incrementalClassificationKernel` stores the `Solver` value as a character vector.
• `"scale-invariant"`: Adaptive scale-invariant solver for incremental learning [1].
  • This algorithm is parameter free and can adapt to differences in predictor scales. Try this algorithm before using SGD or ASGD.
  • To shuffle an incoming chunk of data before the `fit` function fits the model, set `Shuffle` to `true`.
• `"sgd"`: Stochastic gradient descent (SGD) [2][3].
  • To train effectively with SGD, specify adequate values for hyperparameters using the options listed in SGD and ASGD (Average SGD) Solver Options.
  • The `fit` function always shuffles an incoming chunk of data before fitting the model.
• `"asgd"`: Average stochastic gradient descent (ASGD) [4].
  • To train effectively with ASGD, specify adequate values for hyperparameters using the options listed in SGD and ASGD (Average SGD) Solver Options.
  • The `fit` function always shuffles an incoming chunk of data before fitting the model.
The default `Solver` value depends on how you create the model:
• If you convert a traditionally trained model to create `Mdl`, the `Solver` name-value argument of the `incrementalLearner` function sets this property. The default value of the argument is `"scale-invariant"`.
• Otherwise, the default value is `"scale-invariant"`.
Data Types: `char` | `string`
This property is read-only.
Objective solver configurations, specified as a structure array. The fields of `SolverOptions` depend on `Solver`.
You can specify the field values using the corresponding name-value arguments when you create the model object by calling `incrementalClassificationKernel` directly, or when you convert a traditionally trained model using the `incrementalLearner` function.
Data Types: `struct`
### Performance Metrics Parameters
This property is read-only.
Flag indicating whether the incremental model tracks performance metrics, specified as logical `0` (`false`) or `1` (`true`).
The incremental model `Mdl` is warm (`IsWarm` becomes `true`) after incremental fitting functions fit (`EstimationPeriod` + `MetricsWarmupPeriod`) observations to the incremental model.
• `true` or `1`: The incremental model `Mdl` is warm. Consequently, `updateMetrics` and `updateMetricsAndFit` track performance metrics in the `Metrics` property of `Mdl`.
• `false` or `0`: `updateMetrics` and `updateMetricsAndFit` do not track performance metrics.
Data Types: `logical`
This property is read-only.
Model performance metrics updated during incremental learning by `updateMetrics` and `updateMetricsAndFit`, specified as a table with two columns and m rows, where m is the number of metrics specified by the `Metrics` name-value argument.
The columns of `Metrics` are labeled `Cumulative` and `Window`.
• `Cumulative`: Element `j` is the model performance, as measured by metric `j`, from the time the model became warm (`IsWarm` is `1`).
• `Window`: Element `j` is the model performance, as measured by metric `j`, evaluated over all observations within the window specified by the `MetricsWindowSize` property. The software updates `Window` after it processes `MetricsWindowSize` observations.
Rows are labeled by the specified metrics. For details, see the `Metrics` name-value argument of `incrementalLearner` or `incrementalClassificationKernel`.
Data Types: `table`
This property is read-only.
Number of observations the incremental model must be fit to before it tracks performance metrics in its `Metrics` property, specified as a nonnegative integer.
The default `MetricsWarmupPeriod` value depends on how you create the model:
• If you convert a traditionally trained model to create `Mdl`, the `MetricsWarmupPeriod` name-value argument of the `incrementalLearner` function sets this property. The default value of the argument is `0`.
• Otherwise, the default value is `1000`.
For more details, see Performance Metrics.
Data Types: `single` | `double`
This property is read-only.
Number of observations to use to compute window performance metrics, specified as a positive integer.
The default `MetricsWindowSize` value depends on how you create the model:
• If you convert a traditionally trained model to create `Mdl`, the `MetricsWindowSize` name-value argument of the `incrementalLearner` function sets this property. The default value of the argument is `200`.
• Otherwise, the default value is `200`.
For more details on performance metrics options, see Performance Metrics.
Data Types: `single` | `double`
## Object Functions
• `fit`: Train kernel model for incremental learning
• `updateMetrics`: Update performance metrics in kernel incremental learning model given new data
• `updateMetricsAndFit`: Update performance metrics in kernel incremental learning model given new data and train model
• `loss`: Loss of kernel incremental learning model on batch of data
• `predict`: Predict responses for new observations from kernel incremental learning model
• `perObservationLoss`: Per observation classification error of model for incremental learning
• `reset`: Reset incremental classification model
## Examples
Create an incremental kernel model without any prior information. Track the model performance on streaming data, and fit the model to the data.
Create a default incremental kernel SVM model for binary classification.
`Mdl = incrementalClassificationKernel()`
```Mdl = incrementalClassificationKernel IsWarm: 0 Metrics: [1x2 table] ClassNames: [1x0 double] ScoreTransform: 'none' NumExpansionDimensions: 0 KernelScale: 1 Properties, Methods ```
`Mdl` is an `incrementalClassificationKernel` model object. All its properties are read-only.
`Mdl` must be fit to data before you can use it to perform any other operations.
Load the human activity data set. Randomly shuffle the data.
```
load humanactivity
n = numel(actid);
rng(1) % For reproducibility
idx = randsample(n,n);
X = feat(idx,:);
Y = actid(idx);
```
For details on the data set, enter `Description` at the command line.
Responses can be one of five classes: Sitting, Standing, Walking, Running, or Dancing. Dichotomize the response by identifying whether the subject is moving (`actid` > 2).
`Y = Y > 2;`
Fit the incremental model to the training data by using the `updateMetricsAndFit` function. Simulate a data stream by processing chunks of 50 observations at a time. At each iteration:
• Process 50 observations.
• Overwrite the previous incremental model with a new one fitted to the incoming observations.
• Store the cumulative metrics, window metrics, and number of training observations to see how they evolve during incremental learning.
```
% Preallocation
numObsPerChunk = 50;
nchunk = floor(n/numObsPerChunk);
ce = array2table(zeros(nchunk,2),VariableNames=["Cumulative","Window"]);
numtrainobs = zeros(nchunk,1);

% Incremental learning
for j = 1:nchunk
    ibegin = min(n,numObsPerChunk*(j-1) + 1);
    iend = min(n,numObsPerChunk*j);
    idx = ibegin:iend;
    Mdl = updateMetricsAndFit(Mdl,X(idx,:),Y(idx));
    ce{j,:} = Mdl.Metrics{"ClassificationError",:};
    numtrainobs(j) = Mdl.NumTrainingObservations;
end
```
`Mdl` is an `incrementalClassificationKernel` model object trained on all the data in the stream. During incremental learning and after the model is warmed up, `updateMetricsAndFit` checks the performance of the model on the incoming observations, and then fits the model to those observations.
Plot a trace plot of the number of training observations and the performance metrics on separate tiles.
```
t = tiledlayout(2,1);
nexttile
plot(numtrainobs)
xline(Mdl.MetricsWarmupPeriod/numObsPerChunk,"--")
xlim([0 nchunk])
ylabel("Number of Training Observations")
nexttile
plot(ce.Variables)
xline(Mdl.MetricsWarmupPeriod/numObsPerChunk,"--")
xlim([0 nchunk])
ylabel("Classification Error")
legend(ce.Properties.VariableNames,Location="best")
xlabel(t,"Iteration")
```
The plot suggests that `updateMetricsAndFit` does the following:
• Fit the model during all incremental learning iterations.
• Compute the performance metrics after the metrics warm-up period only.
• Compute the cumulative metrics during each iteration.
• Compute the window metrics after processing 200 observations (4 iterations).
Prepare an incremental kernel SVM learner by specifying a metrics warm-up period and a metrics window size. Train the model by using SGD, and adjust the SGD batch size, learning rate, and regularization parameter.
Load the human activity data set. Randomly shuffle the data.
```
load humanactivity
n = numel(actid);
rng("default") % For reproducibility
idx = randsample(n,n);
X = feat(idx,:);
Y = actid(idx);
```
For details on the data set, enter `Description` at the command line.
Responses can be one of five classes: Sitting, Standing, Walking, Running, or Dancing. Dichotomize the response by identifying whether the subject is moving (`actid` > 2).
`Y = Y > 2;`
Create an incremental kernel model for binary classification. Configure the model as follows:
• Specify the SGD solver.
• Assume that a ridge regularization parameter value of 0.001, SGD batch size of 20, and learning rate of 0.002 work well for the problem.
• Specify a metrics warm-up period of 5000 observations.
• Specify a metrics window size of 500 observations.
• Track the classification and hinge error metrics to measure the performance of the model.
```
Mdl = incrementalClassificationKernel( ...
    Solver="sgd",Lambda=0.001,BatchSize=20,LearnRate=0.002, ...
    MetricsWarmupPeriod=5000,MetricsWindowSize=500, ...
    Metrics=["classiferror","hinge"])
```
```Mdl = incrementalClassificationKernel IsWarm: 0 Metrics: [2x2 table] ClassNames: [1x0 double] ScoreTransform: 'none' NumExpansionDimensions: 0 KernelScale: 1 Properties, Methods ```
`Mdl` is an `incrementalClassificationKernel` model object configured for incremental learning.
Fit the incremental model to the rest of the data by using the `updateMetricsAndFit` function. At each iteration:
• Simulate a data stream by processing a chunk of 50 observations. Note that the chunk size is different from the SGD batch size.
• Overwrite the previous incremental model with a new one fitted to the incoming observations.
• Store the cumulative metrics, window metrics, and number of training observations to see how they evolve during incremental learning.
```
% Preallocation
numObsPerChunk = 50;
nchunk = floor(n/numObsPerChunk);
ce = array2table(zeros(nchunk,2),VariableNames=["Cumulative","Window"]);
hinge = array2table(zeros(nchunk,2),VariableNames=["Cumulative","Window"]);
numtrainobs = zeros(nchunk,1);

% Incremental fitting
for j = 1:nchunk
    ibegin = min(n,numObsPerChunk*(j-1) + 1);
    iend = min(n,numObsPerChunk*j);
    idx = ibegin:iend;
    Mdl = updateMetricsAndFit(Mdl,X(idx,:),Y(idx));
    ce{j,:} = Mdl.Metrics{"ClassificationError",:};
    hinge{j,:} = Mdl.Metrics{"HingeLoss",:};
    numtrainobs(j) = Mdl.NumTrainingObservations;
end
```
`Mdl` is an `incrementalClassificationKernel` model object trained on all the data in the stream. During incremental learning and after the model is warmed up, `updateMetricsAndFit` checks the performance of the model on the incoming observations, and then fits the model to those observations.
Plot a trace plot of the number of training observations and the performance metrics on separate tiles.
```
t = tiledlayout(3,1);
nexttile
plot(numtrainobs)
xlim([0 nchunk])
ylabel(["Number of","Training Observations"])
xline(Mdl.MetricsWarmupPeriod/numObsPerChunk,"--")
nexttile
plot(ce.Variables)
xlim([0 nchunk])
ylabel("Classification Error")
xline(Mdl.MetricsWarmupPeriod/numObsPerChunk,"--")
legend(ce.Properties.VariableNames,Location="best")
nexttile
plot(hinge.Variables)
xlim([0 nchunk])
ylabel("Hinge Loss")
xline(Mdl.MetricsWarmupPeriod/numObsPerChunk,"--")
legend(hinge.Properties.VariableNames,Location="best")
xlabel(t,"Iteration")
```
The plot suggests that `updateMetricsAndFit` does the following:
• Fit the model during all incremental learning iterations.
• Compute the performance metrics after the metrics warm-up period only.
• Compute the cumulative metrics during each iteration.
• Compute the window metrics after processing 500 observations (10 iterations).
Train a kernel model for binary classification by using `fitckernel`, convert it to an incremental learner, track its performance, and fit it to streaming data. Carry over training options from traditional to incremental learning.
Load and Preprocess Data
Load the human activity data set. Randomly shuffle the data.
```
load humanactivity
rng(1) % For reproducibility
n = numel(actid);
idx = randsample(n,n);
X = feat(idx,:);
Y = actid(idx);
```
For details on the data set, enter `Description` at the command line.
Responses can be one of five classes: Sitting, Standing, Walking, Running, or Dancing. Dichotomize the response by identifying whether the subject is moving (`actid` > 2).
`Y = Y > 2;`
Suppose that the data collected when the subject was stationary (`Y` = `false`) has double the quality than when the subject was moving. Create a weight variable that attributes 2 to observations collected from a stationary subject, and 1 to a moving subject.
`W = ones(n,1) + ~Y;`
Train Kernel Model for Binary Classification
Fit a kernel model for binary classification to a random sample of half the data.
```
idxtt = randsample([true false],n,true);
Mdl = fitckernel(X(idxtt,:),Y(idxtt),Weights=W(idxtt))
```
```Mdl = ClassificationKernel ResponseName: 'Y' ClassNames: [0 1] Learner: 'svm' NumExpansionDimensions: 2048 KernelScale: 1 Lambda: 8.2967e-05 BoxConstraint: 1 Properties, Methods ```
`Mdl` is a `ClassificationKernel` model object representing a traditionally trained kernel model for binary classification.
Convert Trained Model
Convert the traditionally trained classification model to a model for incremental learning.
`IncrementalMdl = incrementalLearner(Mdl)`
```IncrementalMdl = incrementalClassificationKernel IsWarm: 1 Metrics: [1x2 table] ClassNames: [0 1] ScoreTransform: 'none' NumExpansionDimensions: 2048 KernelScale: 1 Properties, Methods ```
`IncrementalMdl` is an `incrementalClassificationKernel` model object configured for incremental learning.
Separately Track Performance Metrics and Fit Model
Perform incremental learning on the rest of the data by using the `updateMetrics` and `fit` functions. Simulate a data stream by processing 50 observations at a time. At each iteration:
1. Call `updateMetrics` to update the cumulative and window classification error of the model given the incoming chunk of observations. Overwrite the previous incremental model to update the `Metrics` property. Note that the function does not fit the model to the chunk of data—the chunk is "new" data for the model. Specify the observation weights.
2. Call `fit` to fit the model to the incoming chunk of observations. Overwrite the previous incremental model to update the model parameters. Specify the observation weights.
3. Store the classification error and number of training observations.
```
% Preallocation
idxil = ~idxtt;
nil = sum(idxil);
numObsPerChunk = 50;
nchunk = floor(nil/numObsPerChunk);
ce = array2table(zeros(nchunk,2),VariableNames=["Cumulative","Window"]);
numtrainobs = zeros(nchunk,1);
Xil = X(idxil,:);
Yil = Y(idxil);
Wil = W(idxil);

% Incremental fitting
for j = 1:nchunk
    ibegin = min(nil,numObsPerChunk*(j-1) + 1);
    iend = min(nil,numObsPerChunk*j);
    idx = ibegin:iend;
    IncrementalMdl = updateMetrics(IncrementalMdl,Xil(idx,:),Yil(idx), ...
        Weights=Wil(idx));
    ce{j,:} = IncrementalMdl.Metrics{"ClassificationError",:};
    IncrementalMdl = fit(IncrementalMdl,Xil(idx,:),Yil(idx), ...
        Weights=Wil(idx));
    numtrainobs(j) = IncrementalMdl.NumTrainingObservations;
end
```
`IncrementalMdl` is an `incrementalClassificationKernel` model object trained on all the data in the stream.
Alternatively, you can use `updateMetricsAndFit` to update performance metrics of the model given a new chunk of data, and then fit the model to the data.
Plot a trace plot of the number of training observations and the performance metrics.
```
t = tiledlayout(2,1);
nexttile
plot(numtrainobs)
xlim([0 nchunk])
ylabel("Number of Training Observations")
nexttile
plot(ce.Variables)
xlim([0 nchunk])
legend(ce.Properties.VariableNames)
ylabel("Classification Error")
xlabel(t,"Iteration")
```
The cumulative loss is stable and decreases gradually, whereas the window loss jumps.
## References
[1] Kempka, Michał, Wojciech Kotłowski, and Manfred K. Warmuth. "Adaptive Scale-Invariant Online Algorithms for Learning Linear Models." Preprint, submitted February 10, 2019. https://arxiv.org/abs/1902.07528.
[2] Langford, J., L. Li, and T. Zhang. “Sparse Online Learning Via Truncated Gradient.” J. Mach. Learn. Res., Vol. 10, 2009, pp. 777–801.
[3] Shalev-Shwartz, S., Y. Singer, and N. Srebro. “Pegasos: Primal Estimated Sub-Gradient Solver for SVM.” Proceedings of the 24th International Conference on Machine Learning, ICML ’07, 2007, pp. 807–814.
[4] Xu, Wei. “Towards Optimal One Pass Large Scale Learning with Averaged Stochastic Gradient Descent.” CoRR, abs/1107.2490, 2011.
[5] Rahimi, A., and B. Recht. “Random Features for Large-Scale Kernel Machines.” Advances in Neural Information Processing Systems. Vol. 20, 2008, pp. 1177–1184.
[6] Le, Q., T. Sarlós, and A. Smola. “Fastfood — Approximating Kernel Expansions in Loglinear Time.” Proceedings of the 30th International Conference on Machine Learning. Vol. 28, No. 3, 2013, pp. 244–252.
[7] Huang, P. S., H. Avron, T. N. Sainath, V. Sindhwani, and B. Ramabhadran. “Kernel methods match Deep Neural Networks on TIMIT.” 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. 2014, pp. 205–209.
## Version History
Introduced in R2022a
|
2022-11-26 12:02:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6140397787094116, "perplexity": 5827.734950548637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00744.warc.gz"}
|
https://the-learning-machine.com/article/ml/distance-and-similarity-metrics
|
# Distance and similarity metrics
## Introduction
Central to the foundations of machine learning is the concept of distance, or its opposite, similarity. Supervised machine learning, such as classification, relies on the assumption that examples of the same class are similar to one another and, conversely, that distant examples belong to separate classes. Similarly, unsupervised machine learning assumes that there are groups of similar examples and that such groups are distant from each other. This notion manifests explicitly in distance-based models such as nearest neighbors, support vector machines, and K-means clustering.
In this article, we describe some popular ways of computing similarity or distance between examples in the context of machine learning.
## Prerequisites
To understand distance metrics introduced in this article, we recommend familiarity with the concepts in
• Introduction to machine learning: An introduction to basic concepts in machine learning such as classification, training instances, features, and feature types.
Follow the above links to first get acquainted with the corresponding concepts.
## Problem setting
In machine learning, multivariate instances are represented as vectors of attributes or features. In this article, we consider $\ndim$-dimensional real-valued feature vectors, which we denote with bold-faced lower-case letters. For example, $\vx \in \real^\ndim$, where $\vx$ is shorthand for the vector $\vx = [x_1, \ldots, x_\ndim]$.
## Euclidean distance
Quite possibly the most popular distance in machine learning is the Euclidean distance, also casually known as the L2 distance due to the usage of $L_2$-norm in the formula. It is the straight-line distance between two points in the Euclidean space.
For real-valued vectors $\vx \in \real^\ndim$ and $\vy \in \real^\ndim$, it is computed as their $L_2$-norm
$$d_{\text{Euclidean}}(\vx, \vy) = \norm{\vx - \vy}{2}$$
where, the $L_2$ norm is defined as
$$\norm{\vx - \vy}{2} = \sqrt{\sum_{\ndimsmall=1}^{\ndim} \left(x_\ndimsmall - y_\ndimsmall\right)^2}$$
It can be shown that the Euclidean distance is equivalent to
$$\norm{\vx - \vy}{2} = \sqrt{(\vx - \vy) \cdot (\vx - \vy)} = \sqrt{\norm{\vx}{2}^2 + \norm{\vy}{2}^2 - 2\vx \cdot \vy}$$
where, $\cdot$ denotes a dot product of the operands.
It should be noted that the Euclidean distance is unbounded and may take any nonnegative real value in $[0,\infty)$.
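As a quick illustration (not part of the original article), here is a NumPy sketch of both forms of the Euclidean distance on made-up vectors:

```
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

# Direct L2-norm of the difference
d_direct = np.linalg.norm(x - y)

# Equivalent dot-product form: sqrt(||x||^2 + ||y||^2 - 2 x.y)
d_dot = np.sqrt(x @ x + y @ y - 2 * (x @ y))

print(d_direct, d_dot)  # both print sqrt(13) = 3.6055...
```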
## Minkowski distance
Euclidean distance is $L_2$-norm of the difference of vectors. The generalization of this distance to the $L_p$-norm is known as the Minkowski distance.
For real-valued vectors $\vx \in \real^\ndim$ and $\vy \in \real^\ndim$, their Minkowski distance is computed as
$$d_{\text{Minkowski}}(\vx, \vy) = \norm{\vx - \vy}{p}$$
where, the $L_p$ norm is defined as
$$\norm{\vx - \vy}{p} = \left[ \sum_{\ndimsmall=1}^{\ndim} \left|x_\ndimsmall - y_\ndimsmall\right|^p \right]^{\frac{1}{p}}$$
For $p=2$, we have the Euclidean distance.
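A small sketch of the general $L_p$ form follows; `minkowski` is a hypothetical helper defined here for illustration, not a library function, and the vectors are made up. Setting `p=2` reproduces the Euclidean distance above.

```
import numpy as np

def minkowski(x, y, p):
    """L_p distance between two equal-length 1-D arrays."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

print(minkowski(x, y, 2))                 # Euclidean distance, sqrt(13)
print(np.isclose(minkowski(x, y, 2),
                 np.linalg.norm(x - y)))  # True
```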
## Manhattan distance
A special case of Minkowski distance when $p = 1$ is known as the Manhattan distance, computed as the $L_1$ norm of the difference of two vectors.
For real-valued vectors $\vx \in \real^\ndim$ and $\vy \in \real^\ndim$, their Manhattan distance is computed as
$$d_{\text{Manhattan}}(\vx, \vy) = \norm{\vx - \vy}{1}$$
where, the $L_1$ norm is defined as
$$\norm{\vx - \vy}{1} = \sum_{\ndimsmall=1}^{\ndim} \left|x_\ndimsmall - y_\ndimsmall\right|$$
It should be noted that the distance is the sum of the absolute differences along each dimension. This distance is akin to walking from point $\vx$ to point $\vy$ in the Euclidean space so that we start the walk along the $1$-st dimension, then along the $2$-nd dimension, and so on. This situation is analogous to walking between two points in a metropolitan area such as Manhattan by walking along city blocks. Hence the name, Manhattan distance. For the same reason, the $L_1$ distance is also known by alternative names such as city block distance, taxicab metric, or rectilinear distance.
## Chebyshev distance
The Minkowski distance with $p = \infty$ is known as the Chebyshev distance. It is computed as the $L_\infty$-norm of the difference of two vectors.
For real-valued vectors $\vx \in \real^\ndim$ and $\vy \in \real^\ndim$, their Chebyshev distance is computed as
$$d_{\text{Chebyshev}}(\vx, \vy) = \norm{\vx - \vy}{\infty}$$
where, the $L_\infty$ norm is defined as
$$\norm{\vx - \vy}{\infty} = \underset{\ndimsmall=1,\ldots,\ndim}{\max} \left|x_\ndimsmall - y_\ndimsmall\right|$$
While the Euclidean and Manhattan distances aggregate the differences along all dimensions, the Chebyshev distance is the absolute difference along the single dimension in which the two vectors differ the most.
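A short numerical sketch (made-up vectors again) of the $p = 1$ and $p \to \infty$ special cases; a Minkowski distance with a large finite $p$ already approximates the Chebyshev distance:

```
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

manhattan = np.sum(np.abs(x - y))    # L1: 3 + 2 + 0 = 5
chebyshev = np.max(np.abs(x - y))    # L_inf: max(3, 2, 0) = 3

# Minkowski with a large p is already close to the Chebyshev distance
approx = np.sum(np.abs(x - y) ** 50) ** (1.0 / 50)
print(manhattan, chebyshev, approx)  # 5.0 3.0 ~3.0
```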
## Weighted Minkowski distance
The Minkowski distance gives equal weighting to each dimension. Some tasks may require specialized weighting per dimension, for example if some dimensions are more important than others. The generalization of Minkowski distance that enables such per-dimension weighting is known as weighted Minkowski distance.
For real-valued vectors $\vx \in \real^\ndim$ and $\vy \in \real^\ndim$ and weights $\vw \in \real^\ndim$, the weighted Minkowski distance is computed as
$$d_{\text{weightedMinkowski}}(\vx, \vy, \vw) = \left[ \sum_{\ndimsmall=1}^{\ndim} w_\ndimsmall \left|x_\ndimsmall - y_\ndimsmall\right|^p \right]^{\frac{1}{p}}$$
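A minimal sketch with made-up weights that emphasize the first dimension; `weighted_minkowski` is a hypothetical helper, not a library call:

```
import numpy as np

def weighted_minkowski(x, y, w, p):
    """Weighted L_p distance; w holds one nonnegative weight per dimension."""
    return np.sum(w * np.abs(x - y) ** p) ** (1.0 / p)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])
w = np.array([10.0, 1.0, 1.0])   # first dimension matters most

print(weighted_minkowski(x, y, w, p=2))            # sqrt(10*9 + 4) = sqrt(94)
print(weighted_minkowski(x, y, np.ones(3), p=2))   # unweighted: sqrt(13)
```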
## Cosine similarity
Cosine similarity is particularly popular in text-mining approaches that utilize the bag-of-words strategy for modeling text, for example to build text classifiers or for information retrieval.
The cosine similarity of two vectors is computed as the cosine of the angle between the two vectors. For real-valued vectors $\vx \in \real^\ndim$ and $\vy \in \real^\ndim$, it is computed as
$$\text{cosim}(\vx, \vy) = \cos \theta$$
where, $\theta$ is the angle between the vectors $\vx$ and $\vy$.
$$\text{cosim}(\vx, \vy) = \frac{\vx \cdot \vy}{\norm{\vx}{2} \norm{\vy}{2}} \label{eqn:cosine-similarity}$$
Being a normalized score (the cosine of the angle between two vectors), the cosine similarity is bounded in $[-1, 1]$; for vectors with nonnegative entries, such as bag-of-words representations, it lies in $[0, 1]$.
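A minimal sketch of the dot-product formula above, on made-up vectors; note that the output range includes negative values for vectors pointing in opposite directions:

```
import numpy as np

def cosine_similarity(x, y):
    """Cosine of the angle between two nonzero vectors."""
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])     # same direction as x
z = np.array([-1.0, -2.0, -3.0])  # opposite direction

print(cosine_similarity(x, y))    # 1.0
print(cosine_similarity(x, z))    # -1.0
```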
## Relationship: Cosine similarity and Euclidean distance
Euclidean distance can be written in terms of cosine similarity through these simple transformations.
\begin{aligned} d_{\text{Euclidean}}(\vx, \vy) &= \norm{\vx - \vy}{2} \\\\ &= \sqrt{\norm{\vx}{2}^2 + \norm{\vy}{2}^2 - 2\vx \cdot \vy} \\\\ &= \sqrt{\norm{\vx}{2}^2 + \norm{\vy}{2}^2 - 2\norm{\vx}{2}\norm{\vy}{2} \cos \theta} \\\\ \end{aligned}
If $\vx$ and $\vy$ are unit vectors, then
$$d_{\text{Euclidean}}(\vx, \vy) = \sqrt{2(1 - \cos \theta)}$$
This means that, for unit vectors, cosine similarity and Euclidean distance are directly related: high cosine similarity implies low Euclidean distance and vice versa. When the magnitudes of the vectors should not matter, we should use the magnitude-invariant metric, the cosine similarity. When they do matter, we should use the magnitude-respecting Euclidean distance.
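A quick numerical check of the unit-vector identity, using randomly generated vectors normalized to unit length (illustrative only):

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)
y = rng.normal(size=5)
x /= np.linalg.norm(x)   # make both vectors unit length
y /= np.linalg.norm(y)

cos_theta = x @ y
d_euclidean = np.linalg.norm(x - y)

print(np.isclose(d_euclidean, np.sqrt(2 * (1 - cos_theta))))  # True
```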
## Mahalanobis distance
Euclidean distance gives equal weighting to every dimension. In some settings, it might be useful to provide preferential weighting to certain dimensions.
In its simplest (diagonal) form, the Mahalanobis distance weighs each dimension by the inverse of the variance of the data along that dimension; more generally, it uses the inverse of the full covariance matrix. This leads to a scale-invariant distance because dimensions with higher variance (stretch or scaling) have lowered contributions to the overall distance.
For real-valued vectors $\vx \in \real^\ndim$ and $\vy \in \real^\ndim$, their Mahalanobis distance is computed as
$$d_{\text{Mahalanobis}}(\vx, \vy, \mSigma) = \sqrt{(\vx - \vy)\,\mSigma^{-1}(\vx - \vy)^\top}$$
where, $\mSigma$ is the covariance of the data containing these vectors.
In the case of a diagonal covariance matrix, we ignore interactions among dimensions, $\mSigma = \vsigma^2 \mI$. Here, the vector $\vsigma^2 = [\sigma_1^2, \ldots, \sigma_\ndim^2]$,
\begin{aligned} d_{\text{Mahalanobis}}(\vx, \vy, \vsigma^2 \mI) &= \sqrt{(\vx - \vy)\,\mSigma^{-1}(\vx - \vy)^\top} \\\\ &= \sqrt{\sum_{\ndimsmall=1}^{\ndim} \frac{\left(x_\ndimsmall - y_\ndimsmall\right)^2}{\sigma_\ndimsmall^2}} \end{aligned}
where, $\sigma_\ndimsmall^2$ is the variance of the sample set along the $\ndimsmall$-th dimension.
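A minimal sketch using a covariance matrix estimated from a small synthetic sample; `np.linalg.solve` is used instead of forming the inverse explicitly, and all values are made up:

```
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 3)) * np.array([1.0, 5.0, 0.5])  # unequal scales

sigma = np.cov(data, rowvar=False)   # 3x3 covariance of the sample

def mahalanobis(x, y, sigma):
    """Mahalanobis distance between two 1-D vectors given a covariance matrix."""
    diff = x - y
    return np.sqrt(diff @ np.linalg.solve(sigma, diff))

x, y = data[0], data[1]
print(mahalanobis(x, y, sigma))
```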
## Relationship: Mahalanobis distance and Euclidean distance
Just like cosine similarity is a magnitude-invariant version of the Euclidean distance, the Mahalanobis distance is a scale-invariant version of the Euclidean distance.
For real-valued vectors $\vx \in \real^\ndim$ and $\vy \in \real^\ndim$, consider first standardizing the data. This is typically done by subtracting the mean $\mu_\ndimsmall$ for each dimension $\ndimsmall$ and then dividing by the corresponding standard deviation $\sigma_\ndimsmall$. Standardization results in the updated vectors $\dash{\vx}$ and $\dash{\vy}$ as
$$\dash{\vx} = [\frac{x_1 - \mu_1}{\sigma_1}, \ldots, \frac{x_\ndim - \mu_\ndim}{\sigma_\ndim}]$$ $$\dash{\vy} = [\frac{y_1 - \mu_1}{\sigma_1}, \ldots, \frac{y_\ndim - \mu_\ndim}{\sigma_\ndim}]$$
The Euclidean distance of these normalized vectors is
\begin{aligned} d_{\text{Euclidean}}(\dash{\vx}, \dash{\vy}) &= \norm{\dash{\vx} - \dash{\vy}}{2} \\\\ &= \sqrt{\sum_{\ndimsmall=1}^{\ndim} \left(\dash{x}_\ndimsmall - \dash{y}_\ndimsmall\right)^2} \\\\ &= \sqrt{\sum_{\ndimsmall=1}^{\ndim} \left(\frac{x_\ndimsmall - \mu_\ndimsmall}{\sigma_\ndimsmall} - \frac{y_\ndimsmall - \mu_\ndimsmall}{\sigma_\ndimsmall}\right)^2} \\\\ &= \sqrt{\sum_{\ndimsmall=1}^{\ndim} \left(\frac{x_\ndimsmall - \mu_\ndimsmall - y_\ndimsmall + \mu_\ndimsmall}{\sigma_\ndimsmall}\right)^2} \\\\ &= \sqrt{\sum_{\ndimsmall=1}^{\ndim} \frac{(x_\ndimsmall - y_\ndimsmall)^2}{\sigma_\ndimsmall^2}} \\\\ &= d_{\text{Mahalanobis}}(\vx, \vy, \vsigma^2 \mI) \end{aligned}
Thus, the Mahalanobis distance with a diagonal covariance matrix $\vsigma^2 \mI$ is effectively the Euclidean distance between standardized vectors, that is, vectors that have been transformed by subtracting the mean and dividing by the standard deviation of each dimension. Therefore, the Mahalanobis distance with a diagonal covariance matrix is also known as the standardized Euclidean distance.
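A short check on synthetic data that the diagonal-covariance Mahalanobis distance coincides with the Euclidean distance between standardized vectors:

```
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=[0.0, 10.0], scale=[1.0, 4.0], size=(500, 2))

mu = data.mean(axis=0)
sigma = data.std(axis=0)

x, y = data[0], data[1]

# Mahalanobis with a diagonal covariance (per-dimension variances)
d_mahal = np.sqrt(np.sum((x - y) ** 2 / sigma ** 2))

# Euclidean distance between standardized vectors
d_std_euclid = np.linalg.norm((x - mu) / sigma - (y - mu) / sigma)

print(np.isclose(d_mahal, d_std_euclid))  # True
```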
|
2022-05-25 09:34:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9937310218811035, "perplexity": 798.8419242397581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00584.warc.gz"}
|
https://search.r-project.org/CRAN/refmans/DHARMa/html/simulateResiduals.html
|
simulateResiduals {DHARMa} R Documentation
## Create simulated residuals
### Description
The function creates scaled residuals by simulating from the fitted model. Residuals can be extracted with residuals.DHARMa. See testResiduals for an overview of residual tests, plot.DHARMa for an overview of available plots.
### Usage
simulateResiduals(fittedModel, n = 250, refit = F,
  integerResponse = NULL, plot = F, seed = 123, method = c("PIT",
  "traditional"), rotation = NULL, ...)
### Arguments
fittedModel: a fitted model of a class supported by DHARMa.
n: number of simulations. The smaller the number, the higher the stochastic error on the residuals. Also, for very small n, discretization artefacts can influence the tests. Default is 250, which is a relatively safe value. You can consider increasing to 1000 to stabilize the simulated values.
refit: if FALSE, new data will be simulated and scaled residuals will be created by comparing observed data with new data. If TRUE, the model will be refit on the simulated data (parametric bootstrap), and scaled residuals will be created by comparing observed with refitted residuals.
integerResponse: if TRUE, noise will be added to the residuals to maintain a uniform expectation for integer responses (such as Poisson or Binomial). Usually, the model will automatically detect the appropriate setting, so there is no need to adjust this setting.
plot: if TRUE, plotResiduals will be directly run after the residuals have been calculated.
seed: the random seed to be used within DHARMa. The default setting, recommended for most users, is to keep the random seed fixed at 123. This means that you will always get the same randomization and thus the same result when running the same code. NULL = no new seed is set, but previous random state will be restored after simulation. FALSE = no seed is set, and random state will not be restored. The latter two options are only recommended for simulation experiments. See vignette for details.
method: for refit = F, the quantile randomization method used. The two options implemented at the moment are probability integral transform (PIT-) residuals (current default), and the "traditional" randomization procedure that was used in DHARMa until version 0.3.0. Refit = T will always use "traditional", irrespective of the value of method. For details, see getQuantile.
rotation: optional rotation of the residual space prior to calculating the quantile residuals. The main purpose of this is to remove residual autocorrelation. See details below, section residual auto-correlation, and help of getQuantile.
...: parameters to pass to the simulate function of the model object. An important use of this is to specify whether simulations should be conditional on the current random effect estimates, e.g. via re.form. Note that not all models support syntax to specify conditional or unconditional simulations. See also details.
### Details
There are a number of important considerations when simulating from a more complex (hierarchical) model:
Re-simulating random effects / hierarchical structure: in a hierarchical model, we have several stochastic processes aligned on top of each other. Specifically, in a GLMM, we have a lower level stochastic process (random effect), whose result enters into a higher level (e.g. Poisson distribution). For other hierarchical models such as state-space models, similar considerations apply.
In such a situation, we have to decide if we want to re-simulate all stochastic levels, or only a subset of those. For example, in a GLMM, it is common to only simulate the last stochastic level (e.g. Poisson) conditional on the fitted random effects. This is often referred to as a conditional simulation. To control how many levels are re-simulated, the simulateResiduals function allows you to pass parameters on to the simulate function of the fitted model object. Please refer to the help of the different simulate functions (e.g. ?simulate.merMod) for details. For merMod (lme4) model objects, the relevant parameters are use.u and re.form.
If the model is correctly specified, the simulated residuals should be flat regardless of how many hierarchical levels we re-simulate. The most thorough procedure would therefore be to test all possible options. If testing only one option, I would recommend re-simulating all levels, because this essentially tests the model structure as a whole. This is the default setting in the DHARMa package. A potential drawback is that re-simulating the lower-level random effects creates more variability, which may reduce power for detecting problems in the upper-level stochastic processes. In particular, dispersion tests may produce different results when switching from conditional to unconditional simulations, and often the conditional simulation is more sensitive.
Refitting or not: a third issue is how residuals are calculated. simulateResiduals has two options that are controlled by the refit parameter:
1. if refit = FALSE (default), new data is simulated from the fitted model, and residuals are calculated by comparing the observed data to the new data
2. if refit = TRUE, a parametric bootstrap is performed, meaning that the model is refit on the new data, and residuals are created by comparing observed residuals against refitted residuals. I advise against using this method by default (see more comments in the vignette), unless you are really sure that you need it.
Residuals per group: In many situations, it can be useful to look at residuals per group, e.g. to see how much the model over / underpredicts per plot, year or subject. To do this, use recalculateResiduals, together with a grouping variable (see also help)
Transformation to other distributions: DHARMa calculates residuals for which the theoretical expectation (assuming a correctly specified model) is uniform. To transform this residuals to another distribution (e.g. so that a correctly specified model will have normal residuals) see residuals.DHARMa.
Integer responses: this is only relevant if method = "traditional", in which case it activates the randomization of the residuals. Usually, this does not need to be changed, as DHARMa will try to detect automatically whether the fitted model has an integer or discrete distribution via the family argument. However, in some cases the family does not allow to uniquely identify the distribution type. For example, a tweedie distribution can be integer or continuous. Therefore, DHARMa will additionally check the simulation results for repeated values, and will change the distribution type if repeated values are found (a message is displayed in this case).
Residual auto-correlation: a common problem is residual autocorrelation. Spatial, temporal and phylogenetic autocorrelation can be tested with testSpatialAutocorrelation and testTemporalAutocorrelation. If simulations are unconditional, residual correlations will be maintained, even if the autocorrelation is addressed by an appropriate CAR structure. This may be a problem, because autocorrelation may create apparently systematic patterns in plots or tests such as testUniformity. To reduce this problem, either simulate conditional on fitted correlated REs, or you could try to rotate residuals via the rotation parameter (the latter will likely only work in approximately linear models). See getQuantile for details on the rotation.
### Value
An S3 class of type "DHARMa". Implemented S3 functions include plot.DHARMa, print.DHARMa and residuals.DHARMa. For other functions that can be used on a DHARMa object, see section "See Also" below.
### See Also
testResiduals, plotResiduals, recalculateResiduals, outliers
### Examples
library(lme4)
testData = createData(sampleSize = 100, overdispersion = 0.5, family = poisson())
fittedModel <- glmer(observedResponse ~ Environment1 + (1|group),
family = "poisson", data = testData)
simulationOutput <- simulateResiduals(fittedModel = fittedModel)
# standard plot
plot(simulationOutput)
# one of the possible test, for other options see ?testResiduals / vignette
testDispersion(simulationOutput)
# the calculated residuals can be accessed via
residuals(simulationOutput)
# transform residuals to other pdf, see ?residuals.DHARMa for details
residuals(simulationOutput, quantileFunction = qnorm, outlierValues = c(-7,7))
# get residuals that are outside the simulation envelope
outliers(simulationOutput)
# calculating aggregated residuals per group
simulationOutput2 = recalculateResiduals(simulationOutput, group = testData$group)
plot(simulationOutput2, quantreg = FALSE)
# calculating residuals only for subset of the data
simulationOutput3 = recalculateResiduals(simulationOutput, sel = testData$group == 1 )
plot(simulationOutput3, quantreg = FALSE)
[Package DHARMa version 0.4.5 Index]
|
2022-05-21 02:53:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5937333703041077, "perplexity": 1999.1608742411497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534773.36/warc/CC-MAIN-20220521014358-20220521044358-00178.warc.gz"}
|
https://dipy.org/documentation/1.1.1./examples_built/tracking_sfm/
|
# Tracking with the Sparse Fascicle Model
Tracking requires a per-voxel model. Here, the model is the Sparse Fascicle Model (SFM), described in [Rokem2015]. This model reconstructs the diffusion signal as a combination of the signals from different fascicles (see also Reconstruction with the Sparse Fascicle Model).
from dipy.core.gradients import gradient_table
from dipy.data import get_sphere, get_fnames
from dipy.direction.peaks import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.io.image import load_nifti, load_nifti_data
from dipy.io.streamline import save_trk
from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.reconst.csdeconv import auto_response
from dipy.reconst import sfm
from dipy.tracking import utils
from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.streamline import (select_random_set_of_streamlines,
transform_streamlines,
Streamlines)
from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion
from dipy.viz import window, actor, colormap, has_fury
from numpy.linalg import inv
# Enables/disables interactive visualization
interactive = False
To begin, we read the Stanford HARDI data set into memory:
hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames('stanford_hardi')
label_fname = get_fnames('stanford_labels')
data, affine, hardi_img = load_nifti(hardi_fname, return_img=True)
# The label volume and gradient table are needed by the steps below
labels = load_nifti_data(label_fname)
bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs)
This data set provides a label map (generated using FreeSurfer), in which the white matter voxels are labeled as either 1 or 2:
white_matter = (labels == 1) | (labels == 2)
The first step in tracking is generating a model from which tracking directions can be extracted in every voxel.
For the SFM, this requires first that we define a canonical response function that will be used to deconvolve the signal in every voxel
response, ratio = auto_response(gtab, data, roi_radius=10, fa_thr=0.7)
We initialize an SFM model object, using this response function and using the default sphere (362 vertices, symmetrically distributed on the surface of the sphere):
sphere = get_sphere()
sf_model = sfm.SparseFascicleModel(gtab, sphere=sphere,
l1_ratio=0.5, alpha=0.001,
response=response[0])
We fit this model to the data in each voxel in the white-matter mask, so that we can use these directions in tracking:
pnm = peaks_from_model(sf_model, data, sphere,
relative_peak_threshold=.5,
min_separation_angle=25,
parallel=True)
A ThresholdStoppingCriterion object is used to segment the data to track only through areas in which the Generalized Fractional Anisotropy (GFA) is sufficiently high.
stopping_criterion = ThresholdStoppingCriterion(pnm.gfa, .25)
Tracking will be started from a set of seeds evenly distributed in the white matter:
seeds = utils.seeds_from_mask(white_matter, affine, density=[2, 2, 2])
For the sake of brevity, we will take only the first 1000 seeds, generating only 1000 streamlines. Remove this line to track from many more points in all of the white matter
seeds = seeds[:1000]
We now have the necessary components to construct a tracking pipeline and execute the tracking
streamline_generator = LocalTracking(pnm, stopping_criterion, seeds, affine,
step_size=.5)
streamlines = Streamlines(streamline_generator)
Next, we will create a visualization of these streamlines, relative to this subject’s T1-weighted anatomy:
t1_fname = get_fnames('stanford_t1')
t1_data, t1_aff = load_nifti(t1_fname)  # T1 volume and affine used for display below
color = colormap.line_colors(streamlines)
To speed up visualization, we will select a random sub-set of streamlines to display. This is particularly important, if you track from seeds throughout the entire white matter, generating many streamlines. In this case, for demonstration purposes, we subselect 900 streamlines.
plot_streamlines = select_random_set_of_streamlines(streamlines, 900)
if has_fury:
    streamlines_actor = actor.streamtube(
        list(transform_streamlines(plot_streamlines, inv(t1_aff))),
        colormap.line_colors(streamlines), linewidth=0.1)
    vol_actor = actor.slicer(t1_data)
    vol_actor.display(40, None, None)
    vol_actor2 = vol_actor.copy()
    vol_actor2.display(None, None, 35)
    ren = window.Renderer()
    ren.add(streamlines_actor)
    ren.add(vol_actor)
    ren.add(vol_actor2)
    window.record(ren, out_path='tractogram_sfm.png', size=(800, 800))
    if interactive:
        window.show(ren)
Sparse Fascicle Model tracks
Finally, we can save these streamlines to a ‘trk’ file, for use in other software, or for further analysis.
sft = StatefulTractogram(streamlines, hardi_img, Space.RASMM)
save_trk(sft, "tractogram_sfm_detr.trk")
## References
Rokem2015
Ariel Rokem, Jason D. Yeatman, Franco Pestilli, Kendrick N. Kay, Aviv Mezer, Stefan van der Walt, Brian A. Wandell (2015). Evaluating the accuracy of diffusion MRI models in white matter. PLoS ONE 10(4): e0123272. doi:10.1371/journal.pone.0123272
Example source code
You can download the full source code of this example. This same script is also included in the dipy source distribution under the doc/examples/ directory.
https://mathoverflow.net/questions/257683/about-reflections-of-reflection-groups
# About reflections of reflection groups
For any finite crystallographic reflection group $W = \langle s_1, \ldots , s_n\rangle$, every hyperplane reflection is of the form $ws_iw^{-1}$ for some $i$ and some $w \in W$.
A finite crystallographic reflection group $W$ is a Coxeter group with the presentation \begin{align} W=\langle s_1, s_2, \ldots, s_n \mid (s_is_j)^{m_{ij}} = 1 \rangle, \end{align} where $(s_i)_{1 \leq i \leq n}$ are the simple reflections, $m_{ii}=1$, and $m_{ij} \in \{2,3,4,6\}$ for $i \neq j$. The pair $(W,S)$, with $S = \{s_1, \ldots, s_n\}$, is called a Coxeter system.
I have some questions:
1. Is it true that every finite reflection group consists of some (not necessarily hyperplane) reflections and some rotations?
2. What are the reduced words of reflections under a Coxeter system $(W,S)$?
For any finite reflection group, the number of hyperplane reflections is the number of positive roots in the corresponding root system, see section 1.14 of Jim Humphreys' book "Reflection groups and Coxeter groups".
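For a concrete sanity check of this count, take type $A_{n-1}$, i.e. $W = S_n$ with simple reflections $s_i = (i,\,i+1)$: \begin{align} \#\{\text{reflections in } S_n\} = \#\{(i\,j) : 1 \le i < j \le n\} = \binom{n}{2} = \#\{\text{positive roots } e_i - e_j\}, \end{align} and every transposition has a palindromic reduced word, e.g. $(1\,3) = s_1 s_2 s_1 = s_2 s_1 s_2$ in $S_3$.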
• Not clear to me what "rotation" would mean, abstractly, ... – paul garrett Dec 20 '16 at 18:58
• Well, anyways, every reflection has some reduced word of that form (palindromic). This paper studies reduced words for reflections in Coxeter groups in detail: deepblue.lib.umich.edu/handle/2027.42/46149 – Sam Hopkins Dec 20 '16 at 20:35
• This wikipedia page has a terrible definition of "rotation." I was under the impression that wikipedia was starting to be somewhat reliable for math...this is the first math page I've seen in a long time that I think should be completely scrapped. A "Euclidean" rotation should be defined as an rigid motion whose fixed space is a codimension-2 subspace – Nathan Reading Dec 21 '16 at 15:08
• As @NathanReading says, that wiki page is terrible, and misleading. I am suspecting that the question's intent involves the standard representation of a Coxeter group, but perhaps one should say so? And, still, a "rotation" should have a codimension-two fixed subspace. – paul garrett Dec 21 '16 at 22:21
• @bing: Concerning your final sentence, the crystallographic restriction isn't needed for a finite Coxeter group (= finite group generated by reflections acting on real euclidean space) if you define "root" appropriately in that setting. Reflections are always relative to roots in this general setting, e.g., see the exposition in section 1.14 of my 1990 book. – Jim Humphreys Mar 16 '17 at 17:59
1) No. $W$ consists of elements of determinant $1$ and $-1$. According to your Wikipedia page, all elements of determinant $1$ are "rotations". Elements of determinant $-1$ are not necessarily reflections, because they are not necessarily of order 2. Just think of a 4-cycle $(1,2,3,4)\in S_4$: its order is 4, not 2. It is a proper roto-reflection...
BTW, the terminology is confusing: a rotation can be a reflection! Think of a reflection across 2-codimensional subspace.
2) It is explained by Sam for (hyperplane) reflections. They are all of the form $ws_iw^{-1}$ for some $w\in W$, and they will have a reduced word of this kind.
Higher-dimensional (fixing a subspace of higher codimension) reflections can be figured out as well. They are just elements of order 2. I do not know their reduced words off the top of my head.
• A specific example (in $\mathbb{R}^3$) of an element of determinant $-1$ that's not a reflection is the composition of a reflection in the $xy$ plane with a rotation of $\frac{2\pi}n$ about the $z$ axis - think of it as a sort of rotary 'glide-reflection'; it'll have order $n$ if $n$ is even, or order $2n$ if $n$ is odd. – Steven Stadnicki Dec 21 '16 at 16:35
• In the theory of abstract Coxeter groups, "reflections" are usually defined to be the elements in the set $\{wsw^{-1}\colon w \in W, s\in S\}$ (where $(W,S)$ is some Coxeter system). Thus, reflections across codimension $>1$ subspaces are not usually called reflections in this context. – Sam Hopkins Dec 23 '16 at 18:20
• And by the way, I think it requires at least a small argument to show that every reflection has a reduced word that is a palindrome. – Sam Hopkins Dec 23 '16 at 18:21
• This homework solution sheet seems to provide such an argument: math.sfsu.edu/federico/Clase/Coxeter/HomeworkSolutions/9.pdf – Sam Hopkins Dec 23 '16 at 18:25
Every Coxeter group consists of reflections. The product of an even number of reflections in two mirrors is a rotation, which is what the relation $(s_is_j)^{m_{ij}} = 1$ encodes. The infinite Coxeter groups contain sets of parallel mirrors, whose reflections compose to translations rather than rotations.
One might note that the roots are the perpendiculars to each of the mirrors, radiating from a point; in the crystallographic case these generate the root lattice.
A 'reduced word' is a shortest expression for a group element, i.e. the shortest path between two points. So if $ABABAB=1$, as in the hexagon, then $ABAB=BA$, where $BA$ is the reduced form.
• In the Bourbaki definition of "Coxeter group" (inspired mainly by work of Tits), a "reflection" always fixes a hyperplane, so your first sentence doesn't apply. For example, the identity element is not a "reflection". – Jim Humphreys Mar 19 '17 at 16:17
• You must understand that a hyperplane is space itself. Coxeter himself describes groups where there are order-three reflections etc, such as the group 3(3)3 => AAA = BBB = 1, ABA = BAB. Likewise, the intersection of two planes is an orthohedrix, the space orthogonal to a hedrix or 2-space. You will note then that (AB)^n constitutes a rotation. – wendy.krieger Mar 21 '17 at 9:29
https://cs.stackexchange.com/questions/126119/channel-coding-and-error-probability-where-are-these-probabilities-from
# Channel coding and Error probability. Where are these probabilities from?
Where do the following probabilities come from?
We consider the binary symmetric channel BSCε with ε = 0.1 and the block code C = {c1, c2} with code words c1 = 010 and c2 = 101. On a received word y we use the decoder D = {D1, D2}, which decodes y to the code word with the smallest Hamming distance to y. Determine D1 and D2 and the global error probability ERROR(D), assuming the code words are equally likely. Hint: For an output y there is exactly one input x that leads to a decoding failure. (y = 100 is decoded incorrectly only if the sent message was x = c1 = 010.) So the term (1 − p(D(y)|y)) equals Pr[X = x | Y = y] for a suitable x.
Now, the Hamming distances:
$$\begin{array}{c|cc} y & d(y,\,010) & d(y,\,101) \\ \hline 000 & 1 & 2 \\ 001 & 2 & 1 \\ 010 & 0 & 3 \\ 011 & 1 & 2 \\ 100 & 2 & 1 \\ 101 & 3 & 0 \\ 110 & 1 & 2 \\ 111 & 2 & 1 \end{array}$$
$$D_{1}=\{000,010,011,110\} \ \text{(decodes to 010)}, \qquad D_{2}=\{001,100,101,111\} \ \text{(decodes to 101)}$$
$$\begin{aligned} ERROR(D) &= \sum_{y \in \Sigma_{A}^{3}} p(y)\,(1-p(D(y) \mid y)) \\ &= 2 \cdot p(y)(1-p(D(y) \mid y)) + 6 \cdot p(y)(1-p(D(y) \mid y)) \\ &= 2 \cdot\left(\tfrac{729}{2000}+\tfrac{1}{2000}\right)\left(\tfrac{7}{250}\right) + 6 \cdot\left(\tfrac{81}{2000}+\tfrac{9}{2000}\right)\left(\tfrac{757}{1000}\right) \end{aligned}$$
How do I get to the probabilities $\frac{7}{250}$ and $\frac{757}{1000}$?
I don't follow this calculation. It should be correct, but I don't see how to arrive at these probabilities. Could someone explain this to me?
http://universesandbox.com/blog/
## Hiring a Cross-Platform Engineer
Universe Sandbox is a space and gravity simulator masquerading as a video game with over 870,000 unit sales and an Overwhelmingly Positive 95% rating on Steam.
Giant Army is looking for a creative and highly technical cross-platform engineer to help put Universe Sandbox in everyone’s pocket (and, eventually, in the living room).
We are a close-knit multidisciplinary team of astrophysicists, engineers, graphics developers, and designers that highly values individual contributions and collaborative problem-solving. The company name, Giant Army, was inspired by the concept of “standing on the shoulders of giants.”
Our mission is to reveal the awesomeness of the universe and the fragility of our planet through real-time interaction, creation, and destruction of a realistic, science-based simulation.
### Cross-Platform Goals
Help us solve the complex challenge of bringing Universe Sandbox to all platforms. We are getting closer to our initial mobile release (iOS & Android) and need some help pushing us over the finish line. We plan to begin console development in 2022.
We embrace responsive design and use the same UI and codebase across all platforms; desktop, AR/VR, and mobile are all built from the same project.
You might be the perfect candidate if you:
• Enjoy solving technical challenges with multi-platform support
• Have shipped titles on desktop, mobile, and console
• Strive to make the user experience feel great on desktop, mobile, and console without compromise
• Work with our team to bring the best possible experience to mobile, consoles, and future platforms, without losing desktop functionality
• Help solve technical issues with input, performance, and overall usability
• Plan for the future as we dive into gamepad and console support
• Stay current on Unity’s tech and trends. We’re working on a major rewrite to be more closely integrated with Unity’s DOTS Physics.
### Qualifications
• Professional or personal programming projects showing your passion
• Experience with C# & Unity (DOTS knowledge a bonus)
• Strong attention to detail and a love of polish & iteration
• Experience shipping titles on mobile or console platforms (ideally)
• Unity native plugin experience (Objective-C, Java) preferred
• Passion for science, astronomy, and real-time interactive simulations
• Love of fantastical what-if scenarios: what-if.xkcd.com (note citation #6 on 148)
• Ability to see things from our user’s perspective
• Appreciation of video games
### Benefits
• This is a full-time, remote position working with a 100% remote team
• You will have a great deal of autonomy over your working hours
• Health coverage for all American employees
• We also offer optional 4 day work weeks (8-hour days)
### Company Overview
Giant Army is a profitable company wholly owned by Universe Sandbox’s original creator; we have no publishers, marketing department, or external stakeholders to derail our vision. We are a decentralized, remote team founded in Seattle, Washington, USA, with members across the United States, Germany, Denmark, and Australia.
Team members enjoy a flexible, collaborative environment that values work-life balance. We are independently published and release updates on our own (relaxed) schedule.
Giant Army provides generous paid time off, new hardware/software reimbursements, healthcare, and other benefits.
We pursue features that get us excited about science. We strive to create an accessible experience that can’t be found anywhere else.
As a fully remote team since 2011, we rely on Google Workspace (Gmail, Calendar, Docs, Spreadsheets, Meet), Slack, Groove, GitHub, ZenHub, Unity, WordPress, and Notion.
We believe science and video games are for everyone, regardless of identity, and we’re committed to making an inclusive workplace. We encourage anyone who shares our passion for space to apply.
### Product Overview
Universe Sandbox is a physics-based space simulator that allows you to create, destroy, and interact on an unimaginable scale. Experiment with gravity, climate, and collisions to reveal the beauty of our universe and the fragility of our planet.
It’s more than a game; it’s a way of experiencing and learning about reality in a way that’s never been done before.
Universe Sandbox is available on Windows, Mac, Linux, and VR with mobile in development and future platforms planned. We’ve sold over 870,000 copies and have an “Overwhelmingly Positive” rating on Steam with 95% positive user reviews.
If we don’t have an active job opening that fits your skill set, but working on Universe Sandbox is your dream job, send us an email telling us why and we’ll at least send you back a reply.
### How to Apply
Fill out this application
## Universe Sandbox for Mobile | DevLog 1
You can purchase Universe Sandbox via our website or the Steam Store.
Have you ever wanted a universe in your pocket? We have too, and so we’ve been actively working on a mobile version of Universe Sandbox for both iOS and Android to make this a reality.
Universe Sandbox for mobile will have the same features and interface as the desktop version (in fact mobile and desktop are built from the same source code) and we are working to make sure it is an equally enjoyable experience.
### All-Around Improved Experience
Our work on mobile has motivated many features and improvements that have already been made to the desktop version. This includes automatic scaling of graphics settings based on screen resolution and the separate, minimizable panel that comes up when you use a tool, like the laser. Additionally, optimizing Universe Sandbox for mobile has the added benefit of improving performance on the desktop version.
### Designing a Handheld Universe Simulator
For the last few months, we’ve been focusing on making sure the mobile version is just as fun to play as the desktop version. In Update 26, we unified the user interface across desktop and VR, and we’re continuing to develop this unified interface with physically smaller (that is, mobile) screens in mind. You can check out how we are building this flexible user interface right now by making the window in the desktop version small. If you do try this, you’ll notice it presents quite a design challenge, not only for existing features but also for any features we add.
You may have seen some of the improvements we’ve made to our user interface in recent updates. For example, our bottom bar redesigns both create a sleeker, more adaptable desktop experience while also making everything more accessible on mobile. However, we are still working on solving a few design challenges including (but not limited to):
• What’s the best way to manage all of the different panels on a small screen (our guide system creates particular challenges)?
• Working around the limitations of minimum button sizes required for touchscreens
• How do we make the user interface work in both portrait and landscape orientations?
### What’s Next for Mobile Development
We have been working on numerous updated user interface designs that improve functionality and clarity no matter what device you are on, and implementing those is one of our major next steps. We’re also currently hiring a cross-platform engineer to help bring Universe Sandbox to mobile and beyond.
While we still do not have a release date or official price for mobile, we currently plan on it being a one-time paid app with no ads or in-app purchases. We hope to write more of these mobile-focused DevLogs as we make more progress, so stay tuned!
http://universesandbox.com/mobile/
## Atmospheric Adjustments | Update 27.2
Run Steam to download Update 27.2, or buy Universe Sandbox via our website or the Steam Store.
### Update 27.2
You can now change specific simulation interactions, like gravity and collisions, on a per-object basis! This minor update also includes simulated atmosphere opacity (a measure of how hard it is to see through the atmosphere), bug fixes, and more.
#### Individual Object Simulation Manipulation
The ability to turn off specific simulation interactions on a per-object basis has been added to all objects in Properties > Overview. We plan to add to this over time, and we hope you enjoy creating all kinds of crazy scenarios with these options as much as we do!
#### More Highlights
• Polar ice caps on random rocky planets are now informed by the water depth around the poles and are no longer circles
• Opening multiple surface data views no longer causes a noticeable reduction in performance
Please report any issues on our Steam forum, on Discord, or in-game via Home > Send Feedback.
## Cloud Speed Simulation | ScienceLog #5
One of our recent improvements to Universe Sandbox includes realistically simulating the speed at which clouds rotate around objects, like planets and moons. While our in-game guide, which can be found under Guides > Science > Clouds, shows off these new features, we wanted to explain them in a little more depth.
To simulate completely realistic clouds, we would need to do a full weather simulation, including the water cycle. As we talk about in our Snow Simulation ScienceLog, this isn’t currently possible without a supercomputer, so for now our clouds are drawn from pre-made cloud pictures. However, we determine the speed at which clouds rotate around an object’s surface from two simulated effects.
### Creating Wind
In reality wind is initially created going in an unexpected direction – it travels outwards from the equator to the poles instead of rotating around the equator.
This is because objects are (generally) warmer at their equator and colder at their poles. The higher temperatures at the equator lead to a higher air pressure (essentially the weight of the atmosphere) at the equator, while colder temperatures at the poles lead to lower air pressure. The high pressure air at the equator moves to the lower pressure air at the poles, creating a wind that moves the clouds with it.
This wind moves faster, increasing the cloud speed, the larger the temperature difference between the equator and the poles is, since this will create a larger air pressure difference. In Universe Sandbox we simulate this difference in air pressure between an object’s equator and its poles based on the difference between its Minimum and Maximum Temperature, which are usually at the poles and equator.
### Changing the Wind’s Direction
So if wind, and clouds, starts out moving from the equator to the poles, why is it that in reality (and in Universe Sandbox) the wind and clouds move around the Earth’s equator?
This has to do with something called the Coriolis effect – the second effect we simulate for our cloud rotation speeds – which is an effect that occurs on any object that rotates. The Coriolis effect creates a force, called the Coriolis force, that pushes the wind around the Earth’s (or any object’s) equator. The strength of this force increases the faster the object is rotating.
### The Resulting Rotation (Speed)
So we now have two effects pushing the winds, and thus clouds, in two different directions:
1. The difference in air pressure (and temperature) between the equator and the poles of the object forces the winds to move outwards from the equator to the poles.
2. The Coriolis Effect pushes the winds around the equator of an object.
So how do we arrive at the final wind, and cloud, rotation speed? The wind speed will increase until the strength of both effects on the wind is the same. When this happens, the wind and clouds end up rotating around the equator of the object at a constant, unchanging speed. In Universe Sandbox this speed is taken as the Cloud Rotation Speed.
### Manipulating the Winds
A really interesting effect that happens when our two simulated effects have the same strength is that the faster an object rotates, the slower the cloud speed will be.
A faster object Rotation Speed creates a stronger force from the Coriolis effect, which allows the two effects to reach an equal strength more quickly. This means that the wind speed has less time to increase before it becomes constant. The result is that the final wind, and Cloud Rotation Speed, is slower.
In addition to Rotation Speed and the Minimum and Maximum Temperature, the strength of the wind that is created from the temperature difference also depends on the Atmosphere Mass, the Surface Gravity, and the Radius of the object (see the Bonus Math section below for details). This is because a more massive atmosphere will slow down the Cloud Rotation Speed, since it is harder to move, and a smaller object radius will increase the Cloud Rotation Speed, since it is easier to move air around a smaller object.
While simulating these effects is a welcomed advancement in our cloud simulation, there are still many improvements we would like to make. This includes dynamically generating clouds and giving them more realistic material compositions. For now, try experimenting with different object properties to see how they affect the Cloud Rotation Speed. We recommend the object’s Rotation Speed, since we can’t slow down the Earth in real life (nor would we want to), this is a great way to see some amazing science at work!
This blog post is part of our ongoing series of ScienceLog articles, intended to share the science behind some of Universe Sandbox’s most interesting features. If you would love to learn about the real-life science powering our simulator, please stay tuned and let us know what you would like to read about next.
### Bonus Math
If you’re interested in exactly how different object properties relate to both the force from the difference in the air pressure between the equator and the poles (called the pressure gradient) and the force from the Coriolis effect (called the Coriolis force) then you’ll enjoy this extra little bit of math.
When we simulate the cloud rotation speed we figure out the pressure difference, ΔP, which is based on the maximum atmosphere surface pressure, Pmax. This is the surface pressure at the equator, and depends on the Atmosphere Mass, M, the Surface Gravity, g, and the radius of the planet, R,
P_{\rm{max}} = Mg/(4 \pi R^{2}).
The pressure and temperature of a gas are related (by something called the Ideal Gas Law), so we can compute ΔP using just Pmax and the maximum and minimum temperature, Tmax and Tmin, of the object,
\Delta P = P_{\rm{max}} \left(\frac{T_{\rm{max}}}{T_{\rm{min}}} - 1 \right).
Now that we have this pressure difference, we can compute the force, F, that this pressure gradient applies over a certain amount of air mass, m. This force per mass is what causes winds and clouds to move and depends on ΔP (and a few other less important things). That means that this force can change depending on M, g, R, Tmax, and Tmin (that is, Atmosphere Mass, Surface Gravity, Radius, Maximum Temperature, and Minimum Temperature respectively), so all of these properties affect the cloud rotation speed,
\frac{F}{m} \propto \Delta P \propto P_{\rm{max}} \times \frac{T_{\rm{max}}}{T_{\rm{min}}} \propto \frac{Mg}{R^{2}} \times \frac{T_{\rm{max}}}{T_{\rm{min}}}.
Here the ∝ symbol means “proportional to,” which is similar to an equals sign, “=”, but leaves out some of the less important values. The Coriolis Force also provides a force per mass in order to move clouds. This force is dependent on a few different things, but in particular it depends on the rotation speed of the planet, Ω, and the speed that the wind is already moving due to the pressure gradient, v,
\frac{F}{m} \propto \Omega v.
To reach a balanced state where the wind, and clouds, are moving around the equator of an object at a constant speed, the two forces must be equal, leading to the relationship
\Omega v \propto \frac{Mg}{R^{2}} \times \frac{T_{\rm{max}}}{T_{\rm{min}}}.
Now the value we want is the wind, or cloud, rotation speed, v. Rearranging the above equation gives us
v \propto \frac{Mg}{\Omega R^{2}} \times \frac{T_{\rm{max}}}{T_{\rm{min}}}.
So what does this mean? First, the larger the difference between the minimum and maximum temperature, the faster the clouds will move. This is because a larger temperature difference means a larger pressure difference, thus faster winds.
It also shows mathematically why a more massive atmosphere slows the cloud rotation speed and a smaller radius can dramatically increase the cloud rotation speed like we discussed above.
But the most interesting consequence of this relationship is that it shows why it is that the faster an object rotates, the slower the cloud speed will be. This result was so surprising to us at first that we had to triple check it (we’re convinced it is correct now, don’t worry). While it’s impossible to slow down the Earth’s rotation in reality (not to mention the immense destruction that would cause if we could), exploring in Universe Sandbox allows you to see the consequences of some beautiful math for yourself.
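To make that scaling concrete, here is a small illustrative Python sketch. It is not Universe Sandbox code; it only evaluates the proportionality above in arbitrary units, so only ratios between scenarios are meaningful, and the inputs are rough Earth-like numbers.

```python
# Illustrative only: evaluates the proportionality
#   v ~ (M * g) / (Omega * R^2) * (T_max / T_min)
# in arbitrary units, so only ratios between scenarios mean anything.

def cloud_speed_scale(atmosphere_mass, surface_gravity, rotation_rate,
                      radius, t_max, t_min):
    """Relative cloud rotation speed from balancing the pressure gradient
    against the Coriolis force (proportionality only, arbitrary units)."""
    return (atmosphere_mass * surface_gravity) / (rotation_rate * radius ** 2) \
        * (t_max / t_min)

# Rough Earth-like baseline values (kg, m/s^2, rad/s, m, K)
earth = cloud_speed_scale(atmosphere_mass=5.1e18, surface_gravity=9.8,
                          rotation_rate=7.3e-5, radius=6.4e6,
                          t_max=310.0, t_min=230.0)

# The same planet spinning twice as fast: the Coriolis force balances the
# pressure gradient sooner, so the equilibrium cloud speed is halved.
fast_spin = cloud_speed_scale(atmosphere_mass=5.1e18, surface_gravity=9.8,
                              rotation_rate=2 * 7.3e-5, radius=6.4e6,
                              t_max=310.0, t_min=230.0)

print(f"fast-spinning / baseline cloud speed: {fast_spin / earth:.2f}")  # 0.50
```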
## Clouds in Motion | Update 27.1
Run Steam to download Update 27.1, or buy Universe Sandbox via our website or the Steam Store.
### Update 27.1
Cloud speed is now simulated based on an object’s temperature and rotation speed. Surface simulation performance improvements, an Appearance interface redesign, adjustable planetary rim lighting, and more round out this minor update.
The feature image shows a laser heating up the Earth to speed up the cloud rotation and our new atmosphere opacity property.
#### Simulated Cloud Speed
Cloud speed is now simulated based on an object’s temperature and rotation speed as part of our continued incremental improvements of clouds. Check out our new Clouds guide for a tour through our entire cloud system: Guides > Science > Clouds.
#### Easier Appearance Editing
As part of our continued user interface improvements, the Properties > Appearance tab has been redesigned to combine color customization with visibility and other options to make changing the appearance of an object even easier.
#### More Highlights
• Surface Simulation has been improved to update only objects that are changing each frame, improving performance for simulations with many objects
• Created simulation of NASA’s Juno spacecraft flyby of Ganymede in June 2021: Open > Historical > Juno Flyby of Ganymede in 2021
• Star glows correctly fade as you get farther away from them when Object Visibility is set to Realistic again
• Added Atmosphere Density and Speed of Sound Properties to Properties > Surface > Atmosphere
• Rim Lighting can now be adjusted under View > Object Visibility > Rim Lighting
Check out the full list of What’s New in Update 27.1
Please report any issues on our Steam forum, on Discord, or in-game via Home > Send Feedback.
## Universe Sandbox Roadmap: 2021.5 & Beyond
Last year we worked hard to make Universe Sandbox even better, and we’ve continued that work into this year with two updates already. We wanted to share some of the exciting plans we have for upcoming features including terrain manipulation, an expanded materials system, and more, that we have for the rest of the year (and a bit beyond).
## What did we do in 2020?
• I Like My Heat Tidal | Update 25
• Rewrote the tidal heating model and temperature calculations
• Major graphics performance improvements which allowed us to improve surface simulation
• Light It Up | Update 25.1
• Added randomizable city lights to all planets
• Improved how multiple light sources interact, particularly with atmospheres
• Even More Colors in Space | Update 25.2
• Even more custom color options for clouds, city lights, asteroids, and galaxies
• Improved energy calculations even more with laser, explosions, and impacts
• Reimagined Experience – Unified VR & Desktop | Update 26
• We brought the full desktop experience into VR
• Reimagined user interface with a customizable bottom bar
• Better looking collision fragments, rocky planets, and liquid water
• Star Fusion & the Brown Dwarves | Update 26.1
• Smoother simulated transitions between gas giant, brown dwarf, and full fledged star
• More customizable colors and laser improvements, including the “Wave Maker” laser
• Ending 2020 with a Bang | Update 26.2
• Objects retain lasting surface damage with craters and scorched areas
• More realistic explosions with better simulated gas particles
View our “What’s New” for a chronological list of changes.
## What have we already done in 2021?
This year has already seen two notable updates that have included many fixes to our core simulation, a few new features, and improvements to the Universe Sandbox experience.
• Splish, Splash, Filling a Bath | Update 26.3
• Water fills from lowest points and flows to lowest points
• Smoother, better performing, and explodi-er collisions
• Fast & Flurrious | Update 27
• More realistic snow simulation and better looking random rocky planets
• New Render Scale settings to improve performance for non-gaming hardware
### New team members
Brent was hired in March as our new Science Writer & Community Advocate. Brent has a PhD in Physics and will be writing about all of the awesome science and simulations that Universe Sandbox can do (including writing this post – Hi Everybody 😃 ).
We also recently hired Brian as our new User Interface Engineer. Brian joins us after working as a frontend web engineer, and is excited to help make Universe Sandbox the best possible experience for exploring space and science. He’ll be implementing some of the many, many user interface designs we’ve been working on.
## What’re we currently working on?
You’ve seen some of what we’ve been up to this year from the updates we’ve put out already, but there’s still a lot of new features in development. While we would love to get all of these new features and improvements out by the end of the year, there may be new priorities or unexpected difficulties that pop up, which make it hard to predict exactly when we will have these features ready.
### Surface Grids & Planet Appearances
The simulation of temperature, ice, water, vapor, and other properties across the surface of an object, a feature we call Surface Grids, has become a fundamental part of Universe Sandbox. Read more about Surface Grids in our DevLog and ScienceLog series. Yet even with all of the improvements we made to them last year, there is still so much that can be done.
• Planet Surface Editing
• Last year we did a bit of work designing and determining the best way to add tools that will allow you to directly edit the surface properties of an object, and we’re hoping to implement those tools this year.
• You’ll be able to change your planet’s elevation, water level, temperature, and more with single point precision on the surface grid.
• Improved Atmosphere Simulation
• With many improvements in performance in Update 26.3, we are looking into extending the atmosphere of planets into the surface grids simulation.
• Better simulated atmospheres will allow for more realistic climate and cloud simulation.
• Fiery Collisions
• We plan to build upon the collision improvements introduced in Update 26.3 with more realistic post-impact shockwaves, fragmentation, and grazing impacts.
### Enhanced User Interface
Our goal for each update to Universe Sandbox is to improve your experience. This might involve updating our user interface, improving our guides for new users, increasing performance, and other small tweaks.
• Towards a Forward-Thinking User Interface
• We’ve taken a responsive design approach that lets the interface work universally across desktop, mobile, and VR, with future gamepad support in mind.
• We already redesigned the Add panel in Update 27 and unified the styling in preparation for future tools and features, like the surface enhancement tools we mentioned above.
• The Most Enjoyable Experience
• In addition to the guide rails added in Update 27, which help walk users through our tutorials, we want to update and add more guides and science simulations for new and old users alike.
## …and Beyond?
In addition to all of these plans, we also have some longer-term goals, and though many of these may not happen for a while, we wanted to share with you.
### Multiple & Mixed Materials
We want to give you access to more basic materials, like methane, CO2 (carbon dioxide) and O2 (oxygen), to our current system. Not only will a more robust material system allow you to further customize your planets, moons, and asteroids, but it will also make future features, like Life Simulation and realistic atmosphere simulation, possible.
• A Material World
• More materials will allow for more planet customization and allow more versatility in terraforming planets. They will also allow you to customize your planet’s atmosphere — will it be suitable for life, or mostly composed of methane like Titan?
• Multiple materials will be critical for developing life simulation in the future. Life is made up of complex materials after all.
• Unified Materials System
• Our goal is to track all the different materials, and the state that they’re in, in each point in a surface grid on planet surfaces.
• Keeping track of all of these materials and their properties will allow for more realistic simulation calculations in general, like the phase state (solid, liquid, gas) changes for all of our new materials.
### Rigid Body Physics
Currently our physics engine isn't optimized for the rigid body physics required for proper human-scale object interactions (like dice and spacecraft). We're working to change that and have been researching different methods of implementing a whole new suite of physics into Universe Sandbox.
• DOTS-based Unity Physics
• Unity has introduced a promising new physics engine that we’ve been spending time researching. Based on the Unity DOTS (Data Oriented Technology Stack) framework, the new Unity Physics engine promises significantly better performance.
• This DOTS-based Unity Physics would not only enhance our rigid body physics, but would also be used to improve the performance of our gravity simulation.
• To Infinity and Beyond
• Much like Surface Grids, rigid body physics lays the foundation for many simulation features to come, such as spacecraft, thrusters, and megastructures.
• Having more bodies that follow these new physics will also allow you to better simulate how larger bodies form as smaller objects clump together. Make planets out of asteroids, or pigeons, it’s up to you.
### Mobile
We’re still actively working on a mobile release for both iOS and Android, though there’s still many design questions to be answered.
• The Full Experience on Mobile
• When we bring Universe Sandbox to mobile, we plan to bring the full desktop version to mobile so you’ll have access to all the same features, just in your pocket!
• A smaller screen means our user interface will need to be even more innovative, and we’re working hard to make sure that the mobile experience is just as fun to play as it is on a desktop.
## And More
There’s no shortage of additions and improvements to be made when you’re simulating the Universe! We’re also thinking about future features like Lagrange points, atmospheric scattering, and life simulation, and we’re excited to share them all with you!
## Hiring
1. Will you be able to export (and import) height maps for celestial objects?
We plan to support importing and exporting of surface maps, but we don’t yet know when that will happen.
Much of the surface data (like elevation and color) that we use for known solar system objects is based on data available online. You can find similar maps to the ones we use online, like this set of Moon elevation and color maps (though these are not exactly what we use).
2. What materials are you planning on adding?
Development of adding additional materials to the simulation is still in flux. Our goal is to add all the basic materials that are necessary for simulating basic life like carbon dioxide and oxygen, as well as some others like methane and sulfur, which are found in large quantities on other planets and moons in the solar system.
3. Will we be able to make planets out of any materials?
Great question. When we add additional materials to Universe Sandbox, we are planning to make them available as part of an object’s Composition properties, similar to our current system, so you’ll be able to make planets out of any material we add.
4. Will we be able to make megastructures like Death Stars?
We think this would be really fun, and actually have this on our Roadmap already. The general idea would involve a separate system to build megastructures like Dyson Spheres (or even Death Stars). However we still have a lot to do before we start work on that.
5. Are we adding sentient life as part of our life simulation?
That is a cool idea. We’re still a ways away from even the simple life simulation we are hoping to implement though. The first version of life simulation will likely be a surface map, similar to our other surface grid maps, that would show “amount of biomass” or “vegetation,” but we don’t have all of the details worked out yet.
6. Any plans to make the game run in Vulkan?
While we don’t have any plans right now to support Vulkan, an API for graphics rendering and computing (different from what we currently use), for the desktop version of Universe Sandbox, that doesn’t mean we won’t look into it in the future.
7. Will we be adding continental drift or volcanoes?
Right now we don’t have any plans to add either of these features, though we have thought about adding very simple volcanoes in the past. There are a lot of improvements and fixes we need to make to the core features before we think about adding in complicated features like that.
8. Is there a way to beta test new features?
Yes! We will occasionally release Community Test builds to get feedback on new features and help us find bugs. We announce when these go live in a post on the Steam Forums and on our Discord server.
9. What devices will Universe Sandbox mobile be available on and will it be free for those who have already bought Universe Sandbox?
Universe Sandbox will be available on both iOS and Android devices. Minimum device requirements have not been finalized yet.
We still do not have a release date or official price for mobile, but we do plan on it being a one-time paid app with no ads or in-app purchases. The desktop and the mobile versions will be sold on separate stores and will be separate purchases. If you want Universe Sandbox on your mobile device you will need to purchase from the mobile store, even if you already own Universe Sandbox on desktop.
10. Are we still working on Smoothed-particle Hydrodynamics (SPH)?
We have shifted our focus on fixing problems and bugs with the fundamental systems and simulation before resuming work on awesome new features like SPH. That said, we still want to do SPH, but are actively working on improvements to the current collision system that we discussed in the Roadmap.
Updated July 15, 2021
## Hiring a Physics Engineer – Planetary Destruction
Universe Sandbox is a space and gravity simulator masquerading as a video game with over 800,000 unit sales and an overwhelmingly positive 95% rating on Steam.
Giant Army is looking for a creative and highly technical physics engineer to help implement and improve our real-time physics simulation and collision system.
We are a close-knit multidisciplinary team of astrophysicists, engineers, graphics developers, and designers that highly values individual contributions and collaborative problem-solving. The company name, Giant Army, was inspired by the concept of “standing on the shoulders of giants.”
Our mission is to reveal the awesomeness of the universe and the fragility of our planet through real-time interaction, creation, and destruction of a realistic, science-based simulation.
Help us solve the complex challenge of simulating interactions and collisions of objects:
• Across all scales (human-scale, asteroids, moons, planets, and stars)
• At a wide range of user-controllable time speeds (from super slow to super fast)
• While being enjoyable on a wide variety of hardware (from gaming PC to mobile devices)
What would it actually look like if you collided the Moon with the Earth, the Earth with Jupiter, Jupiter with another gas giant, or spawned a black hole inside of the Moon (and also all of those at the same time)?
You might be the perfect candidate if you enjoy physics, programming, and celestial destruction, constrained by consumer hardware and the need for realism.
This is a full-time, remote position working with a 100% remote team.
• Develop improvements and innovations to our custom physics engine
• Focus on realism and scientific accuracy while delivering creative solutions to allow for high performance on consumer hardware
• Proactively observe the simulation, notice issues and areas for improvement, propose solutions, and take ownership to see the implementation through to release
• Build upon, innovate, and improve our existing simulations of:
• Similar-sized planetary collisions (like Earth colliding with Earth)
• Ultra-high-speed collisions (objects colliding at fractions of the speed of light)
• Objects breaking apart from high rotational speeds
• Deformation of objects from tidal forces
• Exploding objects
• And other physics simulation features you’re excited by
• Stay current on Unity’s tech and trends. We’re working on a major rewrite to be more closely integrated with Unity’s DOTS Physics
### Qualifications
• Professional or personal programming projects showing your talent and love of physics
• Experience with C# & Unity DOTS (although these specifics could be learned if you’re a great coder and love physics)
• Ability to write and maintain your own physics code (not just implement an existing physics engine)
• Strong attention to detail and a love of polish & iteration
• Passion for science, astronomy, and real-time interactive simulations
• Love of fantastical what-if scenarios: what-if.xkcd.com (note citation #6 on 148)
• Ability to see things from our user’s perspective
• Appreciation of video games
### Company Overview
Giant Army is a profitable company wholly owned by Universe Sandbox’s original creator; we have no publishers, marketing department, or external stakeholders to derail our vision. We are a decentralized, remote team founded in Seattle, Washington, USA, with members across the United States, Germany, Denmark, and Australia.
Team members enjoy a flexible, collaborative environment that values work-life balance. We are independently published and release updates on our own (relaxed) schedule.
Giant Army provides generous paid time off, new hardware/software reimbursements, healthcare, and other benefits.
We pursue features that get us excited about science. We strive to create an accessible experience that can’t be found anywhere else.
As a fully remote team since 2011, we rely on Google Workspace (Gmail, Calendar, Docs, Spreadsheets, Meet), Slack, Groove, GitHub, ZenHub, Unity, WordPress, and Notion.
We believe science and video games are for everyone, regardless of identity, and we’re committed to making an inclusive workplace. We encourage anyone who shares our passion for space to apply.
### Product Overview
Universe Sandbox is a physics-based space simulator that allows you to create, destroy, and interact on an unimaginable scale. Experiment with gravity, climate, and collisions to reveal the beauty of our universe and the fragility of our planet.
It’s more than a game; it’s a way of experiencing and learning about reality in a way that’s never been done before.
Universe Sandbox is available on Windows, Mac, Linux, and VR with mobile in development and future platforms planned. We’ve sold over 800,000 copies and have an “Overwhelmingly Positive” rating on Steam with 95% positive user reviews.
If we don’t have an active job opening that fits your skill set, but working on Universe Sandbox is your dream job, send us an email telling us why and we’ll at least send you back a reply.
### How to Apply
Fill out this application
## Simulating Snow | ScienceLog #4
It turns out it’s a lot harder to simulate snow, or any weather for that matter, than it is to simulate regular surface water. In Universe Sandbox the phases of water on the surface of an object depend just on the sea level temperature, and we even make sure to conserve the total surface water mass in all of its phases, which you can find under Properties > Surface > Total Water Mass. However, because snow depends on so many other conditions we don’t keep track of it in the same way.
Like all phases of water, snow is also tracked with surface grids, which we discussed in our first ScienceLog. We simulate snow by checking if each point on the surface grid has the right elevation, amount of water vapor, and surface temperature, needed for snow to form. If the point meets all of our checks, we know snow needs to be added to that point. This is much more complex than how we simulate ice, which only depends on whether the sea level temperature of a point on the surface grid is below freezing.
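As a rough illustration of that per-point check, here is a hypothetical Python sketch. It is not the actual Universe Sandbox implementation (which runs on its surface-grid system), and the threshold values and field names are invented for the example.

```python
# Hypothetical per-grid-point snow check; field names and thresholds are
# invented for illustration and are not Universe Sandbox code.

FREEZING_POINT_K = 273.15

def should_add_snow(point, min_elevation_m=0.0, min_water_vapor_kg=1.0):
    """Return True if this surface-grid point passes all of the snow checks."""
    cold_enough = point["surface_temperature_k"] < FREEZING_POINT_K
    high_enough = point["elevation_m"] >= min_elevation_m
    moist_enough = point["water_vapor_kg"] >= min_water_vapor_kg
    return cold_enough and high_enough and moist_enough

surface_grid = [
    {"surface_temperature_k": 255.0, "elevation_m": 1200.0, "water_vapor_kg": 4.0},
    {"surface_temperature_k": 290.0, "elevation_m": 10.0, "water_vapor_kg": 9.0},
]

snowy = [p for p in surface_grid if should_add_snow(p)]
print(f"{len(snowy)} of {len(surface_grid)} grid points get snow this step")
```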
As of Update 27 we’re also keeping a record of where snow is being formed. One thing this allows us to do now is add and remove snow more realistically. This is a big improvement over our previous snow simulation where snow would just appear and disappear instantaneously depending on the properties of each point on the surface grid at any given time. We’re also doing a better job of simulating snow and ice on random planets by stabilizing the water phases and then running our snow checks when the planet is created.
### At the Speed of Snow
Now, you may be wondering why we don’t just simulate snow with the rest of the phases of water. To do that, we would need to simulate the entire water cycle, which we just can’t do accurately at simulation speeds faster than about one second per second on a desktop computer (yet). Even organizations like NASA need supercomputers to accurately simulate weather! This limitation comes from how fast we can allow water to flow through the points on a surface grid and maintain a stable surface simulation. In the water cycle, the phases change much faster than we can simulate the flow rate of water. This means we can’t keep track of which points should have which phases. For simulating the phase changes of water on the surface of a planet, like liquid water to ice, we aren’t limited because the flow rate of water is faster than the phase changes of this surface water. However, as consumer computers get faster, our snow simulation has the potential to become more realistic. So while we may not have personal supercomputers anytime soon, you can still check out how much better snow looks by checking out the Tidally Locked Earth or Mars Collisions Sims.
This blog post is part of our ongoing series of ScienceLog articles, intended to share the science behind some of Universe Sandbox’s most interesting features. If you would love to learn about the real-life science powering our simulator, please stay tuned and let us know what you would like to read about next.
## Fast & Flurrious | Update 27
Run Steam to download Update 27, or buy Universe Sandbox via our website or the Steam Store.
### Update 27
Snow simulation improvements, more detailed temperature maps, better performance, new cloud visuals, and more are rolled up into Update 27.
The featured image shows what would happen if the Earth was tidally locked, where one side of the planet always faces the Sun.
#### Superior Snow Simulation
We’re now keeping track of snow so that it falls and melts more realistically. Previously it disappeared immediately if the water vapor got too low. There’s also more accurate snow and ice formations on newly-created random rocky planets!
#### Taking the (Surface) Temperature
Temperature maps have gotten a facelift with the addition of temperature calculations adjusted by elevation. Previously temperature maps were only shown at sea level, even if the elevation data was above sea level.
#### Downscaling to Benefit Non-gaming Hardware
Render Scale has been added as a new graphics setting. This allows you to run the simulation at a lower resolution while keeping the interface looking crisp. The automatic settings have also been updated for improved performance on lower-end hardware.
#### More Highlights
• More customization for cloud visuals on rocky planets, including adjustable coverage and opacity
• We’ve added new shapes to our Human Scale Objects
• …and Human Scale Objects can now have custom colors
• View > Object Visibility has been added so that you can see all objects that would normally be impossible to see at realistic scales. You can also really blow them up with advanced settings!
• The Add Panel has been restyled to accommodate smaller screens and to prepare the panel for future plans
• Heating from stars and supernovae is now smoother at high simulation speeds for all spinning objects
• Our guide system now provides better assistance to new users with Guide Rails
• Curved trails are now rendered more precisely at high simulation speeds
• Dyslexia-friendly font options have been added under Settings > General > Accessibility
Check out the full list of What’s New in Update 27
Please report any issues on our Steam forum, on Discord, or in-game via Home > Send Feedback.
## The End of the World: Slower Than You Expected | ScienceLog #3
Sure, the Sun’s pretty useful, we guess. It feeds Earth’s plant life, keeps us warm, and helps people see where they’re going when they walk around outside. If the Sun suddenly disappeared from the Solar System (which you can do with the click of a button in Universe Sandbox!), we’d be in big trouble.* In fact, right now you’re probably imagining the desolate, frozen landscape that our planet would become without its Sun. But this apocalypse wouldn’t happen quite as fast as you probably think:
If the Sun disappeared, it would take
over a century for the Earth’s oceans to completely freeze solid!
Universe Sandbox lets you perform this kind of catastrophic experiment from the safety and comfort of your own home by simulating three phases of water (solid, liquid, and gas), and how they react to the changing environment. As a planet cools, its surface water will freeze into ice. Heat that planet back up with a laser, and the ice will melt and even vaporize into gas.
But you might have noticed that some of these phase changes take longer than you expect them to. If you’ve found yourself wondering “Why is it taking so long for the oceans to freeze?” or “I’ve been waiting for ages for the ice caps to melt, what’s going on??”, read on to learn more about the physics (and speed) of phase changes.
### Energy Flow… Again
In ScienceLog #1, we explained how the flow of energy into and out of a planet will affect that planet’s temperature. In fact, the flow of energy also affects the phase of water.
As you know if you’ve ever boiled a pot of water, you need to add energy to turn water from a liquid to a gas. The opposite phase change— condensing water vapor into a liquid— involves the release of energy into the cooler environment surrounding the water. Similarly, energy needs to flow into a block of ice to melt it into water, but energy must flow out of a pool of water in order to turn it into ice. We can figure out how fast a phase change is occurring based on the speed at which energy is flowing into or out of the water.
The key point here is that phase changes are not instantaneous. You’ve probably already noticed that, if you pay attention to phase changes in your daily life: It can take a few days for snow to melt after a big blizzard, even if the temperature rises above freezing. Even ice doesn’t melt instantly in your drink on a hot day. And of course, we all know that water never boils as fast as we want it to, even if we set it on high heat.
The speed of a phase change of surface water in Universe Sandbox will depend on the temperature of the surface, the freezing or boiling point of water, and the mass of water that you’re trying to change. This last factor, the mass of the water, is probably the source of most of the confusion about this issue in Universe Sandbox. Since we’re all used to seeing phase changes in our everyday lives, we have some intuition for how fast we think they should happen. But the masses of the Earth’s ice caps or oceans are much, much larger than an ice cube or a kettle of water, and this significantly slows down the rate of boiling, melting, and any other phase change.
The heat from the too-close Sun is melting the Earth’s ice quickly, as you can see in the Total Ice Mass graph on the left, but not instantly.
That’s why you might have to wait a while for your simulated planet’s oceans to freeze or boil (depending on what you’ve done to that poor planet). Of course, if you get impatient, you can always use the new Stabilize Phases button in the Surface tab to instantly change the surface water to the correct phase based on the local temperature. What a convenient apocalypse!
…What’s that? You still don’t believe us that it would take a century to freeze the Earth’s oceans?
…You want some proof in the form of equations and hard numbers?
…All right, you asked for it. If you’re still with us, read on for the juicy, math-y details:
### Bonus Math: How Long Does It Take to Freeze the Earth’s Oceans?
We’re going to put our money where our math is and walk through an example. Suppose we want to freeze all the water on Earth into ice. We could do this by deleting the Sun in the Solar System, although then we’d have to wait for the Earth to slowly cool down. If we’re impatient, we can skip ahead by just setting the Earth’s Average Surface Temperature to the lowest possible temperature: -273°C, or zero Kelvin (also known as “absolute zero”).
If you try this in Universe Sandbox, you’ll notice that after you change the temperature, the oceans are still made of liquid water. How long should we expect it to take to freeze all that water into ice: Days? Weeks? Months?
Let’s start by asking how much water we’re trying to freeze. Earth’s oceans have a mass of roughly 1.4 thousand billion billion kilograms. In scientific notation, that’s 1.4 x 10^21 kg of water. To turn the liquid water into a solid, we need to remove energy from it. Since the water hasn’t frozen yet, its temperature is sitting at the freezing point, around 273 Kelvin. Since the Earth itself is at zero Kelvin, the heat energy in the water will flow into the Earth (and then out into space).
Our next question is: How much energy needs to flow out of the water in order to freeze it? To answer this question, we use a property of water called the Heat of Fusion. This property represents how much energy, in Joules, is required to melt one kilogram of ice into water, or, conversely, how much energy must be removed to freeze one kilogram of water into ice. You can look up the Heat of Fusion for many different materials online— For water, it’s about 3.3 x 10^5 Joules per kilogram.
This means that the amount of energy that must be removed from Earth’s oceans to freeze them entirely into water is:
\text{Energy} = \text{Mass} \times \text{Heat of Fusion} = (1.4 \times 10^{21} \text{kg}) \times (3.3 \times 10^5 \frac{\text{J}}{\text{kg}}) = 4.62 \times 10^{26} \text{J}
That’s roughly the amount of energy that would be released by two billion Tsar Bomba hydrogen bombs, the most powerful nuclear weapon ever created.
Now we need to know the speed at which energy is flowing out of the water, and into its zero Kelvin environment. For this, we can use the Stefan-Boltzman law, which says that an object with temperature T will lose energy through its surface at a rate of
\text{Rate} = \sigma T^{4}A
where σ, the Greek letter “sigma”, represents the Stefan-Boltzmann constant, and the A is the surface area of the object.
The surface area of the Earth is about 5.1 x 10^14 m^2, so the rate at which the oceans are losing energy is roughly
\text{Rate} = (5.7 \times 10^{-8} \frac{\text{J}}{\text{s m}^{2}~\text{K}^{4}}) \times (273~\text{K})^{4} \times (5.1 \times 10^{14}~\text{m}^{2}) = 1.61 \times 10^{17}~\text{J/s}
We can actually double-check this number in the game: First, put the Earth in an empty simulation. Then set Earth’s Average Surface Temperature to 273 Kelvin and look at the Energy Radiation Rate property. As expected, it shows that this Earth is losing energy at a rate of 1.61 x 10^17 W (the Watts unit is equivalent to Joules per second).
Back to our zero-Kelvin Earth: you probably know that only about 70% of our planet’s surface is covered in water. Since we’re only interested in how fast the oceans are losing heat, we should use a reduced rate of
\text{Rate} = 1.61 \times 10^{17}~\text{J/s} * 0.70 = 1.13 \times 10^{17}~\text{J/s}
We now know how much energy we need the oceans to lose in order to freeze them all, and how fast they are losing energy to their surroundings. Now we can easily calculate the time it will take for the oceans to lose the required amount of energy:
\text{Time} = \text{Energy / Rate} = (4.62 \times 10^{26}~\text{J}) / (1.13 \times 10^{17}~\text{J/s}) = 4.09 \times 10^{9}~\text{s}
There are about 3.15 x 10^7 seconds in a year, so that’s
\text{Time} = (4.09 \times 10^{9}~\text{s}) / (3.15 \times 10^{7}~\text{s/yr}) = 132~\text{yr}
In other words, we estimate that it would take over 100 years(!) for the Earth’s oceans to completely freeze if the Earth’s temperature suddenly dropped to absolute zero. In real life, it would likely take even longer: The layer of ice that would form on top of the oceans would insulate the liquid water underneath, keeping it from freezing for much longer. Geothermal vents at the bottom of the oceans could also keep temperatures cozy for the microorganisms that live down there, possibly for billions of years.
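If you want to check the arithmetic yourself, here is a minimal Python sketch of the same back-of-envelope estimate. It only uses the rounded constants quoted above (so it lands within a couple of years of the 132-year figure) and is not the actual Universe Sandbox simulation code.

```python
# Back-of-envelope estimate: how long would Earth's oceans take to freeze
# once the surface sits at absolute zero? Rounded constants from the text.

SIGMA = 5.67e-8           # Stefan-Boltzmann constant, J / (s m^2 K^4)
OCEAN_MASS = 1.4e21       # kg of water in Earth's oceans
HEAT_OF_FUSION = 3.3e5    # J that must be removed per kg to freeze water
EARTH_AREA = 5.1e14       # m^2, Earth's total surface area
OCEAN_FRACTION = 0.70     # fraction of the surface covered by water
T_FREEZING = 273          # K, the water stays at the freezing point while freezing
SECONDS_PER_YEAR = 3.15e7

energy_to_remove = OCEAN_MASS * HEAT_OF_FUSION                            # Joules
ocean_cooling_rate = SIGMA * T_FREEZING**4 * EARTH_AREA * OCEAN_FRACTION  # J/s
freeze_time_years = energy_to_remove / ocean_cooling_rate / SECONDS_PER_YEAR

print(f"Energy to remove:   {energy_to_remove:.2e} J")
print(f"Ocean cooling rate: {ocean_cooling_rate:.2e} J/s")
print(f"Time to freeze:     {freeze_time_years:.0f} years")
```

Running it prints a freeze time of roughly 130 years, in line with the estimate above; the exact figure depends on how aggressively you round the constants.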
If you’d rather go in the opposite direction and try to boil away Earth’s oceans by heating up the planet, you might find that it takes even more energy! That’s because the energy needed to change water from a liquid to a gas, known as the Heat of Vaporization, is almost ten times its Heat of Fusion. You can explore exactly this scenario in our Welcome | Part 2 guide, which you can find in Home > Guides > Tutorials. You can also learn more about how Universe Sandbox simulates the surface temperatures of objects in the Surface Simulation or the Energy & Heating tutorials.
Based on some comments we’ve received about the assumptions we made for this calculation, we wanted to go into a bit more depth about what they are, and why they may (or may not) be important. You’ll notice that because of these assumptions, the 132 years that we come up with really represents a minimum amount of time it would take for the oceans to freeze solid.
• Space is actually 3 K, not 0 K:
Yes, that’s true, the ambient temperature of empty space is around 2.7 K due to the cosmic microwave background. However, after the Sun disappears, the Earth is still much hotter than the temperature of space, and the difference between 0 K and 2.7 K is small, so this would not notably affect the speed of cooling.
• We didn’t consider atmospheric heating (the greenhouse effect):
No we didn’t, though it is included in the Energy Absorption Rate in Universe Sandbox, so you can go see how large an effect this is by running the simulation for yourself! This effect actually makes the largest difference in the time it would take for the oceans to freeze. This Atmosphere Power is actually based on the infrared emissivity, ε, of Earth, a measure of how efficiently it emits infrared radiation. For Earth this is about 0.78 on a scale of 0-1 (1 being very efficient). The energy radiated back at Earth by the atmosphere is then calculated as:
P_{\rm{atm}} = \frac{\epsilon}{2} \sigma T^4 A
where again σ, the Greek letter “sigma”, represents the Stefan-Boltzmann constant, A is the surface area of the object, and T is the temperature. With ε = 0.78, the factor ε/2 works out to about 39% of the Energy Radiation Rate of Earth. So the cooling rate is significantly slower when you take atmospheric heating into account, adding another 83 years or so to the time it would take for Earth’s oceans to freeze solid.
• We didn’t discuss tidal forces:
True, we did not discuss tidal forces, but they are also computed in Universe Sandbox as part of the Energy Absorption Rate. However, once you get rid of the Sun, the additional heating from tidal forces is over a million times smaller than the Energy Radiation Rate. The main source of tidal heating once the Sun is gone is the Moon, which adds about 2 terawatts of constant power (though it varies very slightly). This additional energy would only delay Earth’s oceans from freezing over for another day or so.
• We didn’t consider geothermal (internal) heating:
Geothermal vents are mentioned in the last sentence of the second-to-last paragraph, but you’re right that we did not include them in our calculations. In fact, that property is not simulated in Universe Sandbox. However, assuming this rate is constant at providing 47 terawatts of power, this is still about 1000 times smaller than the Energy Radiation Rate, and would only add about 20 more days to the total time that it would take to freeze the Oceans.
• Earth is not a perfect blackbody:
That’s also true. In many astronomical fields, celestial objects are approximated as blackbodies not only because it makes the math much easier, but also because we don’t know their exact emission and absorption properties, and it tends to be a pretty accurate approximation. This is why we approximate all of our objects as blackbodies to compute the Energy Radiation Rate in Universe Sandbox. Even though Earth is not a perfect blackbody, the difference between its blackbody temperature and measured temperature is only a few degrees Celsius (not including the greenhouse effect).
Another assumption we made was that the surface temperature of the Earth would be starting at 0 K. As we mentioned, if we don’t start Earth at 0 K, then we need to wait for it to cool off enough that its oceans would start to freeze, making it take even longer for Earth’s oceans to freeze solid. We dynamically compute the temperature of an object and its subsequent Energy Absorption and Radiation Rates in Universe Sandbox each second, so you can actually watch it cool in real time. Computing the exact amount of additional time this cooling would add is quite complicated. But we can run the simulation in Universe Sandbox and find that this will add another 100 years or so to the total time that it will take Earth’s oceans to freeze solid.
Since we do include atmospheric and tidal heating in Universe Sandbox, I encourage you to go and delete the Sun yourselves and see how long it takes for the oceans to freeze solid!
*So how long would you survive after the Sun disappeared? It would depend a lot on where you live and how much food you have on hand. The crops we depend on for food need sunlight to grow, although larger plants like trees can have enough energy stored to last for years without the Sun. Many people would probably freeze to death before they starved. Some people might last for a few months, especially those living in places like Yellowstone or Iceland with a lot of geothermal activity. After a few years, though, the Earth’s surface would grow so cold that the atmosphere would condense, and there’d be nothing left to breathe. It really makes you appreciate our nearest star, doesn’t it?
This blog post is part of our ongoing series of ScienceLog articles, intended to share the science behind some of Universe Sandbox’s most interesting features. If you would love to learn about the real-life science powering our simulator, please stay tuned and let us know what you would like to read about next.
|
2021-09-20 15:01:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33794495463371277, "perplexity": 1560.7723776270063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057039.7/warc/CC-MAIN-20210920131052-20210920161052-00450.warc.gz"}
|
http://modelai.gettysburg.edu/2018/rnntext/handout/index.html
|
## Understanding How Recurrent Neural Networks Model Text
In this assignment you will explore how Recurrent Neural Networks (RNNs) model and generate text. You will work on extending Andrej Karpathy’s min-char-rnn.py implementation of a vanilla RNN, train the RNN on a corpus of Shakespeare’s texts (download here), figure out how the RNN manages to generate “Shakespeare-like” text, and write up your findings in a report.
We are providing you with a model trained on the Shakespeare corpus. The model is available here (alternatively, you can read in the model from a Pickle file), and code to read in the weights is here.
### Part 1 (20%)
Recall from lecture that at each time-step $$t$$, the RNN computes an output layer
$$y^{(t)} = (y^{(t)}_1, y^{(t)}_2, ..., y^{(t)}_k).$$
The estimate of the probability that the character at time-step $$(t+1)$$ is character $$i$$ is then proportional to $$\exp(y^{(t)}_i)$$:
$$P(x^{(t+1)}=i) = \frac{\exp(y^{(t)}_i)}{\sum_{i'=1}^k \exp(y^{(t)}_{i'})}.$$
As discussed in lecture, when sampling from the RNN, we can sample using different “temperatures.” We can sample the character at time-step $$(t+1)$$ by setting the probability of sampling character $$i$$ to be proportional to $$\exp(\alpha y^{(t)}_i)$$:
$$P(x^{(t+1)}=i) = \frac{\exp(\alpha y^{(t)}_i)}{\sum_{i'=1}^k \exp(\alpha y^{(t)}_{i'})}.$$
The quantity $$1/\alpha$$ is called the “temperature.”
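For concreteness, here is one possible Python/NumPy sketch of the sampling step (a sketch only, not the required solution). It assumes you already have the output vector $$y^{(t)}$$ as a NumPy array and simply applies the softmax with the scaling factor $$\alpha$$:

```python
import numpy as np

def sample_char(y, alpha=1.0):
    """Sample a character index from output scores y, using inverse
    temperature alpha (the temperature itself is 1/alpha)."""
    scaled = alpha * np.asarray(y, dtype=float).ravel()
    scaled -= scaled.max()          # subtract the max for numerical stability
    p = np.exp(scaled)
    p /= p.sum()                    # softmax probabilities
    return np.random.choice(len(p), p=p)
```

Large $$\alpha$$ (low temperature) concentrates probability on the most likely characters, while small $$\alpha$$ (high temperature) flattens the distribution and produces more varied, noisier text.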
Write a function to sample text from the model using different temperatures. Try different temperatures, and, in your report, include examples of texts generated using different temperatures. Briefly discuss what difference the temperature makes.
You should either train the RNN yourself (this can take a couple of hours), or use the weights we provided – up to you.
### Part 2 (50%)
Write a function that uses an RNN to complete a string. That is, the RNN should generate text that is a plausible continuation of a given starter string. In order to do that, you will need to compute the hidden activity $$h^{(t)}$$ at the end of the starter string of length $$t$$, and then start generating new text.
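One possible outline of the priming step is sketched below (again, a sketch, not the required solution). It assumes the weight names used in min-char-rnn.py (Wxh, Whh, Why, bh, by) and dictionaries char_to_ix / ix_to_char mapping characters to indices and back:

```python
import numpy as np

def complete_string(starter, n_chars, Wxh, Whh, Why, bh, by,
                    char_to_ix, ix_to_char, alpha=1.0):
    """Feed a (non-empty) starter string through the RNN to build up the
    hidden state, then keep sampling characters to continue the text."""
    vocab_size, hidden_size = Why.shape[0], Whh.shape[0]
    h = np.zeros((hidden_size, 1))
    out = starter

    def step(ix):
        # one forward step of the vanilla RNN
        nonlocal h
        x = np.zeros((vocab_size, 1))
        x[ix] = 1
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        return Why @ h + by                      # output scores y

    for ch in starter:                           # prime h on the starter string
        y = step(char_to_ix[ch])

    for _ in range(n_chars):                     # generate the continuation
        p = np.exp(alpha * y).ravel()
        p /= p.sum()
        ix = np.random.choice(vocab_size, p=p)
        out += ix_to_char[ix]
        y = step(ix)
    return out
```

The important point is that the hidden state $$h^{(t)}$$ is carried through every character of the starter string before any sampling happens, so the generated text conditions on the whole prefix.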
In your report, include five interesting examples of outputs that your network generated using a starter string that you chose.
You should either train the RNN yourself (this can take a couple of hours), or use the weights we provided – up to you.
### Part 3 (30%)
Some examples of texts generated from the model provided to you (at temperature = 1) are here .
In the samples that the RNN generated, it seems that a newline or a space usually follow the colon (i.e., “:” ) character. In the weight data provided to you, identify the specific weights that are responsible for this behaviour by the RNN. In your report, specify the coordinates and values of the weights you identified, and explain how those weights make the RNN generate newlines and spaces after colons. Explain how you figured out which weights are responsible for the behaviour. You are encouraged to write code to get the answer, and to include the scripts you wrote in your report.
### Part 4 (10% bonus)
Identify another interesting behaviour of the RNN, and identify the weights that are responsible for it. Specify the coordinates and the values of the weights, and explain how those weights lead to the behaviour that you identified. To obtain more than 2/10 for the bonus part, the behaviour has to be more interesting than the behaviour in Part 3 (i.e., character A following character B).
### What to submit
The project should be implemented using Python. Your report should be in PDF format. You should use LaTeX to generate the report, and submit the .tex file as well.
Reproducibility counts! We should be able to obtain all the graphs and figures in your report by running your code. Set all the seeds to 0 to enable us to reproduce your outputs.
|
2019-03-23 21:38:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7514134645462036, "perplexity": 887.8860794044782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203021.14/warc/CC-MAIN-20190323201804-20190323223804-00152.warc.gz"}
|
http://mathonline.wikidot.com/the-alternating-series-test-for-alternating-series-of-real-n
|
The Alternating Series Test for Alternating Series of Real Numbers
# The Alternating Series Test for Alternating Series of Real Numbers
Recall from the Alternating Series of Real Numbers page that if $(a_n)_{n=1}^{\infty}$ is a strictly positive sequence then the corresponding alternating sequence is given by $((-1)^{n+1}a_n)_{n=1}^{\infty}$.
We will now look at a very important theorem known as the alternating series test which provides criterion for an alternating series to converge.
Theorem 1: If $(a_n)_{n=1}^{\infty}$ is a decreasing sequence and $\displaystyle{\lim_{n \to \infty} a_n = 0}$ then the alternating series $\displaystyle{\sum_{n=1}^{\infty} (-1)^{n+1} a_n}$ converges.
• Proof: Let $(a_n)_{n=1}^{\infty}$ be a decreasing sequence that converges to $0$ and let $(a_n^*)_{n=1}^{\infty}$ be the corresponding alternating sequence of terms whose general term is defined for all $n \in \mathbb{N}$ by:
(1)
\begin{align} \quad a_n^* = (-1)^{n+1} a_n \end{align}
• Let $(s_n)_{n=1}^{\infty}$ be the corresponding sequence of partial sums for the alternating series. Since $(a_n)_{n=1}^{\infty}$ is a decreasing sequence we note that for all $n \in \mathbb{N}$ we have $a_{2n} > a_{2n+1}$, i.e., $a_{2n}^* + a_{2n+1}^* = -a_{2n} + a_{2n+1} < 0$, and so we see that:
(2)
\begin{align} \quad s_{2n+1} = s_{2n - 1} + a_{2n}^* + a_{2n+1}^* = s_{2n-1} - a_{2n} + a_{2n+1} < s_{2n-1} \end{align}
• This shows that the subsequence of odd partial sums forms a decreasing sequence and so:
(3)
\begin{align} \quad s_1 > s_3 > s_5 > ... > s_{2n-1} > ... \quad (*) \end{align}
• Furthermore we also have that $a_{2n+1} > a_{2n+2}$, i.e., $a_{2n+1}^* + a_{2n+2}^* = a_{2n+1} - a_{2n+2} > 0$, and so we see that:
(4)
\begin{align} \quad s_{2n + 2} = s_{2n} + a_{2n+1}^* + a_{2n+2}^* = s_{2n} + a_{2n+1} - a_{2n+2} > s_{2n} \quad \end{align}
• This shows that the subsequence of even partial sums forms an increasing sequence and so:
(5)
\begin{align} \quad s_2 < s_4 < s_6 < ... < s_{2n} <... \quad (**) \end{align}
• So in total, the sequence of partial sums $(s_n)_{n=1}^{\infty}$ is bounded above by $s_1$ and bounded below by $s_2$ as seen by combining $(*)$ and $(**)$:
(6)
\begin{align} \quad s_2 < s_4 < s_6 < ... < s_{2n} < ... < s_{2n-1} < ... < s_5 < s_3 < s_1 \quad (***) \end{align}
• Moreover, the decreasing subsequence of odd partial sums $(s_{2n-1})_{n=1}^{\infty}$ is bounded below and hence converges to some $L_{odd} \in \mathbb{R}$. Similarly, the increasing subsequence of even partial sums $(s_{2n})_{n=1}^{\infty}$ is bounded above and hence converges to some $L_{even} \in \mathbb{R}$. We are given that $\displaystyle{\lim_{n \to \infty} a_n = 0 }$ and since $a_{2n} = s_{2n-1} - s_{2n}$ we see that:
(7)
\begin{align} \quad \lim_{n \to \infty} a_{2n} = \lim_{n \to \infty} s_{2n-1} - \lim_{n \to \infty} s_{2n} \\ \quad 0 = L_{odd} - L_{even} \end{align}
• So $L_{odd} = L_{even}$. Let $s = L_{odd} = L_{even}$. Then $\lim_{n \to \infty} s_n = s$ since both the odd and even subsequences of partial sums converge to $s$, as seen from the inequality presented by $(***)$. Thus $\displaystyle{\sum_{n=1}^{\infty} (-1)^{n+1} a_n}$ converges. $\blacksquare$
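As a quick numerical illustration (not part of the proof), the alternating harmonic series $\sum_{n=1}^{\infty} (-1)^{n+1}/n$ satisfies the hypotheses of Theorem 1: its odd partial sums decrease, its even partial sums increase, and both close in on the limit $\ln 2 \approx 0.693$. A short Python check:

```python
import math

s = 0.0
odd_sums, even_sums = [], []
for n in range(1, 2001):
    s += (-1) ** (n + 1) / n                  # add the n-th term of the series
    (odd_sums if n % 2 else even_sums).append(s)

print("last odd partial sums: ", [round(v, 6) for v in odd_sums[-3:]])   # decreasing
print("last even partial sums:", [round(v, 6) for v in even_sums[-3:]])  # increasing
print("ln 2 =", round(math.log(2), 6))
```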
|
2018-12-11 08:32:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 7, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998254179954529, "perplexity": 207.76008427386282}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823614.22/warc/CC-MAIN-20181211083052-20181211104552-00533.warc.gz"}
|
https://codereview.stackexchange.com/questions/211604/potential-type-safety-issues-on-object-parsing-function-in-dxl
|
# Potential type-safety issues on object parsing function in DXL
I have inherited the maintenance of a DXL script (for IBM Doors).
In this, I came across various examples of stuff that make me scratch my head. Take this example:
int key = 3
print key
Running this in Doors, results in the expected output: 3.
However, in the documentation, we see that key is also the name of a function.
Object o
for o in numberCache do {
    // must cast the key command.
    int i = (int key numberCache)
    print i
}
While even the reference docs are full of examples declaring stuff like string key, Item key and so on, my concern is about safety.
What bad things I can run into, potentially, leaving the code I'm maintaining as is, knowing that it works, despite the fact it contains several functions using key as a variable name?
void linkFindObjects(string value, Module m, string key_name, Skip objectList)
{
    Object o
    string key, key2, key3
    bool match1 = false
    bool match2 = false
    bool match = false
    for o in m do {
        key = probeAttr_(o, key_name)
        if(key == value)
        {
            put (objectList, o, o)
        }
    }
}
My concern is that in DXL parentheses are not mandatory: as you can see in the example, the cast key(numberCache) can be simplified to key numberCache. When declaring the first three strings, the only thing preventing the whole code from blowing up seems to be the comma. Please ignore for now the fact that the code declares a lot of unused stuff. It is as I got it.
Am I worrying too much?
• Welcome to Code Review! The current question title, which states your concerns about the code, is too general to be useful here. Please edit to the site standard, which is for the title to simply state the task accomplished by the code. Please see How to get the best value out of Code Review: Asking Questions for guidance on writing good question titles. – Toby Speight Jan 16 '19 at 9:41
About "key": key is only defined for skip lists (perm: _x key (Skip)) and for the internal perm HttpHeaderEntryString_ key (). It is not possible to cast a Skip list to a string, the code Skip sk = create(); string s = "hello " sk "!" will not be valid, so there is almost no danger that the interpreter wants to make a string concatenation when you have the code xx = key sk, even when key is defined as a string variable.
So, in this case, you are more or less safe. string key key2 key3 will give you a plain old syntax error that you can easily detect.
|
2020-01-22 00:40:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23612403869628906, "perplexity": 2777.658565440293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00439.warc.gz"}
|
https://eng.libretexts.org/Courses/Oxnard_College/Matlab_and_Octave_Programming_for_STEM_Applications_(Smith)/03%3A_for_Loops_and_Basic_2D_Plots/3.06%3A_Sequences
|
# 3.6: Sequences
Now that we have the ability to write loops, we can use them to explore sequences and series, which are useful for describing and analyzing systems that change over time.
In mathematics, a sequence is a set of numbers that corresponds to the positive integers. The numbers in the sequence are called elements. In math notation, the elements are denoted with subscripts, so the first element of the series $$A$$ is $$A_1$$, followed by $$A_2$$, and so on.
A for loop is a natural way to compute the elements of a sequence. As an example, in a geometric sequence, each element is a constant multiple of the previous element. As a more specific example, let’s look at the sequence with $$A_1 = 1$$ and the relationship $$A_{i+1} = A_i/2$$, for all $$i$$. In other words, each element is half as big as the one before it.
The following loop computes the first 10 elements of $$A$$:
a = 1
for i=2:10
    a = a/2
end
The first line initializes the variable a with the first element of the sequence, $$A_1$$. Each time through the loop, we find the next value of a by dividing the previous value by 2, and assign the result back to a. At the end, a contains the 10th element.
The other elements are displayed on the screen, but they are not saved in a variable. Later, we’ll see how to save the elements of a sequence in a vector.
This loop computes the sequence recurrently, which means that each element depends on the previous one. For this sequence, it’s also possible to compute the $$i$$th element directly, as a function of $$i$$, without using the previous element.
In math notation, $$A_i = A_1 r^{i-1}$$, where $$r$$ is the ratio of successive elements. In the previous example, $$A_{i+1} = A_i/2$$, so $$r = 1/2$$.
##### Exercise $$3.3$$
Write a script named sequence.m that uses a loop to compute elements of $$A$$ .
This page titled 3.6: Sequences is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Allen B. Downey (Green Tea Press) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
|
2023-03-26 15:04:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8908870816230774, "perplexity": 255.21361014002932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945473.69/warc/CC-MAIN-20230326142035-20230326172035-00584.warc.gz"}
|
https://codegolf.stackexchange.com/questions/241136/is-this-continuous-terrain
|
Is this continuous terrain?
Very related
You're given a piece of ASCII art representing a piece of land, like so:
/‾\
__/ ‾\_
\_/‾\
\
Since an overline (‾) is not ASCII, you can use a ~ or - instead.
Your challenge is to determine if it is connected by the lines of the characters. For example, the above can be traced like so:
To clarify the connections:
• _ can only connect on the bottom on either side
• ‾ (or ~ or -) can only connect on the top on either side
• / can only connect on the top right and bottom left
• \ can only connect on the top left and bottom right
It doesn't matter where the connections start and end as long as they go all the way across. Note that one line's bottom is the next line's top, so stuff like this is allowed:
_
‾
\
\
_
/
\
‾
You can assume input will only contain those characters plus spaces and newlines, and will only contain one non-space character per column.
Input can be taken as ASCII art, an array of rows, a character matrix, etc.
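A non-golfed reference implementation may make the rules easier to follow. The sketch below is my reading of the spec in Python (using ~ or - for the overline and assuming each column contains exactly one terrain character); it is illustrative rather than an official checker:

```python
def is_continuous(rows):
    """Return True if the terrain drawn in `rows` (top to bottom) is
    connected from column to column under the rules above."""
    width = max(len(r) for r in rows)
    rows = [r.ljust(width) for r in rows]       # pad rows to equal width

    def edges(col):
        # Levels are counted at row boundaries: a character in row r has
        # its top edge at level r and its bottom edge at level r + 1.
        # Returns (left_level, right_level) for the terrain character.
        for r in range(len(rows)):
            c = rows[r][col]
            if c == '_':                # connects along its bottom edge
                return r + 1, r + 1
            if c in '~-‾':              # overline: connects along its top edge
                return r, r
            if c == '/':                # bottom-left up to top-right
                return r + 1, r
            if c == '\\':               # top-left down to bottom-right
                return r, r + 1
        return None                     # empty column (not expected per the spec)

    levels = [edges(col) for col in range(width)]
    return all(levels[i][1] == levels[i + 1][0] for i in range(width - 1))
```

For example, is_continuous([' _', '/']) comes back truthy, matching the allowed _ over / example above.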
Testcases
Separated by double newline.
Truthy:
\
\
\_
\ /\/\_
‾ \
/\
\/
\_
‾\
/‾\
__/ ‾\_
\_/‾\
\
Falsy:
/
\
//
‾‾
/ \
‾
_
\____/
/‾‾‾\
Python 3.8, 85 bytes
Input is taken as a list of lines, with ‾ replaced with dashes (-).
lambda L,n=0:len({(n:=n+2-ord(c:=max(x))//2%5)-x.index(c)+(c<'0')for x in zip(*L)})<2
Try it online!
J, 43 39 bytes
[:*/2=/@(I.@:*-1#.3 2&=+1&=)\' -\/'i.|:
Try it online!
• ' -\/'i.|: Transpose and convert characters to integers.
• 2...(...)\ For each pair of rows...
• I.@:* Get the indices of the non-zero (non-space) element...
• -1#.3 2&=+1&= And adjust it as follows:
• 1&= If it's a - in either position, subtract 1.
• 3 2&= If it's a / on the left or a \ on the right, subtract 1.
• =/@ Are the two row indices equal after this adjustment?
• [:*/ Are they all equal?
• I think ' -\/'i.|: works for -2
– ovs
Jan 16 at 22:49
• I actually just went back to this and found that as well as some other golfs... Jan 16 at 22:55
Retina, 77 bytes
m/(?<=^|¶)([\\_]([_\/]|.*¶ [-\\])|[\/-][-\\]| [_\/].*¶[\/-])/+^.
^\s*\S\s*$

Try it online! Link includes test suite that takes double-spaced test cases and converts the overline character to - which the script actually uses.

Explanation:

/(?<=^|¶)([\\_]([_\/]|.*¶ [-\\])|[\/-][-\\]| [_\/].*¶[\/-])/+

Repeat while the first two columns have a valid join: either a \ or _ followed by either a _ or / or by a - or \ on the next line, or a / or - followed by a - or \, or a / or - followed by a _ or / on the previous line, ...

m^.

... delete the first character of each line.

^\s*\S\s*$
Check that only one column is left.
Charcoal, 37 bytes
WS⊞υι⬤Φθκ¬↨EE²⭆υ§ν⁺κλ⁺⌕λ⌈λ№⁺_§\/μ⌈λ±¹
Try it online! Link is to verbose version of code. Takes input as a rectangle of newline-terminated lines and outputs a Charcoal boolean, i.e. '-' for continuous, nothing if not. Assumes that any character not one of \_/ or space is the overline character. Explanation:
WS⊞υι
Input the land.
⬤Φθκ
Loop over all the columns except one.
E²⭆υ§ν⁺κλ
Get the current and the next column.
E...⁺⌕λ⌈λ№⁺_§\/μ⌈λ
Find the positions of the non-space characters, but add 1 if the character is a _, or if it is either \ or / depending on the column.
¬↨...±¹
Ensure the positions are equal.
BQN, 35 bytesSBCS
Uses ¯ MACRON instead of the overline.
⌈˝{≡´¯1‿1↓¨((/˘𝕨=𝕩)-𝕨∊'¯'∾⊢)¨"/\"}⍉
Run online!
┌─
╵"\_ # Input character matrix - m
\ /\/\_
¯ \"
┘
"\_\¯/\/\_\" # collapsed - ⌈˝m
⟨ 0 0 1 2 1 1 1 1 1 2 ⟩ # the depth of each column - /˘(⌈˝m)=(⍉m)
⟨ 0 0 0 1 1 0 1 0 0 0 ⟩ # locations of '¯' and '/' - (⌈˝m)∊'¯'∾'/'
⟨ 0 0 1 1 0 1 0 1 1 2 ⟩ # depth - ^locations
⟨ 0 0 1 1 0 1 0 1 1 ⟩ # the last element dropped - ¯1↓...
⟨ 1 0 1 1 0 1 0 1 0 1 ⟩ # locations of '¯' and '\' - (⌈˝m)∊'¯'∾'\'
⟨ ¯1 0 0 1 1 0 1 0 1 1 ⟩ # depth - ^locations
⟨ 0 0 1 1 0 1 0 1 1 ⟩ # the first element dropped - 1↓...
• Nice. How do you like BQN compared with APL so far? Jan 17 at 19:48
• @Jonah for golfing purposes it is quite nice (as long as you don't need base conversion). Nothing · (similar to J's cap [:) makes writing tacit functions easier and there are a few additional useful operators, like Catch, Under, Before, etc. The somewhat different selection of primitives for sequences takes a while to get used to, but Occurence Count has already provided me with multiple very short answers in return ;). For non golfed code I tend to notice that Dyalog actually did a decent job at making APL run fast, there is definitely a gap there.
– ovs
Jan 17 at 21:13
Jelly, 28 bytes
“-\/_”iⱮⱮZTḢ+S+3BḊƲƲ€FḊs2E€Ạ
Try it online!
|
2022-05-22 03:38:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4450136125087738, "perplexity": 2363.6374618528366}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00075.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-4-4-4-translations-of-conics-4-4-exercises-page-349/94b
|
## Algebra and Trigonometry 10th Edition
$\dfrac{(x-5)^2}{16}+\dfrac{(y-6)^2}{25}=1$
The standard form of the equation of the ellipse when the major axis is horizontal can be expressed as $\dfrac{(x-h)^2}{a^2}+\dfrac{(y-k)^2}{b^2}=1$, where $(h,k)$ is the center, $2a$ is the major axis length, and $2b$ is the minor axis length.
The standard form of the equation of the ellipse when the major axis is vertical can be expressed as $\dfrac{(x-h)^2}{b^2}+\dfrac{(y-k)^2}{a^2}=1$, where $(h,k)$ is the center, $2a$ is the major axis length, and $2b$ is the minor axis length.
The center is $(h,k) = (5,6)$. The vertices are located at the end points of the vertical major axis, $(5,1)$ and $(5,11)$. The major axis length is $2a =10 \implies a=5$ and the minor axis length is $2b=8 \implies b=4$.
Now, $c=\sqrt {a^2-b^2}=\sqrt {5^2-4^2}=3$, so the foci are $(5,3)$ and $(5,9)$.
The major axis is vertical, so with $a=5$, $b=4$: $\dfrac{(x-5)^2}{16}+\dfrac{(y-6)^2}{25}=1$
|
2021-05-17 20:31:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8402504920959473, "perplexity": 80.435678665092}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992440.69/warc/CC-MAIN-20210517180757-20210517210757-00073.warc.gz"}
|
http://buildandcrash.blogspot.com/2014_02_01_archive.html
|
## Sunday, February 23, 2014
Update: see this post for the latest with this frame. See the entry on thingverse to make your own.
Steve (aka hovership) was nice enough to send me one of his 3D printed foldable microquads, which I'm really excited about. It's nice and small and even has an offset mounting hole for Sparky which is pretty awesome. It is designed for FPV, although I haven't got it set up for that yet. So far it is a ton of fun for normal flying.
For electronics I'm using:
## Construction
I took some photos to document the construction.
I like how the single sided low profile of Sparky works well with getting it nice and small.
## Configuration
I configured it for basic flight the way described in this video.
The tuning values that worked well were:
I wanted this quad to be really well calibrated for optimal navigation. I went ahead and ran the complete temperature and 6 point calibration, as well as using an advanced feature for autotuning that lets you adjust the aggressiveness of the tuning. Here is a video that describes the process:
The tuning values I ended up with were:
## Flying
I wanted to have more time to really take it out and tear around, but time is limited. So far I really like how it flies. It's quite locked in and snappy and has probably 7-8 minutes with 3S 800mAh batteries. I've given it some fairly heavy landings and the landing gear is holding up well. No major crashes, although I have flown it into a few small things. It also can carry a GoPro which will make it a nice easy traveler for taking quick videos. Looks like 3D printed quads are going to be big this year.
Coming up (hopefully), will be a video about tuning up the quad for the best navigation and RTH. I've already got some improvements to altitude control in the pipeline from playing with this.
## Saturday, February 22, 2014
### Shapeoko assembly and testing
I received my Shapeoko last week which was extremely exciting. I ordered the full kit which (as the name implies) comes with everything you need to get started. Well, except for a jumper that is not documented (see below).
There are lots of little upgrades that you might want to look at still - the new baseboard and bumper switches being high on my list to get. It uses an Arduino for control (which despite 15+ years playing with embedded stuff I'd managed to avoid till now). I was really really really tempted to not get the electrical kit and make my own controller (after all it could basically use the software and hardware from Sparky BGC and/or my ESC), however I really want this to be a tool rather than a project in itself so I sucked it up.
Of course I had to immediately start assembling, so that shot two nights of studying, but oh well. I snapped a few photos along the way.
## Assembly
Stuffing all the bearings into these took forever and killed my fingers
This gives a little spring to the z-axis in case of a crash
The rollers for all the axes. I didn't notice at this point you should add the screw terminals if you want them. I figured I would solder but they were ultimately easier so I had to get them on later. Oh well.
Finished X and Z axis
End of night 1. Almost there - just the last mounts (which involved lots of tapping holes) and then on to electronics.
And all finished!
I did print a "Hello world" but it looks fairly ugly because of sloppy pen mounting so I will not show it :).
One thing to look out for though, is there is an error in the documentation unless I missed something. For grblShield it says to use $2=320 but the issue with this is that it assumes 2x microstepping on the z-axis. The way the grblShield comes configured, it is set for 8x microstepping. If you follow the instructions as written, your z-axis will only move 1/4 the amount it is meant to. You need to add a jumper for the z-axis to make it work properly (or probably $2=1280, but I did not run like this because it reduces the z-axis torque). You can find more information here.
## Toolchain / Workflow
Jumping in to CNC manufacturing is fairly overwhelming. I know a few things from work where we use Catia and a nice Hermle 5 axis (and by we I mean other people and I'm just peripherally involved). I spent some time reading around on various programs for both the CAD (design) and CAM (toolpaths) stage of things. My ideals were open source and something that runs nicely on a Mac. I found something that satisfies both, although it is possible I'll eventually fork over some money if these end up too limiting.
But I settled on two workflows, the former being the only thing I have used so far:
• Most things: FreeCAD for design and generating toolpasses with PyCAM
• Artwork: Blender with the BlenderCAM add on (I've used Blender previously)
I really like FreeCAD so far because of the parametric modeling. I like the ability to simply type in the dimensions and angles and have it take care of the details. I recommend taking a look at this tutorial. It also has a lot of similarities with what I learned from Catia. I'm using the 0.14 version right now, which I think isn't the official release. So far it's worked fairly well, although it definitely has some bugs. So the first thing I did was a simple heart to try and convince my girlfriend this hobby is something she will like :).
I did this by using the draft mode to create the face (using two arcs and two straight lines) and then extruding that to make a solid. I realized afterwards this is NOT the right way to use FreeCAD so will talk about it later. From there I saved the shape as a .stl file and loaded it up in PyCAM.
For PyCAM I would definitely recommend getting it from the source. With the precompiled version I could only get a really silly toolpath that ended up taking a solid hour to cut. Tons of lifting the spindle up and then down, cut a little, and repeat. Unfortunately I forgot to save a screenshot of that ridiculousness. Anyway with the latest version I used the "waterline" cutting pattern to generate essentially what I wanted.
Although it would be nice to have the path in something not red since for me red on black is super low contrast. Luckily since it is OSS I can change that :D. Finally upload that path through the universal gcode sender and cut away :). These videos were taken with the earlier version of PyCAM so you'll see more silliness in the path.
All of the cutting was done with the "Solid Carbide 2 Flute" bit from Inventables using some scrap wood I had sitting around.
Happy Valentines day!
One question I did have after playing with this: what is the best place to designate as "0" for the tool. Right now I'm putting it towards the top of the material and then zeroing the coordinate system. However that ends up making my waste board a little sad when the material isn't the precise depth. I'm thinking about zeroing out on the table surface so that it can simply start a little high, but then it gets awkward because you need to move the material in and out for zeroing.
## More serious design
Now I knew the basics were working, I wanted to design something a bit more sophisticated. I'm trying to build a hinge that can be actuated in a fairly simple way and this will be a prototype of it. At this point I realized the right way to do this is using Part Design mode and creating constrained sketches. I'll defer to the hackaday tutorial for details on how to do these things.
Create the outline of the piece
Make it a pad that comes up
Make a sketch on the top face of how I want to shape a pocket
Add the filets at the corner to make it machinable
Get it fully constrained so it is valid. At this point it turns green.
Make a pocket in the piece that is that shape
And another pocket at the end to lower the profile of the hinge area
And finally a hole for where the opposite piece will mount
## More serious CAM
From this model I exported the stl file and loaded it into PyCAM.
Here you see the model all loaded up. Note this time "0" is below the piece which I need to remember
Tell PyCAM the bit I am using
So the strategy for cutting things out I like is "waterline" which goes around the edges. However, because I need to cut a big chunk of stock out I need to do this first because the drop down area won't be caught by the waterline strategy. I'll do this but restrict it to the part of the piece that is relevant.
So a process named bulk removal that will be allowed to have some slop (0.5mm error)
Bounding box that only covers the area I need to remove material
Set up a task using these tools, strategies and area
Which gives a pretty reasonable path
Now that the stock is milled down there, waterline will work for the remaining area.
Use the waterline mode to cut around the edge
Bounds that cover the whole model now
Set up a task to generate the pass using the correct process and bounds as well as the right collision model
And finally the tool pass to generate this part
## Cutting
I made a mistake the first time I generated this tool pass. I set the safety margin to 5mm which I had previously done when the "0" was on the top of the piece. In this case, that made the tool traverse between locations THROUGH the material. It took me a while to realize what was happening and it produced quite a mess. You can see this near the end of the video.
CNC Joint from James Cotton on Vimeo.
My second pass was almost perfect. However, right at the end when it finished the middle hole the whole thing was disconnected and started spinning around. That ripped it nicely and luckily only bent the bit and didn't break anything. You have to be sitting by the power switch on this thing!
To work around this I went ahead and added a support structure:
Which leaves it attached. However, I forgot to enable it for the collision detection so my second one almost went the same way. Luckily I stopped it in time and pulled the piece out. In the future I will definitely make sure to use supports properly and manually cut them when the piece is done.
Also for the final version I need to make sure the pocket inner cutting goes to the same depth as the waterline pass. That ended up not lining up perfectly and I'm not sure why yet.
## Other component and finished!
I also machined the other side of the joint. This time I figured out how to use the grid cloning feature in PyCAM which was nice to make it one operation (which luckily didn't fail). This time I also got the supports to work properly.
## Thanks Inventables
Both for making this available at a reasonable cost and having pretty good documentation. Also it just seems like a company run by people that are nice which is awesome. When there was a mixup on some shipping stuff they paid the difference even when I offered to, which is just really nice.
|
2018-02-23 19:53:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4697324335575104, "perplexity": 1104.956008670371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814833.62/warc/CC-MAIN-20180223194145-20180223214145-00127.warc.gz"}
|
http://wiki.tiker.net/Teaching/DiscreteMathSpring2011/Quiz3/CommonMistakes?action=fullsearch&value=linkto%3A%22Teaching%2FDiscreteMathSpring2011%2FQuiz3%2FCommonMistakes%22&context=180
|
# Problem 1
• The case distinction in $y$ is not the same as in $x$. It should be $y \ge 0$ and $y < 0$.
• All the set-up in a proof may be similar each time, but it is not optional. You start your proof with "Let $x \in \mathbb{R}$. Let $y = f(x)$ (the hypothesis)." And you continue from there.
• Note that this amounts to only half of the proof of g being f's inverse, despite what some of you claimed. A full proof would also include the converse of the statement proved.
• A popular mistake was to prove the converse of what was asked--see also the previous point.
# Problem 2
• You're not supposed to apply any rules in writing the dual or the set theory variant of an equality.
# Problem 3
Proof of 3b)
To show: If $g \circ f = \iota_A$, then $f$ is one-to-one.
Proof: Let $g \circ f = \iota_A$. To show: $f$ is one-to-one, which is equivalent to $\forall x, y \in A,\ f(x) = f(y) \rightarrow x = y$. (Not to be confused with $\forall x, y \in A,\ x = y \rightarrow f(x) = f(y)$, which is true for any function.)
So let $x, y \in A$ and let $f(x) = f(y)$. Then obviously $g(f(x)) = g(f(y))$, but by $g \circ f = \iota_A$, that means $\iota_A(x) = \iota_A(y)$, which in turn means $x = y$, which is what we needed to show.
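For those who like to double-check such arguments mechanically, here is a hedged sketch of Problem 3b in Lean 4 (plain core Lean, no extra libraries; the statement is a direct transcription of the proof above):

```lean
-- Problem 3b: if g ∘ f is the identity on A, then f is one-to-one.
example {A B : Type} (f : A → B) (g : B → A)
    (h : ∀ x : A, g (f x) = x) :
    ∀ x y : A, f x = f y → x = y := by
  intro x y hxy
  have hg : g (f x) = g (f y) := congrArg g hxy  -- apply g to both sides
  rw [h x, h y] at hg                            -- rewrite using g (f x) = x
  exact hg
```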
|
2013-12-09 05:25:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9089391827583313, "perplexity": 1208.8891357540615}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163901500/warc/CC-MAIN-20131204133141-00000-ip-10-33-133-15.ec2.internal.warc.gz"}
|
http://123doc.org/doc_search_title/3860416-aqa-ms03-w-tsm-ex-jun07.htm
|
# AQA MS03 w TSM EX JUN07
## AQA MD01 w TSM EX JUN07
... the new routes This script clearly shows the new routes giving the new figures of 69 and 62, which meant that in the body of the script full marks were obtained Too many candidates will ‘work ... ‘work in their head’ and write down the best answer without any justification Mark Scheme MD01 Question 4a Student Response Commentary Candidates must know the difference between all the algorithms ... candidate knew that the exponential function had to be dealt with Many candidates were unsure as to how to proceed and used logs without realising the implications This solution showed a lack of...
## AQA MFP3 w TSM EX JUN07
... the wrong answer 1 x + x In the exemplar the candidate clearly recognises 16 the need to use a law of logarithms and completes the solution within a few lines Part (c) was generally answered ... Most candidates used their answers to part (c)(i) as the limits but other values were used, not always with any justification In the exemplar, the candidate shows good examination technique by first ... incorrectly to ‘y = sin x + 3’ all marks would have been awarded (the examiners would have applied ISW for work after ‘y sec x = tan x + 3’) Mark Scheme MFP3 Question Student Response Commentary...
## AQA MM1A w TSM EX JUN07
... working to the required accuracy and hence ends up with a mark of zero It is worth noting that candidates are only penalised once for not working to the required accuracy, so this candidate would ... the weight of the balloon and no account is taken of the fact that it is accelerating In part (d), the candidate states the weight that had been calculated in part (b) However as the answer is ... common error Solutions of this type were awarded method mark Mark Scheme MM1A Question Student Response Commentary In part (c) of this question, a common error was to calculate an angle based on...
## AQA MM04 w TSM EX JUN07
... in part b) was to lose a π symbol resulting in the wrong answer Candidates showed good knowledge of suspension problems and even if errors were made in part b) full recovery marks were available ... 45 as 354 Explanations in a)ii) were good No follow through marks were available here as the question could be answered with reference to the diagram Part b) proved very discriminating with many ... “resolving the whole system” to get the answer as 500 N – failing to understand the implication of a reaction force at point C It was expected that candidates would use Newton’s third law at point...
## AQA MM05 w TSM EX JUN07
... Response Commentary A number of candidates ‘invented’ the answer given to part (a) This example shows a candidate who uses the extensions to be x in both strings, rather than the correct 2a ... speed which the question required The formula used, vmax = aω needed a to be the maximum displacement for the pendulum (where a = Al) rather than the maximum angular displacement which was A ... MM05 Question Student Response Commentary This example shows a candidate spending considerable time and effort instead of thinking and then using a more suitable approach as outlined in the Examiners...
## AQA MPC2 w TSM EX JUN07
... question was answered well by the above average candidates but some left their answer as f(4x) which was not awarded the mark In the exemplar the candidate gives the other common wrong answer which ... MPC2 Question Student Response Commentry Part (a) was generally answered very well The only common error was in (iii) where the wrong answer ‘ x ’ was given by a minority ... Part (a) was generally answered well but some candidates failed to gain any credit due to poor examination technique In the exemplar the candidate shows good examination technique by writing ‘...
## AQA MPC3 w TSM EX JUN07
... trigonometry questions if they give extra answers that are OUTSIDE of the given range However for extra answers within the range will be penalised Mark Scheme MPC3 Question 4b(ii) Student Response ... paper candidates were required to show that a given answer was true This was so that candidates could proceed with later parts of the question, BUT wrongs not make a right! Candidates were able to ... paper candidates were required to show that a given answer was true This was so that candidates could proceed with later parts of the question, BUT enough steps need to be shown as to justify...
## AQA MD01 w TSM EX JUN08
... diagram This’ solution’ shows a number of arrows on the diagram with no clear order shown The candidate appears to start at vertex1 but it is then unclear how the path follows on The candidate only ... alphabet! On the next line they have chosen to work with the first sublist only – again acceptable Next line working with the second subset is ok apart from their earlier mistake However they have ... candidate knowing something about odd vertices but not knowing exactly what to They have found AB, AC and AD without realising that pairs of vertices are required Again in their explanation they...
## AQA MD01 w TSM EX JUN09
... Although there were many fully correct responses to this question, there was a significant number who failed to write down the correct number of comparisons The number of swaps was well done, as ... correctly This is work that we would expect a student in Year 10 to be able to well It is essential that students practise drawing graphs accurately Although the line from (0, 60) to (40, 0) was an incorrect ... Commentary Candidates were given a piece of bookwork at the start of this question to help with the network given in part (b) This candidate correctly stated that there were edges in a minimum...
## AQA MD02 w TSM EX JUN08
... Scheme MD02 Question Student Response Commentary (a)(i) It is a good idea to explain what p represents before writing down expressions A better statement might have been that “Roseanne plays R1 with ... for Roseanne is explained in words MD02 Many candidates did not write such a statement and lost a mark (ii) Instead of using either of the two expressions used previously to show that the value ... , but what the candidate writes here, although badly worded, is understood The expected values when Collette chooses each of the columns are calculated correctly The diagram is a good example...
## AQA MD02 w TSM EX JUN09
... these were not required Many left these in their solution and this was not penalised The minimum of the column maxima was indicated with an arrow and further explanation showed why C1 was Colin’s ... the Simplex Method where fractions were used and the row operations were performed accurately Extra information was given regarding the pivot being used for the second iteration, which was not ... drew a wrong conclusion about the pivot Although this candidate does not use the word pivot, it is clear that the entry has been identified from the third row The row operations were clearly explained...
## AQA MFP1 w TSM EX JUN08
... M15 6EX Dr Michael Cresswell, Director General Question Student Response MFP1 Commentary Most candidates answered this question well Common errors occurred in part (d), where, as in this example, ... part (b) In this x , but then after raising the x2 power by one divides by the old power rather than the new power Mark Scheme MFP1 Question MFP1 Student Response Commentary Many candidates appreciated ... equation would then become y (x+ 2) = ax (x+ 2) + b, or, Y = aX + b This script shows a typical candidate who struggled with the required algebraic multiplication by x + Mark Scheme MFP1 Question...
## AQA MFP1 w TSM EX JUN09
... candidate was not sufficient to find ( ) The other common error, also shown here, was finding the product of the new roots which was 4 or 16 ; many used its value as 4( ) MFP1 Mark ... candidate shows to find when z z was real instead of letting the imaginary part, which was x , equal zero, candidates looked for a far more complicated solution MFP1 Mark Scheme Question MFP1 ... Student Response Commentary In this example the candidate has just written down the equation Y = x log b + log a instead of showing the use of the two log laws and the intermediary result Y =...
## AQA MFP2 w TSM EX JUN09
... particular, (as part (a) was completely correctly done by virtually all candidates) sufficient rows were written down by the candidate to show the cancellation Sometimes rows were written as 1/2(2-1) ... together with the method of showing how it was to be done Mark Scheme MFP2 Question Student Response MFP2 Commentary The candidate starts off well with ds/dx = √(1+(dy/dx)²) Many candidates started with ... candidates wrote down the answer without sufficient intermediate working In this case, the candidate went into considerable detail when evaluating (coskΘ+isinkΘ)(cosΘ+isinΘ) even to the extent of...
## AQA MFP3 w TSM EX JUN08
... display in this case, without showing the working, no marks could have been awarded for method The candidate gave a wrong value for k Although no method was shown, the value given was the same as ... MFP3 Question Student Response Commentary Although this question was generally answered very well by candidates, the exemplar illustrates partial poor examination technique ... correct values for the four unknowns a, b, c and d A significant number of candidates, like the one in the exemplar, wasted time by d2 y finding an expression for which was not required in the solution...
|
2017-04-27 17:16:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8554355502128601, "perplexity": 2152.6304355268653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.60/warc/CC-MAIN-20170423031202-00223-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://eprint.iacr.org/2022/1630
|
### Finding Collisions for Round-Reduced Romulus-H
##### Abstract
Romulus-H is a hash function that currently competes as a finalist in the NIST Lightweight Cryptography competition. It is based on the Hirose DBL construction which is provably secure when used with an ideal block cipher. However, in practice, ideal block ciphers can only be approximated. The security of concrete instantiations must be cryptanalyzed carefully; the security margin may be higher or lower than in the secret-key setting. So far, the Hirose DBL construction has been studied with only a few other block ciphers, like IDEA and AES. However, Romulus-H uses Hirose DBL with the SKINNY block cipher where no dedicated analysis has been published so far. In this work, we present the first third-party analysis of Romulus-H. We propose a new framework for finding collisions in hash functions based on the Hirose DBL construction. This is in contrast to previous work that only focused on free-start collisions. Our framework is based on the idea of joint differential characteristics which capture the relationship between the two block cipher calls in the Hirose DBL construction. To identify good joint differential characteristics, we propose a combination of a MILP and CP model. Then, we use these characteristics in another CP model to find collisions. Finally, we apply this framework to Romulus-H and find practical collisions of the hash function for 10 out of 40 rounds and practical semi-free-start collisions up to 14 rounds.
Category
Attacks and cryptanalysis
Publication info
Preprint.
Keywords
Hash functions, Differential cryptanalysis, MILP, SMT, Romulus-H
Contact author(s)
marcel nageler @ iaik tugraz at
felix pallua @ student tugraz at
maria eichlseder @ iaik tugraz at
History
2022-11-23: approved
See all versions
Short URL
https://ia.cr/2022/1630
CC BY
BibTeX
@misc{cryptoeprint:2022/1630,
author = {Marcel Nageler and Felix Pallua and Maria Eichlseder},
title = {Finding Collisions for Round-Reduced Romulus-H},
howpublished = {Cryptology ePrint Archive, Paper 2022/1630},
year = {2022},
note = {\url{https://eprint.iacr.org/2022/1630}},
url = {https://eprint.iacr.org/2022/1630}
}
|
2022-11-29 15:49:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25295597314834595, "perplexity": 4228.082792897019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00263.warc.gz"}
|
https://www.electricalexams.co/magnetically-isolated-coils-value-of-coefficient/
|
# For magnetically isolated coils, the value of coefficient of coupling is:
#### SOLUTION
Coefficient of coupling is given by
$k = \frac{M}{\sqrt{L_1 L_2}}$
Where
L1 and L2 = self-inductance
M = Mutual inductance
For isolated coil, M = 0
Coefficient of coupling will be
$k = \frac{0}{\sqrt{L_1 L_2}}$
Since the coils are magnetically isolated, M = 0, so the coefficient of coupling is k = 0.
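As a quick illustration of the formula above, here is a minimal Python sketch; the self-inductance values are made up purely for the example, and only the M = 0 case reflects the magnetically isolated coils in the question:

```python
import math

def coupling_coefficient(M, L1, L2):
    """k = M / sqrt(L1 * L2) for two magnetically coupled coils."""
    return M / math.sqrt(L1 * L2)

# Hypothetical self-inductances (in henries), for illustration only.
L1, L2 = 0.5, 0.2

print(coupling_coefficient(0.1, L1, L2))  # some coupling: 0 < k <= 1
print(coupling_coefficient(0.0, L1, L2))  # magnetically isolated: M = 0, so k = 0
```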
|
2021-10-26 06:06:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5413252115249634, "perplexity": 6383.597914788513}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00078.warc.gz"}
|
https://www.bikecad.ca/saddle_height
|
# Saddle Height
BikeCAD defines saddle height as the direct distance from the center of the bottom bracket to the reference point on the saddle. The reference point is by default the deepest part of the saddle, where it may dip down in the middle. However, this point can be changed according to the instructions here.
This dimension will tend to be roughly parallel to the seat tube, but it will not always be exactly parallel. As the saddle is slid back and forth on its rails, the saddle height dimension will shift to follow the saddle. The saddle height dimension will also shift as the seatpost setback is changed.
As the fore and aft position of the saddle is changed, and as the seatpost setback is changed, BikeCAD will adjust the length of exposed seatpost so that the actual saddle height will remain as specified.
Saddle height can be directly input into BikeCAD. Alternatively, the saddle can be located by XY coordinates with respect to the bottom bracket. This is explained at: bikecad.ca/saddle_position.
|
2021-04-19 23:06:00
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8266040682792664, "perplexity": 1039.1864474841336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038917413.71/warc/CC-MAIN-20210419204416-20210419234416-00382.warc.gz"}
|
https://www.vietlod.com/500-cau-trac-nghiem-kinh-te-luong-3c
|
# 500 Econometrics Multiple-Choice Questions – 3C
A compilation of 500 multiple-choice and short-answer questions in econometrics (elementary statistics). All of the questions come with answers. The content is organised into 13 parts, each consisting of three tests (A, B, C). The questions follow the econometrics syllabus closely, especially the statistics material, and are well suited for consolidating and extending your knowledge of econometrics. The questions in part 3C include:
SHORT ANSWER. Write the word or phrase that best completes each statement or answers the question. Provide an appropriate response.
1) Compare probabilities and odds. How can you convert odds to probabilities?
Probabilities compare the number of occurrences of an event A to the total number of outcomes. Odds compare the number of occurrences of event A to the number of occurrences of the complement of event A. If the odds for A are 13:6, then P(A) = 13/19 since there would be a total of 19 outcomes (13 + 6).
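A small Python sketch of the odds-to-probability conversion described in the answer above, using the 13:6 odds from that answer (the helper function name is just for illustration):

```python
from fractions import Fraction

def odds_to_probability(in_favour, against):
    """Convert odds 'in_favour : against' for event A into P(A)."""
    return Fraction(in_favour, in_favour + against)

p = odds_to_probability(13, 6)
print(p, float(p))  # 13/19, roughly 0.684
```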
2) List two reasons it is better to sample without replacement when testing batches of products.
When sampling without replacement, should you use the multiplication rule for independent or dependent events? Explain your answer.
The two reasons include the lower chance of getting only good items when some defects are present, and sampling with replacement might allow you to test the same item more than once which would be inefficient. You should use the multiplication rule for dependent events, since the sample space has diminished and the probability of choosing a second good item has gotten smaller.
3) Suppose a student is taking a 5-response multiple choice exam; that is, the choices are A, B, C, D, and E, with only one of the responses correct. Describe the complement method for determining the probability of getting at least one of the questions correct on the 15-question exam. Why would the complement method be the method of choice for this problem?
P(at least one correct) = 1 – P(none are correct). The alternative to the complement method is to find P(1), P(2),…P(15) and take this sum. This method is too time consuming and too difficult.
MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question.
Express the indicated degree of likelihood as a probability value.
4) “You have one chance in ten of winning the race.”
○ 0.5
● 0.10
○ 0.90
○ 1
Find the indicated probability.
5) If a person is randomly selected, find the probability that his or her birthday is in May. Ignore leap years.
○ 1/365
○ 1/12
● 31/365
○ 1/31
Answer the question, considering an event to be “unusual” if its probability is less than or equal to 0.05.
6) Assume that a study of 500 randomly selected school bus routes showed that 479 arrived on time. Is it “unusual” for a school bus to arrive late?
● Yes
○ No
From the information provided, create the sample space of possible outcomes.
7) A coin and an octagonal die are tossed.
○ H1 H2 H3 H4 H5 H6 T1 T2 T3 T4 T5 T6
○ H1 H2 H3 H4 H5 H6 H7 H8 H9 H10 T1 T2 T3 T4 T5 T6 T7 T8 T9 T10
● H1 H2 H3 H4 H5 H6 H7 H8 T1 T2 T3 T4 T5 T6 T7 T8
○ H1 H2 H3 H4 H5 T1 T2 T3 T4 T5
8) Suppose you are playing a game of chance. If you bet $10 on a certain event, you will collect $500 (including your $10 bet) if you win. Find the odds used for determining the payoff.
○ 1 : 49
○ 50 : 1
● 49 : 1
○ 500 : 510
Find the indicated probability.
9) 100 employees of a company are asked how they get to work and whether they work full time or part time. The figure below shows the results. If one of the 100 employees is randomly selected, find the probability of getting someone who carpools or someone who works full time.
1. Public transportation: 9 full time, 6 part time
2. Bicycle: 3 full time, 5 part time
3. Drive alone: 30 full time, 30 part time
4. Carpool: 9 full time, 8 part time
○ 0.13
○ 0.53
● 0.59
○ 0.27
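For question 9, the answer 0.59 follows from the addition rule (inclusion–exclusion). A quick check in Python, using the counts listed above (variable names are just for illustration):

```python
# Counts from the survey of 100 employees.
full_time = 9 + 3 + 30 + 9       # public transportation, bicycle, drive alone, carpool
carpool = 9 + 8                  # full time + part time carpoolers
carpool_and_full_time = 9

p = (full_time + carpool - carpool_and_full_time) / 100
print(p)  # 0.59
```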
10) The table below describes the smoking habits of a group of asthma sufferers.
| | Nonsmoker | Occasional smoker | Regular smoker | Heavy smoker | Total |
|---|---|---|---|---|---|
| Men | 348 | 40 | 66 | 36 | 490 |
| Women | 431 | 46 | 90 | 30 | 597 |
| Total | 779 | 86 | 156 | 66 | 1087 |
If one of the 1087 people is randomly selected, find the probability of getting a regular or heavy smoker.
○ 0.144
○ 0.459
○ 0.094
● 0.204
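For question 10, the probability of a regular or heavy smoker comes straight from the "Total" row of the table above; a one-line check in Python:

```python
regular, heavy, total = 156, 66, 1087        # column totals from the table above
print(round((regular + heavy) / total, 3))   # 0.204
```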
### Thuyết Nguyễn
- Teaches applied econometrics and scientific research methodology - Data analysis expert with Stata - Has spent more than 15,000 hours on applied econometrics research - Passionate about research and learning new things - Works independently and is highly self-taught.
|
2021-05-18 12:14:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21976226568222046, "perplexity": 3513.0881291056794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00373.warc.gz"}
|
https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/1570
|
## Global Manufacturing: How to Use Mathematical Optimisation Methods to Transform to Sustainable Value Creation
Please always quote using this URN: urn:nbn:de:0297-zib-15703
• It is clear that a transformation to sustainable value creation is needed, because business as usual is not an option for preserving the competitive advantages of leading industries. What does that mean? This contribution proposes possible approaches for a shift in existing manufacturing paradigms. In a first step, sustainability aspects from the German Sustainability Strategy and from the tools of life cycle sustainability assessment are chosen to match areas of a value creation process. Within these aspects are indicators, which can be measured within a manufacturing process. Once these data are obtained, they can be used to set up a mathematical linear pulse model of manufacturing in order to analyse the evolution of the system over time, that is the transition process, by using a system dynamics approach. An increase of technology development by a factor of 2 leads to an increase of manufacturing but also to an increase of climate change. Compensation measures need to be taken. This can be done by e.g. taking money from the GDP (as an indicator of the aspect "macroeconomic performance"). The value of the arc from that building block towards climate change must then be increased by a factor of 10. The choice of independent and representative indicators or aspects shall be validated and double-checked for their significance with the help of multi-criteria mixed-integer programming optimisation methods.
|
2018-04-27 06:48:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44734856486320496, "perplexity": 759.4460300827756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949489.63/warc/CC-MAIN-20180427060505-20180427080505-00608.warc.gz"}
|
https://www.investopedia.com/ask/answers/122314/what-difference-between-gross-margin-and-contribution-margin.asp
|
• General
• Personal Finance
• Reviews & Ratings
• Wealth Management
• Popular Courses
• Courses by Topic
# Gross Margin vs. Contribution Margin: What's the Difference?
## Gross Margin vs. Contribution Margin: An Overview
Gross margin measures the amount of revenue that remains after subtracting costs directly associated with production. Contribution margin is a measure of the profitability of various individual products based on the variable costs associated with those goods.
### Key Takeaways
• Gross margin is the amount of profit left after subtracting the cost of goods sold from revenue, while contribution margin is the amount of profit left after subtracting variable costs from revenue.
• Gross margin encompasses an entire company’s profitability, while contribution margin is more useful as a per-item profit metric.
• Contribution margin can be used to examine variable production costs and is usually expressed as a percentage.
• While gross profit is generally an absolute value, gross profit margin is expressed as a percentage.
• Contribution margin is used to determine the breakeven point, while gross margin is more likely to be used to set operating targets for divisions to achieve.
## Gross Margin
Gross margin is synonymous with gross profit margin and includes only revenue and direct production costs. It does not include operating expenses such as sales and marketing expenses, or other items such as taxes or loan interest. Gross margin would include a factory's direct labor and direct materials costs, but not the administrative costs for operating the corporate office.
Direct production costs are called cost of goods sold (COGS). This is the cost to produce the goods or services that a company sells. Gross margin shows how well a company generates revenue from direct costs such as direct labor and direct materials costs. Gross margin is calculated by deducting COGS from revenue and dividing the result by revenue. The result can be multiplied by 100 to generate a percentage.
### How to Calculate Gross Margin
Gross profit is calculated as the different between net sales and cost of goods sold. Gross profit margin is calculated as the ratio between gross profit and net sales:
$$\begin{aligned}&\textbf{Gross Profit Margin}=\frac{\textbf{Net Sales}-\textbf{COGS}}{\textbf{Net Sales}}\\&\textbf{where:}\\&\textbf{COGS}=\text{Cost of goods sold}\end{aligned}$$
Net sales is determined by taking total gross revenue and deducting residual sale activity such as customer returns, product discounts, or product recalls. Cost of goods sold is the sum of the raw materials, labor, and overhead attributed to each product. Inventory (and by extension cost of goods sold) must be calculated using the absorption costing method as required by generally accepted accounting principles (GAAP).
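As a minimal sketch of the calculation just described, here is an illustrative Python snippet that reuses the revenue and COGS figures from the example later in this article (the helper function name is just for illustration):

```python
def gross_profit_margin(net_sales, cogs):
    """Gross profit margin = (net sales - COGS) / net sales."""
    return (net_sales - cogs) / net_sales

revenue = 2_000_000   # net sales
cogs = 1_500_000      # cost of goods sold

print(gross_profit_margin(revenue, cogs))  # 0.25, i.e. a 25% gross margin
```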
## Contribution Margin
Contribution margin is the revenue remaining after subtracting the variable costs that go into producing a product. Contribution margin calculates the profitability for individual items that a company makes and sells.
Specifically, contribution margin is used to review the variable costs included in the production cost of an individual item. It is a per-item profit metric, whereas gross margin is a company's total profit metric. Contribution margin ratio is expressed as a percentage, though companies may also be interested in calculating the dollar amount of contribution margin to understand the per-dollar amount attributable to fixed costs.
### How to Calculate Contribution Margin
Contribution margin is the difference between revenue and variable costs. It can be calculated on an aggregate basis or a per unit basis, and contribution margin is reported on a dollar basis:
$$\begin{aligned}&\textbf{Contribution Margin}=\textbf{NSR}-\textbf{VC}\\&\textbf{where:}\\&\textbf{NSR}=\text{Net sales revenue}\\&\textbf{VC}=\text{Variable costs}\end{aligned}$$
Net sales is calculated the same way for contribution margin as for gross margin. However, variable costs are not the same as cost of goods sold. Often, a company's cost of goods sold comprises both variable and fixed costs. Variable costs are only those expenses incurred in proportion to manufacturing volume; for example, producing one additional unit adds a small amount of materials, labor, and overhead expense.
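A matching Python sketch for contribution margin, using the Company XYZ widget figures from the example later in this article (again, the helper function name is only illustrative):

```python
def contribution_margin(net_sales_revenue, variable_costs):
    """Contribution margin in dollars = net sales revenue - variable costs."""
    return net_sales_revenue - variable_costs

revenue_per_widget = 10_000
variable_cost_per_widget = 6_000

cm = contribution_margin(revenue_per_widget, variable_cost_per_widget)
print(cm)                        # 4000 dollars of contribution per widget
print(cm / revenue_per_widget)   # 0.4, i.e. a 40% contribution margin ratio
```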
It is possible for a product to have a positive contribution margin yet a negative gross margin.
## Key Differences
### Different Implementations
Most often, a company will analyze gross margin on a company-wide basis. This is how gross margin is communicated on a company's set of financial reports, and gross margin may be more difficult to analyze on a per-unit basis.
Alternatively, contribution margin is often more accessible and useful on a per-unit or per-product basis. A company will be more interested in knowing how much profit for each unit can be used to cover fixed costs as this will directly impact what product lines are kept.
### Different Expense Considerations
Gross margin considers a broader range of expenses than contribution margin. Gross margin encompasses all of the cost of goods sold regardless of if they were a fixed cost or variable cost.
The primary difference is fixed overhead is included in cost of goods sold, while fixed overhead is not considered in the calculation for contribution margin. As contribution margin will have fewer costs, contribution margin will likely always be higher than gross margin.
### Different Users
Investors, lenders, government agencies, and regulatory bodies are interested in the total profitability of a company. These users are more interested in the total profitability of a company considering all of the costs required to manufacture a good.
On the other hand, internal management may be most interested in the costs that go into manufacturing a good that are controllable. Management may have little to no say regarding fixed costs; therefore, internal members of a company often focus more on the elements they are responsible for (i.e. the variable costs) that fluctuate with production levels.
### Different Reporting Requirements
Technically, gross margin is not explicitly required as part of externally presented financial statements. However, external financial statements must be presented showing total revenue and the cost of goods sold. Often, externally presented reports will contain gross margin (or at least both components required to calculate gross margin).
On the other hand, a company is not required to externally disclose its amount of variable costs. In its financial statements, it is not required to bifurcate fixed expenses from variable costs. For this reason, contribution margin is simply not an external reporting requirement.
### Different Levels of Transparency
Under either method, a company's ultimate net income will be the same. Because gross margin encompasses all costs necessary to manufacture a good, some may argue it is a more transparent figure. On the other hand, a company may be able to shift costs from variable costs to fixed costs to "manipulate" or hide expenses easier.
For example, consider a soap manufacturer that previously paid $0.50 per bar for packaging. Suppose the company enters into an agreement to pay $500 for all packaging for all bars manufactured this month; that packaging cost is now fixed rather than variable. Gross margin would report both types of cost the same way (include them in its calculation), while contribution margin would treat these costs differently.
Gross Margin
• More often used at a company-wide, higher level of analysis
• Fixed overhead is included in the calculation
• Often used by external parties analyzing a company's overall profitability
• Is included in external reporting
• Is more difficult to exclude costs; all COGS are considered
Contribution Margin
• More often used at a product-level, lower level of analysis
• Fixed overhead is excluded from the calculation
• Often used by internal management to determine operational strategies
• Is strictly an internal reporting metric
• Is easier to exclude costs when shifted between variable and fixed
## Gross Margin vs. Contribution Margin Example
If a company has $2 million in revenue and its COGS is $1.5 million, gross margin would equal revenue minus COGS, which is $500,000 or ($2 million - $1.5 million). As a percentage, the company's gross profit margin is 25%, or ($2 million - $1.5 million) / $2 million.
For an example of contribution margin, take Company XYZ, which receives $10,000 in revenue for each widget it produces, while variable costs for the widget are $6,000. The contribution margin ratio is calculated by subtracting variable costs from revenue, then dividing the result by revenue, or (revenue - variable costs) / revenue. Thus, the contribution margin ratio in our example is 40%, or ($10,000 - $6,000) / $10,000.
Contribution margin is not intended to be an all-encompassing measure of a company's profitability. However, contribution margin can be used to examine variable production costs. Contribution margin can also be used to evaluate the profitability of an item and calculate how to improve its profitability, either by reducing variable production costs or by increasing the item's price.
There's little value in comparing a company's gross margin to its contribution margin. Each metric is used in different ways, and it isn't overly helpful to compare the two.
## Other Profit Metrics
Gross margin and contribution margin are just two of the many different types of profit metrics. Other examples of profit metrics include:
• Operating Profit: Operating profit is the amount of money a company earns after all costs of goods sold, operating expenses, depreciation, and amortization have been subtracted from net revenue.
• Pre-Tax Profit: Pre-tax profit is the amount of money a company earns after all costs except for taxes have been considered. This is often calculated as operating profit less interest expense.
• Net Income: Net income is the amount of money a company earns after all expenses have been deducted from net revenue.
• Accounting Profit: Accounting profit is the amount of money a company earns in accordance with GAAP. GAAP rules require that net income be included on a company's income statement.
• Economic Profit: Economic profit is accounting profit less opportunity costs. It attempts to recognize all expenses, even those that represent benefits foregone.
• Other Comprehensive Income: Other comprehensive income (OCI) is an accounting metric that recognizes gains and losses that have yet to be realized.
## What Is a Good Contribution Margin?
A product's contribution margin will largely depend on the product, industry, company structure, and competition. Though the best possible contribution margin is 100% (there are no variable costs), this may mean a company is highly levered and is locked into many fixed contracts. A good contribution margin is positive as this means a company is able to use proceeds from sales to cover fixed costs.
## Is Contribution Margin Higher Than Gross Margin?
Yes, contribution margin will be equal to or higher than gross margin, because gross margin includes fixed overhead costs while contribution margin does not. Since contribution margin excludes fixed costs, the expenses deducted to calculate it will likely always be less than those deducted to calculate gross margin, while the net revenue used in both is the same.
## Do You Want a High or Low Contribution Margin?
In general, a higher contribution margin is better, as this means more money is available to pay for fixed expenses. However, some companies may prefer to have a lower contribution margin: although such a company has less residual profit per unit after all variable costs are incurred, it may have little to no fixed costs and may keep nearly all of that contribution as profit.
## What Is a Good Gross Margin?
Similar to contribution margin, a good gross margin depends heavily on the company, industry, and product. For example, the state of Massachusetts claims food retailers earn a gross margin of around 20%, while specialty retailers earn a gross margin of up to 60%.
## What Is the Difference Between Gross Profit and Gross Margin?
Gross profit is the dollar difference between net revenue and cost of goods sold. Gross margin is the percent of each sale that is residual and left over after cost of goods sold is considered. The former is often stated as a whole number, while the latter is usually a percentage.
## The Bottom Line
As a company becomes strategic about the customers it serves and products it sells, it must analyze its profit in different ways. Two of those ways are gross margin and contribution margin. Gross margin encompasses all costs of a specific product, while contribution margin encompasses only the variable costs of a good. While gross profit is more useful in identifying whether a product is profitable, contribution margin can be used to determine when a company will breakeven or how well it will be able to cover fixed costs.
|
2022-09-27 03:39:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2949688136577606, "perplexity": 2430.080329932692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00621.warc.gz"}
|
https://mersenneforum.org/showthread.php?s=d08e3b92d0d9aab37cf983cc6590b8b4&p=505806
|
mersenneforum.org ARM builds and SIMD-assembler prospects
2018-07-20, 13:50 #166 ET_ Banned "Luigi" Aug 2002 Team Italia 7·691 Posts Any hints about the addition of the prp option for small Mersenne exponents? It would be optimal for our PIS 😀
2018-07-21, 20:41 #167 ewmayer ∂2ω=0 Sep 2002 República de California 13·29·31 Posts @Luigi: Work continues on PRP support ... but slowly. I am moving at end of August, so Mlucas work necessarily has been taking a back seat to all the work involved with that.
2018-07-22, 11:35 #168
ET_
Banned
"Luigi"
Aug 2002
Team Italia
7×691 Posts
Quote:
Originally Posted by ewmayer @Luigi: Work continues on PRP support ... but slowly. I am moving at end of August, so Mlucas work necessarily has been taking a back seat to all the work involved with that.
I didn't remember that, nor willing to look snappy, it wasn't my intention.
Hope everything goes smooth.
2018-07-24, 20:28 #169
ewmayer
2ω=0
Sep 2002
República de California
13·29·31 Posts
Quote:
Originally Posted by ET_ I didn't remember that, nor willing to look snappy, it wasn't my intention.
You didn't remember that because I never previously mentioned it around here. :)
Thanks for the smoothness Glückwunsch!
2018-12-29, 18:35 #170
M344587487
"Composite as Heck"
Oct 2017
36816 Posts
Quote:
Originally Posted by ewmayer Glad you got at least a nice chunk of the "missing FLOPS" back. I'm looking forward to buying an Odroid N1 once they go on sale, hopefully within a month - one of the beta-testers ran Mlucas benchmarks, and using all 6 cores (one 2-threaded job running on the 'big' 2-core a72 cpu, another 4-threaded one on the 'little' 4-core a53 cpu) he gets 2.2-2.3x the total throughput of an Odroid C2, which means ~3.5x the total throughput of a Pi3. N1 pricing is estimated at ~$110, i.e. about the same $/FLOP as the C2. We can only hope this sparks a full-blown 'multi-socket war' amongst the various ARM-micro-PC manufacturers. :) Even for the N1 one still needs ~10 of them to match the LL-test throughput of a cutting-edge Intel quad, but things are getting close to the "interestingness" level as far as wide-scale adoption goes.
If ~10 RK3399 equals a quad core intel is true for all RK3399 boards then the "interestingness" level has improved due to prices (but it's still in x86's favour).
10x Neo4 + 10x SD cards = ~£400, PSUs probably ~£30, no need for a switch so total ~£430. Not too shabby, is there a stronger SoC than 2xA72+4xA53 that makes more bang for buck sense?
2018-12-29, 19:48 #171 nomead "Sam Laur" Dec 2018 Turku, Finland 317 Posts I doubt that 10x factor somewhat. On the Odroid forum someone ran a couple test runs on Mlucas and a test version of the cancelled Odroid N1. 2560K fft and the timing was 87.7 msec/iter running on the two "big" A72 cores. As a comparison my Ryzen 3 2200G is doing 5.07 ms/iter at 2560K when running mprime, 4 cores 1 worker (throughput is slightly higher for 4 workers) and aren't Intel chips supposed to be much faster than that? Would someone please make more recent benchmarks available - the benchmark tables on both mersenne.org and mersenne.ca are a bit old now. But anyway, I ordered a Rock960 board. It hasn't arrived yet, but it's also got that RK3399 chip on it. It's more expensive than those other boards, but it's just one board for testing and general fooling around, thermals and power consumption etc. There are boards with e.g. Kirin 970 (4x A73, 4x A53) but they are really quite expensive now. Maybe after a while they'll get cheaper too. Or maybe something else will.
2018-12-29, 22:24 #172 ewmayer ∂2ω=0 Sep 2002 República de California 13×29×31 Posts Re. the Odriod-N1 tests, I had a beta tester of one of those do an Mlucas build and timings on that - long story short, we got best total throughput running one 2-thread job on the dual A72 core and one 4-thread job on the quad A53, and said total throughput was ~2.2x what I get running on my quad-A53-core Odroid C2. No idea whether the N2 will be appreciably better FLOPS-wise than the N1, and no word (AFAIK) on when the much-delayed N2 will finally be available, hopefully early in 2019. Looking down the road to 2020, the best hope for an x86-competitive ARM implementation may be in Apple's PC roadmap. More frustrating waiting!
2018-12-29, 23:18 #173
M344587487
"Composite as Heck"
Oct 2017
36816 Posts
Quote:
Originally Posted by nomead I doubt that 10x factor somewhat. On the Odroid forum someone ran a couple test runs on Mlucas and a test version of the cancelled Odroid N1. 2560K fft and the timing was 87.7 msec/iter running on the two "big" A72 cores. As a comparison my Ryzen 3 2200G is doing 5.07 ms/iter at 2560K when running mprime, 4 cores 1 worker (throughput is slightly higher for 4 workers) and aren't Intel chips supposed to be much faster than that? Would someone please make more recent benchmarks available - the benchmark tables on both mersenne.org and mersenne.ca are a bit old now. ...
Taking your 87.7 ms/it for 2xA72 and my 160 msec/it for 2560K 4xA53 from the previous page we get ~17.65 it/sec, effectively ~56.65ms/it for an RK3399. Ewmayer's post on the Odroid forum suggests there may be a 10% slowdown running both clusters simultaneously. So ~11.2 RK3399 may be equivalent to a 2200G (~12.4 if 10% slowdown is present). From the mersenne.ca bench of an i3 8100 of 3.67ms/it for 2560K we can estimate that an 8100 roughly translates to ~15.4 RK3399 (~17.2 if 10% slowdown is present). That's a lot of compounded estimates so a big pinch of salt is required.
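A quick sanity check of the compounded arithmetic above (illustrative Python only, using the per-iteration timings quoted in this thread; it ignores the possible 10% slowdown when both clusters run together):

```python
# Timings in ms/iteration at 2560K FFT, as quoted in this thread.
a72_pair = 87.7      # 2x A72 cluster (Odroid N1 test)
a53_quad = 160.0     # 4x A53 cluster
ryzen_2200g = 5.07   # mprime, 4 cores / 1 worker
i3_8100 = 3.67       # mersenne.ca benchmark figure

rk3399_ips = 1000 / a72_pair + 1000 / a53_quad   # both clusters running together
print(rk3399_ips)                          # ~17.65 iters/sec, i.e. ~56.7 ms/iter
print((1000 / ryzen_2200g) / rk3399_ips)   # ~11.2 RK3399 boards per 2200G
print((1000 / i3_8100) / rk3399_ips)       # ~15.4 RK3399 boards per i3-8100
```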
Quote:
Originally Posted by ewmayer ... Looking down the road to 2020, the best hope for an x86-competitive ARM implementation may be in Apple's PC roadmap. More frustrating waiting!
Oh dear.
2019-01-13, 20:52 #175 thorken Jan 2019 43 Posts take a look at this post
2019-01-14, 20:17 #176
ewmayer
2ω=0
Sep 2002
República de California
266478 Posts
Quote:
If you're not willing to post source, you're going to continue being greeted with skepticism. All the major current clients used by GIMPSers allow users to inspect the source and to build directly from it, should they so desire.
On the other hand, were to do a build of Mlucas and post comparative timings vs your app on your Android hardware, that could be useful.
|
2022-01-18 00:40:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2504282295703888, "perplexity": 10276.11790354448}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300658.84/warc/CC-MAIN-20220118002226-20220118032226-00085.warc.gz"}
|
https://ask.libreoffice.org/en/answers/72485/revisions/
|
Revision history [back]
My apologies for the tardy response, particularly after the prompt reply.
I thought I had asked a perfectly simple question, but the two replies indicate that I have not. Ratsinger has correctly surmised my issue when he says "once you advance to a new row in the sub-table control, that record is updated." In my case the movement to the new row in the sub-table is precipitated by a movement in the parent table, but that is not particularly relevant.
This default behaviour is what I wanted to suppress. I only wanted the update (or insertion) to come from an explicit button press.
My problem was one of English language semantics. Suppose a cursor is pointing to the second row in a result set. I interpreted "before record change" to mean "before a change in the contents of the current row", i.e. the updateRow or insertRow call. However, "record change" actually means movement of the cursor within the result set, say to the first or third row. What I needed to do was to write a macro to catch the "before record action" event. This turned out to be non-trivial since the macro is actually fired twice, once with the controller as the source and once with the form.
This is the code I wrote that sort of did the job, but USE IT AT YOUR OWN RISK:
function BeforeRecordChange(event) ' USE AT YOUR OWN RISK
With event.source
If .supportsservice("com.sun.star.form.runtime.FormController") Then
if .currentcontrol.model.classid = com.sun.star.form.FormComponentType.COMMANDBUTTON then
BeforeRecordChange = true ' Button update OK
else
BeforeRecordChange = false ' Updates initiated by row movement disallowed
end if
else
BeforeRecordChange = true ' Not controller so ignore
end if
End With
End Function
However, this seems to confuse the BASIC framework, as does trying to veto row movements with "before record change". It looks like BASIC does not expect you to use event handlers to veto these sorts of things although supressing mouse click and keyboard events seems to be OK.
My advice to any would-be developers who want users to explicitly confirm or cancel changes is to either use a dialogue box or write the whole application in Java. If you don't like either of those options, try simplifying the form to exclude subforms and minimising the extent to which the BASIC framework can get itself confused.
|
2019-04-23 12:17:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3520054817199707, "perplexity": 2614.7266219252356}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578602767.67/warc/CC-MAIN-20190423114901-20190423140901-00422.warc.gz"}
|
http://blogs.ams.org/visualinsight/2016/09/15/togliatti-quintic-surface/
|
# Togliatti Quintic
Togliatti Quintic – Abdelaziz Nait Merzouk
A quintic surface is one defined by a polynomial equation of degree 5. A nodal surface is one whose only singularities are ordinary double points: that is, points where it looks like the origin of the cone in 3-dimensional space defined by
$$x^2 + y^2 = z^2 .$$
A Togliatti surface is a quintic nodal surface with the largest possible number of ordinary double points, namely 31. In the above picture, Abdelaziz Nait Merzouk has drawn the real points of a Togliatti surface.
This surface is described by a homogeneous quintic equation in four variables, say $w,x,y,z$, which is then intersected with the hyperplane $w = 1$. Here is a version rotated around the $wz$ plane before intersecting with the hyperplane $w = 1$:
Rotated Togliatti Quintic – Abdelaziz Nait Merzouk
This version is sometimes called the ‘dervish’, due to its resemblance to a whirling dervish.
The first example of a Togliatti quintic was constructed in 1940:
• Eugenio G. Togliatti, Una notevole superficie di 5° ordine con soli punti doppi isolati, Vierteljschr. Naturforsch. Ges. Zürich 85 (1940), 127–132.
In 1980, Beauville proved that 31 is the maximum possible number of nodes for a surface of this degree, showing this example to be optimal:
• Arnaud Beauville, Sur le nombre maximum de points doubles d’une surface dans $\mathrm{P}^3$ (μ(5) = 31), Journées de Géometrie Algébrique d’Angers, Juillet 1979/Algebraic Geometry, Angers, 1979, Alphen aan den Rijn—Germantown, Md.: Sijthoff & Noordhoff, 1980, pp. 207–215.
Abdelaziz Nait Merzouk created these pictures of the Togliatti surface and made them available on Google+ under a Creative Commons Attribution-ShareAlike 3.0 Unported license.
Visual Insight is a place to share striking images that help explain advanced topics in mathematics. I’m always looking for truly beautiful images, so if you know about one, please drop a comment here and let me know!
|
2017-05-23 12:41:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4681800901889801, "perplexity": 2328.6888457123746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607636.66/warc/CC-MAIN-20170523122457-20170523142457-00237.warc.gz"}
|
https://scicomp.stackexchange.com/questions/8316/machine-epsilon-eps/8317
|
# Machine epsilon (eps)
The wiki for machine epsilon says:
"Machine epsilon gives an upper bound on the relative error due to rounding in floating point arithmetic"
If machine epsilon is the upper bound on the relative error, why does the spacing between floating point numbers actually get bigger for larger numbers? For example in MATLAB:
eps(1) = 2.220446049250313e-016 (machine epsilon)
eps(2) = 4.440892098500626e-016
eps(4) = 8.881784197001252e-016
eps(8) = 1.776356839400251e-015
...
eps(realmax) = 1.995840309534720e+292
So how is eps the upper bound on the relative error if it is less than all of those numbers?
The Matlab command help eps explains that eps(x) is the positive distance from abs(x) to the next larger floating-point number of the same precision as x.
In other words, if $\varepsilon_\mathsf{mach}$ is the relative error due to floating point, as defined in the Wikipedia article, then $\mbox{eps}(x)$ will, for any normal $x$, be $\hat{x}\varepsilon_\mathsf{mach}$, where $\hat{x}$ is the largest power of two such that $|\hat{x}|\leq|x|$.
Calling eps without an argument will give you the relative error, calling eps(x) with an argument will give you the absolute error for that argument.
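The same distinction can be illustrated outside of MATLAB. For instance, a small Python sketch using only the standard library (Python 3.9+ for math.ulp) shows that the absolute spacing between adjacent doubles grows with magnitude, while the relative spacing stays bounded by machine epsilon:

```python
import math
import sys

# Relative machine epsilon: the upper bound on relative rounding error.
print(sys.float_info.epsilon)   # 2.220446049250313e-16

# Absolute spacing to the next representable double grows with magnitude,
# but spacing(x)/x never exceeds machine epsilon for normal numbers.
for x in (1.0, 2.0, 4.0, 8.0, sys.float_info.max):
    print(x, math.ulp(x), math.ulp(x) / x)
```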
|
2021-02-28 03:51:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9742351174354553, "perplexity": 468.124625592935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360107.7/warc/CC-MAIN-20210228024418-20210228054418-00087.warc.gz"}
|
https://electronics.stackexchange.com/questions/448584/intel-fpga-applying-timing-constraints
|
# Intel FPGA: applying timing constraints
I have a data signal, select and clock signal which I am sending from the FPGA to another chip and I need to constrain them so I don't violate setup/hold time etc.
I have tried to write an SDC file, but looking at the signals on the oscilloscope it doesn't seem to work: both clock and data transitions happen at the same time. The setup time is a minimum of 1 ns and the hold time is 0.2 ns. I have assumed 0.5 ns clock jitter and 1 ns PCB propagation delay. The tx_clk is derived from a PLL (2 MHz). My SDC file looks like this:
create_clock -name clk_main -period 25.000 [get_ports {clk_main}]
derive_pll_clocks -create_base_clocks
derive_clock_uncertainty
set_output_delay -clock { u_nios_system|altpll_0|sd1|pll7|clk[1] } -max 2.5 [get_ports {tx_data[*] tx_sel}]
set_output_delay -clock { u_nios_system|altpll_0|sd1|pll7|clk[1] } -min -add_delay 0.3 [get_ports {tx_data[*] tx_sel}]
Am I misunderstanding how to calculate the appropriate delays or is there something wrong with the way i am applying the constraints?
Any help would be greatly appreciated.
For the minimum output delay, the receiving chip's hold time enters with a negative sign, i.e. the constraint takes the general form:
set_output_delay -clock <clock> -min -<hold time> <port>
|
2019-10-18 04:48:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44421446323394775, "perplexity": 6990.253191236136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677884.28/warc/CC-MAIN-20191018032611-20191018060111-00422.warc.gz"}
|
https://itectec.com/ubuntu/ubuntu-run-configuration-script-at-x-session-resume/
|
# Ubuntu – Run configuration script at X session resume
power-management, scripts, suspend, xinput
On Ubuntu 13.04 I have to manually configure the touchpad since a bug prevents me using the standard configuration tool (changes don't save). However I created a script that sets up velocity, acceleration and scrolling, configured it to run at sartup and it works. The problem rises when I resume after suspension: especially the scrolling settings (the easiest to check) disappear.
Following other questions and answers I wrote this script (which contains the same commands I used in the over-mentioned one) located in /etc/pm/sleep.d/ZZtouchpad:
#!/bin/sh
case "$1" in resume|thaw) xinput --set-prop "CyPS/2 Cypress Trackpad" "Device Accel Constant Deceleration" 2 xinput --set-prop "CyPS/2 Cypress Trackpad" "Device Accel Velocity Scaling" 35 xinput --set-prop "CyPS/2 Cypress Trackpad" "Synaptics Scrolling Distance" -20, -20 esac But it doesn't work at all. Thnks for help! EDIT I found out that the script works when suspending with pm-suspend or pm-suspend-hybrid, but when suspending from the system menu or closing the laptop lid it doesn't. It seems the error is 'unable to connect to X server'. So, the question better be rephrased: where should I put those commands for them to be executed when the X session is resumed? I tried ~/.xinitrc, a file under ~/.xinitrc.d and ~/.xsessionrc. Any suggestions? #### Best Answer • I had a similar problem. The issue is to connect to the X server. I solved it by stealing from /etc/acpi/sleep.sh. Put the following into /etc/pm/sleep.d/99_setup_touchpad. #! /bin/sh . /usr/share/acpi-support/power-funcs case "$1" in
resume|thaw)
if pidof xscreensaver > /dev/null; then
for x in /tmp/.X11-unix/*; do
displaynum=`echo $x | sed s#/tmp/.X11-unix/X##`   # X display number, e.g. 0
getXuser                                          # sets $user and $XAUTHORITY (from power-funcs)
if [ x"$XAUTHORITY" != x"" ]; then
    export DISPLAY=":$displaynum"
    su $user -c "xinput set-prop 'CyPS/2 Cypress Trackpad' 'Device Accel Constant Deceleration' 2"
    su $user -c "xinput set-prop 'CyPS/2 Cypress Trackpad' 'Device Accel Velocity Scaling' 35"
    su $user -c "xinput set-prop 'CyPS/2 Cypress Trackpad' 'Synaptics Scrolling Distance' -20, -20"
fi
done
fi
;;
*)
# Nothing.
;;
esac
Finally make the file executable: chmod 755 /etc/pm/sleep.d/99_setup_touchpad.
Note: I'm usually the only one logged in via X on my laptop, so the loop runs just one iteration. I don't know what happens if more than one session is live at the same time. The above is good enough for me.
|
2021-06-19 01:08:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17560145258903503, "perplexity": 10725.861768440152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00516.warc.gz"}
|