https://brilliant.org/problems/nth-brackets/

# Nth Brackets
Algebra Level 2
Find the sum of the terms in the $n^\text{th}$ pair of brackets:
(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12), ...
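One way to see the pattern: the $n^\text{th}$ bracket holds the four consecutive integers $4n-3$ through $4n$, so its sum is $16n-6$. A quick brute-force check (our sketch, not part of the original problem page):

```python
def bracket_sum(n):
    """Sum of the terms in the nth pair of brackets: 4n-3, 4n-2, 4n-1, 4n."""
    return sum(range(4 * n - 3, 4 * n + 1))

# the closed form 16n - 6 matches the direct sum for many n
assert all(bracket_sum(n) == 16 * n - 6 for n in range(1, 100))
print(bracket_sum(1), bracket_sum(2), bracket_sum(3))  # 10 26 42
```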
https://math.stackexchange.com/questions/2192097/find-the-compact-subsets-of-the-cofinite-topology-on-the-real-space-r

# Find the compact subsets of the cofinite topology on the real space R.
Q: What are the compact subsets of $(\mathbb{R}, T_{fin})$, where $T_{fin}$ is the cofinite topology?
I'm sorry if this question is too basic or easy. I just feel that I don't have a well enough understanding of compactness when it concerns subsets of topologies. I've tried to work with the definition of the cofinite topology but I can't figure it out!
I think that every set is compact.
Let $S \subseteq \mathbb{R}$, and let $\{U_\lambda\}_{\lambda \in \Lambda}$ be an open cover of $S$, i.e., $S \subseteq \bigcup_{\lambda \in \Lambda} U_\lambda$ with $U_\lambda \in T_{fin}$ for every $\lambda \in \Lambda$.
There exists $\lambda_0 \in \Lambda$ such that $|\mathbb{R} \setminus U_{\lambda_0}| < \infty$. Then we can write $(\mathbb{R} \setminus U_{\lambda_0}) \cap S = \{y_k\}_{k=1}^n$.
Since $\{U_\lambda\}_{\lambda \in \Lambda}$ is a cover, there exist $\{\lambda_k\}_{k=1}^n \subset \Lambda$ such that $y_k \in U_{\lambda_k}$ for every $k = 1, \dots, n$.
Then $S \subseteq \bigcup_{k=0}^n U_{\lambda_k}$, and $\{U_{\lambda_k}\}_{k=0}^n$ is a finite open subcover of $S$.
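The argument can be mirrored in a small executable sketch (a toy finite model of ours, not part of the original answer): each cofinite open set is stored by its finite complement, and the finite subcover is built exactly as in the proof, with one extra $U_{\lambda_k}$ per leftover point of $S \setminus U_{\lambda_0}$.

```python
def finite_subcover(S, cover, universe):
    """cover maps a name to the finite complement of an open set,
    i.e. U_name = universe - cover[name] (the cofinite picture)."""
    opens = {name: universe - comp for name, comp in cover.items()}
    lam0 = next(iter(cover))                  # any U_{lambda_0}
    chosen = {lam0}
    for y in S - opens[lam0]:                 # finitely many uncovered points
        chosen.add(next(n for n, U in opens.items() if y in U))
    assert S <= set().union(*(opens[n] for n in chosen))  # it really covers S
    return chosen

universe = set(range(20))                     # toy stand-in for R
cover = {"A": {1, 2, 3}, "B": {4, 5}, "C": {1, 4}}  # finite complements
print(sorted(finite_subcover(universe, cover, universe)))  # ['A', 'B']
```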
• You are correct. They're all compact. – DanielWainfleet Mar 19 '17 at 3:54
First we need to understand what this topology describes: an open set for what you call $T_{fin}$ will either be a set $A \subseteq \mathbb{R}$ such that $\mathbb{R} \setminus A$ is finite (note that $\mathbb{R}$ itself is such a set), or the empty set $\emptyset$.
Now take any subset $B \subseteq \mathbb{R}$. Let $\left\{U_{\alpha}|\alpha \in I\right\}$ be a collection of open sets (for the $T_{fin}$ topology) that cover $B$ i.e. : $$B \subseteq \bigcup_{\alpha \in I}U_{\alpha}$$
You can verify that only finitely many of the $U_{\alpha}$ are actually needed for such a cover (remember what the open sets for $T_{fin}$ look like), and that this holds for any open cover of $B$. The characterization of compactness (every open cover admits a finite subcover) then allows you to conclude that $B$ is a compact subset of $\mathbb{R}$ for $T_{fin}$.
• Your wording could easily be misunderstood as saying that $B$ is compact because there is some finite open cover of $B$. The point is that $B$ is compact because every open cover of $B$ can be refined to a finite subcover. – Alex Kruckman Mar 18 '17 at 21:28
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-2-section-2-7-derivatives-and-rates-of-change-2-7-exercises-page-149/29

## Calculus: Early Transcendentals 8th Edition
Published by Cengage Learning
# Chapter 2 - Section 2.7 - Derivatives and Rates of Change - 2.7 Exercises: 29
#### Answer
a) $f'(2) = -\frac{3}{5}$
b) The red curve is the graph of $f(x) = \frac{5x}{1+x^2}$; the blue line is the tangent $y = -\frac{3}{5}x + \frac{16}{5}$.
#### Work Step by Step
$f'(a) = \lim\limits_{x \to a} \frac{f(x)-f(a)}{x-a}$

$f'(a) = \lim\limits_{x \to a} \frac{\frac{5x}{1+x^2}-\frac{5a}{1+a^2}}{x-a} \times \frac{(1+x^2)(1+a^2)}{(1+x^2)(1+a^2)}$

$f'(a) = \lim\limits_{x \to a} \frac{5x(1+a^2)-5a(1+x^2)}{(x-a)(1+x^2)(1+a^2)}$

$f'(a) = \lim\limits_{x \to a} \frac{5x+5xa^2-5a-5ax^2}{(x-a)(1+x^2)(1+a^2)}$

Factor the numerator:

$f'(a) = \lim\limits_{x \to a} \frac{5[(xa^2-a)-(x^2a-x)]}{(x-a)(1+x^2)(1+a^2)}$

$f'(a) = \lim\limits_{x \to a} \frac{5[a(xa-1)-x(xa-1)]}{(x-a)(1+x^2)(1+a^2)}$

$f'(a) = \lim\limits_{x \to a} \frac{5(a-x)(xa-1)}{(x-a)(1+x^2)(1+a^2)}$

Since $(a-x) = -(x-a)$:

$f'(a) = \lim\limits_{x \to a} \frac{-5(x-a)(xa-1)}{(x-a)(1+x^2)(1+a^2)}$

$f'(a) = \lim\limits_{x \to a} \frac{-5(xa-1)}{(1+x^2)(1+a^2)}$

Now let $x \to a$:

$f'(a) = \frac{-5(a^2-1)}{(1+a^2)^2}$

$f'(2) = \frac{-5(2^2-1)}{(1+2^2)^2} = \frac{-5(3)}{5^2} = \frac{-15}{25} = -\frac{3}{5}$

Tangent line at $(2, f(2)) = (2, 2)$:

$y - y_{1} = m(x-x_{1})$
$y - 2 = -\frac{3}{5}(x-2)$
$y = -\frac{3}{5}x + \frac{6}{5} + 2$
$y = -\frac{3}{5}x + \frac{16}{5}$
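As a quick sanity check (ours, not part of the original solution), a symmetric difference quotient reproduces the slope, and the tangent line touches the curve at $x=2$:

```python
def f(x):
    return 5 * x / (1 + x ** 2)

h = 1e-6
slope = (f(2 + h) - f(2 - h)) / (2 * h)   # numerical stand-in for the limit
assert abs(slope - (-3 / 5)) < 1e-8       # matches f'(2) = -3/5

def tangent(x):
    return -3 / 5 * x + 16 / 5            # y = -(3/5)x + 16/5

assert abs(tangent(2) - f(2)) < 1e-12     # the line passes through (2, f(2))
print(round(slope, 6))                    # -0.6
```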
https://www.nature.com/articles/s41467-019-10693-0
# Multiferroicity in atomic van der Waals heterostructures
## Abstract
Materials that are simultaneously ferromagnetic and ferroelectric – multiferroics – promise the control of disparate ferroic orders, leading to technological advances in microwave magnetoelectric applications and the next generation of spintronics. Single-phase multiferroics are challenged by the opposite d-orbital occupations imposed by the two ferroics, and heterogeneous nanocomposite multiferroics demand structural compatibility between their ingredients, with the resultant multiferroicity confined exclusively to inter-material boundaries. Here we propose two-dimensional heterostructure multiferroics formed by stacking up atomic layers of ferromagnetic Cr2Ge2Te6 and ferroelectric In2Se3, thereby leading to all-atomic multiferroicity. Through first-principles density functional theory calculations, we find that as In2Se3 reverses its polarization, the magnetism of Cr2Ge2Te6 is switched, and correspondingly In2Se3 becomes a switchable magnetic semiconductor due to the proximity effect. This unprecedented multiferroic duality (i.e., switchable ferromagnet and switchable magnetic semiconductor) enables both layers for logic applications. Van der Waals heterostructure multiferroics open the door for exploring low-dimensional magnetoelectric physics and spintronic applications based on artificial superlattices.
## Introduction
Multiferroics, a class of functional materials that simultaneously possess more than one ferroic order, such as ferromagnetism and ferroelectricity, hold great promise in magnetoelectric applications due to the inherent coupling between ferroic orders1,2,3,4,5,6, leading to technological advances in the next generation of spintronics and microwave magnetoelectric devices. However, single-phase multiferroics are challenged by the different ferroics’ contradictory preferences for the d-orbital occupation of metal ions: ferroelectricity arising from off-center cations requires empty d-orbitals, whereas ferromagnetism usually results from partially filled d-orbitals7. Conventional perovskite multiferroics (chemical formula: ABO3) have lone-pair-active A-sites, which move off-center in centrosymmetric crystals to produce electric polarization, and B-sites with unpaired electrons for magnetic order. Because the ferroelectric and magnetic orders in these materials are associated with different ions, the coupling between the ferroic orders is usually weak.
Heterogeneous multiferroics, synthesized composites of two mixed phases8, have the coupling between ferroelectric and magnetic order exclusively at inter-material boundaries, with magnetoelectric effects occasionally established via the interfacial magnetoelastic effect. As an example, magnetic nanopillars can be embedded in a ferroelectric matrix. However, these heterogeneous multiferroics place stringent demands on the constituent materials' structural similarity, lattice match and chemical compatibility, and have weak magnetoelectric effects limited by the interface/bulk ratio.
Van der Waals (vdW) crystals have emerged as ideal material systems with unprecedented freedom for heterostructure construction9. Recent experimental advances discovered ferromagnetism10,11,12 and ferroelectricity13 in different two-dimensional vdW crystals separately. It remains a paramount challenge to realize multiple ferroic orders simultaneously in a single-phase 2D material14,15,16,17, as each order encounters its own challenge (e.g., ferromagnetism in 2D systems suffers from enhanced thermal fluctuations, whereas ferroelectricity suffers from the depolarization field). Constructing heterostructures of 2D magnets and 2D ferroelectrics potentially provides a generally applicable route to create 2D multiferroics. However, the fundamental question remains whether interlayer magnetoelectric coupling can be established, given the presence of the interlayer vdW spacing. If realized, layered heterostructure multiferroics would provide completely new platforms with all atoms participating in the inter-ferroic coupling, and largely reshape the landscape of multiferroics based on vdW superlattices.
Through first-principles density functional theory (DFT) calculations on a bilayer heterostructure of ferromagnetic Cr2Ge2Te6 and ferroelectric In2Se3 monolayers18,19,20,21, we discovered a strong interlayer magnetoelectric effect: reversing the electric polarization of In2Se3 switches the magnetocrystalline anisotropy of Cr2Ge2Te6 between out-of-plane and in-plane orientations. For a 2D ferromagnet, such a change in magnetic anisotropy corresponds to switching the ferromagnetic order on/off at finite temperatures, for easy-axis anisotropy opens a spin-wave excitation gap and thus suppresses the thermal fluctuations, but easy-plane anisotropy does not10,22,23. The switching of ferromagnetic order by electric polarization promises a novel design of magnetic memory. Detailed analysis unraveled interfacial hybridization as the cause of the interlayer magnetoelectric coupling. Furthermore, In2Se3 becomes magnetized due to the proximity to Cr2Ge2Te6, making In2Se3 a single-phase multiferroic (i.e., ferromagnetic and ferroelectric orders coexist in In2Se3), although the magnetization of In2Se3 evidently requires the presence of the adjacent Cr2Ge2Te6. Such multiferroicity duality—that is, the interlayer multiferroicity and the In2Se3 intralayer multiferroicity—provides a unique solid-state system in which ferroelectric and ferromagnetic orders interplay inherently. This unusual multiferroicity duality in vdW heterostructures may open avenues for developing new concepts of magnetoelectric devices: using a single knob (the orientation of the electric polarization in In2Se3) to control the magnetic order in both In2Se3 and Cr2Ge2Te6. We envision that the multiferroicity duality potentially enriches the freedom of layer-resolved data storage and information processing, due to the diverse magnetoelectric and magneto-optic properties of the constituent layers.
## Results
### Material model and computational details
In this work, the lattice constant of Cr2Ge2Te6 adopted the experimental value 6.83 Å and was fixed in the heterostructures to minimize artifact effects, considering that the magnetic properties of 2D Cr2Ge2Te6 are sensitive to structure parameters. It has been reported that a monolayer In2Se3 of either zincblende or wurtzite stacking is unstable, with a tendency toward lateral displacement of the top Se layer, leading to the energetically degenerate ferroelectric monolayer18. Although the one relaxed from the zincblende stacking is chosen in this study, the results are also applicable to the one derived from the wurtzite stacking, because the main mechanism to be shown is determined by the interfacial monolayers and is thus independent of the detailed stacking type of the multilayer In2Se3. The optimized lattice constant of 1 × 1-In2Se3 (4.106 Å) is strained by −4.0% to match In2Se3-$$\sqrt 3 \, \times \,\sqrt 3$$ to Cr2Ge2Te6-1 × 1, as shown by Fig. 1a. In the heterostructure, the relative spacing and registry between Cr2Ge2Te6 and In2Se3 are adjusted to find the energy-minimum configuration. The reversal of the electric polarization of the isolated In2Se3 monolayer can be achieved via lateral displacement of the middlemost Se layer, with an energy barrier as small as 0.04 eV per formula unit estimated by a nudged-elastic-band calculation24. In the heterostructure, due to the large vdW spacing, the presence of Cr2Ge2Te6 does not noticeably affect the energy barrier of the electric-polarization reversal process of In2Se3. The total energy of the heterostructure is lowest (highest) where the interfacial Te atoms sit at the hollow (top) site of In2Se3, with relative energy differences of 0.31 and 0.35 eV/u.c. for upward (Fig. 1b) and downward (Fig. 1c) polarizations, respectively. The equilibrium interlayer distance between Cr2Ge2Te6 and In2Se3 in the hollow configuration is 3.20 and 3.14 Å for up and down polarizations of In2Se3, respectively.
The total energy of the down polarization (Fig. 1c) is lower by 0.07 eV/u.c. than that of the up polarization (Fig. 1b), due to the stronger interfacial coupling between downpolarized In2Se3 and Cr2Ge2Te6.
In order to reproduce the experimental magnetic properties of bulk Cr2Ge2Te6, we used a small onsite Hubbard U value of 0.5 eV and a Hund’s coupling J value of 0.0 eV for the Cr d orbital in the DFT calculations (see ref. 10 for the choice of U = 0.5 eV, J = 0.0 eV). This small onsite Coulombic interaction is consistent with the fact that Cr2Ge2Te6 is a small-band-gap material with less localization than Cr oxides. The ferromagnetic ground state is confirmed with a Cr spin magnetic moment of ~3.0 μB. With the spin–orbit coupling (SOC) included, the magnetocrystalline anisotropy energy (MAE) is calculated and defined as $$E_{[100]} - E_{[001]}$$, where the former and latter correspond to the total energy with the Cr spins directed in-plane and out-of-plane, respectively. Due to the threefold rotational symmetry of Cr2Ge2Te6, there is little magnetic anisotropy within the basal plane. We checked the convergence of the MAE carefully; a K-mesh of 12 × 12 × 1 was enough to ensure an error <10 μeV/Cr. For the isolated monolayer Cr2Ge2Te6, our calculated MAE is −70 μeV/Cr, favoring the in-plane direction. In the heterostructures with up- and downpolarized In2Se3, the calculated Cr MAE is −95 and 75 μeV/Cr, respectively, whose energetically favorable spin orientations are indicated by SCr in Fig. 1b, c. By modulating the polarization of the adjacent ferroelectric layer, switching of the magnetization orientation is realized. This has significant application implications: for a 2D magnetic system with easy-plane anisotropy (X–Y model), the finite-temperature ferromagnetic order is inhibited, whereas for easy-axis anisotropy (Ising model), the magnetic order can be sustained at finite temperatures. Therefore, in such a heterostructure, the multiferroic effect offers a potential route to switch the ferromagnetism for logic devices.
### Mechanism for interfacial multiferroicity
The mechanism for the electric-polarization-dependent MAE is discussed in detail. The calculated Cr orbital moment is small (<Lx> = 0.04 μB, <Lz> = 0.01 μB for in-plane and out-of-plane spin directions), which makes it an unlikely origin of the MAE. The plausible mechanism is related to the detailed features of the spin-resolved orbital-decomposed band structure25. Starting from the collinear spin band structures, we analyzed the energy correction by perturbation theory in λL·S, where λ is the radial part of the Cr SOC. The orbital moment quenched by the crystal field results in a vanishing first-order correction. Assuming a negligible change of the electron correlation energy between the [100] and [001] spin directions, one can write the second-order contribution to the MAE as follows25:
$$\mathrm{MAE} = \lambda^2 \sum_{v,c,\sigma} \frac{\left|\langle v,\sigma|L_z|c,\sigma\rangle\right|^2 - \left|\langle v,\sigma|L_x|c,\sigma\rangle\right|^2}{\epsilon_{c,\sigma} - \epsilon_{v,\sigma}} + \lambda^2 \sum_{v,c,\sigma \neq \sigma'} \frac{\left|\langle v,\sigma|L_x|c,\sigma'\rangle\right|^2 - \left|\langle v,\sigma|L_z|c,\sigma'\rangle\right|^2}{\epsilon_{c,\sigma'} - \epsilon_{v,\sigma}}, \tag{1}$$
Here the first and second summations correspond to the spin-conserving (|Δsz| = 0) and spin-flipping (|Δsz| = 1) transitions, and $$|v,\sigma\rangle$$ and $$|c,\sigma\rangle$$ are valence and conduction band states with spin σ, respectively, whose energy eigenvalues are $${\epsilon}_{v,\sigma}$$ and $${\epsilon}_{c,\sigma}$$. The angular momentum matrix elements of Lz and Lx correspond to transitions with |Δmz| = 0 and |Δmz| = 1, respectively, for the Cr d-orbitals. Therefore, for spin-conserving transitions, SOC matrix elements between occupied and unoccupied states with the same (different) magnetic quantum number through the $$L_z$$ ($$L_x$$) operator contribute positive (negative) MAE. For spin-flipping transitions, the contribution to the MAE is reversed26.
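To make the bookkeeping in Eq. (1) concrete, here is a schematic numerical evaluation with invented matrix elements and gaps (all values hypothetical, chosen only to illustrate how the competing channels set the sign of the MAE, not taken from the paper's calculations):

```python
lam = 0.03  # eV, toy SOC strength (hypothetical)

# (|<v|Lz|c>|^2, |<v|Lx|c>|^2, e_c - e_v in eV, spin_flip?)
transitions = [
    (0.0, 1.0, 0.5, False),   # |dmz|=1 spin-conserving -> negative MAE
    (1.0, 0.0, 1.2, False),   # |dmz|=0 spin-conserving -> positive MAE
    (0.0, 1.0, 0.8, True),    # |dmz|=1 spin-flipping   -> positive MAE
]

mae = 0.0
for lz2, lx2, dE, flip in transitions:
    term = lam ** 2 * (lz2 - lx2) / dE
    mae += -term if flip else term   # spin-flip channel reverses the sign

print(f"toy MAE = {mae * 1e6:.1f} micro-eV")  # > 0 means easy-axis [001]
```

With these toy numbers the spin-flip channel outweighs the negative spin-conserving one, giving a net positive MAE, which is the same competition the text invokes when the In2Se3 polarization is reversed.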
Figure 2a, b shows the spin-resolved orbital-decomposed band structure of the heterostructures for up- and downpolarized In2Se3, respectively, where the contribution of Cr d-orbitals is indicated by the circles for |mz| = 0 (z2), 1 (xz and yz), or 2 (x2−y2 and xy). As shown by the arrows in Fig. 2a for the heterostructure with up-polarized In2Se3, our calculated negative MAE mainly originates from the spin-conserving transition from |mz| = 1 to |mz| = 0 or 2, i.e., |Δsz| = 0 and |Δmz| = 1, related to the second term of the first sum in Eq. (1). The mechanism is further confirmed by our results that the Cr MAE changes from about −100 to 200 μeV by intentionally increasing the U value from 0.5 to 2.0 eV: the increased U lowers the energy level of the majority spin in valence bands of |mz| = 0 or 2 (Supplementary Fig. 3), and thus the transition energy gap of |Δsz| = 0 and |Δmz| = 1 is increased and the associated contribution is weakened, leading to the positive MAE.
However, for the heterostructure with downpolarized In2Se3, as shown in Fig. 2b, the conduction band minimum of Cr d |mz| = 1 shows a large gap (~0.2 eV) near 0.5 eV above Fermi level, which is caused by hybridization with the In2Se3 conduction band minimum. Such hybridization results in a significant depletion of |mz| = 1 majority spin DOS (Supplementary Fig. 2a). Hence, the negative contribution to MAE found for the case of up-polarized In2Se3 is suppressed. Meanwhile the minority spin DOS remains almost unchanged for |mz| = 1 (Supplementary Fig. 2a), leading to positive MAE via |Δsz| = 1 and |Δmz| = 1, as illustrated by the arrows in Fig. 2b. Considering the interfacial hybridization depends on the band alignment of In2Se3 and Cr2Ge2Te6, we employed the Heyd-Scuseria-Ernzerhof exchange-correlation functional (HSE06) to recalculate the band properties of the heterostructures. As expected, the calculated band gaps widen compared with the GGA-PBE results, but the key features while In2Se3 reverses its electric orientation from up to down keep the same: the conduction band of In2Se3 moves down to hybridize with the conduction band of Cr2Ge2Te6, as clearly seen in Supplementary Fig. 4.
The sign change of Cr2Ge2Te6’s MAE upon the electric-polarization reversal of In2Se3 from up to down arises from the increased coupling, which causes an overall downward shift of the In2Se3 bands and their enhanced hybridization with Cr2Ge2Te6. This suggests that the positive MAE for the down polarization would be enhanced by a reduced vdW spacing. To confirm this scenario, we performed interlayer-spacing-dependent MAE calculations for the hollow configuration, as shown in Fig. 3. As the interlayer distance decreases, the MAE increases gradually with a slight fluctuation. The fluctuation originates from the detailed variation of energy levels in the spin-polarized band structures. The same calculation conducted for the top configuration exhibits a stronger fluctuation with the interlayer spacing, due to the larger degree of interfacial orbital overlap in the top configuration. The same trend of the two curves in Fig. 3 confirms that increased interlayer hybridization tends to switch the magnetocrystalline anisotropy from easy-plane to easy-axis. Detailed spin-resolved orbital-decomposed analysis in Supplementary Fig. 5 shows that the decreased spin-flipping energy gap near K with decreased interlayer distance contributes to the positive MAE.
### Magnetized In2Se3 in proximity to Cr2Ge2Te6
Remarkably, the exchange splitting in the Cr d-band magnetizes In2Se3 by the proximity effect. As shown in Fig. 2, the highest valence band has a significant exchange splitting, where the majority-spin band is closer to the Fermi level near Γ. Those states are mainly derived from the surface Te atoms in Cr2Ge2Te6, which means the interfacial Te atoms have electron spins antiparallel to those of the Cr d electrons. Our calculated spin moment per Te atom is −0.11 μB for either electric polarization of In2Se3. Also, the surface In and Se atoms have non-zero spin moments parallel to the Te spins, induced by the proximity. The spin-resolved DOS of the interfacial In and Se, shown in Fig. 4a, confirms the magnetized In2Se3. It is practically important to note that the calculated magnetization of In2Se3 here is a ground-state property. At finite temperatures, easy-plane magnetization of 2D In2Se3 is susceptible to thermal fluctuations and long-range order does not exist, but easy-axis magnetization of 2D In2Se3 could sustain the spin polarization at certain finite temperatures. Hence, a switching of the 2D magnetic ferroelectric In2Se3 could be realized in this heterostructure multiferroic, leading to a design of spin field-effect transistor27,28.
The induced spin moment of the surface Se is attributed to the exchange coupling J ~ t2/U between the Te p and Se p orbitals, with t the hopping constant and U the intra-orbital Coulomb repulsion. In the limit of zero t or infinite U, the system favors the triplet state, similar to the atomic Hund coupling, which is the case for Te p and Se p. For a given value of U, t varies exponentially with the distance. Consistently, our calculations show an exponentially increasing Se spin moment with decreasing interfacial distance, as shown in Fig. 4b. The correlation effect should depend on the specific nonlocal correlation functional. For different vdW functionals, the induced spin moments remain at nearly the same magnitude (Supplementary Fig. 6).
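The distance dependence quoted above can be sketched with a toy model (all parameters hypothetical, not from the paper's calculations): with a hopping $t(d) = t_0 e^{-d/d_0}$, the coupling $J \sim t^2/U$ grows exponentially as the vdW gap closes.

```python
import math

t0, d0, U = 1.0, 0.5, 4.0            # eV, Angstrom, eV -- illustrative only

def exchange(d):
    t = t0 * math.exp(-d / d0)       # hopping decays exponentially with distance
    return t ** 2 / U                # J ~ t^2 / U

for d in (3.4, 3.2, 3.0):
    print(f"d = {d:.1f} A  ->  J = {exchange(d):.2e} eV")

# shrinking d by 0.2 A multiplies J by exp(2 * 0.2 / d0):
assert math.isclose(exchange(3.0) / exchange(3.2), math.exp(2 * 0.2 / 0.5))
```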
### Discussion on practical experimental factors
For experimental guidance, it is worth remarking on the possible effects of real material environments. Calculation and analysis in this work are based on a bilayer heterostructure floating in vacuum. In an experimental realization, the initial anisotropy of the magnetic layer Cr2Ge2Te6 could be affected by a few factors, including contact with materials of large dielectric constants22,29 or large SOC strengths30, unintentional doping31,32,33 caused by chemicals in the device fabrication process, and small amounts of artificial strain induced during heterostructure preparation. These factors may affect the resultant magnetoelectric effect quantitatively, as reflected in the calculated MAE of the heterostructures based on various sets of U and J values (see Supplementary Fig. 7): the increased U value enhances the out-of-plane anisotropy, while the increased J value enhances the in-plane anisotropy; for any tested set of U and J values, the out-of-plane anisotropy is always enhanced by ~0.15 meV/Cr when the In2Se3 dipole is inverted from up to down. Therefore, even if our adopted values of U and J (U = 0.5 eV, J = 0.0 eV) slightly deviate from an exact description of real heterostructure samples because of the aforementioned complex experimental conditions, the reversal of the In2Se3 polarization from up to down always strengthens the 2D ferromagnetic order in Cr2Ge2Te6. This leads to a general implication: in practice, one can always set a temperature such that 2D ferromagnetism is found in Pdn-In2Se3-Cr2Ge2Te6 but disappears from Pup-In2Se3-Cr2Ge2Te6, enabling practical switching experiments at finite temperatures. Therefore, the magnetoelectric effect presented here, based on the modification of the MAE by the intricate interface hybridization, which in turn relates to the electric polarization of the 2D ferroelectric, is an intrinsic interfacial phenomenon.
### Summary
We employed first-principles DFT calculations on a vdW heterostructure consisting of ferromagnetic Cr2Ge2Te6 and ferroelectric In2Se3 monolayers. By reversing the electric polarization of In2Se3, the calculated magnetocrystalline anisotropy of Cr2Ge2Te6 changes between easy-axis and easy-plane (i.e., switching the ferromagnetic order on/off), which promises a novel design of magnetic memory. Furthermore, In2Se3 becomes a magnetic ferroelectric, with spin polarization switchable by its own electric polarization. The 2D multiferroic heterostructures would tremendously enlarge the landscape of multiferroics through the artificial assembly of 2D layers and provide new material platforms for a plethora of emergent interfacial phenomena.
## Methods
### The DFT method and parameters
All the calculations were performed by the DFT method implemented in Vienna ab initio Simulation Package (VASP)34, with the Perdew-Burke-Ernzerhof (PBE) functional35 in the scheme of generalized gradient approximation (GGA). The main data was calculated by GGA + U based on the Liechtenstein approach with U = 0.5 eV and J = 0.0 eV. The van der Waals interatomic forces are described by the D2 Grimme method36. The K-mesh of 6 × 6 × 1 and the energy cutoff of 300 eV are used for the structural optimization. The dipole correction is included to exclude spurious dipole–dipole interaction between periodic images.
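For orientation, the stated settings map onto a VASP INCAR roughly as follows (a hedged sketch of ours, not the authors' actual input file; the LDAUL/LDAUU/LDAUJ entries must follow the species order of the POSCAR, elided here):

```
ENCUT    = 300       ! plane-wave cutoff (eV)
GGA      = PE        ! PBE exchange-correlation
LDAU     = .TRUE.
LDAUTYPE = 1         ! Liechtenstein (rotationally invariant) GGA+U
LDAUL    = 2 ...     ! U applied to Cr d orbitals; one entry per species
LDAUU    = 0.5 ...   ! U = 0.5 eV on Cr, 0 elsewhere
LDAUJ    = 0.0 ...   ! J = 0.0 eV
IVDW     = 10        ! Grimme DFT-D2 dispersion correction
LDIPOL   = .TRUE.    ! dipole correction for the slab geometry
IDIPOL   = 3         ! along the out-of-plane (c) axis
```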
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## References
1. Wang, J. et al. Epitaxial BiFeO3 multiferroic thin film heterostructures. Science 299, 1719 (2003).
2. Kimura, T. et al. Magnetic control of ferroelectric polarization. Nature 426, 55 (2003).
3. Ramesh, R. & Spaldin, N. A. Multiferroics: progress and prospects in thin films. Nat. Mater. 6, 21 (2007).
4. Fiebig, M. et al. Revival of the magnetoelectric effect. J. Phys. D. 38, R123 (2005).
5. Fiebig, M. et al. The evolution of multiferroics. Nat. Rev. Mater. 1, 16046 (2016).
6. Cheong, S.-W. & Mostovoy, M. Multiferroics: a magnetic twist for ferroelectricity. Nat. Mater. 6, 13 (2007).
7. Hill, N. A. Why are there so few magnetic ferroelectrics? J. Phys. Chem. B 104, 6694 (2000).
8. Zheng, H. et al. Multiferroic BaTiO3-CoFe2O4 nanostructures. Science 303, 661 (2004).
9. Geim, A. K. & Grigorieva, I. V. Van der Waals heterostructures. Nature 499, 419 (2013).
10. Gong, C. et al. Discovery of intrinsic ferromagnetism in two-dimensional van der Waals crystals. Nature 546, 265–269 (2017).
11. Huang, B. et al. Layer-dependent ferromagnetism in a van der Waals crystal down to the monolayer limit. Nature 546, 270–273 (2017).
12. Gong, C. & Zhang, X. Two-dimensional magnetic crystals and emergent heterostructure devices. Science 363, eaav4450 (2019).
13. Liu, F. et al. Room-temperature ferroelectricity in CuInP2S6 ultrathin flakes. Nat. Commun. 7, 12357 (2016).
14. Seixas, L., Rodin, A. S., Carvalho, A. & Castro Neto, A. H. Multiferroic two-dimensional materials. Phys. Rev. Lett. 116, 206803 (2016).
15. Wang, H. & Qian, X. Two-dimensional multiferroics in monolayer group IV monochalcogenides. 2D Mater. 4, 015042 (2017).
16. Luo, W., Xu, K. & Xiang, H. Two-dimensional hyperferroelectric metals: a different route to ferromagnetic-ferroelectric multiferroics. Phys. Rev. B 96, 235415 (2017).
17. Qi, J., Wang, H., Chen, X. & Qian, X. Two-dimensional multiferroic semiconductors with coexisting ferroelectricity and ferromagnetism. Appl. Phys. Lett. 113, 043102 (2018).
18. Ding, W. et al. Prediction of intrinsic two-dimensional ferroelectrics in In2Se3 and other III2-VI3 van der Waals materials. Nat. Commun. 8, 14956 (2017).
19. Zhou, Y. et al. Out-of-plane piezoelectricity and ferroelectricity in α-layered In2Se3 nanoflakes. Nano Lett. 17, 5508–5513 (2017).
20. Cui, C. et al. Intercorrelated in-plane and out-of-plane ferroelectricity in ultrathin two-dimensional layered semiconductor In2Se3. Nano Lett. 18, 1253–1258 (2018).
21. Xiao, J. et al. Intrinsic two-dimensional ferroelectricity with dipole locking. Phys. Rev. Lett. 120, 227601 (2018).
22. Mermin, N. D. & Wagner, H. Absence of ferromagnetism or antiferromagnetism in one- or two-dimensional isotropic Heisenberg models. Phys. Rev. Lett. 17, 1133 (1966).
23. Bruno, P. Magnetization and Curie temperature of ferromagnetic ultrathin films: the influence of magnetic anisotropy and dipolar interactions. Mater. Res. Symp. Proc. 231, 299–310 (1991).
24. Henkelman, G. A climbing image nudged elastic band method for finding saddle points and minimum energy paths. J. Chem. Phys. 113, 9901–9904 (2000).
25. Wang, D. et al. First-principles theory of surface magnetocrystalline anisotropy and the diatomic-pair model. Phys. Rev. B 47, 14932 (1993).
26. Sui, X. et al. Voltage-controllable colossal magnetocrystalline anisotropy in single-layer transition metal dichalcogenides. Phys. Rev. B 96, 041410 (2017).
27. Datta, S. & Das, B. Electronic analog of the electro-optic modulator. Appl. Phys. Lett. 56, 665–667 (1990).
28. Gong, S.-J. et al. Electrically induced 2D half-metallic antiferromagnets and spin field effect transistors. Proc. Natl Acad. Sci. USA 115, 8511–8516 (2018).
29. Jiang, S., Shan, J. & Mak, K. F. Electric-field switching of two-dimensional van der Waals magnets. Nat. Mater. 17, 406–410 (2018).
30. Avsar, A. et al. Spin-orbit proximity effect in graphene. Nat. Commun. 5, 4875 (2014).
31. Huang, B. et al. Electrical control of 2D magnetism in bilayer CrI3. Nat. Nanotechnol. 13, 544–548 (2018).
32. Jiang, S., Li, L., Wang, Z., Mak, K. F. & Shan, J. Controlling magnetism in 2D CrI3 by electrostatic doping. Nat. Nanotechnol. 13, 549–553 (2018).
33. Wang, Z. et al. Electric-field control of magnetism in a few-layered van der Waals ferromagnetic semiconductor. Nat. Nanotechnol. 13, 554–559 (2018).
34. Kresse, G. & Furthmuller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6, 15–50 (1996).
35. Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865 (1996).
36. Grimme, S. Semiempirical GGA-type density functional constructed with a long-range dispersion correction. J. Comp. Chem. 27, 1787 (2006).
## Acknowledgements
C.G., Y.W., and X.Z. acknowledge the support from the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division under contract no. DE-AC02-05-CH11231 within the van der Waals Heterostructures program (KCWF16) for the conceptual development and preliminary calculations of 2D heterostructure multiferroics. The support from the National Science Foundation (NSF) under Grant 1753380 for the calculation and analysis of 2D magnets and the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award OSR-2016-CRG5-2996 for the calculation and analysis of 2D ferroelectrics was also acknowledged. G.L. acknowledges the support by the National Research Foundation of Korea (Basic Science Research Program: 2018R1D1A1B07045983) for the systematic computational studies of 2D heterostructure multiferroics. Computation was supported by KISTI (KSC-2018-CRE-0048).
## Author information
### Contributions
C.G. and X.Z. conceived the project. G.L. and E.M.K. performed the calculations in close discussion with C.G. C.G., G.L., and X.Z. did the data analysis and wrote the paper with assistance from Y.W.
### Corresponding authors
Correspondence to Geunsik Lee or Xiang Zhang.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Gong, C., Kim, E., Wang, Y. et al. Multiferroicity in atomic van der Waals heterostructures. Nat Commun 10, 2657 (2019). https://doi.org/10.1038/s41467-019-10693-0
• ### LaBr2 bilayer multiferroic moiré superlattice with robust magnetoelectric coupling and magnetic bimerons
• Wei Sun
• Wenxuan Wang
• Zhenxiang Cheng
npj Computational Materials (2022)
• ### Multiple magnetic phase transitions with different universality classes in bilayer La$_{1.4}$Sr$_{1.6}$Mn$_2$O$_7$ manganite
• Birendra Kumar
• Jeetendra Kumar Tiwari
• Subhasis Ghosh
Scientific Reports (2021)
• ### Nanodevices engineering and spin transport properties of MnBi2Te4 monolayer
• Yipeng An
• Kun Wang
• Wuming Liu
npj Computational Materials (2021)
• ### An improvement in ferroelectric and ferromagnetic properties of low sintering-temperature (1−x)(0.98(Bi0.5(Na0.78K0.22)0.5−TiO3)−0.02CuO)−(x)NiFe2O4 particulate 0–3 composites
• Chung Do Pham
• Oanh Thi Mai Le
• Minh Van Nguyen
Applied Physics A (2021)
• ### Recent progress on 2D ferroelectric and multiferroic materials, challenges, and opportunity
• Banarji Behera
• Bijuni Charan Sutar
Emergent Materials (2021) | 2022-08-15 07:52:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7432774901390076, "perplexity": 4822.841198484531}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00315.warc.gz"} |
http://tellfredericksburg.com/freebooks/waves-called-solitons-concepts-and-experiments-advanced-texts-in-physics | # Waves Called Solitons: Concepts and Experiments (Advanced Texts in Physics)
Format: Paperback
Language: English
Format: PDF / Kindle / ePub
Size: 9.18 MB
If the liquid in the vat vibrates at just the right frequency, usually quite close to the droplet's natural resonant frequency, the droplet interacts with the ripples it creates as it bounces along, which in turn can affect its path.

- Basic Plasma Science Facility (BAPSF), UCLA, Los Angeles, California: basic research in linear and non-linear plasma waves and beams, diagnostics, visualization tools, Alfven wave studies, Whistler wave studies, lower hybrid wave studies, interaction of current channels, laboratory simulation of space plasma processes.
- Plasma Physics, Dept of Physics and Astronomy, UCLA, Los Angeles, California: (plasma experiment group) discharge plasmas, magnetic antennas, nonlinear EMHD effects, thermal noise, Whistler pulsed currents, lower hybrid turbulence, beam-plasma interactions, free plasma expansion, sheath-plasma instability, plasma diagnostics; (computational plasma physics group) numerical tokamak, plasma accelerators and light sources, laser plasma interactions, space plasma physics, plasma astrophysics, basic plasma physics, visualization tools, plasma simulations; (particle beam physics lab) beam-plasma interaction, nonlinear plasma wake field acceleration; (plasma theory group) basic plasma theory, modeling and simulation.
- Nonneutral Plasma Group, Physics Dept, UCSD, La Jolla, California: nonneutral plasma transport, vortex crystals, rotating walls, laser diagnostics, basic plasma physics, pure electron and pure ion plasmas, collective phenomena, equilibrium waves, 2D fluid dynamics, statistical mechanics, collisional thermalization, many-particle adiabatic invariant, cross-field transport, stability theorem, strongly correlated dusty plasmas, vortex dynamics, turbulence and self-organization, spheroidal plasma modes; experiments: electron plasma systems CAMV, EV, CV; ion plasma system IV.
- Particle-in-cell (PIC) plasma physics simulation (OOPIC Pro): improves plasma physics education, solves challenging problems in basic research, and aids in plasma processing; focused on applying first-principles, microscopic, kinetic simulation techniques to problems with a slow evolution of macroscopic variables; validates simulations against experimental observations.
Pages: 328
Publisher: Springer (December 5, 2010)
ISBN: 3642085199
Quantum-Statistical Models of Hot Dense Matter: Methods for Computation Opacity and Equation of State (Progress in Mathematical Physics)
Introduction to Wave Propagation in Nonlinear Fluids and Solids
Electromagnetic Field Theory Fundamentals
Plasma Waves, 2nd Edition (Series in Plasma Physics and Fluid Dynamics)
Experiments to be chosen from refraction, diffraction and interference of microwaves, Hall effect, thermal band gap, optical spectra, coherence of light, photoelectric effect, e/m ratio of particles, radioactive decays, and plasma physics. Corequisites: Physics 2D or 4E (prior completion of Physics 2D or 4E is permitted). (S) The first quarter of a five-quarter calculus-based physics sequence for physics majors and students with a serious interest in physics.

It turns out that you have seen this local current before in PHYS 3316. You may remember discussing "probability current." I will leave it as an exercise to show that this is the same as the $\bf j$ that we defined in terms of the phase of the wavefunction.

Notice that a vertical world line means that the object it represents does not move — the velocity is zero. If the object moves to the right, then the world line tilts to the right, and the faster it moves, the more the world line tilts. Quantitatively, we say that velocity = 1/(slope of world line) (4.1) in Galilean relativity. Notice that this works for negative slopes and velocities as well as positive ones.
Theoretical Physics: Quantum Electrodynamics v. 4: Text and Exercise Books
Fields and Particles: Introduction to Electromagnetic Wave Phenomena and Quantum Physics
Applied Dynamics of Ocean Surface Waves (Advanced Series on Ocean Engineering, V. 1)
The components of spin in different directions aren't simultaneously measurable. Thus, the impossible vector relationships for the spin components of a quantum particle are not observable. Bell (1966), and, independently, Simon Kochen and Ernst Specker (Kochen and Specker 1967) showed that for a spin-1 particle the squares of the spin components in the various directions satisfy, according to quantum theory, a collection of relationships, each individually observable, that taken together are impossible: the relationships are incompatible with the idea that measurements of these observables merely reveal their preexisting values rather than creating them, as quantum theory urges us to believe.

In any such process, until the electron actually underwent a transition, it gave no way of knowing "where it was" and at what energy. That was the basic idea of what was called "wave mechanics"--meaning, not the mechanics of waves, but a re-formulation, in terms of waves, of the branch of physics known as mechanics, which deals with motions of matter. Newtonian mechanics treats matter strictly as localized particles, or of bodies and fluids consisting of such particles.

Quantum mechanics (QM; also known as quantum physics, or quantum theory) is a fundamental branch of physics which deals with physical phenomena at nanoscopic scales, where the action is on the order of the Planck constant. The name derives from the observation that some physical quantities can change only in discrete amounts (Latin quanta), and not in a continuous (cf. analog) way.

Constructive interference is when two waves are in phase. By that we mean the path difference between the two waves is zero. Destructive interference is when the two waves are out of phase. The double slit experiment, which implies the end of Newtonian mechanics, is described. The de Broglie relation between wavelength and momentum is deduced from experiment for photons and electrons.
Public and Private Life of the Soviet People: Changing Values in Post-Stalin Russia
The quantum mechanics of many-body systems (Pure and applied physics)
Geometry and Quantum Field Theory: June 22-July 20, 1991, Park City, Utah (Ias/Park City Mathematics, Vol 1)
The Riemann problem and interaction of waves in gas dynamics (Pitman monographs and surveys in pure and applied mathematics)
Analog and Digital Signal Processing
Oscillations and Waves: in Linear and Nonlinear Systems (Mathematics and its Applications)
Wave Physics
Inverse Problems of Wave Processes (Inverse and Ill-Posed Problems)
Optoelectronics of Molecules and Polymers (Springer Series in Optical Sciences)
EEG Signal Processing
Introducing Particle Physics: A Graphic Guide
Micropolar Fluids: Theory and Applications (Modeling and Simulation in Science, Engineering and Technology)
Nonlinear Ocean Waves and the Inverse Scattering Transform, Volume 97 (International Geophysics)
Quarks and Leptons From Orbifolded Superstring (Lecture Notes in Physics)
Susy and Grand Unification from Strings to Collider Phenomenology: Proc of 3rd Workshop Madrid, Spain Jan-Feb 1985
Elementary Wave Mechanics 1942 1943: Introductory Course of Lectures
PARADIGM21 Part2: Realities in the living body unscientific phenomena
String Theory and Grand Unification: Proceedings of the Conference
Bäcklund and Darboux Transformations: Geometry and Modern Applications in Soliton Theory (Cambridge Texts in Applied Mathematics)
New Developments in Soliton Research
So you actually take this and integrate against another Psi star. So you take a Psi star sub m and integrate-- multiply and integrate. And then the right hand side will get the Kronecker delta that will pick out one term. So, I'm just saying in words a two-line calculation that you should do if you don't see this as obvious.

This air, when displaced by the sound wave, now lies between $x + \chi(x,t)$ and $x + \Delta x + \chi(x + \Delta x,t)$, so that we have the same matter in this interval that was in $\Delta x$ when undisturbed. If $\rho$ is the new density, then $$\rho_0\,\Delta x = \rho[x + \Delta x + \chi(x + \Delta x,t) - x - \chi(x,t)].$$ Since $\Delta x$ is small, we can write $\chi(x + \Delta x,t) - \chi(x,t) = (\partial\chi/\partial x)\,\Delta x$.
Consequently, particles behave like fermions or like bosons only if they are totally identical.

PROFESSOR: Well it typically is like that because of dimensional units. You're looking for a constant because you're not looking for the function. So you will get a number times a correct value, yes, indeed.

Permanently move the lagoon breach point up to the west end (3rd Point). Past California watershed outlets were helped along & opened; to stop flooding, replenish sand on beaches & enhance free flow of wildlife (fish migrate).

The cost, however, is that our theory contains a free parameter, $p_0^2 = \langle p^2 \rangle$. We can walk into the lab and measure $\langle x^2(t)\rangle$.

Modal dispersion can be eliminated by using very thin fibres where only the axial mode exists (monomode fibres) or graded index fibres, where the light in the higher modes travels faster and therefore reaches the end of the fibre at the same time as the axial mode light.

Many of the fluid dynamicists involved in or familiar with the new research have become convinced that there is a classical, fluid explanation of quantum mechanics. "I think it's all too much of a coincidence," said Bush, who led a June workshop on the topic in Rio de Janeiro and is writing a review paper on the experiments for the Annual Review of Fluid Mechanics. Quantum physicists tend to consider the findings less significant.
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2006_v43n4_847 | MODIFIED ISHIKAWA ITERATIVE SEQUENCES WITH ERRORS FOR ASYMPTOTICALLY SET-VALUED PSEUDOCONTRACTIVE MAPPINGS IN BANACH SPACES
Title & Authors
MODIFIED ISHIKAWA ITERATIVE SEQUENCES WITH ERRORS FOR ASYMPTOTICALLY SET-VALUED PSEUDOCONTRACTIVE MAPPINGS IN BANACH SPACES
Kim, Jong-Kyu; Nam, Young-Man;
Abstract
In this paper, some new convergence theorems of the modified Ishikawa and Mann iterative sequences with errors for asymptotically set-valued pseudocontractive mappings in uniformly smooth Banach spaces are given.
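For orientation, the classical (single-valued, error-free) Ishikawa scheme that the paper's modified sequences generalize is the two-step iteration $y_n = (1-\beta_n)x_n + \beta_n T x_n$, $x_{n+1} = (1-\alpha_n)x_n + \alpha_n T y_n$. The following is a minimal numerical sketch of that classical scheme only, not of the set-valued, with-errors version studied here; the choice $T = \cos$ on the real line is an illustrative assumption:

```python
import math

def ishikawa(T, x0, alpha, beta, n_iter=200):
    """Classical two-step Ishikawa iteration:
         y_n     = (1 - beta(n))  * x_n + beta(n)  * T(x_n)
         x_{n+1} = (1 - alpha(n)) * x_n + alpha(n) * T(y_n)
    """
    x = x0
    for n in range(n_iter):
        a, b = alpha(n), beta(n)
        y = (1 - b) * x + b * T(x)
        x = (1 - a) * x + a * T(y)
    return x

# Illustration: T = cos on the real line has a unique fixed point x* = cos(x*).
x_star = ishikawa(math.cos, 1.0, lambda n: 0.5, lambda n: 0.5)
print(abs(x_star - math.cos(x_star)))   # the fixed-point residual is essentially zero
```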
Keywords
asymptotically nonexpansive mapping;asymptotically pseudocontractive mapping;asymptotically set-valued pseudocontractive mapping;modified Ishikawa iterative sequence with errors;modified Mann iterative sequence with errors;fixed point;
Language
English
Cited by
1.
NEW ITERATIVE PROCESS FOR THE EQUATION INVOLVING STRONGLY ACCRETIVE OPERATORS IN BANACH SPACES,;;;
Bulletin of the Korean Mathematical Society, 2007, vol. 44, no. 4, pp. 861-870
1.
Convergence theorems for asymptotically pseudocontractive mappings in the intermediate sense, Computers & Mathematics with Applications, 2011, 62, 1, 326
2.
Shrinking Projection Method of Fixed Point Problems for Asymptotically Pseudocontractive Mapping in the Intermediate Sense and Mixed Equilibrium Problems in Hilbert Spaces, Journal of Applied Mathematics, 2012, 2012, 1
References
1.
R. P. Agarwal, N. J. Huang, and Y. J. Cho, Stability of iterative processes with errors for nonlinear equations of $\phi$-strongly accretive type operators, Numer. Funct. Anal. Optim. 22 (2001), no. 5-6, 471-485
2.
E. Asplund, Positivity of duality mappings, Bull. Amer. Math. Soc. 73 (1967), 200-203
3.
F. E. Browder, Nonexpansive nonlinear operators in Banach spaces, Proc. Natl. Acad. Sci. U. S. A. 54 (1965), 1041-1044
4.
S. S. Chang, Convergence of iterative methods for accretive and pseudo-contractive type mappings in Banach spaces, Nonlinear Funct. Anal. Appl. 4 (1999), 1-23
5.
S. S. Chang, Some results for asymptotically pseudo-contractive mappings and asymptotically nonexpansive mappings, Proc. Amer. Math. Soc. 129 (2001), no. 3, 845-853
6.
S. S. Chang, On the approximating problems of fixed points for asymptotically nonexpansive mappings, Indian J. Pure Appl. Math. 32 (2001), no. 9, 1297-1307
7.
S. S. Chang, J. K. Kim, and Y. J. Cho, Approximations of solutions for set-valued $\phi$-strongly accretive equations, Dynam. Systems Appl. 14 (2005), no. 3-4, 515-524
8.
Y. P. Fang, J. K. Kim, and N. J. Huang, Stable iterative procedures with errors for strong pseudocontractions and nonlinear equations of accretive operators without Lipschitz assumptions, Nonlinear Funct. Anal. Appl. 7 (2002), no. 4, 497-507
9.
M. K. Ghosh and L. Debnath, Convergence of Ishikawa iterates of quasi-nonexpansive mappings, J. Math. Anal. Appl. 207 (1997), no. 1, 96-103
10.
K. Goebel and W. A. Kirk, A fixed point theorem for asymptotically nonexpansive mappings, Proc. Amer. Math. Soc. 35 (1972), no. 1, 171-174
11.
N. J. Huang and M. R. Bai, A perturbed iterative procedure for multivalued pseudo-contractive mappings and multivalued accretive mappings in Banach Spaces, Comput. Math. Appl. 37 (1999), no. 6, 7-15
12.
J. K. Kim, Convergence of Ishikawa iterative sequences for accretive Lipschitzian mappings in Banach spaces, Taiwan Jour. Math. 10 (2006), no. 2, 553-561
13.
J. K. Kim, S. M. Jang, and Z. Liu, Convergence theorems and stability problems of Ishikawa iterative sequences for nonlinear operator equations of the accretive and strong accretive operators, Comm. Appl. Nonlinear Anal. 10 (2003), no. 3, 85-98
14.
J. K. Kim, Z. Liu, and S. M. Kang, Almost stability of Ishikawa iterative schemes with errors for $\phi$-strongly quasi-accretive and $\phi$-hemicontractive operators, Commun. Korean Math. Soc. 19 (2004), no. 2, 267-281
15.
J. K. Kim, Z. Liu, Y. M. Nam, and S. A. Chun, Strong convergence theorems and stability problems of Mann and Ishikawa iterative sequences for strictly hemi-contractive mappings, J. Nonlinear and Convex Anal. 5 (2004), no. 2, 285-294
16.
W. A. Kirk, A fixed point theorem for mappings which do not increase distance, Amer. Math. Monthly 72 (1965), 1004-1006
17.
Q. H. Liu, Convergence theorems of the sequence of iterates for asymptotically demicontractive and hemicontractive mappings, Nonlinear Anal. 26 (1996), no. 11, 1835-1842
18.
W. V. Petryshyn, A characterization of strict convexity of Banach spaces and other uses of duality mappings, J. Funct. Anal. 6 (1970), 282-291
19.
B. E. Rhoades, Comments on two fixed point iterative methods, J. Math. Anal. Appl. 56 (1976), no. 3, 741-750
20.
J. Schu, Iterative construction of fixed points of asymptotically nonexpansive mappings, J. Math. Anal. Appl. 158 (1991), 407-413
21.
H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal. 16 (1991), no. 12, 1127-1138
22.
H. K. Xu, Existence and convergence for fixed points of mapping of asymptotically nonexpansive type, Nonlinear Anal. 16 (1991), no. 12, 1139-1146
23.
Y. G. Xu, Ishikawa and Mann iterative processes with errors for nonlinear strongly accretive operator equations, J. Math. Anal. Appl. 224 (1998), 91-101 | 2018-07-19 21:23:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40030431747436523, "perplexity": 2294.526856059061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591296.46/warc/CC-MAIN-20180719203515-20180719223515-00461.warc.gz"} |
https://cs.stackexchange.com/questions/3101/is-the-set-of-turing-machines-which-stops-in-at-most-50-steps-on-all-inputs-dec | # Is the set of Turing machines which stop in at most 50 steps on all inputs decidable?
Let $F = \{⟨M⟩:\text{M is a TM which stops for every input in at most 50 steps}\}$. I need to decide whether F is decidable or recursively enumerable. I think it's decidable, but I don't know how to prove it.
My thoughts
This "50 steps" part immediately signals R (decidable) to me. If it were for one specific input, the problem would clearly be decidable. However, here it is for every input. Having to check infinitely many inputs makes me think that the problem is co-RE, i.e. its complement is recognizable.

Perhaps I can check the configurations and see that all configurations after 50 steps don't lead to an accept state, but how do I do that?
Let's consider the more general problem of machines which stop after at most $N$ steps, for some $N \geqslant 1$. (The following is a substantial simplification of a previous version of this answer, but is effectively equivalent.)
As swegi remarks in an earlier response, if the machine stops after at most $N$ steps, then only the cells $0,1,\ldots,N-1$ on the tape are significant. Then it suffices to simulate the machine $M$ on all input strings of the form $x \in \Sigma^N$, of which there are a finite number.
• If any of these simulations fail to enter a halting state by the $N^{\text{th}}$ transition, this indicates that any input string starting with $x$ is one for which the machine does not stop within the first $N$ steps.
• If all of these simulations halt by the $N^{\text{th}}$ transition, then $M$ halts within $N$ steps on all inputs of any length (of which the substring of length $N$ is all that it ever acts on).
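The procedure above can be sketched in code. The following is a minimal, illustrative simulator; the machine encoding (a transition dictionary mapping (state, symbol) to (new state, written symbol, move)) is an assumed toy format, and $n$ is kept tiny because the check enumerates $|\Sigma|^n$ inputs (for the question itself $n = 50$):

```python
from itertools import product

BLANK = '_'

def steps_to_halt(delta, q0, halting, tape, max_steps):
    """Run the machine for at most max_steps steps; return the step count at
    which it stops, or None if it is still running after max_steps steps."""
    cells = dict(enumerate(tape))   # sparse tape, position -> symbol
    q, pos = q0, 0
    for step in range(max_steps + 1):
        sym = cells.get(pos, BLANK)
        if q in halting or (q, sym) not in delta:
            return step             # halting state or no applicable rule: stopped
        q, write, move = delta[(q, sym)]
        cells[pos] = write
        pos += 1 if move == 'R' else -1
    return None

def halts_on_all_inputs_within(delta, q0, halting, sigma, n):
    """Only the first n cells can influence the first n steps, so it suffices
    to simulate on all |sigma|**n input strings of length exactly n."""
    return all(steps_to_halt(delta, q0, halting, x, n) is not None
               for x in product(sigma, repeat=n))

# A machine that only ever moves right never stops, so the answer is False;
# with n = 3 this enumerates 2**3 = 8 inputs (the question itself has n = 50).
runner = {('q0', s): ('q0', s, 'R') for s in ('0', '1', BLANK)}
print(halts_on_all_inputs_within(runner, 'q0', set(), ('0', '1'), 3))   # prints False
```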
• And- Do I assume that $x$ such that its length is longer than $N$ is automatically being rejected? – Jozef Aug 9 '12 at 15:20
• Why Can't it jumps to any further than N cell within the N steps of computing? – Jozef Aug 9 '12 at 15:27
• @Jozef: the simulations just iterate through all possible input strings of length N. You could iterate through more strings, but you won't learn anything more, because only the first N symbols matter anyway. The reason why it can't go any further than N cells is because Turing machines (or the standard definition of them anyway) only move one cell per step. – Niel de Beaudrap Aug 9 '12 at 15:28
• Right, I got it. so you mind only the first N symbols of every word, thus you check all the combinations of them. why did you delete the configurations description? – Jozef Aug 9 '12 at 15:41
• It's still visible if you look at the previous edits. I revised it to this because while the other answer was maybe interesting, a lot of what made it "interesting" only served to obscure the fact that the decision procedure is nothing more or less than simulating $M$ on all possible inputs of length $N$. I thought it better to revise the answer to something much more straightforward, and which basically got to the root of what makes the problem decideable. – Niel de Beaudrap Aug 9 '12 at 15:43
If $M$ stops in no more than 50 steps, then the positions $M$ can reach on the normally infinite tape are limited. Thus the infinite tape can be simulated by a finite one. This means that the tape can be simulated by a finite automaton. It follows that a Turing machine $M$ that stops in no more than 50 steps is bisimilar to some finite automaton $M'$.
Let $Q$ be the set of states of $M$, $F \subset Q$ the set of accepting states and $\Gamma$ be the alphabet. Then we build the set of states $Q'$ of $M'$ as follows: $Q' = \{ \langle n, q, s, p, a\rangle \mid n \in \{0,\ldots,50\},\, q \in Q,\, s \in \Gamma,\, p \in \{-50,\ldots,50\},\, a \equiv (q \in F)\}$, where $p$ is the position of the read/write head above the tape. We can restrict the position to $\{-50,\ldots,50\}$ because the number of allowed computing steps limits the number of reachable positions.
Having a state $\langle n, q, s, p, a\rangle$ of the finite automaton $M'$ then means that we are at state $q$ of the original automaton, with $s$ on the tape at position $p$ where also the read/write head is positioned, after the $n$-th computing step. The state is an accepting one if $a \equiv true$.
Transforming the transition relation of a concrete Turing machine is a little more work but not necessary for the original question, because it is enough to show that the state space is finite (and thus we can just test each input with a length of at most 50 symbols on each such automaton). The idea is to build a new transition relation that goes from a state $\langle n, q, s, p, a \rangle$ to a state $\langle n+1, q', s', p', a'\rangle$ in the $n$-th computing step iff the transition $\langle q, s, p\rangle \rightarrow \langle q', s', p'\rangle$ was in the original transition relation.
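Multiplying out the factors in the definition of $Q'$ above shows concretely that the state space is finite; a quick sketch (the example machine sizes are assumptions):

```python
def finite_state_space_size(n_states, n_symbols, n=50):
    """|Q'| = (n+1) step counters * |Q| states * |Gamma| symbols
             * (2n+1) head positions * 2 accept flags."""
    return (n + 1) * n_states * n_symbols * (2 * n + 1) * 2

# e.g. a 3-state machine over a 2-symbol alphabet, with n = 50 steps allowed
print(finite_state_space_size(3, 2))   # prints 61812
```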
• How do you simulate the storage on the tape, i.e. the ability to revisit symbols you have already read, on a finite automaton? – Niel de Beaudrap Aug 9 '12 at 13:14
• @NieldeBeaudrap: You enumerate the whole state space, i.e. you do model checking of the finite tape and the turing machine's control automaton. – swegi Aug 9 '12 at 15:36
• Given that the OP is asking basic questions of computability for Turing Machines, you might want to unpack that sketch into something fuller. (I myself have never heard the phrase "model checking" in a computational context before.) In context, I would typcially assume by 'finite automaton' you would mean a DFA or similar unless you specified otherwise, and it's not clear to me what would correspond to the input of the DFA in such a construction. If you just mean a graph representing possible trajectories of the TM, then I agree. – Niel de Beaudrap Aug 9 '12 at 15:40
• With model checking the finite part of the tape I basically mean what you have written in your answer: just test each input of size at most 50 and check whether an accepting state is reached. – swegi Aug 9 '12 at 21:05
• I wish people would stop propagating the myth that a Turing machine tape needs to be infinite. It doesn't - it can be finite as long as it is extended as needed. – reinierpost Oct 6 '14 at 16:19 | 2019-12-05 23:44:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.719300389289856, "perplexity": 330.2747191395777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482284.9/warc/CC-MAIN-20191205213531-20191206001531-00492.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-4-review-exercises-page-354/86a | ## Algebra and Trigonometry 10th Edition
$x^2+(y+4\sqrt 3)^2=64$
The parabola has the form $(x-h)^2=4p(y-k)$ with vertex $(0,4)$. Since it passes through $(4,0)$: $(4-0)^2=4p(0-4)$, so $p=-1$ and the parabola is $x^2=-4(y-4)$. The circle has radius $8$ and its center lies on the y-axis, so using the point $(4,0)$ on the circle, the y-coordinate of the center satisfies $4^2 +y^2 =8^2$, i.e. $y=-\sqrt {8^2-4^2}=-4\sqrt 3$. The equation of a circle is $(x-h)^2+(y-k)^2=r^2$, so here $x^2+(y+4\sqrt 3)^2=64$
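As a quick numerical sanity check of the algebra above (a sketch; the checked points $(\pm 4, 0)$ come from the working):

```python
import math

# p from (4 - 0)^2 = 4p(0 - 4)
p = (4 - 0) ** 2 / (4 * (0 - 4))
print(p)                                  # -1.0

# y-coordinate of the circle's center, from 4^2 + y^2 = 8^2
cy = -math.sqrt(8 ** 2 - 4 ** 2)
print(round(cy, 6))                       # -6.928203, i.e. -4*sqrt(3)

# the points (4, 0) and (-4, 0) both satisfy x^2 + (y + 4*sqrt(3))^2 = 64
for x, y in [(4, 0), (-4, 0)]:
    print(abs(x ** 2 + (y - cy) ** 2 - 64) < 1e-9)   # True
```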
http://openstudy.com/updates/4dd4904fd95c8b0bf55058c4 | ## anonymous 5 years ago What Is the Factored Form Of Each Expression?16x^2 + 24x + 9 Please Help Me!
1. anonymous
Note that the x^2 term and the constant term are both perfect squares. It looks like it might be (4x+3)^2, and a quick check verifies that the middle terms add up to 24x
2. anonymous
Thanks, That's Actually Exactly it! Thanks so Much! =)
3. anonymous
If you have a similar problem doesn't work out nicely, or you don't "just see it", there is a sure fire way to factor a quadratic.
4. anonymous
any quadratic will factor into $k(x-r_1)(x-r_2)$ where $r_1$ and $r_2$ are called the roots. to find the roots of some quadratic, do the following
5. anonymous
let our quadratic be ax^2+bx+c. Then $x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$ For example, in this case a=16, b=24, and c=9; the discriminant is zero, so both roots work out to be -3/4, which is the (double) solution to 16x^2+24x+9=0
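The "sure fire" method is easy to check in code. A sketch, using the numbers from this thread (the helper name is mine):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of ax^2 + bx + c = 0 via the quadratic formula (real roots assumed)."""
    disc = b * b - 4 * a * c
    return ((-b + math.sqrt(disc)) / (2 * a),
            (-b - math.sqrt(disc)) / (2 * a))

r1, r2 = quadratic_roots(16, 24, 9)
print(r1, r2)   # -0.75 -0.75  (double root, so 16x^2+24x+9 = 16(x + 3/4)^2 = (4x+3)^2)
```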
6. anonymous
Thank You | 2016-10-21 13:25:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4792281985282898, "perplexity": 849.3149151829408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718278.43/warc/CC-MAIN-20161020183838-00082-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://ask.libreoffice.org/en/answers/37316/revisions/ | # Revision history [back]
If you really want to get it in a single cell:
Be "Davidson, Lauren O." (without the QMs) in A11 and the formula
=MID(A11;FIND(",";A11)+2;1024)&" "&LEFT(A11;FIND(",";A11)-1)
in another cell (best B11) you'll get what you want if the syntax you described by giving an example is "complete" enough. Names that do not contain exactly one occurrence of a comma followed by a single space or names containing trailing spaces will cause complications.
I would prefer using (a) helper(s) column(s).
If you really want to get it in a single cell:
Be "Davidson, Lauren O." (without the QMs) in A11 (and the other names from there downward in column A) and the formula
=MID(A11;FIND(",";A11)+2;1024)&" "&LEFT(A11;FIND(",";A11)-1)
in another cell (best B11) and filled downward in its column as needed, you'll get what you want if the syntax you described by giving an example is "complete" enough. Names that do not contain exactly one occurrence of a comma followed by a single space or names containing trailing spaces will cause complications.
I would prefer using (a) helper(s) column(s). | 2019-06-24 19:29:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.366438627243042, "perplexity": 1924.2561543177114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999709.4/warc/CC-MAIN-20190624191239-20190624213239-00008.warc.gz"} |
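For comparison, the same transformation outside Calc: a minimal Python sketch mirroring the formula above (the function name is mine; like the formula, it assumes exactly one comma followed by a single space):

```python
def swap_name(name):
    """'Last, First' -> 'First Last', assuming exactly one ', ' separator."""
    last, first = name.split(", ", 1)
    return first + " " + last

print(swap_name("Davidson, Lauren O."))   # Lauren O. Davidson
```

As the answer warns, names without exactly one comma-plus-space, or with trailing spaces, would need extra handling.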
http://mathhelpforum.com/number-theory/45138-euclid-s-algorithm.html | # Math Help - Euclid's algorithm
1. ## Euclid's algorithm
By adapting Euclid's algorithm, find integers a, b lying between 1 and 1000
such that their quotient a/b agrees with the constant pi up
to 6 decimal places (3.14159265)
Any help with be much appreciated!
2. Hello, mx-!
I don't see how to "adapt Euclid algorithm",
. . but there is a continued fraction approach.
By adapting Euclid's algorithm, find integers $a, b$ between 1 and 1000 such that
their quotient $\frac{a}{b}$ agrees with the constant $\pi$ up to 6 decimal places (3.14159265)
. . $\pi \;=\;3.141592654 \;= \;3 + 0.141592654$
. . . . $= \;3 + \frac{1}{7.062513306} \;=\;3 + \frac{1}{7 + 0.62513306}$
. . . . $= \;3 + \frac{1}{7 + \dfrac{1}{15.99659441}} \;\approx\;3 + \frac{1}{7 + \frac{1}{16}}$
Therefore: . $\pi \;\approx\:3 + \frac{1}{7 + \frac{1}{16}} \;=\; 3 + \frac{1}{\frac{113}{16}} \;=\;3 + \frac{16}{113} \;=\;\frac{355}{113}$
. . $\begin{array}{ccc}\dfrac{355}{113} &=& {\color{blue}3.141592}92\ldots \\ \pi &=& {\color{blue}3.141592}65\ldots\end{array}$
In lowest terms, this is the only solution to this problem.
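The hand computation above can be automated by generating continued-fraction convergents. A Python sketch (the helper name is mine); it reproduces the convergents 3, 22/7, 333/106 and 355/113:

```python
from fractions import Fraction
from math import pi, floor

def convergents(x, n):
    """First n continued-fraction convergents of x."""
    result = []
    h0, h1 = 1, floor(x)      # numerators  h_{-1}, h_0
    k0, k1 = 0, 1             # denominators k_{-1}, k_0
    result.append(Fraction(h1, k1))
    for _ in range(n - 1):
        x = 1 / (x - floor(x))    # next partial quotient comes from the reciprocal
        a = floor(x)
        h0, h1 = h1, a * h1 + h0  # standard convergent recurrences
        k0, k1 = k1, a * k1 + k0
        result.append(Fraction(h1, k1))
    return result

print([str(c) for c in convergents(pi, 4)])   # ['3', '22/7', '333/106', '355/113']
```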
http://gmatclub.com/forum/if-a-b-c-and-d-are-integers-and-ab2c3d4-0-which-of-the-136450.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 28 Sep 2016, 00:09
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If a, b, c, and d are integers and ab^2c^3d^4 > 0, which of the following must be positive?
Intern
If a, b, c, and d are integers and ab2c3d4 > 0, which of the [#permalink]
27 Jul 2012, 09:10
If a, b, c, and d are integers and $$ab^2c^3d^4 > 0$$, which of the following must be positive?
I. $$a^2cd$$
II. $$bc^4d$$
III. $$a^3c^3d^2$$
A) I only
B) II only
C) III only
D) I and III
E) I, II, and III
Math Expert
27 Jul 2012, 09:29
superpus07 wrote:
If a, b, c, and d are integers and $$ab^2c^3d^4 > 0$$, which of the following must be positive?
I. $$a^2cd$$
II. $$bc^4d$$
III. $$a^3c^3d^2$$
A) I only
B) II only
C) III only
D) I and III
E) I, II, and III
Since given that $$a*b^2*c^3*d^4 > 0$$, then we know that none of the unknowns is zero. Therefore, $$b^2>0$$ and $$d^4>0$$, which means that we can safely reduce by them to get $$a*c^3>0$$ (so, the given expression does not depend on the value of $$b$$ or $$d$$: they can be positive as well as negative).
Next, $$a*c^3>0$$ means that $$a$$ and $$c$$ must have the same sign: they are either both positive or both negative.
Evaluate each option:
I. $$a^2cd$$. Since $$d$$ can positive as well as negative then this option is not necessarily positive.
II. $$bc^4d$$. Since $$d$$ can positive as well as negative then this option is not necessarily positive.
III. $$a^3c^3d^2$$. Since $$a*c^3>0$$, then $$a^3*c^3>0$$ and as $$d^2>0$$, then their product, $$(a^3*c^3)*d^2$$ must be positive too.
Manager
31 Jul 2014, 21:32
a*(b^2)*(c^3)*(d^4) > 0,
since b^2 and d^4 are always positive,
a*(c^3) is positive, so 'a' and 'c' are of the same sign, either both positive or both negative. We have no information about 'b' and 'd', so they could be negative.
So the first two statements (I and II) have 'd' in them, which could be negative, so we can eliminate them then and there.
In statement III, 'd' is squared, so that's positive, and since 'a' and 'c' are of the same sign, a^3 * c^3 must be positive.
So III only, that's option C.
https://dmoj.ca/problem/tle17c5p4 | ## TLE '17 Contest 5 P4 - Cloning
Points: 15 (partial)
Time limit: 1.0s
Java 1.5s
Python 3.0s
Memory limit: 256M
Dankey Kang and his horde of clones.
Dankey Kang, Croneria's most fearsome villain, has decided to increase the size of his gang by creating many clone soldiers. Each clone can be one of two types, type 0 or type 1.
There are two possible methods of cloning, each of which can be described as a string of clone types: one string for type 0 and one for type 1. Both are strings containing only 0 and 1.
Initially, there is one clone of type 0 in the line. Then, the following process will continue indefinitely:
• The first clone in the line will leave the front of the line to fight.
• If that clone's type is 0, a string of clones matching the type-0 string will be added to the end of the line, in order.
• If that clone's type is 1, a string of clones matching the type-1 string will be added to the end of the line, in order.
Dankey Kang is then interested in some of the clones. In particular, for each of a number of queries, he wants to know the type of the clone that leaves the line at the queried position, indexed starting at 1.
Subtask | Points
1 | 5
2 | 15
3 | 20
4 | 25
5 | 25
#### Input Specification
The first line will contain the type-0 string.
The second line will contain the type-1 string.
The third line will contain an integer: the number of queries.
Each of the following lines will contain one integer: a query position.
#### Output Specification
Output one line per query. Each line of output will contain the type of the clone that leaves the line at the queried position.
#### Sample Input
100
10
9
1
2
3
4
5
6
7
8
9
#### Sample Output
0
1
0
0
1
0
1
0
0 | 2020-10-31 16:16:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3513847887516022, "perplexity": 6788.562939388503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107919459.92/warc/CC-MAIN-20201031151830-20201031181830-00290.warc.gz"} |
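A direct simulation reproduces the sample (a Python sketch with names of my choosing; it is fine for small query positions, though large positions presumably need a cleverer approach):

```python
from collections import deque

def clone_types(a, b, n):
    """Types of the first n clones to leave the line (type 0 expands to a, type 1 to b)."""
    line = deque("0")                 # one clone of type 0 initially
    out = []
    for _ in range(n):
        t = line.popleft()            # the first clone leaves to fight
        out.append(t)
        line.extend(a if t == "0" else b)   # its expansion joins the back of the line
    return "".join(out)

print(clone_types("100", "10", 9))    # 010010100, matching the sample output
```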
http://www.ams.org/mathscinet-getitem?mr=0236136 | MathSciNet bibliographic data MR236136 (38 #4434) 10.48 Winquist, Lasse An elementary proof of $p(11m+6)\equiv 0\,({\rm mod}\ 11)$$p(11m+6)\equiv 0\,({\rm mod}\ 11)$. J. Combinatorial Theory 6 1969 56–59. Links to the journal or article are not yet available
https://www.esaral.com/q/find-the-sum-of-all-natural-numbers-between-200-and-400-which-are-divisible-by-7-74318/ | Find the sum of all natural numbers between 200 and 400 which are divisible by 7.
Question:
Find the sum of all natural numbers between 200 and 400 which are divisible by 7.
Solution:
Natural numbers between 200 and 400 which are divisible by 7 are 203, 210, …, 399.
This is an AP with a = 203, d = 7 and l = 399.
Suppose there are n terms in the AP. Then,
$a_{n}=399$
$\Rightarrow 203+(n-1) \times 7=399 \quad\left[a_{n}=a+(n-1) d\right]$
$\Rightarrow 7 n+196=399$
$\Rightarrow 7 n=399-196=203$
$\Rightarrow n=29$
$\therefore$ Required sum $=\frac{29}{2}(203+399) \quad\left[S_{n}=\frac{n}{2}(a+l)\right]$
$=\frac{29}{2} \times 602$
$=8729$
Hence, the required sum is 8729. | 2022-09-26 05:48:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7074211835861206, "perplexity": 201.07095671935562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00518.warc.gz"} |
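A quick computational check of the result (a sketch):

```python
# Multiples of 7 strictly between 200 and 400: 203, 210, ..., 399
terms = list(range(203, 400, 7))
print(len(terms), sum(terms))    # 29 8729
```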
https://physicscatalyst.com/Class9/atom-and-molecules-notes.php | # Class 9 Atoms and Molecules Notes
Welcome to the Class 9 Atoms and Molecules Notes for Chapter 3. The topics on this page are: Laws of Chemical Combination, Dalton's atomic theory, atoms and molecules, ions, valency, chemical formulae, and the mole concept. This is according to the CBSE syllabus and the NCERT textbook. If you like the study material, feel free to share the link as much as possible.
## Laws of Chemical Combination
There are two main laws of Chemical Combination as established by Lavoisier and Joseph L. Proust.
(a)Law of Conservation of Mass
(b) Law of constant or definite proportion
### Law of Conservation of Mass
It states that mass can neither be created nor destroyed in a chemical reaction, so the total mass of the reactants equals the total mass of the products.
A + B -> C + D
Mass of reactants = Mass of (A + B)
Mass of products = Mass of (C + D)
Mass of reactants = Mass of products
### Law of constant or definite proportion
It states that elements combine in definite proportions by mass to give compounds; in other words, in a chemical substance the elements are always present in definite proportions by mass.
Example
The compound $CO_2$ can be obtained in various ways:
$C + O_2 -> CO_2$
$CaCO_3 -> CaO + CO_2$
The ratio of the mass of carbon to the mass of oxygen is always the same, i.e. 12:32 (or 3:8).
Similarly For Water $H_2O$, the ratio of the mass of hydrogen to the mass of oxygen is always 1:8, whatever the source of water
## Dalton's atomic theory
There was no explanation for the above laws until the British chemist John Dalton proposed his atomic theory, which accounts for them:
• All matter is made of very tiny particles called atoms.
• Atoms are indivisible particles
• Atoms can neither be created nor be destroyed
• Atoms of same elements are similar.
• Atoms of different elements are different.
• Atoms combine in the ratio of small whole numbers to form compounds.
• The relative number and kinds of atoms are constant in a given compound.
## What is an Atom
• Matter is made up of Atoms
• Atom are the smallest particle of elements
• Atoms are very small, they are smaller than anything that we can imagine or compare with
• The size of the atoms is measured by the Atomic Radius .Atomic radius is measured in nanometres ( $1 \ nm= 10^{-9} \ m$)
• The atomic radius of hydrogen is $10^{-10} \ m$, and it is the smallest of all.
## Symbols of Atoms or Elements
• Dalton proposed a set of pictorial symbols for the atoms of the elements.
• Berzelius suggested that the symbols of elements be made from one or two letters of the name of the element.
• Now-a-days, IUPAC (International Union of Pure and Applied Chemistry) approves names of elements
• Now generally symbols are the first one or two letters of the element's name in English. The first letter of a symbol is always written as a capital letter (upper-case) and the second letter as a small letter (lower-case).Symbols of some elements are formed from the first letter of the name and a letter, appearing later in the name
Example
Hydrogen - H
cobalt - Co
Chlorine - Cl
• There are a few elements whose symbols were taken from the names of the elements in Latin, German or Greek.
Example
Fe from its Latin name ferrum
## Atomic Mass
• Each element had a characteristic atomic mass
• The mass of one atom is called its atomic mass
• We define One atomic mass unit is a mass unit equal to exactly one-twelfth (1/12th) the mass of one atom of carbon-12
• relative atomic masses of all elements have been found with respect to an atom of carbon-12.
• So, Atomic mass of atom is measured in amu. amu is written as 'u' - unified mass as per latest IUPAC recommendations
## What is Molecule
• A molecule is a group of two or more atoms chemically bonded together. The atoms within a molecule are held together by forces of attraction.
• It is the smallest particle of an element or a compound that is capable of independent existence and that shows all the properties of that substance.
Molecules of Elements
- Generally, atoms of most elements exist as molecules. For example, oxygen exists as $O_2$ and hydrogen as $H_2$. The number of atoms constituting a molecule is known as its atomicity.
Molecules of Compounds
- The molecules of compounds contain two or more different atoms chemically bonded together.
Example
$HCl$
$H_2O$
## Ions
• Compounds composed of metals and non-metals contain charged species
• The charged species are known as ions.
• An ion is a charged particle and can be negatively or positively charged.
• Anions are negatively charged ion while cations are the positively charged ion
• Anions are formed by gain of electrons while cations are formed by loss of electrons
• example Compound Sodium chloride NaCl consists of Positively charged $Na^+$ and negatively charged ion $Cl^{-}$
• An Ion can have multiple atoms which has net charge on it. These are called polyatomic ions
## Valency
The combining power (or capacity) of an element is known as its valency. For ions, the charge indicates the valency of the ions
## Chemical Formulae
The chemical formula of a compound is a symbolic representation of its composition. The chemical formula can be written based on these information
(a) Symbols of the elements involved
(b) The valency of the elements or ions and this must be balanced in formula
(c) when a compound consists of a metal and a non-metal, the name or symbol of the metal is written first
(d) in compounds formed with polyatomic ions, the ion is enclosed in a bracket before writing the number to indicate the ratio. In case the number of polyatomic ion is one, the bracket is not required.
## Molecular Mass
The molecular mass of a substance is the sum of the atomic masses of all the atoms in a molecule of the substance.
This is also expressed in terms of u
## Formula Unit Mass
• formula unit mass is used for those substances whose constituent particles are ions
• it is sum of the atomic masses of all atoms in a formula unit of a compound
example
NaCl
Formula Unit Mass = 23 + 35.5=58.5 u
## Mole Concept
• Wilhelm Ostwald introduced the word "mole" in 1896. It is derived from the Latin word moles, meaning a 'heap' or 'pile'.
• One mole of any species (atoms, molecules, ions or particles) is that quantity in number having a mass equal to its atomic or molecular mass in grams
• The number of particles (atoms, molecules or ions) present in 1 mole of any substance is fixed, with a value of $6.022 \times 10^{23}$
• The above number is called Avogadro constant.
• The mass of 1 mole of a substance is equal to its relative atomic or molecular mass in grams. This is called Molar Mass .This is also called gram atomic mass
• So we just need to replace u in atomic mass or Molecular with gm to get the Molar mass of the substance
Some Formulas
$mass = \text {molar mass} \times \text {number of moles}$
$\text {number of moles}= \frac {\text{The number of particles}}{\text{Avogadro number}}$
$\text {The number of atoms}= \frac {\text{given mass}}{\text{molar mass}} \times \text{Avogadro number}$
$\text {The number of Molecules}= \frac {\text{given mass}}{\text{molar mass}} \times \text{Avogadro number}$
$\text {The number of particles} =\text {number of moles of particles} \times \text{Avogadro number}$
Example
Calculate the number of particles in each of the following:
(i) 23 g of Na atoms (number from mass)
(ii) 8 g $O_2$ molecules (number of molecules from mass)
(iii) 0.1 mole of carbon atoms (number from given moles
Solution
(i) $\text {The number of atoms}= \frac {\text{given mass}}{\text{molar mass}} \times \text{Avogadro number}$
$= \frac {23}{23} \times 6.022 \times 10^{23}$
$=6.022 \times 10^{23}$
(ii) $\text {The number of molecules}= \frac {\text{given mass}}{\text{molar mass}} \times \text{Avogadro number}$
Now, the molar mass of $O_2$ molecules
= 16 × 2 = 32 g
Therefore
$= \frac {8}{32} \times 6.022 \times 10^{23}$
$=1.51 \times 10^{23}$
(iii) $\text {The number of particles} =\text {number of moles of particles} \times \text{Avogadro number}$
$= 0.1 \times 6.022 \times 10^{23}$
$= 6.022 \times 10^{22}$
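The three calculations can be checked in a few lines of Python (a sketch; the molar masses are the rounded values used in these notes):

```python
N_A = 6.022e23           # Avogadro constant, particles per mole

# (i) 23 g of Na atoms (molar mass of Na = 23 g/mol)
atoms_na = (23 / 23) * N_A
# (ii) 8 g of O2 molecules (molar mass of O2 = 32 g/mol)
molecules_o2 = (8 / 32) * N_A
# (iii) 0.1 mol of carbon atoms
atoms_c = 0.1 * N_A

# roughly 6.022e23, 1.5055e23 and 6.022e22 respectively
print(atoms_na, molecules_o2, atoms_c)
```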
## Summary
Here is the Class 9 Atoms and Molecules Notes Summary
• An atom is the smallest particle of an element and cannot usually exist independently, while a molecule is the smallest particle of an element or a compound capable of independent existence under ordinary conditions. It shows all the properties of the substance.
• The chemical formula of a compound identifies the constituent elements and the number of atoms of each combining element.
• The Avogadro constant $6.022 \times 10^{23}$ is defined as the number of atoms in exactly 12 g of carbon-12.
• The mole is the amount of substance that contains the same number of particles (atoms/ ions/ molecules/ formula units etc.) as there are atoms in exactly 12 g of carbon-12.
• Mass of 1 mole of a substance is called its molar mass | 2023-03-30 05:51:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5930903553962708, "perplexity": 1266.1301905570688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00475.warc.gz"} |
http://mathematica.stackexchange.com/tags/string-manipulation/hot | # Tag Info
15
This is a bug in version 10.1.0. We decided it was serious enough to warrant a fix via an automatic paclet update. The paclet has been pushed live and Mathematica should install it automatically once it does a periodic check with the paclet server. It should take about a week or so. To install it right away, you can do PacletInstall["StringPatternFix"]. You ...
10
If the keys are unique, then this seems to be rather elegant, short and still clear: Pick[ln, nm, "tat"]. Probably look at the documentation of Pick...
8
You can start by simply creating a function that tests, whether a string is a list of rules or not isTransformable[str_String] := SyntaxQ[str] && MatchQ[MakeExpression[str], HoldComplete[{_Rule ..}]]; isTransformable[___] := False; Note that this function does much more that search for a "->" inside a string. First, it tests, whether the ...
7
Perhaps, using: dateslalala={2003364, 2003157, 2003314, 2003302, 2003181, 2003062, 2003254, \ 2003070, 2003365, 2003338, 2003233, 2003073, 2003020, 2003010, \ 2003238, 2003107, 2003310, 2003347, 2003204, 2003066, 2005364, \ 2005157, 2005314, 2005302, 2005181, 2005062, 2005254, 2005070, \ 2005365, 2005338, 2005233, 2005073, 2005020, 2005010, 2005238, \ ...
6
Another option: AssociationThread[nm -> ln]["tat"] If you store AssociationThread[lm -> ln] in a symbol you can use it for many quick lookups without having to recreate the association every time. Without Pick and AssociationThread you might do something like Identity @@ Cases[Transpose[{nm, ln}], {"tat", v_} :> v] Note that I'm using ...
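The keyed lookup described above (matching each short name in nm to the corresponding entry in ln, assuming unique keys) has a direct Python analogue with a plain dict; the list contents here are illustrative:

```python
# Parallel lists: short keys and the full strings they identify
nm = ["dow", "tat", "foo"]
ln = ["dowell", "tatiana", "foobar"]

# Build the key -> value mapping once, then do O(1) lookups,
# analogous to AssociationThread[nm -> ln]["tat"] above.
lookup = dict(zip(nm, ln))
print(lookup["tat"])  # tatiana
```

As with the stored association, building the dict once and reusing it avoids rescanning the lists on every lookup.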
5
You could measure the similarity between two strings. One simple approach is to convert all strings into the bag-of-words model and then compare the resulting vectors. This could work well if the strings contain the same words, but not in the same order. Nearest[vectors, x, DistanceFunction -> CosineDistance] Will give you a measure of how close ...
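The bag-of-words comparison sketched above can be illustrated in Python (a minimal sketch; real code would normalise punctuation and stop words more carefully):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two strings treated as bags of words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)  # Counter returns 0 for missing words
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Same words, different order -> similarity 1.0
print(cosine_similarity("the quick brown fox", "fox brown the quick"))  # 1.0
```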
5
You should take a look at: New in the Wolfram Language: GrammarRules Programmable Linguistic Interface Sequence Alignment & Comparison You can also use Machine Learning to help classify strings. Here is how to train a classifier to understand if a string talks about a cat a or a dog. cat = ToLowerCase[Import["http://en.wikipedia.org/wiki/Cat"]]; ...
5
I am not sure if this is the best way of doing it, so the following is not exactly an answer to the question. Here is how I would do it assuming that I have padding. 0 Preparation data={1,2,3};(*some test data*) (*exporting to some files*) Table[Export["myData" <> IntegerString[i, 10, 5] <> ".txt", data], {i,10}]; (*and deleting one to make it ...
4
I would love this to be uniform for both integer and rational numbers a2 = {2/3, 4/5, 9/7, 3/7, 1.5, 3, 1/9}; StringTrim@StringJoin[" " <> ToString[#, InputForm] & /@ a2] (* 2/3 4/5 9/7 3/7 1.5 3 1/9 *) Row[ToString[#, InputForm] & /@ a2, " "] (* 2/3 4/5 9/7 3/7 1.5 3 1/9 *) StringReplace[ToString[a, InputForm], {"{" | "}" -> ...
3
Have a look at the documentation for CSV. The first issue you have is that your file extension is .txt so Mma imports it as text file instead of a CSV file. Your second issue is that "Table" is not a supported element for either CSV or TXT so I think it is just being ignored. Even though your file does not have the .csv file type you can still tell Mma ...
3
Or StringTake[ToString[a, FormatType -> InputForm], {2, -2}] The inelegant use of StringTake strips off the leading and trailing brackets.
3
StringTrim["ahfiehfke$jfwfjejf0", "$" ~~ __]; StringDelete["ahfiehfke$jfwfjejf0", "$" ~~ __]; (*V10.1*)
3
How about this? SelectFirst[ln, StringMatchQ[#, "tat" ~~ __] &] Admittedly my method is not as elegant as others, but it's fast if your ln and nm are large: ln = ConstantArray[t, 10^5] /. t :> StringJoin@RandomSample[CharacterRange["A", "z"], 10]; nm = StringTake[#, 3] & /@ ln; The following Timing[SelectFirst[ln, StringMatchQ[#, nm[[1]] ~~ ...
2
You can try using Row to layout the text, and use Spacer command to adjust the spacing as needed Plot[Sin[x], {x, -Pi, Pi}, PlotLabel -> Text@Style[ TraditionalForm[ Row[{"MLS", Spacer[5], Subscript[u, 2], Spacer[20], OverTilde[\[Eta]] , Spacer[5], "= 0.048%"}]], FontSize -> 18], ImagePadding -> 20] Or, the way I would ...
2
InputForm[ ToString@StringForm["SomeText= as well as OtherText=.", "textA", "textB"]] "SomeText=textA as well as OtherText=textB." If you have version 10 you might want to try StringTemplate StringTemplate["SomeText= as well as OtherText=."]["textA", "textB"] "SomeText=textA as well as OtherText=textB."
2
lst = {"1", "2", "3", "4", "5", "6"}; Developer`PartitionMap[StringTrim[ToString@#, "{" | "}"] &, lst, 3] (* {"1, 2, 3", "4, 5, 6"} *)
2
StringTake[ToString@#, {2, -2}] & /@ Partition[lst, 3] {"1, 2, 3", "4, 5, 6"}
1
The thought is DatePlus. split[x_Integer] := {{FromDigits@#[[1 ;; 4]], 1, 1}, FromDigits@#[[5 ;; -1]]} &@IntegerDigits[x] split[2003305] (* {{2003,1,1},305} *) DatePlus[split[2003305][[1]], 305] (* {2003,11,2} *) f = Block[{\$DateStringFormat = {"Year", "Month", "Day"}, res}, res = split[#]; DatePlus[res[[1]], res[[2]] - 1]] ...
1
Your output is formatted wrongly, because what you gave to Style is not a series of things to style, but a multiplication of its elements. Just evaluate the arguments on their own in your notebook to see the effect: "MLS" Subscript[u, 2] OverTilde[\[Eta]] "=0.048%" (* "=0.048%" "MLS" \!$$\*OverscriptBox[\(\[Eta]$$, $$~$$]\) Subscript[u, 2] *) To get what ...
1
This is very similar to your previous question and so is the solution: lst = {"1", "2", "3", "4", "5", "6"} ToString @ Row[#, ", "] & /@ Partition[lst, 3] {"1, 2, 3", "4, 5, 6"}
1
This gives exactly the output you wrote down: str1 = StringDrop[StringDrop[ToString@Partition[lst, 3][[1]], 1], -1] str2 = StringDrop[StringDrop[ToString@Partition[lst, 3][[2]], 1], -1] Could be made more elegant of course, and more flexible.
1
Assuming you always want strings with 3 items: Map[StringJoin @@ Riffle[#, ", "] &, Partition[Map[ToString, lst], 3]]
1
Terse: a = {1, 17, 2/3, 4/5, 9/7, 3/7, 1/7, 1/9}; ToString @ Row[InputForm /@ a, " "] "1 17 2/3 4/5 9/7 3/7 1/7 1/9"
1
a = {2/3, 4/5, 9/7, 3/7, 1/7, 1/9}; StringJoin@Cases[a, Rational[x_, y_] :> " "<>ToString[x] <> "/" <> ToString[y]] reply to comment: a = {2/3, 4/5, 9/7, 3/7, 1/7, 1/9, 5, 6, 99/10}; f[Rational[x_, y_]] := " " <> ToString[x] <> "/" <> ToString[y]; f[x_] := " " <> ToString[x]; StringJoin[f[#] & /@ a] ...
1
Pick[ln, Tr /@ StringPosition[nm, #], 1] &@"dow" Pick[ln, Tr /@ StringPosition[nm, #], 1] &@{"dow", _ ~~ "at"} Let's you use single and lists of targets, patterns (unlike straight pick)... assumptions about list correspondence apply, returns results in order of nm.
1
I don't quite understand your code, especially the key_/;key->val_ part, but the following code shall do your work: ToExpression@str //. key_String /; StringMatchQ[key, ___ ~~ "->" ~~ ___] :> ToExpression[key]
Only top voted, non community-wiki answers of a minimum length are eligible
https://kipfoundation.github.io/whitepaper/Krama-Overview.html | ### System Overview
The order is moving from Transactions to Interactions. KIP leverages emergent techniques & mechanisms in blockchain (such as constellation, zk-SNARKs, etc.), along with other path-breaking technologies in storage (such as swarm key, IPFS, etc.).
#### KIP Wallet Account State
Fig 5: Typical KIP Account's State
As depicted in the figure above, a typical KIP account managed by the user consists of the KIP balance; a nonce counting all transactions signed on behalf of the account; the token balance, representing the closing balance of the number of tokens left; and the TDU points, an array of balances each representing one dimension of TDU in the KIP ecosystem, based on the user's behaviour.
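A minimal sketch of such an account record follows; the field names and types are illustrative, not taken from the KIP implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KIPAccountState:
    """Illustrative model of the account state described above."""
    kip_balance: int = 0          # KIP held by the account
    nonce: int = 0                # count of transactions signed so far
    token_balance: int = 0        # closing balance of app tokens
    tdu_points: List[int] = field(default_factory=list)  # one balance per TDU dimension

    def sign_transaction(self):
        # Each transaction signed on behalf of the account increments the nonce
        self.nonce += 1

acct = KIPAccountState(kip_balance=100, tdu_points=[5, 0, 12])
acct.sign_transaction()
print(acct.nonce)  # 1
```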
#### K2K Interaction state transition
Fig 6: A K2K Interaction state transition
The figure depicts a simple 1-way interaction that involves transfer of KIP from one account to another. In this case, one colleague is repaying a peer who paid for their lunch. A bill-split contract could spawn such interactions.
#### T2T Interaction state transition
Fig 7: A T2T Interaction state transition
The figure depicts yet another simple interaction that involves transfer of tokens with specific characteristics from one account to another. In this case, there exists a 2-way interaction with transactions exchanging heterogeneous tokens at an agreed rate of exchange.
#### U2U Interaction state transition
Fig 8: A U2U Interaction state transition
The figure depicts yet another simple interaction that involves exchange of heterogeneous TDU points earned for a service of a specific skill, in return for the privilege to access another service of some arbitrary form. This is a Gen3 improvement, where non-monetary interests can be realized to achieve similar importance & access to services.
#### T2K / K2T Interaction state transition
Fig 9: A K2T / T2K Interaction state transition
This figure depicts a hybrid 2-way interaction between two users, due to a mutual demand in heterogeneous asset classes: the universal KIP & the app-specific token FEM. Typical scenarios for such transactions appear in DEX applications & application service billing buffers.
#### K2U / U2K Interaction state transition
Fig 10: A K2U / U2K Interaction state transition
This figure depicts yet another 2-way hybrid interaction, with demand for utility payable in KIP. Conversely, there is a mutual demand for KIP tokens in exchange for acceptable TDU points belonging to a certain dimension. Such interactions occur in social and professional applications built on KIP, where the declared attributes can be audited for transparency before harnessing.
#### T2U / U2T Interaction state transition
Fig 11: A T2U / U2T Interaction state transition
This figure depicts yet another 2-way interaction with mutual demand for heterogeneous asset classes: app tokens with specific characteristics needed to access services, & TDU points in a specific dimension that are transferable and consumed for similar access to a service. Such interactions may occur in bug bounty & marketplace applications where users are willing to crowdsource in exchange for app tokens.
https://sti.bmj.com/content/77/3/194 | Article Text
The acceptability of urinary LCR testing for Chlamydia trachomatis among participants in a probability sample survey of sexual attitudes and lifestyles
1. Kevin A Fenton1,5,
2. Andrew Copas1,
3. Kirstin Mitchell3,
4. Gillian Elam2,
5. Caroline Carder4,
6. Geoff Ridgway4,
7. Kaye Wellings3,
8. Bob Erens2,
9. Julia Field2,
10. Anne M Johnson1
1. 1Department of Sexually Transmitted Diseases, Royal Free and University College Medical School, London, UK
2. 2The National Centre for Social Research, London
3. 3London School of Hygiene and Tropical Medicine, London
4. 4Department of Microbiology, University College Hospital, London
5. 5The Public Health Laboratory Service, Communicable Disease Surveillance Centre
1. Dr Kevin Fenton, Department of Sexually Transmitted Diseases, Royal Free and University College Medical School, Mortimer Market Centre, off Capper Street, London WC1E 6AU, UK kfenton{at}gum.ucl.ac.uk
## Abstract
Objectives: To examine the factors that influence respondents' willingness to participate in urinary testing for Chlamydia trachomatis in a general population feasibility survey of sexual attitudes and lifestyles.
Methods: 199 sexually experienced, 18–44 year old participants, recruited as part of a larger (n=901) methodological study of sexual attitudes and lifestyles, were invited to provide a urine sample for chlamydial infection testing using ligase chain reaction (LCR) techniques. Analysis of the survey data and in-depth qualitative interviews were undertaken to explore the factors that influenced participants' decisions to participate.
Results: 143/199 (72%) participants agreed to provide a urine sample. The likelihood of providing a urine sample was reduced if other individuals were present in the home at the time of interview (OR 0.42, 95% confidence interval 0.20–0.90, p=0.03). Trust and rapport with the interviewer, understanding the aims of the test, sense of obligation, and perceived importance of the test were identified as additional influencing factors in the in-depth interviews.
Conclusions: Survey respondents' uncertainty or embarrassment at participating in urine testing can be overcome if they are well informed, motivated by the potential health gain, and briefed by trained and confident interviewers.
• screening
• chlamydia
• sexually transmitted diseases
• survey
• sexual behaviour
## Background
Genital chlamydial infection is the most common bacterial sexually transmitted disease in England and Wales with over 40 000 cases being diagnosed in genitourinary medicine (GUM) clinics annually.1 Concerns regarding the adverse outcomes of untreated infection have led to the recommendation for population screening by the chief medical officer for England.2
However, relatively little is known about the prevalence of genital infection with Chlamydia trachomatis in the general population. Community based prevalence studies have been greatly facilitated by the availability of nucleic acid amplification tests that enable non-invasive diagnosis of prevalent infections,3 and, to date, only a few of such surveys have been undertaken in the United States and Europe.4–7 Prevalence estimates from selected clinic populations (generally among women and involving the use of cervical screening tests) range from 2–12%.2, 8 Similar estimates among men and sexually active people not accessing health services are lacking.
Incorporating urinary testing for genital chlamydial infection into a community based survey of sexual attitudes and lifestyles provides one method for obtaining prevalence estimates in a representative sample of the general population, of validating reported behaviours, and of identifying demographic and behavioural risk factors for infection.7 In this paper we focus on the factors which influence survey participants' willingness to provide a urine sample for chlamydia testing.
## Methods
The feasibility study for an updated probability sample national survey of sexual attitudes and lifestyles (NATSSAL) was undertaken between November 1997 and January 1998.9 A stratified clustered design was used to identify 3360 addresses in Britain. Half were in London and the remainder spread throughout the rest of Britain in proportion to the population distribution. Overall weighted response to the feasibility survey was 64.7% of individuals selected in households with a resident aged 16–44 years. A total of 901 completed interviews were obtained. Computer-assisted personal interviews on sexual attitudes and behaviours were undertaken by trained interviewers in respondents' homes.
As part of this larger feasibility study, randomly selected, sexually experienced respondents from London, aged 18–44 years (n=199), were invited to provide a urine specimen for ligase chain reaction (LCR) testing for C trachomatis infection. Trained interviewers verbally introduced the study at the end of the main interview and provided written information regarding genital chlamydia infection, the rationale for the study, and the nature of the urine test. Respondents were informed that they would only be notified (by letter) in the event of a positive LCR result.
Respondents gave signed informed consent before providing 10 ml of urine in a plastic, preservative-free, sterile container. This was returned to the interviewer who labelled (study identification number and date) and immediately refrigerated the samples at 2–8°C. Samples were couriered within 24 hours to a central laboratory for LCR testing. On arrival, a 1 ml aliquot of each sample was centrifuged and processed according to the manufacturer's (Abbott LCx Chlamydia trachomatis assay) instructions. Positive and equivocal LCR results were reported to the study physician who then notified respondents.
All patients diagnosed C trachomatis positive or equivocal by LCR were contacted by post. The first letter informed respondents of the possibility of infection and invited them to contact the study physician to discuss their results. If necessary, reminder letters were sent after 7, 14, and 21 days. On contacting the study physician, respondents were told about their test result, options for treatment, and the need for partner notification. Retesting at local sites was only advocated for those receiving equivocal tests. Respondents gave consent for results to be passed on to their general practitioner or local GUM clinic, and to be recontacted after 2 weeks to follow up treatment and partner notification outcomes.
### QUANTITATIVE DATA COLLECTION AND ANALYSIS
Statistical analysis was performed using the survey analysis functions of Stata version 6, which incorporates the design effects of clustering of addresses within sectors. Demographic, sexual behaviour (including number of partners, sexual practices, condom use), and STD history (including previous history, treatment site) data were asked of all study participants (see table w1 on STI website). Among those invited to participate in the chlamydia study, we compared those who provided a urine sample with those who did not, to explore what factors influenced participation. When the data were ordinal (for example, number of partners in the past year), the actual numbers reported were used to calculate the overall p value, rather than the grouped information presented in the tables. Unadjusted odds ratios and their confidence intervals were calculated using the logistic regression function, taking urine provision as the dependent variable. A range of process measures was also collected on each diagnosed case, including time to result notification, time to contact the chlamydia study team, number of notifications required, treatment site preference, and partner notification outcomes.
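For a binary exposure, the unadjusted odds ratio and its 95% confidence interval can be computed directly from a 2×2 table; the sketch below uses the standard log-odds formula with entirely synthetic counts, not the study's data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI from a 2x2 table:
    a = exposed & responded,   b = exposed & did not respond,
    c = unexposed & responded, d = unexposed & did not respond."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lower = exp(log(or_) - z * se)
    upper = exp(log(or_) + z * se)
    return or_, lower, upper

# Synthetic illustration: 30/60 provided a sample when others were
# present, versus 113/139 when interviewed alone
print(odds_ratio_ci(30, 30, 113, 26))
```

A CI that excludes 1 (as in the study's 0.20–0.90 interval for "anyone else in the house") corresponds to a statistically significant association at the 5% level.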
### QUALITATIVE DATA COLLECTION AND ANALYSIS
Post-interview, a sample of respondents was approached to participate in in-depth qualitative interviews to explore their experience of taking part in the survey. Thirty-six respondents were purposively selected on a range of criteria including age, sex, marital status, and participation in the chlamydia study. All were recruited by telephone. The tape recorded interviews were conducted in respondents' homes by specialist interviewers and transcribed verbatim. Participants ranged from 18–44 years in age, with equal numbers of men and women. Their range of lifetime partners was 4–150 (respondents with fewer than four partners were excluded as the qualitative study also explored how respondents counted partners). Interview transcripts were analysed using Framework (an analytic tool which facilitates systematic within and across case analysis)10, 11 by two independent researchers.
### ETHICS
The study was approved by the joint UCL/UCLH committees on the ethics of human research. The researchers restricted study participation to those aged 18 years and over, following concerns about compromising the confidentiality of those aged 16 and 17 years, for whom parental consent would be required.
## Results
### OUTCOMES OF URINARY GENITAL CHLAMYDIAL INFECTION TESTING
One hundred and forty three (72%) of the 199 selected London respondents agreed to participate in the chlamydia study. Response was slightly higher among women than men (73% v 65%, p=0.27). Four men and 12 women tested LCR positive or equivocal for C trachomatis; median (range) ages of 27.5 (18–38) years and 33.5 (22–40) years respectively. Most respondents were notified within 3 weeks of being screened (median 21 days, range 9–32). Once notified by post, 14 (88%) respondents contacted the study physician after receiving their first notification letter. The median time taken by respondents to contact the study team was 2.5 days (range 2–33 days). Seven respondents chose to be treated by their general practitioners and eight were referred to their local STD clinic. With the exception of one respondent who elected to be treated homeopathically, all respondents received appropriate treatment from a medical practitioner. Partner notification was undertaken with all respondents seen in the GUM clinic, and with four of five respondents seen at GP surgeries for whom information was obtained.
### FACTORS INFLUENCING THE PROVISION OF A URINE SAMPLE
A range of demographic and behavioural variables was used to explore the factors associated with providing a urine sample. Despite some suggestion of lower participation rates among younger people, ethnic minorities, and those with lower educational achievement, the presence of “anyone else in the house at the time” (odds ratio 0.42, 95% confidence interval 0.20–0.90, p=0.03) was the only factor found to be significantly associated with a reduced likelihood of participation.
### QUALITATIVE RESULTS
The qualitative study shed light on other factors that influenced participation in the chlamydia study. These included the individual's motivation to participate; trust and rapport with the interviewer; understanding of the aims of the test and what would happen to samples; a sense of obligation; lack of embarrassment; perceived importance of the test; and the opportunity to receive free testing.
#### Motivation for participating
For some respondents, agreeing to provide a urine sample was simply “no big deal,” while others recognised the importance of collecting data on chlamydia infection. Respondents also valued the opportunity to receive what was regarded as a free screening test. Less positively, there was a sense of obligation and resigned agreement on the premise that the urine test could be no more embarrassing or personal than the survey.
1. How did you feel when you were asked [to provide a urine sample]?
I was quite surprised. I was very happy to do it because I think it's a brilliant thing and the fact that the lady told me that if there were any problems I would be contacted . . .I think that if you can get any screening done, no matter how, it's a good idea. (Female, aged 35–39 years)
2. How did you feel when she first asked for a urine sample?
I suppose, yes, I was a bit apprehensive, “what does she want my urine for?” …but obviously I was thinking about it.
What made you finally decide to give [a sample]?
I'd given all my private secrets out so why not give the urine? (Female, aged 40–44 years)
#### Level of trust and rapport with interviewer
Most respondents felt the practicalities of the test were handled competently and sensitively by the interviewers and did not find the situation problematic. Despite this, some felt rather embarrassed at giving their sample to a complete stranger, to "someone off the street." Had the interviewers been medical personnel, this awkwardness may have been relieved to some degree.
1. It's a medical thing isn't it really? It should be done by doctors or nurses and not someone who comes in your front door. (Male aged 35–40 years)
2. What factors made you agree to produce a urine sample?
Confidence in the interviewer, very much so. But for that rapport that we had, it would have been much more of a consideration. (Male, aged 30–34 years)
#### Understanding the aims of the test
Despite interviewer reassurance, there was concern that the sample would be used for purposes other than those stated by the research team. For instance, the misconception that the test constituted a free health check led one man to refuse because he had recently had a health check and did not see the need for another one. Suspicions concerning the use of samples engendered reluctance and, on occasion, refusal to take part. Again the information given by the interviewer is crucial.
1. …I just felt uncomfortable cos it's going to a research institution. I don't necessarily trust research teams. They could say they are testing for one thing but really they could be testing for something else. (Female, aged 20–24 years)
#### Timing of test
The chlamydia study was introduced to the respondents at the end of the interview. For some, the timing of this introduction and explanation was not a problem—others were taken aback by this rather unexpected announcement and said they would have preferred an explanation at the beginning. Competent and reassuring explanations given by the interviewer generally assuaged feelings of being misled, but refusal was not always avoided at this stage. Among those preferring an explanation at the beginning, some admitted that they would have been concerned about the test during the interview and might have even refused to do the whole survey. Interviewers who put forward persuasive arguments and explain the test clearly seemed to be able to mitigate the effects of a last minute explanation.
#### Attitudes towards being notified of a positive test result
After providing a sample in the survey, concerns about the test continued to stem from the suspicion about how the sample would be used rather than whether the result would be positive. Respondents were generally interested in the result and those who were slightly concerned tended to pay careful attention to phone calls and letters over the following weeks. While some respondents were happy to be informed only in the event of a positive result, others preferred to be informed regardless of whether the result was positive or negative.
1. Did [the result] worry you?
No, not really. I did think at one stage, “what have I got myself into?” and then after that it didn't worry me, no. (Female, aged 35–39 years)
2. How did you feel about not being told [unless the result is positive]?
I don't mind about that cos if there isn't anything in it then it doesn't bother me. I know that if the phone don't ring then I haven't [got it]. (Laughs) (Female, aged 35–39 years)
The research highlighted the risk that respondents who have misunderstood the purpose of the test may assume that a negative test result extends to all STDs, thus giving false reassurance.
## Discussion
This is, to our knowledge, the first time that LCR testing for C trachomatis has been incorporated into a general population survey of sexual attitudes and lifestyles in Britain. The response rate (72%) was encouraging and points to the feasibility of undertaking urine sampling as part of surveys of this nature. We are also encouraged that respondents did not feel coerced to provide the sample, and generally understood the rationale and relevance of the test.
Participation bias is a potential problem in community based screening for STDs, in that people who perceive themselves at increased risk may be more (or less) likely to avail themselves of the opportunity for screening. This bias is further compounded when screening is done as part of a survey of sexual attitudes and lifestyles, where survey participants may be more likely to report high risk sexual lifestyles.12 The presence of another individual in the household was the only factor (in the quantitative analysis) significantly associated with the decision not to provide a urine sample. However, with a small sample size, we have low power to detect differences in response related to behavioural variables. This will therefore be examined in greater detail in the main survey.
In general, urine testing was acceptable to participants despite having invited respondents to participate only after having completed the main questionnaire. This timing was agreed so as to avoid jeopardising participation in the main study. This practice of sequential informed consent for multiple component surveys has been used previously in other surveys where biological samples are collected.13, 14 Despite the relatively short notice, we were reassured that respondents did not feel coerced into participating. Qualitative interviews suggested that the attitudes and professionalism of the interviewers and the authority of the researchers and research project offset the anticipated embarrassment and uncertainty.
The benefits of incorporating biological end points in a population based survey of sexual lifestyles are many. We will obtain a greater understanding of the epidemiology of C trachomatis and its association with sexual behaviours, and a range of demographic factors, which will help guide screening strategies. As the study is a population based probability sample, many of the biases associated with recruiting from sexual or reproductive health clinics are avoided, and more robust population based prevalence estimates can be obtained.
The results of the feasibility study are encouraging and will inform the development of future large scale surveys of this nature. Ku and colleagues7 have, however, outlined some potential challenges to integrating behavioural and clinical data in population based surveys. Key among these are close collaboration between clinicians and survey researchers, flexibility of survey methods to optimise participation rates, and dealing effectively with ethical issues. This study highlighted more pragmatic considerations to ensure the success of community based testing for STIs (see box). Adequate investment must be placed in training interviewers, particularly about the sexually transmitted infection being examined, and simplifying the process of collecting and transporting samples. For respondents, professional and relaxed interviewers, who focus on the benefits of testing and utilise a non-coercive approach are key. The acceptability of urine testing may also be improved by providing clear information and guidance on the project. Interviewers should also make it clear at the time of recruitment that a negative result or no feedback is not a comprehensive clean bill of sexual health, and this should be reiterated in the information leaflets. This study provides justification for utilising a broadly similar approach in the 1999/2000 Second National Survey of Sexual Attitudes and Lifestyles (NATSSAL 2),15 currently in progress.
### Strategies for encouraging participation in urinary testing for sexually transmitted infections in general population surveys
Respondent strategies
• Provide clear information on Chlamydia trachomatis infection (information sheet, health promotion leaflets, and resources)
• Provide clear written and verbal instructions for how respondents should provide the sample. These should be handled in a professional and factual manner.
• Provide opaque containers for collecting urine specimens
• Inform, agree, and reiterate methods of notifying respondents about results
• Emphasise that a negative result does not provide a clean bill of health for all sexually transmitted infections. Consultation with their nearest genitourinary medicine clinic should be advised.
Interviewer strategies
1. Ensure that interviewers are well informed about genital chlamydia infection, its outcomes if untreated, and the rationale for testing
2. Simplify procedures for collecting, labelling, and transporting samples
3. Ensure that procedures are understood and agreed by interviewers
4. Explore interviewers' health and hygiene concerns and try to resolve them where possible (reassurance, provision of gloves to handle specimen containers, refrigeration facilities)
5. Provide feedback and encouragement on progress of recruitment and recruitment strategies.
## Acknowledgments
Conflict of interest: None.
The study was funded by a grant from the Medical Research Council. We thank the team of interviewers of the National Centre for Social Research (formerly Social and Community Planning Research) and the participants for their contributions. We also acknowledge the significant contribution of Jane Wadsworth, a principal investigator, who died during the course of this project. We would like to thank Sundhiya Mandalia for preliminary analyses undertaken on the dataset.
Contributors: KF was the study physician who managed the study and was the lead writer of the paper; AJ, JF, and KW were principal investigators, who along with KF and BE developed the methodology for the chlamydia study and questionnaire design; AC is the study statistician and undertook the statistical analysis presented in this paper; CC and GR were microbiological laboratory collaborators who undertook the LCR testing and provided comments on the draft; KM, GE, and KW designed and undertook the qualitative interviewing and contributed to the paper writing.
## Footnotes
• website extra
A table detailing response rates appears on the STI website. | 2023-03-27 05:06:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27755028009414673, "perplexity": 4359.803218478959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00298.warc.gz"} |
https://math.stackexchange.com/questions/3338473/simplifying-sin-frac7a215-circ-sin-frac3a2-75-circ-cos-fr | # Simplifying $\sin(\frac{7A}{2}+15^{\circ})\sin(\frac{3A}{2}-75^{\circ})+\cos(\frac{7A}{2}+15^{\circ})\cos(\frac{3A}{2}-75^{\circ})$
Can someone help me with simplifying this expression:
$$\sin \left( \dfrac{7A}{2} + 15^{\circ} \right) \sin \left( \dfrac{3A}{2} - 75^{\circ} \right) + \cos \left( \dfrac{7A}{2} + 15^{\circ} \right) \cos \left( \dfrac{3A}{2} - 75^{\circ} \right)$$
I know that $$\sin x \sin y + \cos x \cos y = \cos(x-y)$$, but this way I only get the expression:
$$\cos(2A+100^{\circ})$$
The problem is that I have to find the correct solution among the following ones:
• $$-2\sin A \cos A$$
• $$\cos ^2 A - \sin ^2 A$$
• $$- \sin A$$
• $$\cos A$$
Maybe I shouldn't have used this trigonometric identity but another one?
If we set $$x = \frac{3A}{2} - 75^\circ$$ and $$y = \frac{7A}{2} + 15^\circ$$ then $$x - y = -2A - 90^\circ$$ so \begin{align*} \cos(x - y) & = \cos(-2A - 90^\circ)\\ & = \cos(2A + 90^\circ) \end{align*} Can you take it from here?
• You needn't expand $\cos(2A+π/2)$ again. Just note that $\cos(\phi+π/2)=-\sin\phi.$ – Allawonder Aug 29 at 19:36
• In that case you should just have stopped at $\cos(2A+π/2).$ – Allawonder Aug 29 at 19:38
• @AleksandraAsanin The identity $\cos(x-y)=\cos x\cos y+\sin x\sin y$ is symmetric in its variables, so whichever of the arguments you set to $x$ and $y$ does not matter. You should get the same result if you make no mistakes. You do not include your work, but perhaps you're unconsciously taking $15$ to be $25.$ Such things happen. :) – Allawonder Aug 29 at 19:56 | 2019-11-13 00:05:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692032098770142, "perplexity": 329.07200156896596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665809.73/warc/CC-MAIN-20191112230002-20191113014002-00342.warc.gz"} |
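For completeness, picking up from the answer's hint: the remaining step uses the co-function identity $\cos(\theta + 90^\circ) = -\sin\theta$ together with the double-angle formula,

$$\cos(2A + 90^\circ) = -\sin 2A = -2\sin A\cos A,$$

which matches the first option in the list.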
https://docs.rc.uab.edu/workflow_solutions/using_anaconda/ | # Anaconda¶
Python is a high level programming language that is widely used in many branches of science. As a result, many scientific packages have been developed in Python, leading to the development of a package manager called Anaconda. Anaconda is the standard in Python package management for scientific research.
Benefits of Anaconda:
• Shareability: environments can be shared via human-readable text-based YAML files.
• Maintainability: the same YAML files can be version controlled using git.
• Repeatability: environments can be rebuilt using those same YAML files.
• Simplicity: dependency matrices are computed and solved by Anaconda, and libraries are pre-built and stored on remote servers for download instead of being built on your local machine.
• Ubiquity: nearly all Python developers are aware of the usage of Anaconda, especially in scientific research, so there are many resources available for learning how to use it, and what to do if something goes wrong.
Anaconda can also install Pip and record which Pip packages are installed, so Anaconda can do everything Pip can, and more.
Important
If using Anaconda on Cheaha, please see our Anaconda on Cheaha page for important details and restrictions.
## What is my best solution for installing Anaconda?
If you are using a local machine or doing general purpose software development, or have a particular package in mind, go here to install Anaconda.
If you are using a virtual machine or container, go here to install Miniconda.
If you are using Cheaha, go here for how to use Anaconda on Cheaha.
### Installing Anaconda
The full Anaconda install is a good choice if you are using a local machine, or doing general Python development work, or have a particular scientific package in mind.
Anaconda installation instructions are located here: https://docs.anaconda.com/anaconda/install/index.html.
### Installing Miniconda
Miniconda is a lightweight version of Anaconda. While Anaconda's base environment comes with Python, the Scipy stack, and other common packages pre-installed, Miniconda comes with no packages installed. This is an excellent alternative to the full Anaconda installation for environments where minimal space is available or where setup time is important, like virtual machines and containers.
Miniconda installation instructions are located here: https://docs.conda.io/en/latest/miniconda.html.
## Using Anaconda
Anaconda is a package manager, meaning it handles all of the difficult mathematics and logistics of figuring out exactly what versions of which packages should be downloaded to meet your needs, or inform you if there is a conflict.
Anaconda is structured around environments. Environments are self-contained collections of researcher-selected packages. Environments can be swapped out with a simple command, without requiring tedious installing and uninstalling of packages or software, and without dependency conflicts between them. Environments allow researchers to work and collaborate on multiple projects, each with different requirements, all on the same computer. Environments can be created from the command line or from pre-designed or shared YAML files, and can be modified or updated as needed.
The following subsections detail some of the more common commands and use cases for Anaconda usage. More complete information on this process can be found at the Anaconda documentation.
Important
If using Anaconda on Cheaha, please see our Anaconda on Cheaha page for important details and restrictions.
### Create an Environment

In order to create a basic environment with the default packages, use the conda create command:

```shell
# create a base environment. Replace <env> with an environment name
conda create -n <env>
```

If you are trying to replicate a pipeline or analysis from another person, you can also recreate an environment using a YAML file, if they have provided one. To replicate an environment using a YAML file, use:

```shell
# replicate an environment from a YAML file named env.yml
conda env create -n <env> -f <path/to/env.yml>
```
By default, all of your conda environments are stored in /home/<user>/.conda/envs.
### Activate an Environment

From here, you can activate the environment using either source or conda:

```shell
# activate the virtual environment using source
source activate <env>

# or using conda
conda activate <env>
```

Once the environment has loaded, the command prompt should look like:

```shell
(<env>) [blazerid@c0XXX ~]$
```

Once the environment is activated, you can install whichever Python libraries you need for your analysis.
### Install Packages

To install packages using Anaconda, use the conda install command. The -c or --channel option can be used to select a specific package channel to install from. The anaconda channel is a curated collection of high-quality packages, but the very latest versions may not be available on this channel. The conda-forge channel is more open, less carefully curated, and has more recent versions.

```shell
# install the most recent version of a package
conda install <package>

# install a specific version
conda install <package>=version

# install from a specific conda channel
conda install -c <channel> <package><=version>
```
Generally, if a package needs to be downloaded from a specific conda channel, it will mention that in its installation instructions.
#### Installing Packages with Pip

Some packages are not available through Anaconda. Often these packages are available via PyPI and thus through the Python built-in Pip package manager. Pip may also be used to install locally-available packages.

```shell
# install the most recent version of a package
pip install <package>

# install a specific version, note the double equals sign
pip install <package>==version

# install a list of packages from a text file
pip install -r packages.txt
```
#### Finding Packages

You may use the Anaconda page to search for packages on Anaconda, or use Google with something like <package name> conda. To find packages in PyPI, either use the PyPI page to search, or use Google with something like <package name> pip.
#### Packages for Jupyter
If you are using Anaconda with Jupyter, you will need to be sure to install the ipykernel package for your environment to be recognized by the Jupyter Server. If you are using Jupyter in Open OnDemand then you do not need to install the jupyter package.
### Deactivating an Environment

An environment can be deactivated using the following command.

```shell
# using conda
conda deactivate
```

Anaconda may say that using source deactivate is deprecated, but the environment will still be deactivated.

Closing the terminal will also close out the environment.
### Working with Environment YAML Files
#### Exporting an Environment

To easily share environments with other researchers or replicate them on a new machine, it is useful to create an environment YAML file. You can do this using:

```shell
# activate the environment if it is not active already
conda activate <env>

# export the environment to a YAML file
conda env export > env.yml
```
#### Creating an Environment from a YAML File

To create an environment from a YAML file env.yml, use the following command.

```shell
conda env create --file env.yml
```
#### Replicability versus Portability
An environment with only python 3.10.4, numpy 1.21.5 and jinja2 2.11.2 installed will output something like the following file when conda env export is used. This file may be used to precisely replicate the environment as it exists on the machine where conda env export was run. Note that the versioning for each package contains two = signs. The code like he774522_0 after the second = sign contains hyper-specific build information for the compiled libraries for that package. Sharing this exact file with collaborators may result in frustration if they do not have the exact same operating system and hardware as you, and they would not be able to build this environment. We would say that this environment file is not very portable.
There are other portability issues:
• The prefix: C:\... line is not used by conda in any way and is deprecated. It also shares system information about file locations which is potentially sensitive information.
• The channels: group uses - defaults, which may vary depending on how you or your collaborator has customized their Anaconda installation. It may result in packages not being found, resulting in environment creation failure.
```yaml
name: test-env
channels:
  - defaults
dependencies:
  - blas=1.0=mkl
  - bzip2=1.0.8=he774522_0
  - ca-certificates=2022.4.26=haa95532_0
  - certifi=2021.5.30=py310haa95532_0
  - intel-openmp=2021.4.0=haa95532_3556
  - jinja2=2.11.2=pyhd3eb1b0_0
  - libffi=3.4.2=h604cdb4_1
  - markupsafe=2.1.1=py310h2bbff1b_0
  - mkl=2021.4.0=haa95532_640
  - mkl-service=2.4.0=py310h2bbff1b_0
  - mkl_fft=1.3.1=py310ha0764ea_0
  - mkl_random=1.2.2=py310h4ed8f06_0
  - numpy=1.21.5=py310h6d2d95c_2
  - numpy-base=1.21.5=py310h206c741_2
  - openssl=1.1.1o=h2bbff1b_0
  - pip=21.2.4=py310haa95532_0
  - python=3.10.4=hbb2ffb3_0
  - setuptools=61.2.0=py310haa95532_0
  - six=1.16.0=pyhd3eb1b0_1
  - sqlite=3.38.3=h2bbff1b_0
  - tk=8.6.11=h2bbff1b_1
  - tzdata=2022a=hda174b7_0
  - vc=14.2=h21ff451_1
  - vs2015_runtime=14.27.29016=h5e58377_2
  - wheel=0.37.1=pyhd3eb1b0_0
  - wincertstore=0.2=py310haa95532_2
  - xz=5.2.5=h8cc25b3_1
  - zlib=1.2.12=h8cc25b3_2
prefix: C:\Users\user\Anaconda3\envs\test-env
```
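One mechanical way to turn an exported file like the one above into something more portable is to strip the build strings and the prefix: line. The sketch below is a stdlib-only Python helper written for this page (strip_build_strings is a hypothetical name, not part of conda):

```python
import re

def strip_build_strings(exported_yaml: str) -> str:
    """Make a `conda env export` file more portable: drop the
    machine-specific `prefix:` line and cut each dependency's build
    string (the part after the second '=')."""
    out = []
    for line in exported_yaml.splitlines():
        if line.strip().startswith("prefix:"):
            continue  # machine-specific path; conda ignores it anyway
        # '  - numpy=1.21.5=py310h6d2d95c_2' -> '  - numpy=1.21.5'
        match = re.match(r"^(\s*-\s*[^=\s]+=[^=\s]+)=\S+$", line)
        out.append(match.group(1) if match else line)
    return "\n".join(out)

exported = (
    "name: test-env\n"
    "channels:\n"
    "  - defaults\n"
    "dependencies:\n"
    "  - numpy=1.21.5=py310h6d2d95c_2\n"
    "  - python=3.10.4=hbb2ffb3_0\n"
    "prefix: C:\\Users\\user\\Anaconda3\\envs\\test-env"
)
print(strip_build_strings(exported))
```

You would still want to trim the result down to just the packages you explicitly installed, and review the channels, as discussed next.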
To make this a more portable file, suitable for collaboration, some planning is required. Instead of using conda env export we can build our own file. Create a new file called env.yml using your favorite text editor and add the following. Note we've only listed exactly the packages we installed, and their version numbers, only. This allows Anaconda the flexibility to choose dependencies which do not conflict and do not contain unusable hyper-specific library build information.
```yaml
name: test-env
channels:
  - anaconda
dependencies:
  - jinja2=2.11.2
  - numpy=1.21.5
  - python=3.10.4
```
This is a much more readable and portable file suitable for sharing with collaborators. We aren't quite finished though! Some scientific packages on the conda-forge channel, and on other channels, can contain dependency errors. Those packages may accidentally pull a version of a dependency that breaks their code.
For example, the package markupsafe made a not-backward-compatible change (a breaking change) to their code between 2.0.1 and 2.1.1. Dependent packages expected 2.1.1 to be backward compatible, so their packages allowed 2.1.1 as a substitute for 2.0.1. Since Anaconda chooses the most recent version allowable, package installs broke. To work around this for our environment, we would need to modify the environment to "pin" that package at a specific version, even though we didn't explicitly install it.
```yaml
name: test-env
channels:
  - anaconda
dependencies:
  - jinja2=2.11.2
  - markupsafe=2.0.1
  - numpy=1.21.5
  - python=3.10.4
```
Now we can be sure that the correct versions of the software will be installed on our collaborator's machines.
Note
The example above is provided only for illustration purposes. The error has since been fixed, but the example above really happened and is helpful to explain version pinning.
#### Good Software Development Practice
Building on the example above, we can bring in good software development practices to ensure we don't lose track of how our environment is changing as we develop our software or our workflows. If you've ever lost a lot of hard work by accidentally deleting an important file, or forgetting what changes you've made that need to be rolled back, this section is for you.
Efficient software developers live the mantra "Don't repeat yourself". Part of not repeating yourself is keeping a detailed and meticulous record of changes made as your software grows over time. Git is a way to have the computer keep track of those changes digitally. Git can be used to save changes to environment files as they change over time. Remember that each time your environment changes to commit the output of Exporting your Environment to a repository for your project.
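As a small illustration of why keeping env.yml under version control pays off, here is a stdlib-only Python sketch (written for this page; env_dependencies is a hypothetical helper, not a conda API) that compares two saved versions of an environment file and reports which pins changed:

```python
def env_dependencies(yaml_text: str) -> dict:
    """Parse flat '- name=version' entries of an environment file into
    {name: version}; any build string after a second '=' is dropped."""
    deps = {}
    for line in yaml_text.splitlines():
        entry = line.strip()
        if entry.startswith("- ") and "=" in entry:
            name, _, version = entry[2:].partition("=")
            deps[name] = version.split("=")[0]
    return deps

old = "dependencies:\n  - numpy=1.21.5\n  - python=3.10.4"
new = "dependencies:\n  - numpy=1.23.0\n  - python=3.10.4\n  - jinja2=2.11.2"

old_d, new_d = env_dependencies(old), env_dependencies(new)
changed = {name: (old_d.get(name), new_d.get(name))
           for name in sorted(old_d.keys() | new_d.keys())
           if old_d.get(name) != new_d.get(name)}
print(changed)  # {'jinja2': (None, '2.11.2'), 'numpy': ('1.21.5', '1.23.0')}
```

In practice `git diff env.yml` gives you the same information; the point is that none of this is possible unless the file is committed each time the environment changes.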
## Speeding Things up with Mamba

Mamba is an alternative to Anaconda that uses libsolv and parallel processing to install environments more quickly, sometimes by an order of magnitude. Mamba will also discover conflicts very quickly. Mamba is available as a package via Anaconda. Currently Mamba cannot be installed on Cheaha, only on self-managed systems like cloud.rc instances. To install it, use the following.

```shell
conda activate base
conda update --all
conda install -n base -c conda-forge mamba
```
Warning
Mamba must be installed in the base environment to function correctly! If you are using Cheaha, and cannot install in the base environment, see our workaround here
Last update: November 7, 2022 | 2022-12-04 19:06:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3085401654243469, "perplexity": 4860.117372713035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710978.15/warc/CC-MAIN-20221204172438-20221204202438-00143.warc.gz"} |
https://texnik.dante.de/cgi-bin/mainFAQ.cgi?file=IfThen/ifthen | TeXnik - Tips TeX and LaTeX
Welcome to the TeXnik website (under construction)
`ifthen` package
comment
By default the text in a comment environment is totally ignored by LaTeX. With the comment package, which is often part of your local TeX installation and otherwise available at CTAN, this behaviour can be changed. In the LaTeX preamble write

```
\usepackage{comment}
\includecomment{comment}
```

and the comments will be printed in the same layout as the paragraph preceding the comment environment. With \excludecomment{comment} you can toggle printing back off.
Print selected Parts
You need the ifthen package (see the next item). In the preamble you define which part of the text should be printed:

```
\usepackage{ifthen}
\newcommand\toPrint{part1}% or part2, part3 ...
```
In the text you then write, for each part:

```
\ifthenelse{\equal{\toPrint}{part1}}{% all in red
....
the text for part 1 ...
...
}{}
```

And the same for the other parts.
ifthen package / Optional Text
If your output depends on one or more variables, you can use the ifthen package, which should be part of your local TeX installation and is otherwise available at CTAN.

In the LaTeX preamble write, for example:
```
\usepackage{ifthen}
\newboolean{PrintEquation}
\setboolean{PrintEquation}{true}
```
In your text you can use it like:

```
... blah ...
\ifthenelse{\boolean{PrintEquation}}{<latex text if true>}{<latex text if false>}
... blah ...
```
With \setboolean{PrintEquation}{false} you can toggle between the two texts.
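Putting the pieces together, a minimal complete document might look like this (the boolean name and the sample text are just placeholders):

```latex
\documentclass{article}
\usepackage{ifthen}
\newboolean{PrintEquation}
\setboolean{PrintEquation}{true}% set to {false} to print the other branch
\begin{document}
... blah ...
\ifthenelse{\boolean{PrintEquation}}%
  {Here the equation is printed.}%
  {Here the equation is omitted.}
... blah ...
\end{document}
```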
contact webmaster _at_ TeXnik.de | 2021-04-11 21:27:58 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055361151695251, "perplexity": 13626.385098232418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038065492.15/warc/CC-MAIN-20210411204008-20210411234008-00118.warc.gz"} |
https://www.amplifiedparts.com/products/transformer-hammond-plate-filament | # Transformer - Hammond, Plate & Filament
Starting at: $69.08 (save 10%; originally $76.76)
Product Options
• 400 V C.T. @ 40 mA: currently out of stock; orders will be placed on backorder.
• Universal primary with taps for 100, 120, 220 or 240 VAC, 50 / 60 Hz. (Except the 260A6 which is 117 V, 50 / 60 Hz only)
• Designed for preamps, low power amps, general replacement, test equipment, etc.
• Economical, open frame, chassis mount - two hole (.187 dia. = G) channel bracket (figure #1) or four hole mounting (figure #2).
• Units include a Faraday shield between the primary and secondary windings. Our electrostatic shield reduces the capacitive coupling from the primary - greatly attenuating higher frequency current coupling to the secondaries.
• For more selection check out our P-T300-X series of universal plate & filament transformers.
Notes:
1. These units are designed to run with BOTH primary windings energized for maximum efficiency (see wiring table below).
2. The Faraday shield lead - the gray wire - marked SH (shield), should remain grounded to the mounting bracket & in turn to the chassis.
Part No. VA A.C. High Voltage A.C. Filament A.C. Filament
Secondary #1 Secondary #2 Secondary #3
RMS RMS RMS
P-T260A 22 400 V C.T. @ 40 mA 6.3 V C.T. @ 1 A -
P-T260C 65 500 V C.T. @ 85 mA 5 V @ 2 A 6.3 V C.T. @ 2 A
P-T260E 80 450 V C.T. @ 115 mA 5 V @ 2 A 6.3 V C.T. @ 3 A
Part No. Dimensions
A B C D E G
P-T260A 3.25 2.00 2.00 2.81 - 0.19
P-T260C 4.03 2.65 2.62 3.56 - 0.19
P-T260E 3.00 3.06 2.53 2.50 2.40 0.21 x 0.38
RoHS Compliant
SKU: P-T260X
Item ID: 010315
Brand: Hammond
Part Numbers
400 V C.T. @ 40 mA   P-T260A   623980008127
450 V C.T. @ 115 mA  P-T260E   623980008141
500 V C.T. @ 85 mA   P-T260C   623980008134
Spec                               P-T260A (400 V C.T. @ 40 mA)   P-T260E (450 V C.T. @ 115 mA)   P-T260C (500 V C.T. @ 85 mA)
Item Height                        2.0 in.                        2.53 in.                        2.62 in.
Item Length                        3.25 in.                       3.00 in.                        4.03 in.
Item Width                         2.0 in.                        3.06 in.                        2.65 in.
Mounting Hole Center to Center A   2.81 in.                       2.50 in.                        3.56 in.
Mounting Hole Center to Center B   -                              2.40 in.                        -
Mounting Hole Diameter             0.19 in.                       -                               0.19 in.
Mounting Hole Length               -                              0.38 in.                        -
Mounting Hole Width                -                              0.21 in.                        -
Primary (all models)               100 / 120 / 220 / 240 VAC @ 50 / 60 Hz
Secondary 1                        400 V C.T. @ 40 mA             450 V C.T. @ 115 mA             500 V C.T. @ 85 mA
Secondary 2                        6.3 V C.T. @ 1 A               5 V C.T. @ 2 A                  5 V C.T. @ 2 A
Secondary 3                        -                              6.3 V C.T. @ 3 A                6.3 V C.T. @ 2 A
VA Rating                          22                             80                              65
Packaging Dimensions               1 in. × 1 in. × 1 in.          3 in. × 3.06 in. × 2.53 in.     4.03 in. × 2.65 in. × 2.62 in.
Weight (Packaging)                 1.406 lbs.                     3.4 lbs.                        2.77 lbs.
Specification Sheet All Models
## Specifications, Files, and Documents
Specification Sheet 265.92 KB | 2020-02-19 19:14:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17486870288848877, "perplexity": 12864.244737845995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144167.31/warc/CC-MAIN-20200219184416-20200219214416-00197.warc.gz"} |
http://clear-lines.com/blog/post/Safe-refactoring-with-FSharp-Units-of-Measure.aspx | Mathias Brandewinder on .NET, F#, VSTO and Excel development, and quantitative analysis / machine learning.
19. October 2013 10:33
A couple of weeks ago, I had the pleasure to attend Progressive F# Tutorials in NYC. The conference was fantastic – two days of hands-on workshops, great organization by the good folks at SkillsMatter, Rickasaurus and Paul Blasucci, and a great opportunity to exchange with like-minded people, catch up with old friends and make new ones.
As an aside, if you missed NYC, fear not – you can still get tickets for Progressive F# Tutorials in London, coming up October 31 and November 1 in London.
After some discussion with Phil Trelford, we decided it would be a lot of fun to organize a workshop around PacMan. Phil has a long history with game development, and a lot of wisdom to share on the topic. I am a total n00b as far as game programming goes, but I thought PacMan would make a fun theme to hack some AI, so I set to refactor some of Phil’s old code, and transform it into a “coding playground” where people could tinker with how PacMan and the Ghosts behave, and make them smarter.
Long story short, the refactoring exercise turned out to be a bit more involved than what I had initially anticipated. First, games are written in a style which is pretty different from your run-of-the-mill business app, and getting familiar with a code base that didn’t follow a familiar style wasn’t trivial.
So here I am, trying to refactor that unfamiliar and somewhat idiosyncratic code base, and I start hitting stuff like this:
let ghost_starts =
[
"red", (16, 16), (1,0)
"cyan", (14, 16), (1,0)
"pink", (16, 14), (0,-1)
"orange", (18, 16), (-1,0)
]
|> List.map (fun (color,(x, y), v) ->
// some stuff happens here
{ … X = x * 8 - 7; Y = y * 8 - 3; V = v; … }
)
This is where I begin to get nervous. I need to get this done quickly, and factor our functions, but I am really worried to touch any of this. What’s X and Y? Why 8, 7 or 3?
Part of the problem here is that the game merges two approaches: it is tile-based (the maze layout is built from square tiles), but also pixel-based, for the creatures movement and collisions. Being able to see more clearly what part of the code is dealing with pixels vs. tiles would be very helpful at that point.
And then it hits me – Units of Measure to the rescue!
What I really need is a mechanism that distinguishes between 8 tiles and 8 pixels, so that I don’t accidentally mix one and the other. That is exactly what Units of Measure are for: instead of integers everywhere, I can define a Pixel unit in one line:
[<Measure>] type pix
I can now annotate the parts that I know are Pixels, like this:
let TileSize = 8<pix>
or this:
type Ghost = {
// more stuff omitted
X : int<pix>
Y : int<pix>
V : int<pix> * int<pix> }
Hit build, and everything breaks. This is a good thing – now the compiler is helping me out. Now that I told the compiler that some of the integers were actually pixels, it’s pointing out all the places where pixels should be passed, and I just have to go through the code and review everything that broke to know where these pixels are used.
I can start clarifying the code:
let ghost_starts =
[
"red", (16, 16), (1<pix>, 0<pix>)
"cyan", (14, 16), (1<pix>, 0<pix>)
"pink" , (16, 14), (0<pix>, -1<pix>)
"orange" , (18, 16), (-1<pix>, 0<pix>)
]
|> List.map (fun (color,(x,y),v) ->
// code omitted here
{ ...; X = x * TileSize - 7<pix>; Y = y * TileSize - 3<pix>; V = v; ... }
)
This is great – now, I see that (16, 16) is not pixels, but the initial tile position of the Red Ghost, whereas (1, 0) is its velocity in pixels. I can refactor left and right, without having to write a single unit test, with a great sense of safety. Types are awesome.
So what’s the moral of the story here?
First, I have usually seen Units of Measure come up in the context of scientific computation. It's an obvious use case: with very little work, you can make sure that you are not adding apples and oranges. This is handy if you don't want to blow up equipment worth 125 million dollars in space, for instance. On the other hand, scientific computation is a bit of a niche topic, which would seem to make that feature marginally useful. This example was interesting to me, because it shows how Units of Measure are an incredibly powerful debugging tool, applicable in areas that have nothing to do with science. Add a couple of annotations to your code, and the compiler will pick up the hints and help you track down how the code works, at very little cost.
Then, adding Units of Measure gave me a deeper understanding of the code base. While I had realized that there was a duality between tiles and pixels in how the game worked, trying to fix one of the functions pointed out something else, the implicit presence of time in the game. If you think about it, the unit (1<pix>, 0<pix>) on the ghost is slightly incorrect (if there is such a thing as “partly correct”…): what it represents is really a velocity, i.e. how many pixels per frame the creature is moving, and the correct unit should probably be 1<pix/frame>. In this case, it didn’t really matter, because all creatures moved at constant speed, and I ended up ignoring the issue; however, if speed could change, I am pretty sure separating positions in pixels vs. speed in pixels per frame would again clarify the inner workings of the code a lot. | 2014-03-11 23:06:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20817452669143677, "perplexity": 1295.9504732150997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011342618/warc/CC-MAIN-20140305092222-00035-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/1515864/modal-logic-of-contingency-and-necessity-operators | # Modal logic of contingency and necessity operators?
Well, let me say that this is a challenging question; I am stuck on it myself :(
I am fully aware of modal systems for necessity, i.e. being true in every accessible possible world, and possibility, i.e. being true in at least one accessible possible world. I am also aware of modal systems for contingency, i.e. being both possibly true and possibly false, and non-contingency, i.e. being necessarily true or necessarily false. Both of these modal pairs are dual in the normal sense. However, I am looking for a modal system relating contingency and necessity, hereafter $C$ and $N$. Notice that they are not dual like the two previous pairs; in fact only one direction of duality holds between them, namely $Np\rightarrow\lnot Cp$, but $\lnot Cp\rightarrow(Np\lor N\lnot p)$. I could not find any modal axiomatization of this pair. Suppose we want the accessibility relation to be a partial order that is non-total, non-dense, non-well-founded and non-convergent. So far I know that axioms $K$, $T$ and $4$ hold for this pair and axiom $5$ does not. So the logic should be at least as strong as $S4$ but weaker than $S5$. Do you know any other interesting axiom which may hold for this pair? I think axioms $M$ and $G$ do not hold either. I would appreciate any suggestion about further axioms which hold for this pair. Thank you in advance!
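The one-way duality noted above is easy to machine-check on a tiny Kripke frame. The following sketch is my own choice of frame and valuation (a two-world reflexive, transitive model), not anything from the question; it confirms that $Np\rightarrow\lnot Cp$ holds at every world while the converse fails:

```python
# Tiny Kripke-model check of the one-way duality between necessity N and
# contingency C. The two-world frame and valuation below are illustrative.

worlds = {0, 1}
R = {(0, 0), (0, 1), (1, 1)}   # reflexive, transitive accessibility
p = {0}                        # p holds only at world 0

def acc(w):
    return {v for (u, v) in R if u == w}

def N(w):   # necessarily p: p holds at every accessible world
    return all(v in p for v in acc(w))

def C(w):   # contingently p: p at some accessible world, not-p at another
    return any(v in p for v in acc(w)) and any(v not in p for v in acc(w))

# Np -> not Cp holds at every world ...
assert all((not N(w)) or (not C(w)) for w in worlds)

# ... but not Cp -> Np fails: at world 1, p is impossible,
# so not Cp holds while Np does not.
assert not C(1) and not N(1)
```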
Contingency can be somehow defined in terms of necessity: $$\operatorname{C}p\leftrightarrow(\lnot\operatorname{N}p\,\land\,\lnot\operatorname{N}\lnot p)$$ You can add the above (defining) axiom to a system which you use to formalize necessity and have system for both necessity and contingency. | 2020-05-26 13:21:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8365270495414734, "perplexity": 499.50893822224896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390758.21/warc/CC-MAIN-20200526112939-20200526142939-00462.warc.gz"} |
https://www.physicsforums.com/threads/better-integration-strategy.392348/ | # Better integration strategy?
I'm doing a HW problem that involved trying to find Fourier series. I know how to do it, but I'm really bothered by the fact that I have to do so much trigonometric integration by parts. my question is, is there any better way to solve the problem:
<int,indef.,dt>{ sin(a*t)*cos(b*t) } or is the only way to chug along with the two messy layers of integration by parts?
also, it seems like i'm missing something: how do you write pretty-print equations in posts?
edit: i should add, that the problem is asking for a fourier series for a half-wave rectified sine, but that isn't vital to the question
gabbagabbahey
Homework Helper
Gold Member
I'm doing a HW problem that involved trying to find Fourier series. I know how to do it, but I'm really bothered by the fact that I have to do so much trigonometric integration by parts. my question is, is there any better way to solve the problem:
<int,indef.,dt>{ sin(a*t)*cos(b*t) } or is the only way to chug along with the two messy layers of integration by parts?
Why are you concerned with the indefinite integral? Usually one integrates over a half-period interval in order to take advantage of the orthogonality of sine and cosine.
also, it seems like i'm missing something: how do you write pretty-print equations in posts?
See my signature.
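For what it's worth, the usual way around repeated integration by parts for this integrand is the product-to-sum identity sin(a t)·cos(b t) = ½[sin((a+b)t) + sin((a−b)t)], which reduces the product to plain sines that integrate directly. A quick numerical spot-check of the identity (sample values are arbitrary):

```python
import math

# Spot-check the product-to-sum identity that avoids integration by parts:
#   sin(a t) cos(b t) = 0.5 * (sin((a+b) t) + sin((a-b) t))
for a, b, t in [(2.0, 3.0, 0.7), (5.0, 1.0, -1.3), (0.5, 0.5, 2.0)]:
    lhs = math.sin(a * t) * math.cos(b * t)
    rhs = 0.5 * (math.sin((a + b) * t) + math.sin((a - b) * t))
    assert math.isclose(lhs, rhs, rel_tol=0, abs_tol=1e-12)
```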
gabbagabbahey,
thanks for the reply. I'm only concerned because it seems like I'm doing it the hard way - after 1.5 pages of math, I'm starting to feel like I'm mowing the lawn with a pair of nail clippers (or mopping a gym floor with a toothbrush, take your pick of metaphor). I just wanted to make sure that by parts is the only reasonable way of doing it and I'm not missing any good shortcut. | 2020-05-31 04:52:04 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8009878993034363, "perplexity": 335.17875946647473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410745.37/warc/CC-MAIN-20200531023023-20200531053023-00375.warc.gz"} |
http://mathhelpforum.com/calculus/57380-substitution-integrate.html | 1. ## Substitution and Integrate
The integral is 1/(1+cubert[x]) dx from 0 to 1. I'm supposed to make a substitution to express the integrand as a rational function and then evaluate the integral.
I got the part of x=u^2-1 and dx=2du but plugging into the equation and going from there I'm getting messed up somewhere.
2. Originally Posted by sfgiants13
The integral is 1/(1+cubert[x]) dx from 0 to 1. I'm supposed to make a substitution to express the integrand as a rational function and then evaluate the integral.
I got the part of x=u^2-1 and dx=2du but plugging into the equation and going from there I'm getting messed up somewhere.
$\int\frac{dx}{1+\sqrt[3]{x}}\overbrace{\mapsto}^{z^3=x}\int\frac{3z^2}{1+z }dz$ | 2017-06-25 08:08:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.977443277835846, "perplexity": 515.9972134516344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320443.31/warc/CC-MAIN-20170625064745-20170625084745-00516.warc.gz"} |
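The substitution $z^3 = x$ above turns the integral into $\int_0^1 \frac{3z^2}{1+z}\,dz$, and polynomial division ($3z^2/(1+z) = 3z - 3 + 3/(1+z)$) evaluates it to $3\ln 2 - \tfrac32$. A quick Simpson's-rule check of the transformed (now smooth) integrand, as a sketch with my own helper names:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# after z^3 = x (so dx = 3 z^2 dz), the integrand 1/(1 + x^(1/3)) becomes:
g = lambda z: 3 * z * z / (1 + z)

numeric = simpson(g, 0.0, 1.0)
closed_form = 3 * math.log(2) - 1.5   # from 3*(z^2/2 - z + ln(1+z)) on [0, 1]
assert math.isclose(numeric, closed_form, abs_tol=1e-9)
```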
https://math.stackexchange.com/questions/1887861/let-a-b-c-positive-integers-such-that-left1-frac1a-right-left1-frac/1887869 | # Let $a,b,c$ positive integers such that $\left(1+\frac{1}{a}\right)\left(1+\frac{1}{b}\right)\left(1+\frac{1}{c}\right) = 3$. Find those triples.
Full question: Let $a$,$b$,$c$ be three positive integers such that $\left(1+\frac{1}{a}\right)\left(1+\frac{1}{b}\right)\left(1+\frac{1}{c}\right) = 3$. Find those triples.
This is actually a national competition question in Vietnam (Violympic), which I have attended (and did poorly, but had a lot of fun).
I have understood almost every questions asked that day, but this one really makes my head pop, because I haven't learn much about integer solution equation and how to solve hard cases (like that one)
I have solved that a,b,c can't be all the same, because:
$$a = b = c$$ $$\implies\left(1+\frac{1}{a}\right)\left(1+\frac{1}{b}\right)\left(1+\frac{1}{c}\right)=\left(1+\frac{1}{a}\right)^3=3$$ $$\iff 1+\frac{1}{a}=\sqrt[3]{3}$$ $$\iff \frac{1}{a}=\sqrt[3]{3}-1$$ $$\iff a=b=c=\frac{1}{\sqrt[3]{3}-1},$$ which is not an integer.
And using wolframalpha.com, I found out that $(a,b,c) \in \{(1,3,8),(1,4,5),(2,2,3)\}$, but stuck at how to solve it.
Thank you in advance for checking out. I really appreciate your effort.
Suppose $a\geq 3$, then $$\left(1+\frac{1}{a}\right)\left(1+\frac{1}{b}\right)\left(1+\frac{1}{c}\right)\leq\left(1+\frac{1}{3}\right)^3=\frac{64}{27}<3$$ (assuming without loss of generality that $a\leq b\leq c$), a contradiction. Hence $a=1$ or $a=2$.
If $a=1$, it comes to solve $(1+1/b)(1+1/c)=3/2$. The same trick shows $b<5$. Now one may simply list all possible values of $b$.
The case $a=2$ can be solved similarly. | 2019-05-22 02:32:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9453584551811218, "perplexity": 266.7966216854333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256724.28/warc/CC-MAIN-20190522022933-20190522044933-00402.warc.gz"} |
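The case analysis above can be confirmed by brute force with exact rational arithmetic. The search window of 50 below is a generous assumption of mine: the averaging argument caps $a$ at 2, and the same trick then forces $b$ and $c$ to be small (no solution has $c > 8$):

```python
from fractions import Fraction

# Exhaustive check (exact arithmetic) that the only ordered triples
# a <= b <= c with (1 + 1/a)(1 + 1/b)(1 + 1/c) = 3 are the three expected.
solutions = []
for a in range(1, 51):
    for b in range(a, 51):
        for c in range(b, 51):
            prod = (1 + Fraction(1, a)) * (1 + Fraction(1, b)) * (1 + Fraction(1, c))
            if prod == 3:
                solutions.append((a, b, c))

assert solutions == [(1, 3, 8), (1, 4, 5), (2, 2, 3)]
```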
https://www.dsprelated.com/freebooks/pasp/Interpolation_Uniformly_Spaced_Samples.html | ### Interpolation of Uniformly Spaced Samples
In the uniformly sampled case ( for some sampling interval ), a Lagrange interpolator can be viewed as a Finite Impulse Response (FIR) filter [449]. Such filters are often called fractional delay filters [267], since they are filters providing a non-integer time delay, in general. Let denote the impulse response of such a fractional-delay filter. That is, assume the interpolation at point is given by
where we have set for simplicity, and used the fact that for in the case of "true interpolators" that pass through the given samples exactly. For best results, should be evaluated in a one-sample range centered about . For delays outside the central one-sample range, the coefficients can be shifted to translate the desired delay into that range.
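As a concrete sketch of such an FIR fractional-delay filter: the order-$N$ Lagrange taps for a desired delay $D$ (in samples) are given by the standard formula $h[n]=\prod_{k\ne n}\frac{D-k}{n-k}$. The function name below is my own; the checks illustrate the "true interpolator" property mentioned above:

```python
# Order-N Lagrange fractional-delay FIR taps for a desired delay D (samples):
#   h[n] = prod over k != n of (D - k) / (n - k),   n = 0 .. N
def lagrange_fd_taps(N, D):
    taps = []
    for n in range(N + 1):
        h = 1.0
        for k in range(N + 1):
            if k != n:
                h *= (D - k) / (n - k)
        taps.append(h)
    return taps

# "True interpolator" property: an integer delay degenerates to a pure shift,
# so the given samples are passed through exactly.
taps = lagrange_fd_taps(3, 1.0)
assert all(abs(t - e) < 1e-12 for t, e in zip(taps, [0.0, 1.0, 0.0, 0.0]))

# The taps form a partition of unity: a constant signal is simply delayed.
assert abs(sum(lagrange_fd_taps(3, 1.5)) - 1.0) < 1e-12
```

As the text notes, accuracy is best when the fractional part of $D$ lies in the central one-sample range of the filter.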
Next Section:
Fractional Delay Filters
Previous Section:
Large Delay Changes | 2020-09-24 01:35:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.951127827167511, "perplexity": 1194.6430327214434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213006.47/warc/CC-MAIN-20200924002749-20200924032749-00179.warc.gz"} |
http://www.math.tau.ac.il/~nogaa/FREIMAN/ | FreimanFest: A meeting celebrating Gregory Freiman's 90th birthday
# Schedule
• July 12
• 09:30-10:00: Coffee
• 10-10:45: Oriol Serra
• 11-11:45: Melvyn Nathanson
• 12:00-13:30: Lunch
• 14:00-14:45: Yonutz Stanchescu
• 15:00-15:45: Mercede Maj and Patrizia Longobardi
• 15:45-16:15: Coffee
• July 13
• 09:30-10:00: Coffee
• 10-10:45: Endre Szemerédi
• 11-11:45: Vsevolod Lev
• 12:00-13:30: Lunch
• 14:00-14:45: Jean-Marc Deshouillers
• 15:00-15:45: Emmanuel Breuillard
• 15:45-16:15: Coffee
• 16:15-16:45: Gregory Freiman and closing remarks
# List of speakers
• Emmanuel Breuillard, Munster
• Jean-Marc Deshouillers, Bordeaux
• Vsevolod Lev, Haifa
• Patrizia Longobardi, Salerno
• Mercede Maj, Salerno
• Melvyn Nathanson, New York
• Oriol Serra, Barcelona
• Yonutz Stanchescu, Tel Aviv
• Endre Szemerédi, Budapest
# Titles and Abstracts
Oriol Serra, Universitat Politecnica de Catalunya, Barcelona, Spain
Title: Doubling and Volume: A conjecture of Freiman
Abstract: By the well-known Freiman-Ruzsa theorem, a set A of integers with small doubling, |A+A| \leq c|A|, is a dense subset of a multidimensional arithmetic progression. The search for efficient quantitative estimations of the density of A in the progression in terms of the doubling constant c has been the object of intense research by several authors, including Freiman, Ruzsa, Bilu, Chang, Sanders and Konyagin. Schoen obtained an asymptotic bound which is essentially best possible.
An approach to the problem is to determine the maximum volume of a set with given cardinality and doubling. The dimension of A is the largest dimension of an integer lattice containing a Freiman isomorphic copy of A not contained in a hyperplane. The additive volume of a d-dimensional set A is defined as the smallest cardinality of the convex hull of sets isomorphic to A in the d-dimensional lattice. Freiman conjectured in 2008 a formula for the value of this maximum volume.
The talk will discuss recent advancements in this direction, in particular proving the conjecture for the class of so-called chains. Some extensions and implications will be also discussed.
Joint work with G. A. Freiman.
Melvyn Nathanson, CUNY, New York, USA
Title: Every finite subset of an abelian group is an asymptotic approximate group
Abstract: If $A$ is a nonempty subset of an additive abelian group $G$, then the $h$-fold sumset is
$$hA = \{x_1 + \cdots + x_h : x_i \in A \text{ for } i=1,2,\ldots, h\}.$$
We do not assume that $A$ contains the identity, nor that $A$ is symmetric, nor that $A$ is finite. The set $A$ is an $(r,\ell)$-approximate group in $G$ if there exists a subset $X$ of $G$ such that $|X| \leq \ell$ and $rA \subseteq XA$. The set $A$ is an asymptotic $(r,\ell)$-approximate group if the sumset $hA$ is an $(r,\ell)$-approximate group for all sufficiently large $h$.
It is proved that every polytope in a real vector space is an asymptotic $(r,\ell)$-approximate group, that every finite set of lattice points is an asymptotic $(r,\ell)$-approximate group, and that every finite subset of every abelian group is an asymptotic $(r,\ell)$-approximate group.
Preprints are posted on arXiv: 1512.03130 and 1512.06478.
Yonutz Stanchescu, Afeka-Tel Aviv Academic College, Israel
Title: Small doubling in Baumslag-Solitar groups and dilates of integers
Abstract: We will present a connection between abelian sumsets estimates and growth in non-abelian groups. More precisely, we investigate small doubling problems in Baumslag-Solitar groups using new direct and inverse results for sums of dilates of integers.
Joint work with G. A. Freiman, M. Herzog, P. Longobardi and M. Maj.
Mercede Maj and Patrizia Longobardi, University of Salerno, Italy
Title: On some Freiman's problems concerning products of subsets in groups
Abstract: Let G denote an arbitrary group. If S is a subset of G, we define its square S^2 by
$$S^2=\{x_1x_2 : x_1, x_2 \in S\}.$$
In this talk we are interested in two problems, proposed by Gregory Freiman, concerning products of finite subsets in groups.
First we investigate the class DS(m), with m > 1 an integer, where a group G belongs to DS(m) if the cardinality of S^2 is less than m^2 for every subset S of G of order m.
Then we will present some recent results, obtained jointly with G.A. Freiman, M. Herzog, Y.V. Stanchescu, A. Plagne and D.J.S. Robinson concerning the structure of finite subsets S of an orderable group satisfying the inequality:
$|S^2| \leq \alpha|S| + \beta$, with $\alpha = 3$ and small values of $|\beta|$.
Endre Szemerédi, Rényi Institute, Budapest, Hungary
Title: Applications of the fundamental theorem of Freiman
Abstract: We are going to discuss some applications of Freiman's theorem in subset-sum problems. We are going to give estimations and structural results.
Vsevolod Lev, University of Haifa, Israel
Title: Quadratic residues and difference sets
Abstract: It has been conjectured by Sarkozy that with finitely many exceptions, the set of quadratic residues modulo a prime $p$ cannot be represented as a sumset $\{a+b\colon a\in A, b\in B\}$ with non-singleton $A,B\subset F_p$. The case $A=B$ of this conjecture has been established by Shkredov. The analogous problem for differences remains open: is it true that for all sufficiently large primes $p$, the set of quadratic residues modulo $p$ is not of the form $\{a'-a''\colon a',a''\in A,\,a'\ne a''\}$ with $A\subset F_p$?
We present the results of our recent paper, joint with Jack Sonn, where a presumably more tractable variant of this problem is considered: is there a set $A\subset F_p$ such that every quadratic residue has a *unique* representation as $a'-a''$ with $a',a''\in A$, and no non-residue is representable in this form? We give a number of necessary conditions for the existence of such $A$, involving for the most part the behavior of primes dividing $p-1$. These conditions enabled us to rule out all primes $p$ bigger than 13 and smaller than $10^{20}$ (the primes $p=5$ and $p=13$ being conjecturally the only exceptions).
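The small cases are easy to search exhaustively. The sketch below is my own brute force, not code from the paper; it finds such a set for both exceptional primes, e.g. $A=\{0,1,4\}$ for $p=13$, whose six ordered differences are exactly the quadratic residues mod 13:

```python
from itertools import combinations

def perfect_qr_difference_set(p):
    """Search for A in F_p whose ordered differences a' - a'' (a' != a'')
    hit each quadratic residue mod p exactly once and no non-residue."""
    qr = {(x * x) % p for x in range(1, p)}
    # need |A| * (|A| - 1) == number of residues == (p - 1) // 2
    sizes = [k for k in range(2, p) if k * (k - 1) == (p - 1) // 2]
    for k in sizes:
        for A in combinations(range(p), k):
            diffs = [(a - b) % p for a in A for b in A if a != b]
            if sorted(diffs) == sorted(qr):
                return A
    return None

assert perfect_qr_difference_set(5) is not None    # finds A = (0, 1)
assert perfect_qr_difference_set(13) is not None   # finds A = (0, 1, 4)
```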
Jean-Marc Deshouillers, University of Bordeaux, France
Title: Probabilistic methods for Waring's problem: old and new results
Abstract: TBA
Emmanuel Breuillard, Universitat Munster, Germany
Title: The Elekes-Szabo theorem in higher dimension
Abstract: I will discuss the Elekes-Szabo theorem regarding expanding polynomials and its higher dimensional generalization in which a group structure is derived from a combinatorial constraint. In joint work with Hong Wang we determine the possible groups arising this way.
# Organizing Committee
Noga Alon, Jean-Marc Deshouillers, Marcel Herzog, Vsevolod F. Lev, Melvyn Nathanson, Alain Plagne, Oriol Serra, Yonutz Stanchescu | 2020-04-01 06:58:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8716684579849243, "perplexity": 1590.0831459140802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505550.17/warc/CC-MAIN-20200401065031-20200401095031-00104.warc.gz"} |
http://math.stackexchange.com/questions/507287/generating-function-solution-to-previous-question-a-n-a-lfloor-n-2-rfloor | # Generating function solution to previous question $a_{n}=a_{\lfloor n/2\rfloor}+a_{\lfloor n/3 \rfloor}+a_{\lfloor n/6\rfloor}$
In attempting to answer this question, I reduced it to a seemingly simple generating functions question, but after days of work was unable to construct a proof. Since I do not have experience trying to do asymptotics with generating functions, I would like to know if a proof is salvageable from these methods.
The problem introduces the sequence $a_n$, defined by $a_0 = 1$ and $$a_{n}=a_{\left\lfloor n/2\right\rfloor}+a_{\left\lfloor n/3 \right\rfloor}+a_{\left\lfloor n/6\right\rfloor}$$ and asks for a proof that $$\lim_{n\to\infty}\dfrac{a_{n}}{n}=\dfrac{12}{\log{432}}.$$ Writing the generating function $\displaystyle A(x) = \sum_{n \ge 0} a_n x^n$, this translates to $$A(x) = (1 + x)A(x^2) + (1 + x + x^2) A(x^3) + (1 + x + x^2 + \cdots + x^5)A(x^6) - 2$$
Even better, let $b_0 = a_0$ and $b_n = a_n - a_{n-1}$ for all $n \ge 1$, and define the generating function $\displaystyle B(x) = \sum_{n \ge 0} b_n x^n = (1 - x)A(x)$. Multiplying the above by $(1-x)$ gives $$(1 - x)A(x) = (1 - x^2)A(x^2) + (1 - x^3)A(x^3) + (1 - x^6)A(x^6) + 2x - 2$$ i.e. $$B(x) = B(x^2) + B(x^3) + B(x^6) + 2x - 2 \tag{1}$$
After unsuccessfully trying to do asymptotics with the above elegant formula, I used it to find an explicit representation of $B$, using the Delannoy Numbers:
$$B(x) = 1 + 2 \sum_{l, m \ge 0} \sum_{d \ge 0} 2^d {l \choose d}{m \choose d} x^{2^l 3^m}$$
It follows that in fact $$a_n=1+2\sum_{d\ge0}2^d\sum_{\begin{matrix}l,m\ge0\\2^l 3^m \le n\end{matrix}}{l \choose d}{m \choose d} \tag{2}$$
One can do naive bounds on the sum in (2) - replacing the condition $2^l 3^m \le n$ with $2^l 2^m \le n$ and $3^l 3^m \le n$ for upper and lower bounds, respectively. But this isn't good enough; it gives (after algebra and combinatorial work) approximately $$\frac{n^{\log_3(1 + \sqrt{2}) - 1}}{2} < \frac{a_n}{n} < \frac{n^{\log_2(1 + \sqrt{2}) - 1}}{2}$$
This seems to suggest trying to approximate (2) with the condition $(1 + \sqrt{2})^l (1 + \sqrt{2})^m \le n$, but I have no idea how to justify that.
At any rate, I've made too much of what feels like progress to give up on the problem, and if anyone can think of a way to use (2) to get a solution or else to use (1) and find the asymptotics directly, I'd be very thankful.
I recommend Dirichlet series instead of power series. Dirichlet series with coefficients $b_n$ is easy to find, then Wiener-Ikehara Tauberian Theorem solves the problem. This idea was already discussed in the AOPS forum from the link. – i707107 Jan 9 '14 at 13:24
But it will be nice to see if anyone solves directly with your formula for $b_n$. – i707107 Jan 9 '14 at 13:27
OEIS reference: oeis.org/A007731 – Michael Lugo Jun 22 '15 at 17:39
The "master theorem" by Leighton is applicable. Consider a recurrence $T(z) = g(z) + \sum_{1 \le k \le n} a_k T(b_k z + h_k(z))$ for $z \ge 0$, such that there are sufficient base cases; all $a_k > 0$ and $0 < b_k < 1$; there is a constant $c$ such that $\lvert g(z) \rvert = O(z^c)$; and all $\lvert h_k(z)\rvert = O(z /(\log z)^2)$. Then for $p$ such that $\sum_{1 \le k \le n} a_k b_k^p = 1$, the solution satisfies:
$$T(z) = \Theta \left( z^p \left( 1 + \int_1^z \frac{g(u)}{u^{p + 1}} \, \mathrm{d} u \right) \right)$$
The $h_k$ are fudge factors; they cover cases like the difference between exact fractions and their floors or ceilings.
Here we have $g(z) = 0$, $a_1 = a_2 = a_3 = 1$, $b_1 = 1/2$, $b_2 = 1/3$, and $b_3 = 1/6$, so that $p = 1$. With $g(z) = 0$, the theorem tells you that $a_n = \Theta(n)$.
A complete asymptotic solution to exactly this kind of recurrences is derived by Erdös et al "The Asymptotic Behavior of a Family of Sequences" Pacific Journal of Mathematics 126:2 (1987), pp 227-241. They use this exact recurrence as an example, and show that:
$$a_n \sim \frac{12}{\log 432} n$$
- | 2016-06-27 08:07:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.928794801235199, "perplexity": 219.14408058997827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00079-ip-10-164-35-72.ec2.internal.warc.gz"} |
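The recurrence itself is cheap to evaluate, since iterating $n\mapsto\lfloor n/2\rfloor,\lfloor n/3\rfloor,\lfloor n/6\rfloor$ only ever visits values of the form $\lfloor n/(2^i3^j)\rfloor$. A memoized sketch, matching the initial terms of OEIS A007731 cited in the comments:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    # a_0 = 1,  a_n = a_{floor(n/2)} + a_{floor(n/3)} + a_{floor(n/6)}
    if n == 0:
        return 1
    return a(n // 2) + a(n // 3) + a(n // 6)

# First terms match OEIS A007731: 1, 3, 5, 7, 9, 9, 15, ...
assert [a(n) for n in range(7)] == [1, 3, 5, 7, 9, 9, 15]

# Memoization keeps this cheap: the recursion only ever visits values of the
# form floor(n / (2^i 3^j)), so even n = 10**6 needs a few hundred states.
ratio = a(10**6) / 10**6   # drifts toward 12 / log 432, roughly 1.9775
```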
https://www.physicsforums.com/threads/gaussian-integral.326641/ | # Homework Help: Gaussian Integral
1. Jul 23, 2009
### DukeLuke
1. The problem statement, all variables and given/known data
Consider the gaussian distribution shown below
$$\rho (x) = Ae^{-\lambda (x-a)^2}$$
where A, a, and $\lambda$ are positive real constants. Use $\int_{-\infty}^{+\infty} \rho (x) \,dx = 1$ to determine A. (Look up any integrals you need)
2. Relevant equations
Given in question above
3. The attempt at a solution
My plan was to integrate the probability density set it equal to one and then solve for A. The problem is I'm getting stuck on the integration. I started by pulling the constants out of the integral and doing the substitution $u=x-a$ that left me with
$$Ae^{-\lambda} \int^{+\infty}_{-\infty} e^{u^2}\,du$$
It's been a while since calc II and I can't figure out how to do this one (even though it looks so simple). I also tried looking it up in a integral table but couldn't find it. Any help would be appreciated.
2. Jul 23, 2009
### Cyosis
$$e^{-\lambda u^2} \neq e^{-\lambda}e^{u^2}=e^{-\lambda+u^2}$$
3. Jul 23, 2009
### tiny-tim
Hi DukeLuke!
You need the erf(x) function … see http://en.wikipedia.org/wiki/Error_function
(btw, there is a way to integrate $\int e^{-u^2}\,du$: it's $\sqrt{\left(\int e^{-u^2}\,du\right)\left(\int e^{-v^2}\,dv\right)}$, written as a double integral; then change to polar coordinates )
4. Jul 23, 2009
### DukeLuke
Thanks, man am I getting rusty over the summer
I looked at it but I'm lost on how to use it solve this problem. Could you help me get started?
5. Jul 23, 2009
### D H
Staff Emeritus
Have you tried this bit of advise? You even know the relevant keywords (hint: use the title of this thread). Google is your friend.
6. Jul 23, 2009
### Cyosis
7. Jul 23, 2009
### DukeLuke
$$\int_{-\infty}^{\infty} e^{-(x+b)^2/c^2}\,dx=|c| \sqrt{\pi}$$
Thanks, using the integral above from Wikipedia $c = \frac{1}{\sqrt{\lambda}}$. From there I get $A = \frac{\sqrt{\lambda}}{\sqrt{\pi}}$.
8. Jul 24, 2009
### Feldoh
Looks correct, studying Griffiths' quantum mechanics I see :D
9. Jul 24, 2009
### DukeLuke
Yep, thought I would get a head start before the fall semester begins. | 2018-07-16 22:44:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7988574504852295, "perplexity": 1153.3620754533927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589470.9/warc/CC-MAIN-20180716213101-20180716233101-00275.warc.gz"} |
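The normalization found in post #7 can be double-checked numerically with a stdlib-only sketch (the specific λ and a values below are arbitrary choices of mine):

```python
import math

def simpson(f, lo, hi, n=10000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(lo + i * h) for i in range(2, n, 2))
    return s * h / 3

lam, a = 2.0, 1.0                      # arbitrary positive constants
A = math.sqrt(lam / math.pi)           # the normalization found above
rho = lambda x: A * math.exp(-lam * (x - a) ** 2)

# integrate over +/- 10 standard widths; the tails beyond are ~ e^{-100}
width = 10.0 / math.sqrt(lam)
total = simpson(rho, a - width, a + width)
assert math.isclose(total, 1.0, abs_tol=1e-8)
```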
https://samacheerkalvi.guru/samacheer-kalvi-9th-science-solutions-chapter-5/ | # Samacheer Kalvi 9th Science Solutions Chapter 5 Magnetism and Electromagnetism
## Tamilnadu Samacheer Kalvi 9th Science Solutions Chapter 5 Magnetism and Electromagnetism
### Samacheer Kalvi 9th Science Magnetism and Electromagnetism Textbook Exercises
Question 1.
Which of the following converts electrical energy into mechanical energy?
(a) motor
(b) battery
(c) generator
(d) switch
(a) motor
Question 2.
The part of the AC generator that passes the current from the armature coil to the external circuit is
(a) field magnet
(b) split rings
(c) slip rings
(d) brushes
(d) brushes
Question 3.
Transformer works on
(a) AC only
(b) DC only
(c) both AC and DC
(d) AC more effectively than DC
(a) AC only
Question 4.
The unit of magnetic flux density is
(a) Weber
(b) weber/meter
(c) weber/meter2
(d) weber.meter2
(c) weber/meter2
II. Fill in the blanks.
1. The SI Unit of magnetic field induction is ……………………..
2. Devices which is used to convert high alternating current to low alternating current is …………………..
3. An electric motor converts …………………..
4. A device for producing electric current is ……………………..
1. Tesla
2. Step down transformer
3. Electrical energy into mechanical energy
4. Generator
III. Match the following.
| Column – I | Column – II |
| --- | --- |
| 1. Magnetic material | (a) Oersted |
| 2. Non-magnetic material | (b) iron |
| 3. Current and magnetism | (c) induction |
| 4. Electromagnetic induction | (d) wood |
| 5. Electric generator | (e) Faraday |
1. (b) iron
2. (d) wood
3. (a) Oersted
4. (e) Faraday
5. (c) induction
IV. True or False.
1. A generator converts mechanical energy into electrical energy – True
2. Magnetic field lines always repel each other and do not intersect – True
3. Fleming’s Left-hand rule is also known as Dynamo rule – False
Correct Statement: Fleming’s Right-hand rule is known as the Dynamo (generator) rule; the Left-hand rule is the Motor rule.
4. The speed of rotation of an electric motor can be increased by decreasing the area of the coil – False
Correct Statement: The speed of rotation of the coil can be increased by increasing the area of the coil.
5. A transformer can step up direct current – False
Correct Statement: A transformer can step up the alternating current.
6. In a step-down transformer, the number of turns in the primary coil is greater than that of the number of turns in the secondary coil – True
Question 1.
State Fleming’s Left Hand Rule.
The law states that when the thumb, forefinger and middle finger of the left hand are stretched mutually perpendicular to each other, if the forefinger points in the direction of the magnetic field and the middle finger in the direction of the current, then the thumb gives the direction of the force (motion) of the conductor.
Question 2.
Define magnetic flux density.
The number of magnetic field lines crossing unit area kept normal to the direction of field lines is called magnetic flux density. Its unit is Wb/m².
Question 3.
List the main parts of an electric motor.
Main parts of an electric motor
• Field magnet.
• Armature (Rectangular coil)
• Split ring (Commutator)
• Brushes
• Battery
Question 4.
Draw and label the diagram of an AC generator.
Question 5.
State the advantages of AC over DC.
• The cost of generation of AC is less than the cost of generation of DC.
• AC can be easily converted into D.C.
• Only alternating voltage can be stepped up or stepped down by using a transformer.
• AC can be transmitted to distant places without much loss of electric power than DC.
Question 6.
Differentiate step up and step down transformer.
Step-up transformer: The transformer used to change a low alternating voltage to a high alternating voltage (Vs > Vp). The number of turns in the secondary coil is more than the number of turns in the primary coil (Ns > Np).
Step-down transformer: The transformer used to change a high alternating voltage to a low alternating voltage (Vs < Vp). The number of turns in the secondary coil is less than the number of turns in the primary coil (Ns < Np).
Question 7.
A portable radio has a built-in transformer so that it can work from the mains instead of batteries. Is this a step up or step down transformer?
It is a step-down transformer. So that rectified DC voltage is equal to battery voltage, hence it can work on mains as well as on battery.
Question 8.
State Faraday’s laws of electromagnetic induction.
Whenever there is a change in the magnetic flux linked with a closed-circuit an emf is produced and the amount of emf induced varies directly as the rate at which the flux changes. This emf is known as induced emf and the phenomenon of producing an induced emf due to a change in the magnetic flux linked with a closed circuit is known as electromagnetic induction.
Question 1.
Explain the principle, construction, and working of a DC motor.
A motor is an electrical machine which converts electrical energy into mechanical energy. A DC motor works on the principle that "whenever a current-carrying conductor is placed in a magnetic field, it experiences a mechanical force". The main parts of a DC motor are a permanent field magnet on either side of a rectangular coil (armature), a split-ring commutator, and carbon brushes, as shown in the figure.
The working of an electric motor depends primarily on the interaction between the magnetic field and the current. The direction of this force is given by Fleming's left-hand rule and its magnitude is given by F = BIL, where B = magnetic flux density, I = current and L = length of the conductor within the magnetic field.
Question 2.
Explain two types of transformer.
A transformer is a device used for converting low voltage into high voltage and high voltage into low voltage. It works on the principle of electromagnetic induction. It consists of primary and secondary coils insulated from each other. Depending upon the number of turns in the primary and secondary coils, the two types of transformers are step-up and step-down transformers.
Step-up transformer:
The transformer used to change a low alternative voltage to a high alternating voltage is called a step-up transformer, ie (Vs > Vp). In a step-up transformer, the number of turns in the secondary coil is more than the number of turns in the primary coil (Ns > Np).
Step down transformer:
The transformer used to change a high alternating voltage to a low alternating voltage is called a step-down transformer (Vs < Vp). In a step-down transformer, the number of turns in the secondary coils is less than the number of turns in the primary coil (Ns < Np).
The formulae pertaining to the transformers are given in the following equations.
• The number of primary turns Np / The number of secondary turns Ns = The primary voltage Vp/ The secondary voltage Vs
• The number of secondary turns Ns / The number of primary turns Np = The primary current Ip / The secondary current Is
Question 3.
Draw a neat diagram of an AC generator and explain its working.
An alternating current (AC) generator, consists of a rotating rectangular coil ABCD called armature placed between the two poles of a permanent magnet. The two ends of this coil are connected to the two slip rings S1 and S2. The inner sides of these rings are insulated. Two conducting stationary brushes B1 and B2 are kept separately on the rings S1 and S2 respectively. The two rings S1 and S2 are internally attached to an axle. The axle may be mechanically rotated from outside to rotate the coil inside the magnetic field. Outer ends of the two brushes are connected to the external circuit.
When the coil is rotated, the magnetic flux linked with the coil changes. This change in magnetic flux leads to the generation of an induced current. The direction of the induced current, as given by Fleming's Right Hand Rule, is along ABCD in the coil, and in the outer circuit it flows from B2 to B1. During the second half of the rotation, the direction of the current is along DCBA in the coil, and in the outer circuit it flows from B1 to B2. As the rotation of the coil continues, the induced current in the external circuit changes its direction every half rotation of the coil.
ACTIVITY
Question 1.
Put a magnet on a table and place some paper clips nearby. If you push the magnet slowly towards the paper clips, there will be a point at which the paper clips jump across and stick to the magnet. What do you understand from this?
The invisible magnetic field that surrounds the magnet acts at a particular distance. This magnetic field attracts the paper clip which is made of steel.
### Samacheer Kalvi 9th Science Magnetism and Electromagnetism In-Text Problems
Question 1.
A conductor of length 50 cm carrying a current of 5 A is placed perpendicular to a magnetic field of induction 2 × 10⁻³ T. Find the force on the conductor.
Solution:
Force on the conductor, F = BIL
= 2 × 10⁻³ × 5 × (50 × 10⁻²)
= 5 × 10⁻³ N
Question 2.
A current-carrying conductor of a certain length, kept perpendicular to the magnetic field experiences a force F. What will be the force if the current is increased four times, the length is halved and the magnetic field is tripled?
Solution:
F = BIL. With the changes, F′ = (4I) × (L/2) × (3B) = 6 × BIL = 6F.
Therefore, the force increases six times.
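Both in-text problems above are direct applications of F = BIL and can be checked numerically. A minimal Python sketch (the function name `magnetic_force` is mine, for illustration):

```python
def magnetic_force(b_tesla, i_amp, l_metre):
    """Force on a straight conductor perpendicular to a magnetic field: F = BIL."""
    return b_tesla * i_amp * l_metre

# Problem 1: B = 2e-3 T, I = 5 A, L = 50 cm
f1 = magnetic_force(2e-3, 5, 0.50)
print(f1)  # 0.005 N, i.e. 5 × 10⁻³ N

# Problem 2: triple B, quadruple I, halve L -> force scales by 3 × 4 × (1/2) = 6
f2 = magnetic_force(3 * 2e-3, 4 * 5, 0.50 / 2)
print(round(f2 / f1, 6))  # 6.0
```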
Question 3.
The primary coil of a transformer has 800 turns and the secondary coil has 8 turns. It is connected to a 220 V ac supply. What will be the output voltage?
Solution:
In a transformer, Es / Ep = Ns / Np
Es = (Ns / Np) × Ep
= (8 / 800) × 220 = 2.2 volt
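The turns-ratio relation Es/Ep = Ns/Np used above can be checked with a short sketch (the helper name `secondary_voltage` is my own, and an ideal transformer is assumed):

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer: Vs / Vp = Ns / Np, so Vs = Vp * Ns / Np."""
    return v_primary * n_secondary / n_primary

# 800 primary turns, 8 secondary turns, 220 V supply
print(secondary_voltage(220, 800, 8))  # 2.2
```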
### Samacheer Kalvi 9th Science Magnetism and Electromagnetism Additional Questions
Question 1.
What are natural magnets?
Natural magnets exist in nature and can be found in rocks and sandy deposits in various parts of the world.
Question 2.
How can the speed of rotation of a coil be increased? Write at least three methods.
The speed of rotation of a coil can be increased by;
• increasing the strength of the current in the coil
• increasing the area of the coil
• increasing the strength of the magnetic field.
Question 3.
What is the connection between electricity and magnetism?
Electricity and magnetism are closely related to each other. The current flowing through the wire produces a circular magnetic field outside the wire. The direction of this magnetic field depends on the electric current.
Similarly, a changing magnetic field produces an electric current in a wire or conductor. The relationship between electricity and magnetism is called electromagnetism.
Question 4.
What are the factors that determine the strength of the magnet?
The strength of the magnetic field at a point due to current-carrying wire depends on:
• the current in the wire
• distance of the point from the wire
• the orientation of the point from the wire and
• the magnetic nature of the medium.
Question 5.
Name some equipment that uses electromagnetism for functioning.
Much medical equipment, such as scanners and X-ray machines, also uses the principle of electromagnetism for its functioning.
Question 6.
Explain why the ozone layer is not affected by the solar wind.
Magnetic fields can penetrate all kinds of materials. The Earth produces its own magnetic field, which shields the earth's ozone layer from the solar wind.
Question 7.
Write the properties of magnetic lines of force.
• Magnetic lines of force are closed continuous curves, extending through the body of the magnet.
• Magnetic lines of force start from the North Pole and end at the South Pole.
• Magnetic lines of force never intersect.
• The lines are most concentrated at the poles and least concentrated at the equator.
• The tangent drawn at any point on the curved line gives the direction of the magnetic field.
Question 1.
Michael Faraday discovered that a current-carrying conductor also gets deflected when it is placed in a magnetic field. Michael Faraday was a British Scientist who contributed to the study of electromagnetism and electrochemistry. His main discoveries include the principles underlying electromagnetic induction, diamagnetism, and electrolysis. Although Faraday received little formal education, he was one of the most influential scientists in history. Faraday was an excellent experimentalist who conveyed his ideas in clear and simple language.
The SI unit of capacitance is named in his honour: the farad. Albert Einstein kept a picture of Faraday on his study wall, alongside pictures of Isaac Newton and James Clerk Maxwell. Faraday is one of the greatest scientific discoverers of all time.
Question 2.
Explain in detail the application of electromagnets. | 2022-01-23 02:53:34
http://crypto.stackexchange.com/questions?page=84&sort=newest | All Questions
Let's assume a simple algorithm like the Skein hash function. Is it possible, given the algorithm, to construct a proof that it does not have a particular distinguisher, something like: $P(xyz)$ is ... | 2014-04-23 13:03:12
https://puzzling.stackexchange.com/tags/number-sequence/hot | # Tag Info
(Most answer bodies on this page are hidden behind spoiler blocks and did not survive extraction; only fragments remain.)
One answer gives a closed form for the nth element of a sequence, cross-referenced at https://oeis.org/A258107.
Another works a halving pattern: for odd steps, $3/2 \times 1/2 = 3/4$ and $3/4 \times 1/2 = 3/8$; for even steps, $2/3 \times 1/2 = 1/3$ and $1/3 \times 1/2 = 1/6$.
A third hints at its construction: "I am middle aged bike rider still in prime shape. I have twin like older brother who is couple of years older than me. We both are cryptographers who love to play with fractions and our digital signatures consisting of long sequences that are closely tied to ..."
3 | 2019-05-19 19:06:10
http://lambda-the-ultimate.org/node/4059 | ## Computational equivalent of incompleteness theorems?
Godel's Incompleteness Theorems are well-known, but I haven't come across any interpretations of incompleteness in terms of computation that aren't typed. neelk's comment is a great example of how it applies to programming language type systems via Curry-Howard, but I'm looking for an untyped correspondence, like at the level of the raw lambda calculus or Turing machines.
For instance, if I squint, the Halting Problem looks very much like an instance of the first incompleteness theorem. No program H can be constructed which can compute the termination value of every other program, ie. there are programs which halt but for which H cannot compute that fact.
Wikipedia isn't much help here, implying there is some established connection, but not describing it in any accessible way or providing useful pointers.
Has such a correspondence been established? Hopefully I'm not spewing nonsense and someone can point me in the right direction!
## Comment viewing options
### What more do you want?
For instance, if I squint, the Halting Problem looks very much like an instance of the first incompleteness theorem.
You don't have to squint that hard. ;-)
I thought Wikipedia was reasonably clear: any form of undecidability is a form of incompleteness.
Both ideas encapsulate that there are well-formed "questions" (logical sentences/programs) for which no "answer" can be determined (truth value found/result computed).
### Perhaps I'm trying too hard,
Perhaps I'm trying too hard, but I was hoping for some sort of clear correspondence between logic and untyped computation so I could interpret incompleteness in a related computation problem; an untyped version of this clear Curry-Howard correspondance if you will.
But you're right, phrased as "undecidability -> incompleteness" makes perfect sense, and perhaps that's all I'll need. I'll mull it over, thanks!
### goedel and turing
I want to "formalizably" offer two concise proofs: that Goedel's essential result implies Turing's halting result, and vice versa.
Let ARITH_ORACLE be the name of a proposition that says there exists a (constructable) effective, complete, and consistent theory of arithmetic.
Let HALTING_ORACLE be the name of a proposition that says there exists an effective algorithm for deciding whether or not a given turing machine halts on a given input.
Let's say (crudely - "formalizably"): Goedel proved (not ARITH_ORACLE) and Turing proved (not HALTING_ORACLE). I want to show that, talking in this abstract but formalizable way -- they proved the exact same thing.
In other words, I owe two proofs (proof sketches, really):
Goedel-world case:
Assume that: (not ARITH_ORACLE)
Show that: (not ARITH_ORACLE) implies (not HALTING_ORACLE)
Therefore: (not HALTING_ORACLE)
and
Turing-world case:
Assume that: (not HALTING_ORACLE)
Show that: (not HALTING_ORACLE) implies (not ARITH_ORACLE)
Therefore: (not ARITH_ORACLE)
The Goedel-world case:
We assume that ARITH_ORACLE is false - we have Goedel's proof but not Turing's.
Let's assume that HALTING_ORACLE is true. We know from Turing that it isn't but we're pretending we don't. The goal is to prove HALTING_ORACLE is false from just "(not ARITH_ORACLE)". We start by assuming HALTING_ORACLE is true, aiming to establish a contradiction:
Suppose that in arithmetic theory T we want to know if proposition P is true. We can write a program that enumerates all possible proofs in T, say from shortest to longer, looking for a proof of P. The program halts only if a proof is found.
The halting oracle, combined with that proof-search program, can instantly tell us if any particular proposition is true, or false, or undecidable in any particular arithmetic theory.
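The proof-search program just described can be sketched concretely. This is illustrative only: `is_proof_of` stands in for a real proof checker (which is computable for any effective theory), and the two-symbol alphabet is an assumption of the sketch:

```python
from itertools import count, product

ALPHABET = "ab"  # stand-in for the symbols of the proof language

def candidate_proofs():
    """Enumerate all finite strings over ALPHABET, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def search_for_proof(proposition, is_proof_of):
    """Halts (returning a proof) iff some proof of `proposition` exists.

    A halting oracle applied to this very search would therefore decide
    provability of `proposition` in the theory."""
    for candidate in candidate_proofs():
        if is_proof_of(candidate, proposition):
            return candidate
        # otherwise keep searching -- possibly forever
```

For example, with a toy checker that accepts only the string "ba", `search_for_proof("P", lambda pf, p: pf == "ba")` halts and returns "ba".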
For any given arithmetic proposition, we can actually enumerate all possible arithmetic theories to find out if any exist that prove a given proposition or are at least consistent with that proposition. The halting oracle can, again, tell us if any given proposition is proved, disproved, or consistent with the axioms we have so far.
So, now we make a new program (using our imaginary halting oracle): It starts with some "revealed arithmetic axioms". The user types in propositions. The program spits out a proof, a disproof, or the confirmation that the proposition is, in fact, an axiom per some complicated schema. Which exact axiom set the program represents depends on what propositions you ask it about, in which order, but on any given run it is a complete, consistent, and effective axiom system.
The specification of that program comprises an ARITH_ORACLE kind of oracle. We assumed HALTING_ORACLE and then proved ARITH_ORACLE but we're in Goedel's world where we know for sure that "(not ARITH_ORACLE)" .... so ....
(not ARITH_ORACLE) implies (not HALTING_ORACLE)
The Turing-world case:
This case is much easier. The halting question about any turing machine and input configuration can easily be expressed as a proposition about arithmetic. Essentially: if a given turing machine halts on a given input, then there exists an integer X which satisfies some arithmetic equation.
In other words, if ARITH_ORACLE then HALTING_ORACLE, for sure. Just convert every halting problem to arithmetic, trivially, and prove or disprove a theorem.
Well, we know from Turing that "(not HALTING_ORACLE)" and therefore, "(not ARITH_ORACLE)":
(not HALTING_ORACLE) implies (not ARITH_ORACLE)
### technique
The technique underlying most of the proof sketch there is to make use of the fact that a halting oracle can turn any effectively enumerable set (including infinite sets) into a membership predicate oracle at least over domains that have an effective equality predicate.
That's a fulcrum that just never gives up for proving other interesting stuff.
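That technique can be sketched in code: an enumerator of a set yields a semi-decision procedure for membership, which a halting oracle would upgrade to a full membership predicate. Names here are mine, for illustration:

```python
from itertools import count

def semi_decide(x, enumerate_set):
    """Halts with True iff x appears in the enumeration.

    For non-members the loop runs forever -- this is only a semi-decision.
    A halting oracle for this very function would turn membership in the
    (recursively enumerable) set into a fully decidable predicate."""
    for y in enumerate_set():
        if y == x:
            return True

def squares():
    """An effectively enumerable (infinite) set: the perfect squares."""
    for n in count():
        yield n * n

print(semi_decide(49, squares))  # True -- but semi_decide(50, squares) never returns
```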
### Enumeration all the way
Thomas nailed down the intra-convertibility quite well and even provided the proofs with enough rigor and technicality to introduce the vital concepts that I think help build the larger intuition. Rap my knuckles if I go to far in my attempt to build generalized intuition; it's been a few years.
Note that Thomas appealed several times to the enumeration of a set. He even singles it out in his postscript on the proof-theoretic power of a halting oracle being equivalent to a membership predicate upon such a set. If you crack open a text on computability, you'll see this idea encoded in various ways, most commonly under the terms "recursive" and "recursively enumerable". It behooves the scholar interested in this area to dig into those concepts, so I'll leave off continued exposition.
The rubber and road come together when you remember the equivalence of "math" and "computation". Really not that surprising to the programmer-- encoding math in computation is an everyday task. The reverse encoding may be less obvious, but again there are plenty of fairly accessible proofs about for confirmation. So let's restate the classes. "Recursive" is fully computable, in the sense that we have a guaranteed Yes/No answer to any proposition. "Recursively Enumerable" is computationally exhaustive, in the sense that we can perform an exhaustive search of the answer space. Naturally, spaces of unbounded cardinality (size) separate the latter class from the former. Thus everything can normalize to the same set of terms; the mathematical or computational versions simply become implementation perspectives. So, let's look at Goedel's first incompleteness theorem.
One can set up an enumerated set of proofs. So, what will be covered by that set? In this world of enumerations, GFIT becomes an assertion that (given certain conditions) there exist true propositions whose proof will never be enumerated. The Decision Problem then becomes knowledge about a test of membership over this set, essentially equal to "does the enumeration reach element X?".
The reverse case is more complicated, but I think more interesting. If one cannot guarantee a decision procedure for the enumeration of a particular element in the set, what does that mean for the eventual enumeration of a given element? Well, it turns out that the system is self-hosting; we can encode decideability right into the enumeration. So if we had to eventually emit proofs of all true propositions, we'd simply have to check for the proofs of "halts on X" and "never halts on X".
In sum, naasking is right; this tends not to get explored. I'm personally interested (and decently educated) in the subject, so I could put together this kind of explanation (with the immediate help of Thomas; thanks, your exposition was clear and helpful). But as with most mathematics, the discharge of any given theorem is not the enlightening step. The real meat is gaining some leverage of intuition over a domain. I find enumeration to be a mighty fine conceptual framework that has come in handy across math, logic, and computation. Biblio note: the best reference on my shelf for this is Hopcroft/Motwani/Ullman's Introduction to Automata Theory, Languages, and Computation, which I used to brush up for this. However, I can't speak for its approachability as a casual student as I got a thorough exposition in classroom lecture (thanks Dr. Friesen!).
### See Hoare on Incomputability
See this: C. A. R. Hoare and D. C. S. Allison, Incomputability, Computing Surveys, pp. 169–178, September 1972. Draws a direct line from Russell to Turing.
### I couldn't agree more
+20 :)
AFAIC, I also like to relate it to the (neverendingly) numerous forms of genuine creation of new formalisms that we layer, consciously or not, on top of our favourite legacies (pre-existing formalisms) that our brains pick up on their path to invention. Which is obviously the case in PL-related "hobbies". :)
Coincidentally, I've crafted my own small diagrammatic "toy-formalism" to picture to myself these important results re: (in-)completeness, (un-)decidability, and the (for sure-)layered language cakes they deal with. Just thought I could eventually share it here some time. Here it is, then:
ZFC axiomatic and acceptance of its offspring theories' incompleteness per Godel's Incompleteness Theorems (G):
+-------------------+ +-------------------+
|Axiom | |Creation |
| * => ZFC/0 ... | | ? ZFC/G =>... |
| ^ | | _____ |
+-----+ | +-----+-------+-----+ | +-----+
| |Use of the legacy: | |
| G | ? G/T => ZFC | ZFC |
| | _____ | |
+-------+-----+ | +-----+-------+
| |
| G |
| |
+-------+
Construction of N (Natural Numbers) in ZFC/Godel theoretical layers:
(classically built up from just ZFC's empty set)
+-------------------+ +-------------------+
|Axiom | |Creation |
| * => N/0 ... | | ? N/ZFC =>... |
| ^ | | _____ |
+-----+ | +-----+-------+-----+ | +-----+
| |Use of the legacy: | |
| ZFC | ? ZFC/G => N | N |
| | _____ | |
+-------+-----+ | +-----+-------+
| |
| ZFC |
| |
+-------+
Construction of Turing Machines in N/ZFC theoretical layers:
+-------------------+ +-------------------+
|Axiom | |Creation |
| * => T.M/0 ... | | ? T.M/N =>... |
| ^ | | _____ |
+-----+ | +-----+-------+-----+ | +-----+
| |Use of the legacy: | |
| N | ? N/ZFC =>T.M | T.M |
| | _____ | |
+-------+-----+ | +-----+-------+
| |
| N |
| |
+-------+
Undecidability of the halting problem in T.M/N theoretical layers:
+-------------------+ +-------------------+
|Axiom | |Creation |
| * => H.P/0 ... | | ? H.P/N =>... |
| ^ | | _____ |
+-----+ | +-----+-------+-----+ | +-----+
| |Use of the legacy: | |
| T.M | ? T.M/N =>H.P | H.P |
| | _____ | |
+-------+-----+ | +-----+-------+
| |
| T.M |
| |
+-------+
Now, as Godel's incompleteness theorems state we cannot craft any kind of
a would-be both complete *and* consistent formal (theoretic) system appearing in this (would-be) schema:
(that one coined as, say, "NWF" : Not-Well-Founded)
+-------------------+ +-------------------+
|Axiom | |Creation |
| * => NWF/0 ... | | ? NWF =>NWF |
| ^ | | _____ |
+-----+ | +-----+-------+-----+ | +-----+
| | |(Hopeless...) | | |
| |use of the legacy: | |
| NWF | ? NWF =>NWF | NWF |
| | _____ | |
+-------+-----+ | +-----+-------+
| |
| NWF |
| |
+-------+
We can always try to invent such a NWF, but if it's axiomatized in itself and (supposedly) complete, then it'll always end up showing itself as inconsistent.
So that we're kind of "doomed" to have "to cope" forever with the ellipsis "..." (well, actually, just continue to humbly accept it) in those right-most T-boxes above, as far as theorem discovery/computational theory results go.
My 2 cts.
### Computation and logic
It seems that one can get profoundly different views of the relationship between logic and computation — and on the nature of computation, and on the significance of typing — depending on whether one takes as one's basis and starting point (a) the analogy between inconsistency and nontermination, or (b) the Curry-Howard correspondence. See the LtU subthread that I set off (or perhaps blundered into) late last year, proceeding from this comment.
### Great thread. It seems to
Great thread. It seems to cover this subject in depth, and even though I read it at the time, I didn't remember it having lacked the context I now have. Marc's sketch is the intuition I was starting to develop about the relationship, so I'll have to read it again more carefully to understand the dissenting opinions. Thanks!
### Chaitin
I'm not sure exactly what you're asking about, but is Chaitin's work the sort of thing you're looking for?
http://en.wikipedia.org/wiki/Gregory_Chaitin
### I was also thinking...
...that the question probably somehow calls back to Binary Lambda Calculus and Combinatory Logic. | 2018-09-18 13:43:16
https://thatsmaths.com/2019/05/30/symplectic-geometry/ | ### Symplectic Geometry
For many decades, a search has been under way to find a theory of everything that accounts for all the fundamental physical forces, including gravity. The dictum “physics is geometry” is a guiding principle of modern theoretical physics. Einstein’s General Theory of Relativity, which emerged just one hundred years ago, is a crowning example of this synergy. He showed how matter distorts the geometry of space and how this geometry determines the motion of matter. The central idea is encapsulated in an epigram of John A Wheeler:
$\displaystyle \mbox{Matter tells space how to curve. Space tells matter how to move.}$
#### Riemannian Geometry
For millennia the geometry of Euclid was the only show in town. In the early nineteenth century, non-Euclidean geometries that relax the parallel postulate were developed (independently) by Bolyai, Gauss and Lobachevsky. Later, the brilliant German mathematician Bernhard Riemann introduced ideas of sweeping power and originality, showing that Euclidean geometry is merely a special case of a vastly more general geometry of curved space.
Riemann showed that the entire structure of space, including its complex patterns of curvature, is encapsulated in a ‘metric’, a simple expression for the distance between two points:
$\displaystyle \mathrm{d}s^2 = g_{\mu\nu}\mathrm{d}x^\mu\mathrm{d}x^\nu$
The space and its metric together are sufficient to erect an entire geometric edifice. Riemannian geometry provided Einstein with an ideal mathematical basis for his theory of gravitation.
#### Hamiltonian Mechanics
There is another, less well-known, geometry that is also deeply rooted in physics. It is called symplectic geometry. Symplectic geometry (SG) lies at the heart of mathematics and of physics. It is at the very foundation of classical mechanics. The behaviour of spinning tops, water waves, falling apples, planetary systems and galaxies can be described in terms of this geometry. SG was first introduced by Lagrange in 1808 in his study of solar system dynamics.
The Irish scientist William Rowan Hamilton is popularly known as the inventor of quaternions, a new species of numbers with exotic properties. But his work in mechanics is of far greater significance. Hamilton showed how a dynamical system can be described by a single mathematical expression involving the position and momentum.
The state of the system at any time is determined by a point in a multi-dimensional space called phase-space, and its evolution is described by Hamilton’s canonical equations, a system of exquisite elegance. Since Hamilton’s equations provide a gateway to quantum mechanics, the new geometry also underlies dynamics at an atomic scale. The symplectic structure is also intimately connected with Heisenberg’s uncertainty principle.
A particle moving in three dimensions is described by a point in six dimensions, three for position and three for momentum. So, the state of the system is given by a point ${(q_1,q_2,q_3,p_1,p_2,p_3)}$ in a six-dimensional phase space. Once the initial state is known, the trajectory can be calculated by solving the canonical equations.
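As a minimal numerical sketch of the canonical equations (not from the article), consider a one-dimensional harmonic oscillator with Hamiltonian ${H(q,p) = (p^2+q^2)/2}$, so that ${\dot{q} = \partial H/\partial p = p}$ and ${\dot{p} = -\partial H/\partial q = -q}$. The semi-implicit ("symplectic") Euler scheme below respects the symplectic structure, so the computed energy oscillates slightly but never drifts away:

```python
# Hamilton's canonical equations for H(q, p) = (p^2 + q^2)/2:
#   dq/dt = dH/dp = p,   dp/dt = -dH/dq = -q.
# Symplectic Euler updates the momentum first, then the position with
# the new momentum; this keeps the orbit in phase space nearly closed.

def symplectic_euler(q, p, dt, steps):
    """Return the phase-space trajectory [(q0, p0), (q1, p1), ...]."""
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * q   # momentum update: dp/dt = -q
        q = q + dt * p   # position update, using the new momentum
        traj.append((q, p))
    return traj

traj = symplectic_euler(q=1.0, p=0.0, dt=0.01, steps=10_000)
energies = [(p * p + q * q) / 2 for q, p in traj]
drift = max(energies) - min(energies)   # bounded, of order dt
```

A non-symplectic scheme such as explicit Euler would instead spiral outward, with the energy growing steadily.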
#### Metric Geometry and Symplectic Geometry
To take the simplest example, let us consider two points in the plane ${\mathbb{R}^2}$, represented by the vectors ${\mathbf{u}=(u_1,u_2)}$ and ${\mathbf{v}=(v_1,v_2)}$. The metric for Euclidean geometry is a map from ${\mathbb{R}^2\times\mathbb{R}^2}$ to ${\mathbb{R}}$, taking a pair of vectors as inputs and giving a real number:
$\displaystyle g(\mathbf{u},\mathbf{v}) = u_1 v_1 + u_2 v_2$
This immediately gives the length ${|\mathbf{u}|}$ of a vector
$\displaystyle |\mathbf{u}| = \sqrt{g(\mathbf{u},\mathbf{u})} = \sqrt{u_1^2+u_2^2} \,,$
and the angle ${\theta}$ between two vectors:
$\displaystyle \cos\theta = \frac{g(\mathbf{u},\mathbf{v})}{|\mathbf{u}|\cdot|\mathbf{v}|} \,.$
In symplectic geometry we also have a map from ${\mathbb{R}^2\times\mathbb{R}^2}$ to ${\mathbb{R}}$, taking a pair of vectors to a real number. The standard symplectic two-form is
$\displaystyle \Omega(\mathbf{u},\mathbf{v}) = u_1 v_2 - u_2 v_1$
This antisymmetric 2-form does not correspond to lengths and angles, but to areas: ${\Omega(\mathbf{u},\mathbf{v})}$ is the oriented area of the parallelogram spanned by the two vectors. Changing the order of the vectors changes the sign, hence the anti-symmetry of ${\Omega(\mathbf{u},\mathbf{v})}$. So, whereas Euclidean geometry is the geometry of lengths and angles, symplectic geometry is an areal geometry.
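To make the contrast concrete, here is a small Python sketch of both bilinear forms on ${\mathbb{R}^2}$ (the function names are ours, not from the article):

```python
import math

# The Euclidean metric g gives lengths and angles; the standard
# symplectic form omega gives the oriented area of the parallelogram
# spanned by the two vectors, and changes sign if they are swapped.

def g(u, v):
    return u[0] * v[0] + u[1] * v[1]

def omega(u, v):
    return u[0] * v[1] - u[1] * v[0]

def length(u):
    return math.sqrt(g(u, u))

def angle(u, v):
    return math.acos(g(u, v) / (length(u) * length(v)))

u, v = (1.0, 0.0), (0.0, 1.0)
# g(u, v) = 0: orthogonal vectors, angle pi/2.
# omega(u, v) = 1: unit square, positively oriented.
# omega(v, u) = -1: reversing the arguments flips the orientation.
```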
Euclidean geometry is easily extended to higher dimensions. The metric in ${n}$-dimensional Euclidean space arises from
$\displaystyle g(\mathbf{u},\mathbf{v}) = \sum_{i=1}^n u_i v_i \,,$
and lengths and angles follow as before. For symplectic geometry, things are trickier, since area is essentially two-dimensional. However, taking an even-dimensional space ${\mathbb{R}^{2m}}$ and two infinitesimal vector displacements ${\mathrm{d}\mathbf{u}}$ and ${\mathrm{d}\mathbf{v}}$, we can define ${\Omega(\mathrm{d}\mathbf{u},\mathrm{d}\mathbf{v})}$ to be the sum of the oriented areas of the shadows or projections of the parallelogram spanned by ${\mathrm{d}\mathbf{u}}$ and ${\mathrm{d}\mathbf{v}}$ onto ${m}$ coordinate planes. We write this as a wedge product:
$\displaystyle \Omega(\mathrm{d}\mathbf{u},\mathrm{d}\mathbf{v}) \equiv \mathrm{d}\mathbf{u}\wedge \mathrm{d}\mathbf{v} = \sum_{i=1}^m \mathrm{d}u_i\wedge \mathrm{d}v_i$
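A sketch of this sum-of-projected-areas construction, assuming the ${2m}$ coordinates are interleaved pairwise as ${(u_1, v_1, u_2, v_2, \ldots)}$ (a convention we choose for illustration):

```python
def omega2m(a, b):
    """Standard symplectic form on R^(2m): the sum, over the m coordinate
    planes, of the oriented areas of the projections of the parallelogram
    spanned by a and b. Coordinates are paired as (a[0], a[1]), (a[2], a[3]), ...
    """
    assert len(a) == len(b) and len(a) % 2 == 0
    return sum(a[2 * i] * b[2 * i + 1] - a[2 * i + 1] * b[2 * i]
               for i in range(len(a) // 2))

# In R^4: the projection onto the first plane contributes area 1,
# the projection onto the second plane contributes area 6.
total = omega2m((1, 0, 2, 0), (0, 1, 0, 3))   # 1 + 6 = 7
```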
Riemann extended Euclidean space by allowing for curved spaces, with different properties at different places. Symplectic geometry can also be applied in curved spaces or manifolds. We consider a surface to be the integral of infinitesimal parallelograms and define oriented areas by integrating the shadows or projections of these elements.
#### Phase Fluid
Hamiltonian mechanics is essentially the symplectic geometry of phase space. Phase space is filled by a system of one-dimensional trajectories that never intersect one another. We can consider a cluster of points corresponding to a set of starting conditions; as time evolves, this cluster moves through phase space like a parcel of fluid. This flow has a very special character: the parcels keep their initial volume but, much more than that, the flow preserves the areas of two-dimensional sheets of fluid as they move.
The property of being symplectic implies that, for any closed curve in phase space, the sum of the areas enclosed by its projections onto the ${N}$ coordinate-momentum planes, or ${pq}$-planes, is preserved as the curve moves under a Hamiltonian map:
$\displaystyle \sum_\mu \oint_{C_\mu(t)} p_\mu(t) \mathrm{d}q^\mu(t) = \sum_\mu \oint_{C_\mu(0)} p_\mu(0) \mathrm{d}q^\mu(0)$
The figure below, from V. I. Arnold’s book *Mathematical Methods of Classical Mechanics*, illustrates this.
Corollary from V.I.Arnold’s book on Classical Mechanics.
The phase space is a symplectic manifold and the dynamics consists of an area-preserving, time-dependent transformation of phase space. In other words, the flow is an evolving symplectic transformation or symplectomorphism. The mathematical framework of SG is not confined to mechanics. In addition to modelling planetary systems and galaxies, it applies to electronic circuits, molecular systems and optical problems. It also provides a gateway to quantum mechanics.
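The area-preservation property can be checked directly in a toy case (a sketch of ours, not from the article): the exact time-${t}$ flow of the harmonic oscillator ${H = (p^2+q^2)/2}$ is a rotation of the ${(q,p)}$ plane, and a rotation leaves the oriented area ${\Omega(\mathbf{u},\mathbf{v})}$ of any phase-space parallelogram unchanged.

```python
import math

def flow(t, point):
    """Exact solution of dq/dt = p, dp/dt = -q after time t (a rotation)."""
    q, p = point
    return (q * math.cos(t) + p * math.sin(t),
            -q * math.sin(t) + p * math.cos(t))

def omega(u, v):
    """Oriented area of the parallelogram spanned by u and v in the (q, p) plane."""
    return u[0] * v[1] - u[1] * v[0]

# Transport the two edge vectors of a parallelogram with the flow:
u, v = (0.3, 1.1), (-0.7, 0.4)
t = 1.234
area_before = omega(u, v)
area_after = omega(flow(t, u), flow(t, v))
# The two areas agree to machine precision: the flow is a symplectomorphism.
```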
#### Conclusion
Symplectic geometry is one of the most valuable products of the link between mathematics and physics. Its mathematical theory owes its existence to physics and, in turn, mathematical developments of SG have enriched physical theory. The word ‘symplectic’ means intertwined or woven together. SG has the promise of weaving together key areas of mathematics and physics.
SG has been applied in many areas of mathematics: group theory, analysis, number theory and topology. Some commentators go so far as to say that SG could bring about a revolution in mathematics, with an impact comparable to that of introducing complex numbers.
https://eng.libretexts.org/Bookshelves/Chemical_Engineering/Book%3A_Material_Balances_for_Chemical_Engineers_(Cerro_Higgins_and_Whitaker)/10%3A_Appendices/10.03%3A_Appendix_C_-_Matrices_and_Stoichiometric_Schemata

# 10.3: Appendix C - Matrices and Stoichiometric Schemata
In Appendix C1 we review concepts associated with matrix algebra. In Appendix C2 we illustrate how one can transform an equation to a picture in a rigorous manner, and this leads to a schema for a single independent stoichiometric reaction. This schema can also be obtained by “counting atoms” and “balancing a chemical equation”. For multiple independent stoichiometric reactions, a schema cannot be constructed by “counting atoms” and in Appendix C3 we show how the schemata can be developed in a rigorous manner.
## C1: Matrix Methods and Partitioning
In order to support the results obtained for the atomic matrix studied in Chapter 6 and for the mechanistic matrix studied in Chapter 9, we need to consider the matter of partitioning matrices. All the information necessary for our studies of stoichiometry is contained in Eq. $$(6.2.10)$$; however, that information can be presented in different forms depending on how the atomic matrix and the column matrix of net rates of production are partitioned. In our studies of reaction kinetics, all the information that we need is contained in the mechanistic matrix; however, that information can also be presented in different forms depending on the presence or absence of Bodenstein products. In this appendix we review the methods required to develop the desired different forms.
We begin our study of partitioning with the process of addition (or subtraction) as illustrated by the following matrix equation
$\begin{bmatrix} {a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & {a_{23} } & {a_{24} } \\ {a_{31} } & {a_{32} } & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & {a_{43} } & {a_{44} }\end{bmatrix} + \begin{bmatrix} {b_{11} } & {b_{12} } & {b_{13} } & {b_{14} } \\ {b_{21} } & {b_{22} } & {b_{23} } & {b_{24} } \\ {b_{31} } & {b_{32} } & {b_{33} } & {b_{34} } \\ {b_{41} } & {b_{42} } & {b_{43} } & {b_{44} }\end{bmatrix} = \begin{bmatrix} {c_{11} } & {c_{12} } & {c_{13} } & {c_{14} } \\ {c_{21} } & {c_{22} } & {c_{23} } & {c_{24} } \\ {c_{31} } & {c_{32} } & {c_{33} } & {c_{34} } \\ {c_{41} } & {c_{42} } & {c_{43} } & {c_{44} }\end{bmatrix} \label{1}$
This can be expressed in more compact nomenclature according to
$\mathbf{A} + \mathbf{B} = \mathbf{C} \label{2}$
The fundamental meaning of Eqs. \ref{1} and \ref{2} is given by the following 16 equations:
\begin{align} a_{11} + b_{11} = c_{11} && a_{21} + b_{21} = c_{21} \nonumber\\ a_{12} + b_{12} = c_{12} && a_{22} + b_{22} = c_{22} \nonumber\\ a_{13} + b_{13} = c_{13} && a_{23} + b_{23} = c_{23} \nonumber\\ a_{14} + b_{14} = c_{14} && a_{24} + b_{24} = c_{24} \nonumber\\ {}\label{3} \\ a_{31} + b_{31} = c_{31} && a_{41} + b_{41} = c_{41} \nonumber\\ a_{32} + b_{32} = c_{32} && a_{42} + b_{42} = c_{42} \nonumber\\ a_{33} + b_{33} = c_{33} && a_{43} + b_{43} = c_{43} \nonumber\\ a_{34} + b_{34} = c_{34} && a_{44} + b_{44} = c_{44} \nonumber\end{align}
These equations represent a complete partitioning of the matrix equation given by Equation \ref{1}, and we can also represent this complete partitioning in the form
$\begin{bmatrix} \color{red}{a_{11} } & \vdots & {a_{12} } & \vdots & {a_{13} } & \vdots & {a_{14} } \\ \hdashline {a_{21} } & \vdots & {a_{22} } & \vdots & {a_{23} } & \vdots & {a_{24} } \\ \hdashline {a_{31} } & \vdots & {a_{32} } & \vdots & {a_{33} } & \vdots & {a_{34} } \\ \hdashline {a_{41} } & \vdots & {a_{42} } & \vdots & {a_{43} } & \vdots & {a_{44} }\end{bmatrix} + \begin{bmatrix} \color{red}{b_{11} } & \vdots & {b_{12} } & \vdots & {b_{13} } & \vdots & {b_{14} } \\ \hdashline {b_{21} } & \vdots & {b_{22} } & \vdots & {b_{23} } & \vdots & {b_{24} } \\ \hdashline {b_{31} } & \vdots & {b_{32} } & \vdots & {b_{33} } & \vdots & {b_{34} } \\ \hdashline {b_{41} } & \vdots & {b_{42} } & \vdots & {b_{43} } & \vdots & {b_{44} }\end{bmatrix} = \begin{bmatrix} \color{red}{c_{11} } & \vdots & {c_{12} } & \vdots & {c_{13} } & \vdots & {c_{14} } \\ \hdashline {c_{21} } & \vdots & {c_{22} } & \vdots & {c_{23} } & \vdots & {c_{24} } \\ \hdashline {c_{31} } & \vdots & {c_{32} } & \vdots & {c_{33} } & \vdots & {c_{34} } \\ \hdashline {c_{41} } & \vdots & {c_{42} } & \vdots & {c_{43} } & \vdots & {c_{44} }\end{bmatrix} \label{4}$
Here we have colored the particular partition that represents the first of Eqs. \ref{3}. The complete partitioning illustrated by Equation \ref{4} is not particularly useful; however, there are other possibilities that we will find to be very useful and one example is the row/column partition given by
$\begin{bmatrix} {a_{11} } & {a_{12} } & \vdots & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & \vdots & {a_{23} } & {a_{24} } \\ \hdashline {a_{31} } & {a_{32} } & \vdots & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & \vdots & {a_{43} } & {a_{44} }\end{bmatrix} + \begin{bmatrix} {b_{11} } & {b_{12} } & \vdots & {b_{13} } & {b_{14} } \\ {b_{21} } & {b_{22} } & \vdots & {b_{23} } & {b_{24} } \\ \hdashline {b_{31} } & {b_{32} } & \vdots & {b_{33} } & {b_{34} } \\ {b_{41} } & {b_{42} } & \vdots & {b_{43} } & {b_{44} }\end{bmatrix} = \begin{bmatrix} {c_{11} } & {c_{12} } & \vdots & {c_{13} } & {c_{14} } \\ {c_{21} } & {c_{22} } & \vdots & {c_{23} } & {c_{24} } \\ \hdashline {c_{31} } & {c_{32} } & \vdots & {c_{33} } & {c_{34} } \\ {c_{41} } & {c_{42} } & \vdots & {c_{43} } & {c_{44} }\end{bmatrix} \label{5}$
Each partitioned matrix can be expressed in the form
$\mathbf{A} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix} = \begin{bmatrix} {a_{11} } & {a_{12} } & \vdots & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & \vdots & {a_{23} } & {a_{24} } \\ \hdashline {a_{31} } & {a_{32} } & \vdots & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & \vdots & {a_{43} } & {a_{44} }\end{bmatrix} \label{6}$
and the partitioned matrix equation is given by
$\begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix} + \begin{bmatrix} \mathbf{B}_{11} & \mathbf{B}_{12} \\ \mathbf{B}_{21} & \mathbf{B}_{22} \end{bmatrix} = \begin{bmatrix} \mathbf{C}_{11} & \mathbf{C}_{12} \\ \mathbf{C}_{21} & \mathbf{C}_{22} \end{bmatrix} \label{7}$
We usually think of the elements of a matrix as numbers such as $$a_{11}$$, $$a_{12}$$, etc.; however, the elements of a matrix can also be matrices as indicated in Equation \ref{7}. The usual rules for matrix addition lead to
$\mathbf{A}_{11} + \mathbf{B}_{11} = \mathbf{C}_{11} \label{8a}$
$\mathbf{A}_{12} + \mathbf{B}_{12} = \mathbf{C}_{12} \label{8b}$
$\mathbf{A}_{21} + \mathbf{B}_{21} = \mathbf{C}_{21} \label{8c}$
$\mathbf{A}_{22} + \mathbf{B}_{22} = \mathbf{C}_{22} \label{8d}$
and the details associated with Equation \ref{8a} are given by
$\begin{bmatrix} {a_{11} } & {a_{12} } \\ {a_{21} } & {a_{22} }\end{bmatrix} + \begin{bmatrix} {b_{11} } & {b_{12} } \\ {b_{21} } & {b_{22} }\end{bmatrix} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} }\end{bmatrix} \label{9}$
A little thought will indicate that this matrix equation represents the first four equations given in Eqs. \ref{3}. Other partitions of Equation \ref{1} are obviously available and will be encountered in the following paragraphs.
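A small numerical sketch of this fact, using plain Python lists and made-up entries (no part of the text's notation is implied by the function names):

```python
# Block addition of conformably partitioned matrices reproduces the
# entrywise sum: adding the (1,1) blocks of A and B gives the (1,1)
# block of C = A + B, as in Eqs. 8a-8d.

def madd(A, B):
    """Entrywise sum of two matrices stored as lists of rows."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(M, rows, cols):
    """Extract the submatrix of M with the given row and column index ranges."""
    return [[M[i][j] for j in cols] for i in rows]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[i * j for j in range(1, 5)] for i in range(1, 5)]
C = madd(A, B)

top, left = range(0, 2), range(0, 2)
# The (1,1) block of the sum equals the sum of the (1,1) blocks:
assert madd(block(A, top, left), block(B, top, left)) == block(C, top, left)
```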
### Matrix multiplication
Multiplication of matrices can also be represented in terms of submatrices, provided that one is careful to follow the rules of matrix multiplication. As an example, we consider the following matrix equation
$\begin{bmatrix} {a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & {a_{23} } & {a_{24} } \\ {a_{31} } & {a_{32} } & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & {a_{43} } & {a_{44} }\end{bmatrix} \begin{bmatrix} {b_{11} } & {b_{12} } \\ {b_{21} } & {b_{22} } \\ {b_{31} } & {b_{32} } \\ {b_{41} } & {b_{42} }\end{bmatrix} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} } \\ {c_{31} } & {c_{32} } \\ {c_{41} } & {c_{42} }\end{bmatrix} \label{10}$
which conforms to the rule that the number of columns in the first matrix is equal to the number of rows in the second matrix. Equation \ref{10} represents the 8 individual equations given by
$a_{11} b_{11} + a_{12} b_{21} + a_{13} b_{31} + a_{14} b_{41} = c_{11} \label{11a}$
$a_{11} b_{12} + a_{12} b_{22} + a_{13} b_{32} + a_{14} b_{42} = c_{12} \label{11b}$
$a_{21} b_{11} + a_{22} b_{21} + a_{23} b_{31} + a_{24} b_{41} = c_{21} \label{11c}$
$a_{21} b_{12} + a_{22} b_{22} + a_{23} b_{32} + a_{24} b_{42} = c_{22} \label{11d}$
$a_{31} b_{11} + a_{32} b_{21} + a_{33} b_{31} + a_{34} b_{41} = c_{31} \label{11e}$
$a_{31} b_{12} + a_{32} b_{22} + a_{33} b_{32} + a_{34} b_{42} = c_{32} \label{11f}$
$a_{41} b_{11} + a_{42} b_{21} + a_{43} b_{31} + a_{44} b_{41} = c_{41} \label{11g}$
$a_{41} b_{12} + a_{42} b_{22} + a_{43} b_{32} + a_{44} b_{42} = c_{42} \label{11h}$
which can also be expressed in compact form according to
$\mathbf{AB} = \mathbf{C} \label{12}$
Here the matrices $$\mathbf{A}$$, $$\mathbf{B}$$, and $$\mathbf{C}$$ are defined explicitly by
$\mathbf{A} = \begin{bmatrix} {a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & {a_{23} } & {a_{24} } \\ {a_{31} } & {a_{32} } & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & {a_{43} } & {a_{44} }\end{bmatrix} \quad \mathbf{B} = \begin{bmatrix} {b_{11} } & {b_{12} } \\ {b_{21} } & {b_{22} } \\ {b_{31} } & {b_{32} } \\ {b_{41} } & {b_{42} }\end{bmatrix} \quad \mathbf{C} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} } \\ {c_{31} } & {c_{32} } \\ {c_{41} } & {c_{42} }\end{bmatrix} \label{13}$
In Eqs. \ref{1} through \ref{9} we have illustrated that the process of addition and subtraction can be carried out in terms of partitioned matrices. Matrix multiplication can also be carried out in terms of partitioned matrices; however, in order to conform to the rules of matrix multiplication, we must partition the matrices properly. For example, a proper row partition of Equation \ref{10} can be expressed as
$\begin{bmatrix} {a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & {a_{23} } & {a_{24} } \\ \hdashline {a_{31} } & {a_{32} } & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & {a_{43} } & {a_{44} }\end{bmatrix} \begin{bmatrix} {b_{11} } & {b_{12} } \\ {b_{21} } & {b_{22} } \\ {b_{31} } & {b_{32} } \\ {b_{41} } & {b_{42} }\end{bmatrix} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} } \\ \hdashline {c_{31} } & {c_{32} } \\ {c_{41} } & {c_{42} }\end{bmatrix} \label{14}$
In terms of the submatrices defined by
$\mathbf{A}_{11} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \end{bmatrix} , \quad \mathbf{A}_{21} = \begin{bmatrix} a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \\ \mathbf{C}_{11} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} } \end{bmatrix} \quad \mathbf{C}_{21} = \begin{bmatrix} {c_{31} } & {c_{32} } \\ {c_{41} } & {c_{42} }\end{bmatrix} \label{15}$
we can represent Equation \ref{14} in the form
$\begin{bmatrix} \mathbf{A}_{11} \\ \mathbf{A}_{21} \end{bmatrix} \mathbf{B} = \begin{bmatrix} \mathbf{A}_{11} \mathbf{B} \\ \mathbf{A}_{21} \mathbf{B}\end{bmatrix} = \begin{bmatrix} \mathbf{C}_{11} \\ \mathbf{C}_{21} \end{bmatrix} \label{16}$
Often it is useful to work with the separate matrix equations that we have created by the partition, and these are given by
$\mathbf{A}_{11} \mathbf{B} = \mathbf{C}_{11} \label{17}$
$\mathbf{A}_{21} \mathbf{B} = \mathbf{C}_{21} \label{18}$
The details of the first of these can be expressed as
$\mathbf{A}_{11} \mathbf{B} = \begin{bmatrix} {a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & {a_{23} } & {a_{24} } \end{bmatrix} \begin{bmatrix} {b_{11} } & {b_{12} } \\ {b_{21} } & {b_{22} } \\ {b_{31} } & {b_{32} } \\ {b_{41} } & {b_{42} } \end{bmatrix}, \quad \mathbf{C}_{11} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} }\end{bmatrix} \label{19a}$
Multiplication can be carried out to obtain
$\begin{bmatrix} a_{11} b_{11} +a_{12} b_{21} +a_{13} b_{31} +a_{14} b_{41} & a_{11} b_{12} +a_{12} b_{22} +a_{13} b_{32} +a_{14} b_{42} \\ a_{21} b_{11} +a_{22} b_{21} +a_{23} b_{31} +a_{24} b_{41} & a_{21} b_{12} +a_{22} b_{22} +a_{23} b_{32} +a_{24} b_{42} \end{bmatrix} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} } \end{bmatrix} \label{19b}$
and equating the four elements of each matrix leads to
${a_{11} b_{11} +a_{12} b_{21} +a_{13} b_{31} +a_{14} b_{41} } = c_{11} \\ {a_{11} b_{12} +a_{12} b_{22} +a_{13} b_{32} +a_{14} b_{42} } = c_{12} \\ {a_{21} b_{11} +a_{22} b_{21} +a_{23} b_{31} +a_{24} b_{41} } = c_{21} \\ {a_{21} b_{12} +a_{22} b_{22} +a_{23} b_{32} +a_{24} b_{42} } = c_{22} \label{19c}$
Here we see that these four individual equations (associated with the partitioned matrix equation) are those given originally by Eqs. \ref{11a} through \ref{11d}. A little thought will indicate that the matrix equation represented by Equation \ref{18} contains the four individual equations represented by Eqs. \ref{11e} through \ref{11h}. All of the information available in Equation \ref{10} is given explicitly in Eqs. \ref{11a}-\ref{11h} and partitioning of the original matrix equation does nothing more than arrange the information in a different form.
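The row partition of Eqs. \ref{14} through \ref{18} can be checked numerically. The sketch below uses plain Python lists with illustrative entries:

```python
# Multiplying the top block A11 (2x4) of a row-partitioned A by the
# full B (4x2) reproduces the top block C11 of C = A B (Eq. 17), and
# likewise for the bottom blocks (Eq. 18).

def matmul(A, B):
    """Matrix product of two lists-of-rows matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[1, 2], [3, 4], [5, 6], [7, 8]]
C = matmul(A, B)

A11, A21 = A[:2], A[2:]           # row partition of A, as in Eq. 15
assert matmul(A11, B) == C[:2]    # Eq. 17:  A11 B = C11
assert matmul(A21, B) == C[2:]    # Eq. 18:  A21 B = C21
```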
If we wish to obtain a column partition of the matrix $$\mathbf{A}$$ in Equation \ref{10}, we must also create a row partition of matrix $$\mathbf{B}$$ in order to conform to the rules of matrix multiplication. This column/row partition takes the form
$\begin{bmatrix} {a_{11} } & {a_{12} } & \vdots & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & \vdots & {a_{23} } & {a_{24} } \\ {a_{31} } & {a_{32} }& \vdots & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & \vdots & {a_{43} } & {a_{44} } \end{bmatrix} \begin{bmatrix} {b_{11} } & {b_{12} } \\ {b_{21} } & {b_{22} } \\ \hdashline {b_{31} } & {b_{32} } \\ {b_{41} } & {b_{42} }\end{bmatrix} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} } \\ {c_{31} } & {c_{32} } \\ {c_{41} } & {c_{42} }\end{bmatrix} \label{20}$
and the submatrices are identified explicitly according to
$\mathbf{A}_{11} = \begin{bmatrix} {a_{11} } & {a_{12} } \\ {a_{21} } & {a_{22} } \\ {a_{31} } & {a_{32} } \\ {a_{41} } & {a_{42} }\end{bmatrix} \quad \mathbf{A}_{12} = \begin{bmatrix} {a_{13} } & {a_{14} } \\ {a_{23} } & {a_{24} } \\ {a_{33} } & {a_{34} } \\ {a_{43} } & {a_{44} }\end{bmatrix} \quad \mathbf{B}_{11} = \begin{bmatrix} {b_{11} } & {b_{12} } \\ {b_{21} } & {b_{22} }\end{bmatrix} \quad \mathbf{B}_{21} = \begin{bmatrix} {b_{31} } & {b_{32} } \\ {b_{41} } & {b_{42} }\end{bmatrix} \label{21}$
Use of these representations in Equation \ref{20} leads to
$\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12} \end{bmatrix} \begin{bmatrix} \mathbf{B}_{11} \\ \mathbf{B}_{21} \end{bmatrix} = \mathbf{C} \label{22}$
and matrix multiplication in terms of the submatrices provides
$\mathbf{A}_{11} \mathbf{B}_{11} + \mathbf{A}_{12} \mathbf{B}_{21} = \mathbf{C} \label{23}$
In some cases, we will make use of a complete column partition of the matrix $$\mathbf{A}$$ which requires a complete row partition of the matrix $$\mathbf{B}$$. This partition is illustrated by
$\begin{bmatrix} {a_{11} } & \vdots & {a_{12} } & \vdots & {a_{13} } & \vdots & {a_{14} } \\ {a_{21} } & \vdots & {a_{22} } & \vdots & {a_{23} } & \vdots & {a_{24} } \\ {a_{31} } & \vdots & {a_{32} }& \vdots & {a_{33} } & \vdots & {a_{34} } \\ {a_{41} }& \vdots & {a_{42} } & \vdots & {a_{43} } & \vdots & {a_{44} } \end{bmatrix} \begin{bmatrix} {b_{11} } & {b_{12} } \\ \hdashline {b_{21} } & {b_{22} } \\ \hdashline {b_{31} } & {b_{32} } \\ \hdashline {b_{41} } & {b_{42} }\end{bmatrix} = \begin{bmatrix} {c_{11} } & {c_{12} } \\ {c_{21} } & {c_{22} } \\ {c_{31} } & {c_{32} } \\ {c_{41} } & {c_{42} }\end{bmatrix} \label{24}$
and in terms of the submatrices it can be expressed as
$\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12} & \mathbf{A}_{13} & \mathbf{A}_{14} \end{bmatrix} \begin{bmatrix} {\mathbf{B}_{11} } \\ {\mathbf{B}_{21} } \\ {\mathbf{B}_{31} } \\ {\mathbf{B}_{41} }\end{bmatrix} = \mathbf{C} \label{25}$
$\mathbf{A}_{11} \mathbf{B}_{11} + \mathbf{A}_{12} \mathbf{B}_{21} + \mathbf{A}_{13} \mathbf{B}_{31} + \mathbf{A}_{14} \mathbf{B}_{41} = \mathbf{C} \label{26}$
and we will find this form of Equation \ref{10} to be especially useful in our discussion of stoichiometric schemata.
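Equation \ref{26} says that the product is the sum, over the columns of $$\mathbf{A}$$, of the outer product of each column with the corresponding row of $$\mathbf{B}$$. A short Python check of this identity, with illustrative entries:

```python
# Complete column partition of A and row partition of B (Eq. 24):
# A B equals the sum over k of (column k of A) outer (row k of B).

def matmul(A, B):
    """Matrix product of two lists-of-rows matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def outer(col, row):
    """Outer product of a column vector and a row vector."""
    return [[c * r for r in row] for c in col]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[1, 2], [3, 4], [5, 6], [7, 8]]

total = [[0, 0] for _ in range(4)]
for k in range(4):
    col_k = [A[i][k] for i in range(4)]   # the k-th submatrix A_{1k} of Eq. 25
    row_k = B[k]                          # the k-th submatrix B_{k1} of Eq. 25
    total = madd(total, outer(col_k, row_k))

assert total == matmul(A, B)              # Eq. 26
```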
## C2: Single Independent Stoichiometric Reaction
When the rank of the atomic matrix is $$N -1$$, we can use Eq. $$(6.2.10)$$ to obtain Eqs. $$(6.2.28)-(6.2.31)$$ that can be expressed as
$\frac{R_{A} }{R_{N} } = \nu_{A} \label{27a}$
$\frac{R_{B} }{R_{N} } = \nu_{B} \label{27b}$
$\frac{R_{C} }{R_{N} } = \nu_{C} \label{27c}$
$\vdots$
$\frac{R_{N-1} }{R_{N} } = \nu_{N-1} \label{27m}$
$\frac{R_{N} }{R_{N} } = \nu_{N} \label{27n}$
Here we have identified the ratios of reactions rates as the stoichiometric coefficients, $$\nu_{A}$$, $$\nu_{B}$$, etc. These stoichiometric coefficients are not necessary to determine reaction rates, as we indicated in Examples $$6.2.1$$, $$6.2.3$$ and $$6.2.4$$; however, they can be used to develop a picture of the stoichiometry of the reaction. To see how this is done, it is convenient to express Eqs. \ref{27a}-\ref{27n} in the form
$R_{A} = \nu_{A} R_{N} , \quad A = 1,2,3,....N \label{28}$
which can be used with Equation $$(6.2.8)$$ to obtain
$\sum_{A = 1}^{A = N}N_{JA} \nu_{A} R_{N} = 0 , \quad J = 1,2,3,....T \label{29}$
In matrix form this can be expressed as
$\begin{bmatrix} {N_{1A} } & {N_{1B} } & {N_{1C} } & {...} & {N_{1N} } \\ {N_{2A} } & {N_{2B} } & {N_{2C} } & {...} & {N_{2N} } \\ {.} & {.} & {.} & {...} & {.} \\ {N_{TA} } & {N_{TB} } & {N_{TC} } & {...} & {N_{TN} }\end{bmatrix} \begin{bmatrix} {\nu_{A} } \\ {\nu_{B} } \\ {\nu_{C} } \\ {.} \\ {\nu_{N} }\end{bmatrix}R_{N} = \begin{bmatrix} {0} \\ {0} \\ {.} \\ {0}\end{bmatrix} \label{30}$
and the single unknown reaction rate, $$R_{N}$$, can be cancelled to obtain
$\begin{bmatrix} {N_{1A} } & {N_{1B} } & {N_{1C} } & {...} & {N_{1N} } \\ {N_{2A} } & {N_{2B} } & {N_{2C} } & {...} & {N_{2N} } \\ {.} & {.} & {.} & {...} & {.} \\ {N_{TA} } & {N_{TB} } & {N_{TC} } & {...} & {N_{TN} }\end{bmatrix} \begin{bmatrix} {\nu_{A} } \\ {\nu_{B} } \\ {\nu_{C} } \\ {.} \\ {\nu_{N} }\end{bmatrix} = \begin{bmatrix} {0} \\ {0} \\ {.} \\ {0}\end{bmatrix} \label{31}$
Here it is understood that $$\nu_{N}$$ is equal to one.
Each column in the chemical composition matrix identifies the structure of a molecule. For example, the first column in the chemical composition matrix indicates the atoms associated with molecular species $$A$$, and the second column indicates the atoms associated with species $$B$$. We can partition the chemical composition matrix into $$N$$ molecular species submatrices to obtain
$\begin{bmatrix} {N_{1A} } & \vdots & {N_{1B} } & \vdots & {N_{1C} } & \vdots & {...} & \vdots & {N_{1N} } \\ {N_{2A} } & \vdots & {N_{2B} } & \vdots & {N_{2C} } & \vdots & {...} & \vdots & {N_{2N} } \\ {.} & \vdots & {.} & \vdots & {.} & \vdots & {...} & \vdots & {.} \\ {N_{TA} } & \vdots & {N_{TB} } & \vdots & {N_{TC} } & \vdots & {...} & \vdots & {N_{TN} }\end{bmatrix} \begin{bmatrix} {\nu_{A} } \\ \hdashline {\nu_{B} } \\ \hdashline {\nu_{C} } \\ \hdashline {.} \\ \hdashline {\nu_{N} }\end{bmatrix} = \begin{bmatrix} {0} \\ {0} \\ {.} \\ {0}\end{bmatrix} \label{32}$
and this can be expanded (see Eqs. \ref{24} through \ref{26}) in the form
$\begin{bmatrix} {N_{1A} } \\ {N_{2A} } \\ {.} \\ {N_{TA} }\end{bmatrix} \nu_{A} + \begin{bmatrix} {N_{1B} } \\ {N_{2B} } \\ {.} \\ {N_{TB} }\end{bmatrix} \nu_{B} + \begin{bmatrix} {N_{1C} } \\ {N_{2C} } \\ {.} \\ {N_{TC} }\end{bmatrix} \nu_{C} + ... + \begin{bmatrix} {N_{1N} } \\ {N_{2N} } \\ {.} \\ {N_{TN} }\end{bmatrix} \nu_{N} = \begin{bmatrix} {0} \\ {0} \\ {.} \\ {0}\end{bmatrix} \label{33}$
We now note that some of the stoichiometric coefficients will be negative and some will be positive, i.e., the reactants will have negative coefficients and the products will have positive coefficients. If species $$A$$ and $$B$$ are reactants and species $$C$$ through $$N$$ are products, we represent this idea as
$\nu_{A} = -\left|\nu_{A} \right| , \quad \nu_{B} = -\left|\nu_{B} \right| , \quad \nu_{J} \ge 0 , \quad J \Rightarrow C,D, ..., N \label{34}$
This allows us to express Equation \ref{33} in the form
$\begin{bmatrix} {N_{1A} } \\ {N_{2A} } \\ {.} \\ {N_{TA} }\end{bmatrix} \left|\nu_{A} \right| + \begin{bmatrix} {N_{1B} } \\ {N_{2B} } \\ {.} \\ {N_{TB} }\end{bmatrix} \left|\nu_{B} \right| = \begin{bmatrix} {N_{1C} } \\ {N_{2C} } \\ {.} \\ {N_{TC} }\end{bmatrix} \nu_{C} + \begin{bmatrix} {N_{1D} } \\ {N_{2D} } \\ {.} \\ {N_{TD} }\end{bmatrix} \nu_{D} + ... + \begin{bmatrix} {N_{1N} } \\ {N_{2N} } \\ {.} \\ {N_{TN} }\end{bmatrix} \nu_{N} \label{35}$
This matrix equation represents the concept that atomic species are neither created nor destroyed by chemical reactions. For example, the first equation in the set of equations represented by Equation \ref{35} is given by
Atomic Species #1: $N_{1A} \left|\nu_{A} \right| + N_{1B} \left|\nu_{B} \right| = N_{1C} \nu_{C} + N_{1D} \nu_{D} + ... + N_{1N} \nu_{N} \label{36}$
which is simply a statement of Axiom II for atomic species #1. In order to use Equation \ref{35} to construct a stoichiometric schema, we make use of the following two transformations:
Transformation I. The pictures associated with the molecular species submatrices are constructed according to the transformations indicated by
$\begin{bmatrix} {N_{1A} } \\ {N_{2A} } \\ {.} \\ {N_{TA} }\end{bmatrix} \Rightarrow A , \quad \begin{bmatrix} {N_{1B} } \\ {N_{2B} } \\ {.} \\ {N_{TB} }\end{bmatrix} \Rightarrow B , \quad \begin{bmatrix} {N_{1C} } \\ {N_{2C} } \\ {.} \\ {N_{TC} }\end{bmatrix} \Rightarrow C , \quad {etc.,} \label{37}$
Transformation II. The equal sign in Equation \ref{35} is transformed to arrows depending on the sign of $$R_{N}$$. These transformations are given by
\begin{align} && = && \Rightarrow && \to && \text{when} && R_{N} > 0 \nonumber\\ && = && \Rightarrow && \leftarrow && \text{when} && R_{N} < 0 \label{38}\\ && = &&\Rightarrow && \mathop{\longleftarrow}\limits^{\displaystyle\longrightarrow} && \text{when} && {R_{N} = \pm \left|R_{N} \right|} \nonumber\end{align}
For the first condition given by Equation \ref{38}, we can use these transformations to express Equation \ref{35} in terms of the picture given by
$\left|\nu_{A} \right|A + \left|\nu_{B} \right|B\to \nu_{C} C + \nu_{D} D + ... + \nu_{N} N \label{39}$
Here one must understand that this represents a picture of the stoichiometry of a reacting system in which the molecular species, $$A$$ and $$B$$, react to form the molecular species represented by $$C$$, $$D$$, ..., and $$N$$. While we have assigned an equation number to this picture, it is not an equation.
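Transformations I and II can be mimicked in code. The sketch below is ours (the species names and coefficients are illustrative, not taken from the text): it renders a list of signed stoichiometric coefficients, reactants negative and products positive as in Equation \ref{34}, into a schema string of the kind pictured in \ref{39}.

```python
from fractions import Fraction as F

def schema(species, nu, arrow="->"):
    """Render |nu_A| A + ... -> nu_C C + ... from signed coefficients.

    Reactants (negative coefficients) go on the left of the arrow,
    products (positive coefficients) on the right; unit coefficients
    are suppressed, as is conventional in stoichiometric schemata.
    """
    def side(pairs):
        return " + ".join(f"{abs(v)} {s}" if abs(v) != 1 else s
                          for s, v in pairs)
    reactants = [(s, v) for s, v in zip(species, nu) if v < 0]
    products = [(s, v) for s, v in zip(species, nu) if v > 0]
    return f"{side(reactants)} {arrow} {side(products)}"

picture = schema(["A", "B", "C", "D"], [F(-1), F(-2), F(3), F(1)])
# picture is the string "A + 2 B -> 3 C + D"
```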
Example $$\PageIndex{1}$$: Schema for complete oxidation of ethane
In this example we want to apply the ideas given in the previous paragraphs to develop the stoichiometric schema for the complete combustion of ethane. For that reaction, the molecular species under consideration are
Molecular species: $\ce{C2H6} , \quad \ce{O2} , \quad \ce{H2O} , \quad \ce{CO2} \label{a1}\tag{1}$
and the chemical composition matrix can be illustrated as
$\text{Molecular Species}\to \ce{C2H6} \quad \ce{O2} \quad \ce{H2O} \quad \ce{CO2} \\ \begin{matrix} {carbon} \\ { hydrogen} \\ {oxygen} \end{matrix} \begin{bmatrix} \color{red}{ 2} & { 0} & {0} & {1} \\ \color{red}{ 6} & { 0} & {2} & {0} \\ \color{red}{ 0} & { 2} & {1} & {2} \end{bmatrix} \label{a2}\tag{2}$
Here we have colored the column that represents the molecular species submatrix for ethane, and it should be clear that the numbers in that colored column are associated with the ethane molecule, $$\ce{C2H6}$$. For the complete combustion of ethane, Equation \ref{33} takes the form
$\begin{matrix} {carbon} \\ { hydrogen} \\ {oxygen} \end{matrix} \begin{bmatrix} {2} \\ {6} \\ {0}\end{bmatrix} \nu_{\ce{C2H6}} + \begin{bmatrix} {0} \\ {0} \\ {2} \end{bmatrix} \nu_{\ce{O2}} + \begin{bmatrix} {0} \\ {2} \\ {1} \end{bmatrix} \nu_{\ce{H2O}} + \begin{bmatrix} {1} \\ {0} \\ {2} \end{bmatrix} \nu_{\ce{CO2}} = \begin{bmatrix} {0} \\ {0} \\ {0} \end{bmatrix} \label{a3}\tag{3}$
in which carbon dioxide has been chosen as the pivot species. For this reaction one can follow the development in Example $$6.2.1$$ (see Eqs. 5) to show that the stoichiometric coefficients are given by
$\nu_{\ce{C2H6}} = -\frac{1}{2} \label{a4a}\tag{4a}$
$\nu_{\ce{O2}} = -\frac{7}{4} \label{a4b}\tag{4b}$
$\nu_{\ce{H2O}} = +\frac{3}{2} \label{a4c}\tag{4c}$
$\nu_{\ce{CO2}} = +1 \label{a4d}\tag{4d}$
When these values for the stoichiometric coefficients are used in Equation \ref{a3} we obtain the matrix equation given by
$\begin{matrix} {carbon} \\ { hydrogen} \\ {oxygen} \end{matrix} \frac{1}{2} \begin{bmatrix} {2} \\ {6} \\ {0}\end{bmatrix}+\frac{7}{4} \begin{bmatrix} {0} \\ {0} \\ {2}\end{bmatrix} = \frac{3}{2} \begin{bmatrix} {0} \\ {2} \\ {1}\end{bmatrix}+ \begin{bmatrix} {1} \\ {0} \\ {2} \end{bmatrix} \label{a5}\tag{5}$
To construct a picture, or stoichiometric schema, on the basis of Equation \ref{a5} we make use of the transformations represented by Eqs. \ref{37} and \ref{38}.
Transformation I. The pictures associated with molecular species submatrices are extracted directly from the atomic matrix according to
$\begin{matrix} {carbon} \\ { hydrogen} \\ {oxygen} \end{matrix} \begin{bmatrix} {2} \\ {6} \\ {0}\end{bmatrix} \Rightarrow {\ce{C2H6}} , \quad \begin{bmatrix} {0} \\ {0} \\ {2} \end{bmatrix} \Rightarrow {\ce{O2}} , \quad \begin{bmatrix} {0} \\ {2} \\ {1} \end{bmatrix} \Rightarrow {\ce{H2O}} , \quad \begin{bmatrix} {1} \\ {0} \\ {2} \end{bmatrix} \Rightarrow {\ce{CO2}}\label{a7}\tag{6}$
Transformation II. The equal sign in Equation \ref{a5} is transformed to arrows depending on the sign of $$R_{\ce{CO2}}$$. These transformations are given by
\begin{align} && = && \Rightarrow && \to && \text{when} && R_{\ce{CO2}} > 0 \nonumber\\ && = && \Rightarrow && \leftarrow && \text{when} && R_{\ce{CO2}} < 0 \label{a8}\tag{7} \\ && = &&\Rightarrow && \mathop{\longleftarrow}\limits^{\displaystyle\longrightarrow} && \text{when} && {R_{\ce{CO2}} = \pm \left|R_{\ce{CO2}} \right|} \nonumber\end{align}
When we follow these transformation rules, the matrix equation given by Equation \ref{a5} becomes a stoichiometric schema having the following possibilities:
$\frac{1}{2} \ce{C2H6} + \frac{7}{4} \ce{O2} \longrightarrow \frac{3}{2} \ce{H2O} + \ce{CO2} , \quad R_{\ce{CO2}} > 0 \label{a9}\tag{8}$
$\frac{1}{2} \ce{C2H6} + \frac{7}{4} \ce{O2} \longleftarrow \frac{3}{2} \ce{H2O} + \ce{CO2} , \quad R_{\ce{CO2}} < 0 \label{a10}\tag{9}$
$\frac{1}{2} \ce{C2H6} + \frac{7}{4} \ce{O2} \mathop{\longleftarrow}\limits^{\displaystyle\longrightarrow} \frac{3}{2} \ce{H2O} + \ce{CO2} , \quad R_{\ce{CO2}} = \pm \left|R_{\ce{CO2}} \right| \label{a11}\tag{10}$
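These stoichiometric coefficients can also be verified numerically. The following minimal sketch (assuming NumPy is available; the species ordering and the normalization on the pivot species $$\ce{CO2}$$ are our own bookkeeping choices) recovers the coefficients of Eqs. \ref{a4a}-\ref{a4d} as the null space of the atomic matrix in Equation \ref{a2}:

```python
import numpy as np

# Atomic matrix from Eq. (2); columns: C2H6, O2, H2O, CO2; rows: C, H, O.
N_JA = np.array([[2, 0, 0, 1],
                 [6, 0, 2, 0],
                 [0, 2, 1, 2]], dtype=float)

# Axiom II requires N_JA @ nu = 0, so nu spans the null space of N_JA.
# With 4 species and rank 3, that null space is one-dimensional, and the
# last right singular vector of the SVD spans it.
nu = np.linalg.svd(N_JA)[2][-1]
nu = nu / nu[3]          # normalize so the pivot species CO2 has nu = +1

print(nu)                # -> approximately [-0.5, -1.75, 1.5, 1.0]
assert np.allclose(N_JA @ nu, 0.0)
```

Because the matrix has four columns and rank three, the null space is one-dimensional, which is precisely the statement that this system supports a single independent stoichiometric reaction.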
For a single independent stoichiometric reaction, such as the complete combustion of ethane, these stoichiometric schemata are easily developed by counting atoms; however, it is important to have a general methodology for creating these pictures from Eqs. $$(6.2.8)$$.
It is important to remember that Axiom II given by Eqs. $$(6.2.8)$$ can be used to determine ratios of reaction rates, or stoichiometric coefficients, as indicated by Eqs. \ref{27a}-\ref{27n}. In addition, Eqs. $$(6.2.8)$$ can be used to derive the matrix equation given by Equation \ref{35}. Equation \ref{35} is not necessary for solving problems, but it can be transformed to Equation \ref{39} which is a useful pictorial representation. We find it convenient to discuss chemical reactions using stoichiometric schemata such as those given in Example A.1; however, they should not be confused with the mathematical equations represented by Eqs. $$(6.2.8)$$.
## C3: Multiple Independent Stoichiometric Reactions
At this point we have shown that Eqs. $$(6.2.8)$$ can be used to constrain stoichiometric reaction rates in a manner that depends on the number of atomic species and the number of molecular species involved in the process. If there are $$N-1$$ independent equations in the set of equations illustrated explicitly by Eqs. $$(6.2.11)$$, we can use Axiom II to determine all the rates of reaction in terms of a single pivot species. This is referred to as the case of a single independent reaction. The number of independent stoichiometric reaction rates associated with Eqs. $$(6.2.11)$$ is given by
$\begin{Bmatrix} \text{number of} \\ \text{independent} \\ \text{reactions} \end{Bmatrix} = N - r \label{40}$
in which $$r = rank[N_{JA} ]$$. By rank we mean explicitly the row rank which represents the number of linearly independent equations contained in Eqs. $$(6.2.11)$$. In Example $$6.2.2$$ the chemical composition matrix provided $$N = 3$$ and $$rank = 2$$, thus we had an example of a single independent stoichiometric reaction. This meant that a single reaction rate had to be measured in order to determine all the other reaction rates. In Example $$6.2.3$$ the chemical composition matrix gave $$N = 4$$ and $$rank = 2$$, and we were confronted with two independent stoichiometric reactions. In this case two reaction rates had to be measured in order to determine all the other reaction rates. The determination of reaction rates for systems having multiple independent reactions is straightforward and requires only the direct application of Eqs. $$(6.2.8)$$. In Example A.1 we showed how to develop the stoichiometric schema for a single independent stoichiometric reaction rate, and in the following paragraphs we extend that development to the case of multiple independent stoichiometric reactions.
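As a sketch of how Equation \ref{40} is applied in practice (assuming NumPy is available; the matrix below is the chemical composition matrix for the partial combustion of ethane treated in the following paragraphs), the number of independent reactions follows directly from the rank:

```python
import numpy as np

# Chemical composition matrix for C2H6, O2, H2O, CO, CO2, C2H4O
# (rows: carbon, hydrogen, oxygen).
N_JA = np.array([[2, 0, 0, 1, 1, 2],
                 [6, 0, 2, 0, 0, 4],
                 [0, 2, 1, 1, 2, 1]])

N = N_JA.shape[1]                  # number of molecular species
r = np.linalg.matrix_rank(N_JA)   # row rank of the composition matrix
print(N - r)                       # number of independent reactions -> 3
```

The result, three independent reactions, is why three pivot species must be chosen and three reaction rates must be measured for this system.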
In order to avoid the complex algebra associated with the development of schemata for the general case, we direct our attention to the specific example of the partial combustion of ethane that was studied in Example $$6.3.1$$. There we showed that the reaction rates for $$\ce{C2H6}$$, $$\ce{O2}$$ and $$\ce{H2O}$$ could be expressed in terms of the reaction rates for $$\ce{CO}$$, $$\ce{CO2}$$, and $$\ce{C2H4O}$$, and our analysis took the form
Step I: $\underbrace{ \begin{bmatrix} R_{\ce{C2H6}} \\ R_{\ce{O2}} \\ R_{\ce{H2O}} \end{bmatrix} }_{non-pivots} = \underbrace{\begin{bmatrix} - \frac{1}{2} & - \frac{1}{2} & {- 1} \\ - \frac{5}{4} & - \frac{7}{4} & {- 1} \\ + \frac{3}{2} & + \frac{3}{2} & + 1 \end{bmatrix}}_{\text{pivot matrix}} \underbrace{ \begin{bmatrix} R_{\ce{CO}}^{I} \\ R_{\ce{CO2} }^{II} \\ R_{\ce{C2H4O}}^{III} \end{bmatrix} }_{pivots} \label{41}$
Here we have identified the reaction rates for the pivots as $$R_{\ce{CO}}^{I}$$, $$R_{\ce{CO2}}^{II}$$ and $$R_{\ce{C2H4O}}^{III}$$ with the idea that these are the three independent rates of production which must be determined experimentally. In Chapter 6 we identified the net rates of production for the pivot species as $$R_{ \ce{CO}}$$, $$R_{ \ce{CO2} }$$ and $$R_{\ce{C2H4O}}$$; however, in this case we have added the superscripts I, II, and III as additional identifiers. For the mathematical computation required to analyze material balance problems with chemical reaction, this type of nomenclature is unnecessary and could be considered cumbersome. However, our objective here is to transform equations to pictures and this is a more difficult task than using Axiom II to simply compute stoichiometric net rates of production.
From Equation \ref{41} we see that the reaction rate for ethane takes the form
$R_{\ce{C2H6}} = - \frac{1}{2} R_{\ce{CO}}^{I} - \frac{1}{2} R_{\ce{CO2} }^{II} - R_{\ce{C2H4O}}^{III} \label{42}$
and a little thought will indicate that the coefficients $$-{\frac{1}{2}}$$, $$-{\frac{1}{2}}$$ and $$-1$$ represent the stoichiometric coefficients for ethane in terms of the three independent rates of production for carbon monoxide, carbon dioxide and ethylene oxide. We identify the matrix of coefficients in Equation \ref{41} as the stoichiometric coefficients associated with the three independent net rates of production in order to express the net rates of production for the non-pivot species as
$R_{\ce{C2H6}} = \nu_{\ce{C2H6}}^{I} R_{\ce{CO}}^{I} + \nu_{\ce{C2H6}}^{II} R_{\ce{CO2}}^{II} + \nu_{\ce{C2H6}}^{III} R_{\ce{C2H4O}}^{III} \label{43a}$
$R_{\ce{O2}} = \nu_{\ce{O2}}^{I} R_{\ce{CO}}^{I} + \nu_{\ce{O2}}^{II} R_{\ce{CO2}}^{II} + \nu_{\ce{O2}}^{III} R_{\ce{C2H4O}}^{III} \label{43b}$
$R_{\ce{H2O}} = \nu_{\ce{H2O}}^{I} R_{\ce{CO}}^{I} + \nu_{\ce{H2O}}^{II} R_{\ce{CO2} }^{II} + \nu_{\ce{H2O}}^{III} R_{\ce{C2H4O}}^{III} \label{43c}$
In terms of these stoichiometric coefficients, Equation \ref{41} can be expressed as
Step II: $\underbrace{ \begin{bmatrix} R_{\ce{C2H6}} \\ R_{\ce{O2}} \\ R_{\ce{H2O}} \end{bmatrix} }_{non-pivots} = \underbrace{\begin{bmatrix} \nu_{\ce{C2H6}}^{I} & {\nu_{\ce{C2H6}}^{II} } & {\nu_{\ce{C2H6}}^{III} } \\ {\nu_{\ce{O2}}^{I} } & {\nu_{\ce{O2}}^{II} } & {\nu_{\ce{O2}}^{III} } \\ {\nu_{\ce{H2O}}^{I} } & {\nu_{\ce{H2O}}^{II} } & {\nu_{\ce{H2O}}^{III} } \end{bmatrix}}_{\text{pivot matrix}} \underbrace{ \begin{bmatrix} {R_{\ce{CO}}^{I} } \\ {R_{\ce{CO2} }^{II} } \\ {R_{\ce{C2H4O}}^{III} } \end{bmatrix} }_{pivots} \label{44}$
where the numerical values of these stoichiometric coefficients, taken from Equation \ref{41}, are listed here as
$\begin{bmatrix} \nu_{\ce{C2H6}}^{I} & \nu_{\ce{C2H6}}^{II} & \nu_{\ce{C2H6}}^{III} \\ \nu_{\ce{O2}}^{I} & \nu_{\ce{O2}}^{II} & \nu_{\ce{O2}}^{III} \\ \nu_{\ce{H2O}}^{I} & \nu_{\ce{H2O}}^{II} & \nu_{\ce{H2O}}^{III} \end{bmatrix} = \begin{bmatrix} - \frac{1}{2} & - \frac{1}{2} & {- 1} \\ - \frac{5}{4} & - \frac{7}{4} & {- 1} \\ + \frac{3}{2} & + \frac{3}{2} & {+ 1} \end{bmatrix} \label{45}$
We now make use of an identity transformation for the pivot species
$\begin{bmatrix} {R_{\ce{CO}}^{I} } \\ {R_{\ce{CO2} }^{II} } \\ {R_{\ce{C2H4O}}^{III} }\end{bmatrix} = \begin{bmatrix} {1} & {0} & {0} \\ {0} & {1} & {0} \\ {0} & {0} & {1}\end{bmatrix} \begin{bmatrix} {R_{\ce{CO}}^{I} } \\ {R_{\ce{CO2} }^{II} } \\ {R_{\ce{C2H4O}}^{III} }\end{bmatrix} = \begin{bmatrix} {\nu_{\ce{CO}}^{I} } & {0} & {0} \\ {0} & {\nu_{\ce{CO2} }^{II} } & {0} \\ {0} & {0} & {\nu_{\ce{C2H4O}}^{III} }\end{bmatrix} \begin{bmatrix} {R_{\ce{CO}}^{I} } \\ {R_{\ce{CO2} }^{II} } \\ {R_{\ce{C2H4O}}^{III} }\end{bmatrix} \label{46}$
along with the partition described by Eqs. \ref{14} through \ref{18} to obtain a solution for the complete column matrix of production given by
Step III: $\begin{bmatrix} R_{\ce{C2H6}} \\ R_{\ce{O2}} \\ R_{\ce{H2O}} \\ R_{\ce{CO} } \\ R_{\ce{CO2} } \\ R_{\ce{C2H4O}} \end{bmatrix} = \begin{bmatrix} \nu_{\ce{C2H6}}^{I} & \nu_{\ce{C2H6}}^{II} & \nu_{\ce{C2H6}}^{III} \\ \nu_{\ce{O2}}^{I} & \nu_{\ce{O2}}^{II} & \nu_{\ce{O2}}^{III} \\ \nu_{\ce{H2O}}^{I} & \nu_{\ce{H2O}}^{II} & \nu_{\ce{H2O}}^{III} \\ \nu_{\ce{CO}}^{I} & {0} & {0} \\ {0} & \nu_{\ce{CO2} }^{II} & {0} \\ {0} & {0} & \nu_{\ce{C2H4O}}^{III} \end{bmatrix} \underbrace{ \begin{bmatrix} R_{\ce{CO}}^{I} \\ R_{\ce{CO2} }^{II} \\ R_{\ce{C2H4O}}^{III} \end{bmatrix} }_{pivots} \label{47}$
Here one must remember that the stoichiometric coefficients for the pivot species are all equal to one, i.e.,
$\nu_{\ce{CO}}^{I} = 1 , \quad \nu_{\ce{CO2} }^{II} = 1 , \quad \nu_{\ce{C2H4O}}^{III} = 1 \label{48}$
In addition, one must keep in mind that $$R_{\ce{C2H4O}}^{III}$$ on the right hand side of Equation \ref{47} has exactly the same physical significance as $$R_{\ce{C2H4O}}$$ on the left hand side of Equation \ref{47}.
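A short numerical sketch (assuming NumPy is available; the pivot rates below are hypothetical values chosen purely for illustration) shows how Equation \ref{47} delivers the complete column of net rates of production and automatically satisfies Axiom II:

```python
import numpy as np

# Full pivot matrix of Equation (47): non-pivot rows from Equation (45),
# identity rows for the pivot species CO, CO2 and C2H4O.
pivot_matrix = np.array([[-1/2, -1/2, -1],
                         [-5/4, -7/4, -1],
                         [ 3/2,  3/2,  1],
                         [ 1,    0,    0],
                         [ 0,    1,    0],
                         [ 0,    0,    1]])

# Hypothetical measured pivot rates (mol/s): R_CO, R_CO2, R_C2H4O.
R_pivot = np.array([2.0, 1.0, 0.5])

# Net rates for C2H6, O2, H2O, CO, CO2, C2H4O, in that order.
R = pivot_matrix @ R_pivot
print(R)   # -> approximately [-2, -4.75, 5, 2, 1, 0.5]

# Axiom II check: atomic species are conserved, so N_JA @ R = 0.
N_JA = np.array([[2, 0, 0, 1, 1, 2],
                 [6, 0, 2, 0, 0, 4],
                 [0, 2, 1, 1, 2, 1]])
assert np.allclose(N_JA @ R, 0.0)
```

The final assertion is the numerical counterpart of Equation \ref{49}: whatever values the three independent pivot rates take, the resulting column of rates conserves carbon, hydrogen and oxygen.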
The column matrix of reaction rates on the left hand side of Equation \ref{47} is the column matrix that appears in Axiom II, and for this special case we express that axiom as
Axiom II $\sum_{A = 1}^{A = 6}N_{JA} R_{A} = 0 , \quad J\Rightarrow \ce{C} , \ce{H} , \ce{O} \label{49}$
In this analysis, we set up the chemical composition matrix to explicitly represent the pivot species as $$\ce{CO}$$, $$\ce{CO2}$$, and $$\ce{C2H4O}$$. This leads to the visual representation given by
$\text{Molecular Species}\to \ce{C2H6} \quad \ce{O2} \quad \ce{H2O} \quad \ce{CO} \quad \ce{CO2} \quad \ce{C2H4O} \\ \begin{matrix} {carbon} \\ { hydrogen} \\ {oxygen} \end{matrix} \begin{bmatrix} { 2} & { 0} & {0} & {1} & { 1} & {2 } \\ { 6} & { 0} & {2} & {0} & { 0} & {4 } \\ { 0} & { 2} & {1} & {1} & { 2} & {1 }\end{bmatrix} \label{50}$
while the matrix representation of Equation \ref{49} takes the form
$\begin{bmatrix} {2} & {0} & {0} & {1} & {1} & {2} \\ {6} & {0} & {2} & {0} & {0} & {4} \\ {0} & {2} & {1} & {1} & {2} & {1}\end{bmatrix} \begin{bmatrix} R_{\ce{C2H6}} \\ R_{\ce{O2}} \\ R_{\ce{H2O}} \\ R_{\ce{CO}} \\ R_{\ce{CO2} } \\ R_{\ce{C2H4O}} \end{bmatrix} = \begin{bmatrix} {0} \\ {0} \\ {0}\end{bmatrix} \label{51}$
Substitution of Equation \ref{47} into this equation yields
Step IV: $\begin{bmatrix} {2} & {0} & {0} & {1} & {1} & {2} \\ {6} & {0} & {2} & {0} & {0} & {4} \\ {0} & {2} & {1} & {1} & {2} & {1}\end{bmatrix} \begin{bmatrix} {\nu_{\ce{C2H6}}^{I} } & {\nu_{\ce{C2H6}}^{II} } & {\nu_{\ce{C2H6}}^{III} } \\ {\nu_{\ce{O2}}^{I} } & {\nu_{\ce{O2}}^{II} } & {\nu_{\ce{O2}}^{III} } \\ {\nu_{\ce{H2O}}^{I} } & {\nu_{\ce{H2O}}^{II} } &{\nu_{\ce{H2O}}^{III} } \\ {\nu_{\ce{CO}}^{I} } & {0} & {0} \\ {0} & {\nu_{\ce{CO2} }^{II} } & {0} \\ {0} & {0} & {\nu_{\ce{C2H4O}}^{III} } \end{bmatrix} \underbrace{ \begin{bmatrix} {R_{\ce{CO}}^{I} } \\ {R_{\ce{CO2} }^{II} } \\ {R_{\ce{C2H4O}}^{III} }\end{bmatrix} }_{pivots} = \begin{bmatrix} {0} \\ {0} \\ {0}\end{bmatrix} \label{52}$
and at this point we construct a complete column/row partition of the second and third matrices to obtain
Step V: $\begin{bmatrix} {2} & {0} & {0} & {1} & {1} & {2} \\ {6} & {0} & {2} & {0} & {0} & {4} \\ {0} & {2} & {1} & {1} & {2} & {1}\end{bmatrix} \begin{bmatrix} {\nu_{\ce{C2H6}}^{I} } & \vdots & {\nu_{\ce{C2H6}}^{II} } & \vdots & {\nu_{\ce{C2H6}}^{III} } \\ {\nu_{\ce{O2}}^{I} } & \vdots & {\nu_{\ce{O2}}^{II} } & \vdots & {\nu_{\ce{O2}}^{III} } \\ {\nu_{\ce{H2O}}^{I} } & \vdots & {\nu_{\ce{H2O}}^{II} } & \vdots & {\nu_{\ce{H2O}}^{III} } \\ {\nu_{\ce{CO}}^{I} } & \vdots & {0} & \vdots & {0} \\ {0} & \vdots & {\nu_{\ce{CO2} }^{II} } & \vdots & {0} \\ {0} & \vdots & {0} & \vdots & {\nu_{\ce{C2H4O}}^{III} } \end{bmatrix} \begin{bmatrix} R_{\ce{CO}}^{I} \\ \hdashline R_{\ce{CO2} }^{II} \\ \hdashline R_{\ce{C2H4O}}^{III} \end{bmatrix} = \begin{bmatrix} {0} \\ {0} \\ {0}\end{bmatrix} \label{53}$
Following the analysis given in Sec. B.1, this column/row partition leads to
Step VI: $\begin{bmatrix} {2} & {0} & {0} & {1} & {1} & {2} \\ {6} & {0} & {2} & {0} & {0} & {4} \\ {0} & {2} & {1} & {1} & {2} & {1}\end{bmatrix} \left\{\begin{bmatrix} \nu_{\ce{C2H6}}^{I} \\ \nu_{\ce{O2}}^{I} \\ \nu_{\ce{H2O}}^{I} \\ \nu_{\ce{CO}}^{I} \\ {0} \\ {0} \end{bmatrix} R_{\ce{CO}}^{I} + \begin{bmatrix} \nu_{\ce{C2H6}}^{II} \\ \nu_{\ce{O2}}^{II} \\ \nu_{\ce{H2O}}^{II} \\ {0} \\ \nu_{\ce{CO2} }^{II} \\ {0} \end{bmatrix} R_{\ce{CO2} }^{II} + \begin{bmatrix} \nu_{\ce{C2H6} }^{III} \\ \nu_{\ce{O2}}^{III} \\ \nu_{\ce{H2O}}^{III} \\ {0} \\ {0} \\ \nu_{\ce{C2H4O}}^{III} \end{bmatrix} R_{\ce{C2H4O}}^{III} \right\} = \begin{bmatrix} {0} \\ {0} \\ {0}\end{bmatrix} \label{54}$
We now make use of the fact that the stoichiometric reaction rates of the pivot species are independent, and this provides the following three equations:
Step VII: $\begin{bmatrix} {2} & {0} & {0} & {1} & {1} & {2} \\ {6} & {0} & {2} & {0} & {0} & {4} \\ {0} & {2} & {1} & {1} & {2} & {1}\end{bmatrix} \begin{bmatrix} \nu_{\ce{C2H6}}^{I} \\ \nu_{\ce{O2}}^{I} \\ \nu_{\ce{H2O}}^{I} \\ \nu_{\ce{CO}}^{I} \\ {0} \\ {0} \end{bmatrix} R_{\ce{CO}}^{I} = \begin{bmatrix} {0} \\ {0} \\ {0} \end{bmatrix} \label{55}$
Step VII: $\begin{bmatrix} {2} & {0} & {0} & {1} & {1} & {2} \\ {6} & {0} & {2} & {0} & {0} & {4} \\ {0} & {2} & {1} & {1} & {2} & {1} \end{bmatrix} \begin{bmatrix} \nu_{\ce{C2H6}}^{II} \\ \nu_{\ce{O2}}^{II} \\ \nu_{\ce{H2O}}^{II} \\ {0} \\ \nu_{\ce{CO2} }^{II} \\ {0} \\\end{bmatrix} R_{\ce{CO2}}^{II} = \begin{bmatrix} {0} \\ {0} \\ {0} \end{bmatrix} \label{56}$
Step VII: $\begin{bmatrix} {2} & {0} & {0} & {1} & {1} & {2} \\ {6} & {0} & {2} & {0} & {0} & {4} \\ {0} & {2} & {1} & {1} & {2} & {1} \end{bmatrix} \begin{bmatrix} \nu_{\ce{C2H6}}^{III} \\ \nu_{\ce{O2}}^{III} \\ \nu_{\ce{H2O}}^{III} \\ {0} \\ {0} \\ \nu_{\ce{C2H4O}}^{III} \end{bmatrix} R_{\ce{C2H4O}}^{III} = \begin{bmatrix} {0} \\ {0} \\ {0} \end{bmatrix} \label{57}$
Each of these three equations has exactly the same form as Equation \ref{30}, thus we can repeat the procedure developed for a single independent stoichiometric reaction rate to determine the stoichiometric schemata for the three independent stoichiometric reaction rates. For example, the complete column/row partition of Equation \ref{55} leads to the matrix equation given by
Step VIII: $\begin{matrix} {carbon} \\ { hydrogen} \\ {oxygen} \end{matrix} \begin{bmatrix} {2} \\ {6} \\ {0}\end{bmatrix} \nu_{\ce{C2H6}}^{I} + \begin{bmatrix} {0} \\ {0} \\ {2}\end{bmatrix} \nu_{\ce{O2}}^{I} + \begin{bmatrix} {0} \\ {2} \\ {1}\end{bmatrix} \nu_{\ce{H2O}}^{I} + \begin{bmatrix} {1} \\ {0} \\ {1}\end{bmatrix} \nu_{\ce{CO}}^{I} = \begin{bmatrix} {0} \\ {0} \\ {0}\end{bmatrix} \label{58}$
while the use of the numerical values of the stoichiometric coefficients given in Equation \ref{45} provides
$\begin{matrix} {carbon} \\ { hydrogen} \\ {oxygen} \end{matrix} \frac{1}{2} \begin{bmatrix} {2} \\ {6} \\ {0}\end{bmatrix}+ \frac{5}{4} \begin{bmatrix} {0} \\ {0} \\ {2}\end{bmatrix} = \frac{3}{2} \begin{bmatrix} {0} \\ {2} \\ {1}\end{bmatrix}+\begin{bmatrix} {1} \\ {0} \\ {1}\end{bmatrix} \label{59}$
This represents a bona fide equation that has no use other than to create a stoichiometric schema. To do so, we make use of Transformation I (see Equation \ref{37}) to obtain
$\begin{matrix} {carbon} \\ { hydrogen} \\ {oxygen} \end{matrix} \begin{bmatrix} {2} \\ {6} \\ {0}\end{bmatrix} \Rightarrow \ce{C2H6} , \quad \begin{bmatrix} {0} \\ {0} \\ {2} \end{bmatrix} \Rightarrow \ce{O2} , \quad \begin{bmatrix} {0} \\ {2} \\ {1}\end{bmatrix} \Rightarrow \ce{H2O} , \quad \begin{bmatrix} {1} \\ {0} \\ {1}\end{bmatrix} \Rightarrow \ce{CO} \label{60}$
and then employ Transformation II (see Equation \ref{38}) so that Equation \ref{59} provides the stoichiometric schema for the first independent reaction.
Stoichiometric schema I: $\frac{1}{2} \ce{C2H6} + \frac{5}{4} \ce{O2} \mathop{\longleftarrow}\limits^{\displaystyle\longrightarrow} \frac{3}{2} \ce{H2O} + \ce{CO} , \quad R_{\ce{CO}}^{I} = \pm \left|R_{\ce{CO}}^{I} \right| \label{61a}$
The schemata for the second and third independent reactions are developed in the same manner leading to
Stoichiometric schema II: $\frac{1}{2} \ce{C2H6} + \frac{7}{4} \ce{O2} \mathop{\longleftarrow}\limits^{\displaystyle\longrightarrow} \frac{3}{2} \ce{H2O} + \ce{CO2} , \quad R_{\ce{CO2}}^{II} = \pm \left|R_{\ce{CO2}}^{II} \right|\label{61b}$
Stoichiometric schema III: $\ce{C2H6} + \ce{O2} \mathop{\longleftarrow}\limits^{\displaystyle\longrightarrow} \ce{H2O} + \ce{C2H4O} , \quad R_{\ce{C2H4O}}^{III} = \pm \left|R_{\ce{C2H4O}}^{III} \right|\label{61c}$
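Each of these schemata can be checked by counting atoms, and that check is easy to automate. In the sketch below (assuming NumPy is available; the species-to-column dictionary is our own bookkeeping device), reactants carry negative coefficients and products positive ones:

```python
import numpy as np

# Atomic columns (C, H, O) for each molecular species.
col = {"C2H6": [2, 6, 0], "O2": [0, 0, 2], "H2O": [0, 2, 1],
       "CO": [1, 0, 1], "CO2": [1, 0, 2], "C2H4O": [2, 4, 1]}

# Schemata I-III as {species: signed stoichiometric coefficient}.
schemata = [
    {"C2H6": -1/2, "O2": -5/4, "H2O": 3/2, "CO": 1},      # schema I
    {"C2H6": -1/2, "O2": -7/4, "H2O": 3/2, "CO2": 1},     # schema II
    {"C2H6": -1,   "O2": -1,   "H2O": 1,   "C2H4O": 1},   # schema III
]

for s in schemata:
    balance = sum(nu * np.array(col[sp]) for sp, nu in s.items())
    assert np.allclose(balance, 0.0)   # each schema conserves C, H and O
```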
While these stoichiometric schemata are unnecessary for problem solving, they are quite traditional and in this section we have provided a methodical route for their development. In terms of solving problems, one must remember that the net rates of production for the non-pivot species are given in terms of the net rates of production of all the pivot species, and this idea is illustrated by Eqs. \ref{41} through \ref{44}.
In Chapter 6 we developed a rigorous mathematical statement of the concept that atomic species are neither created nor destroyed by chemical reaction. This concept is identified as Axiom II and it is represented by Eqs. $$(6.2.8)$$ or by Eq. $$(6.2.10)$$. Axiom II was used to identify the number of independent net rates of production that are required for any given set of molecular species. In this appendix we have shown how Axiom II can be used to develop the stoichiometric schemata for the independent rates of production. These pictures are convenient for the discussion of independent stoichiometric reactions but they are not necessary for solving material balance problems.
## C4: Problems
### Section C1
1. Given a matrix equation of the form $$\mathbf{c} = \mathbf{Ab}$$ having an explicit representation of the form,
$\begin{bmatrix} {c_{1} } \\ {c_{2} } \\ {c_{3} } \\ {c_{4} } \\ {c_{5} }\end{bmatrix} = \begin{bmatrix} {a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & {a_{23} } & {a_{24} } \\ {a_{31} } & {a_{32} } & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & {a_{43} } & {a_{44} } \\ {a_{51} } & {a_{52} } & {a_{53} } & {a_{54} }\end{bmatrix} \begin{bmatrix} {b_{1} } \\ {b_{2} } \\ {b_{3} } \\ {b_{4} }\end{bmatrix} \label{1.1}$
develop a partition that will lead to an equation for the column vector represented by
$\begin{bmatrix} {c_{1} } \\ {c_{2} } \\ {c_{3} }\end{bmatrix} = ? \label{1.2}$
If the elements of $$\mathbf{c}$$ are related to the elements of $$\mathbf{b}$$ according to
$\begin{bmatrix} {c_{4} } \\ {c_{5} }\end{bmatrix} = \begin{bmatrix} {b_{3} } \\ {b_{4} }\end{bmatrix} \label{1.3}$
what are the elements of the matrix normally identified as $$\mathbf{A}_{ 21}$$?
2. Construct the complete column/row partition of the matrix equation given by
$\begin{bmatrix} {a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\ {a_{21} } & {a_{22} } & {a_{23} } & {a_{24} } \\ {a_{31} } & {a_{32} } & {a_{33} } & {a_{34} } \\ {a_{41} } & {a_{42} } & {a_{43} } & {a_{44} }\end{bmatrix} \begin{bmatrix} {b_{1} } \\ {b_{2} } \\ {b_{3} } \\ {b_{4} }\end{bmatrix} = \begin{bmatrix} {c_{1} } \\ {c_{2} } \\ {c_{3} } \\ {c_{4} }\end{bmatrix} \label{2.1}$
and show how it can be represented in the form
$\begin{bmatrix} {} \\ {} \\ {}\end{bmatrix}b_{1} + \begin{bmatrix} {} \\ {} \\ {}\end{bmatrix}b_{2} + \begin{bmatrix} {} \\ {} \\ {}\end{bmatrix}b_{3} + \begin{bmatrix} {} \\ {} \\ {}\end{bmatrix}b_{4} = \begin{bmatrix} {} \\ {} \\ {}\end{bmatrix} \label{2.2}$
### Section C2
3. Given a reacting system containing $$\ce{C3H6}$$, $$\ce{NH3}$$, $$\ce{O2}$$, $$\ce{C3H3N}$$ and $$\ce{H2O}$$, construct the chemical composition matrix and determine the rank of that matrix. Use the development presented in Sec. 6.4 to develop the stoichiometric schema for the single independent reaction involving propylene, ammonia, oxygen, acrylonitrile and water.
4. Construct the stoichiometric schema for the reaction described in Problem 6-13 for which the stoichiometric coefficients are given by
$\frac{R_{\ce{NaOH}} }{R_{\ce{NaBr}} } = \nu_{\ce{NaOH}} = -1 , \quad \frac{R_{\ce{CH3Br}} }{R_{\ce{NaBr}} } = \nu_{\ce{CH3Br}} = -1 , \quad \frac{R_{\ce{CH3OH}} }{R_{\ce{NaBr}} } = \nu_{\ce{CH3OH}} = 1 , \quad \frac{R_{\ce{NaBr}} }{R_{\ce{NaBr}} } = \nu_{\ce{NaBr}} = 1 \nonumber$
### Section C3
5. Fogler$$^{1}$$ has proposed the following gas phase kinetic schemata and kinetic rate equations involving the oxidation of ammonia and the reduction of nitric oxide:
I. $\ce{4NH3} + 5\ce{O2} \to 4\ce{NO} + 6\ce{H2O} , \quad R_{\ce{NH3}} = -k_{1} c_{\ce{NH3}} (c_{\ce{O2}} )^{2} \nonumber$
II. $\ce{2NH3} + {\frac{3}{2}} \ce{O2} \to \ce{N2} + 3\ce{H2O} , \quad R_{\ce{NH3}} = -k_{2} c_{\ce{NH3}} c_{\ce{O2}} \nonumber$
III. $\ce{2NO} + \ce{O2} \to 2\ce{NO2} , \quad R_{\ce{O2}} = -k_{3} (c_{\ce{NO}} )^{2} c_{\ce{O2}} \nonumber$
IV. $\ce{4NH3} + 6\ce{NO}\to 5\ce{N2} + 6\ce{H2O} , \quad R_{\ce{NO}} = -k_{4} c_{\ce{NO}} (c_{\ce{NH3}} )^{2 / 3} \nonumber$
Analyze a system containing these six molecular species and these three atomic species to determine the number of independent reactions. Are there any restrictions on the choice of pivot and non-pivot species? Do you have any ideas about how one would measure the rate of consumption of ammonia in Reaction I independently from the rate of consumption of ammonia in Reaction II in order to determine the rate constants $$k_{1}$$ and $$k_{2}$$?
6. When methane is partially combusted with oxygen, one finds the following molecular species: $$\ce{CH4}$$, $$\ce{O2}$$, $$\ce{CO}$$, $$\ce{CO2}$$, $$\ce{H2O}$$ and $$\ce{H2}$$. Determine the number of independent reactions and develop the stoichiometric schemata for this system.
1. Fogler, S.H. 1992, Elements of Chemical Reaction Engineering, Second Edition, Prentice Hall, Englewood Cliffs, New Jersey.
Analysis of the Rosenblatt process
ESAIM: Probability and Statistics, Tome 12 (2008), pp. 230-257.
We analyze the Rosenblatt process, a self-similar process with stationary increments which appears as a limit in the so-called Non-Central Limit Theorem (Dobrushin and Major (1979), Taqqu (1979)). This process is non-Gaussian and it lives in the second Wiener chaos. We give its representation as a Wiener-Itô multiple integral with respect to Brownian motion on a finite interval and we develop a stochastic calculus with respect to it by using both pathwise-type calculus and Malliavin calculus.
DOI: https://doi.org/10.1051/ps:2007037
Classification: 60G12, 60G15, 60H05, 60H07
Keywords: non-central limit theorem, Rosenblatt process, fractional Brownian motion, stochastic calculus via regularization, Malliavin calculus, Skorohod integral
@article{PS_2008__12__230_0,
author = {Tudor, Ciprian A.},
title = {Analysis of the {Rosenblatt} process},
journal = {ESAIM: Probability and Statistics},
pages = {230--257},
publisher = {EDP-Sciences},
volume = {12},
year = {2008},
doi = {10.1051/ps:2007037},
mrnumber = {2374640},
language = {en},
url = {http://www.numdam.org/articles/10.1051/ps:2007037/}
}
[1] P. Abry and V. Pipiras, Wavelet-based synthesis of the Rosenblatt process. Signal Process. 86 (2006) 2326-2339. | Zbl 1172.94348
[2] J.M.P. Albin, A note on the Rosenblatt distributions. Statist. Probab. Lett. 40 (1998) 83-91. | MR 1650532 | Zbl 0951.60019
[3] J.M.P. Albin, On extremal theory for self similar processes. Ann. Probab. 26 (1998) 743-793. | MR 1626515 | Zbl 0937.60033
[4] E. Alòs, O. Mazet and D. Nualart, Stochastic calculus with respect to Gaussian processes. Ann. Probab. 29 (2001) 766-801. | MR 1849177 | Zbl 1015.60047
[5] E. Alòs and D. Nualart, Stochastic integration with respect to the fractional Brownian motion. Stoch. Stoch. Rep. 75 (2003) 129-152. | MR 1978896 | Zbl 1028.60048
[6] T. Androshuk and Y. Mishura, Mixed Brownian-fractional Brownian model: absence of arbitrage and related topics. Stochastics An Int. J. Probability Stochastic Processes 78 (2006) 281-300. | MR 2270939 | Zbl 1115.60043
[7] F. Biagini, M. Campanino and S. Fuschini, Discrete approximation of stochastic integrals with respect to fractional Brownian motion of Hurst index $H>1/2$. Preprint University of Bologna (2005). | MR 2456329 | Zbl 1155.60014
[8] P. Cheridito, H. Kawaguchi and M. Maejima, Fractional Ornstein-Uhlenbeck processes. Electron. J. Probab. 8 (2003) 1-14. | EuDML 124790 | MR 1961165 | Zbl 1065.60033
[9] L. Decreusefond and A.S. Ustunel, Stochastic analysis of the fractional Brownian motion. Potential Anal. 10 (1998) 177-214. | MR 1677455 | Zbl 0924.60034
[10] G. Da Prato and J. Zabczyk, Stochastic equations in infinite dimensions. Cambridge University Press (1992). | MR 1207136 | Zbl 0761.60052
[11] R.L. Dobrushin and P. Major, Non-central limit theorems for non-linear functionals of Gaussian fields. Z. Wahrscheinlichkeitstheorie verw. Gebiete 50 (1979) 27-52. | MR 550122 | Zbl 0397.60034
[12] A. Drewitz, Mild solutions to stochastic evolution equations with fractional Brownian motion. Diploma thesis at TU Darmstadt (2005).
[13] P. Embrechts and M. Maejima, Selfsimilar processes. Princeton University Press, Princeton, New York (2002). | MR 1920153 | Zbl 1008.60003
[14] R. Fox and M.S. Taqqu, Multiple stochastic integrals with dependent integrators. J. Mult. Anal. 21 (1987) 105-127. | MR 877845 | Zbl 0649.60064
[15] V. Goodman and J. Kuelbs, Gaussian chaos and functional law of the iterated logarithm for Itô-Wiener integrals. Ann. I.H.P., Section B 29 (1993) 485-512. | Numdam | MR 1251138 | Zbl 0805.60027
[16] M. Gradinaru, I. Nourdin, F. Russo and P. Vallois, $m$-order integrals and generalized Itô's formula; the case of a fractional Brownian motion with any Hurst parameter. Preprint, to appear in Annales de l'Institut Henri Poincaré (2003). | Numdam | Zbl 1083.60045
[17] M. Gradinaru, I. Nourdin and S. Tindel, Ito's and Tanaka's type formulae for the stochastic heat equation. J. Funct. Anal. 228 (2005) 114-143. | MR 2170986 | Zbl 1084.60039
[18] P. Hall, W. Hardle, T. Kleinow and P. Schmidt, Semiparametric Bootstrap Approach to Hypothesis tests and Confidence intervals for the Hurst coefficient. Stat. Infer. Stoch. Process. 3 (2000) 263-276. | MR 1819399 | Zbl 0981.62035
[19] M. Jolis and M. Sanz, Integrator properties of the Skorohod integral. Stochastics and Stochastics Reports 41 (1992) 163-176. | MR 1275581 | Zbl 0765.60048
[20] O. Kallenberg, On an independence criterion for multiple Wiener integrals. Ann. Probab. 19 (1991) 483-485. | MR 1106271 | Zbl 0738.60052
[21] H. Kettani and J. Gubner, Estimation of the long-range dependence parameter of fractional Brownian motion, in Proc. 28th IEEE LCN03 (2003).
[22] I. Kruk, F. Russo and C.A. Tudor, Wiener integrals, Malliavin calculus and covariance measure structure. J. Funct. Anal. 249 (2007) 92-142. | MR 2338856 | Zbl 1126.60046
[23] N.N. Leonenko and V.V. Ahn, Rate of convergence to the Rosenblatt distribution for additive functionals of stochastic processes with long-range dependence. J. Appl. Math. Stoch. Anal. 14 (2001) 27-46. | MR 1825099 | Zbl 0983.60009
[24] N.N. Leonenko and W. Woyczynski, Scaling limits of solutions of the heat equation for singular Non-Gaussian data. J. Stat. Phys. 91 423-438. | MR 1632518 | Zbl 0926.60054
[25] M. Maejima and C.A. Tudor, Wiener integrals with respect to the Hermite process and a non central limit theorem. Stoch. Anal. Appl. 25 (2007) 1043-1056. | MR 2352951 | Zbl 1130.60061
[26] O. Mocioalca and F. Viens, Skorohod integration and stochastic calculus beyond the fractional Brownian scale. J. Funct. Anal. 222 (2004) 385-434. | MR 2132395 | Zbl 1068.60078
[27] I. Norros, E. Valkeila and J. Virtamo, An elementary approach to a Girsanov formula and other analytical results for fractional Brownian motion. Bernoulli 5 (1999) 571-587. | MR 1704556 | Zbl 0955.60034
[28] I. Nourdin, A simple theory for the study of SDEs driven by a fractional Brownian motion, in dimension one. Séminaire de Probabilités XLI (2006). To appear. | MR 2483731 | Zbl 1148.60034
[29] D. Nualart, Malliavin Calculus and Related Topics. Springer (1995). | MR 1344217 | Zbl 0837.60050
[30] D. Nualart and M. Zakai, Generalized mulptiple stochastic integrals and the representation of Wiener functionals. Stochastics 23 (1987) 311-330. | MR 959117 | Zbl 0641.60062
[31] V. Pipiras, Wavelet type expansion of the Rosenblatt process. J. Fourier Anal. Appl. 10 (2004) 599-634. | MR 2105535 | Zbl 1075.60032
[32] V. Pipiras and M. Taqqu, Convergence of weighted sums of random variables with long range dependence. Stoch. Process. Appl. 90 (2000) 157-174. | MR 1787130 | Zbl 1047.60018
[33] V. Pipiras and Murad Taqqu, Integration questions related to the fractional Brownian motion. Probab. Theor. Relat. Fields 118 (2001) 251-281. | MR 1790083 | Zbl 0970.60058
[34] N. Privault and C.A. Tudor, Skorohod and pathwise stochastic calculus with respect to an ${L}^{2}$-process. Rand. Oper. Stoch. Equ. 8 (2000) 201-204. | Zbl 0973.60062
[35] Z. Qian and T. Lyons, System control and rough paths. Clarendon Press, Oxford (2002). | MR 2036784 | Zbl 1029.93001
[36] M. Rosenblatt, Independence and dependence. Proc. 4th Berkeley Symposium on Math, Stat. II (1961) 431-443. | MR 133863 | Zbl 0105.11802
[37] F. Russo and P. Vallois, Forward backward and symmetric stochastic integration. Probab. Theor. Relat. Fields 97 (1993) 403-421. | MR 1245252 | Zbl 0792.60046
[38] F. Russo and P. Vallois, Stochastic calculus with respect to a finite quadratic variation process. Stoch. Stoch. Rep. 70 (2000) 1-40. | MR 1785063 | Zbl 0981.60053
[39] F. Russo and P. Vallois, Elements of stochastic calculus via regularization. Preprint, to appear in Séminaire de Probabilités (2006). | MR 2409004 | Zbl 1126.60045
[40] G. Samorodnitsky and M. Taqqu, Stable Non-Gaussian random variables. Chapman and Hall, London (1994). | MR 1280932
[41] A.S. Üstunel and M. Zakai, On independence and conditioning on Wiener space. Ann. Probab. 17 (1989) 1441-1453. | MR 1048936 | Zbl 0693.60046
[42] M. Taqqu, Weak convergence to the fractional Brownian motion and to the Rosenblatt process. Z. Wahrscheinlichkeitstheorie verw. Gebiete 31 (1975) 287-302. | MR 400329 | Zbl 0303.60033
[43] M. Taqqu, Convergence of integrated processes of arbitrary Hermite rank. Z. Wahrscheinlichkeitstheorie verw. Gebiete 50 (1979) 53-83. | MR 550123 | Zbl 0397.60028
[44] M. Taqqu, A bibliographical guide to selfsimilar processes and long-range dependence. Dependence in Probability and Statistics, Birkhauser, Boston (1986) 137-162. | MR 899989 | Zbl 0596.60054
[45] S. Tindel, C.A. Tudor and F. Viens, Stochastic evolution equations with fractional Brownian motion. Probab. Theor. Relat. Fields. 127 (2003) 186-204. | MR 2013981 | Zbl 1036.60056
[46] C.A. Tudor, Itô's formula for the infinite-dimensional fractional Brownian motion. J. Math. Kyoto University 45 (2005) 531-546. | MR 2206361 | Zbl 1121.60038
[47] W.B. Wu, Unit root testing for functionals of linear processes. Econ. Theory 22 (2005) 1-14. | MR 2212691 | Zbl 1083.62098
Cité par Sources : | 2022-05-22 15:11:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36537835001945496, "perplexity": 4501.527159231098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545548.56/warc/CC-MAIN-20220522125835-20220522155835-00743.warc.gz"} |
https://doc.cgal.org/5.0.2/Partition_2/classIsYMonotoneTraits__2.html | CGAL 5.0.2 - 2D Polygon Partitioning
IsYMonotoneTraits_2 Concept Reference
Definition
Requirements of a traits class to be used with the function is_y_monotone_2() that tests whether a sequence of 2D points defines a $$y$$-monotone polygon or not.
Has Models:
CGAL::Partition_traits_2<R>
CGAL::Kernel_traits_2
CGAL::Is_y_monotone_2<Traits>
CGAL::y_monotone_partition_2()
CGAL::y_monotone_partition_is_valid_2()
Types
The following two types are required:
typedef unspecified_type Point_2
The point type of the polygon vertices.
typedef unspecified_type Less_yx_2
Predicate object type that compares Point_2s lexicographically.
Creation
Only a copy constructor is required.
IsYMonotoneTraits_2 (IsYMonotoneTraits_2 tr)
Operations
The following function that creates an instance of the above predicate object type must exist:
Less_yx_2 less_yx_2_object ()
◆ Less_yx_2
Predicate object type that compares Point_2s lexicographically.
Must provide bool operator()(Point_2 p, Point_2 q) where true is returned iff $$p <_{yx} q$$. We have $$p <_{yx} q$$ iff $$p_y < q_y$$, or $$p_y = q_y$$ and $$p_x < q_x$$, where $$p_x$$ and $$p_y$$ denote the $$x$$ and $$y$$ coordinate of point $$p$$ respectively.
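The name Less_yx_2 encodes the comparison order: y-coordinates are compared first, with x-coordinates as the tie-breaker. A minimal sketch of that comparison, in Python for illustration (CGAL's actual predicate is a C++ functor over the traits' Point_2 type; the tuple representation here is an assumption):

```python
def less_yx(p, q):
    """Return True iff p precedes q lexicographically by (y, x).

    Points are modeled as (x, y) tuples, so we compare the y
    component first and fall back to x only on ties.
    """
    return (p[1], p[0]) < (q[1], q[0])

print(less_yx((5.0, 1.0), (0.0, 2.0)))  # True: smaller y wins despite larger x
print(less_yx((1.0, 3.0), (2.0, 3.0)))  # True: equal y, smaller x wins
print(less_yx((2.0, 3.0), (2.0, 3.0)))  # False: equal points
```

Reusing the language's built-in tuple ordering keeps the predicate consistent with a strict weak ordering, which is what sorting-based algorithms such as the y-monotonicity test require.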
http://mathhelpforum.com/math-topics/3368-stem-leaf-diagrams.html | # Math Help - Stem and Leaf Diagrams
1. ## Stem and Leaf Diagrams
i so do not get stem and leaf diagrams, what are they??? please someone explain! I NEED HELP FOR MY MATHS TEST TOMORROW
2. Originally Posted by kate
i so do not get stem and leaf diagrams, what are they??? please someone explain! I NEED HELP FOR MY MATHS TEST TOMORROW
This is the Wikipedia article on Stem and Leaf Diagrams.
RonL
3. Originally Posted by kate
i so do not get stem and leaf diagrams, what are they??? please someone explain! I NEED HELP FOR MY MATHS TEST TOMORROW
thanks sooooooooooooooooooooo much - i get it now!!
4. Originally Posted by kate
It's some stupid idea that statisticians are proud of.
All it is is a way of arranging numbers.
5. kate no double posting please
-=USER WARNED=-
6. Originally Posted by ThePerfectHacker
It's some stupid idea that statisticians are proud of.
All it is is a way of arranging numbers.
I doubt that statisticians in general hold this form of data representation
in high regard except in restricted sub-domains. To me it does not convey
information that is not conveyed more intuitively by other forms of plot.
(PS I merged these threads, despite PH having deleted one of them, as PH's comments on the utility of the Stem and Leaf Plot seemed worth preserving)
RonL
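To make the idea concrete for revision purposes: a stem-and-leaf plot splits each value into a "stem" (the leading digits) and a "leaf" (the final digit), so the sorted raw data doubles as a rough histogram. A minimal sketch in Python, assuming two-digit integers with the tens digit as the stem:

```python
def stem_and_leaf(data):
    """Group two-digit integers into {stem: sorted list of leaves}."""
    plot = {}
    for n in sorted(data):
        plot.setdefault(n // 10, []).append(n % 10)
    return plot

marks = [12, 17, 23, 25, 25, 31, 38, 39, 40]
for stem, leaves in stem_and_leaf(marks).items():
    print(stem, "|", " ".join(str(leaf) for leaf in leaves))
# 1 | 2 7
# 2 | 3 5 5
# 3 | 1 8 9
# 4 | 0
```

Reading across a row recovers the original values (e.g. the "2" row is 23, 25, 25), which is exactly the property that distinguishes this plot from a plain histogram.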
http://mathhelpforum.com/advanced-algebra/192033-adjoint-linear-operator.html | # Math Help - Adjoint of linear Operator
1. ## Adjoint of linear Operator
Let V be an inner product space and y,z be in V. Let T be a linear operator on V defined by T(x) = <x,y>z for all x in V. Find the explicit expression for T*.
I know that <T(x),w> = <x,T*(w)>.
Hence, <<x,y>z,w> = <x,T*(w)>. But <x,y> is just a scalar.
Then, <x,y><z,w> = <x,T*(w)>.
But how do i proceed on from here to find T*?
Thank you.
2. ## Re: Adjoint of linear Operator
Is your vector space over the real or complex numbers? That's important because for a space over the real numbers, <x, y> = <y, x>, while for a space over the complex numbers, <x, y> = <y, x>* where * is the complex conjugate.
If over the real numbers, then <x, y><z, w> = <x, <z, w>y> and, since that is to be equal to <x, T*(w)>, T*(w) = <z, w>y.
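For the real case this is easy to sanity-check numerically: T(x) = <x, y>z has matrix z yᵀ, its adjoint (the transpose) is y zᵀ, and applying that to w gives <z, w>y. A pure-Python sketch with arbitrary fixed vectors (chosen here only for illustration):

```python
def dot(u, v):
    """Standard real inner product."""
    return sum(a * b for a, b in zip(u, v))

def T(x, y, z):
    """T(x) = <x, y> z."""
    s = dot(x, y)
    return [s * zi for zi in z]

def T_star(w, y, z):
    """Claimed adjoint: T*(w) = <z, w> y."""
    s = dot(z, w)
    return [s * yi for yi in y]

y, z = [1.0, 2.0, -1.0], [3.0, 0.0, 4.0]
x, w = [2.0, 1.0, 5.0], [-1.0, 2.0, 0.5]

# Defining property of the adjoint on a real inner product space:
assert abs(dot(T(x, y, z), w) - dot(x, T_star(w, y, z))) < 1e-12
```

Both sides reduce to the same scalar product <x, y><z, w>, which is why the check holds for any choice of vectors.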
https://www.physicsforums.com/threads/velocity-of-an-inertial-frame-versus-velocity-of-a-particle.957460/ | # Velocity of an inertial frame versus velocity of a particle
## Main Question or Discussion Point
Hello all,
This post is in reference to a previous homework post, found here:
That thread is closed to further replies. Probably because it's nearly 10 years old.
That thread is about deriving relativistic force from the derivative of relativistic momentum, specifically when the force is parallel to the velocity. The OP must prove that F = (d/dt)(γmv) = γ³ma.
I followed that thread fine and I see where the answer came from. However, when I first attempted the problem (before I found my way here), I was assuming that the velocity of the particle was not necessarily the same as the velocity of the inertial frame buried inside of the Lorentz factor γ. Feedback from the community suggested that they are necessarily the same and, indeed, the proof would be impossible if they were not the same.
Why are they the same?
I thought perhaps they are the same because we assume that the velocity vector is the reference frame. In other words, we are traveling on the back of the particle moving at velocity v. Is there a simpler way to think about the problem?
Orodruin
Staff Emeritus
Homework Helper
Gold Member
The gamma factor is not associated to any inertial frame. It is an integral part of the relativistic definition of momentum.
Mister T
Gold Member
I followed that thread fine and I see where the answer came from. However, when I first attempted the problem (before I found my way here), I was assuming that the velocity of the particle was not necessarily the same as the velocity of the inertial frame buried inside of the lorentz factor γ .
It's the same $v$ in both relations: $p=\gamma mv$ and $\gamma=\frac{1}{\sqrt{1-(v/c)^2}}$
What is it that makes you think they are defined differently?
It's the same $v$ in both relations: $p=\gamma mv$ and $\gamma=\frac{1}{\sqrt{1-(v/c)^2}}$
What is it that makes you think they are defined differently?
Is there not a case where two coordinate systems S and S' move with respect to each other with their x-axes coinciding at speed v, who both watch an object move with speed u in S and u' in S'?
In that case. wouldn't you have γ(v) for the speed between the two systems S and S' and then γ(u) and γ(u') for what observers at rest in S and S' measure for the moving object?
Mister T
Gold Member
Is there not a case where two coordinate systems S and S' move with respect to each other with their x-axes coinciding at speed v, who both watch an object move with speed u in S and u' in S'?
Yes, but the case being discussed here is not one of them.
Sorcerer
Yes, but the case being discussed here is not one of them.
Oh, sorry, I thought you were talking generally. Honestly I've imbibed a bit too much of a certain magical brain cell killing Sorcerer's potion tonight.
Is there not a case where two coordinate systems S and S' move with respect to each other with their x-axes coinciding at speed v, who both watch an object move with speed u in S and u' in S'?
In that case. wouldn't you have γ(v) for the speed between the two systems S and S' and then γ(u) and γ(u') for what observers at rest in S and S' measure for the moving object?
Yes, but the case being discussed here is not one of them.
Well yes actually, that is exactly the type of example I was thinking. In my case above, I don't see why the velocities must be the same. Is my case special in some other way that I'm not seeing?
The gamma factor is not associated to any inertial frame. It is an integral part of the relativistic definition of momentum.
I don't really know what this means. I'm 5 weeks through my first course in non-classical physics.
Ibix
The point is that only one frame is being considered here, the one in which the particle is moving at $v$ and has momentum $\gamma mv$. You don't need any other frame, so it's not clear to me why you think there's a second one or how you intend to use it.
Well yes actually, that is exactly the type of example I was thinking. In my case above, I don't see why the velocities must be the same. Is my case special in some other way that I'm not seeing?
I don't really know what this means. I'm 5 weeks through my first course in non-classical physics.
Your case, IMHO, is like choosing to do a free body diagram of a box being dragged in which the coordinates you choose have axes that are not parallel to any force involved or any direction of motion.
What I mean is, why not just choose your moving reference frame so that γ(v) = γ(u)?
Of course if you have more objects involved you can’t do that, but for the derivation in question it seems simpler.
Mister T
Gold Member
Well yes actually, that is exactly the type of example I was thinking. In my case above, I don't see why the velocities must be the same. Is my case special in some other way that I'm not seeing?
One frame of reference is the rest frame of the laboratory. In this frame the speed of the particle is $v$.
Another frame of reference is the rest frame of the particle. This frame moves with (the same) speed $v$ relative to the laboratory.
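With everything referred to that single lab frame, the identity under discussion, d/dt(γmv) = γ³ma, can be sanity-checked numerically. A sketch in Python (natural units with c = 1; the linear velocity profile is just a convenient test case, not part of the original problem):

```python
import math

m, c = 1.0, 1.0          # mass and speed of light (natural units)

def v(t):
    return 0.5 * t       # test profile: constant coordinate acceleration a = 0.5

def p(t):
    """Relativistic momentum p = gamma * m * v in the lab frame."""
    gamma = 1.0 / math.sqrt(1.0 - (v(t) / c) ** 2)
    return gamma * m * v(t)

t, h, a = 1.0, 1e-6, 0.5
dpdt = (p(t + h) - p(t - h)) / (2.0 * h)   # central finite difference
gamma = 1.0 / math.sqrt(1.0 - (v(t) / c) ** 2)
print(abs(dpdt - gamma ** 3 * m * a) < 1e-6)  # True
```

The same gamma evaluated at v(t) appears in both the momentum and the γ³ factor, which is the point being made: there is only one velocity (and one frame) in the calculation.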
https://www.aimsciences.org/article/doi/10.3934/proc.2005.2005.624 | Article Contents
# Geometric approach to a singular boundary value problem with turning points
• The singularly perturbed boundary value problem $\epsilon \ddot x=f(x,t;\epsilon)\dot x$, $x(-1;\epsilon)=A$, $x(0;\epsilon)=B$ is studied as an application of the geometric singular perturbation theory for turning points. The key ingredients are: the delay of stability loss that characterizes all possible singular orbits of the boundary value problem, and the exchange lemmas for problems with turning points as the geometric tool to show the existence of solutions shadowing singular orbits.
Mathematics Subject Classification: Primary: 34E20; Secondary: 34B16.
Open Access Under a Creative Commons license
https://physics.stackexchange.com/questions/109936/functional-field-integral-in-condensed-matter-field-theory-altland | # Functional field integral in condensed matter field theory (Altland)
This is the action for the 1+1 dimensional interacting electron system:
$$S_{cl}[\theta , \phi]= \frac{1}{2\pi} \int dxd\tau \left(g^{-1}v(\partial_x \theta)^2 + gv(\partial_x \phi)^2 + 2i\partial_{\tau} \theta \partial_x \phi \right).$$
I want to integrate out the Gaussian field $\phi$. This book says that it is just an "elementary" Gaussian integration. So, I tried some modifications to this action:
$$S_{cl}[\theta , \phi]= \frac{1}{2\pi} \int dxd\tau \left(g^{-1}v(\partial_x \theta)^2 + (\sqrt{gv}\partial_x \phi + \frac{i}{\sqrt{gv}}\partial_{\tau} \theta)^2 + \frac{1}{gv}(\partial_{\tau} \theta)^2 \right).$$
For this action, partition function is given by
$$\int D\theta D\phi \exp[-S_{cl}].$$
Maybe the second term in the action is related to a Gaussian integral, but I don't know how to calculate it.
How can I calculate this?
• Which page in Altland? – Qmechanic Apr 25 '14 at 9:29
• part (b) in Page 191 – user45234 Apr 25 '14 at 10:46
OP has already completed the square in the second term
$$\tag{1} (\sqrt{gv}\partial_x \phi + \frac{i}{\sqrt{gv}}\partial_{\tau} \theta)^2 ~=~ gv (\partial_x \phi + \frac{i}{gv}\partial_{\tau} \theta)^2 ~=~ gv \left(\partial_x( \phi + \frac{i}{gv}\partial_{\tau} \Theta)\right)^2$$
of the action. Here we defined the antiderivative (aka. primitive or indefinite integral)
$$\tag{2} \Theta(x,t)~:=~ \int_0^x \!dx^{\prime}~\theta(x^{\prime},t).$$
So the Gaussian integration over $\phi$ removes the second term in the classical action, even for an imaginary shift (1).
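For orientation, the single-mode analogue of this imaginary-shift Gaussian integration is the standard fact (for $a>0$ and real $b$, obtained by shifting the integration contour):

```latex
\int_{-\infty}^{\infty} d\phi\; e^{-a(\phi + ib)^2}
~=~ \int_{-\infty}^{\infty} d\phi\; e^{-a\phi^2}
~=~ \sqrt{\frac{\pi}{a}}, \qquad a>0,\quad b\in\mathbb{R}.
```

The field-theory statement is this identity applied mode by mode, with the product of the $\sqrt{\pi/a}$ prefactors assembling into the determinant factor quoted below.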
Quantum mechanically, there will also appear a multiplicative Van Vleck-Morette determinantal factor
$$\tag{3} ({\rm Det}^{\prime}(-\Delta))^{-\frac{1}{2}}$$
in front of the remaining path integral over $\theta$. Here $\Delta:=\partial_x^2$. The prime in eq. (3) indicates that a zeromode should be excluded.
References:
1. A. Altland and B. Simons, Condensed Matter Field Theory, 2010, p. 180-191.
http://pld.fk.ui.ac.id/e9587lri/right-angles-and-complementary-angle-pairs-have-measure-d89dc3 | It was so explanatory and straight forward, the explanation is very good and very interesting, however I need help with the trigonometric ratios for supplementary angles, Your email address will not be published. The two angles do not need to be together or adjacent. Comparing the above set of ratios with the ratios mentioned earlier, it can be seen that; sin (90°- A) = cos A ; cos (90°- A) = sin A, tan (90°- A) = cot A; cot (90°- A) = tan A, sec (90°- A) = csc A; csc (90°- A) = sec A. Practice Problems. A right angle does not need to have complement angle as by itself the angle measures 90 degree. The answer is : A,C,D,E Assume a triangle ∆ABC, which is right-angled at B. Test. all right angles are equal in measure). Vertical angles are not adjacent. 65 degrees 90 - 25 = 65. Since is complementary to , any angle complementary to must have measure equal to that of , which is . Instead, you have to refer to the angle in question with a number or with three letters. One angle measures 37 degrees. These relations are valid for all the values of A that lies between 0° and 90°. Complementary angles are two angles that add up to 90°, or a right angle; two supplementary angles add up to 180°, or a straight angle. The two angles, say ∠X and ∠Y are complementary if. Two angles whose measures have a sum of 90 degrees and can be adjacent. (They share a vertex and side, but do not overlap. Complementary angles Two angles whose measures have a sum of 90°. By Mark Ryan . Complementary angles are two angles that sum to 90 ° degrees. When two angles add to 90°, we say they "Complement" each other. Acute, Right, Obtuse, Straight, Reflex & Complete angle. Supplementary Angles are two angles the sum of whose measures is 180º. Angle a measures 171 degrees. They can be adjacent angles but don’t have to be. Angles that are a linear pair have measures that add up to 180°. 
Suppose if one angle is x then the other angle will be 90 o – x. The measure of an angle's supplement is 44 degrees less than the measure of the angle. Complementary Angles Formula: The difference between the given acute angle from 90 degree is the complementary angle. Report an Error Geometry Chapter 1 Vocabulary A B Adjacent angles Two angles in the same plane with a common vertex and a common side, but no common interior points. Because x and 41° are complementary angles, x + 41° = 90° Subtract 41 ° from each side. If two congruent angles add to 180º, each angle contains 90º, forming right angles. Find the measure of angle $$CBW$$. Complementary angles: Two angles that add up to 90° (or a right angle) are complementary. Two angles form a linear pair. This is a good app for children education for improvement of children knowledge and for making maths as easy subject for children, I enjoyed this lesson. As ∠C = 90°- A (A is used for convenience instead of ∠A ), and the side opposite to 90° – A is AB and the side adjacent to the angle 90°- A is BC as shown in the figure given above. Complementary Angles. 100. m∠5 + 125 = 180 Definition of supplementary angles m∠5 = 55 Subtract 125 from each side. Supplementary angles Two angles whose measures have a … Complementary angles can be placed so they form perpendicular lines, or they may be two separate angles. Angle Pairs Practice DRAFT. how to find the measure of an angle geometry Home; Events; Register Now; About What's the measurement of the second angle? Each one of these angles is called the Complementary of the other. zimmurbunz. Complementary. Examples of acute angles Angles can also be classified as obtuse , right , and straight , depending on the measure of their angles. The two angles, say ∠X and ∠Y are complementary if, ∠X + ∠Y = 90° In such a condition ∠X is known as the complement of ∠Y and vice-versa. Question 2 : The measure of an angle is 62°. 
The relationship between the acute angle and the lengths of sides of a right-angle triangle is expressed by trigonometric ratios. It is not possible for a triangle to have more than one vertex with internal angle greater than or equal to 90°, or it would no longer be a triangle. 200. Linear pairs do not have two angles that form a sunrise. ), ∠ABC and ∠1 are NOT adjacent angles. What is the measure of angle b? Complementary angles have angle measures that add up to 90°. Auburn University. 500. One of the complementary angles is said to be the complement of the other. When two angles add to 90°, we say they "Complement" each other. Pairs of Angles To have a better insight on trigonometric ratios of complementary angles consider the following example. Example: From the above example ∠POR = 50 o, ∠ROQ = 40 o are complementary angles. Right Angle. The first angle measures 35°. Complementary Angles Formula: The difference between the given acute angle from 90 degree is the complementary angle. Complementary Angles. There are some special relationships between "pairs" of angles. What is the measure of a complementary angle? 200. 9th - 11th grade. Angles that have the same measure (i.e. 100. Supplementary angles: Two angles that add up to 180° (or a straight angle) are supplementary. So the measure of angle DBA plus the measure of angle ABC is equal to 90 degrees. Gravity. If the outer rays of two adjacent angles form a right angle, then the angles are complementary. (∠ABC overlaps ∠1.). ... Find missing angle measures for supplementary or complementary angles. Vertical angles are always equal in measure. Complementary Angles and Supplementary angles - relationships of various types of paired angles, Word Problems on Complementary and Supplementary Angles solved using Algebra, Create a system of linear equations to find the measure of an angle knowing information about its complement and supplement, in video lessons with examples and step-by-step solutions. 
This is a right angle. In other words, if two angles add up to form a right angle, then these angles are referred to as complementary angles. Lines and Angles Class 7 Maths MCQs Pdf. An acute angle is an angle that measures greater than 0° but less than 90° (the measure of a right angle). You can write "the measure of angle 1" as m ∠1. Three angles or more angles whose sum is equal to 90 degrees cannot also be called complementary angles. Topical Outline | Geometry Outline | MathBitsNotebook.com | MathBits' Teacher Resources For the given right angle triangle, the trigonometric ratios of ∠A is given as follows: The trigonometric ratio of the complement of ∠A. ANSWER: m∠1 = 55 Linear pairs can be adjacent when there are 2 lines or 1 line and a ray. the same magnitude) are said to be equal or congruent.An angle is defined by its measure and is not dependent upon the lengths of the sides of the angle (e.g. Adjacent angles. No matter where you draw the line, you have created two complementary angles. This indicates how strong in your memory this concept is. In a right angled triangle, the two non-right angles are complementary, because in a triangle the three angles add to 180°, and 90° has already been taken by the right angle. They just need to add up to 90 degrees. The sum of the angles in a triangle add to 180º. 1. If A, B and C are the interior angles of a right-angle triangle, right-angled at B then find the value of A, given that tan 2A = cot(A – 30°) and 2A is an acute angle. Let's Review Complementary angles form a right angle (L shape) and have a sum of 90 degrees. Two angles are called complementary angles if the sum of their degree measurements equals 90 degrees (right angle). Vertical angles, such as ∠1 and ∠2, form linear pairs with the same angle, ∠4, giving. Find the value of $$b$$. One angle measures 127 degrees. 100. Spell. B C E. Use the diagram to identify the special angle pairs. Supplementary. PLAY. 
When the sum of two angles is 90°, then the angles are known as complementary angles. Two angles, either a pair of right angles or one acute angle and one obtuse angle, are called supplementary angles if their measures add to $$180^{\circ}$$. SURVEY . Complementary Angles. ; Two angles that share terminal sides, but differ in size by an integer multiple of a turn, are called coterminal angles. Terms of Use Contact Person: Donna Roberts, Adjacent Angles are two angles that share a common vertex, a common side, and no common interior points. % Progress . Complementary angles always have positive measures. If an angle measures 50 °, then the complement of the angle measures … The acute angles of a right triangle are complementary. Please read the ". There are three types of angles: acute, straight, and right. There's also a word for two angles whose sum add up to 90 degrees, and that is complementary. 145° C. 125° D. 55° (Show 128°, 52° for supplementary angle) What are Complementary Angles. If the sum of two angles are 90 o then the angles are said to be Complementary angles. ... SURVEY . The sum of the angles in a triangle add to 180º. • Have students measure angles around the school building and findexamples ofvertical angles, adjacent angles, supplementary angles, and complementary angles. They can be adjacent angles but don’t have to be. Two angles are complementary if the sum of their measures is 90. The measure of an angle's supplement is 44 degrees less than the measure of the angle. In other words, if two angles add up to form a right angle they known as complementary angles. Complementary angles do not have to be congruent to each other. 200. When the sum of two angles is 90°, then the angles are known as complementary angles. In other words, if two angles add up to form a right angle, then these angles are referred to as complementary angles. 
In a right angle triangle, as the measure of the right angle is fixed, the remaining two angles always form the complementary as the sum of angles in a triangle is equal to 180°. 130 o. Contact Person: Donna Roberts, Vertical angles are located across from one another in the corners of the ". A right angle does not need to have complement angle as by itself the angle measures 90 degree. Required fields are marked *, Trigonometric Ratios Of Complementary Angles. Answer : Let x be the measure of the required complementary angle. column B $$X$$ is on line $$WY$$. All of the pairs of angles pictured below are complementary. No calculation is actually required for this problem. For example, 30° and 60° are complementary to each other as their sum is equal to 90°. In the diagram below, the angles $$\alpha$$ and $$\beta$$ are supplementary angles while the pair $$\gamma$$ and $$\theta$$ are complementary angles. Complementary. Terms of Use In a right angled triangle, the two non-right angles are complementary, because in a triangle the three angles add to 180°, and 90° has already been taken by the right angle. Strategies for Differentiation Direct students to make an angle vocabulary glossary in which they include a drawing to represent the definition of each angle and angle pair. Tags: Question 8 . After subtracting 90º for the right angle, there are 90º left for the two acute angles, making them complementary. Here we say that the two angles complement each other. Learn. What is the complementary angle of 25 degrees? MEMORY METER. When the sum of the measures of two angles is 90°, the angles are called (а) supplementary angles (b) complementary angles (c) adjacent angles (d) vertically opposite angles Answer/Explanation The acute angles of a right triangle are complementary. And that is because their measures add up to 90 degrees. Complementary angles: Two angles that add up to 90° (or a right angle) are complementary. STUDY. 
Two angles are supplementary if the sum of their measures is 180. Straight lines are also straight angles that measure 180°. Your email address will not be published. Flashcards. angles. ... Angles a and b are Linear angles. The measures of complementary angles add up to _____. Two angles form a linear pair. They just need to add up to 90 degrees. 100. Simply draw a straight line beginning at the right angle vertex and through the triangle. Two angles are called complementary angles if the sum of their degree measurements equals 90 degrees (right angle). CBSE Previous Year Question Papers Class 10, CBSE Previous Year Question Papers Class 12, NCERT Solutions Class 11 Business Studies, NCERT Solutions Class 12 Business Studies, NCERT Solutions Class 12 Accountancy Part 1, NCERT Solutions Class 12 Accountancy Part 2, NCERT Solutions For Class 6 Social Science, NCERT Solutions for Class 7 Social Science, NCERT Solutions for Class 8 Social Science, NCERT Solutions For Class 9 Social Science, NCERT Solutions For Class 9 Maths Chapter 1, NCERT Solutions For Class 9 Maths Chapter 2, NCERT Solutions For Class 9 Maths Chapter 3, NCERT Solutions For Class 9 Maths Chapter 4, NCERT Solutions For Class 9 Maths Chapter 5, NCERT Solutions For Class 9 Maths Chapter 6, NCERT Solutions For Class 9 Maths Chapter 7, NCERT Solutions For Class 9 Maths Chapter 8, NCERT Solutions For Class 9 Maths Chapter 9, NCERT Solutions For Class 9 Maths Chapter 10, NCERT Solutions For Class 9 Maths Chapter 11, NCERT Solutions For Class 9 Maths Chapter 12, NCERT Solutions For Class 9 Maths Chapter 13, NCERT Solutions For Class 9 Maths Chapter 14, NCERT Solutions For Class 9 Maths Chapter 15, NCERT Solutions for Class 9 Science Chapter 1, NCERT Solutions for Class 9 Science Chapter 2, NCERT Solutions for Class 9 Science Chapter 3, NCERT Solutions for Class 9 Science Chapter 4, NCERT Solutions for Class 9 Science Chapter 5, NCERT Solutions for Class 9 Science Chapter 6, NCERT Solutions for Class 9 
Science Chapter 7, NCERT Solutions for Class 9 Science Chapter 8, NCERT Solutions for Class 9 Science Chapter 9, NCERT Solutions for Class 9 Science Chapter 10, NCERT Solutions for Class 9 Science Chapter 12, NCERT Solutions for Class 9 Science Chapter 11, NCERT Solutions for Class 9 Science Chapter 13, NCERT Solutions for Class 9 Science Chapter 14, NCERT Solutions for Class 9 Science Chapter 15, NCERT Solutions for Class 10 Social Science, NCERT Solutions for Class 10 Maths Chapter 1, NCERT Solutions for Class 10 Maths Chapter 2, NCERT Solutions for Class 10 Maths Chapter 3, NCERT Solutions for Class 10 Maths Chapter 4, NCERT Solutions for Class 10 Maths Chapter 5, NCERT Solutions for Class 10 Maths Chapter 6, NCERT Solutions for Class 10 Maths Chapter 7, NCERT Solutions for Class 10 Maths Chapter 8, NCERT Solutions for Class 10 Maths Chapter 9, NCERT Solutions for Class 10 Maths Chapter 10, NCERT Solutions for Class 10 Maths Chapter 11, NCERT Solutions for Class 10 Maths Chapter 12, NCERT Solutions for Class 10 Maths Chapter 13, NCERT Solutions for Class 10 Maths Chapter 14, NCERT Solutions for Class 10 Maths Chapter 15, NCERT Solutions for Class 10 Science Chapter 1, NCERT Solutions for Class 10 Science Chapter 2, NCERT Solutions for Class 10 Science Chapter 3, NCERT Solutions for Class 10 Science Chapter 4, NCERT Solutions for Class 10 Science Chapter 5, NCERT Solutions for Class 10 Science Chapter 6, NCERT Solutions for Class 10 Science Chapter 7, NCERT Solutions for Class 10 Science Chapter 8, NCERT Solutions for Class 10 Science Chapter 9, NCERT Solutions for Class 10 Science Chapter 10, NCERT Solutions for Class 10 Science Chapter 11, NCERT Solutions for Class 10 Science Chapter 12, NCERT Solutions for Class 10 Science Chapter 13, NCERT Solutions for Class 10 Science Chapter 14, NCERT Solutions for Class 10 Science Chapter 15, NCERT Solutions for Class 10 Science Chapter 16, CBSE Previous Year Question Papers Class 12 Maths, CBSE Previous Year Question 
Papers Class 10 Maths, ICSE Previous Year Question Papers Class 10, ISC Previous Year Question Papers Class 12 Maths, Sin of an angle = Cos of its complementary angle, Cos of an angle = Sin of its complementary angle, Tan of an angle = Cot of its complementary angle. – x, form linear pairs can be placed so they form a linear pair the. Online complementary angle calculator measure angles around the school building and findexamples ofvertical angles, is. ( X\ ) is on line \ ( WY\ ) this concept is x! 180º, each angle in the pair is said to be up to 90 degrees you... Acute angle from the sum of the other B and C is a straight )... Use '' for educators so the measure of the same angle, ∠4, giving law! Supplementary right angles and complementary angle pairs have measure the sum of 90°, ∠ABC and ∠1 are not vertical angles ( they share a vertex side! O then the angles are supplementary of trigonometry ratio of complementary angles is said to.. Two acute angles, adjacent angles, and straight, depending on the measure of angle DBA plus measure... Online complementary angle ° degrees Let 's Review complementary angles 's supplement is 44 degrees than! By making use of trigonometry ratio of complementary angles so we can be! Are marked *, trigonometric ratios of complementary angles of a right-angle triangle is expressed by trigonometric ratios of angles! Of two acute angles with this simple online complementary angle 8 } )... Complementary of the other angle will be 90 o then the angles are angles... ° from each side also say that the two angles do not have angles... Measures right angles and complementary angle pairs have measure a sum of two angles whose sum add up to form a linear pair, the of... ’ t have to be congruent to each other angles have angle measures 90 is... Measure 125 and ∠5 are corresponding angles, adjacent angles, say and... To add up to 90 degrees, ∠ABC and ∠1 are not vertical angles ( they a. 
A number or with three letters, which is right-angled at B say that the can... To each other as their sum is equal to 90 degrees which is at! What angle pair also a word for two angles that measure 180° find the angles... Findexamples ofvertical angles, say ∠X and ∠Y are complementary if know more about trigonometric ratios of angles... Separate angles 8 } \ ) two angles add up to 180° they known as complementary angles straight... What angle pair if one angle is x then the other What are complementary angles the acute a... Angle 's supplement is 44 degrees less than the measure of an angle 's supplement is 44 degrees less 90... Lines or 1 line and have a sum of the angle in with... Below are complementary 55 the missing angle measures 50 °, then these angles are referred to complementary! That create a right angle, ∠4, giving ) two angles do have... Differ in size by an integer multiple of a right angle, then the angles are 90 then! To be together or adjacent and ∠3 are not adjacent angles, supplementary angles, are complementary., then these angles are referred to as complementary angles Formula: the measure of angle ABC equal... ∠X and ∠Y are complementary angles, and complementary angles are congruent 2! Think of them as two puzzle pieces that form one 90 degree is the complementary angles x = 49°,! Of, which is when they are put together example ∠POR = 50 o, ∠ROQ 40. Vertex and side, but do not need to have complement angle as by itself the in. Right-Angled at B and is not considered fair use '' for educators right ). Same angle, then the complement of the other angle will be 90 o – x a condition ∠X known. By an integer multiple of a right angle they known as complementary form! Angle as by itself the angle with measure 125 and ∠5 are corresponding,. You draw the line through points a, C, D, E acute, right, and that complementary. Your memory this concept is also a word for two angles add to 180º pair ( line. 
Angle when they are a linear pair ) use '' for educators 103 degrees together. Have to be the measure of an angle measures for supplementary or complementary angles: two angles whose measures a! ’ t have to refer to the angle with measure 125 and ∠5 are corresponding angles such... Right triangle are complementary angles: two angles do not need to add up to.. Identify angle pairs as being complementary, supplementary angles m∠5 = 55 the right angles and complementary angle pairs have measure angle measures 90 degree ’ have... Abc is equal to that of, so and must comprise a complementary pair angles. Not adjacent angles but don ’ t have to be the complement of the angle... Depending on the measure of the angle ∠X and ∠Y are complementary are twoangles whose have... Sides of a right-angle triangle is expressed by trigonometric ratios of complementary angles if the sum of angles! The given acute angle and the lengths of sides of a triangle add to 180º, angle! Have equal measures share terminal sides, but differ in size by an integer multiple of a angle. Are 2 lines or 1 line and have a sum of 90 degrees ( right angle are! Pair is said to be congruent to each other their measure equals 90° 8 } \ two! Angles consider the following example said to be through points a, B and C is a straight )... Pairs with the same angle, then the angles are called coterminal angles as,. 90° Subtract 41 ° from each side the law of sines makes it possible to find unknown and! Be together or adjacent 1 line and a ray have measures that add to... Or more angles whose measures is 90 x = 49° so, angles! Marked *, trigonometric ratios of complementary angles add to 180º 60° are complementary angles up... 90° – ∠A than 90 degrees is 62° are called complementary angles and vice-versa angle... Think of them as two puzzle pieces that form one 90 degree – ∠A to each other be angles. Have two angles that add up to _____ ), or they may be two separate.! 
Using the law of sines makes it possible to find unknown right angles and complementary angle pairs have measure and sides of right. Of two adjacent angles but don ’ t have to be the complement of the angle ( )!, such as ∠1 and ∠5 are supplementary if the sum of their measure equals 90° differ. Create a right angle: Reflex & Complete angle input data into any field to calculate the rest such! And through the triangle angles add up to 90 degrees ( right angle ) are supplementary a condition ∠X known. Find the complementary angle calculator of their degree measurements equals 90 degrees angles are supplementary at B as,! The other angles measuring less than 90 degrees fair use '' for educators vertex and side, but not! Concept is m∠5 + 125 = 180 Definition of supplementary angles m∠5 = 55 the angle. Of whose measures have a sum of two angles that form one 90 is! Which is is expressed by trigonometric ratios of complementary angles add to 180º, each angle 90º. Less than the measure of the other ABC are complementary if the right angles and complementary angle pairs have measure... Set ( 9 ) which angle pairs are supplementary that their sum equal... Determine the measure of angle DBA plus the right angles and complementary angle pairs have measure of the same angle, then angles! Complementary angle ∠5 are right angles and complementary angle pairs have measure column B \ ( X\ ) is on line \ ( {... 90 degrees, depending on the measure of angle DBA plus the measure of the angle equals. Measures of complementary angles is called the complementary angle lines, or congruent angles, ∠X. That form a right angle they known as the complement of the other the law of sines it! X\ ) is on line \ ( \PageIndex { 9 } \ ) \ ( )! 90º left for the right angle vertex and side, but differ in size by an integer multiple of right. From each side you can Subtract the given acute angle from the sum of measures... With the same angle, then these angles are twoangles whose measures a. 
To identify angle pairs are supplementary must comprise a complementary pair of.. More angles whose measures have a sum of their degree measurements equals 90 degrees ( angle... Linear pair have measures that add up to 90 degrees required fields are marked *, trigonometric of... Find unknown angles and sides of right angles and complementary angle pairs have measure turn, are called complementary add. 90° – ∠A as 90° – ∠A, D, E acute,,... Are a linear pair, the angles are known as complementary angles this concept.. Of 180 degrees by making use of trigonometry ratio of complementary angles of a triangle add to 180º each... Degree measurements equals 90 degrees trigonometry ratio of complementary angles Formula: the difference between the acute a! Of sides of a triangle add to 180º ( WY\ ) B C E. use diagram! To calculate the rest sum of 90° angles: two angles are two add. Placed so they form a linear pair ( straight line beginning at the right angle ∠4! Is not considered fair use '' for educators acute, right,,... Need to add up to 90° ( or a straight line and have a better insight on trigonometric of... Byju ’ S – the Learning App equal measures triangle is expressed by trigonometric ratios of complementary form... Have a sum of their measures is 180º 103 degrees as being complementary, supplementary angles, so and comprise... 
| 2021-07-28 08:47:08 |
http://aa.quae.nl/en/antwoorden/melkwegstelsels.html | $$\def\|{&}$$
## 2. Galaxies
Galaxies are clouds of stars with millions or billions of stars and often also with gas clouds and so-called dark matter. There are uncountably many other galaxies beyond that of our own. You can see the closest ones with your own eyes, without using a telescope. These are the Small Magellanic Cloud (in the constellation Tucana - the Toucan), the Large Magellanic Cloud (in the constellation Dorado - the Goldfish), and the Andromeda Nebula (in the constellation Andromeda). Just like our Galaxy is made up of stars and other things with lots of empty space in between, so the Universe is made up of galaxies with lots of empty space in between. Many of the Messier objects are galaxies, for example M 31 (the Andromeda Nebula). You can see a list of them (with links to pictures) at //www.seds.org/messier/objects.html#galaxy.
Galaxies exist in many shapes and sizes. Edwin Hubble devised the classification scheme that is still used today to indicate the type of the galaxy. In Hubble's scheme there are four major types of galaxies: elliptical galaxies (indicated with the type letter E), spiral galaxies (S), barred spiral galaxies (SB), and irregular galaxies (Irr).
Elliptical galaxies have the shape of a rugby ball and look from all sides like an ellipse or circle. They usually contain no gas and show little internal structure. The largest galaxies are elliptical galaxies. The type E is subdivided into subclasses that are indicated with a number from 0 through 7. The larger the number, the flatter the galaxy: E0 is the most nearly spherical, and E7 the flattest.
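The subclass number is conventionally derived from the apparent flattening of the galaxy's outline: n = 10 × (1 − b/a), where a and b are the apparent semi-major and semi-minor axes. A small sketch (the function name is just for illustration):

```python
def elliptical_subclass(a, b):
    """Hubble subclass for an elliptical galaxy with apparent semi-major
    axis a and semi-minor axis b. By convention n = 10*(1 - b/a);
    observed ellipticals run from E0 (round) to E7 (flattest)."""
    n = int(round(10 * (1 - b / a)))
    return f"E{min(n, 7)}"

print(elliptical_subclass(1.0, 1.0))  # E0: circular outline
print(elliptical_subclass(1.0, 0.4))  # E6: strongly flattened
```

Note that this only measures the apparent shape: a flattened galaxy seen face-on still looks round and is classified E0.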
Spiral galaxies look like a flat disk and have spiral arms that wind their way from the middle to the edge of the disk. Spiral galaxies contain lots of gas clouds as well as stars. The subclasses of type S are Sa, Sb, and Sc. Subtype a has tightly wound spiral arms and a relatively large core. Going towards subtype c, the arms get less tightly wound and the core gets smaller. If a galaxy is between, for example, Sb and Sc, then it is sometimes classified as Sbc.
Barred spiral galaxies are like spiral galaxies but have a sort of bar running through the center. Spiral arms often sprout from the ends of the bar. Barred spiral galaxies have the same subtypes a, b, and c as ordinary spiral galaxies.
Irregular galaxies do not fit into any of the other classes and have, as their name suggests, an irregular shape. Many small galaxies are irregular galaxies.
[92]
The difference between, for example, type SBa and type Sc is that SBa is a barred spiral and Sc an ordinary spiral, and that SBa has more tightly wound spiral arms and a more prominent core than Sc.
[14]
## 3. The Milky Way
The Milky Way is the galaxy that the Sun belongs to. The Milky Way looks like a band of weakly shining clouds in the sky, that are always in the same place between the stars, and thus move with the stars along the sky. You can see the Milky Way only from dark places far from the light of cities. From the Netherlands it is hard to see the Milky Way, because almost everywhere there are one or two cities nearby that produce light pollution. The best time to see the Milky Way is when it crosses the sky straight overhead at midnight. The time of the year at which this happens depends on your latitude, but it is roughly at the beginning of January and the beginning of July.
The Milky Way passes through the following well-known constellations (and through others as well): the Swan, Cassiopeia, the Centaur, the Scorpion, the Archer.
[598]
## 4. Milky Way and Galaxy
"The Galaxy" is the same as one of the meanings of "the Milky Way", namely the total of all gas clouds and stars, including the Sun, that are bound together by gravity and move through the Universe together, separate from other similar galaxies. That meaning is relatively young and dates to about 1930.
The other ― much older ― meaning of "Milky Way" is the weakly shining band in the sky. That meaning dates back more than 2000 years. At that time people didn't yet know that that band is made up of the light of countless stars that are too dim to see with the unaided eye, so it was logical to consider the Milky Way and the stars as two entirely separate things. That that weakly shining band is made up of countless stars became clear only after the invention of the telescope around 1608.
And that the Milky Way doesn't fill up the entire Universe but that there are more galaxies like our own, and that we therefore need a word for such a thing, has been known only since about 1930.
So the confusion that can arise between "Milky Way" as the name for only the weakly shining band in the sky, and "Milky Way" as the name for the entire galaxy is due to the meaning of "Milky Way" having been expanded in the course of time, when the nature of the Milky Way became clear.
[481]
## 5. The Central Line of the Milky Way
The central line of the Milky Way is the equator of the galactic coordinate system as defined by the IAU. This coordinate system is described, e.g., at //en.wikipedia.org/wiki/Galactic_coordinates#In_terms_of_equatorial_coordinates.
The equatorial coordinates of the galactic equator referred to the standard equinox of B1950.0 can be found from the following formulas (based on Chapter 12 of [Meeus]):
\begin{align} \tan y &= \frac{\tan(l - 123°)}{\sin(27.4°)} \\ α &= y + 12.25° \\ \sin δ &= \cos(27.4°) \cos(l - 123°) \end{align}
In these formulas, $$α$$ is the right ascension, $$δ$$ the declination, and $$l$$ the galactic longitude. These formulas are part of the definition of the galactic coordinate system, so they are exact. However, modern star atlases are usually referred to the standard equinox of J2000.0 instead of the one of B1950.0, so you'll have to correct for the precession between 1950 and 2000 if you wish to know the coordinates of the galactic equator relative to J2000.0.
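If you want to evaluate these formulas yourself, here is a small Python sketch (the function name is just for illustration; `atan2` is used to resolve the quadrant ambiguity that a plain arctangent of tan y would leave):

```python
from math import asin, atan2, cos, degrees, radians, sin

def galactic_equator_to_equatorial_b1950(l_deg):
    """Equatorial coordinates (B1950.0), in degrees, of the point on the
    galactic equator at galactic longitude l_deg, per the formulas above."""
    x = radians(l_deg - 123.0)
    # tan y = tan(l - 123°) / sin 27.4°, with atan2 picking the right quadrant
    y = atan2(sin(x), cos(x) * sin(radians(27.4)))
    alpha = (degrees(y) + 12.25) % 360.0                 # right ascension
    delta = degrees(asin(cos(radians(27.4)) * cos(x)))   # declination
    return alpha, delta

# At l = 123° the galactic equator passes closest to the celestial north pole,
# so δ there is 90° − 27.4° = 62.6°:
print(galactic_equator_to_equatorial_b1950(123.0))  # ≈ (12.25, 62.6)
```

As a cross-check, at l = 33° the result is δ ≈ 0° and α ≈ 282.25° (18h49m), the ascending node of the galactic plane on the B1950 equator.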
There appear to be quite a few web sites that discuss or at least mention this coordinate transformation. Type "galactic coordinates 192.25" into your favorite search engine to find a couple.
Note: before 1959 an older galactic coordinate system was used, that had a slightly different equator.
I've found several references (such as the Wikipedia article) that confirm that it was the IAU that defined this new galactic coordinate system, but I have not found the exact IAU publication that contains it. Many IAU publications are not freely available to the general public.
[238]
The Milky Way is what we can see of the galaxy that we are in, which is also called the Milky Way or Milky Way Galaxy. Our Galaxy has a diameter of about 100,000 lightyears. The center of the Milky Way lies in the direction of the constellation of the Archer (Sagittarius) at a distance of about 25,000 lightyears from the Sun, but is hidden from our sight by thick clouds of gas and dust. The Sun takes about 200 million years to orbit around the center of the Galaxy.
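From the figures quoted here you can estimate the Sun's orbital speed around the galactic center. A back-of-the-envelope sketch, assuming a circular orbit of radius 25,000 lightyears and a period of 200 million years:

```python
from math import pi

LY_KM = 9.4607e12    # kilometres in one lightyear
YEAR_S = 3.1557e7    # seconds in one year

radius_km = 25_000 * LY_KM   # orbital radius: 25,000 lightyears
period_s = 200e6 * YEAR_S    # orbital period: 200 million years
speed_km_s = 2 * pi * radius_km / period_s
print(f"{speed_km_s:.0f} km/s")   # ≈ 235 km/s
```

This is consistent with the measured value of roughly 220 to 240 km/s for the Sun's motion around the galactic center.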
[337]
## 6. The Plane of the Milky Way
The Milky Way has the shape of a fat disk of stars and clouds of gas, and we are in that disk. The "plane of the Milky Way" is a plane that divides the disk of the Milky Way into a lower part and an upper part (as if you cut a pancake so that you get two pancakes that are half as thick).
An equator is usually a flat boundary of zero thickness that divides an object into two parts that are equal in some natural sense. The disk of the Milky Way is slightly warped, so its vertical middle makes a surface that is slightly warped, too. For convenience we'll call that surface the "equator of the Milky Way", though it is not perfectly flat.
The Sun is currently a few dozen lightyears to the north of that plane, but it is not entirely clear how many, exactly, which means that not everyone agrees where the plane of the Milky Way runs near the Sun. That is not very surprising, because the plane of the Milky Way is not marked in space by strange stars or anything like that. Finding the plane of the Milky Way is similar to finding the center line of a long but narrow forest without sharp borders. Most people will agree about roughly where the center line is, but where it is exactly is not so clear.
Some people say that the Sun is now 20 lightyears north of the plane of the Milky Way, others say 45 lightyears, and //www.ingenta.com/isis/searching/Expand/ingenta?pub=infobike://klu/astr/2003/00000288/00000004/05123578 says 34.6 lightyears. The Sun wiggles around the plane of the Milky Way and passes through the plane about every 35 million years or so.
[519]
Gamma rays are emitted by extremely hot or otherwise extremely energetic materials, and also during certain radioactive processes where one type of matter is changed into another. So, if there is very little matter in some region, then you cannot expect many gamma rays from that region. The highest densities of material in the Milky Way are found in the disk of the Milky Way, so it is to be expected that more gamma rays come from the disk of the Milky Way than from regions outside of the disk. The density decreases gradually over a couple of hundreds of lightyears when you move vertically away from the vertical middle of the disk, so I expect the emission of gamma rays to also decrease gradually over such distances above or below the equator.
An equator is not a separate object, and can (almost) never be easily detected from natural evidence in the area around the equator. For example, you cannot tell in a picture of some region near the equator on Earth exactly where the equator runs. The area looks similar on both sides of the equator, and doesn't suddenly change as you cross the equator. Likewise, it is not easy to tell where the equator of the Milky Way is, exactly, and there is no sudden increase or decrease of gamma ray emission or reception when you cross that equator.
As far as we know, the Sun has always been inside the disk of the Milky Way, and has crossed the equator many times in the past, so any sudden increases or decreases in reception of gamma rays must have been related to specific gamma ray sources, rather than to the exact location of the Sun relative to the equator of the Milky Way.
[339]
## 7. The Milky Way Galaxy is a Spiral Galaxy
The Milky Way Galaxy is a spiral galaxy. That means that the flat disk of the Milky Way contains a number of arms that wind their way from the center to the edge. You can see a picture of the structure of the Milky Way (as discovered so far) at //www.ras.ucalgary.ca/CGPS/where/plan/.
If you start at the center and move outward past the Sun, then you encounter the following arms: the Norma Arm, the Scutum-Crux Arm, the Sagittarius Arm, the Local Arm, and the Perseus Arm. There is also a piece of arm called the Outer Arm, but that seems to be a part of the Norma Arm. These arms (except for the Local and Outer Arms) are named for the constellations that contain them (as seen from Earth). The Local Arm that the Sun happens to be in is not a full arm.
At the end of 2003, Australian astronomers from CSIRO reported (//www.atnf.csiro.au/news/press/spiralarm/) that they discovered a new piece of spiral arm at 15 - 20 kpc from the center. It is very well possible that this is yet another part of the Norma Arm, at yet greater distance from the center than the Outer Arm, which also seems part of the Norma Arm.
[110]
## 8. The Name of the Milky Way
The ancient Greek astronomers thought that the Milky Way looked like a river of milk running through the sky, and that's where both "Milky Way" and "Galaxy" come from. Galaxy comes from the Greek word for milk.
[13]
## 9. The Discovery of the Milky Way
A very long time ago there were no cities, and no city lights. The people of that time could see even very dim stars and the Milky Way every cloudless night. So, it is likely that people have been aware of the Milky Way for many thousands of years, and we can't tell who discovered the Milky Way first.
It took a very long time, however, before we discovered the true nature of the Milky Way. After the invention of the telescope in 1609 people saw that the Milky Way contained very many apparently dim stars that could not be seen individually without a telescope. In 1755, the philosopher Immanuel Kant proposed that some of the nebulas visible in the sky could be separate Milky Ways at large distance, instead of small nebulas within one giant Milky Way that filled all of the Universe. Only in 1923 did Edwin Hubble (for whom the Hubble Space Telescope is named) prove that some of those nebulas indeed lie far outside of our own Milky Way galaxy, and that our Milky Way does not fill the whole Universe. Nowadays we know that our Milky Way is but a typical spiral galaxy, in a backwater of the Universe.
[121]
## 10. The Discovery of the Center of the Milky Way Galaxy
The discovery of the center of the Milky Way was a lengthy process involving many people. Until the beginning of the 20th century we didn't know where the center of the Milky Way is. Many people then believed that the Milky Way filled the whole Universe; at least, nobody had proved otherwise. Yet, there were clues that the Milky Way wasn't the same in all directions. The observations of nebulae that were collected by J. Herschel (1864) and J.L.E. Dreyer (1888) showed that globular clusters have a great preference for the side of the sky where the constellation of the Archer is. In addition, the Milky Way does not appear equally bright in all directions. It is brightest near the constellation of the Archer, and least bright near the constellation of Perseus. This was shown especially by O. Boeddicker in Ireland (1892) and C. Easton in Dordrecht in the Netherlands (1893). In 1922, Freundlich and Von der Pahlen discovered that stars of spectral type B showed a curious velocity distribution that depended on their position along the Milky Way. In 1927, J.H. Oort of Leiden in the Netherlands showed that this curious distribution could be explained if you assumed that the Milky Way rotates around a center that was about 6000 pc away in the direction of the constellation of the Archer. (I summarized much of this history from [Pannekoek].) Nowadays, with many more measurements of various kinds, astronomers think that the center is more like 8000 pc away.
All of the gas and dust between us and the center of the Milky Way absorbs so much of the light that we can't see all the way to the center on ordinary photographs. (The total absorption is some 27 magnitudes!) Radio waves, infrared radiation, and gamma rays are much less troubled by such absorption, so we can record those kinds of radiation coming from the center, but only since about the 1950s (radio) and the 1980s (infrared, gamma). Around 1990 it became clear that the compact radio source Sagittarius A* (often abbreviated to Sgr A*) is in the exact center of the Milky Way and is associated with a black hole of 3.6 million solar masses. That radio source was discovered in February of 1974 by Bruce Balick and Robert Brown. The name Sagittarius A* was first used by Robert Brown in 1982 and has since then become the standard name for that object (according to [Goss]).
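For scale: 27 magnitudes of absorption is an enormous amount of dimming, since each magnitude corresponds to a factor of 10^0.4 ≈ 2.512 in flux. A quick check:

```python
# Each magnitude is a factor of 10**0.4 in flux, so 27 magnitudes of
# absorption dims visible light by a factor of about 10**10.8.
dimming = 10 ** (0.4 * 27)
print(f"one part in {dimming:.1e}")   # ≈ 6.3e+10
```

In other words, only about one part in sixty billion of the visible light from the galactic center reaches us.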
## 11. The Direction of the Center of the Milky Way
The center of the Milky Way (between the constellations of the Archer, the Scorpion, and the Snake Bearer) is about 5 degrees away in the sky from the path that the Sun takes (the ecliptic), so the Sun is always at least 5 degrees away from the center of the Milky Way, and the line through space from the Earth via the Sun to the center of the Milky Way always has a bend of at least 5 degrees in it.
The equatorial coordinates of the direction towards the center of the Milky Way Galaxy are (relative to the equinox of 2000.0): right ascension 17h46m, declination −28°56'. Things that don't move relative to the stars (such as the stars themselves) always rise at the same sidereal time and always set at the same sidereal time, as seen from a fixed location. For the center of the Milky Way, those sidereal times are listed in the following table, in the columns "Rise" and "Set".
| Latitude | Rise | Set | Straight Overhead |
|----------|------|-----|-------------------|
| 90° north | never | never | never |
| 80° north | never | never | never |
| 70° north | never | never | never |
| 60° north | 16:39 | 18:53 | 02:41, 23:01 |
| 50° north | 14:31 | 21:01 | 04:20, 21:22 |
| 40° north | 13:37 | 21:55 | 05:09, 20:33 |
| 30° north | 13:00 | 22:32 | 05:42, 20:00 |
| 20° north | 12:32 | 23:00 | 06:08, 19:34 |
| 10° north | 12:08 | 23:24 | 06:30, 19:12 |
| 0° (equator) | 11:46 | 23:46 | 06:51, 18:51 |
| 10° south | 11:24 | 00:08 | 07:12, 18:30 |
| 20° south | 11:00 | 00:32 | 07:34, 18:08 |
| 30° south | 10:32 | 01:00 | 08:00, 17:42 |
| 40° south | 09:55 | 01:37 | 08:33, 17:09 |
| 50° south | 09:01 | 02:31 | 09:22, 16:20 |
| 60° south | 06:53 | 04:39 | 11:01, 14:41 |
| 70° south | always up | always up | never |
| 80° south | always up | always up | never |
| 90° south | always up | always up | never |

("Never" in the Rise and Set columns means the center stays below the horizon at that latitude; "always up" means it never sets.)
The transformation of sidereal time to clock time is explained on the Time Page.
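The length of time the Galactic center spends above the horizon in the table follows from standard spherical astronomy: the hour angle $H$ at rising or setting satisfies cos H = −tan(latitude) × tan(declination). A small sketch (the formula is textbook material, not taken from this page):

```python
import math

# Declination of the Galactic center, -28°56', from the text above.
DEC = math.radians(-(28 + 56 / 60))

def hours_above_horizon(lat_deg):
    """Sidereal hours per day the Galactic center is above the horizon."""
    x = -math.tan(math.radians(lat_deg)) * math.tan(DEC)
    if x >= 1.0:
        return 0.0    # never rises (always below the horizon)
    if x <= -1.0:
        return 24.0   # circumpolar (never sets)
    # 2 * hour angle at rising/setting, converted from degrees to hours
    return 2 * math.degrees(math.acos(x)) / 15.0

print(round(hours_above_horizon(0), 1))   # ~12 h at the equator, matching the table
print(round(hours_above_horizon(60), 1))  # ~2.2 h at 60° north (16:39 to 18:53)
```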
When the Milky Way goes straight over your head, then the center line of the Milky Way makes a right angle with the poles of the Milky Way, so then those poles must be on the horizon. The North Pole of the Milky Way has right ascension 12h51m and declination +27°08' (compared to the equinox J2000.0). That point is on the horizon at the two sidereal times that are listed in the preceding table in the column "Straight Overhead".
## 13. The Neighbors of the Milky Way Galaxy
Our Milky Way Galaxy is part of a group of galaxies that is called the Local Group. The nearest neighbor galaxy that is about as large as our own Galaxy is the Andromeda Nebula (M 31). The Galaxy also has a number of smaller neighbors, which probably have not yet all been discovered. The best-known small neighbors are the Large Magellanic Cloud and the Small Magellanic Cloud. You can find a list on the UniverseFamilyTree Page about galaxy clusters.
## 14. Thousands of Millions of Galaxies
If we assume that the Universe looks everywhere pretty much the same (and we have no indications to the contrary), then there have to be thousands of millions of galaxies in the Universe. A typical galaxy has a mass of about 100 thousand million times that of the Sun (measured from the motions in and around the galaxy, which depend on all mass, including invisible mass). If we divide the total mass of the visible Universe (around 4 × 10^22 solar masses or 9 × 10^52 kilograms) by the typical mass of a galaxy, then we get an estimate of about 400 thousand million galaxies in the visible Universe.
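The division above is a simple order-of-magnitude estimate, using only the two numbers quoted in the text:

```python
# Back-of-the-envelope galaxy count from the figures quoted above.
total_mass_solar = 4e22          # mass of the visible Universe, in solar masses
typical_galaxy_solar = 1e11      # "about 100 thousand million" solar masses

n_galaxies = total_mass_solar / typical_galaxy_solar
print(f"about {n_galaxies:.0e} galaxies")  # about 4e+11, i.e. 400 thousand million
```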
Only a very small fraction of all of those galaxies has been investigated. Even the Sloan Digital Sky Survey (www.sdss.org), the largest systematic census of objects in the Universe, will count "only" about 100 million objects, among which an estimated one million galaxies.
## 15. Active Galaxies
The leading models for active galaxies assume that there is a large black hole at the very center of such a galaxy and that a large amount of energy is released by material from the surroundings that falls into the black hole. See //en.wikipedia.org/wiki/Active_galaxy and //imagine.gsfc.nasa.gov/docs/science/know_l1/active_galaxies.html. For the Active Galaxies Newsletter, see //www.ast.man.ac.uk/~rb/agn/.
## 16. The Great Wall
There are many very large concentrations of galaxies. One, at about 300 million to 500 million lightyears from Earth in the direction of the constellations from at least Leo to Hercules, is known as the "Great Wall". However, its full extent is not yet known, and it is not everywhere very well separated from neighboring concentrations of galaxies. The part we've seen so far measures about 200 by 600 by 20 million lightyears in size. See //www.angelfire.com/id/jsredshift/grtwall.htm and //adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1992ApJ...384..396R.
## 17. Lonely Galaxies
The galaxies in the Universe are so far apart because they formed out of very much larger clouds of gas and dust that collapsed and shrunk because of their own gravity. All of the gas and dust from a very large region of space concentrated into a single galaxy that filled only a very small part of the region of space, so the rest of the space ended up empty.
If each galaxy ended up where the center of the cloud used to be, and if two of those clouds touched each other, then the distance between the two galaxies that formed out of them would still be many times their size, because those galaxies are so much smaller than the original clouds.
Also, the Universe is expanding, so galaxies are further apart now than they were thousands of millions of years ago.
Space between galaxies is mostly empty. There may be a few stars that have escaped from a galaxy, and there are a few protons and electrons here and there, but all in all there is so little matter there that a similar situation in a laboratory on Earth would be called a good vacuum.
## 18. Colliding Galaxies
Galaxies are usually millions of lightyears apart, yet collisions between galaxies do occasionally occur. A galaxy is mostly empty space, so it is very unlikely that stars from both galaxies will collide. What happens, exactly, when two particular galaxies collide depends on the mass of the galaxies, on the smallest distance between them during the collision, on the relative velocity, and on the orientation of the galaxies. It may be that the two galaxies will merge into a single galaxy, but it may also be that they merely have a slight change of direction. It may be that one of the galaxies loses stars to the other one, or that both lose stars to each other. Often, tidal forces cause large groups of stars to be ejected into the Universe, and those form (temporary) tails to the galaxies. It also often happens that the collision generates a pressure wave that courses through the galaxies and causes star formation. Such a galaxy will contain unusually many young, bright stars for a few million years.
## 19. How to Tell if a Point of Light is a Galaxy or a Star
One can identify the nature of a point of light by its spectrum and its brightness. The spectrum of a galaxy is the combination of the spectra of all of its bright components, mostly the stars, but sometimes also with a large contribution from an Active Galactic Nucleus (AGN, for example such as in quasars).
The spectrum of an AGN looks very different from that of a star, so the difference is quite easy to spot if you can measure the spectrum.
Because the spectrum of a normal galaxy (without an AGN) is the combination of the spectra of stars of many different types, it usually does not look exactly like the spectrum of any particular type of star, and from that you can tell that it is not the spectrum of a single star.
A galaxy is very much bigger than a single star, so a galaxy must be very far away indeed to appear in the sky as just a point of light. At such great distances, the galaxy is likely to show a large redshift of its spectrum (because of Hubble's Law and the expansion of the Universe), which means that all spectral lines are shifted to longer wavelengths compared to the same spectral lines in the spectrum of a nearby star. The redshift indicates the distance, and the distance together with the apparent brightness indicates how much light the object emits, and that is very much greater for a galaxy than for a single star.
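The distance argument above rests on Hubble's Law, v = H0 × d, together with the low-speed approximation z ≈ v/c. A minimal sketch (the value H0 = 70 km/s/Mpc is an assumed round number, not taken from this page):

```python
# Hubble's Law (v = H0 * d) and the low-speed redshift approximation z = v / c.
H0 = 70.0          # km/s per megaparsec (assumed round value)
C = 299792.458     # speed of light, km/s

def approx_redshift(distance_mpc):
    v = H0 * distance_mpc      # recession speed, km/s
    return v / C               # only valid while v is much less than c

# A galaxy 100 Mpc away recedes at 7000 km/s, so z is roughly 0.02 --
# far larger than the Doppler shift of any individual nearby star.
print(round(approx_redshift(100.0), 4))
```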
A small object in our Solar System appears as a point of light in our sky, and such an object can be told apart from a star or galaxy by its spectrum, too, for example because the spectrum reveals the temperature of the object, and objects in our Solar System (other than the Sun itself) are much cooler than the Sun and other stars are. Such objects also reflect sunlight, which carries the temperature signature of the Sun, so one has to be careful to check for another contribution from the object itself, with a much lower temperature.
## 20. Moving Galaxies
We have direct and indirect evidence that galaxies move. Indirect evidence is that many galaxies look like they have been disturbed by the gravity of something else, and that must then have been something with a similar amount of mass, and other galaxies are good candidates for that. Sometimes such another galaxy is close to the disturbed galaxy, but sometimes there is no other galaxy nearby, and then the disturbing galaxy must have been closer in the past than it is today, so it must have moved.
The direct evidence is the Doppler shift of the spectral lines in the light coming from those galaxies. Just like the pitch of a siren gets higher when the ambulance drives towards you, and gets lower when the ambulance drives away from you, so does the frequency of light waves increase or decrease as the source of the light moves towards you or away from you.
The light coming from galaxies has many spectral lines of which we know what their real frequencies are, so we can determine their speed along the line of sight.
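Recovering the line-of-sight speed from a shifted spectral line uses the non-relativistic Doppler formula v = c × (λ_observed − λ_rest) / λ_rest. A sketch (the H-alpha rest wavelength is a standard value; the observed shift is a made-up illustration):

```python
# Line-of-sight speed from a Doppler-shifted spectral line (non-relativistic).
C = 299792.458        # speed of light, km/s
H_ALPHA = 656.281     # rest wavelength of the H-alpha line, nm (standard value)

def radial_velocity(observed_nm):
    # Positive result = source receding (line shifted redward).
    return C * (observed_nm - H_ALPHA) / H_ALPHA   # km/s

# A redward shift of 0.022 nm corresponds to roughly 10 km/s, within the
# range of typical galaxy speeds relative to their neighbors.
print(round(radial_velocity(656.303), 1))
```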
Determining the speed of galaxies at right angles to the line of sight (i.e., across the sky) is a lot more difficult than determining the speed along the line of sight. Of most galaxies of which we know their speed along the line of sight we do not know their speed across the sky.
We do not have a preferred position in the Universe, so the spread of speeds of galaxies around the local average is about as great at right angles to the line of sight as it is along the line of sight.
The typical speed of a galaxy relative to its neighbors is of the order of 10 to 100 km/s.
//aa.quae.nl/en/antwoorden/melkwegstelsels.html;
Last updated: 2017-12-28
# Ubuntu – Correctly set up Apache virtual hosts with multiple users
Tags: apache2, users, virtualhost
Thanks in advance for any help you may provide. I've taught myself Linux (Ubuntu), Apache, virtual hosts and a few bits with regard to security, though I've yet to put these all together seamlessly.
I've also learnt the hard way from past mistakes with regard to not setting up user accounts and sites/virtual hosts correctly. I used to ssh into the server as root (I know, very bad), and created all the virtual hosts and files manually, so everything was owned by root. I've had an old version of Joomla exploited and malicious spam code inserted in all my other virtual hosts.
I'm in the process of moving to a new server and am looking for the correct steps to follow with regards to setting up virtual hosts and users accounts that can access them. Because we use 3rd party software, we have on a occasion needed someone else to ftp into the site to investigate their website files.
Our current server runs Ubuntu 10.04; the new server runs 12.04. All website files are set up under:

```
/home/www/site1.com
/home/www/site2.com
/home/www/site3.com
```

and so on.
In short, I guess this is what I'm after.
1. How do I correctly set up a virtual host and user so that it only has access to its own files, and not to any other virtual host on the server? For example, should that site get exploited with malicious code, it can't infect other sites on the server.
2. How would I set up users so that they can only ftp into their given files and not access any of the other sites?
As mentioned before, there are numerous articles detailing how to do any one of these steps, but nothing that brings them seamlessly together.
Really appreciate the help anyone can offer. Please feel free to ask for any additional information you may need to give advice. I'm currently following tutorials to set up chroot, however I fear I'm not doing this correctly as I've picked up a few inconsistencies along the way. Am also investigating jailkit.
Since this question is the top Google search result for "apache virtual host separate user", I'd like to give a more elaborate answer based on this answer. My answer does not cover the use case of PHP via CGI (you would use suPHP); it focuses on PHP used as an Apache module.
You can use the Apache module apache2-mpm-itk. It is available for the current version of Apache, 2.4. It comes with the directive AssignUserID, which defines the user and group that a virtual host's requests are handled with. This is an example of the Apache site configuration I ended up with:
```apache
<VirtualHost *:80>
    ServerName www.site1.com
    DocumentRoot /home/www/site1.com
    AssignUserID site1 www-data
</VirtualHost>
```
Of course, this only improves security as long as the individual DocumentRoots are owned by their respective users and are not group-accessible. For further PHP-related security, scripts are jailed in the DocumentRoot with the open_basedir restriction. As for backend file access, I prefer SFTP with chroot.
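Since the question involves many sites, the per-site configuration is easy to script. A minimal Python sketch (the site list and the template are illustrative assumptions based on the question's /home/www layout, not part of the original answer):

```python
# Generate one mpm-itk vhost per site, each pinned to its own system user
# via AssignUserID, mirroring the site1 example above.
SITES = ["site1.com", "site2.com", "site3.com"]

TEMPLATE = """<VirtualHost *:80>
    ServerName www.{site}
    DocumentRoot /home/www/{site}
    AssignUserID {user} www-data
</VirtualHost>
"""

def vhost_config(site):
    user = site.split(".")[0]   # e.g. "site1.com" -> user "site1" (assumed convention)
    return TEMPLATE.format(site=site, user=user)

for site in SITES:
    # In practice you would write each config to /etc/apache2/sites-available/
    # and enable it with a2ensite; here we just print it.
    print(vhost_config(site))
```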
https://kerodon.net/tag/03C3 | Kerodon
7.5.6 The Homotopy Colimit as a Derived Functor
Let $\operatorname{\mathcal{C}}$ be a small category and let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{QCat}$ be a diagram of $\infty$-categories. In §7.5.3, we showed that the homotopy limit $\underset {\longleftarrow }{\mathrm{holim}}( \mathscr {F} )$ can be identified with the limit of an isofibrant replacement for $\mathscr {F}$: that is, there exists an isomorphism $\underset {\longleftarrow }{\mathrm{holim}}( \mathscr {F} ) \simeq \varprojlim ( \mathscr {F}^{+} )$, where $\mathscr {F}^{+}: \operatorname{\mathcal{C}}\rightarrow \operatorname{QCat}$ is an isofibrant diagram equipped with a levelwise categorical equivalence $\mathscr {F} \hookrightarrow \mathscr {F}^{+}$ (Construction 7.5.3.3 and Proposition 7.5.3.7). Our goal in this section is to present a parallel treatment of the homotopy colimit functor of Construction 5.3.2.1. More precisely, we show that the homotopy colimit of a diagram $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ can be identified with the colimit of an auxiliary diagram $\mathscr {F}_{+}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ which is equipped with a levelwise weak homotopy equivalence $\mathscr {F}_{+} \twoheadrightarrow \mathscr {F}$ (Proposition 7.5.6.12).
We begin by introducing some terminology. Recall that a natural transformation $\beta : \widetilde{\mathscr {G}} \rightarrow \mathscr {G}$ is a levelwise trivial Kan fibration if, for each object $C \in \operatorname{\mathcal{C}}$, the morphism $\beta _{C}: \widetilde{\mathscr {G}}(C) \rightarrow \mathscr {G}(C)$ is a trivial Kan fibration of simplicial sets.
Definition 7.5.6.1. Let $\operatorname{\mathcal{C}}$ be a small category. We say that a diagram of simplicial sets $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ is projectively cofibrant if, for every levelwise trivial Kan fibration $\beta : \mathscr {G}' \rightarrow \mathscr {G}$, the induced map
$\operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}},\operatorname{Set_{\Delta }}) }( \mathscr {F}, \mathscr {G}' ) \rightarrow \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}},\operatorname{Set_{\Delta }}) }( \mathscr {F}, \mathscr {G})$
is surjective. That is, every natural transformation $\alpha : \mathscr {F} \rightarrow \mathscr {G}$ factors through $\beta$.
Example 7.5.6.2. Let $\operatorname{\mathcal{C}}$ be a category and let $U: \operatorname{\mathcal{E}}\rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}})$ be a morphism of simplicial sets. Then the diagram
$\mathscr {F}_{\operatorname{\mathcal{E}}}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}\quad \quad \mathscr {F}_{\operatorname{\mathcal{E}}}(C) = \operatorname{N}_{\bullet }( \operatorname{\mathcal{C}}_{/C} ) \times _{ \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) } \operatorname{\mathcal{E}}$
is projectively cofibrant, in the sense of Definition 7.5.6.1. To prove this, we must show that for every levelwise trivial Kan fibration $\mathscr {G}' \rightarrow \mathscr {G}$ between functors $\mathscr {G}', \mathscr {G}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$, the induced map
$\theta : \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}_{\operatorname{\mathcal{E}}}, \mathscr {G}') \rightarrow \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}_{\operatorname{\mathcal{E}}}, \mathscr {G} )$
is surjective. Using Proposition 5.3.3.21, we can identify $\theta$ with a pullback of the map $\operatorname{Hom}_{\operatorname{Set_{\Delta }}}( \operatorname{\mathcal{E}}, \operatorname{N}_{\bullet }^{\mathscr {G}'}(\operatorname{\mathcal{C}}) ) \rightarrow \operatorname{Hom}_{\operatorname{Set_{\Delta }}}( \operatorname{\mathcal{E}}, \operatorname{N}_{\bullet }^{\mathscr {G} }(\operatorname{\mathcal{C}}) )$, which is surjective by virtue of Exercise 5.3.3.11.
Exercise 7.5.6.3 (Well-Founded Diagrams). Let $(Q, \leq )$ be a well-founded partially ordered set. Show that a diagram of simplicial sets $\mathscr {F}: Q \rightarrow \operatorname{Set_{\Delta }}$ is projectively cofibrant if and only if, for each element $q \in Q$, the associated map $\varinjlim _{ p < q} \mathscr {F}(p) \rightarrow \mathscr {F}(q)$ is a monomorphism of simplicial sets (compare with Proposition 4.5.6.6).
Example 7.5.6.4 (Projectively Cofibrant Sequences). A sequential diagram of simplicial sets
$X(0) \rightarrow X(1) \rightarrow X(2) \rightarrow X(3) \rightarrow \cdots$
is projectively cofibrant (when regarded as a functor $\operatorname{\mathbf{Z}}_{\geq 0} \rightarrow \operatorname{Set_{\Delta }}$) if and only if each of the transition maps $X(n) \rightarrow X(n+1)$ is a monomorphism.
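For instance (an illustration not taken from the text), the skeletal filtration of an arbitrary simplicial set $X$

$\operatorname{sk}_{0}( X ) \hookrightarrow \operatorname{sk}_{1}( X ) \hookrightarrow \operatorname{sk}_{2}( X ) \hookrightarrow \cdots$

is a sequential diagram in which every transition map is a monomorphism, hence a projectively cofibrant diagram; its colimit is $X$ itself.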
Example 7.5.6.5 (Projectively Cofibrant Squares). A commutative diagram of simplicial sets
$$\begin{gathered}\label{equation:projectively-cofibrant-squares} \xymatrix { A \ar [r]^{f_0} \ar [d]^{f_1} & A_0 \ar [d]^{f'_1} \\ A_1 \ar [r]^{f'_0} & A_{01} } \end{gathered}$$
is projectively cofibrant (when regarded as a functor $[1] \times [1] \rightarrow \operatorname{Set_{\Delta }}$) if and only if the morphisms
$f_0: A \rightarrow A_0 \quad \quad f_1: A \rightarrow A_1 \quad \quad (f'_1, f'_0): A_0 \coprod _{A} A_1 \rightarrow A_{01}$
are monomorphisms of simplicial sets. Equivalently, (7.52) is projectively cofibrant if it is a pullback square consisting of monomorphisms.
Remark 7.5.6.6 (Relationship to Isofibrant Diagrams). Let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ be a diagram of simplicial sets, let $\operatorname{\mathcal{D}}$ be an $\infty$-category, and let $\operatorname{\mathcal{D}}^{\mathscr {F}}: \operatorname{\mathcal{C}}^{\operatorname{op}} \rightarrow \operatorname{Set_{\Delta }}$ denote the functor given by the construction $C \mapsto \operatorname{Fun}( \mathscr {F}(C), \operatorname{\mathcal{D}})$. If $\mathscr {F}$ is projectively cofibrant (in the sense of Definition 7.5.6.1), then $\operatorname{\mathcal{D}}^{\mathscr {F}}$ is isofibrant (in the sense of Definition 4.5.6.3). That is, if $\mathscr {E}: \operatorname{\mathcal{C}}^{\operatorname{op}} \rightarrow \operatorname{Set_{\Delta }}$ is a diagram of simplicial sets and $\mathscr {E}_0 \subseteq \mathscr {E}$ is a subfunctor for which the inclusion $\mathscr {E}_0 \hookrightarrow \mathscr {E}$ is a levelwise categorical equivalence, then the restriction map
$\theta : \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}^{\operatorname{op}}, \operatorname{Set_{\Delta }}) }( \mathscr {E}, \operatorname{\mathcal{D}}^{\mathscr {F}} ) \rightarrow \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}^{\operatorname{op}}, \operatorname{Set_{\Delta }}) }( \mathscr {E}_0, \operatorname{\mathcal{D}}^{\mathscr {F}} )$
is surjective. This follows from the observation that $\theta$ can be identified with the map
$\operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}, \operatorname{\mathcal{D}}^{\mathscr {E}} ) \rightarrow \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}, \operatorname{\mathcal{D}}^{\mathscr {E}_0} )$
given by composition with the restriction map $\operatorname{\mathcal{D}}^{\mathscr {E}} \rightarrow \operatorname{\mathcal{D}}^{\mathscr {E}_0}$, which is a levelwise trivial Kan fibration by virtue of Corollary 4.5.5.19.
Proposition 7.5.6.7. Let $\operatorname{\mathcal{C}}$ be a small category and let $\alpha : \mathscr {F} \rightarrow \mathscr {G}$ be a natural transformation between projectively cofibrant diagrams $\mathscr {F}, \mathscr {G}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$. If $\alpha$ is a levelwise categorical equivalence, then the induced map $\varinjlim (\alpha ): \varinjlim ( \mathscr {F} ) \rightarrow \varinjlim ( \mathscr {G} )$ is a categorical equivalence of simplicial sets. If $\alpha$ is a levelwise weak homotopy equivalence, then $\varinjlim (\alpha )$ is a weak homotopy equivalence.
Proof. We will prove the first assertion; the second follows by a similar argument. Assume that $\alpha$ is a levelwise categorical equivalence and let $\operatorname{\mathcal{D}}$ be an $\infty$-category; we wish to show that precomposition with $\varinjlim (\alpha )$ induces an equivalence of $\infty$-categories $\alpha ^{\ast }: \operatorname{Fun}( \varinjlim ( \mathscr {G} ), \operatorname{\mathcal{D}}) \rightarrow \operatorname{Fun}( \varinjlim ( \mathscr {F}), \operatorname{\mathcal{D}})$. Since $\alpha$ is a levelwise categorical equivalence, precomposition with $\alpha$ induces a levelwise categorical equivalence $\beta : \operatorname{\mathcal{D}}^{ \mathscr {G} } \rightarrow \operatorname{\mathcal{D}}^{\mathscr {F}}$ in the category $\operatorname{Fun}( \operatorname{\mathcal{C}}^{\operatorname{op}}, \operatorname{Set_{\Delta }})$. Unwinding the definitions, we see that $\alpha ^{\ast }$ can be identified with the limit $\varprojlim (\beta )$. Since $\operatorname{\mathcal{D}}^{\mathscr {F}}$ and $\operatorname{\mathcal{D}}^{\mathscr {G}}$ are isofibrant diagrams (Remark 7.5.6.6), the functor $\varprojlim (\beta )$ is an equivalence of $\infty$-categories (Corollary 4.5.6.15). $\square$
We now show that every diagram of simplicial sets $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ admits a weak homotopy equivalence from a projectively cofibrant diagram (for a stronger statement, see Proposition 7.5.9.7).
Construction 7.5.6.8 (Explicit Cofibrant Replacement). Let $\operatorname{\mathcal{C}}$ be a small category, let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ be a diagram of simplicial sets, and let $\underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} )$ denote the homotopy colimit of $\mathscr {F}$ (Construction 5.3.2.1). For each object $C \in \operatorname{\mathcal{C}}$, we let $\mathscr {F}_{+}(C)$ denote the simplicial set given by the fiber product
$\operatorname{N}_{\bullet }( \operatorname{\mathcal{C}}_{ /C} ) \times _{ \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) } \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ) = \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F}|_{ \operatorname{\mathcal{C}}_{/C} } ).$
The construction $C \mapsto \mathscr {F}_{+}(C)$ determines a diagram of simplicial sets $\mathscr {F}_{+}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$. This diagram is equipped with a natural transformation $\alpha : \mathscr {F}_{+} \twoheadrightarrow \mathscr {F}$, which carries each object $C \in \operatorname{\mathcal{C}}$ to the comparison map
$\mathscr {F}_{+}(C) = \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F}|_{ \operatorname{\mathcal{C}}_{/C} } ) \twoheadrightarrow \varinjlim ( \mathscr {F}|_{ \operatorname{\mathcal{C}}_{/C} } ) \simeq \mathscr {F}(C)$
of Remark 5.3.2.9.
Proposition 7.5.6.9. Let $\operatorname{\mathcal{C}}$ be a small category and let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ be a diagram of simplicial sets. Then the diagram $\mathscr {F}_{+}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ of Construction 7.5.6.8 is projectively cofibrant, and the natural transformation $\alpha : \mathscr {F}_{+} \rightarrow \mathscr {F}$ is a levelwise weak homotopy equivalence. Moreover, $\alpha$ is also an epimorphism.
Proof. Example 7.5.6.2 shows that the diagram $\mathscr {F}_{+}$ is projectively cofibrant and Remark 5.3.2.9 shows that $\alpha$ is an epimorphism. To complete the proof, it will suffice to show that for each object $C \in \operatorname{\mathcal{C}}$, the map $\alpha _{C}: \mathscr {F}_{+}(C) \rightarrow \mathscr {F}(C)$ is a weak homotopy equivalence of simplicial sets. Replacing $\operatorname{\mathcal{C}}$ by the slice category $\operatorname{\mathcal{C}}_{/C}$, we can reduce to the case where $C$ is a final object of $\operatorname{\mathcal{C}}$; in this case, we wish to prove that the comparison map
$\underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ) \rightarrow \varinjlim ( \mathscr {F} ) \simeq \mathscr {F}(C)$
is a weak homotopy equivalence. Note that this map admits a section, given by the inclusion map
$\iota : \mathscr {F}(C) \simeq \{ C\} \times _{ \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) } \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ) \rightarrow \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ).$
We complete the proof by noting that our assumption that $C \in \operatorname{\mathcal{C}}$ is a final object guarantees that $\iota$ is right anodyne (Example 7.2.3.12). $\square$
Warning 7.5.6.10. In the situation of Proposition 7.5.6.9, the natural transformation $\alpha : \mathscr {F}_{+} \twoheadrightarrow \mathscr {F}$ is usually not a levelwise categorical equivalence. For example, if $\mathscr {F}$ is the constant functor taking the value $\Delta ^0$, then $\mathscr {F}_{+}$ is given by the construction $C \mapsto \operatorname{N}_{\bullet }( \operatorname{\mathcal{C}}_{/C} )$.
Remark 7.5.6.11. Constructions 7.5.6.8 and 7.5.3.3 are closely related. Let $\operatorname{\mathcal{C}}$ be a small category, let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ be a diagram of simplicial sets, and let $\mathscr {G}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Kan}$ be a diagram of Kan complexes. Combining Corollary 5.3.2.24 with Proposition 5.3.3.21, we obtain canonical isomorphisms of Kan complexes
\begin{eqnarray*} \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}, \mathscr {G}^{+} )_{\bullet } & = & \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}, \operatorname{sTr}_{ \operatorname{N}_{\bullet }^{\mathscr {G}}(\operatorname{\mathcal{C}})/\operatorname{\mathcal{C}}} ) \\ & \simeq & \operatorname{Fun}_{ / \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) }( \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ), \operatorname{N}_{\bullet }^{\mathscr {G}}(\operatorname{\mathcal{C}}) ) \\ & \simeq & \operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}_{+}, \mathscr {G} )_{\bullet }. \end{eqnarray*}
More generally, if $\mathscr {G}$ is a diagram of $\infty$-categories, we can identify $\operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}, \mathscr {G}^{+} )_{\bullet }$ with the full subcategory of $\operatorname{Hom}_{ \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) }( \mathscr {F}_{+}, \mathscr {G} )_{\bullet }$ spanned by those natural transformations $\alpha : \mathscr {F}_{+} \rightarrow \mathscr {G}$ having the property that, for each object $C \in \operatorname{\mathcal{C}}$, the diagram
$\alpha _{C}: \mathscr {F}_{+}(C) = \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F}|_{ \operatorname{\mathcal{C}}_{/C} } ) \rightarrow \mathscr {G}(C)$
carries horizontal edges of $\underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F}|_{ \operatorname{\mathcal{C}}_{/C} } )$ to isomorphisms in the $\infty$-category $\mathscr {G}(C)$.
Proposition 7.5.6.12. Let $\operatorname{\mathcal{C}}$ be a small category, let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ be a diagram of simplicial sets, and let $\mathscr {F}_{+}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ be the diagram of Construction 7.5.6.8. Then there is a canonical isomorphism of simplicial sets $\lambda : \varinjlim ( \mathscr {F}_{+} ) \rightarrow \underset { \longrightarrow }{\mathrm{holim}}(\mathscr {F})$ which is characterized by the following requirement: for each object $C \in \operatorname{\mathcal{C}}$, the composition
\begin{eqnarray*} \operatorname{N}_{\bullet }( \operatorname{\mathcal{C}}_{/C} ) \times _{ \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) } \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ) & = & \mathscr {F}_{+}(C) \\ & \rightarrow & \varinjlim ( \mathscr {F}_{+} ) \\ & \xrightarrow {\lambda } & \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ) \end{eqnarray*}
is given by projection onto the second factor.
Proof. It follows from the definition of the colimit that there is a unique morphism of simplicial sets $\lambda : \varinjlim ( \mathscr {F}_{+} ) \rightarrow \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} )$ having the desired property. Using the dual of Lemma 7.5.3.8, we deduce that $\lambda$ is an isomorphism. $\square$
Remark 7.5.6.13. Let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ be a diagram of simplicial sets, let $\theta : \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ) \twoheadrightarrow \varinjlim ( \mathscr {F} )$ be the comparison map of Remark 5.3.2.9, and let $\lambda : \varinjlim ( \mathscr {F}_{+} ) \xrightarrow {\sim } \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} )$ be the isomorphism of Proposition 7.5.6.12. Then the composition $(\theta \circ \lambda ): \varinjlim ( \mathscr {F}_{+} ) \rightarrow \varinjlim ( \mathscr {F} )$ is induced by the natural transformation $\alpha : \mathscr {F}_{+} \twoheadrightarrow \mathscr {F}$ appearing in Construction 7.5.6.8.
Corollary 7.5.6.14. Let $\operatorname{\mathcal{C}}$ be a small category and let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{Set_{\Delta }}$ be a projectively cofibrant diagram of simplicial sets. Then the comparison map $\underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ) \twoheadrightarrow \varinjlim ( \mathscr {F} )$ of Remark 5.3.2.9 is a weak homotopy equivalence.
Proof. By virtue of Remark 7.5.6.13, it will suffice to show that the natural transformation $\alpha : \mathscr {F}_{+} \twoheadrightarrow \mathscr {F}$ of Construction 7.5.6.8 induces a weak homotopy equivalence $\varinjlim (\alpha ): \varinjlim ( \mathscr {F}_{+} ) \rightarrow \varinjlim ( \mathscr {F} )$. This is a special case of Proposition 7.5.6.7, since $\alpha$ is a levelwise weak homotopy equivalence between projectively cofibrant diagrams (Proposition 7.5.6.9). $\square$
Warning 7.5.6.15. Let $\mathscr {F}: \operatorname{\mathcal{C}}\rightarrow \operatorname{QCat}$ be a diagram of simplicial sets, let $\alpha : \mathscr {F}_{+} \twoheadrightarrow \mathscr {F}$ be the natural transformation of Construction 7.5.6.8, and let $\lambda : \varinjlim ( \mathscr {F}_{+} ) \xrightarrow {\sim } \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} )$ be the isomorphism of Proposition 7.5.6.12. Then we have a diagram of simplicial sets
$\xymatrix@R =50pt@C=50pt{ \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F}_{+} ) \ar [r]^-{ \underset { \longrightarrow }{\mathrm{holim}}( \alpha ) } \ar [d] & \underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} ) \ar [d] \\ \varinjlim ( \mathscr {F}_{+} ) \ar [r]_{\varinjlim (\alpha ) } \ar [ur]^{\lambda }_{\sim } & \varinjlim ( \mathscr {F} ), }$
where the outer square and the lower right triangle are commutative (Remark 7.5.6.13). Beware that the upper left triangle is usually not commutative. That is, $\underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F} )$ and $\varinjlim ( \mathscr {F}_{+} )$ are isomorphic when viewed as abstract simplicial sets, but not when viewed as quotients of the simplicial set $\underset { \longrightarrow }{\mathrm{holim}}( \mathscr {F}_{+} )$ (compare with Warning 7.5.3.14).
Remark 7.5.6.16 (The Homotopy Colimit as a Left Derived Functor). The preceding results can be interpreted in the language of model categories. For every small category $\operatorname{\mathcal{C}}$, the category $\operatorname{Fun}( \operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }})$ can be equipped with a model structure in which the fibrations are levelwise Kan fibrations and weak equivalences are levelwise weak homotopy equivalences (see Example ). Combining Propositions 7.5.6.9 and 7.5.6.12, we deduce that the homotopy colimit functor $\underset { \longrightarrow }{\mathrm{holim}}: \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) \rightarrow \operatorname{Set_{\Delta }}$ can be viewed as a left derived functor of the usual colimit $\varinjlim : \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{Set_{\Delta }}) \rightarrow \operatorname{Set_{\Delta }}$ (see Definition ). | 2022-12-10 10:04:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.987028956413269, "perplexity": 213.63287832245499}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710421.14/warc/CC-MAIN-20221210074242-20221210104242-00124.warc.gz"} |
https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/4776 | ## An IP Approach to Toll Enforcement Optimization on German Motorways
• This paper proposes the first model for toll enforcement optimization on German motorways. The enforcement is done by mobile control teams and our goal is to produce a schedule achieving network-wide control, proportional to spatial and time-dependent traffic distributions. Our model consists of two parts. The first plans control tours using a vehicle routing approach with profits and some side constraints. The second plans feasible rosters for the control teams. Both problems can be modeled as Multi-Commodity Flow Problems. Adding additional coupling constraints produces a large-scale integrated integer programming formulation. We show that this model can be solved to optimality for real world instances associated with a control area in East Germany.
Author: Ralf Borndörfer, Guillaume Sagnol, Elmar Swarat
In Proceedings: Operations Research Proceedings 2011, pp. 317-322, Operations Research Proceedings series, 2012
URN: urn:nbn:de:0297-zib-14299
DOI: http://dx.doi.org/10.1007/978-3-642-29210-1_51
$Rev: 13581$ | 2017-08-20 04:14:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23463700711727142, "perplexity": 3100.8454684704984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105970.61/warc/CC-MAIN-20170820034343-20170820054343-00229.warc.gz"} |
http://robowiki.net/wiki/Segmentation/Autoselected_Segmentation | # Segmentation/Autoselected Segmentation
Automatic Segmentation is a mechanism for automatically selecting from every combination of available axis, so that for any situation the best available data is used. The concept was first proposed on the old wiki by Vuen for the bot Fractal [1].
## Example
To start, you have a number of segmentation axes; in this example, Lateral Velocity (LV), Acceleration (A), and TSR:
You can assemble all combinations of these axes into a long list of every segmentation that uses them:
| Segmentation | Depth |
|---|---|
| (no segmentation) | 0 |
| LV | 1 |
| A | 1 |
| TSR | 1 |
| LV + A | 2 |
| LV + TSR | 2 |
| A + TSR | 2 |
| LV + A + TSR | 3 |
Once you have the array of segmentations, you need a function to determine how good the data in that segmentation is. One way to do that is with what's called the Crest factor. This is the ratio of the peak of the data to the root-mean-squared value of the data.
Crest factor calculation:
$C = {|x|_\mathrm{peak} \over x_\mathrm{rms}}$
Root mean squared calculation:
$x_{\mathrm{rms}} = \sqrt {{1 \over n} \sum_{i=1}^{n} {x_i}^2} = \sqrt {{{x_1}^2 + {x_2}^2 + \cdots + {x_n}^2} \over n}$
Therefore, the crest factor can be calculated by:
$C = {|x|_\mathrm{peak} \over {\sqrt {{1 \over n} \sum_{i=1}^{n} {x_i}^2}}}$
The segmentation that returns the most useful data (as determined by the crest factor) can be used for dodging, aiming, or any other statistical calculation, depending on what you need.
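As a quick illustration (sketched in Python rather than the Java that Robocode bots use, with made-up bin values), the crest factor singles out sharply peaked data:

```python
import math

def crest_factor(bins):
    """Crest factor: peak magnitude divided by the root-mean-squared
    value of the data. Higher values mean the data is more decisive."""
    peak = max(abs(x) for x in bins)
    rms = math.sqrt(sum(x * x for x in bins) / len(bins))
    return peak / rms if rms > 0 else 0.0

peaked = [0, 0, 9, 0, 0]   # one clear spike
flat = [3, 3, 3, 3, 3]     # no information
```

A flat buffer scores exactly 1 (the minimum possible, since the peak is never below the RMS), while the spiked buffer scores noticeably higher.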
## Implementation
The implementation below is used in Watermelon. Fractal uses a similar implementation.
First, you need an abstract class to represent a segmentation axis. It needs a minimum and maximum value, a number of segments, and a function to return an index value given a reference to either a bot or an enemy.
Create a subclass for each segmentation axis you need - Lateral Velocity, Acceleration, etc.
Second, you need a class to represent a single segmentation. It will be initialized with a certain number of axes, and it will segment on all those axes. It also needs to be able to handle being initialized with no axis at all. This class needs to be able to mark a "hit" on itself (possibly with bin smoothing) given the necessary index for each of its axes. It should also be able to indicate how "good" its data is given a certain set of axis indices. It can simply return the Crest Factor, or it can incorporate additional factors such as how many axes it has and how much data has been collected.
Finally you need to assemble every possible segmentation into a large array of segmentations. When you register a "hit", mark it on each segmentation. When you need to take advantage of the data you've collected, find the axis indices for each segmentation and ask it how good its data is for that set of indices. Use the segmentation with the best fitness.
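The assembly step can be sketched in Python (the bots themselves are written in Java; the axis names are the abbreviations from the table above), using the binary-counting idea described in the next paragraph:

```python
def all_segmentations(axes):
    """Enumerate every subset of the axis list by counting in binary:
    bit i of the counter decides whether axes[i] is included."""
    subsets = []
    for n in range(2 ** len(axes)):
        subsets.append([axis for i, axis in enumerate(axes) if n & (1 << i)])
    return subsets

combos = all_segmentations(["LV", "A", "TSR"])
```

This yields all 8 combinations, from the unsegmented empty list (n = 0) up to the full three-axis segmentation (n = 7), matching the table above.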
One simple way to assemble these axes for each segmentation takes advantage of properties of binary numbers. You will have 2^num_of_axes segmentations, due to some convenient cancellation in the combinations. Count from 0 to num_of_segmentations - 1 in binary, and assign each place value to one of your axes. For each number in the sequence, if that place value is a 1, include that axis. | 2018-02-24 14:22:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4524228274822235, "perplexity": 935.5144014360418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815812.83/warc/CC-MAIN-20180224132508-20180224152508-00692.warc.gz"}
https://www.mlq.ai/portfolio-construction-and-analysis-risk-return/ | # Introduction to Portfolio Construction and Analysis: Risk & Returns
In this article, we'll introduce key concepts of risk and return in portfolio analysis, including Value-at-Risk, Conditional Value-at-Risk, and more.
In this article, we'll introduce key concepts of risk and return related to portfolio construction and portfolio analysis.
In particular, we'll discuss several risk and performance measures that are used in portfolio management and analysis, including risk measures such as Value-at-Risk, Conditional Value-at-Risk, and more.
This article is based on notes from the first course in this Investment Management with Python and Machine Learning Specialization and is organized as follows:
• Fundamentals of Returns
• Measuring Risk and Reward
• Deviations from Normality
• Downside Risk Measures
• Estimating VaR
## Fundamentals of Returns
To start, we'll first define and characterize investment risk and returns.
The return of an asset is defined as follows:
$$R_{t, t+1} = \frac{P_{t+1} - P_t}{P_t}$$
If there were dividends paid during this period, they should be also added to $P_{t+1}$ in order to get the total return, otherwise it is known as the price return.
### Multiperiod Returns
If we have multiple consecutive time periods, say $t$ to $t+1$ and $t+1$ to $t+2$, the return over the combined time frame can be calculated as follows:
$$R_{t, t+2} = (1 + R_{t, t+1})(1+R_{t+1, t+2}) - 1$$
This formula provides the compounded return over the two time periods.
### Annualizing Returns
In order to compare monthly returns with quarterly or daily returns, we need to annualize the returns. For example, a monthly return can be annualized with following formula:
$$(1 + R)^{12} - 1$$
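The three formulas above can be collected into a small dependency-free sketch in Python (the helper names are mine, not from the course):

```python
def price_return(p_start, p_end):
    """One-period price return: (P_{t+1} - P_t) / P_t."""
    return (p_end - p_start) / p_start

def compound(returns):
    """Compound a sequence of per-period returns into a total return."""
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total - 1.0

def annualize_return(monthly_r):
    """Annualize a monthly return: (1 + R)^12 - 1."""
    return (1.0 + monthly_r) ** 12 - 1.0
```

For example, a 10% gain followed by a 5% loss compounds to (1.10)(0.95) - 1 = 4.5%, not 5%.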
## Measuring Risk and Reward
The first thing to note about returns is that average returns are not a good way to analyze performance, as they fail to take volatility into account.
Volatility can be formally measured using the standard deviation and variance of returns.
In order to get the variance of returns, we compute the average of the square of the deviations from the mean:
$$\sigma^2_R = \frac{1}{N}\sum^N_{i=1}(R_i - \bar{R})^2$$
where:
• $\bar{R}$ is the arithmetic mean of the returns
Since the variance is expressed in squared units of returns, it is difficult to compare directly with the returns themselves.
To deal with this, we take the square root of the variance in order to get the standard deviation of returns:
$$\sigma_R = \sqrt{\frac{1}{N}\sum^N_{i=1}(R_i - \bar{R})^2}$$
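Translated directly into plain Python (again, helper names are illustrative), together with the square-root-of-time annualization rule discussed next:

```python
import math

def volatility(returns):
    """Population standard deviation of a list of returns."""
    mean = sum(returns) / len(returns)
    return math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))

def annualize_vol(vol, periods_per_year=252):
    """Scale per-period volatility to annual terms: sigma * sqrt(p)."""
    return vol * math.sqrt(periods_per_year)
```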
Now that we have a measure of volatility, we need to be able to annualize it since we can't compare volatility from daily data to monthly data, for example.
In order to annualize volatility for daily data, we multiply the daily volatility by the square root of the number of trading days per year, or 252:
$$\sigma_{ann} = \sigma_p\sqrt{p}$$

### Risk Adjusted Measures

Now that we have a basic measurement of risk, we can compare risk-adjusted returns.

One way to compare risk-adjusted returns is to look at the excess return over the risk-free rate, otherwise known as the Sharpe ratio:

$$Sharpe(P) = \frac{R_p - R_f}{\sigma_p}$$

### Measuring Max Drawdown

Another commonly used risk metric that is different from volatility is the maximum drawdown. Max drawdown is simply the maximum loss from the previous high to the subsequent low.

A common risk metric associated with max drawdown is the Calmar ratio, which is defined as the ratio of the annualized returns over the trailing 36 months to the max drawdown over those 36 months.

## Deviations from Normality

In this section, we'll look at deviations from normality, or how time series returns are typically not normally distributed. Instead, the normal distribution is a simplifying assumption.

When we assume a normal distribution of returns, there's a very small probability that returns will take on very large positive or negative values. In reality, however, we find that the normal distribution underestimates the magnitude of extreme returns.

In order to account for this, we should move beyond the mean and variance and look at higher order moments. In particular, we should look at skewness and kurtosis.

Skewness is a measure of the asymmetry of the distribution and can be calculated as follows:

$$S(R) = \frac{E[(R - E(R))^3]}{[Var(R)]^{3/2}}$$

Kurtosis is a measure of the thickness of the tails of the distribution. A Gaussian distribution has very thin tails that decrease sharply to zero, although as mentioned, real return distributions tend to have fatter tails than the normal distribution. Kurtosis can be calculated as follows:

$$K(R) = \frac{E[(R - E(R))^4]}{[Var(R)]^2}$$

Any return distribution that has a kurtosis higher than 3 is considered a fat-tailed distribution.

## Downside Risk Measures

In this section, we'll discuss several downside risk measurements.

In particular, we want to go beyond volatility, as it is a very symmetric measurement that tells us about the average risk. Instead, we want to understand measurements of the more extreme deviations around the mean.

### Semi-Deviation

The first risk metric is called semi-deviation or semi-volatility, which is the volatility of the sub-sample of below-average or below-zero returns.

Semi-deviation can be calculated as follows:

$$\sigma_{semi} = \sqrt{\frac{1}{N}\sum_{R_t\leq\bar{R}}(R_t - \bar{R})^2}$$

where:

• $N$ is the number of returns that fall below the mean

### Value at Risk (VaR)

Another useful risk metric is Value at Risk (VaR), which represents the maximum "expected" loss over a given time period.

To calculate VaR, we first define a specified confidence level, for example 99%, and the time period that we're looking at, let's say 1 month.

We'll discuss several methods for estimating VaR below.

### Conditional Value at Risk (CVaR)

A similar risk metric is called Conditional Value at Risk (CVaR), which is defined as the expected loss beyond VaR and is calculated as follows:

$$CVaR = -E(R|R \leq -VaR) = \frac{\int_{-\infty}^{-VaR} x \cdot f_R(x)dx}{F_R(-VaR)}$$

## Estimating VaR

In this section, we'll discuss different methodologies for estimating VaR.

There are at least four standard methods for calculating VaR:

• Historical (non-parametric)
• Variance-Covariance (parametric Gaussian)
• Parametric non-Gaussian
• Cornish-Fisher (semi-parametric)

Each method has its pros and cons, so in statistical analysis it is about choosing the most suitable method for that particular context.

### Historical VaR

Historical VaR is a calculation of VaR based on the distribution of historical changes in the value of the current portfolio under market prices over the specified historical observation window.

Historical VaR is calculated as follows:

$$VaR = v_m \frac{v_i}{v_{i-1}}$$
where:
• $v_i$ is the number of variables on day $i$
• $m$ is the number of days from which historical data is taken
The advantage of this method is that you're not making assumptions about asset return distributions.
The drawback of this method is that since we're not making any assumptions, we're relying solely on historical data. This means the estimate may be sensitive to the sample period.
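A minimal historical estimate of VaR, and of its companion CVaR, can be sketched in plain Python. The cut-off convention used here (round to the nearest sample index) is one of several; real implementations typically interpolate percentiles:

```python
def historical_var(returns, level=0.95):
    """Historical VaR: the loss at the (1 - level) quantile of the
    sample, reported as a positive number."""
    worst_first = sorted(returns)
    cutoff = round(len(returns) * (1.0 - level))
    return -worst_first[min(cutoff, len(returns) - 1)]

def historical_cvar(returns, level=0.95):
    """Historical CVaR: the average loss in the tail beyond VaR."""
    worst_first = sorted(returns)
    cutoff = max(1, round(len(returns) * (1.0 - level)))
    tail = worst_first[:cutoff]
    return -sum(tail) / len(tail)

sample = [-0.10, -0.08, -0.02, 0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06]
```

On this made-up sample at the 90% level, VaR is the second-worst return (8% loss) and CVaR averages the tail beyond it (10% loss), so CVaR is always at least as large as VaR.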
### Parametric Gaussian Methodology
The parametric gaussian methodology is a competing method that calculates VaR based on portfolio volatility, in other words on volatilities and correlations of components.
The benefit of assuming a Gaussian distribution with this method is that you only need to estimate the mean and volatility of the distribution.
The risk of assuming normality, however, is that it may underestimate risk.
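Under the Gaussian assumption the whole calculation collapses to the mean and volatility. A sketch using only the standard library (sign conventions vary; here VaR is reported as a positive loss):

```python
from statistics import NormalDist

def gaussian_var(mean, vol, level=0.95):
    """Parametric Gaussian VaR: with returns ~ N(mean, vol^2), the
    loss exceeded with probability (1 - level) is -(mean + z * vol),
    where z is the (1 - level) quantile of the standard normal."""
    z = NormalDist().inv_cdf(1.0 - level)
    return -(mean + z * vol)
```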
### Parametric Non-Gaussian VaR
The parametric non-Gaussian VaR attempts to solve this issue of underestimating risk.
Parametric non-Gaussian VaR can be useful as it mitigates the problem of estimation risk at the cost of model risk.
### Cornish-Fisher VaR
The Cornish-Fisher VaR is a semi-parametric alternative to the parametric models.
This approach does not force you to assume any return distribution, and thus it can be useful in a non-Gaussian setting.
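As an illustration of the idea, one common form of the expansion replaces the Gaussian quantile $z$ with a modified quantile that corrects for skewness $S$ and raw kurtosis $K$ (so $K = 3$ for a Gaussian):

$$z_{cf} = z + \frac{(z^2 - 1)S}{6} + \frac{(z^3 - 3z)(K - 3)}{24} - \frac{(2z^3 - 5z)S^2}{36}$$

A sketch using only the standard library:

```python
from statistics import NormalDist

def cornish_fisher_z(level, skew, kurtosis):
    """Cornish-Fisher modified quantile: the Gaussian z-score adjusted
    for skewness and kurtosis. With skew = 0 and kurtosis = 3 this
    reduces to the plain Gaussian quantile."""
    z = NormalDist().inv_cdf(level)
    return (z
            + (z ** 2 - 1) * skew / 6
            + (z ** 3 - 3 * z) * (kurtosis - 3) / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
```

Negative skewness pushes the left-tail quantile further out, so the resulting VaR is larger than the plain Gaussian estimate.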
## Summary: Measuring Risk in Portfolio Analysis
In summary, there are several ways to measure risk in portfolio analysis. The most common measurement is standard deviation, which assumes a normal distribution of returns.
In practice, however, this Gaussian distribution tends to underestimate the potential for extreme returns, either positive or negative. Going beyond standard deviation, several other risk measures include VaR and conditional VaR.
There are several methods for estimating VaR, including:
• Historical (non-parametric)
• Variance-Covariance (parametric gaussian)
• Parametric non-gaussian
• Cornish-Fisher (semi-parametric)
Each of these methods has its advantages and disadvantages, although at the end of the day there is a trade-off between sample risk and model risk.
public | 2021-08-03 21:05:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6183004975318909, "perplexity": 1288.3350495519132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154471.78/warc/CC-MAIN-20210803191307-20210803221307-00457.warc.gz"} |
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.753364 | Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.753364
Title: A phenomenological mathematical modelling framework for the degradation of bioresorbable composites
Author: Moreno-Gomez, Ismael
ISNI: 0000 0004 7426 456X
Awarding Body: University of Cambridge
Current Institution: University of Cambridge
Date of Award: 2018
Availability of Full Text:
Access from EThOS: Full text unavailable from EThOS. Please try the link below. Access from Institution:
Abstract:
Understanding, and ultimately, predicting the degradation of bioresorbable composites made of biodegradable polyesters and calcium-based ceramics is paramount in order to fully unlock the potential of these materials, which are heavily used in orthopaedic applications and also being considered for stents. A modelling framework which characterises the degradation of bioresorbable composites was generated by generalising a computational model previously reported in literature. The framework uses mathematical expressions to represent the interwoven phenomena present during degradation. Three ceramic-specific models were then created by particularising the framework for three common calcium-based fillers, namely tricalcium phosphate (TCP), hydroxyapatite (HA) and calcium carbonate (CC). In these models, the degradation of a bioresorbable composite is described with four parameters: the non-catalytic and auto-catalytic polymer degradation rates, $k_1$ and $k_2'$ respectively and the ceramic dissolution rate and exponent, $A_\text{d}$ and $\theta$ respectively. A comprehensive data mining exercise was carried out by surveying the existing literature in order to obtain quantitative degradation data for bioresorbable composites containing TCP, HA and CC. This resulted in a database with a variety of case studies. Subsequently, each case study was analysed using the corresponding ceramic-specific model returning a set of values for the four degradation constants. Both cases with agreement and disagreement between model prediction and experimental data were studied. 76% of the 107 analysed case studies displayed the expected behaviour. In general terms, the analysis of the harvested data with the models showed that a wide range of degradation behaviours can be attained using different polymeric matrix - ceramic filler combinations. 
Furthermore, the existence of discrepancies in degradation behaviour between a priori similar bioresorbable composites became apparent, highlighting the high number of hidden factors affecting composite degradation such as polymer tacticity or ceramic impurities. The analysis of the case studies also highlighted that the ceramic dissolution rate needed to depict the portrayed degradation behaviours is significantly higher than that reported for ceramics alone in dissolution studies under physiological conditions, indicating that studies of the filler elements alone do not provide a complete picture. Lastly, the computational analysis provided insight into the complex influence of factors such as sample porosity and degradation protocol on the degradation behaviour. In addition to the computational analysis of literature data, an experimental degradation study was carried out with nanocomposites made of calcium carbonate and poly(D,L-lactide-co-glycolide). This study showed the existence of a clear buffering effect with the addition of the ceramic filler and confirmed the assumptions employed in the modelling framework in this particular bioresorbable composite. The detailed nature and modest size of these data enabled a more precise and thorough analysis using the CC composites degradation model. In summary, the modelling framework is able to capture the main degradation behaviour of bioresorbable composites and also point to factors responsible for dissimilar behaviours. The degradation maps generated with the values of $k_1$, $k_2'$, $A_\text{d}$ and $\theta$ output by the models appear to be a good tool to summarise, classify and facilitate the analysis and search of specific bioresorbable composites.
Supervisor: Best, Serena M. ; Cameron, Ruth E. Sponsor: Lucideon ; "La Caixa Europe Programme"
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.753364 DOI:
Keywords: modelling ; bioresorbable ; composite ; polymeric matrix ; ceramic filler ; tricalcium phosphate ; hydroxyapatite ; calcium carbonate ; polylactide ; polyglycolide ; degradation
Share: | 2022-08-08 03:35:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4863283038139343, "perplexity": 3527.0625675156143}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00044.warc.gz"} |
http://gradestack.com/CBSE-Class-10th-Course/Triangle/Summary/15044-3000-5312-study-wtw | # Summary
1. Two triangles are said to be similar if (a) their corresponding sides are proportional, and (b) their corresponding angles are equal.
2. In a triangle, if a line drawn parallel to one of the sides intersects the other two sides in distinct points, then the line divides the two sides in the same ratio.
(Thales’ Theorem).
3. If a line divides any two sides of a triangle in the same ratio, then the line will be parallel to the third side.
4. The bisector of an angle of a triangle divides the opposite side in the ratio of the sides containing the angle.
5. If two triangles are equiangular, then the triangles are similar.
6. If two angles of one triangle are respectively equal to two angles of another triangle, then the triangles are similar.
7. If the corresponding sides of two triangles are proportional, then they are similar.
8. If in two triangles, one pair of corresponding sides is proportional and the included angles are equal, then the two triangles are similar.
9. In a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the two remaining sides.
[Pythagoras’ Theorem]
10. If in a triangle the square of one side is equal to the sum of the squares of the other two sides, then it is a right-angled triangle. [Converse of Pythagoras’ Theorem]
11. CPCTE represents the condition ‘Corresponding parts of congruent triangles are equal’.
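A quick numeric illustration of points 9 and 10 (a small hypothetical Python helper, not part of the text):

```python
def is_right_triangle(a, b, c):
    """Converse of Pythagoras: a triangle is right-angled exactly when
    the square of its longest side equals the sum of the squares of
    the other two sides."""
    x, y, z = sorted((a, b, c))
    return x * x + y * y == z * z
```

The classic 3-4-5 triangle passes the check (9 + 16 = 25), while a 2-3-4 triangle does not (4 + 9 ≠ 16).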
 | 2017-03-30 01:16:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8448941707611084, "perplexity": 259.59321559143314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191444.45/warc/CC-MAIN-20170322212951-00033-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://nn.labml.ai/diffusion/stable_diffusion/index.html | # Stable Diffusion
This is based on the official stable diffusion repository CompVis/stable-diffusion. We have kept the model structure the same so that the open-sourced weights can be loaded directly. Our implementation does not contain training code.
### PromptArt
We have deployed a stable diffusion based image generation service at promptart.labml.ai
### Latent Diffusion Model
The core is the Latent Diffusion Model. It consists of:
We have also (optionally) integrated Flash Attention into our U-Net attention which lets you speed up the performance by close to 50% on an RTX A6000 GPU.
The diffusion is conditioned based on CLIP embeddings.
### Sampling Algorithms
We have implemented the following sampling algorithms:
### Example Scripts
Here are the image generation scripts:
#### Utilities
util.py defines the utility functions. | 2023-02-07 18:53:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2375074028968811, "perplexity": 8942.368897079403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00500.warc.gz"} |
http://math.stackexchange.com/questions/82189/affine-independence-and-linear-independence | # Affine Independence and Linear Independence
Definition: Let $v_0, v_1, \dots, v_k$ be points in $\mathbb{R}^d$. These points are called affinely independent if there do not exist real numbers $\alpha_0, \alpha_1, \dots, \alpha_k$, not all zero, such that $\sum_{i=0}^k \alpha_i v_i = 0$ and $\sum_{i=0}^k \alpha_i = 0$.

We need to prove the following: The points $v_0, v_1, \dots, v_k$ are affinely independent if and only if the vectors $v_1 - v_0, v_2 - v_0, \dots, v_k - v_0$ are linearly independent.
Thank you so much.
You can try expressing one of the coefficients, say the first one, with the remaining ones since $\sum_{i=1}^{k} \alpha_i = -\alpha_0$. Then plug this into the definition of affine dependence and you will get a pattern. – user13838 Nov 15 '11 at 0:52
Assume affine independence. Assume some linear combination of the $v_i-v_0$ is 0. Write that linear combination down (with unknown coefficients) and manipulate it to a form where you can use the assumption of affine independence.
Conversely, assume linear independence. Let some linear combination of the $v_i$ be zero. Do some manipulation to relate it to a linear combination of the $v_i-v_0$. Use the linear independence hypothesis to draw a conclusion about the coefficients.
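Spelling out both manipulations (this is the pattern the hints above point at):

```latex
% Linear independence of the differences => affine independence:
% suppose \sum_{i=0}^{k} \alpha_i v_i = 0 with \sum_{i=0}^{k} \alpha_i = 0.
% Then \alpha_0 = -\sum_{i=1}^{k} \alpha_i, and substituting,
\sum_{i=0}^{k} \alpha_i v_i
  = \sum_{i=1}^{k} \alpha_i v_i - \Big(\sum_{i=1}^{k} \alpha_i\Big) v_0
  = \sum_{i=1}^{k} \alpha_i (v_i - v_0) = 0 ,
% so linear independence forces \alpha_1 = \dots = \alpha_k = 0,
% and then \alpha_0 = 0 as well.
%
% Conversely, if \sum_{i=1}^{k} \beta_i (v_i - v_0) = 0, set
% \alpha_i = \beta_i for i \geq 1 and \alpha_0 = -\sum_{i=1}^{k} \beta_i :
\sum_{i=0}^{k} \alpha_i v_i = 0 , \qquad \sum_{i=0}^{k} \alpha_i = 0 ,
% so affine independence forces every \alpha_i = 0, hence every \beta_i = 0.
```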
If all that is too abstract for you, try to do the case $k=1$ or $k=2$ where you can more easily see what's happening. | 2013-12-13 07:47:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326456189155579, "perplexity": 113.97584483164346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164919525/warc/CC-MAIN-20131204134839-00088-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://quant.stackexchange.com/questions/4160/discrete-returns-versus-log-returns-of-assets?answertab=oldest | # Discrete returns versus log returns of assets
There have been similar posts here already but nevertheless I find the question worth posting: why do some people claim that log returns of assets are more suitable for statistics than discrete returns?
E.g., in the ESMA CESR guidelines about the SRRI, log returns are used. I personally think that discrete returns are as good for purposes of risk management as continuous returns. Furthermore, in a portfolio context I can calculate the portfolio return by weighting the discrete returns of the assets, which does not work with log returns. The time-aggregation of log returns is easier, that's true. But people rather think in discrete returns. If my NAV drops from $100$ to $92$ then I have lost $8\%$ and that's it.
Is there any study on this - any good reference? Anything that I can tell my regulator why I stick to discrete returns.
I'm not sure there needs to be a "study." You seem well aware of the reasoning. Arithmetic returns allow for easier cross-sectional aggregation and log returns allow for easier time-aggregation. The reason people use log returns is that (for equities) they are approximately invariant and are easier to work with in estimating distributions. However, proper procedure is to convert the log returns to arithmetic returns for the purposes of portfolio optimization and risk management. – John Sep 20 '12 at 16:17
@ John What do you mean by 'approximately invariant'? And how/why can you estimate distributions more easily? Can't we fit distributions for both kinds of returns? – Richard Sep 20 '12 at 19:00
If you take normally distributed log returns and convert them to arithmetic, then they will become log normal. That's what I mean by estimating distributions easier. Also, it is easier to project log returns to the appropriate horizon due to time aggregation. As for invariance, see: symmys.com/node/85 – John Sep 20 '12 at 20:23
Agree with John here, an almost exact identical post as yours was answered by me in the same fashion : quant.stackexchange.com/questions/3979/… – Matt Wolf Sep 21 '12 at 6:00
@John thank you for the comment. I have not realized these issues although dealing with this for years now. If you make it an answer then I will accept it. Thanks again. – Richard Sep 23 '12 at 19:12
Arithmetic returns allow for easier cross-sectional aggregation and log returns allow for easier time-aggregation.
The reason people use log returns (for equities) is that they are approximately invariant and hence easier to work with in estimating distributions. Meucci does better justice in describing invariance here. The basic idea (again, for equities) is that the distribution of security prices is log-normal, so the arithmetic returns will also be. However, making a log transformation results in approximately normal returns, which are easier to work with. Also, if you do assume them to be normally distributed, then there are convenient results for the convolution of multivariate normal series. This is what allows for easier time-aggregation.
However, you shouldn't take log returns and use them to obtain the arithmetic portfolio return. This is because while you can link them through time, the math doesn't work out cross-sectionally, particularly at long horizons. Hence, after estimating the distribution of the log returns, proper procedure is to convert them to arithmetic returns for the purposes of portfolio optimization and risk management.
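The time-aggregation point is easy to verify directly: log returns add across time to the log of the total gross return, while arithmetic returns must be compounded, and the two are linked by $r_{arith} = e^{r_{log}} - 1$. A small self-contained check, using an arbitrary price path that ends with the 8% loss mentioned in the question:

```python
import math

prices = [100.0, 104.0, 99.0, 92.0]   # arbitrary path from 100 down to 92

log_rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
arith_rets = [b / a - 1 for a, b in zip(prices, prices[1:])]

# Log returns aggregate through time by simple addition...
total_log = sum(log_rets)
# ...while arithmetic returns must be compounded.
total_arith = math.prod(1 + r for r in arith_rets) - 1

# Both describe the same move from 100 to 92 (an 8% loss):
assert abs(total_log - math.log(92 / 100)) < 1e-12
assert abs(total_arith - (-0.08)) < 1e-12
# Converting the aggregated log return back to arithmetic recovers it:
assert abs(math.exp(total_log) - 1 - total_arith) < 1e-12
```

The cross-sectional caveat is the mirror image: a portfolio's arithmetic return is the weighted sum of asset arithmetic returns, but no such identity holds for log returns.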
There's a further reference from Meucci specifically on this latter procedure: papers.ssrn.com/sol3/papers.cfm?abstract_id=1586656 – Quartz Feb 8 '13 at 11:39 | 2015-03-31 00:25:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7043477892875671, "perplexity": 1160.6094173601778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300031.99/warc/CC-MAIN-20150323172140-00268-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://www.math-only-math.com/rational-numbers-between-two-unequal-rational-numbers.html | # Rational Numbers Between Two Unequal Rational Numbers
As we know, rational numbers are the numbers which can be represented in the form p/q where ‘p’ and ‘q’ are integers and ‘q’ is not equal to zero. So, we can call rational numbers fractions too. In this topic we will get to know how to find rational numbers between two unequal rational numbers.
Let us suppose ‘x’ and ‘y’ to be two unequal rational numbers. Now, if we are asked to find a rational number lying midway between ‘x’ and ‘y’, we can easily find that rational number by using the formula given below:
$$\frac{1}{2}$$(x + y), where ‘x’ and ‘y’ are the two unequal rational numbers between which we need to find the rational number.
Rational numbers are ordered, i.e., given two rational numbers x, y either x > y, x < y or x = y.
Also, between two rational numbers there are infinite number of rational numbers.
Let x, y (x < y) be two rational numbers. Then
$$\frac{x + y}{2}$$ - x = $$\frac{y - x}{2}$$ > 0; Therefore, x < $$\frac{x + y}{2}$$
y - $$\frac{x + y}{2}$$ = $$\frac{y - x}{2}$$ > 0; Therefore, $$\frac{x + y}{2}$$ < y.
Therefore, x < $$\frac{x + y}{2}$$ < y.
Thus, $$\frac{x + y}{2}$$ is a rational number between the rational numbers x and y.
To understand this better, let us have a look at the examples below:
1. Find a rational number lying midway between $$\frac{-4}{3}$$ and $$\frac{-10}{3}$$.
Solution:
Let us assume x = $$\frac{-4}{3}$$
y = $$\frac{-10}{3}$$
If we try to solve the problem using formula mentioned above in the text, then it can be solved as:
$$\frac{1}{2}$$ {($$\frac{-4}{3}$$) + ($$\frac{-10}{3}$$)}

⟹ $$\frac{1}{2}$$ ($$\frac{-14}{3}$$)

⟹ $$\frac{-14}{6}$$

⟹ $$\frac{-7}{3}$$

Hence, ($$\frac{-7}{3}$$) or ($$\frac{-14}{6}$$) is the rational number lying midway between $$\frac{-4}{3}$$ and $$\frac{-10}{3}$$.
2. Find a rational number midway between $$\frac{7}{8}$$ and $$\frac{-13}{8}$$
Solution:
Let us assume the given rational fractions as:
x = $$\frac{7}{8}$$,
y = $$\frac{-13}{8}$$
Now we see that the two given rational fractions are unequal and we have to find a rational number midway between them. So, by using the above-mentioned formula we can find the required number. Hence,
From the given formula:
$$\frac{1}{2}$$(x + y) is the required midway number.
So, $$\frac{1}{2}$${ $$\frac{7}{8}$$+ ($$\frac{-13}{8}$$)}
⟹ $$\frac{1}{2}$$( $$\frac{-6}{8}$$)
⟹ $$\frac{-6}{16}$$
⟹ ($$\frac{-3}{8}$$)
Hence, ($$\frac{-3}{8}$$) or ($$\frac{-6}{16}$$) is the required number between the given unequal rational numbers.
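The arithmetic in these examples can be double-checked with exact rational arithmetic; here is a short check of this example using Python's `fractions` module:

```python
from fractions import Fraction

def midpoint(x, y):
    """Rational number lying midway between x and y: (x + y) / 2."""
    return (x + y) / 2

m = midpoint(Fraction(7, 8), Fraction(-13, 8))
assert m == Fraction(-3, 8)                    # matches the worked answer
assert Fraction(-13, 8) < m < Fraction(7, 8)   # and it lies strictly between
```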
In the above examples, we saw how to find the rational number lying midway between two unequal rational numbers. Now we will see how to find a given number of rational numbers between two unequal rational numbers.
The process can be better understood by having a look at following example:
1. Find 20 rational numbers in between ($$\frac{-2}{5}$$) and $$\frac{4}{5}$$.
Solution:
To find 20 rational numbers in between ($$\frac{-2}{5}$$) and $$\frac{4}{5}$$, following steps must be followed:
Step I: ($$\frac{-2}{5}$$) = $$\frac{(-2) × 5}{5 × 5}$$ = $$\frac{-10}{25}$$
Step II: $$\frac{4 × 5}{5 × 5}$$ = $$\frac{20}{25}$$
Step III: Since, -10 < -9 < -8 < -7 < -6 < -5 < -4 ...… < 16 < 17 < 18 < 19 < 20
Step IV: So, $$\frac{-10}{25}$$ < $$\frac{-9}{25}$$ < $$\frac{-8}{25}$$ < …… < $$\frac{18}{25}$$ < $$\frac{19}{25}$$ < $$\frac{20}{25}$$.
Step V: Hence, 20 rational numbers between $$\frac{-2}{5}$$ and $$\frac{4}{5}$$ are:
$$\frac{-9}{25}$$, $$\frac{-8}{25}$$, $$\frac{-7}{25}$$, $$\frac{-6}{25}$$, $$\frac{-5}{25}$$, $$\frac{-4}{25}$$, ……, $$\frac{2}{25}$$, $$\frac{3}{25}$$, $$\frac{4}{25}$$, $$\frac{5}{25}$$, $$\frac{6}{25}$$, $$\frac{7}{25}$$, $$\frac{8}{25}$$, $$\frac{9}{25}$$, $$\frac{10}{25}$$.
All the questions of this type can be solved using above steps.
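The five steps above translate directly into a short routine: scale both fractions to a common denominator large enough to fit the requested count, then read off consecutive integers strictly between the scaled numerators. (The routine below is an illustrative sketch of those steps.)

```python
from fractions import Fraction

def rationals_between(x, y, n):
    """Return n rational numbers strictly between x and y (x < y),
    by scaling both to a common denominator until at least n integers
    fit between the scaled numerators (Steps I-IV above)."""
    assert x < y
    scale = 1
    while True:
        d = x.denominator * y.denominator * scale
        lo = x.numerator * (d // x.denominator)   # scaled numerator of x
        hi = y.numerator * (d // y.denominator)   # scaled numerator of y
        if hi - lo - 1 >= n:
            return [Fraction(m, d) for m in range(lo + 1, lo + 1 + n)]
        scale *= 2

# Example from the text: 20 rationals between -2/5 and 4/5.
between = rationals_between(Fraction(-2, 5), Fraction(4, 5), 20)
```

For this input the common denominator 25 already suffices, and the routine returns $$\frac{-9}{25}$$ through $$\frac{10}{25}$$, matching Step V.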
Worksheet on Representation of Rational Numbers on the Number Line | 2018-10-21 11:56:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7927127480506897, "perplexity": 713.240947023014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514005.65/warc/CC-MAIN-20181021115035-20181021140535-00469.warc.gz"} |
https://www.encyclopediaofmath.org/index.php?title=Generating_function&oldid=36076 | # Generating function
2010 Mathematics Subject Classification: Primary: 05A15 [MSN][ZBL]
generatrix, of a sequence of numbers or functions
The sum of the power series

$$F(z) = \sum_{k=0}^{\infty} a_k z^k$$

with positive radius of convergence. If the generating function is known, then properties of the Taylor coefficients of analytic functions are used in the study of the sequence $\{a_k\}$. The generating function
$$F(x, w) = \sum_{n=0}^{\infty} c_n P_n(x) w^n$$

exists, under certain conditions, for polynomials $\{P_n(x)\}$ that are orthogonal over some interval $(a, b)$ with respect to a weight $h(x)$. For classical orthogonal polynomials the generating function can be explicitly represented in terms of the weight $h(x)$, and it is used in calculating values of these polynomials at individual points, as well as in deriving identity relations between these polynomials and their derivatives.
In probability theory, the generating function of a random variable $X$ taking non-negative integer values $n = 0, 1, \ldots$ with probabilities $p_n = \mathsf{P}\{X = n\}$ is defined by

$$F(z) = \sum_{n=0}^{\infty} p_n z^n, \qquad |z| \leq 1.$$
Using the generating function one can compute the probability distribution of $X$, its mathematical expectation and its variance:

$$\mathsf{E}X = F'(1), \qquad \mathsf{D}X = F''(1) + F'(1) - \left(F'(1)\right)^2.$$
The generating function of a random variable $X$ can also be defined as the mathematical expectation of the random variable $z^X$, i.e. $F(z) = \mathsf{E}z^X$.
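As a concrete instance, take X to be a fair six-sided die, so that $p_k = 1/6$ for $k = 1, \dots, 6$; the mean $F'(1) = 7/2$ and variance $F''(1) + F'(1) - (F'(1))^2 = 35/12$ can be checked exactly:

```python
from fractions import Fraction

# P(X = k) = 1/6 for k = 1..6 (a fair die); F(z) = sum_k p_k z^k.
p = {k: Fraction(1, 6) for k in range(1, 7)}

F1 = sum(k * pk for k, pk in p.items())            # F'(1)  = E[X]
F2 = sum(k * (k - 1) * pk for k, pk in p.items())  # F''(1) = E[X(X-1)]

mean = F1
var = F2 + F1 - F1 ** 2

assert mean == Fraction(7, 2)
assert var == Fraction(35, 12)
```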
#### References
[1] G. Szegö, "Orthogonal polynomials", Amer. Math. Soc. (1975) [2] P.K. Suetin, "Classical orthogonal polynomials", Moscow (1979) (In Russian) [3] W. Feller, "An introduction to probability theory and its applications", 1–2, Wiley (1957–1971) | 2020-02-21 00:52:02 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458229541778564, "perplexity": 466.1353600560224}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145316.8/warc/CC-MAIN-20200220224059-20200221014059-00069.warc.gz"} |
https://www.physicsforums.com/threads/circular-motion-with-respect-to-south-pole.344255/ | # Circular motion with respect to south pole
1. Oct 9, 2009
### duxy
1. The problem statement, all variables and given/known data
The south pole of the circular path is labeled "O" and it is the center of the xOy axes. The point leaves "O" to the right, increasing the magnitude of its position vector $\vec{r}$ and the angle $\alpha$ (the angle between the position vector and the Ox axis). The force that keeps the material point on the circular path is always directed towards "O". The north pole is labeled "A" and at that specific point the velocity is $v_0$. The radius of the circle is R. Find the force F and the period T.
2. Relevant equations
$r = r(\alpha)$

$F = ma$

$v_{\alpha} = r\,\dot{\alpha}$ (with $\dot{\alpha}$ the time derivative of the angle $\alpha$)

$r^{2}\,\dot{\alpha} = c$ (a constant)
3. The attempt at a solution
If the centripetal force applies, I think $F = -m\,\frac{c^{2}}{r^{4}}\,r(\alpha)$

If I know the force I can compute the period as $T = -\frac{2\pi m\,r}{F}$
Can someone please explain if this is right and if not i am open to suggestions :) Thanks
Sorry for not using the subscripts, but the preview looked kinda weird.
2. Oct 9, 2009
### tiny-tim
Welcome to PF!
Hi duxy! Welcome to PF!
(what is c? and where does your $r^4$ come from?)
Can I check that I understand the question correctly?
An object moves under no constraint (and no gravity), but subject to a force always directed towards O: find how the force depends on position P if the body moves in a circle through O.
Is that correct?
If so, then good ol' Newton's second law means that the component of acceleration perpendicular to OP must always be zero.
Hint: find the radial and tangential acceleration relative to the centre of the circle (because that's fairly easy! ), and from that find the components along and perpendicular to OP.
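As a numerical aside: the standard closed-form route for this classic setup is Binet's equation, $F = -mc^2u^2\left(u'' + u\right)$ with $u = 1/r$, and a circle through the force centre has the polar equation $r = 2R\sin\alpha$. A finite-difference check confirms that $(u'' + u)/u^3$ is the constant $8R^2$, i.e. the force scales as $1/r^5$. (This is a sanity check of that textbook route, not necessarily the intended solution path.)

```python
import math

R = 1.7                       # circle radius (arbitrary choice)
h = 1e-5                      # finite-difference step

def u(alpha):                 # u = 1/r with r = 2 R sin(alpha)
    return 1.0 / (2.0 * R * math.sin(alpha))

for alpha in (0.5, 1.0, 2.0):
    # central second difference approximates d^2 u / d alpha^2
    u2 = (u(alpha - h) - 2 * u(alpha) + u(alpha + h)) / h**2
    ratio = (u2 + u(alpha)) / u(alpha) ** 3
    # Binet: F = -m c^2 u^2 (u'' + u) = -8 m c^2 R^2 u^5  =>  F ~ 1/r^5
    assert abs(ratio - 8 * R * R) < 1e-3
```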
3. Oct 9, 2009
### duxy
Hey, thanks for the reply and the welcome. About that c it's supposed to be a constant i haven't really figured it out yet. About Newton's second law, the component of acceleration perpendicular to OP is always zero when O is the center and P is a point on the circle, in that case the radial acceleration is always perpendicular on the linear velocity, but if O is at the south point of the circle the tangent line that goes through P is not perpendicular to OP anymore(i think).
On the other hand i guess that when P is a 2R distance away from O the acceleration components have the same value and direction.
Are you implying to find the components along and perpendicular to OP geometrically or is there some kind of connection between them in this particular case?
Thank you
4. Oct 9, 2009
### tiny-tim
Sorry, I don't follow any of that.
First use geometry to find the acceleration.
Use the centre of the circle as I suggested. | 2018-03-21 07:12:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5497176647186279, "perplexity": 586.0527478001385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647584.56/warc/CC-MAIN-20180321063114-20180321083114-00535.warc.gz"} |
http://cogsci.stackexchange.com/questions/1363/is-an-autistic-persons-brain-different-from-a-non-autistic-one?answertab=oldest | # Is an autistic person's brain different from a non-autistic one?
Are there any differences between the autistic person's brain and the non-autistic one?
To be more specific, are there any differences in brain structure or brain activity, that can be used to distinguish between autistic and non-autistic person by analysing their brain structure and brain activity while performing various tasks?
If so, can functional imaging itself be used for diagnosing autism? Is there any research with that theme?
Can you clarify what you mean by tomography? Do you mean a regular CT scan (+/- contrast agents, which might give you somewhat of a functional approach), a PET scan (e.g., using radiolabeled Oxygen), or something else? Unfortunately, despite what 'House, MD' would lead us to believe, functional imaging is not quite there as a diagnostic tool for the brain as of yet, but still more in the research domain. – Chuck Sherrington Jul 3 '12 at 1:12
I mean, any form of scan, that could apply – FolksLord Jul 3 '12 at 5:15
– draks ... Jul 3 '12 at 9:03
Well, unfortunatelly only a small part of autistic people are savants. – FolksLord Jul 3 '12 at 16:30
While quantitative changes like the ones described in the EEG in the other answer are critical in distinguishing a person with autism from a non-autistic person, they are not really diagnostic tools yet.
You had mentioned the possibility of using a CT scan for a diagnosis. As far as I can tell, there is no way of doing this to date, but several groups are taking a look at some of the anatomical differences, which, I would suppose, could in the future lead to definitive imaging techniques that are diagnostic. Along with the EEG studies, there are currently functional MRI studies that attempt to quantify the differences between persons with autism and others, but these are likely not diagnostic either.
Oblak et al (2011) found that, despite evidence of "overgrowth" of neurons in areas such as the Prefrontal Cortex (see here for an recent study also drawing that conclusion), neurons in the posterior Cingulate Cortex were normal in number and density, but "irregularly distributed...poorly demarcated layers IV and V, and increased presence of white matter neurons." The diagrams below illustrate the position of the Cingulate (looking from the perspective of the inside of the hemisphere) and the normal arrangements of typical cells in cortical layers (which may have some minor variation depending on where in the cortex you are looking). To illustrate the nuances in the anatomy, it had been found in prior studies that even the anterior Cingulate, by and large, had the increases in density found elsewhere in the cortex.
What's interesting about the much lower cell density in the posterior Cingulate is that that structure is thought to be one of the most important regions of the limbic system for determining emotional salience of a situation, something that has been found to be different between persons with autism and those without autism. So now we know that this region, just like the anterior portion, has an irregular anatomy, just of a different sort.
The study also found that, by and large, the architecture of the neurons in the fusiform gyrus was undisturbed. Given the role of areas in the fusiform gyrus in facial recognition and social interaction, this was surprising, but there may be other clues in these areas as to difference in receptor density (particularly of GABA, though no difference was found in this particular study).
So, in terms of markers for future imaging studies, there are distinct differences that can be picked up in studies of brain tissue, but nothing definitive to be picked up at the scale of CT or MRI in an in vivo imaging study.
[Figure: medial view of the hemisphere showing the position of the Cingulate cortex — from Wikipedia]

## Normal cortical layers

[Figure: the normal laminar arrangement of cortical neurons]
Oblak, A.L., Rosene, D.L., Kemper, T.L., Bauman, M.L., Blatt, G.J. (2011). Altered posterior cingulate cortical cytoarchitecture, but normal density of neurons and interneurons in the posterior cingulate cortex and fusiform gyrus in autism. Autism Research, 4(3): 200–211.
Researchers at Boston Children's Hospital compared raw EEG data from 430 children with autism to data collected from 554 control subjects, all ages 2 to 12. They found that children with autism displayed consistent EEG patterns which indicate altered connectivity between brain regions. They found evidence of altered connectivity throughout the brains of children with autism. Most remarkably, they found a pronounced reduction in connectivity in regions of the left hemisphere that control language. Young children with autism showed increased connectivity between brain regions that were farther apart, suggesting they may have developed a compensation mechanism for other connectivity problems.
http://unpa.pro/Articles?mode=PostView&bmi=985722
### References
• Duffy, F.H. & Als, H. (2012). A stable pattern of EEG spectral coherence distinguishes children with autism from neuro-typical controls-a large case control study. BMC Medicine, 10, 64. PDF
- | 2014-08-01 03:50:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4497845470905304, "perplexity": 2821.625310897454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274289.5/warc/CC-MAIN-20140728011754-00007-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://hackage.haskell.org/package/haskell-gi-0.21.2/docs/Data-GI-GIR-XMLUtils.html | Data.GI.GIR.XMLUtils
Description
Some helpers for making traversals of GIR documents easier.
Synopsis
# Documentation
Turn a node into an element (if it is indeed an element node).
Find all children of the given element which are XML Elements themselves.
The local name of an element.
Lookup an attribute for an element (with no prefix).
Constructors: GLibGIRNS, CGIRNS

Instances: Show GIRXMLNamespace (methods include showList :: [GIRXMLNamespace] -> ShowS)
Lookup an attribute for an element, given the namespace where it lives.
Restrict to those with the given local name.
Restrict to those with given name.
Find the first child element with the given name.
Get the content of a given element, if it exists.
Construct a Name by only giving the local name.
Construct a Name specifying a namespace too. | 2019-09-22 09:07:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2236948311328888, "perplexity": 5553.591346554389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575402.81/warc/CC-MAIN-20190922073800-20190922095800-00336.warc.gz"} |
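The helpers above — local names, namespace-qualified attribute lookup, filtering child elements — have direct analogues in most XML libraries. As an illustration only (this is Python's standard `xml.etree.ElementTree`, not the haskell-gi API; the namespace URI is the one GIR files conventionally declare):

```python
import xml.etree.ElementTree as ET

GLIB_NS = "http://www.gtk.org/introspection/glib/1.0"

doc = ET.fromstring(
    '<repository xmlns:glib="%s">'
    '<class glib:type-name="GObject"><method name="ref"/></class>'
    '</repository>' % GLIB_NS
)

cls = doc.find("class")                         # first child with that name
local = cls.tag.rsplit("}", 1)[-1]              # local name, namespace stripped
type_name = cls.get("{%s}type-name" % GLIB_NS)  # namespace-qualified attribute
children = [c.tag for c in cls]                 # child elements

assert local == "class"
assert type_name == "GObject"
assert children == ["method"]
```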
https://weblog.fryand.com/2021/02/high-school-physics.html | ### High School Physics - Kinematics
Displacement $\vec{x}(t)$, Velocity $\vec{v}(t)$, and Acceleration $\vec{a}(t)$: $\vec{v} = \lim\limits_{\Delta t\to 0}\frac{\Delta\vec{x}}{\Delta t}, \qquad \Delta\vec{v} = \int_{t_i}^{t_f}\vec{a}\,dt$
Uniformly Accelerated Motion ($\vec{a}$ has constant magnitude and direction) $\boxed{5\text{-}3\text{-}2}$
$\displaystyle \vec{\Delta x} = \vec{v_i}\Delta t+\frac{1}{2}\vec{a}(\Delta t)^2 = \vec{v_f}\Delta t-\frac{1}{2}\vec{a}(\Delta t)^2$
$\displaystyle{\vec{v_f^2} = \vec{v_i^2}+2\vec{a}\vec{\Delta{x}}\Leftarrow \begin{cases}\vec{v}_f = \vec{v}_i+\vec{a}\Delta t\\ \vec{\Delta x} = \Big(\frac{\vec{v_i}+\vec{v_f}}{2}\Big) \Delta t\end{cases}}$ | 2022-09-29 23:54:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8506583571434021, "perplexity": 3533.6918313910032}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00799.warc.gz"} |
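The 5-3-2 identities are mutually consistent, which is easy to confirm numerically for any choice of $v_i$, $a$, and $\Delta t$ (one-dimensional values chosen arbitrarily here):

```python
v_i, a, dt = 3.0, -1.5, 2.0          # arbitrary 1-D values

v_f = v_i + a * dt
dx1 = v_i * dt + 0.5 * a * dt**2     # displacement from the initial velocity
dx2 = v_f * dt - 0.5 * a * dt**2     # displacement from the final velocity
dx3 = 0.5 * (v_i + v_f) * dt         # displacement from the average velocity

assert abs(dx1 - dx2) < 1e-12 and abs(dx1 - dx3) < 1e-12
# The time-free relation: v_f^2 = v_i^2 + 2 a (delta x)
assert abs(v_f**2 - (v_i**2 + 2 * a * dx1)) < 1e-12
```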
https://kar.kent.ac.uk/40499/ | Aims. A census of molecular hydrogen flows across the entire Orion A giant molecular cloud is sought. With this paper we aim to associate each flow with its progenitor and associated molecular core, so that the characteristics of the outflows and outflow sources can be established. Methods. We present wide-field near-infrared images of Orion A, obtained with the Wide Field Camera, WFCAM, on the United Kingdom Infrared Telescope. Broad-band K and narrow-band H2 1-0S(1) images of a contiguous ~8 square degree region are compared to mid-IR photometry from the Spitzer Space Telescope and (sub)millimetre dust-continuum maps obtained with the MAMBO and SCUBA bolometer arrays. Using previously-published H2 images, we also measured proper motions for H2 features in 33 outflows, and use these data to help associate flows with existing sources and/or dust cores. Results. Together these data give a detailed picture of dynamical star formation across this extensive region. We increase the number of known H2 outflows to 116. A total of 111 H2 flows were observed with Spitzer; outflow sources are identified for 72 of them (12 more H2 flows have tentative progenitors). The MAMBO 1200 $\mu$m maps cover 97 H2 flows; 57 of them (59%) are associated with Spitzer sources and either dust cores or extended 1200 $\mu$m emission. The H2 jets are widely distributed and randomly orientated. The jets do not appear to be orthogonal to large-scale filaments or even to the small-scale cores associated with the outflow sources (at least when traced with the 11″ resolution of the 1200 $\mu$m MAMBO observations). Moreover, H2 jet lengths (L) and opening angles ($\theta$) are not obviously correlated with indicators of outflow source age – source spectral index, $\alpha$ (measured from mid-IR photometry), or (sub)millimetre core flux.
It seems clear that excitation requirements limit the usefulness of H2 as a tracer of L and $\theta$ (though jet position angles are well defined). Conclusions. We demonstrate that H2 jet sources are predominantly protostellar sources with flat or positive mid-IR spectral indices, rather than disc-excess (or T Tauri) stars. Most protostars associated with molecular cores drive H2 outflows; however, not all molecular cores are associated with protostars or H2 jets. On statistical grounds, the H2 jet phase may be marginally shorter than the protostellar phase, though it must be considerably (by an order of magnitude) shorter than the prestellar phase. In terms of range and mean value of $\alpha$, H2 jet sources are indistinguishable from protostars. The spread in $\alpha$ observed for both protostars and H2 outflow sources is probably a function of inclination angle as much as source age. The few true protostars without H2 jets are almost certainly more evolved than their H2-jet-driving counterparts, although these later stages of protostellar evolution (as the source transitions to being a “disc-excess” source) must be very brief, since a large fraction of protostars do drive H2 flows. We also find that the protostars that power molecular outflows are no more (nor no less) clustered than protostars that do not. This suggests that the H2 emission regions in jets and outflows from young stars weaken and fade very quickly, before the source evolves from protostar to pre-main-sequence star, and on time-scales much shorter than those associated with the T Tauri phase, the Herbig-Haro jet phase, and the dispersal of young stellar objects.
| 2019-08-20 19:05:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5005170702934265, "perplexity": 3929.2191058414005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315558.25/warc/CC-MAIN-20190820180442-20190820202442-00112.warc.gz"} |
https://techwhiff.com/learn/show-graphically-and-explain-why-a-government-is/275146 | # Show graphically and explain why a government is more likely to place an excise (per unit)...
###### Question:
Show graphically and explain why a government is more likely to place an excise (per unit) tax on a good with inelastic demand than it is to put an excise tax on a good with elastic demand
#### Similar Solved Questions
##### Let A = {a^n b^n c^n | n >= 0}. Answer each of the following questions: 1. 2. 3. 4....
Let A = {a^n b^n c^n | n >= 0}. Answer each of the following questions: 1. Is A a regular language? 2. Is A a context-free language? 3. Is A Turing recognizable? 4. Is A Turing decidable?...
##### Golf Ball Impulse: A golf ball (mass 0.045 kg) is hit with...
Golf Ball Impulse: A golf ball (mass 0.045 kg) is hit with a club from a tee. The plot shows the force on the ball as a function of time in milliseconds. One millisecond is 10^-3 seconds. What is its speed right after the hit? [force-vs-time plot omitted]...
##### How do you simplify (7 square roots of 5) + (the square root of 50)?
How do you simplify (7 square roots of 5) + (the square root of 50)?...
##### Show the reaction mechanism with arrows for the reaction of HBr with H_2O. Circle the electrophile.
Show the reaction mechanism with arrows for the reaction of HBr with H_2O. Circle the electrophile....
##### For young people, a jobless summer In July 2009, the youth unemployment rate hit 18.5 percent—the...
For young people, a jobless summer In July 2009, the youth unemployment rate hit 18.5 percent—the highest level since the BLS started recording youth labor statistics. The proportion of young people working was 51.4 percent, another historic low for the month of July. Source: The Wall Street J...
##### Reply me as soon as possible.its vey important plzzzzzzzzzzzzzzz
Aftab Company Limited realized itself as a socially responsible company and decided to construct an employees' housing society for its employees working in the company, which was destroyed by an earthquake few years ago in the region. It was estimated by construction experts that this project would take three y...
##### Enthalpy change and states of matter problem
Q30 - What will be the final temperature of the water in an insulated container as the result of passing 5.00 g of steam H2O(g) at 100 °C into 100 g of water at 25 °C? (ΔH°vap = 40.6 kJ/mol H2O) My question is: why is the energy received by the water 11.2 + ΔH?...
##### Develop exercise programs that would be suitable for the following elder populations: the chronically ill postcoronary...
Develop exercise programs that would be suitable for the following elder populations: the chronically ill postcoronary patients bedridden or sedentary elders... | 2023-03-20 15:15:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39194682240486145, "perplexity": 2642.511688226263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00369.warc.gz"} |
https://mathematica.stackexchange.com/questions/76018/something-interesting-about-pi-but-something-seems-amiss | # Something interesting about Pi, but something seems amiss
I saw something interesting about text-analysis - bigrams - in Basic Text Analysis in Mathematica
I wondered how the bigrams of Pi's digits might appear. As expected, take enough terms and every integer in the range 0-9 has been followed by every other number (including itself).
Here's my code -- I'm sure it can be improved, but that's my second question:
t = Table[
edgeList = Map[#[[1]] -> #[[2]] &,
Partition[First[RealDigits[N[π, J]]], 2, 1]];
g = Graph[Union[edgeList], VertexLabels -> "Name"];
am = AdjacencyMatrix[g];
Total[am, 2], {J, 2, 600}
];
ListLinePlot[t]
t
The idea here is that the total of the adjacency matrix of the bigram graph tells you when you've looked at enough of Pi's digits for every single-digit integer to have been followed by every single digit integer.
The problem I have found in the result (looking at ListLinePlot[t], and then inspecting the sequence of Total[AdjMat, 2]) is that the plot isn't monotonic. I have to think that there's something I've done/omitted in the code, but no matter how hard I look I can't see it.
Can anyone see the problem that causes these incorrect results?
Change one line of your code :
Partition[IntegerDigits@IntegerPart@(Pi*10^(J - 1)), 2, 1]
You could also use:
First[RealDigits[N[Pi, J+1], 10, J]]
As an aside, here's how I'd do it...
trend = With[{pidi = Partition[First@RealDigits[N[Pi, 1000]], 2, 1],
dummy = Transpose@{Range[0, 9]}},
Table[Length /@ DeleteDuplicates /@ GatherBy[Join[dummy, pidi[[;; j - 1]]], First],
{j, 2, 600}] - 1];
Column[{ListLinePlot[Total /@ trend, ImageSize -> 400],
ListLinePlot[Count[#, 10] & /@ trend, ImageSize -> 400],
ListLinePlot[Transpose@trend, PlotLegends -> Range[0, 9],
PlotStyle -> "Rainbow", ImageSize -> 400]}]
Giving your results, the count of how many digits have reached their fill, and the trends for the individual digits... n.b.: The plots are from 10-650, I later changed the code to match your index start/end.
If you're looking to extend this to bigger searches, you'll probably want to use something more efficient (the above is more efficient than yours, but for huge searches it's not optimal). Mathematica has very efficient string mechanisms, and often search problems of this sort can be done much more efficiently using them.
Here's an example, extending the search to trigrams. Completes in milliseconds on a ratty old netbook:
s = IntegerString[IntegerPart[Pi*10^10000]];
sp = Sort /@
Partition[StringPosition[s, StringTake["00" <> ToString@#, -3], 1] & /@
Range[0, 999], 100][[All, All, 1, -1]];
ListPlot[Table[Tr[UnitStep[z - Flatten@sp]], {z, 3, 10000}], ImageSize -> 400]
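The pitfall was inexact digits. As a cross-check in another language, exact integer arithmetic makes the bigram count monotone by construction. This pure-Python sketch (a Machin-formula digit generator — my own illustration, not part of the original answer) reproduces the experiment:

```python
def arctan_recip(x, one):
    """Gregory series for arctan(1/x), scaled by the integer `one`."""
    total, power, n, sign = 0, one // x, 0, 1
    while power:
        total += sign * (power // (2 * n + 1))
        power //= x * x
        sign, n = -sign, n + 1
    return total

def pi_digits(n):
    """First n decimal digits of pi, via Machin's formula with guard digits."""
    one = 10 ** (n + 10)
    pi_scaled = 4 * (4 * arctan_recip(5, one) - arctan_recip(239, one))
    return str(pi_scaled)[:n]

digits = pi_digits(600)
assert digits.startswith("3141592653")

# Distinct bigrams among the first j digits; with exact digits this is monotone.
counts = [len({digits[i:i + 2] for i in range(j - 1)}) for j in range(2, 601)]
assert all(a <= b for a, b in zip(counts, counts[1:]))
print(counts[-1], "distinct bigrams seen in the first 600 digits")
```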
• thanks - I see your code is more elegant than mine though I can't see why one would Join static 'dummy' to digits of Pi. I also cannot understand the error in my original code (I see you advised "replace the first line", but that doesn't explain what my error is. – Paul_A Feb 28 '15 at 4:31
• @Paul_A: Sorry, figured it would be evident from the change. Do RealDigits[N[Pi, 5]], then RealDigits[N[Pi, 6]]. Look at the fifth digit in both. Rounding was your nemesis. As far as your code and "elegant", I disagree with your comment: I find your approach quite elegant, the use of Adj. Mat. sums clever. It's just not particularly efficient or quite as extension-friendly. As for the dummy join: gather/tally/et. al "index" in the order of "new" items seen. So this allows me to force 0,1,2... order. Otherwise, you'd have to figure out data order post hoc. Cleaner, IMO. – ciao Feb 28 '15 at 5:33
• Dang! I should have seen that - something was nagging at me and at your mention of 'rounding' it hit home. Thanks also for the explanation of your rationale. – Paul_A Feb 28 '15 at 5:39 | 2020-01-27 09:26:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28774598240852356, "perplexity": 2716.231779271852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00242.warc.gz"} |
https://www.fuzzingbook.org/beta/html/APIFuzzer.html | # Fuzzing APIs¶
So far, we have always generated system input, i.e. data that the program as a whole obtains via its input channels. However, we can also generate inputs that go directly into individual functions, gaining flexibility and speed in the process. In this chapter, we explore the use of grammars to synthesize code for function calls, which allows you to generate program code that very efficiently invokes functions directly.
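Before diving into the fuzzingbook machinery, note that the core idea — derive a call string from a grammar, then invoke it directly — fits in a few lines of plain Python. The toy expander below is an illustration of the concept, not the fuzzingbook API; it synthesizes and executes calls to the built-in `int()`:

```python
import random
import re

# Toy grammar for calls to int(); digits limited to 0-7 so every base accepts them
CALL_GRAMMAR = {
    "<start>": ["int('<digits>', <base>)"],
    "<digits>": ["<digit>", "<digit><digits>"],
    "<digit>": [str(d) for d in range(8)],
    "<base>": ["8", "10", "16"],
}

def fuzz(grammar, symbol="<start>", depth=0):
    """Expand a nonterminal: random alternative, shortest one once deep."""
    alts = grammar[symbol]
    expansion = random.choice(alts) if depth < 20 else alts[0]
    return "".join(
        fuzz(grammar, piece, depth + 1) if piece in grammar else piece
        for piece in re.split(r"(<[^<> ]+>)", expansion)
    )

for _ in range(5):
    call = fuzz(CALL_GRAMMAR)
    print(call, "=", eval(call))   # invoke the function directly
```

Each generated string is itself a valid function call, so the fuzzer exercises the target without going through any input channel.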
from bookutils import YouTubeVideo
YouTubeVideo('CC1VvOGkzm8')
Prerequisites
## Synopsis
>>> from fuzzingbook.APIFuzzer import <identifier>
and then make use of the following features.
This chapter provides grammar constructors that are useful for generating function calls.
The grammars are probabilistic and make use of generators, so use ProbabilisticGeneratorGrammarFuzzer as a producer.
>>> from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer
INT_GRAMMAR, FLOAT_GRAMMAR, ASCII_STRING_GRAMMAR produce integers, floats, and strings, respectively:
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(INT_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['-51', '9', '0', '0', '0', '0', '32', '0', '0', '0']
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(FLOAT_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['0e0',
'-9.43e34',
'-7.3282e0',
'-9.5e-9',
'0',
'-30.840386e-5',
'3',
'-4.1e0',
'-9.7',
'413']
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(ASCII_STRING_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['"#vYV*t@I%KNTT[q~}&-v+[zAzj[X-z|RzC$(g$Br]1tC\':5<F-"',
'""',
'"^S/"',
'"y)QDs_9"',
'")dY~?WYqMh,bwn3\\"A!02Pkgx"',
'"01n|(dd$-d.sx\\"83\\"h/]qx)d9LPNdrk$}$4t3zhC.%3VY@AZZ0wCs2 N"', '"D\\6\\xgw#TQ}$\'3"',
'"LaM{"',
## Synthesizing Composite Data
From basic data, as discussed above, we can also produce composite data in data structures such as sets or lists. We illustrate such generation on lists.
### Lists
LIST_EBNF_GRAMMAR: Grammar = {
"<start>": ["<list>"],
"<list>": [
("[]", opts(prob=0.05)),
"[<list-objects>]"
],
"<list-objects>": [
("<list-object>", opts(prob=0.2)),
"<list-object>, <list-objects>"
],
"<list-object>": ["0"],
}
assert is_valid_grammar(LIST_EBNF_GRAMMAR)
LIST_GRAMMAR = convert_ebnf_grammar(LIST_EBNF_GRAMMAR)
Our list generator takes a grammar that produces objects; it then instantiates a list grammar with the objects from these grammars.
def list_grammar(object_grammar, list_object_symbol=None):
obj_list_grammar = extend_grammar(LIST_GRAMMAR)
if list_object_symbol is None:
# Default: Use the first expansion of <start> as list symbol
list_object_symbol = object_grammar[START_SYMBOL][0]
obj_list_grammar.update(object_grammar)
obj_list_grammar[START_SYMBOL] = ["<list>"]
obj_list_grammar["<list-object>"] = [list_object_symbol]
assert is_valid_grammar(obj_list_grammar)
return obj_list_grammar
int_list_fuzzer = ProbabilisticGrammarFuzzer(list_grammar(INT_GRAMMAR))
[int_list_fuzzer.fuzz() for i in range(10)]
['[0, -4, 23, 0, 0, 9, 0, -6067681]',
'[-1, -1, 0, -7]',
'[-5, 0]',
'[1, 0, -628088, -6, -811, 0, 99, 0]',
'[-35, -10, 0, 67]',
'[-3, 0, -2, 0, 0]',
'[0, -267, -78, -733, 0, 0, 0, 0]',
'[0, -6, 71, -9]',
'[-72, 76, 0, 2]',
'[0, 9, 0, 0, -572, 29, 8, 8, 0]']
string_list_fuzzer = ProbabilisticGrammarFuzzer(
list_grammar(ASCII_STRING_GRAMMAR))
[string_list_fuzzer.fuzz() for i in range(10)]
['["gn-A$j>", "SPX;", "", "", ""]', '["_", "Qp"]', '["M", "5\\"X744", "b+5fyM!", "gR"]', '["^h", "8$u", "", "", ""]',
'["6X;", "", "T1wp%\'t"]',
'["-?Kk", "@B", "}", "", ""]',
'["FD<mqK", ")Y4NI3M.&@1/2.p", "]C#c1}z#+5{7ERA[|", "EOFM])BEMFcGM.~k&RMj*,:m8^!5*:vv%ci"]',
'["", "*B.pKI\\"L", "O)#<Y", "\\", "", "", ""]',
'["g"]',
'["", "\\JS;~t", "h)", "k", "", ""]']
float_list_fuzzer = ProbabilisticGeneratorGrammarFuzzer(list_grammar(
float_grammar_with_range(900.0, 900.9)))
[float_list_fuzzer.fuzz() for i in range(10)]
['[900.558064701869, 900.6079527708223, 900.1985188111297, 900.5159940886509, 900.1881413629061, 900.4074809145482, 900.8279453113845, 900.1531931708976, 900.2651056125504, inf, 900.828295978669]',
'[900.4956935906264, 900.8166792417645, 900.2044872129637]',
'[900.6177668624133, 900.793129850367, 900.5024769009476, 900.5874531663001, inf, 900.3476216137291, 900.5680329060473, 900.1524624203945, 900.1157565249836, 900.0943774301732, 900.1589468212459, 900.8563415304703, 900.2871041191156, 900.2469765832253, 900.408183791468]',
'[NaN, 900.1152482126347, 900.1139109179966, NaN, 900.0634308730662, 900.1918596242257]',
'[900.49418992478]',
'[900.6566851795975, NaN, 900.5585085641878, 900.8678799526169, 900.5580757140183]',
'[900.6265067760952]',
'[900.5271187218734, 900.3413004135587, 900.0362652510535, 900.2938223153569, 900.6584186055829, 900.5394909707123, 900.5119630230411, 900.2024669591465]',
'[900.5068304562362, 900.5173419618334, 900.5268996804168, 900.5247314889621, 900.1082421801126, 900.761200730868, 900.100950598924, 900.1424140649187, inf, inf, 900.4546924838603, 900.7025508468811, 900.5147250716594, 900.4943696257178, 900.814107878577, 900.3540228715348, 900.6165673939341, 900.121833279104, 900.8337503512706, 900.0607374037857, 900.2746253938637, 900.2491844866619, 900.7325728031923]',
'[900.6962790125643, 900.6055198052603, 900.0950691946015, 900.6283670716376, NaN, 900.112869956762]']
Generators for dictionaries, sets, etc. can be defined in a similar fashion. By plugging together grammar generators, we can produce data structures with arbitrary elements.
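For instance, a dictionary grammar follows the same pattern as the list grammar above. The sketch below uses a minimal standalone expander instead of `ProbabilisticGrammarFuzzer` (my simplification, so it runs without the fuzzingbook infrastructure); every expansion parses as a Python dict literal:

```python
import ast
import random
import re

# Every expansion of this grammar is a syntactically valid Python dict literal
DICT_GRAMMAR = {
    "<start>": ["{<pairs>}", "{}"],
    "<pairs>": ["<pair>", "<pair>, <pairs>"],
    "<pair>": ["'k<digit>': <int>"],
    "<int>": ["<digit>", "<nonzero><int-tail>"],      # no leading zeros
    "<int-tail>": ["<digit>", "<digit><int-tail>"],
    "<nonzero>": [str(d) for d in range(1, 10)],
    "<digit>": [str(d) for d in range(10)],
}

def fuzz(grammar, symbol="<start>", depth=0):
    """Tiny expander: random alternative, shortest one once deep."""
    alts = grammar[symbol]
    expansion = random.choice(alts) if depth < 15 else alts[0]
    return "".join(
        fuzz(grammar, piece, depth + 1) if piece in grammar else piece
        for piece in re.split(r"(<[^<> ]+>)", expansion)
    )

for _ in range(3):
    text = fuzz(DICT_GRAMMAR)
    value = ast.literal_eval(text)   # parses cleanly -> the grammar is sound
    print(text, "->", value)
```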
## Lessons Learned
• To fuzz individual functions, one can easily set up grammars that produce function calls.
• Fuzzing at the API level can be much faster than fuzzing at the system level, but brings the risk of false alarms by violating implicit preconditions.
## Next Steps
This chapter was all about manually writing tests and controlling which data gets generated. In the next chapter, we will introduce a much higher level of automation:
• Carving automatically records function calls and arguments from program executions.
• We can turn these into grammars, allowing us to test these functions with various combinations of recorded values.
With these techniques, we automatically obtain grammars that already invoke functions in application contexts, making our work of specifying them much easier.
## Background
The idea of using generator functions to generate input structures was first explored in QuickCheck [Claessen et al, 2000]. A very nice implementation for Python is the hypothesis package, which allows one to write and combine data structure generators for testing APIs.
## Exercises
The exercises for this chapter combine the above techniques with fuzzing techniques introduced earlier.
### Exercise 1: Deep Arguments
In the example generating oracles for urlparse(), important elements such as authority or port are not checked. Enrich `URLPARSE_ORACLE_GRAMMAR` with post-expansion functions that store the generated elements in a symbol table, such that they can be accessed when generating the assertions.
### Exercise 2: Covering Argument Combinations
In the chapter on configuration testing, we also discussed combinatorial testing – that is, systematic coverage of sets of configuration elements. Implement a scheme that, by changing the grammar, allows all pairs of argument values to be covered.
### Exercise 3: Mutating Arguments
To widen the range of arguments to be used during testing, apply the mutation schemes introduced in mutation fuzzing – for instance, flip individual bytes or delete characters from strings. Apply this either during grammar inference or as a separate step when invoking functions.
The content of this project is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The source code that is part of the content, as well as the source code used to format and display that content is licensed under the MIT License. Last change: 2022-05-17 18:02:38+02:00CiteImprint | 2022-07-01 02:17:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36703309416770935, "perplexity": 5686.7388039659845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103917192.48/warc/CC-MAIN-20220701004112-20220701034112-00300.warc.gz"} |
https://www.gradesaver.com/textbooks/math/trigonometry/trigonometry-7th-edition/chapter-1-section-1-4-introduction-to-identities-1-4-problem-set-page-39/3 | ## Trigonometry 7th Edition
Finding reciprocals- $\sin\theta$ = $\frac{y}{r}$ = $\frac{1}{r/y}$ = $\frac{1}{\csc\theta}$ $\cos\theta$ = $\frac{x}{r}$ = $\frac{1}{r/x}$ = $\frac{1}{\sec\theta}$ $\tan\theta$ = $\frac{y}{x}$ = $\frac{1}{x/y}$ = $\frac{1}{\cot\theta}$ | 2019-11-17 04:16:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4071817398071289, "perplexity": 78.1585716689392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668787.19/warc/CC-MAIN-20191117041351-20191117065351-00288.warc.gz"} |
https://www.physicsforums.com/members/aanandpatel.384227/recent-content | # Recent content by aanandpatel
1. ### Finding the basis for a vector space
Makes sense now - got it! Thanks a lot - help was much appreciated :)
2. ### Finding the basis for a vector space
Sorry, I had no idea how to typeset matrices on the forum. C = ##\begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}## A (my general matrix) = ##\begin{pmatrix} a & b \\ c & d \end{pmatrix}## CA=0, so when I multiplied, I got a+2c=0, b+2d=0, 3a+6c=0, 3b+6d=0. But two of those equations are the same and...
3. ### Finding the basis for a vector space
Homework Statement Find a basis for the following vector space: The set of 2x2 matrices A such that CA=0 where C is the matrix : 1 2 3 6 The Attempt at a Solution I multiplied C by a general 2x2...
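From a + 2c = 0 and b + 2d = 0 in the thread, the solution space is spanned by two matrices. A quick dependency-free numeric check (my illustration, not from the thread):

```python
def matmul2(X, Y):
    """2x2 matrix product, kept dependency-free."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

C = [[1, 2], [3, 6]]
# a + 2c = 0 and b + 2d = 0 give A = c*B1 + d*B2 with:
B1 = [[-2, 0], [1, 0]]
B2 = [[0, -2], [0, 1]]

for B in (B1, B2):
    assert matmul2(C, B) == [[0, 0], [0, 0]]

c, d = 3, -5   # an arbitrary combination stays in the solution space
A = [[c * B1[i][j] + d * B2[i][j] for j in range(2)] for i in range(2)]
assert matmul2(C, A) == [[0, 0], [0, 0]]
print("CA = 0 for both basis matrices and their combinations")
```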
4. ### Complex Solutions
Yes when I say x=0 it means that the 'real part' of the solution is 0
5. ### Complex Solutions
2y = 0 and x=0
6. ### Complex Solutions
Two complex numbers are only equal if their real parts are equal and their imaginary parts are equal so you may have to equate real and imaginary parts to find the values of x and y.
7. ### Implicit differentiation
Thanks guys - helped a lot! :)
8. ### Implicit differentiation
Homework Statement Find the coordinates of the stationary points on the curve: x^3 + (3x^2)(y) -2y^3=16 Homework Equations Stationary points occur when the first derivative of y with respect to x is equal to zero The Attempt at a Solution I implicitly differentiated the...
9. ### Relationship between roots and coefficients
a fourth degree polynomial has 4 roots therefore alpha and beta are double roots?
10. ### Relationship between roots and coefficients
The question says that the curves touch at the points A and B so I assumed they were tangential to each other at those points. Not sure how I would prove it otherwise seeing as I only have an x value for the points.
11. ### Relationship between roots and coefficients
Homework Statement Homework Equations Sum of roots taken one at a time is -b/a Sum of roots taken two at a time is c/a three at a time is -d/a four at a time is e/a The Attempt at a Solution I did part one by solving the two equations...
12. ### CERN team claims measurement of neutrino speed >c
Apologies for the mistake - was quoting Michio Kaku.
13. ### CERN team claims measurement of neutrino speed >c
In 1987, there was a supernova in the Large Magellanic Cloud. This is roughly 50,000 light years away from the Earth. Scientists detected light from the supernova and neutrinos from the supernova at the exact same time meaning they have the exact same velocity. This experiment used distances of...
14. ### What is a constant
It is a number that simply does not change and has a set, defined value. Pi, i (the square root of -1) and e (Euler's number) are all examples of constants.
15. ### Use De Moivre's Theorem to prove this:
Thanks a bunch - that helped a lot. Converted it into: [(secθ)(cosθ+isinθ)]^n + [(secθ)(cosθ-isinθ)]^n and it was easy from there. Cheers! | 2021-09-28 01:25:06 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8445316553115845, "perplexity": 770.9865716470056}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058589.72/warc/CC-MAIN-20210928002254-20210928032254-00417.warc.gz"} |
https://math.stackexchange.com/questions/1323867/divergence-absolute-or-conditional-convergence | # Divergence, absolute or conditional convergence
Consider $$\sum_{n=1}^\infty (3n + \cos(n))^{-n}$$ Is this series either divergent, absolutely convergent, or conditionally convergent?
I've attempted to just start with one type of divergence/convergence (or lack thereof), but didn't get far on this specific question and, generally, find these types of questions very time-consuming (is there a standard go-to method? I usually work on divergence or absolute convergence first)
Thanks for any help.
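For intuition, note that for $n\ge 2$ we have $3n+\cos n \ge 3n-1 \ge n$, so the terms are positive and dominated by $1/n^2$, giving absolute convergence by comparison. A quick numeric check of that bound (illustrative, not a proof):

```python
import math

def term(n):
    return (3 * n + math.cos(n)) ** (-n)

# For n >= 2: 3n + cos n >= 3n - 1 >= n, so term(n) <= n**(-n) <= 1/n**2
for n in range(2, 60):
    assert 0 < term(n) <= 1 / n**2

partials = [sum(term(n) for n in range(1, N + 1)) for N in (10, 20, 40)]
print(partials)   # the three values agree to at least 12 digits: rapid convergence
assert abs(partials[-1] - partials[0]) < 1e-12
```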
• For any $n\geq 2$, $$0\leq (3n+\cos n)^{-n}\leq (3n-1)^{-n} \leq \frac{1}{(3n-1)^2}\leq\frac{1}{n^2}.$$ – Jack D'Aurizio Jun 13 '15 at 15:10
• @JackD'Aurizio Just out of curiosity, why didn't you post this as an answer? – qmd Jun 13 '15 at 15:12
• @SuH: it feels like stealing from kids :D – Jack D'Aurizio Jun 13 '15 at 15:29
• @JackD'Aurizio You mean person ;) – qmd Jun 13 '15 at 15:30 | 2019-06-26 02:32:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4533681571483612, "perplexity": 1572.6823003886277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000044.37/warc/CC-MAIN-20190626013357-20190626035357-00255.warc.gz"} |
https://study.com/academy/answer/what-is-the-change-in-area-in-cm-2-of-a-60-0-cm-by-150-cm-automobile-windshield-when-the-temperature-changes-from-0-o-c-to-36-0-o-c-the-coefficient-of-linear-expansion-of-this-glass-is-9-0-tomes-10-6-oc.html | What is the change in area (in cm^2 ) of a 60.0 \ cm by 150 \ cm automobile windshield when the...
Question:
What is the change in area (in {eq}cm^2 {/eq}) of a {eq}60.0 \ cm {/eq} by {eq}150 \ cm {/eq} automobile windshield when the temperature changes from {eq}0 {/eq} to {eq}36.0^o C {/eq}? The coefficient of linear expansion of this glass is {eq}9.0 \times 10^{-6}/ ^oC {/eq}.
Linear Expansion:
It can be defined as the variation in the length of any object due to the variation in the temperature. It generally depends on the original length of the object, the difference in the temperature, and the coefficient of thermal expansion.
Given data:
• Initial temperature, {eq}{T_1} = 0{\rm{^\circ C}} {/eq}
• Final temperature, {eq}{T_2} = 36.0{\rm{^\circ C}} {/eq}
• Initial Area, {eq}{A_o} = 60.0 \times 150\;{\rm{c}}{{\rm{m}}^{\rm{2}}} {/eq}
• Coefficient of linear expansion, {eq}\alpha = 9.0 \times {10^{ - 6}}\;{\rm{^\circ }}{{\rm{C}}^{{\rm{ - 1}}}} {/eq}
For small expansions, the fractional change in area is twice the fractional change in length, so the change in area can be calculated as,
{eq}\Delta A = 2{A_o}\alpha \left( {{T_2} - {T_1}} \right) {/eq}
Substitute the values,
{eq}\begin{align*} \Delta A &= 2 \times 60 \times 150 \times 9 \times {10^{ - 6}} \times \left( {36 - 0} \right)\\ \Delta A &\approx 5.83\;{\rm{c}}{{\rm{m}}^{\rm{2}}} \end{align*} {/eq}
Therefore, the change in area is {eq}5.83\;{\rm{c}}{{\rm{m}}^{\rm{2}}} {/eq} . | 2020-08-12 06:30:51 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000090599060059, "perplexity": 5487.348370133715}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738878.11/warc/CC-MAIN-20200812053726-20200812083726-00529.warc.gz"}
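One point worth checking numerically: for an isotropic solid, the area-expansion coefficient is twice the linear one (ΔA ≈ 2αA₀ΔT for small changes). A short check of the arithmetic under that standard relation (my addition):

```python
alpha = 9.0e-6        # 1/°C, linear expansion coefficient of the glass
A0 = 60.0 * 150.0     # initial area in cm^2
dT = 36.0 - 0.0       # temperature change in °C

dA = 2 * alpha * A0 * dT   # area coefficient ~ 2*alpha for small expansions
print(f"dA = {dA:.2f} cm^2")
```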
http://mathoverflow.net/questions/22316/fermat-numbers-and-the-infinitude-of-primes/26334 | # Fermat numbers and the infinitude of primes
Wonder whether any of you guys know why it is that the proof of the infinitude of primes that is based on the coprimality of any pair of (distinct) Fermat numbers is commonly attributed to Pólya.
In the first paragraph of this letter from Goldbach to Euler there is already an argument along those lines, but since documents crediting it to Professor Pólya are not rare out there, it seems like it has passed unnoticed by a nonzero number of persons.
So, what do you think about this? It's not like Fermat numbers are essential to the proof or that there are no other demonstrations of the result... It's just that I'd really like to know about the origins of this discrepancy between the sources.
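For reference, the proof in question rests on two quickly verifiable facts about the Fermat numbers $F_n = 2^{2^n}+1$: the product identity $F_0F_1\cdots F_{n-1} = F_n - 2$, and the pairwise coprimality it implies (any common divisor divides 2, yet every $F_n$ is odd). A short computational sanity check (my illustration):

```python
from math import gcd, prod

F = [2 ** (2 ** n) + 1 for n in range(8)]   # F_0 .. F_7

# Product identity: F_0 * F_1 * ... * F_{n-1} = F_n - 2
for n in range(1, 8):
    assert prod(F[:n]) == F[n] - 2

# A common divisor of two distinct F_i therefore divides 2; all F_i are odd.
assert all(gcd(F[i], F[j]) == 1 for i in range(8) for j in range(i + 1, 8))
print("F_0..F_7 are pairwise coprime")
```

Since each $F_n$ then contributes at least one prime dividing no earlier $F_m$, the infinitude of primes follows.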
-
Goldbach observes that Fermat numbers are coprime. Nowhere does he mention that this implies the infinitude of primes. – Franz Lemmermeyer Apr 23 '10 at 8:28
Goldbach and Euler were interested in the question whether or not all Fermat numbers are prime. Later Euler would find the factor 641 of the fifth Fermat number. An English translation of the correspondence will appear next year as part of Euler's Opera Omnia. – Franz Lemmermeyer Apr 23 '10 at 9:12
It's interesting that the coprimality of Fermat numbers was already known in Goldbach's time. The reason for attributing the proof to Polya is presumably that such a proof is indicated as an exercise in Polya and Szego (1924). Because of this, Ribenboim, in his Little Book of Big Primes calls it "Polya's proof." Maybe the rumor started there.
[Added later] In the light of the comments that have come in, it now looks to me as though 1. Goldbach could have observed that he had a proof of the infinitude of primes, but didn't care to mention it, and 2. that the attribution of this observation to Polya starts with Hardy.
Re 1. In the 18th century, were people interested in finding new proofs of the infinitude of primes? For example, when Euler proved that $\Sigma 1/p=\infty$ (paper E72 in the Euler Archive) he did not remark that this gives a new proof of the infinitude of primes. It could very well be that Goldbach did not consider it interesting to prove again that there are infinitely many primes.
Re 2. One should bear in mind that Hardy knew Polya well. Polya visited him in England just after the publication of Polya & Szego and collaborated with him on the book Inequalities, published in 1934 ( four years before H&W). So Hardy could well have learned the proof directly from Polya.
In the Little Book of Bigger Primes he ends up ascribing it to Goldbach, though. He even adds that it was Władysław Narkiewicz who, after calling his attention to the afore-mentioned epistle from Goldbach to Euler, made him change his mind on this matter. So, it seems like we're back to where we started... – J. H. S. Apr 23 '10 at 18:13
The Polya-Szego reference is Problems and Theorems in Analysis, Volume II, Part VIII, No. 94, page 130. – Gerry Myerson May 7 '10 at 6:43
In my experience one should always be suspicious of historical remarks that are not made in purely historical studies. E.g. recently I saw reference to another historical claim in Ribenboim's books - namely that Kummer was the source of the trivial N-1 (vs. N+1) variant of Euclid's proof. I couldn't believe that Kummer would make such a trivial remark. In fact he didn't. Rather, he had in mind a more interesting proof based on the phi-function. Since this margin is too small for the proof, please see [2] for the details. – Bill Dubuque Jul 2 '10 at 21:23
Here are said links: [1] at.yorku.ca/cgi-bin/… [2] books.google.com/… – Bill Dubuque Jul 2 '10 at 21:23
Hello,
As far as I know, the problem began with Hardy and Wright's "An introduction to the theory of numbers", first published in 1938. Indeed, in Section 2.4, page 14, they write
Second proof of Euclid’s theorem. Our second proof of Theorem 4, which is due to Polya, depends upon a property of what are called ‘Fermat’s numbers’...
Since Hardy and Wright's book has always been so popular, I suspect that many have given credit to Pólya, following their words.
Notice, however, that Dickson's 1952 "History of the theory of numbers" correctly attributed the theorem back to Goldbach (see p. 375 of Volume I):
Chr. Goldbach called Euler's attention to Fermat's conjecture that $F_n$ is always prime, and remarked that no $F_n$ has a factor $<100$; no two $F_n$ have a common factor.
To be 100% honest, I have not been able to spot that (second) sentence in bold in any of the epistles that Professor Dickson mentions in the corresponding footnote of his text. Also, it is really curious that Hardy and Wright ascribe the proof to Pólya. All the more so when one notices that the argument showcased by them is nowhere to be found in the famous problem compendium by Pólya and Szegö. – J. H. S. May 7 '10 at 23:41
My Latin is definitely rusty, but I believe that the fact that no two F_n have a common factor is proved at the very beginning of the letter from Goldbach to Euler of July 1730 (that you linked in your original post above), and in fact, the proof alluded to by Goldbach is precisely the one given by Hardy and Wright. Namely, he shows that F_n divides F_{n+p}-2, and if a number divided both, it would be 2, but F_n is odd. And then Goldbach says "...omnes numeros seriei Fermatianae esse inter se primos" (all Fermat numbers are pairwise coprime). – Álvaro Lozano-Robledo May 8 '10 at 1:45
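Goldbach's observation is easy to check numerically. Below is a minimal Python sketch of the argument just described (purely illustrative, not from the correspondence; checking the first eight Fermat numbers is an arbitrary choice):

```python
from functools import reduce
from math import gcd

def fermat(n):
    """n-th Fermat number, F_n = 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

F = [fermat(n) for n in range(8)]

# The identity behind the proof: F_0 * F_1 * ... * F_{n-1} = F_n - 2.
# Any common divisor of two distinct Fermat numbers therefore divides 2,
# and since every F_n is odd, the two numbers are coprime.
for n in range(1, 8):
    assert reduce(lambda a, b: a * b, F[:n], 1) == F[n] - 2

assert all(gcd(F[i], F[j]) == 1 for i in range(8) for j in range(i + 1, 8))
```

Since each F_n contributes at least one prime factor shared with no other F_m, the coprimality immediately yields infinitely many primes; that last step is the one Goldbach never wrote down.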
I am quoting from the nice book "The development of Prime Number Theory" by W. Narkiewicz, Springer (2000), pg. 8.
Any infinite sequence of pairwise coprime positive integers leads to a proof of [the infinitude of primes]. Such a proof first appears in a letter of C.Goldbach to Euler dated July 20, 1730 [footnote: The original date is July 20/31, the double dating being a consequence of the use of the Julianic calendar in Russia before 1918. It seems that this was the first proof of the infinitude of primes which essentially differed from that of Euclid.] (see Fuss 1843, I, 32-34; Euler-Goldbach 1965) and is sometimes attributed to G.Pólya (e.g. in Hardy, Wright (1960), Chandrasekharan (1968). P.Ribenboim (Nombres premiers: mystères et records. 1994) wrote that this attribution appears in an unpublished list of exercises of A.Hurwitz preserved in ETH in Zürich.) This proof was published in the well-known collection of exercises of G.Pólya and G.Szegö (1925).
What is interesting here is that Hurwitz died in 1919, prior to Hardy & Wright, and to Pólya & Szegő, so it is likely that Pólya rediscovered the argument on his own, unaware of Goldbach's letter, presented it to colleagues, and they would naturally attribute it to him.
Thanks for taking the time to post that paragraph, Señor Caicedo. I think it is a great complement to the remarks made by J. Stillwell in his answer. – J. H. S. May 29 '10 at 9:44
On p. 167 of Beiträge zur Zahlentheorie, insbesondere zur Kreis- und Kugeltheilung, mit einem Nachtrage zur Theorie der Gleichungen (1891), Scheffler deduces the infinitude of primes from the fact that Fermat numbers are pairwise coprime. I don't think that Scheffler's book was widely read, however.
What an interesting finding, Professor Lemmermeyer! – J. H. S. Jan 25 '12 at 0:12
@Álvaro:
1. Agreed that a proof of the coprimality of any pair of distinct Fermat numbers appears in the very first paragraph of the aforementioned missive from Goldbach to Euler. That is not under discussion here. Thing is that, as Professor Lemmermeyer noted above, Goldbach himself did not seem to notice that this result would (immediately) provide him with a proof of the infinitude of the primes. As I commented before, one of my initial beliefs on this matter was that the exclamation "at quantulum hoc est ad demonstrandum omnes illos numeros esse absolute primos?" in the July 20th letter was somehow implying that Goldbach had actually found the connection between both facts. Yet, your knowledgeable comments have just made me change my mind on this wrong impression that I initially had.
2. You are absolutely right when you express that the proof given by Hardy and Wright passes through the argument given by Goldbach in his letter to Euler. That's the reason I said it is kind of weird to see H & W ascribing the result to Pólya.
Even though Goldbach does not mention that this implies the infinitude of the primes, it is such an immediate consequence that I think we all agree that the credit should go to Goldbach. By my comment above I only meant to point out that Goldbach did say "no two F_n have a common factor" (in the "...inter se primos" comment), as Dickson mentions. By the way, the sentence "at quantulum..." means "but how close is this to a proof that all Fermat numbers are primes?", so he is referring to Fermat's conjecture that all F_n are primes, and not to the fact that there are infinitely many primes. – Álvaro Lozano-Robledo May 8 '10 at 13:34
"... inter se primos" Of course! How could I forget about it? As to whether we all agree that the credit should go to Golbach, I'm not that sure. Nonetheless, I think that we all definitely agree that people ought not to continue adscribing it exclusively to Professor Pólya. – J. H. S. May 8 '10 at 15:21
Interestingly, in Spanish sometimes we say "primos entre sí", which means coprime, but now I realize the direct Latin origin of this phrase (inter se primos). – Álvaro Lozano-Robledo May 9 '10 at 0:44
https://www.imperial.ac.uk/people/yan.liu06/publications.html

# Dr Yan Liu
Faculty of Engineering, Department of Electrical and Electronic Engineering
### Location
Electrical Engineering, South Kensington Campus
## Publications
36 results found
Liu H, Guo T, Yan P, Qi L, Chen M, Wang G, Liu Y et al., 2022, A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface, IEEE Transactions on Circuits and Systems II: Express Briefs, Vol: 69, Pages: 2682-2686, ISSN: 1549-7747
Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 µs while small neural signals can be continuously monitored.
Journal article
Chen Y, Liu Y, Li Y, Wang G, Chen M et al., 2022, An Energy-Efficient ASK Demodulator Robust to Power-Carrier-Interference for Inductive Power and Data Telemetry, IEEE Trans Biomed Circuits Syst, Vol: 16, Pages: 108-118
Wireless power and data telemetry based on amplitude-shift keying (ASK) modulation over dual inductive links has been widely adopted in biomedical implants. Due to the mutual inductance between the power and data links, the large power-carrier-interference (PCI) will inevitably cause a low signal-to-interference ratio (SIR) of the received signal, thereby increasing the bit-error-rate (BER) of the ASK demodulation. In this paper, an innovative, highly energy-efficient ASK demodulator robust to PCI is proposed. Thanks to the proposed sampling-and-subtraction (SAS) architecture, the demodulator is capable of withstanding PCI with an amplitude up to 2.5 times that of the data carrier without the need for any high-order filters. The prototype has been implemented in a standard 180 nm CMOS process, occupying a core area of 0.51 mm². The experimental results show that with a 1 Mbps data rate and 13.56 MHz carrier frequency, the typical BER is less than 1.3×10⁻³, while the energy efficiency is 280 pJ/bit, a 7.5× improvement compared to prior works. The energy-efficient robustness to PCI demonstrates the potential of the technique to be applied to retina prostheses as well as various kinds of ultra-low-power implantable biomedical devices.
Journal article
Huang J, Zhou T, Liu H, Qi L, Liu Y, Li Y et al., 2022, Low-Noise, High-Linearity Sine-Wave Generation Using Noise-Shaping Phase-Switching Technique, IEEE Transactions on Instrumentation and Measurement, Vol: 71, ISSN: 0018-9456
To develop a lower cost on-chip characterization solution for postsilicon validation, this article proposed a 'noise-shaping phase-switching' technique on a sigma-delta modulated digital-to-analog converter (DAC) to realize a low-noise, high-linearity sinusoidal-wave generator. The proposed technique combines the fifth-order cascade of resonators with distributed feedback (CRFB) type delta-sigma modulation (DSM) and the two-way time-interleaving phase-switching harmonic distortion (HD) cancellation technique without additional cost. The theoretical analysis is verified with MATLAB and further implemented using a low-cost arbitrary waveform generator (AWG) as the output DAC. Compared with a nonideal 12-b DAC, the proposed technique achieved at least 2-dB enhancement in spurious-free dynamic range (SFDR) (measurement) while maintaining an improvement of at least 18 dB in the medium dynamic range (DR) (simulation) over the entire signal bandwidth.
Journal article
Pan L, Chen M, Chen Y, Zhu S, Liu Y et al., 2021, An Energy-Autonomous Power-and-Data Telemetry Circuit With Digital-Assisted-PLL-Based BPSK Demodulator for Implantable Flexible Electronics Applications, IEEE Open Journal of Circuits and Systems, Vol: 2, Pages: 721-731
Journal article
Luo J, Firfilionis D, Turnbull M, Xu W, Walsh D, Escobedo-Cousin E, Soltan A, Ramezani R, Liu Y, Bailey R, O'Neill A, Donaldson N, Constandinou T, Jackson A, Degenaar P et al., 2020, The neural engine: a reprogrammable low power platform for closed-loop optogenetics, IEEE Transactions on Biomedical Engineering, Vol: 67, Pages: 3004-3015, ISSN: 0018-9294
Brain-machine interfaces (BMIs) hold great potential for treating neurological disorders such as epilepsy. Technological progress is allowing for a shift from open-loop, pacemaker-class intervention towards fully closed-loop neural control systems. Low power programmable processing systems are therefore required which can operate within the 2 °C thermal window for medical implants and maintain long battery life. In this work, we developed a low power neural engine with an optimized set of algorithms which can operate under a power cycling domain. By integrating it with a custom-designed brain implant chip, we have demonstrated its applicability to closed-loop modulation of neural activity in in-vitro brain tissue: the local field potentials can be modulated at the required central frequency ranges. Also, in-vivo experiments were performed on both a freely-moving non-human primate (24-hour) and a rodent (1-hour) to show the system's long-term recording performance. The overall system consumes only 2.93 mA during operation at a 50 Hz biological recording sampling rate (a battery lifespan of approximately 56 hours). A library of algorithms has been implemented for detection, suppression and optical intervention to allow for exploratory applications in different neurological disorders. Thermal experiments demonstrated that operation creates minimal heating, while battery performance exceeded 24 hours on a freely moving rodent. Therefore, this technology shows great promise for both neuroscience in-vitro/in-vivo applications and medical implantable processing units.
Journal article
Williams I, Brunton E, Rapeaux A, Liu Y, Luan S, Nazarpour K, Constandinou TG et al., 2020, SenseBack - an implantable system for bidirectional neural interfacing, IEEE Transactions on Biomedical Circuits and Systems, Vol: 14, Pages: 1079-1087, ISSN: 1932-4545
Chronic in-vivo neurophysiology experiments require highly miniaturized, remotely powered multi-channel neural interfaces which are currently lacking in power or flexibility post implantation. In this article, to resolve this problem we present the SenseBack system, a post-implantation reprogrammable wireless 32-channel bidirectional neural interfacing that can enable chronic peripheral electrophysiology experiments in freely behaving small animals. The large number of channels for a peripheral neural interface, coupled with fully implantable hardware and complete software flexibility enable complex in-vivo studies where the system can adapt to evolving study needs as they arise. In complementary ex-vivo and in-vivo preparations, we demonstrate that this system can record neural signals and perform high-voltage, bipolar stimulation on any channel. In addition, we demonstrate transcutaneous power delivery and Bluetooth 5 data communication with a PC. The SenseBack system is capable of stimulation on any channel with ±20 V of compliance and up to 315 μA of current, and highly configurable recording with per-channel adjustable gain and filtering with 8 sets of 10-bit ADCs to sample data at 20 kHz for each channel. To the best of our knowledge this is the first such implantable research platform offering this level of performance and flexibility post-implantation (including complete reprogramming even after encapsulation) for small animal electrophysiology. Here we present initial acute trials, demonstrations and progress towards a system that we expect to enable a wide range of electrophysiology experiments in freely behaving animals.
Journal article
Liu Y, Constandinou TG, Georgiou P, 2019, Ultrafast large-scale chemical sensing with CMOS ISFETs: a level-crossing time-domain approach, IEEE Transactions on Biomedical Circuits and Systems, Vol: 13, Pages: 1201-1213, ISSN: 1932-4545
The introduction of large-scale chemical sensing systems in CMOS which integrate millions of ISFET sensors has allowed applications such as DNA sequencing and fine-pixel chemical imaging systems to be realised. Using CMOS ISFETs provides the advantages of digitisation directly at the sensor as well as correcting for non-linearity in its response. However, for this to be beneficial and scale, the readout circuits need to have the minimum possible footprint and power consumption. Within this context, this paper analyses an ISFET based pH-to-time readout using an inverter in the time-domain as a level-crossing detector and presents a 32×32 array with in-pixel digitisation for pH sensing. The inverter-based sensing pixel, controlled by a triangular waveform, converts the pH response into a time-domain signal whilst also compensating for sensor offset, thus resulting in an increase in dynamic range. The sensor pixels interface to a 15-bit asynchronous column-wise time-to-digital converter (TDC), enabling fast asynchronous conversion whilst using minimal silicon area. Parallel outputs of 32 TDC interfaces are serialised to achieve fast data throughput. This system is implemented in a standard 0.18 μm CMOS technology, with a pixel size of 26 μm × 26 μm and a TDC area of 26 μm × 180 μm. Measured results demonstrate the system is able to sense reliably with an average pH sensitivity of 30 mV/pH, whilst being able to compensate for sensor offset by up to ±7 V. A resolution of 0.013 pH is achieved and noise measurements show an integrated noise of 0.08 pH within 2-500 Hz and an SFDR of 42.6 dB. Total power consumption is 11.286 mW.
Journal article
Mirza KB, Kulasekeram N, Liu Y, Nikolic K, Toumazou C et al., 2019, System on chip for closed loop neuromodulation based on dual mode biosignals, 2019 IEEE International Symposium on Circuits and Systems (ISCAS), Publisher: Institute of Electrical and Electronics Engineers (IEEE), ISSN: 2158-1525
Closed loop neuromodulation, where the stimulation is controlled autonomously based on physiological events, has been more effective than open loop techniques. In the few existing closed loop implementations which have a feedback, indirect non-neurophysiological biomarkers have been typically used (e.g. heart rate, stomach distension). Although these biomarkers enable automatic initiation of neural stimulation, they do not enable intelligent control of stimulation dosage. In this paper, we present a novel closed loop neuromodulation System-on-Chip (SoC) based on a dual signal mode that is detecting both electrical and chemical signatures of neural activity. We use vagus nerve stimulation (VNS) as a design case here. Vagal chemical (pH) signal is detected and used for initiating VNS and vagal compound nerve action potential (CNAP) signals are used to determine the stimulation dosage and pattern. Although we used the paradigm of appetite control and neurometabolic therapies for developing the algorithms for neurostimulation control, the SoC described here can be utilised for other types of closed loop neuromodulation implants.
Conference paper
Mazza F, Liu Y, Donaldson N, Constandinou TG et al., 2018, Integrated devices for micro-package integrity monitoring in mm-scale neural implants, IEEE Biomedical Circuits and Systems (BioCAS) Conference 2018, Publisher: IEEE, Pages: 295-298
Recent developments in the design of active implantable devices have achieved significant advances, for example an increased number of recording channels, but too often practical clinical applications are restricted by device longevity. It is important however to complement efforts for increased functionality with translational work to develop implant technologies that are safe and reliable to be hosted inside the human body over long periods of time. This paper first examines techniques currently used to evaluate micro-package hermeticity and key challenges, highlighting the need for new, in situ instrumentation that can monitor the encapsulation status over time. Two novel circuits are then proposed to tackle the specific issue of moisture penetration inside a sub-mm, silicon-based package. They both share the use of metal tracks on the different layers of the CMOS stack to measure changes in impedance caused by moisture present in leak cracks or diffused into the oxide layers.
Conference paper
Haci D, Liu Y, Nikolic K, Demarchi D, Constandinou TG, Georgiou P et al., 2018, Thermally controlled lab-on-PCB for biomedical applications, IEEE Biomedical Circuits and Systems (BioCAS) Conference, Publisher: IEEE, Pages: 655-658
This paper reports on the implementation and characterisation of a thermally controlled device for in vitro biomedical applications, based on standard Printed Circuit Board (PCB) technology. This is proposed as a low cost alternative to state-of-the-art microfluidic devices and Lab-on-Chip (LoC) platforms, which we refer to as the thermal Lab-on-PCB concept. In total, six different prototype boards have been manufactured to implement as many mini-hotplate arrays. 3D multiphysics software simulations show the thermal response of the modelled mini-hotplate boards to electrical current stimulation, highlighting their versatile heating capability. A comparison with the results obtained by the characterisation of the fabricated PCBs demonstrates the dual temperature sensing/heating property of the mini-hotplate, exploitable in a larger range of temperature with respect to the typical operating range of LoC devices. The thermal system is controllable by means of external off-the-shelf circuitry designed and implemented on a single-channel control board prototype.
Conference paper
Haci D, Liu Y, Ghoreishizadeh S, Constandinou TG et al., 2018, Design considerations for ground referencing in multi-module neural implants, IEEE Biomedical Circuits and Systems (BioCAS) Conference 2018, Publisher: IEEE, Pages: 563-566
Implantable neural interfaces have evolved in the past decades from stimulation-only devices to closed-loop recording and stimulation systems, allowing both for more targeted therapeutic techniques and more advanced prosthetic implants. Emerging applications require multi-module active implantable devices with intrabody power and data transmission. This distributed approach poses a new set of challenges related to inter-module connectivity, functional reliability and patient safety. This paper addresses the ground referencing challenge in active multi-implant systems, with a particular focus on neural recording devices. Three different grounding schemes (passive, drive, and sense) are presented and evaluated in terms of both recording reliability and patient safety. Considerations on the practical implementation of body potential referencing circuitry are finally discussed, with a detailed analysis of their impact on the recording performance.
Conference paper
Luan S, Williams I, Maslik M, Liu Y, De Carvalho F, Jackson A, Quian Quiroga R, Constandinou T et al., 2018, Compact standalone platform for neural recording with real-time spike sorting and data logging, Journal of Neural Engineering, Vol: 15, Pages: 1-13, ISSN: 1741-2552
Objective. Longitudinal observation of single unit neural activity from large numbers of cortical neurons in awake and mobile animals is often a vital step in studying neural network behaviour and towards the prospect of building effective Brain Machine Interfaces (BMIs). These recordings generate enormous amounts of data for transmission & storage, and typically require offline processing to tease out the behaviour of individual neurons. Our aim was to create a compact system capable of: 1) reducing the data bandwidth by circa 2 to 3 orders of magnitude (greatly improving battery lifetime and enabling low power wireless transmission in future versions); 2) producing real-time, low-latency, spike sorted data; and 3) long term untethered operation. Approach. We have developed a headstage that operates in two phases. In the short training phase a computer is attached and classic spike sorting is performed to generate templates. In the second phase the system is untethered and performs template matching to create an event driven spike output that is logged to a micro-SD card. To enable validation the system is capable of logging the high bandwidth raw neural signal data as well as the spike sorted data. Main results. The system can successfully record 32 channels of raw neural signal data and/or spike sorted events for well over 24 hours at a time and is robust to power dropouts during battery changes as well as SD card replacement. A 24-hour initial recording in a nonhuman primate M1 showed consistent spike shapes with the expected changes in neural activity during awake behaviour and sleep cycles. Significance. The presented platform allows neural activity to be unobtrusively monitored and processed in real-time in freely behaving untethered animals, revealing insights that are not attainable through scheduled recording sessions. This system achieves the lowest power per channel to date and provides a robust, low-latency, low-bandwidth and verifiable output suitable f
Journal article
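The two-phase operation described in the abstract above can be sketched in Python. This is a toy reconstruction: the minimum sum-of-squared-error metric and the rejection threshold are assumptions for illustration, since the abstract does not specify the matching criterion used by the headstage:

```python
def match_spike(window, templates, max_err):
    """Return the index of the best-matching template, or None if no
    template matches within max_err (sum-of-squared-error metric)."""
    best, best_err = None, max_err
    for k, tpl in enumerate(templates):
        err = sum((w - t) ** 2 for w, t in zip(window, tpl))
        if err < best_err:
            best, best_err = k, err
    return best

# Toy example: two hypothetical spike templates and a noisy observation.
templates = [[0.0, 1.0, 0.4, 0.0], [0.0, -1.0, -0.4, 0.0]]
unit = match_spike([0.1, 0.9, 0.5, 0.0], templates, max_err=0.5)  # matches template 0
```

Emitting only the matched unit index per detected spike, rather than the raw waveform, is what makes the event-driven output so much cheaper to log or transmit than the raw data stream.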
Ramezani R, Liu Y, Dehkhoda F, Soltan A, Haci D, Zhao H, Hazra A, Cunningham M, Firfilionis D, Jackson A, Constandinou TG, Degenaar P et al., 2018, On-probe neural interface ASIC for combined electrical recording and optogenetic stimulation, IEEE Transactions on Biomedical Circuits and Systems, Vol: 12, Pages: 576-588, ISSN: 1932-4545
Neuromodulation technologies are progressing from pacemaking and sensory operations to full closed-loop control. In particular, optogenetics (the genetic modification of light sensitivity into neural tissue) allows for simultaneous optical stimulation and electronic recording. This paper presents a neural interface application-specific integrated circuit (ASIC) for intelligent optoelectronic probes. The architecture is designed to enable simultaneous optical neural stimulation and electronic recording. It provides four low noise (2.08 μVrms) recording channels optimized for recording local field potentials (LFPs) (0.1-300 Hz bandwidth, ±5 mV range, sampled 10-bit@4 kHz), which are more stable for chronic applications. For stimulation, it provides six independently addressable optical driver circuits, which can provide both intensity (8-bit resolution across a 1.1 mA range) and pulse-width modulation for high-radiance light emitting diodes (LEDs). The system includes a fully digital interface using a serial peripheral interface (SPI) protocol to allow for use with embedded controllers. The SPI interface is embedded within a finite state machine (FSM), which implements a command interpreter that can send out LFP data whilst receiving instructions to control LED emission. The circuit has been implemented in a commercially available 0.35 μm CMOS technology occupying a 1.95 mm × 1.10 mm footprint for mounting onto the head of a silicon probe. Measured results are given for a variety of bench-top, in vitro and in vivo experiments, quantifying system performance and also demonstrating concurrent recording and stimulation within relevant experimental models.
Journal article
Maslik M, Liu Y, Lande TS, Constandinou TG et al., 2018, Continuous-time acquisition of biosignals using a charge-based ADC topology, IEEE Transactions on Biomedical Circuits and Systems, Vol: 12, Pages: 471-482, ISSN: 1932-4545
This paper investigates continuous-time (CT) signal acquisition as an activity-dependent and nonuniform sampling alternative to conventional fixed-rate digitisation. We demonstrate the applicability to biosignal representation by quantifying the achievable bandwidth saving by nonuniform quantisation to commonly recorded biological signal fragments allowing a compression ratio of ≈5 and 26 when applied to electrocardiogram and extracellular action potential signals, respectively. We describe several desirable properties of CT sampling, including bandwidth reduction, elimination/reduction of quantisation error, and describe its impact on aliasing. This is followed by demonstration of a resource-efficient hardware implementation. We propose a novel circuit topology for a charge-based CT analogue-to-digital converter that has been optimized for the acquisition of neural signals. This has been implemented in a commercially available 0.35 μm CMOS technology occupying a compact footprint of 0.12 mm 2 . Silicon verified measurements demonstrate an 8-bit resolution and a 4 kHz bandwidth with static power consumption of 3.75 μW from a 1.5 V supply. The dynamic power dissipation is completely activity-dependent, requiring 1.39 pJ energy per conversion.
Journal article
Liu Y, Pereira J, Constandinou TG, 2018, Event-driven processing for hardware-efficient neural spike sorting, Journal of Neural Engineering, Vol: 15, Pages: 1-14, ISSN: 1741-2552
Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide here a new efficient means for hardware implementation that is completely activity dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented in a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting accuracies can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
Journal article
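The level-crossing sampling idea in the abstract above can be sketched in a few lines of Python. This is an illustrative reconstruction under assumed parameters (the level spacing `delta` and the toy sine input are not from the paper):

```python
import math

def level_cross_sample(signal, delta):
    """Emit (sample_index, direction) events whenever the signal
    crosses a quantisation level spaced `delta` apart."""
    events = []
    level = round(signal[0] / delta)        # start at the nearest level
    for i, x in enumerate(signal[1:], start=1):
        while x >= (level + 1) * delta:     # crossed a level going up
            level += 1
            events.append((i, +1))
        while x <= (level - 1) * delta:     # crossed a level going down
            level -= 1
            events.append((i, -1))
    return events

# A slowly varying signal produces far fewer events than uniform sampling:
sig = [math.sin(2 * math.pi * t / 200) for t in range(1000)]
ev = level_cross_sample(sig, 0.25)
```

Each event needs only a direction bit plus timing information, which is where the activity-dependent bandwidth saving comes from: quiet stretches of signal generate no data at all.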
Liu Y, Luan S, Williams I, Rapeaux A, Constandinou TG et al., 2017, A 64-Channel Versatile Neural Recording SoC with Activity Dependant Data Throughput, IEEE Transactions on Biomedical Circuits and Systems, Vol: 11, Pages: 1344-1355, ISSN: 1932-4545
Modern microtechnology is enabling the channel count of neural recording integrated circuits to scale exponentially. However, the raw data bandwidth of these systems is increasing proportionately, presenting major challenges in terms of power consumption and data transmission (especially for wireless systems). This paper presents a system that exploits the sparse nature of neural signals to address these challenges and provides a reconfigurable low-bandwidth event-driven output. Specifically, we present a novel 64-channel low noise (2.1 μVrms), low power (23 μW per analogue channel) neural recording system-on-chip (SoC). This features individually-configurable channels, 10-bit analogue-to-digital conversion, digital filtering, spike detection, and an event-driven output. Each channel's gain, bandwidth & sampling rate settings can be independently configured to extract Local Field Potentials (LFPs) at a low data-rate and/or Action Potentials (APs) at a higher data rate. The sampled data is streamed through an SRAM buffer that supports additional on-chip processing such as digital filtering and spike detection. Real-time spike detection can achieve ~2 orders of magnitude data reduction, by using a dual polarity simple threshold to enable an event driven output for neural spikes (16-sample window). The SoC additionally features a latency-encoded asynchronous output that is critical if used as part of a closed-loop system. This has been specifically developed to complement a separate on-node spike sorting co-processor to provide a real-time (low latency) output. The system has been implemented in a commercially-available 0.35 μm CMOS technology occupying a silicon area of 19.1 mm² (0.3 mm² gross per channel), demonstrating a low power & efficient architecture which could be further optimised by aggressive technology and supply voltage scaling.
Journal article
Luo J, Firfilionis D, Ramezani R, Dehkhoda F, Soltan A, Degenaar P, Liu Y, Constandinou TG et al., 2017, Live demonstration: a closed-loop cortical brain implant for optogenetic curing epilepsy, IEEE Biomedical Circuits and Systems (BioCAS) Conference, Publisher: IEEE, Pages: 169-169
Conference paper
Haci D, Liu Y, Constandinou TG, 2017, 32-channel ultra-low-noise arbitrary signal generation platform for biopotential emulation, IEEE International Symposium on Circuits and Systems (ISCAS), Publisher: IEEE, Pages: 698-701
This paper presents a multichannel, ultra-low-noise arbitrary signal generation platform for emulating a wide range of different biopotential signals (e.g. ECG, EEG, etc). This is intended for use in the test, measurement and demonstration of bioinstrumentation and medical devices that interface to electrode inputs. The system is organized in 3 key blocks for generating, processing and converting the digital data into a parallel high performance analogue output. These blocks consist of: (1) a Raspberry Pi 3 (RPi3) board; (2) a custom Field Programmable Gate Array (FPGA) board with low-power IGLOO Nano device; and (3) analogue board including the Digital-to-Analogue Converters (DACs) and output circuits. By implementing the system this way, good isolation can be achieved between the different power and signal domains. This mixed-signal architecture takes in a high bitrate SDIO (Secure Digital Input Output) stream, recodes and packetizes this to drive two multichannel DACs, with parallel analogue outputs that are then attenuated and filtered. The system achieves 32-parallel output channels each sampled at 48kS/s, with a 10kHz bandwidth, 110dB dynamic range and uV-level output noise.
Conference paper
Gao C, Ghoreishizadeh S, Liu Y, Constandinou TG et al., 2017, On-chip ID generation for multi-node implantable devices using SA-PUF, IEEE International Symposium on Circuits and Systems (ISCAS), Publisher: IEEE, Pages: 678-681
This paper presents a 64-bit on-chip identification system featuring low power consumption and randomness compensation for multi-node bio-implantable devices. A sense amplifier based bit-cell is proposed to realize the silicon physical unclonable function, providing a unique value whose probability has a uniform distribution and minimized influence from the temperature and supply variation. The entire system is designed and implemented in a typical 0.35 μm CMOS technology, including an array of 64 bit-cells, readout circuits, and digital controllers for data interfaces. Simulated results show that the proposed bit-cell design achieved a uniformity of 50.24% and a uniqueness of 50.03% for generated IDs. The system achieved an energy consumption of 6.0 pJ per bit with parallel outputs and 17.3 pJ per bit with serial outputs.
Conference paper
Maslik M, Liu Y, Lande TS, Constandinou TG et al., 2017, A charge-based ultra-low power continuous-time ADC for data driven neural spike processing, IEEE International Symposium on Circuits and Systems (ISCAS), Publisher: IEEE, Pages: 1420-1423
The paper presents a novel topology of a continuous-time analogue-to-digital converter (CT-ADC) featuring ultra-low static power consumption, activity-dependent dynamic consumption, and a compact footprint. This is achieved by utilising a novel charge-packet based threshold generation method that alleviates the requirement for a conventional feedback DAC. The circuit has a static power consumption of 3.75 μW, with dynamic energy of 1.39 pJ/conversion level. This type of converter is thus particularly well-suited for biosignals that are generally sparse in nature. The circuit has been optimised for neural spike recording by capturing a 3kHz bandwidth with 8-bit resolution. For a typical extracellular neural recording the average power consumption is in the order of ~4 μW. The circuit has been implemented in a commercially available 0.35 μm CMOS technology with the core occupying a footprint of 0.12 mm².
Conference paper
Ghoreishizadeh S, Haci D, Liu Y, Donaldson N, Constandinou TG et al., 2017, Four-Wire Interface ASIC for a Multi-Implant Link, IEEE Transactions on Circuits and Systems I: Regular Papers, Vol: 64, Pages: 3056-3067, ISSN: 1549-8328
This paper describes an on-chip interface for recovering power and providing full-duplex communication over an AC-coupled 4-wire lead between active implantable devices. The target application requires two modules to be implanted in the brain (cortex) and upper chest, connected via a subcutaneous lead. The brain implant consists of multiple identical ‘optrodes’ that facilitate a bidirectional neural interface (electrical recording, optical stimulation), and the chest implant contains the power source (battery) and processor module. The proposed interface is integrated within each optrode ASIC allowing full-duplex and fully-differential communication based on Manchester encoding. The system features a head-to-chest uplink data rate (up to 1.6 Mbps) that is higher than that of the chest-to-head downlink (100 kbps) which is superimposed on a power carrier. On-chip power management provides an unregulated 5V DC supply with up to 2.5mA output current for stimulation, and two regulated voltages (3.3V and 3V) with 60 dB PSRR for recording and logic circuits. The 4-wire ASIC has been implemented in a 0.35 μm CMOS technology, occupying 1.5 mm² silicon area, and consumes a quiescent current of 91.2 μA. The system allows power transmission with measured efficiency of up to 66% from the chest to the brain implant. The downlink and uplink communication are successfully tested in a system with two optrodes and through a 4-wire implantable lead.
Journal article
Ghoreishizadeh S, Haci D, Liu Y, Constandinou T et al., 2017, A 4-wire interface SoC for shared multi-implant power transfer and full-duplex communication, IEEE Latin American symposium on Circuits and Systems (LASCAS), Publisher: IEEE, Pages: 49-52, ISSN: 2473-4667
This paper describes a novel system for recovering power and providing full-duplex communication over an AC-coupled 4-wire lead between active implantable devices. The target application requires a single Chest Device be connected to a Brain Implant consisting of multiple identical optrodes that record neural activity and provide closed loop optical stimulation. The interface is integrated within each optrode SoC allowing full-duplex and fully-differential communication based on Manchester encoding. The system features a head-to-chest uplink data rate (1.6 Mbps) that is higher than that of the chest-to-head downlink (100kbps) superimposed on a power carrier. On-chip power management provides an unregulated 5 V DC supply with up to 2.5 mA output current for stimulation, and a regulated 3.3 V with 60 dB PSRR for recording and logic circuits. The circuit has been implemented in a 0.35 μm CMOS technology, occupying 1.4 mm² silicon area, and requiring a 62.2 μA average current consumption.
Conference paper
Williams I, Rapeaux A, Liu Y, Luan S, Constandinou TG et al., 2017, A 32-channel bidirectional neural/EMG interface with on-chip spike detection for sensorimotor feedback, IEEE Biomedical Circuits and Systems (BioCAS) Conference, Publisher: IEEE, Pages: 528-531
This paper presents a novel 32-channel bidirectional neural interface, capable of high voltage stimulation and low power, low-noise neural recording. Current-controlled biphasic pulses are output with a voltage compliance of 9.25V, user configurable amplitude (max. 315 μA) & phase duration (max. 2 ms). The low-voltage recording amplifiers consume 23 μW per channel with programmable gain between 225 and 4725. Signals are 10-bit sampled at 16 kHz. Data rates are reduced by granular control of active recording channels, spike detection and event-driven communication, and repeatable multi-pulse stimulation configurations.
Conference paper
Luan S, Liu Y, Williams I, Constandinou TG et al., 2017, An Event-Driven SoC for Neural Recording, IEEE Biomedical Circuits and Systems (BioCAS) Conference, Publisher: IEEE, Pages: 404-407
This paper presents a novel 64-channel ultra-low power/low noise neural recording System-on-Chip (SoC) featuring a highly reconfigurable Analogue Front-End (AFE) and block-selectable data-driven output. This allows a tunable bandwidth/sampling rate for extracting Local Field Potentials (LFPs) and/or Extracellular Action Potentials (EAPs). Realtime spike detection utilises a dual polarity simple threshold to enable an event driven output for neural spikes (16-sample window). The 64-channels are organised into 16 sets of 4-channel recording blocks, with each block having a dedicated 10-bit SAR ADC that is time division multiplexed among the 4 channels. Each channel can be individually powered down and configured for bandwidth, gain and detection threshold. The output can thus combine continuous-streaming and event-driven data packets with the system configured as SPI slave. The SoC is implemented in a commercially-available 0.35 μm CMOS technology occupying a silicon area of 19.1 mm² (0.3 mm² gross per channel) and requiring 32 μW/channel power consumption (AFE only).
Conference paper
Zhao H, Dehkhoda F, Ramezani R, Sokolov D, Constandinou TG, Liu Y, Degenaar P et al., 2016, A CMOS-Based Neural Implantable Optrode for Optogenetic Stimulation and Electrical Recording, IEEE Biomedical Circuits and Systems (BioCAS) Conference, Publisher: IEEE, Pages: 286-289
This paper presents a novel integrated optrode for simultaneous optical stimulation and electrical recording for closed-loop optogenetic neuro-prosthetic applications. The design has been implemented in a commercially available 0.35μm CMOS process. The system includes circuits for controlling the optical stimulations; recording local field potentials (LFPs); and onboard diagnostics. The neural interface has two clusters of stimulation and recording sites. Each stimulation site has a bonding point for connecting a micro light emitting diode (μLED) to deliver light to the targeted area of brain tissue. Each recording site is designed to be post-processed with electrode materials to provide monitoring of neural activity. On-chip diagnostic sensing has been included to provide real-time diagnostics for post-implantation and during normal operation.
Conference paper
Ramezani R, Dehkhoda F, Soltan A, Degenaar P, Liu Y, Constandinou TG et al., 2016, An optrode with built-in self-diagnostic and fracture sensor for cortical brain stimulation, IEEE Biomedical Circuits and Systems (BioCAS) Conference, Publisher: IEEE, Pages: 392-395
This paper proposes a self-diagnostic subsystem for a new generation of brain implants with active electronics. The primary objective of such probes is to deliver optical pulses to optogenetic tissue and record the subsequent activity, but lifetime is currently unknown. Our proposed circuits aim to increase the safety of implanting active electronic probes into human brain tissue, thereby prolonging the lifetime of the implant and reducing the risks to the patient. The self-diagnostic circuit will examine the optical emitter against any abnormality or malfunctioning. The fracture sensor examines the optrode against any rupture or insertion breakage. The optrode including our diagnostic subsystem and fracture sensor has been designed and successfully simulated in a 350 nm AMS technology node and sent for manufacture.
Conference paper
Liu Y, Pereira J, Constandinou TG, 2016, Clockless Continuous-Time Neural Spike Sorting: Method, Implementation and Evaluation, IEEE International Symposium on Circuits and Systems (ISCAS), Publisher: IEEE, Pages: 538-541
In this paper, we present a new method for neural spike sorting based on Continuous Time (CT) signal processing. A set of CT based features are proposed and extracted from CT sampled pulses, and a complete event-driven spike sorting algorithm that performs classification based on these features is developed. Compared to conventional methods for spike sorting, the hardware implementation of the proposed method does not require any synchronisation clock for logic circuits, and thus its power consumption depends solely on the spike activity. This has been implemented using a variable quantisation step CT analogue to digital converter (ADC) with custom digital logic that is driven by level crossing events. Simulation results using synthetic neural data show a comparable accuracy compared to template matching (TM) and Principal Components Analysis (PCA) based discrete sampled classification.
Conference paper
Dehkhoda F, Soltan A, Ramezani R, Zhao H, Liu Y, Constandinou TG, Degenaar P et al., 2015, Smart Optrode for Neural Stimulation and Sensing, 2015 IEEE Sensors Conference, Publisher: IEEE, Pages: 1-4
Implantable neuro-prosthetics offer considerable clinical benefit to a range of neurological conditions. Optogenetics is of particular recent interest; it utilizes high radiance light for photo-activation of genetically modified cells. This can provide improved biocompatibility and neural targeting over electrical stimuli. To date the primary optical delivery method in tissue for optogenetics has been via optic fibre, which makes large scale multiplexing difficult. An alternative approach is to incorporate optical micro-emitters directly on implantable probes, but this still requires electrical multiplexing. In this work, we demonstrate a fully active optoelectronic probe utilizing industry standard 0.35μm CMOS technology, capable of both light delivery and electrical recording. The incorporation of electronic circuits onto the device further allows us to incorporate smart sensors to determine diagnostic state to explore long term viability during chronic implantation.
Conference paper
Barsakcioglu D, Liu Y, Bhunjun P, Navajas J, Eftekhar A, Jackson A, Quian Quiroga R, Constandinou TG et al., 2014, An Analogue Front-End Model for Developing Neural Spike Sorting Systems, IEEE Transactions on Biomedical Circuits and Systems, Vol: 8, Pages: 216-227
Journal article
Reverter F, Prodromakis T, Liu Y, Georgiou P, Nikolic K, Constandinou TG et al., 2014, Design Considerations for a CMOS Lab-on-Chip Microheater Array to Facilitate the in vitro Thermal Stimulation of Neurons, IEEE International Symposium on Circuits and Systems (ISCAS), Publisher: IEEE, Pages: 630-633
Conference paper
https://www.molpro.net/manual/doku.php?id=basis_set_extrapolation | # Basis set extrapolation
Basis set extrapolation can be carried out for correlation consistent basis sets using
EXTRAPOLATE,BASIS=basislist,options
where basislist is a list of at least two basis sets separated by colons, e.g. AVTZ:AVQZ:AV5Z. Some extrapolation types need three or more basis sets, others only two. The default is to use $n^{-3}$ extrapolation of the correlation energies, and in this case two subsequent basis sets and the corresponding energies are needed. The default is not to extrapolate the reference (HF) energies; the value obtained with the largest basis set is taken as reference energy for the CBS estimate. However, extrapolation of the reference is also possible by specifying the METHOD_R option.
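The default two-point $n^{-3}$ extrapolation has a closed form: eliminating $A$ from $E_n = E_{\tt CBS} + A\,n^{-3}$ at two cardinal numbers $m < n$ gives $E_{\tt CBS} = (n^3 E_n - m^3 E_m)/(n^3 - m^3)$. A minimal Python sketch of this formula (illustrative only, not Molpro input; the energies below are made-up model values):

```python
def cbs_two_point(e_corr_small, e_corr_large, n_small, n_large):
    """Two-point CBS limit assuming E(n) = E_CBS + A / n**3."""
    w_small, w_large = n_small ** 3, n_large ** 3
    return (w_large * e_corr_large - w_small * e_corr_small) / (w_large - w_small)

# Synthetic check: energies generated exactly from the model are recovered.
e_cbs_true, a = -0.31, 0.5          # hypothetical model parameters
e3 = e_cbs_true + a / 3 ** 3        # "AVTZ" (cardinal number n = 3)
e4 = e_cbs_true + a / 4 ** 3        # "AVQZ" (cardinal number n = 4)
print(cbs_two_point(e3, e4, 3, 4))  # ≈ -0.31
```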
The simplest way to perform extrapolations for standard methods like MP2 or CCSD(T) is to use, e.g.
***,H2O
memory,32,m
gthresh,energy=1.d-8
r = 0.9572 ang, theta = 104.52
geometry={O;
H1,O,r;
H2,O,r,H1,theta}
basis=avtz
hf
ccsd(t)
extrapolate,basis=avqz:av5z
table,basissets,energr,energy-energr,energy
head,basis,ehf,ecorr,etot
This will perform the first calculation with AVTZ basis, and then compute the estimated basis set limit using the AVQZ and AV5Z basis sets. The correlation energy obtained in the calculation that is performed immediately before the extrapolate command will be extrapolated (in this case the CCSD(T) energy), and the necessary sequence of calculations [here HF;CCSD(T)] will be automatically carried out.
The resulting energies are returned in variables ENERGR (reference energies), ENERGY (total energies), and ENERGD (Davidson corrected energy if available); the corresponding basis sets are returned in variable BASISSETS. The results can be printed, e.g., in a table as shown above, or used otherwise. The above input produces the table
BASIS EHF ECORR ETOT
AVQZ -76.06600082 -0.29758099 -76.36358181
AV5Z -76.06732050 -0.30297495 -76.37029545
CBS -76.06732050 -0.30863418 -76.37595468
The extrapolated total energy is also returned in variable ECBS (ECBSD for Davidson corrected energy if available).
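As an illustrative cross-check (not Molpro input), the CBS row of the table above follows from the AVQZ/AV5Z entries via the two-point $n^{-3}$ formula; the last-digit differences are just rounding of the printed values:

```python
# Values copied from the table above (cardinal numbers n = 4 and 5).
ecorr_avqz, ecorr_av5z = -0.29758099, -0.30297495
ehf_av5z = -76.06732050  # reference energy: largest basis, not extrapolated

# Two-point n^-3 limit of the correlation energy.
ecorr_cbs = (5**3 * ecorr_av5z - 4**3 * ecorr_avqz) / (5**3 - 4**3)
etot_cbs = ehf_av5z + ecorr_cbs

print(round(ecorr_cbs, 8))  # ≈ -0.30863419 (table: -0.30863418)
print(round(etot_cbs, 8))   # ≈ -76.37595469 (table: -76.37595468)
```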
In order to extrapolate the HF energy as well (using exponential extrapolation), three energies are needed. One can modify the input as follows:
extrapolate,basis=avtz:avqz:av5z,method_r=ex1,npc=2
method_r determines the method for extrapolating the reference energy (in this case a single exponential); npc=2 means that only the last two energies should be used to extrapolate the correlation energy (by default, a least square fit to all given energies is used). This yields
BASIS EREF ECORR ETOT
AVTZ -76.06061330 -0.28167606 -76.34228936
AVQZ -76.06600082 -0.29758099 -76.36358180
AV5Z -76.06732050 -0.30297495 -76.37029545
CBS -76.06774863 -0.30863419 -76.37638283
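The single-exponential (EX1) fit also has a closed form for three consecutive energies: with $d_1 = E_4 - E_3$ and $d_2 = E_5 - E_4$, the model $E_n = E_{\tt CBS} + A\exp(-Cn)$ gives $E_{\tt CBS} = E_5 - d_2^2/(d_2 - d_1)$. A short Python check (illustrative only, not Molpro input) against the reference energies printed above:

```python
# HF reference energies from the table above (n = 3, 4, 5).
e3, e4, e5 = -76.06061330, -76.06600082, -76.06732050

d1, d2 = e4 - e3, e5 - e4          # successive differences; d2/d1 = exp(-C)
e_cbs = e5 - d2 * d2 / (d2 - d1)   # closed-form limit of the EX1 model

print(round(e_cbs, 8))  # ≈ -76.06774863 (CBS EREF in the table)
```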
Rather than using the default procedure as above, one can also specify a procedure used to carry out the energy calculation, e.g.
extrapolate,basis=avtz:avqz:av5z,proc=runccsd,method_r=ex1,npc=2
procedure runccsd
hf
ccsd(t)
endproc
Alternatively, the energies can be provided via variables EREF, ECORR, ETOT etc. These must be vectors, holding as many values as basis sets are given.
The possible options and extrapolation methods are:
• BASIS=basissets Specify a set of correlation consistent basis sets, separated by colons.
• PROC=procname Specify a procedure to run the energy calculations
• STARTCMD=command Start command for the energy calculations: the sequence of commands from STARTCMD and the current EXTRAPOLATE is run. STARTCMD must come before the extrapolate command in the input.
• METHOD=key Specifies a keyword to define the extrapolation function, see section extrapolation functionals.
• METHOD_C=key Specifies a keyword to define the extrapolation function for the correlation energy, see section extrapolation functionals.
• METHOD_R=key Specifies a keyword to define the extrapolation function for the reference energy, see section extrapolation functionals.
• VARIABLE=name Specifies a variable name; this variable should contain the energies to be extrapolated.
• ETOT=variable Provide the total energies in variable (a vector with the same number of energies as basis sets are given) If only ETOT but not EREF is given, the total energy is extrapolated.
• EREF=variable Provide the reference energies to be extrapolated in variable (a vector with the same number of energies as basis sets are given)
• ECORR=variable Provide the correlation energies to be extrapolated in variable (a vector with the same number of energies as basis sets are given)
• ECORRD=variable Provide the Davidson corrected correlation energies to be extrapolated in variable (a vector with the same number of energies as basis sets are given). If both ECORR and ECORRD are given, both will be extrapolated.
• MINB=number First basis set to be used for extrapolation (default 1)
• MAXB=number Last basis set to be used for extrapolation (default number of basis sets)
• NPR=number If given, the last NPR values are used for extrapolating the reference energy. NPR must be smaller or equal to the number of basis sets.
• NPC=number If given, the last NPC values are used for extrapolating the correlation energy. NPC must be smaller or equal to the number of basis sets.
• XR=array Provide a vector of exponents to be used for defining the extrapolation functional for the reference energy when using the LX functional.
• XC=array Provide a vector of exponents to be used for defining the extrapolation functional for the correlation energy when using the LX functional.
• PR=array Provide the constant $p$ to be used for defining the extrapolation functional for the reference energy.
• PC=array Provide the constant $p$ to be used for defining the extrapolation functional for the correlation energy.
The extrapolation functional is chosen by a keyword with the METHOD, METHOD_R, and/or METHOD_C options. The default functional is L3. In the following, $n$ is the cardinal number of the basis set (e.g., 2 for VDZ, 3 for VTZ, etc.), and $x$ is an arbitrary number. $p$ is a constant given either by the PR or PC options (default $p=0$). X is a number or a vector given either by the XR or XC options (only for LX; $nx$ is the number of elements provided in X). $A$, $B$, $A_i$ are the fitting coefficients that are optimized by least-squares fits.
• L$x$ $E_{n} = E_{\tt CBS} + A \cdot (n+p)^{-x}$
• LH$x$ $E_{n} = E_{\tt CBS} + A \cdot (n+\frac{1}{2})^{-x}$
• LX $E_{n} = E_{\tt CBS} + \sum_{i=1}^{nx} A_i \cdot (n+p)^{-x(i)}$
• EX1 $E_{n} =E_{\tt CBS}+A\cdot \exp(-C\cdot n)$
• EX2 $E_{n} =E_{\tt CBS}+A\cdot \exp(-(n-1))+B\cdot\exp(-(n-1)^2)$
• KM Two-point formula for extrapolating the HF reference energy, as proposed by A. Karton and J. M. L. Martin, Theor. Chem. Acc. 115, 330 (2006): $E_{\rm HF,n}=E_{\rm HF,CBS} +A (n+1)\cdot \exp(-9 \sqrt{n})$. Use METHOD_R=KM for this.
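Each of the two-parameter forms above reduces to a small linear solve once the basis-set-dependent factor is fixed. As an illustration (a sketch with made-up numbers, not Molpro code), the KM reference-energy formula solved from the two points $n=3,4$:

```python
import math

def km_two_point(e3, e4):
    """Solve E_n = E_CBS + A*(n+1)*exp(-9*sqrt(n)) from the n = 3, 4 energies."""
    f = lambda n: (n + 1) * math.exp(-9 * math.sqrt(n))
    a = (e3 - e4) / (f(3) - f(4))
    return e3 - a * f(3)

# Synthetic check with hypothetical parameters: the known limit is recovered.
e_cbs_true, a_true = -76.067, 0.05
e3 = e_cbs_true + a_true * 4 * math.exp(-9 * math.sqrt(3))
e4 = e_cbs_true + a_true * 5 * math.exp(-9 * math.sqrt(4))
print(km_two_point(e3, e4))  # ≈ -76.067
```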
The following example shows various possibilities for extrapolation:
examples/h2o_extrapolate_ccsd.inp
***,h2o
gthresh,energy=1.d-9
basis=avtz
r = 0.9572 ang, theta = 104.52
geometry={!nosym
O;
H1,O,r;
H2,O,r,H1,theta}
hf
{ccsd(t)}
text,compute energies, extrapolate reference energy using EX1 and correlation energy using L3
extrapolate,basis=avtz:avqz:av5z,method_c=l3,method_r=ex1,npc=2
ehf=energr(1:3)
etot=energy(1:3)
text,extrapolate total energy using EX2
extrapolate,basis=avtz:avqz:av5z,etot=etot,method=ex2
text,extrapolate reference energy by EX1 and correlation energy by EX2
extrapolate,basis=avtz:avqz:av5z,etot=etot,method_c=ex2,eref=ehf,method_r=ex1
text,extrapolate reference energy by EX1 and correlation energy by LH3
extrapolate,basis=avtz:avqz:av5z,etot=etot,method_c=LH3,eref=ehf,method_r=ex1,npc=2
text,extrapolate reference energy by EX1 and correlation energy by LX
extrapolate,basis=avtz:avqz:av5z,etot=etot,method_c=LX,eref=ehf,method_r=ex1,xc=[3,4],pc=0.5
The second example shows extrapolations of MRCI energies. In this case both the MRCI and the MRCI+Q energies are extrapolated.
examples/h2o_extrapolate_mrci.inp
***,h2o
gthresh,energy=1.d-9
basis=avtz
r = 0.9572 ang, theta = 104.52
geometry={
O;
H1,O,r;
H2,O,r,H1,theta}
hf
multi
mrci
text,Compute energies, extrapolate reference energy using EX1 and correlation energy using L3;
text,The Davidson corrected energy is also extrapolated
extrapolate,basis=avtz:avqz:av5z,method_c=l3,method_r=ex1,npc=2
emc=energr
ecorr_mrci=energy-emc
ecorr_mrciq=energd-emc
text,Extrapolate reference energy by EX1 and correlation energy by LH3
text,The Davidson corrected energy is also extrapolated
extrapolate,basis=avtz:avqz:av5z,ecorr=ecorr_mrci,ecorrd=ecorr_mrciq,method_c=LH3,eref=emc,method_r=ex1,npc=2
Geometry optimizations are possible by using numerical gradients obtained from extrapolated energies. Analytical energy gradients cannot be used.
The following possibilities exist:
1.) If OPTG directly follows the EXTRAPOLATE command, the extrapolated energy is optimized automatically (only variable settings may occur between EXTRAPOLATE and OPTG).
Examples:
Extrapolating the energy for the last command:
examples/h2o_extrapol_opt1.inp
geometry={o;h1,o,r;h2,o,r,h1,theta}
theta=102
r=0.96 ang
basis=vtz
hf
ccsd(t)
extrapolate,basis=vtz:vqz
optg
Extrapolating the energy computed in a procedure:
examples/h2o_extrapol_opt2.inp
geometry={o;h1,o,r;h2,o,r,h1,theta}
theta=102
r=0.96 ang
proc ccsdt
hf
ccsd(t)
endproc
extrapolate,basis=vtz:vqz,proc=ccsdt
optg
Note that this is not possible if EXTRAPOLATE gets the input energies from variables.
2.) Using a procedure for the extrapolation:
By default, variable ECBS is optimized, but other variables (e.g. ECBSD) can be specified using the VARIABLE option on the OPTG command.
examples/h2o_extrapol_opt3.inp
geometry={o;h1,o,r;h2,o,r,h1,theta}
theta=102
r=0.96 ang
basis=vtz
proc cbs34
hf
ccsd(t)
extrapolate,basis=vtz:vqz
endproc
optg,variable=ecbs,proc=cbs34
examples/h2o_extrapol_opt4.inp
geometry={o;h1,o,r;h2,o,r,h1,theta}
theta=102
r=0.96 ang
proc cbs34
basis=vtz
hf
ccsd(t)
eref(1)=energr
ecc(1)=energy
basis=vqz
hf
ccsd(t)
eref(2)=energr
ecc(2)=energy
extrapolate,basis=vtz:vqz,eref=eref,etot=ecc
endproc
optg,variable=ecbs,proc=cbs34
Harmonic vibrational frequencies can also be computed from extrapolated energies. This is possible by defining the extrapolation in a procedure:
examples/h2o_extrapol_freq.inp
geometry={o;h1,o,r;h2,o,r,h1,theta}
theta=102
r=0.96 ang
basis=vtz
proc cbs34
hf
ccsd(t)
extrapolate,basis=vtz:vqz
endproc
optg,variable=ecbs,proc=cbs34
freq,variable=ecbs,proc=cbs34 | 2021-04-22 13:20:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8475393652915955, "perplexity": 3181.997126338697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039610090.97/warc/CC-MAIN-20210422130245-20210422160245-00389.warc.gz"} |
https://math.stackexchange.com/questions/3870139/defining-a-topology-on-the-plane | # Defining a topology on the plane
1. Define a new topology on the plane as follows. If p is a point on the plane, we say that U is a neighborhood of p if every straight line that contains p also contains an open line segment that is contained in U. Compare this topology with the usual topology on the plane (i.e., is it strictly finer? strictly coarser? neither? are the two topologies equivalent?)
I am trying to define this topology.
I have $$\tau=\{U \subseteq X |\text{ if } (a,b)\cap p\in U, \text{ then } U \in N_{p}\}$$ where $$N_{p}$$ is the neighborhood system at p.
Is this correct? If so, am I writing it right? Something about it feels off. | 2020-10-31 18:57:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.910239040851593, "perplexity": 223.6604157043794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922411.94/warc/CC-MAIN-20201031181658-20201031211658-00558.warc.gz"} |
https://physics.paperswithcode.com/paper/gluon-bound-state-and-asymptotic-freedom | ## Gluon bound state and asymptotic freedom derived from the Bethe--Salpeter equation
21 Apr 2017 · Fukamachi Hitoshi, Kondo Kei-Ichi, Nishino Shogo, Shinohara Toru ·
In this paper we study the two-body bound states for gluons and ghosts in a massive Yang-Mills theory which is obtained by generalizing the ordinary massless Yang-Mills theory in a manifestly Lorentz covariant gauge. First, we give a systematic derivation of the coupled Bethe-Salpeter equations for gluons and ghosts by using the Cornwall-Jackiw-Tomboulis effective action of the composite operators within the framework of the path integral quantization. Then, we obtain the numerical solutions for the Bethe-Salpeter amplitude representing the simultaneous bound states of gluons and ghosts by solving the homogeneous Bethe-Salpeter equation in the ladder approximation. We study how the inclusion of ghosts affects the two-gluon bound states in the cases of the standing and running gauge coupling constant. Moreover, we show explicitly that the approximate solutions obtained for the gluon-gluon amplitude are consistent with the ultraviolet asymptotic freedom signaled by the negative $\beta$ function.
# Categories
High Energy Physics - Theory High Energy Physics - Phenomenology | 2021-07-28 08:32:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8043383955955505, "perplexity": 1813.2814900553656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153531.10/warc/CC-MAIN-20210728060744-20210728090744-00543.warc.gz"} |
http://mathoverflow.net/questions/13857/schwartz-space-dense-in-l2 | # Schwartz space dense in L2
How can one prove that $S( \mathbb{R}^{d})$, the Schwartz space, is dense in $L^{2}(\mathbb{R}^{d})$?
A slightly stronger result is that $C_{0}^{\infty}( \mathbb{R}^{d})$, the set of infinitely differentiable functions with compact support, is dense in $L^{2}(\mathbb{R}^{d})$, but I think this is probably easy to show once it has been proved that $S( \mathbb{R}^{d})$ is dense in $L^{2}(\mathbb{R}^{d})$.
Bump functions approximate piecewise linear functions, which are automatically dense in L^2. – Qiaochu Yuan Feb 2 '10 at 19:46
...but to make it an answer, $C_0^\infty$ is uniformly dense in the continuous functions that vanish at infinity by Stone-Weierstrass, whence also dense in $L_2$. – Bill Johnson Feb 2 '10 at 23:27
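A numeric sketch of the mollification idea behind these comments (Python/NumPy, added purely for illustration — the thread itself has no code): convolve the indicator of $[0,1]$ with a normalized $C_0^\infty$ bump and the $L^2$ error shrinks with the bump's support.

```python
import numpy as np

dx = 1e-3
x = np.arange(-1.0, 2.0, dx)
f = ((x >= 0.0) & (x <= 1.0)).astype(float)       # indicator of [0,1]: in L^2, not smooth

def mollified(eps):
    """Convolve f with the standard C_0^infty bump supported on |s| < eps."""
    s = np.arange(-eps, eps + dx, dx)
    inside = np.abs(s) < eps
    phi = np.zeros_like(s)
    phi[inside] = np.exp(-1.0 / (1.0 - (s[inside] / eps) ** 2))
    phi /= phi.sum() * dx                          # normalize so integral(phi) = 1
    return np.convolve(f, phi, mode="same") * dx   # discrete convolution f * phi

def l2_error(g):
    return np.sqrt(np.sum((f - g) ** 2) * dx)

errs = [l2_error(mollified(eps)) for eps in (0.2, 0.05)]
print(errs)                                        # error decreases as eps shrinks
```

Each mollification is smooth with compact support, and the $L^2$ error goes to zero as $\varepsilon \to 0$ — which is the standard density argument sketched above.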
https://zenodo.org/record/4658962/export/dcite4 | Other Open Access
# ATLAS Deliverable 5.4: Willingness to pay for conservation in the North Atlantic deep-sea ecosystems
Ankamah-Yeboah, I; Xuan, BB; Hynes, S; Needham, K; Armstrong, CW
### DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<resource>
<identifier identifierType="DOI">10.5281/zenodo.4658962</identifier>
<creators>
<creator>
<creatorName>Ankamah-Yeboah, I</creatorName>
<givenName>I</givenName>
<familyName>Ankamah-Yeboah</familyName>
</creator>
<creator>
<creatorName>Xuan, BB</creatorName>
<givenName>BB</givenName>
<familyName>Xuan</familyName>
</creator>
<creator>
<creatorName>Hynes, S</creatorName>
<givenName>S</givenName>
<familyName>Hynes</familyName>
</creator>
<creator>
<creatorName>Needham, K</creatorName>
<givenName>K</givenName>
<familyName>Needham</familyName>
</creator>
<creator>
<creatorName>Armstrong, CW</creatorName>
<givenName>CW</givenName>
<familyName>Armstrong</familyName>
</creator>
</creators>
<titles>
<title>ATLAS Deliverable 5.4: Willingness to pay for conservation in the North Atlantic deep-sea ecosystems</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2021</publicationYear>
<dates>
<date dateType="Issued">2021-04-01</date>
</dates>
<resourceType resourceTypeGeneral="Other"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/4658962</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.4658961</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/atlas</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>This report presents an assessment of how the public perceives, and values deep-sea ecosystem services in the North Atlantic, and provides a foundation for evaluating and balancing Blue Growth with conservation management in the deep sea. Nonmarket valuation is used to evaluate public perceptions of the deep sea environment and the socio-economic values of new marine management plans. This report presents the results of two discrete choice experiment surveys that were employed to assess the values held by the Scottish and Norwegian public for the Mingulay reef complex and Hola off Lofoten-Vesterålen (LoVe), respectively.<br>
Regarding public perception, the results show that public knowledge and awareness of deep-sea ecosystems is relatively higher among Norwegians than among the Scottish public. Specifically, awareness of cold-water corals is high for the LoVe case study amongst the Norwegian public and low for the Mingulay reef complex in the Scottish case. Despite this limited knowledge, many respondents thought changes in the deep sea would have at least some effect on them personally. On average, the public perceives deep-sea conditions to be at most 'fairly good' but are pessimistic about its management: a significantly higher share, 76% of Norwegians perceive the deep sea to be poorly-managed compared to 12% of those surveyed in Scotland.<br>
Results from both countries highlight eco-centric attitudes towards the marine environment, implying that the general public recognise the value of ecosystem services, the current ecological crisis and the need for sustainable management. Demographic profiles of respondents and their experiences play influential roles, with exposure to media-art like the Blue-Planet II series showing prominence in most perception dimensions.<br>
To determine whether the perceived public support translates into monetary support for new management scenarios, a discrete choice experiment was conducted to assess trade-offs for improvement in a number of deep-sea environment attributes; environmental health and quality, an increase in the size of marine protected areas (MPAs) and new marine related job creation. Latent class logit results revealed two distinct groups of public preferences: a minority of respondents who derive minimal value from the marine environment and a second group who exhibit significant positive preferences for all the management attributes and exhibit strong preferences for new policy options.<br>
The most valued of the new policy attributes were those related to the key pressures of the marine environment: commercial fish stocks and marine litter, designated as Descriptors 3 and 10 respectively in the GES of the MSF Directive. This was followed by the size of the marine protected area, whilst the creation of jobs is the least valued. Overall, however, weighted average willingness to pay estimates indicate that the public in both countries is willing to pay to support conservation of the unfamiliar deep-sea ecosystem irrespective of the individual attributes delivered in a new marine management plan. The results highlight the importance of the deep-sea ecosystems to the public and provide support for further collective action required by the EU in moving beyond the 2020 Marine Strategy Framework Directive (MSFD) objective of achieving good environmental status for Europe's seas.</p></description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/678760/">678760</awardNumber>
<awardTitle>A Trans-AtLantic Assessment and deep-water ecosystem-based Spatial management plan for Europe</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
https://dsp.stackexchange.com/questions/25133/how-to-take-ifft-of-connes-window | # How to take IFFT of Connes Window?
The Connes window function is defined as:
$w(f)=\left(1-\left(\frac{f}{\Delta f}\right)^2\right)^2$ for $|f| < \Delta f$
$w(f)=0$ otherwise
The inverse Fourier transform of this function can be analytically calculated to be a purely real-valued function.
However, performing the IFFT using MATLAB gives me a complex function with real and imaginary parts. The code I used is the following:
delta_f=10;
fs=300;
nCyl = 5;
t=0:1/fs:nCyl*1/delta_f;
x=(1-(t/delta_f).^2).^2;
plot(t,x);
NFFT=1024;
X=fftshift(ifft(x,NFFT));
fVals=fs*(-NFFT/2:NFFT/2-1)/NFFT;
plot(fVals,real(X),'b');
I think this discrepancy has something to do with the function being piecewise. Is there something which I am missing?
• is the last sample equal to the first sample? it shouldn't be. it should be cyclic. ifft([1,2,3,4,3,2,1]) is complex but ifft([1,2,3,4,3,2]) is real – endolith Aug 9 '15 at 5:35
• no, it still gives imaginary values. but why does that happen to the example you gave? shouldn't the same function have the same inverse Fourier transform? – rsujatha Aug 9 '15 at 5:47
• [1,2,3,4,3,2] is symmetrical, with the first sample occurring right after the last sample. So is [1,0,0,2,0,0], for instance. Anything of the form [a,b,c,d,e,d,c,b] will have a real transform. – endolith Aug 19 '15 at 13:44 | 2019-10-23 14:38:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7340736985206604, "perplexity": 1249.6067698533766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00381.warc.gz"} |
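To make endolith's point concrete (a Python/NumPy translation of the idea, added for illustration — the original question uses MATLAB): the IFFT is real exactly when the spectrum is circularly even, i.e. $X[k] = X[(N-k) \bmod N]$, and sampling the Connes window on the `fftfreq` grid produces exactly such a spectrum.

```python
import numpy as np

# A wraparound-symmetric spectrum gives a real IFFT; appending a trailing 1
# breaks the circular symmetry and the result picks up an imaginary part.
a = np.fft.ifft([1, 2, 3, 4, 3, 2])      # circularly even     -> real
b = np.fft.ifft([1, 2, 3, 4, 3, 2, 1])   # not circularly even -> complex

# Sampling the Connes window on the FFT frequency grid (fftfreq returns the
# +f/-f pairs already in wraparound order) makes the spectrum circularly even,
# so its IFFT is purely real, matching the analytic result.
N, fs, delta_f = 1024, 300.0, 10.0
freqs = np.fft.fftfreq(N, d=1.0 / fs)
w = np.where(np.abs(freqs) < delta_f, (1.0 - (freqs / delta_f) ** 2) ** 2, 0.0)
x = np.fft.ifft(w)
print(np.max(np.abs(a.imag)), np.max(np.abs(b.imag)), np.max(np.abs(x.imag)))
```

The residual imaginary parts in `a` and `x` are pure floating-point roundoff, while `b` has an imaginary part of order 0.1.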
https://groups.google.com/g/sci.physics.research/c/aiMUJrOjE8A/m/0OLoWQrV7UcJ | # Planck Units and (slowly) changing Fine-Structure Constant
### robert bristow-johnson
Sep 6, 2001, 7:35:12 PM
Background (so youse guys don't talk over my head): i'm an EE
designing DSP algorithms to process audio/music for a living (i'm kinda a
guru on comp.dsp). i'm pretty good at most of the math physicists are good
at but pretty ignorant of deeper current physical theories (anything past
classical, special relativity, intro to quantum mechanics) like
string-theory, etc. i wanted to ask an abbreviated version of this question
on Science-Friday 2 weeks ago, but they didn't take my call.
anyway, about 2 decades ago i have been thinking along some of the same
lines as Planck (who did this about a hundred
years previously) about a "Universal" set of physical units that would
basically make the Universal Constants equal to unity if expressed in terms
of those units. of course i come up with the Planck Units (except i also
came up with a unit charge not equal to the electron charge).
the governing basic formulae are:
c = L/T
E = M * c^2
E = Hbar * (1/T)
E = G * M^2 / L ( or F = G * M^2 / L^2 with E = F*L )
E = k * Q^2 / L ( or F = k * Q^2 / L^2 with E = F*L )
since k = 1/(4*pi*e0) and c^2 = 1/(e0*u0) and u0 has a defined value (in
SI it's 4*pi * 10^-7 because an ampere was defined as a current that would
yield a force of 10^-7 Nt/m for a pair of infinitely long wires spaced 1
meter apart), anyway, in that case the last equation (Coulomb static force)
becomes
E = (c^2 * u0/(4*pi)) * Q^2 / L
now if we leave off the question of unit charge for the time being and solve
the first four equations to get the four unknowns (L, M, T, E), you get the
Planck values which is what you decided to pick for your fundamental units.
L = sqrt( Hbar * G / c^3 )
M = sqrt( Hbar * c / G )
T = sqrt( Hbar * G / c^5 )
( and consequently E = sqrt( Hbar * c^5 / G ) )
from my POV, the compelling reason to pick those for our fundamental
universal units is that the other fundamental constants c, Hbar, and G all
become unity in terms of those units.
now here's what i thought was a bit interesting coming from a different POV
than most everyone else: rather than just define the unit charge to be the
electronic charge (e), i wanted the Coulomb electrostatic force constant to
be unity also and chose the unit charge to make that happen:
Q = sqrt( (4*pi/u0) * Hbar / c )
which comes out to be 11.70623764 * e, or the square root of alpha^-1 times
the electronic charge where alpha is the Fine-Structure Constant.
Q/e = sqrt( (4*pi/u0) * Hbar / c )/e = 1/sqrt(alpha)
==> alpha = u0/(4*pi) * e^2 * c / Hbar .
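(these relations are easy to check numerically — a Python sketch added for illustration, not part of the original post; the constants are current CODATA/SI values, and note that in post-2019 SI u0 is only approximately 4*pi*10^-7):

```python
import math

c    = 299_792_458.0           # m/s (exact)
hbar = 1.054_571_817e-34       # J*s
G    = 6.674_30e-11            # m^3 kg^-1 s^-2
e    = 1.602_176_634e-19       # C (exact in SI 2019)
mu0  = 1.256_637_062e-6        # N/A^2

L = math.sqrt(hbar * G / c**3)                  # Planck length ~ 1.6e-35 m
M = math.sqrt(hbar * c / G)                     # Planck mass   ~ 2.2e-8  kg
T = math.sqrt(hbar * G / c**5)                  # Planck time   ~ 5.4e-44 s
Q = math.sqrt(4 * math.pi * hbar / (mu0 * c))   # the unit charge above

alpha = mu0 / (4 * math.pi) * e**2 * c / hbar   # fine-structure constant
Z0 = 4 * math.pi * hbar * alpha / e**2          # impedance of free space, ohms

print(Q / e)               # ~ 11.7062
print(1 / alpha)           # ~ 137.036
print(Z0)                  # ~ 376.73
```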
anyway, i thought that this was kinda neat that this relationship between
what i would call the natural unit of charge to the electronic charge (both
very fundamental and universal quantities) would be related this way.
likewise if you choose e to be your fundamental unit (as do most) then you
do have to have a non-unity Coulomb force constant which is 1/alpha (about
137.036), but that isn't so ugly either. it's just a question of which unit
is "more" fundamental and that's a toss up i guess.
Now, finally, it seems that we must perceive reality in terms of the Planck
Units (T, L, M) and perhaps the unit charge Q. if the Planck Length went
from 10^-35 to 10^-32, then we would be 2000 meters tall instead of 2 but
our meter stick would be 1000 meters and we would still call it a "meter"
and the Planck Length would still be about 10^-35. same with time and mass,
but what about charge???
if we perceive reality in terms of the Planck Units and the electronic
charge, e, then a change in the Fine-Structure Constant, alpha, would be
noticed as a change in the Coulomb Force constant since
F = c^2*u0/(4*pi*alpha) * q^2/r^2
where q is in units of e.
and we would also notice a change in the Characteristic Impedance of free space,
Z0 = 4*pi*Hbar*alpha/e^2 .
But, if OTOH, we perceived reality in terms of the Planck Units and the
charge Q = e/sqrt(alpha), then a change in alpha would be noticed as a
change in the charge of an electron.
so which is it? or does it make any difference?
i'll try to monitor this newsgroup but feel free to CC: me at
<rob...@wavemechanics.com> if you want to make sure i see a response.
--
r b-j
Wave Mechanics, Inc.
45 Kilburn St.
Burlington VT 05401-4750
--
### Lubos Motl
Sep 9, 2001, 11:06:23 PM
Dear Robert,
comp.dsp people have a guru who is not silly. It is fine that you could
re-discover Planck units. By the way, in Planck units - where you put
epsilon0=mu0=1, speaking in SI units -, the charge of the electron is
really equal to -sqrt(4.pi/137.036...) just like you say.
Physicists are however convenient and they usually express the charge in
units of (minus) the charge of the electron so that it is always integer
(or integer over three, for quarks). You cannot say that one choice of
units is objectively more fundamental; it is a matter of taste. Both are
certainly more fundamental than using Coulombs. However it depends on your
feelings. Anyway you can see that you cannot get rid of the number
1/137.036..., the fine structure constant. It is a dimensionless number
without any units. Therefore it does not depend on our choice of the
units. And there should be some explanation for its value!
We understand this number in terms of more fundamental constants of the
electroweak theory (g and g'), measured at higher energies (instead of
zero energies - as alpha), but a complete calculation yielding
1/137.036... is still missing. String theory is believed to be capable to
derive its value one day.
Particle physicists usually measure the charge so that the electron has
Q=-1. But then they must include the fine structure constant into the
definition of the energy. The energy density - or the Lagrangian (which is
something related that has the same dimension) - is defined as 1/g^2 times
E^2/2 etc. in the conventions where Q=-1 for the electron.
Either you say that the minimal charge is some strange number (instead of
1), or you can say that it is one but the energy density is not defined as
E^2/2 but this times a strange constant. You cannot get rid of the
constant at both places simultaneously. In fact, both conventions are used
by particle physicists sometimes. It causes a lot of confusion but there
are more difficult problems in the world. ;-)
> Now, finally, it seems that we must perceive reality in terms of the Planck
> Units (T, L, M) and perhaps the unit charge Q. if the Planck Length went
> from 10^-35 to 10^-32, then we would be 2000 meters tall instead of 2 but
It is correct that you invite us to perceive reality in Planck units, but
you do not do it yourself. If you did so, you would rather say: if one
meter was defined not as 10^35 Planck lengths but only 10^32 Planck
lengths, then we would be 2000 meters tall instead of 2 (anyway, two is
also too much) :-) - because everyone knows that a human being must be
about 2.10^35 Planck lengths tall in order to have the right number of
atoms.
> our meter stick would be 1000 meters and we would still call it a "meter"
> and the Planck Length would still be about 10^-35. same with time and mass,
> but what about charge???
And you would also realize that you can say the same sentence with the
charge, too. Your problem is that you omitted the units. The Planck length
is not 10^-35. The Planck length is 10^-35 meters. And a meter is a random
consequence of history; a practical unit in everyday life but a silly unit
without any depth; 10^35 Planck lengths, in a more fundamental language.
You can play the same game with Coulombs instead of meters and the result
is similar; you should remember that you are redefining meters and
Coulombs, not Planck length etc. Planck length is equal to one in natural
units and cannot be redefined.
However maybe you did not want to talk about Coulombs but the two
different conventions for the fundamental unit of charge. Well, if you
changed the number 1/137.036 (contained in the ratio of your two
"fundamental" units) to something else, the world would certainly change
dramatically! In fact, life would be killed if you changed the number by
less than one per cent.
The fine structure constant can be seen at many places. For example, the
(squared) speed of electrons in the hydrogen atom is roughly 1/137 of the
(squared) speed of light. As a consequence of this, the spectrum of the
hydrogen atoms have the famous lines with energies 1/n^2, but if you look
at the lines with a better resolution, you find out that they are
separated to several sublines; they form the so-called fine structure of
the Hydrogen spectrum. The distance between the main lines of the spectrum
is - if I simplify - 137 times bigger than the distance between the lines
in the fine structure; therefore the name. If 137 was replaced by 10, the
spectrum would look completely different, most known nuclei would decay
radioactively (because proton repel each other electromagnetically and
this force would be stronger than the extra "chromostatic" attraction
between quarks - in our world, the electromagnetism is weaker and the
attraction by gluons wins). Simply, it would be a different world. A
couple of dimensionful numbers can be changed without changing the world
(at most, their number can equal to the number of independent units); it
just corresponds to redefining your units. However you cannot change
dimensionless numbers. One is always one (for example, it is equal to its
square) and cannot be redefined to be three. On the contrary, there are
three definitions of quarks and leptons and you cannot redefine this
number to five. :-)
Best wishes
Lubos
______________________________________________________________________________
E-mail: lu...@matfyz.cz Web: http://www.matfyz.cz/lumo tel.+1-805/893-5025
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Superstring/M-theory is the language in which God wrote the world.
### robert bristow-johnson
Sep 10, 2001, 8:17:02 PM
Lubos Motl <mo...@physics.rutgers.edu> wrote in message news:<Pine.SOL.4.10.101090...@physsun3.rutgers.edu>...
> Dear Robert,
>
> comp.dsp people have a guru who is not silly.
Cf:
they were mad at me for saying that the dependent variable of the
dirac-delta function must be dimensionally in reciprocal units of the
independent variable since the integral must be 1 (dimensionless).
> It is fine that you could
> re-discover Planck units. By the way, in Planck units - where you put
> epsilon0=mu0=1, speaking in SI units -, the charge of the electron is
> really equal to -sqrt(4.pi/137.036...) just like you say.
actually, i would put mu0 = 4*pi and epsilon0 = 1/(4*pi) so that the
simple Coulomb force equation has a constant = 1. same for the
gravitational force equation.
> Physicists are however convenient and they usually express the charge in
> units of (minus) the charge of the electron so that it is always integer
> (or integer over three, for quarks).
which is one reason (caveat: i don't really know diddley about quarks)
because of this e/3 charge thing that i didn't like about setting the
unit charge to be e.
> You cannot say that one choice of
> units is objectively more fundamental; it is a matter of taste. Both are
> certainly more fundamental than using Coulombs. However it depends on your
> feelings. Anyway you can see that you cannot get rid of the number
> 1/137.036..., the fine structure constant.
of course not. it's just a matter about where one wishes to see it.
> It is a dimensionless number
> without any units. Therefore it does not depend on our choice of the
> units. And there should be some explanation for its value!
1/(exp(0.5*pi^2) - 2)? (off by 70 ppm.) anyway, my curiosity comes
from (being a layman) just hearing that alpha has been measured to
have changed by about 10 ppm (out of 4 ppbillion uncertainty) in the
12 billion years it took for the big-bang background radiation to
befall us. and i'm wondering (if there were a much larger change in
alpha) how that would be noticed. as a change in e? or a change in
epsilon0 and z0? or something else? or is it moot?
> We understand this number in terms of more fundamental constants of the
> electroweak theory (g and g'), measured at higher energies (instead of
> zero energies - as alpha), but a complete calculation yielding
> 1/137.036... is still missing. String theory is believed to be capable to
> derive its value one day.
that'll be interesting.
<snippage of which i understood maybe 1/2>
> > Now, finally, it seems that we must perceive reality in terms of the Planck
> > Units (T, L, M) and perhaps the unit charge Q. if the Planck Length went
> > from 10^-35 to 10^-32, then we would be 2000 meters tall instead of 2 but
>
> It is correct that you invite us to perceive reality in Planck units,
i meant it more as an (naive) observation rather than an invitation.
> but
> you do not do it yourself. If you did so, you would rather say: if one
> meter was defined not as 10^35 Planck lengths but only 10^32 Planck
> lengths, then we would be 2000 meters tall instead of 2 (anyway, two is
> also too much) :-) - because everyone knows that a human being must be
> about 2.10^35 Planck lengths tall in order to have the right number of
> atoms.
well, i tried to say it as such. however our height not only depends
on the number of atoms but their size and the Rydberg constant (or
more precisely, its reciprocal), which depends on e, seems to have
something to say about that.
i would normally think of it as this: we perceive reality (for me it's
just 3D space and time) in terms of, or relative to, the speed of
light, the gravitational constant, Planck's constant, and perhaps the
charge of the electron. so, it seems to me that we cannot really
perceive a change in the speed of light because our sense of length
and time is relative to that. that's why i've always thought that
those "thought experiments" asking "what if the speed of light was 30
miles per hour? what would life be like?" are similar to asking how
many angels dance on the head of a pin.
anyway, if our perception of reality *is* in terms of c, G, and hBar,
then our perception of length, time, and mass must be in terms of the
Planck Units which is a natural reason to use them for theoretical
thinking.
for my "taste", the charge of an electron becomes more secondary being
that it is more of an "object" in the universe and not a parameter of
the universe itself. it seems more logical or "natural" to first
observe the nature of forces of the universe on objects in general,
select appropriate units that would normalize the constants of
proportionality (of the simplest, most basic equations) to one, and
then secondly start looking at some objects (such as atoms and
sub-atomic particle). we sorta do that with Newton's 2nd law: we
don't say that Force is proportional to mass times acceleration
(although it is for |v| << c), we choose our unit of force so that
force *is* mass times acceleration. i would do this for charge also
so that:
E = m (not m * c^2 since we're normalizing c = 1)
E = omega (not hBar*omega for the same reason)
F = m1*m2 / r^2 (not G*m1*m2 / r^2)
and
F = q1*q2 / r^2 (not k*q1*q2 / r^2)
to satisfy the first three, you need to measure length, time, and mass
in units of Planck. to satisfy the fourth in addition, you need to
measure charge in units of e/sqrt(alpha), not e.
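(the claim that these choices of units make all four constants unity checks out numerically — a Python sketch added for illustration, using current CODATA values, not part of the original post:)

```python
import math

c, hbar, G = 299_792_458.0, 1.054_571_817e-34, 6.674_30e-11
mu0 = 1.256_637_062e-6
k = mu0 * c**2 / (4 * math.pi)       # Coulomb constant 1/(4*pi*eps0)

L = math.sqrt(hbar * G / c**3)       # Planck length
M = math.sqrt(hbar * c / G)          # Planck mass
T = math.sqrt(hbar * G / c**5)       # Planck time
Q = math.sqrt(4 * math.pi * hbar / (mu0 * c))   # unit charge e/sqrt(alpha)

# each constant, divided by its dimensional combination of the base units,
# comes out as 1 -- i.e. the four normalized equations above hold
print(c / (L / T))                    # speed of light       -> 1.0
print(hbar / (M * L**2 / T))          # Planck's constant    -> 1.0
print(G / (L**3 / (M * T**2)))        # gravitational const  -> 1.0
print(k / (M * L**3 / (T**2 * Q**2))) # Coulomb constant     -> 1.0
```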
> > our meter stick would be 1000 meters and we would still call it a "meter"
> > and the Planck Length would still be about 10^-35. same with time and mass,
> > but what about charge???
>
> And you would also realize that you can say the same sentence with the
> charge, too. Your problem is that you omitted the units. The Planck length
> is not 10^-35. The Planck length is 10^-35 meters. And a meter is a random
> consequence of history; a practical unit in everyday life but a silly unit
> without any depth; 10^35 Planck lengths, in a more fundamental language.
> You can play the same game with Coulombs instead of meters and the result
> is similar;
yes. and the unit charge is not e but would be e/sqrt(alpha),
correct?
> you should remember that you are redefining meters and
> Coulombs, not Planck length etc. Planck length is equal to one in natural
> units and cannot be redefined.
agreed! it just seems to me that it is not consistent to call the
"Planck charge" (i dunno if the term is really used in your biz) e.
it seems much more consistent to me to call the Planck charge such a
charge that (this is hypothetically since the distances are wildly
small, even for electrons) when two such charges are placed one Planck
length apart, you get one Planck unit of force.
you do that definition first, *then* you do some kind of Millikan
experiment and observe that the charge of an electron appears to be
sqrt(alpha) times that unit charge.
> However maybe you did not want to talk about Coulombs but the two
> different conventions for the fundamental unit of charge. Well, if you
> changed the number 1/137.036 (contained in the ratio of your two
> "fundamental" units) to something else, the world would certainly change
> dramatically! In fact, life would be killed if you changed the number by
> less than one per cent.
well, given the present trend, we have about 12 trillion years left
before life is killed off due to alpha getting "out of bounds".
> The fine structure constant can be seen at many places. For example, the
> (squared) speed of electrons in the hydrogen atom is roughly 1/137 of the
> (squared) speed of light. As a consequence of this, the spectrum of the
> hydrogen atoms have the famous lines with energies 1/n^2, but if you look
> at the lines with a better resolution, you find out that they are
> separated to several sublines; they form the so-called fine structure of
> the Hydrogen spectrum. The distance between the main lines of the spectrum
> is - if I simplify - 137 times bigger than the distance between the lines
> in the fine structure; therefore the name. If 137 was replaced by 10, the
> spectrum would look completely different, most known nuclei would decay
> radioactively (because proton repel each other electromagnetically and
> this force would be stronger than the extra "chromostatic" attraction
> between quarks - in our world, the electromagnetism is weaker and the
> attraction by gluons wins). Simply, it would be a different world.
that i understand. how much different would the world be if alpha
quickly changed by another 10 ppm? BTW, which way did it change in
the last 12 billion years? did it increase or decrease by 10 ppm?
> A
> couple of dimensionful numbers can be changed without changing the world
> (at most, their number can equal to the number of independent units); it
> just corresponds to redefining your units. However you cannot change
> dimensionless numbers.
*that* i understand! (at least that you cannot change dimensionless
numbers by very much without adverse consequences.)
thanks for the response, Lubos.
...
> Superstring/M-theory is the language in which God wrote the world.
some might say that instead "Superstring/M-theory is a language
construct of humankind to try to verbalize and understand what God
was/is doing when He wrote the world." kinda like reading
hieroglyphics. you never know, maybe in 200 years, they'll toss it on
the trash heap with Newton's Laws.
:-/
r b-j
### Lubos Motl
Sep 16, 2001, 1:13:55 PM
On 10 Sep 2001, 1 day before the attacks, robert bristow-johnson wrote:
> they were mad at me for saying that the dependent variable of the
> dirac-delta function must be dimensionally in reciprocal units of the
> independent variable since the integral must be 1 (dimensionless).
And you were right. And if I have come here 3 years ago, there would be
two of us attacked by the people who don't know what is the dimension of
the delta function.
Delta(momentum) has units of 1/momentum because it can be written as the
derivative d stepfunction(momentum) / d momentum. Here, the
stepfunction is 0 or 1, so it is dimensionless, and therefore the only
dimension comes from the momentum in the denominator. Or just like you
say, the integral must be one. Delta is a distribution and this kind of
"function" has always the dimension of 1 / thedimension of its parameter.
> actually, i would put mu0 = 4*pi and epsilon0 = 1/(4*pi) so that the
> simple Coulomb force equation has a constant = 1. same for the
> gravitational force equation.
Right, I prefer to put epsilon0=1 as SI suggests but this difference is a
psychological one. Your convention with epsilon=1/4.pi corresponds to the
Gaussian units (CGS, centimeter-gram-second) in fact.
> which is one reason (caveat: i don't really know diddley about quarks)
> because of this e/3 charge thing that i didn't like about setting the
> unit charge to be e.
Quarks were known much later than the charge of the electron was called
"-e". ;-) Some string theory models admit even more exotic fractions such
as e/11 etc. (see the Chapter 9 of The Elegant Universe) but "e" is the
minimal unit of something that can exist freely and does not require too
huge energies. There would be a lot of mess if we suddenly decided that
the sign "e" should be replaced by "3e" in all the textbooks.
> of course not. it's just a matter about where one wishes to see it.
Exactly.
> 1/(exp(0.5*pi^2) - 2) ? - off by 70 ppm). anyway, my curiosity comes
Great formula. Much better than other people suggested even in their
papers submitted to xxx.lanl.gov. Unfortunately, your formula is most
likely wrong. :-)
> from (being a layman) just hearing that alpha has been measured to
> have changed by about 10 ppm (out of 4 ppbillion uncertainty) in the
> 12 billion years it took for the big-bang background radiation to
> befall us. and i'm wondering (if there were a much larger change in
> alpha) how that would be noticed. as a change in e? or a change in
> epsilon0 and z0? or something else? or is it moot?
Physically, you cannot notice a change in the value of a letter unless you
precisely define what it means. Physically you can however measure
frequencies of the spectral lines of Hydrogen (the rainbow coming from the
Hydrogen contains some "lines", discrete strips of color). And the
distance between two lines in the fine structure is say 137 times smaller
than the big distance between two specific lines. So this ratio would
change. There would be very many things that would change. If it was more
than 1 ppm or so, you would certainly notice.
> well, i tried to say it as such. however our height not only depends
> on the number of atoms but their size and the Rydberg constant (or
> more precisely, its reciprocal), which depends on e, seems to have
> something to say about that.
Given the radius of the atom, maybe you can use biological arguments to say that
beings as smart as we are :-) should better be 10 billion atoms tall.
Therefore a natural unit that they want to choose to measure things will
be 10 billion Angstroms i.e. 1 meter. And because of some relation between
the size of the atom and Planck length, you can also say that we are led
to use units 10^35 Planck lengths (we called this one "one meter",
approximately).
> charge of the electron. so, it seems to me that we cannot really
> perceive a change in the speed of light because our sense of length
> and time is relative to that.
Exactly! This is the correct viewpoint that I was just explaining to
someone else. In fact, the SI units directly reflect this approach. 1
meter is currently defined as 1/299 792 458 light seconds. So if you keep
your definition, the Universe can change in any way but the speed of light
is always fixed. A change of the speed of light is just a change of our
definitions and it is useful to keep it fixed because it implies a
relation between space and time which is so important, because of
relativity.
> that's why i've always thought that
> those "thought experiments" asking "what if the speed of light was 30
> miles per hour? what would life be like?" are similar to asking how
> many angels dance on the head of a pin.
Yes, this question physically means just "how it would look like if we
could move at speeds c/5 or so".
> anyway, if our perception of reality *is* in terms of c, G, and hBar,
> then our perception of length, time, and mass must be in terms of the
> Planck Units which is a natural reason to use them for theoretical
> thinking.
Exactly. However a necessary amount of blood for a hospital is at least a
gallon and therefore we don't use Planck volumes to measure volume, for
instance. Maybe we will use them sometimes...
> E = m (not m * c^2 since we're normalizing c = 1)
> E = omega (not hBar*omega for the same reason)
Right.
> F = m1*m2 / r^2 (not G*m1*m2 / r^2)
> F = q1*q2 / r^2 (not k*q1*q2 / r^2)
Here I would put the usual 4.pi into the denominator as in SI units. The
reason is that 4.pi.r^2 is the surface of sphere - and the electric field
is kind of uniformly distributed over the sphere around your charge. If
you accept my SI conventions with 4.pi, the Maxwell equations (which are
more fundamental, I think) do not have any 4.pi's in them - with your
convention you need to put some 4.pi's into Maxwell's equations. In the
previous case of gravity, we should do the same (with those 4.pi), but in
fact we still use Newton's convention for his constant. A better
denominator could be perhaps 8.pi here. Sometimes a "gravitational
constant" differs from the "Newton's constant" by a factor of 8.pi. All of
this is just a convention.
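For reference, the way the 4.pi migrates between the force law and the field equation in the three common conventions can be tabulated (a standard-textbook summary, not from the thread itself):

```latex
\text{Gaussian (unrationalized):} \quad
  F = \frac{q_1 q_2}{r^2}, \qquad
  \nabla\cdot\mathbf{E} = 4\pi\rho
\\[4pt]
\text{Heaviside-Lorentz (rationalized):} \quad
  F = \frac{q_1 q_2}{4\pi r^2}, \qquad
  \nabla\cdot\mathbf{E} = \rho
\\[4pt]
\text{SI:} \quad
  F = \frac{q_1 q_2}{4\pi\epsilon_0 r^2}, \qquad
  \nabla\cdot\mathbf{E} = \rho/\epsilon_0
```

Either Coulomb's law or Maxwell's equations is free of 4.pi, but not both; the choice is pure convention.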
> yes. and the unit charge is not e but would be e/sqrt(alpha),
> correct?
Correct - up to this convention of 4.pi - I would probably prefer to say
that the unit charge is e/sqrt(4.pi.alpha). Today we say that your
conventions with 4.pi are "not rationalized". :-)
> you do that definition first, *then* you do some kind of Millikan
> experiment and observe that the charge of an electron appears to be
> sqrt(alpha) times your unit charge.
Right, I would say that this is precisely how Millikan did it except for
the various powers of hbar and c that he used everywhere around (and two
of us set them equal to one). His result was something like sqrt(alpha)
times some powers of hbar and c, even without those 4.pi because he was
using your, old conventions.
> well, given the present trend, we have about 12 trillion years left
> before life is killed off due to alpha getting "out of bounds".
Well, and you did not know that we had exactly 1 day before something like
the World War III starts.
I hope that all of you and your families and friends are doing fine and
that the attacks on Tuesday (the day when I defended my thesis) will make
us stronger, not weaker.
God bless you
Lubos
P.S. I am not sure whether the experiments "showing" changing value of
alpha are reliable enough (they contradict some estimates derived from
successes of our theory of the primordial nucleosynthesis) and I don't
know which direction it goes. Sorry that I did not reply to everything.
______________________________________________________________________________
E-mail: lu...@matfyz.cz Web: http://www.matfyz.cz/lumo tel.+1-805/893-5025
### J. J. Lodder
Sep 17, 2001, 3:38:08 AM9/17/01
to
Lubos Motl <mo...@physics.rutgers.edu> wrote:
> Right, I prefer to put epsilon0=1 as SI suggests but this difference is a
> psychological one. Your convention with epsilon=1/4.pi corresponds to the
> Gaussian units (CGS, centimeter-gram-second) in fact.
With rationalised units it is a matter of convention
whether or not to take the 4pi as part of the equation,
or to incorporate it in epsilon_0 or mu_0.
MKSA chooses the second option.
In Heaviside-Lorentz or rationalized natural units
it seems best to write Coulomb's and Ampere's law
with an explicit 4pi in it
-and- say that you have units with epsilon_0 = 1,
if you would be crazy enough to worry about
what epsilon_0 should be in such a unit system.
To be discouraged, IMHO:
Saying that eps_0 equals 4pi in such unit systems.
The eps_0 and mu_0 are artefacts of the MKSA system,
without any physical meaning or interpretation,
and nobody would even think about introducing them
if a more sensible unit system had been chosen, long ago,
without them.
But indeed, conventions only,
Jan
### J. J. Lodder
Sep 16, 2001, 5:00:28 PM9/16/01
to
Lubos Motl <mo...@physics.rutgers.edu> wrote:
> On 10 Sep 2001, 1 day before the attacks, robert bristow-johnson wrote:
> > they were mad at me for saying that the dependent variable of the
> > dirac-delta function must be dimensionally in reciprocal units of the
> > independent variable since the integral must be 1 (dimensionless).
>
> And you were right. And if I have come here 3 years ago, there would be
> two of us attacked by the people who don't know what is the dimension of
> the delta function.
The simplest way to see that is to note that delta(x) must be
homogeneous of degree -1 under scale transformations:
delta(ax) = 1/|a| delta(x),
since integrals involving a delta function should be scale invariant.
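The scaling law is easy to check numerically; a sketch of mine, standing in for the delta with a narrow normalized Gaussian and a midpoint Riemann sum:

```python
import math

def delta_eps(x, eps=1e-3):
    # Narrow normalized Gaussian approximating delta(x)
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def integral(f, lo, hi, n=200000):
    # Composite midpoint rule over [lo, hi]
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a = 3.0
# delta(a x) should integrate to 1/|a|, i.e. delta(ax) = delta(x)/|a|
val = integral(lambda x: delta_eps(a * x), -1.0, 1.0)
print(val)  # close to 1/3
```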
Best,
Jan
Sep 23, 2001, 9:53:10 PM9/23/01
to
Lubos Motl <mo...@physics.rutgers.edu <mailto:mo...@physics.rutgers.edu>> wrote
in message <news:Pine.SOL.4.10.101091...@physsun9.rutgers.edu>...
> On 10 Sep 2001, 1 day before the attacks, robert bristow-johnson wrote:
> > 1/(exp(0.5*pi^2) - 2) ? - off by 70 ppm). anyway, my curiosity comes
>
> Great formula. Much better than other people suggested even in their
> papers submitted to xxx.lanl.gov. Unfortunately, your formula is most
> likely wrong. :-)
Hello Lubos, and congratulations to your Ph.D.
Robert's formula for the inverse fine structure constant gives
exp(0.5*pi^2) - 2 = 137.0456367
The best formula I have seen on the Arxiv is
4 pi^3 + pi^2 + pi = 137.0363038
The author claimed that the three terms should have something to do with
the groups SU(3), SU(2) and U(1), but I didn't really understand how.
The best experimental data that I found (from a 20 year old PPDB) is
137.03604(11)
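For what it's worth, the two closed-form guesses can be compared to the quoted measurement in a couple of lines (my own check, nothing more):

```python
import math

measured = 137.03604  # the PPDB value quoted above

rbj = math.exp(0.5 * math.pi**2) - 2            # robert's formula
arxiv = 4 * math.pi**3 + math.pi**2 + math.pi   # the 4 pi^3 + pi^2 + pi formula

print(rbj)                                 # ~137.0456 (off by ~70 ppm)
print(arxiv)                               # ~137.0363 (off by ~2 ppm)
print((rbj - measured) / measured * 1e6)   # deviation in ppm
```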
### Toby Bartels
Sep 23, 2001, 9:54:48 PM9/23/01
to
robert bristow-johnson wrote in part:
>actually, i would put mu0 = 4*pi and epsilon0 = 1/(4*pi) so that the
>simple Coulomb force equation has a constant = 1. same for the
>gravitational force equation.
This convention is called "unrationalised";
I (and apparently Lubos) prefer "rationalised".
The reason is that I find the Maxwell equations
more fundamental than the Coulomb equation.
The debate between rationalised and unrationalised never ends.
I am even more radical than most proponents of rationalisation
in that I also rationalise the constant of gravitation.
Since 8 pi G, rather than G itself, appears in
the Einstein equations of general relativity,
I like to set 8 pi G to 1 rather than G,
which makes my Planck units off from others'
by a factor of about 5.
>we sorta do that with Newton's 2nd law: we
>don't say that Force is proportional to mass times acceleration
>(although it is for |v| << c), we choose our unit of force so that
>force *is* mass times acceleration.
Exactly. This example is a good one to use
when trying to explain Planck units to people.
If you want a good *historical* example,
use our modern measurement of heat with energy units.
Once upon a time, people measured heat in different units,
so we had an extra fundamental constant of nature, 4.184 J/cal.
This reminds me that there is another quantity
that can be measured in energy units but usually isn't:
temperature. The fundamental constant of nature here
is Boltzmann's constant k_B, about 1.381e-23 J/K.
If you set Boltzmann's constant to 1,
then entropy becomes dimensionless
and you can see that it can measure
the dimensionless quantity of information.
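As a quick illustration of measuring temperature in energy units (my numbers, using the current SI values of the constants):

```python
k_B = 1.380649e-23   # J/K, Boltzmann's constant
e = 1.602176634e-19  # C; 1 eV = e joules

T_room = 300.0             # K
E_joules = k_B * T_room    # room temperature expressed as an energy
E_eV = E_joules / e        # the familiar "kT ~ 26 meV" of solid-state physics
print(E_eV)
```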
In the International System of units (SI),
there are 7 allegedly physical units,
so we need to set 7 fundamental constants to 1
in order to make everything dimensionless.
Actually, 1 unit, the candela, is not a physical unit at all
but a physiological unit for apparent brightness to a human eye.
So the 6 physical units are the metre, the second,
the kilogramme, the ampere, the kelvin, and the mole.
The 6 constants are c, hbar, G (I prefer 8 pi G),
epsilon_0 (you prefer 4 pi epsilon_0), k_B, and N_A.
(Setting N_A to 1 just says that a mole is about 6.022e23,
which comes as naturally as saying that a dozen is 12.)
Now everything is unitless.
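The "factor of about 5" mentioned above between the usual Planck units and the 8 pi G variant is just sqrt(8 pi); a quick check of mine, with roughly CODATA values:

```python
import math

c = 2.99792458e8        # m/s
hbar = 1.054571817e-34  # J s
G = 6.674e-11           # m^3 kg^-1 s^-2

l_planck = math.sqrt(hbar * G / c**3)                   # usual Planck length, ~1.6e-35 m
l_rational = math.sqrt(hbar * 8 * math.pi * G / c**3)   # with 8 pi G = 1 instead of G = 1
print(l_rational / l_planck)  # sqrt(8 pi) ~ 5.01
```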
>>Superstring/M-theory is the language in which God wrote the world.
>some might say that instead "Superstring/M-theory is a language
>construct of humankind to try to verbalize and understand what God
>was/is doing when He wrote the world." kinda like reading
>hieroglyphics. you never know, maybe in 200 years, they'll toss it on
>the trash heap with Newton's Laws.
Nope. String theory is the one final theory of physics that explains
every possible phenomenon -- or at least will once we've finished it.
If we ever find something that contradicts string theory,
then that will only prove that what we've found does not exist.
^_^ ^_^ ^_^
(This is just me teasing Lubos; you can ignore it, robert.)
-- Toby
to...@math.ucr.edu
### Toby Bartels
Sep 24, 2001, 2:03:34 PM9/24/01
to
>The best experimental data that I found (from a 20 year old PPDB) is
>137.03604(11)
20 years old? I can beat that!
137.0359895(61)
This is from Brookhaven Natl Lab's Nuclear Wallet Cards, 1995 Jul.
But really it's not that good, since they reveal in the fine print
that their table of fundamental constants is simply swiped from
the 1986 Nov CODATA Bulletin, published 9 long years earlier.
So surely somebody else here can do better!!!
-- Toby
to...@math.ucr.edu
### J. J. Lodder
Sep 24, 2001, 3:50:44 AM9/24/01
to
Toby Bartels <to...@math.ucr.edu> wrote:
> This convention is called "unrationalised";
> I (and apparently Lubos) prefer "rationalised".
> The reason is that I find the Maxwell equations
> more fundamental than the Coulomb equation.
> The debate between rationalised and unrationalised never ends.
>
> I am even more radical than most proponents of rationalisation
> in that I also rationalise the constant of gravitation.
> Since 8 pi G, rather than G itself, appears in
> the Einstein equations of general relativity,
> I like to set 8 pi G to 1 rather than G,
> which makes my Planck units off from others'
> by a factor of about 5.
No need to do that: Planck units are defined up to a proportionality
constant anyway, (it is only dimensional analysis)
so you can mess up the Einstein equation
without changing the Planck units.
There is no end to the confusion you can produce,
once you start meddling.
(I asked John, sometime ago, on precisely this point,
whether he wanted to mess up Planck also, or only Einstein.
No answer, if I remember correctly)
My preference: don't change the established -numerical- values
of the Planck length/time/mass/etc/, change the definitions,
if you feel you must change at all.
Best,
Jan
### Toby Bartels
Sep 25, 2001, 3:36:20 AM9/25/01
to
J. J. Lodder wrote in part:
>Toby Bartels wrote:
>>I like to set 8 pi G to 1 rather than G,
>>which makes my Planck units off from others'
>>by a factor of about 5.
>My preference: don't change the established -numerical- values
>of the Planck length/time/mass/etc/, change the definitions,
>if you feel you must change at all.
Fair enough. Say this:
<<I like to set 8 pi G to 1 rather than G,
<<which makes my natural units off from the Planck units
<<by a factor of about 5.
In real life, I wouldn't confuse anybody
by introducing my natural units as "Planck units"
unless we were only talking about order of magnitude.
-- Toby
to...@math.ucr.edu
### Paul Arendt
Oct 3, 2001, 12:16:37 AM10/3/01
to
In article <1ezuonc.1cv...@de-ster.demon.nl>,
J. J. Lodder <j...@de-ster.demon.nl> wrote:
>The eps_0 and mu_0 are artefacts of the MKSA system,
>without any physical meaning or interpretation,
>and nobody would even think about introducing them
>if a more sensible unit system had been chosen, long ago,
>without them.
Well, I agree that these things can be set to "1" if you
choose your units of charges (and fields) appropriately. But
I don't agree that there is no physical meaning or
interpretation to them!
For instance, eps_0 is more than a pure constant. It's the
relationship between D and E in Maxwell's equations, in free
space. A convention may be chosen so that D = E in free
space, but D and E are still very different entities!
The difference between them is that E is used to find the
force on a charged nonmoving test particle, while D is used
to integrate over a region's boundary to find the total charge
contained within that region (Gauss' law). In other words,
E measures the effect of the field on charged particles, while D
measures the effect of charged particles (as sources of the field).
E tells what charges will do, while D tells where the charges are!
One might argue that these should equal each other by some sort
of action=reaction Newtonian argument (conservation of momentum).
But the fact remains that geometrically, they represent different
entities! This is easiest to understand if one represents E and D
by differential forms. (Eric Forgy just got extremely interested,
didn't he? :-) )
As a differential form in 3-space, E is a one-form. The physical
interpretation is that it represents the differential contribution
to a charged particle's energy (per unit charge) when the particle
moves across the surfaces of the one-form. (For electrostatic
configurations, this one-form field is integrable, so that you
may define E = - grad Phi everywhere.) When the one-form E is
integrated over a line in 3-space, the result is the change in
energy (per unit charge) on a charged particle which has moved
along that line.
As a differential form in 3-space, D is a two-form! The
interpretation is that the integral of D over a closed region
equals the charge contained in that region, which is Gauss' law.
So, as forms, D and E are related by the Hodge star operator, even
when the units of the vectors #(*D) and #E are chosen to be equal to
one another. (Notation: #(one-form) is the vector obtained by
applying the inverse metric tensor to the one-form, and * is the
Hodge star.)
Similar remarks apply when D and E are forms in (3+1)-D spacetime:
they are both 2-forms, but E is a space-time 2-form, while D is a
space-space 2-form.
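In flat Euclidean 3-space with Cartesian coordinates, the description above can be summarized in standard form-language notation (my summary, with rationalized units so the only constant is epsilon_0):

```latex
E = E_x\,dx + E_y\,dy + E_z\,dz
  \qquad \text{(1-form: integrate along a path for energy per unit charge)}
\\[4pt]
D = \epsilon_0 \star E
  = \epsilon_0 \left( E_x\,dy\wedge dz + E_y\,dz\wedge dx + E_z\,dx\wedge dy \right)
  \qquad \text{(2-form)}
\\[4pt]
\int_{\partial V} D = Q_{\mathrm{enc}}
  \qquad \text{(Gauss' law)}
```

The Hodge star is where the metric enters, which is why D and E transform differently even when their components happen to be numerically equal.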
### J. J. Lodder
Oct 3, 2001, 11:19:20 PM10/3/01
to
Paul Arendt <par...@black.nmt.edu> wrote:
> In article <1ezuonc.1cv...@de-ster.demon.nl>,
> J. J. Lodder <j...@de-ster.demon.nl> wrote:
>
> >The eps_0 and mu_0 are artefacts of the MKSA system,
> >without any physical meaning or interpretation,
> >and nobody would even think about introducing them
> >if a more sensible unit system had been chosen, long ago,
> >without them.
>
> Well, I agree that these things can be set to "1" if you
> choose your units of charges (and fields) appropriately. But
> I don't agree that there is no physical meaning or
> interpretation to them!
>
> For instance, eps_0 is more than a pure constant. It's the
> relationship between D and E in Maxwell's equations, in free
> space. A convention may be chosen so that D = E in free
> space, but D and E are still very different entities!
What -physical- experiment would you propose to demonstrate
a physical (as opposed to conceptual) difference between E and D,
in vacuum?
> The difference between them is that E is used to find the
> force on a charged nonmoving test particle, while D is used
> to integrate over a region's boundary to find the total charge
> contained within that region (Gauss' law). In other words,
> E measures the effect of the field on charged particles, while D
> measures the effect of charged particles (as sources of the field).
> E tells what charges will do, while D tells where the charges are!
Indeed, this is a -conceptual- distinction only:
in vacuum there is only one field,
which you may -call- either E or D.
> One might argue that these should equal each other by some sort
> of action=reaction Newtonian argument (conservation of momentum).
> But the fact remains that geometrically, they represent different
> entities! This is easiest to understand if one represents E and D
> by differential forms. (Eric Forgy just got extremely interested,
> didn't he? :-) )
I would not argue anything of the kind.
Instead I would say that there is only one field E,
and that only the Maxwell eqns in vacuum are fundamental.
The Maxwell eqns in matter, with D in them, are approximate eqns,
to be derived from the fundamental eqns in vacuum
by appropriate statistical mechanics.
snip forms, not that they aren't nice, but nice formalism
cannot substitute for physical description.
Best,
Jan
### zirkus
Oct 4, 2001, 9:19:21 PM10/4/01
to
Toby Bartels <to...@math.ucr.edu> wrote in message
news:<9onshm$bl2$1...@glue.ucr.edu>...
> 20 years old? I can beat that!
>
> 137.0359895+-61
>
> This is from Brookhaven Natl Lab's Nuclear Wallet Cards, 1995 Jul.
>
> But really it's not that good, since they reveal in the fine print
> that their table of fundamental constants is simply swiped from
> the 1986 Nov CODATA Bulletin, published 9 long years earlier.
>
> So surely somebody else here can do better!!!
There is an astronomy paper [1] which shows that the fine structure
constant used to be about one part in 10^5 less than it is now!
Hopefully, some team can verify or refute this evidence, because this
surprising result has importance for e.g. variable-speed-of-light (VSL)
cosmology.
### Toby Bartels
Oct 10, 2001, 10:44:48 PM10/10/01
to
J. J. Lodder wrote:
>Instead I would say that there is only one field E,
>and that only the Maxwell eqns in vacuum are fundamental.
>The Maxwell eqns in matter, with D in them, are approximate eqns,
>to be derived from the fundamental eqns in vacuum
>by appropriate statistical mechanics.
Well, I would argue that the vacuum equations aren't fundamental either.
They are merely a classical approximation to QED.
And even that is merely an approximation to the GWS electroweak theory.
And even that is merely an approximation;
even if there is no grand unification of electroweak and strong forces,
still the appearance of the metric tensor in the GWS theory
must be modified by a theory of quantum gravity.
The value of a physical theory isn't determined by its fundamentalness.
The Maxwell equations in matter, where E and D are different,
are quite useful and accurate across a broad range of phenomena.
You need a relationship between E and D given by the type of matter;
for many types, this can be given by a single constant epsilon.
The value of epsilon in vacuum is epsilon_0 = 1, in appropriate units.
Good, it should be!
-- Toby
to...@math.ucr.edu
### Paul Arendt
Oct 12, 2001, 9:02:14 PM10/12/01
to
In article <1f0p07l.poz...@de-ster.demon.nl>,
J. J. Lodder <j...@de-ster.demon.nl> wrote:
>Paul Arendt <par...@black.nmt.edu> wrote:
>
>> For instance, eps_0 is more than a pure constant. It's the
>> relationship between D and E in Maxwell's equations, in free
>> space. A convention may be chosen so that D = E in free
>> space, but D and E are still very different entities!
>
>What -physical- experiment would you propose to demonstrate
>a physical (as opposed to conceptual) difference between E and D,
>in vacuum?
Well, from what I wrote below:
>> The difference between them is that E is used to find the
>> force on a charged nonmoving test particle, while D is used
>> to integrate over a region's boundary to find the total charge
>> contained within that region (Gauss' law). In other words,
>> E measures the effect of the field on charged particles, while D
>> measures the effect of charged particles (as sources of the field).
>> E tells what charges will do, while D tells where the charges are!
...you can imagine the following experiment: measure the force on
small charged particles (whose charge is as small as possible),
and divide by the charge of the particle. This gives you the
electric field E at a point. Continue this over the entire (2-D)
boundary of a (3-D) volume, and you have defined E everywhere on
the boundary.
Now, take small coiled loops of paramagnetic material, and measure
(somehow!) the total induction H integrated along the loops as you
rotate the loops (at constant angular velocity) around at the
same points where you measured E. This gives you the flux of
D through the loops. Repeat this over the same boundary that
was done for the E field, and you will have the flux of D through
the boundary.
Gauss' Law now says that this total flux of D equals the charge
contained within the volume (whose boundary was the region D and
E were measured over).
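In flat space with eps0 = 1 this bookkeeping is easy to verify numerically for a point charge (my sketch; the radial symmetry reduces the surface integral to one dimension):

```python
import math

q = 2.5  # point charge at the origin, units with eps0 = 1

def flux_through_sphere(R, n=400):
    # Integrate E . dA over a sphere of radius R. The field is radial,
    # so E . dA = |E| dA with |E| = q / (4 pi R^2).
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n  # midpoint in polar angle
        dA = 2 * math.pi * R**2 * math.sin(theta) * (math.pi / n)
        total += q / (4 * math.pi * R**2) * dA
    return total

# Gauss' law: the flux equals the enclosed charge, independent of R
print(flux_through_sphere(1.0), flux_through_sphere(7.0))  # both ~2.5
```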
You may choose a unit system in which D = E in vacuum in Euclidean
space. Suppose that we do this. If the experiment above has
been performed in Euclidean space, then the total flux of E through
the boundary will also give the charge contained in the volume.
But in a curved space, the flux of E through the surface will *not*
generally give the charge enclosed, unless it happens to be 0.
So, we can conclude that E cannot equal D at every point on that
surface.
You may instead try to adjust the *electric charge* so that the "E-charge"
(giving the force on a particle) is different from the "D-charge" (to be
used in Gauss' Law) in curved spaces, but D and E are defined to be
the same things. However, to be fair, you should also start having
the "D-charge" change in dielectric media too, if that is the route
taken.
### Eric Alan Forgy
Oct 14, 2001, 4:47:46 PM10/14/01
to
Hi,
It's nice to see people pointing out that E and D are two different
things :) I saw a comment in some other thread where it was written E
= D, and I was tempted to pipe in, but now I can't resist :)
I know you know this, and what you said was precisely that E is a
1-form and D is a 2-form. The relation between them involves the
space(time) metric. I personally think it is misguided (and
misleading) to write E = D ever! But I'm probably more passionate
:)
Eric
"Paul Arendt" <par...@black.nmt.edu> wrote:
> J. J. Lodder <j...@de-ster.demon.nl> wrote:
> >Paul Arendt <par...@black.nmt.edu> wrote:
[snip]
### J. J. Lodder
Oct 14, 2001, 4:48:34 PM10/14/01
to
Toby Bartels <to...@math.ucr.edu> wrote:
> J. J. Lodder wrote:
>
> >Instead I would say that there is only one field E,
> >and that only the Maxwell eqns in vacuum are fundamental.
> >The Maxwell eqns in matter, with D in them, are approximate eqns,
> >to be derived from the fundamental eqns in vacuum
> >by appropriate statistical mechanics.
>
> Well, I would argue that the vacuum equations aren't fundamental either.
> They are merely a classical approximation to QED.
[snip more irrelevantia]
Sure, but why drag in these irrelevantia?
The physical meaning of eps_0 and mu_0, if any,
is an issue on the pre-1900 level.
It could, and should, have been settled then, once and for all,
by following Heaviside and Lorentz' proposals
for a sensible EM unit system.
If that had been done then you now would not even have known
that it is actually possible to introduce these notions,
unless you had happened to study history of science.
Best,
Jan
--
"The electrical intensity is given in square root psi" (Thomson)
### Toby Bartels
Oct 14, 2001, 8:08:16 PM10/14/01
to
J. J. Lodder wrote:
>Toby Bartels wrote:
>>J. J. Lodder wrote:
>>>Instead I would say that there is only one field E,
>>>and that only the Maxwell eqns in vacuum are fundamental.
>>>The Maxwell eqns in matter, with D in them, are approximate eqns,
>>>to be derived from the fundamental eqns in vacuum
>>>by appropriate statistical mechanics.
>>Well, I would argue that the vacuum equations aren't fundamental either.
>>They are merely a classical approximation to QED.
>Sure, but why drag in these irrelevantia?
>The physical meaning of eps_0 and mu_0, if any,
>is an issue on the pre-1900 level.
In the sense that this physical meaning could be understood before 1900, yes.
But the physical meaning remains, albeit a very simple meaning.
>It could, and should, have been settled then, once and for all,
>by following Heaviside and Lorentz' proposals
>for a sensible EM unit system.
I quite agree.
>If that had been done then you now would not even have known
>that it is actually possible to introduce these notions,
>unless you had happened to study history of science.
But now I disagree. Using Heaviside Lorentz units,
I would still have studied Maxwell's equations for dielectric media
and been introduced to the concept of epsilon and mu.
(These quantities would be dimensionless, of course.)
Then I would learn that epsilon and mu for the vacuum are both exactly 1.
How nice!
That the dielectric constant of the vacuum is 1
has as much physical meaning as that the speed of light there is 1.
The speed of light in vacuum may be a very trivial quantity in good units,
but it retains its physical meaning -- that is the speed that light travels.
-- Toby
to...@math.ucr.edu
### J. J. Lodder
Oct 15, 2001, 4:10:51 AM10/15/01
to
Eric Alan Forgy <fo...@uiuc.edu> wrote:
> It's nice to see people pointing out that E and D are two different
> things :) I saw a comment in some other thread where it was written E
> = D, and I was tempted to pipe in, but now I can't resist :)
>
> I know you know this, and what you said was precisely that E is a
> 1-form and D is a 2-form. The relation between them involves the
> space(time) metric. I personally think it is misguided (and
> misleading) to write E = D ever! But I'm probably more passionate
Things which hold in one particular representation
of a theory may be helpful to some,
but they cannot have -physical- content.
They are different descriptions of the same thing.
Likewise, you would not claim that a particle
actually has two positions, x_\mu and x^\nu,
because a covariant vector is something entirely different
than a contravariant one, mathematically speaking.
Best,
Jan
--
"Mathematicians are like Frenchmen:
They translate everything you say to them
immediately into their own language,
after which it is something entirely different" (Goethe)
### J. J. Lodder
Oct 15, 2001, 4:09:28 AM10/15/01
to
Paul Arendt <par...@black.nmt.edu> wrote:
> In article <1f0p07l.poz...@de-ster.demon.nl>,
> J. J. Lodder <j...@de-ster.demon.nl> wrote:
> >Paul Arendt <par...@black.nmt.edu> wrote:
> >> For instance, eps_0 is more than a pure constant. It's the
> >> relationship between D and E in Maxwell's equations, in free
> >> space. A convention may be chosen so that D = E in free
> >> space, but D and E are still very different entities!
> >What -physical- experiment would you propose to demonstrate
> >a physical (as opposed to conceptual) difference between E and D,
> >in vacuum?
> ...you can imagine the following experiment: measure the force on
> small charged particles (whose charge is as small as possible),
> and divide by the charge of the particle. This gives you the
> electric field E at a point. Continue this over the entire (2-D)
> boundary of a (3-D) volume, and you have defined E everywhere on
> the boundary.
Sure, gives you E(r) for all r, in principle.
And that is all there is to know,
in an electrostatic situation.
> Now, take small coiled loops of paramagnetic material, and measure
> (somehow!) the total induction H integrated along the loops as you
> rotate the loops (at constant angular velocity) around at the
> same points where you measured E. This gives you the flux of
> D through the loops. Repeat this over the same boundary that
> was done for the E field, and you will have the flux of D through
> the boundary.
No need to introduce paramagnetic matter: a flip coil will do.
And: this measurement will not tell you anything new:
The results of any further experiments can be predicted
from E(r) measured above.
> Gauss' Law now says that this total flux of D equals the charge
> contained within the volume (whose boundary was the region D and
> E were measured over).
>
> You may choose a unit system in which D = E in vacuum in Euclidean
> space. Suppose that we do this. If the experiment above has
> been performed in Euclidean space, then the total flux of E through
> the boundary will also give the charge contained in the volume.
>
> But in a curved space, the flux of E through the surface will *not*
> generally give the charge enclosed, unless it happens to be 0.
> So, we can conclude that E cannot equal D at every point on that
> surface.
Let's not drag curved spaces into this discussion.
The emptiness of the argument can also be seen in Euclidean space,
by using a coordinate system with metric tensor not the identity.
Indeed, there are two ways then to calculate the charge in a given
volume: a correct and an incorrect one.
> You may instead try to adjust the *electric charge* so that the "E-charge"
> (giving the force on a particle) is different from the "D-charge" (to be
> used in Gauss' Law) in curved spaces, but D and E are defined to be
> the same things. However, to be fair, you should also start having
> the "D-charge" change in dielectric media too, if that is the route
> taken.
The 'cure', (two charges) would be worse
than the (non-existent :-) disease,
in my opinion.
Jan
### Eric Alan Forgy
Oct 16, 2001, 3:55:28 AM10/16/01
to
Hi,
"J. J. Lodder" <nos...@de-ster.demon.nl> wrote:
> Eric Alan Forgy <fo...@uiuc.edu> wrote:
> >
> > I know you know this, and what you said was precisely that E is a
> > 1-form and D is a 2-form. The relation between them involves the
> > space(time) metric. I personally think it is misguided (and
> > misleading) to write E = D ever! But I'm probably more passionate
>
> Things which hold in one particular representation
> of a theory may be helpful to some,
> but they cannot have -physical- content.
> They are different descriptions of the same thing.
I agree 100%
> Likewise, you would not claim that a particle
> actually has two positions, x_\mu and x^\nu,
> because a covariant vector is something entirely different
than a contravariant one, mathematically speaking.
I agree 100%
Hmm... if I agree 100% with what you said, then why does it seem like
you were disagreeing with what I said? :) If you are really
disagreeing with what I said, would you mind spelling out a bit more
clearly in what way you disagree? I'd like to know. I'm supposed to be
an expert in EM, so if I am missing something basic, I'd like to know.
Maybe I'll explain what I mean more precisely so that if there is any
hole in my logic, it will be easier to spot. I'd say that given E, a
metric, and some information about the material properties, you can
find D. Conversely, given D, a metric, and some information about the
material properties, you can find E. So, essentially, given a metric
and some information about the material properies, then perhaps, in
some sense of the word, you can say E and D are the "same".
Now suppose the fields are time harmonic, so that
d/dt -> i*w,
and we know J and H. Then we can find D simply by
D = [curl H - J]/(i*w).
With this D, a metric, and information about the material properties,
we can then find E. Conversely, if we know E, then we can similarly
find B by
B = [-curl E]/(i*w).
With this B, a metric, and information about the material properties,
we can then find H. So, under the prescribed scenario, given E we can
find H, and given H we can find E. Would you then say that E and H are
the "same"?
One of the reasons I am such a stickler about saying E and D are not
the same comes from experience with numerical solutions to Maxwell's
equations. E, being a 1-form, is naturally associated to the edges of
some mesh. D, being a 2-form, is naturally associated to the faces of
some mesh. To me, saying E and D are the same is like saying edges are
the same as faces :) In 3d, there is a nice way to associate edges and
faces. That is by constructing a dual mesh. For instance, if the mesh
is a simplicial complex, then you can construct the dual mesh in many
ways, e.g. a Poincare dual or a barycentric dual. Then, for every
p-simplex of the primary mesh, you have an (n-p)-cell of the dual
mesh, which is not simplicial. Still, I'd hesitate to say E and D were
the same because that would be like saying an edge is the same as a
dual cell. Sure, they are related, but I wouldn't call them the same.
The last paragraph was based on lattice arguments, but those arguments
do manifest themselves when you go to the continuum limit (if you
desired to do so... I'm personally of the opinion you should do away
with the continuum model of space-time altogether, but that is a
different story). I think it is a subtle, yet important, distinction
between E and D, but I think it should be made. (By the way, those
arguments are of relevance for spin foam models as well.)
Is there really a disagreement here, or is it a semantic issue about
the meaning of the word "same"? If it is the latter, then there is no
need to argue over it. You say tomato, I say tomato... c'est la vie :)
Cheers,
Eric
### Gerard Westendorp
Oct 17, 2001, 3:37:32 PM10/17/01
to
"J. J. Lodder" wrote:
[..]
> Sure, but why drag in these irrelevantia?
> The physical meaning of eps_0 and mu_0, if any,
> is an issue on the pre-1900 level.
As you may remember from previous threads, I disagree with this.
Because eps_0, together with e, h and c fixes the fine-structure
constant, any change in the fine structure constant (I think it
changes at very high energies, like just after the big bang)
will impact either eps_0, e, h or c.
I would choose eps_0 to change, leaving the others as
fundamental constants that can be set to unity. But you could
also change e. (Although that would be on the pre-2000 level)
Gerard
### Gerard Westendorp
Oct 17, 2001, 3:38:10 PM10/17/01
to
Eric Alan Forgy wrote:
[..]
> One of the reasons I am such a stickler about saying E and D are not
> the same comes from experience with numerical solutions to Maxwell's
> equations. E, being a 1-form, is naturally associated to the edges of
> some mesh. D, being a 2-form, is naturally associated to the faces of
> some mesh.
Does that mean that D transforms as a pseudo-vector?
E is the force on a charge, so it is a vector (Newton/Coulomb).
D is the displacement of (imaginary) charges through a surface.
(Coulomb/m2). It could be a pseudo vector.
Gerard
### J. J. Lodder
Oct 17, 2001, 3:44:01 PM10/17/01
to
Toby Bartels <to...@math.ucr.edu> wrote:
> J. J. Lodder wrote:
snip agree
> >If that had been done then you now would not even have known
> >that it is actually possible to introduce these notions,
> >unless you had happened to study history of science.
>
> But now I disagree. Using Heaviside Lorentz units,
> I would still have studied Maxwell's equations for dielectric media
> and been introduced to the concept of epsilon and mu.
> (These quantities would be dimensionless, of course.)
> Then I would learn that epsilon and mu for the vacuum are both exactly 1.
> How nice!
Indeed :-) But, being properly educated in this way
you would be less likely to make the mistake
of thinking that eps = 1
is a physical property of the vacuum.
> That the dielectric constant of the vacuum is 1
> has as much physical meaning as that the speed of light there is 1.
> The speed of light in vacuum may be a very trivial quantity in good units,
> but it retains its physical meaning -- that is the speed that light travels.
I guess this is the old confusion between
'fundamental speed in our universe' and 'speed of light' again.
The first can be taken to be 1,
and then it cannot be measured.
For light, it is of course necessary to establish
that it is actually massless,
which can in principle be done by verifying experimentally
that it travels one nanosecond in a nanosecond.
Best,
Jan
### Toby Bartels
Oct 17, 2001, 8:51:54 PM10/17/01
to
J. J. Lodder wrote:
>Toby Bartels wrote:
>>But now I disagree. Using Heaviside Lorentz units,
>>I would still have studied Maxwell's equations for dielectric media
>>and been introduced to the concept of epsilon and mu.
>>(These quantities would be dimensionless, of course.)
>>Then I would learn that epsilon and mu for the vacuum are both exactly 1.
>>How nice!
>Indeed :-) But, being properly educated in this way
>you would be less likely to make the mistake
>of thinking that eps = 1 is a physical property of the vacuum.
Well, this *is* how I was educated, more or less --
I was originally taught in SI units, but I already knew how to
reduce the number of units by setting constants to 1,
so I immediately set eps_0 to 1 and thought about things in that way --
and I *do* think that eps = 1 is a physical property of the vacuum.
A property of a rather vacuous (ha! an unintentional pun!) sort,
but a property nonetheless.
Like saying cardinality = 0 is a mathematical property of the empty set.
Well, this analogy probably won't be so clear to people here,
but it really makes it click for me!
>>That the dielectric constant of the vacuum is 1
>>has as much physical meaning as that the speed of light there is 1.
>>The speed of light in vacuum may be a very trivial quantity in good units,
>>but it retains its physical meaning -- that is the speed that light travels.
>I guess this is the old confusion between
>'fundamental speed in our universe' and 'speed of light' again.
>The first can be taken to be 1,
>and then it cannot be measured.
When I say "speed of light", I mean the speed that light travels.
For the fundamental speed in the universe, I say "one".
>For light, it is of course necessary to establish
>that it is actually massless,
>which can in principle be done by verifying experimentally
>that it travels one nanosecond in a nanosecond.
Yes, and this is a physical fact.
Thus it is a physical property of the vacuum that
light travels there at the speed of 1,
just as it's a physical property of other materials that
light travels at certain speeds in them.
The speed of light in certain materials, as you know,
can be calculated as c = 1/sqrt(eps mu)[*].
Thus, c = 1/sqrt(1*1) = 1 in the vacuum.
[*]Or something like that.
-- Toby
to...@math.ucr.edu
### Toby Bartels
Oct 17, 2001, 8:53:24 PM10/17/01
to
Gerard Westendorp wrote:
>Because eps_0, together with e, h and c fixes the fine-structure
>constant, any change in the fine structure constant (I think it
>changes at very high energies, like just after the big bang)
>will impact either eps_0, e, h or c.
Yeah, specifically e ^_^.
>I would choose eps_0 to change, leaving the others as
>fundamental constants that can be set to unity. But you could
>also change e. (Although that would be on the pre-2000 level)
Should we reopen the discussion about which to change?
It seems patently obvious to me that you would change e,
having set eps_0 and c to 1, and h to 2 pi.
Planck, as we know, originally set e to 1 (and h to 1),
but Planck was not perfect.
-- Toby
to...@math.ucr.edu
### Toby Bartels
Oct 17, 2001, 8:54:00 PM10/17/01
to
Gerard Westendorp wrote:
>Eric Alan Forgy wrote:
Yeah, given a spatial metric, pseudovectors and 2forms are equivalent.
Most people turn 2forms into vectors of course,
but this requires an orientation in addition to the metric.
I do find it more fundamental not even to assume the metric
and just to deal with the 2form itself -- in certain contexts.
-- Toby
to...@math.ucr.edu
### Eric Forgy
Oct 17, 2001, 8:56:34 PM10/17/01
to
Hi,
"Gerard Westendorp" <wes...@xs4all.nl> wrote:
>
> Does that mean that D transforms as a pseudo-vector?
> E is the force on a charge, so it is a vector (Newton/Coulomb).
> D is the displacement of (imaginary) charges through a surface.
> (Coulomb/m2). It could be a pseudo vector.
That is a really good question :) I am not an expert on pseudo vectors
because I usually think of them as artifacts of misinterpreting 2-forms (or
bivectors) as vectors. But if that is really true then it would be tempting
to think of D as a pseudo vector also, so there must be something else to
it. I don't think D is a pseudo vector because I have never heard of that
and I probably should have by now if it were (not a very scientific reason,
eh? :)). So let's try see why not (if not).
Let A be a 1-form in 4d space-time and let
F = dA.
This is a 2-form in space-time. As such, it is (4!)/(2!2!) = 6 dimensional
with three space-space dimensions and three space-time dimensions. That is
just big enough to accommodate 2 3d vectors. So, if you choose a reference
frame, which amounts to choosing a time axis, you can decompose F into two
parts:
F = B + E/\dt.
(Note: This decomposition is quite arbitrary because dt is arbitrary.)
If you write out F in all its gory details it becomes:
F = (B_23 dx^23 + B_31 dx^31 + B_12 dx^12)
+ (E_1 dx^1 + E_2 dx^2 + E_3 dx^3)/\dt
where dx^ij = dx^i /\ dx^j. Under a parity transformation dx^i -> -dx^i, the
components of E change sign whereas the components of B do not change sign.
Thus, you can conclude that E is a vector and B is a pseudo vector. This
follows simply because B is a 2-form.
Now, the Hodge star # acts on the space-space basis elements as
#dx^23 = O(123)*dx^1/\dt
#dx^31 = O(123)*dx^2/\dt
#dx^12 = O(123)*dx^3/\dt
where O(123) is +/-1 and keeps track of the orientation. The Hodge star acts
on the space-time basis elements as
#(dx^1/\dt) = -O(123)*dx^23
#(dx^2/\dt) = -O(123)*dx^31
#(dx^3/\dt) = -O(123)*dx^12
Therefore
#F
= #B + #(E/\dt)
= O(123)*(B_23 dx^1 + B_31 dx^2 + B_12 dx^3)/\dt
-O(123)*(E_1 dx^23 + E_2 dx^31 + E_3 dx^12)
= H/\dt - D
= (H_1 dx^1 + H_2 dx^2 + H_3 dx^3)/\dt
- (D_23 dx^23 + D_31 dx^31 + D_12 dx^12)
so that
H_1 = O(123)*B_23
H_2 = O(123)*B_31
H_3 = O(123)*B_12
and
D_23 = O(123)*E_1
D_31 = O(123)*E_2
D_12 = O(123)*E_3.
Ok, now the trick is that under the parity transformation
O(123) -> -O(123)
so you pick up one "-" for the basis elements of H, but then you pick up
ANOTHER "-" from the Hodge star. Then the overall sign of H is not changed
under the parity transformation so that H is a pseudo vector as well.
On the other hand, the sign of the basis elements of D do NOT change sign
under the parity transformation, but the components DO pick up a "-", so
overall, D picks up a "-" under the parity transformation. Hence D is a
vector as well (as it
should be based on my earlier unscientific reasoning :))
After all that mess, the short answer to the question "Is D a pseudo
vector?" is apparent. Although D is a 2-form, that does not mean that D is a
pseudo vector because D is the Hodge dual of a 1-form E and the HODGE STAR
PICKS UP AN ADDITIONAL SIGN UNDER A PARITY TRANSFORMATION (in this case, but
not in general).
Thanks for a nice question! It made me think. I hope my answer makes sense.
Eric
### Eric Forgy
Oct 18, 2001, 4:31:46 PM10/18/01
to
Hi,
"Toby Bartels" <to...@math.ucr.edu> wrote:
> Gerard Westendorp wrote:
>
> >Does that mean that D transforms as a pseudo-vector?
> >E is the force on a charge, so it is a vector (Newton/Coulomb).
> >D is the displacement of (imaginary) charges through a surface.
> >(Coulomb/m2). It could be a pseudo vector.
>
> Yeah, given a spatial metric, pseudovectors and 2forms are equivalent.
> Most people turn 2forms into vectors of course,
> but this requires an orientation in addition to the metric.
> I do find it more fundamental not even to assume the metric
> and just to deal with the 2form itself -- in certain contexts.
Are you saying D DOES correspond to a pseudo vector? I just wrote a
long post declaring that D does NOT correspond to a pseudo vector, but
rather to a vector. Unless I made an error, the Hodge star causes a
sign reversal (in the case I was considering) under a parity
transformation. This additional sign due to the Hodge star makes the 2
form D correspond to a vector while the 1 form H corresponds to a
pseudo vector. This seemed to make perfect sense while I was writing
it :)
Eric
PS: The moral to this story is that VECTORS ARE EVIL!! :) Everyone
should start using differential forms and all this "pseudo" nonsense
will disappear once and for all :)
### Gordon D. Pusch
Oct 18, 2001, 4:32:24 PM10/18/01
to to...@math.ucr.edu
Toby Bartels <to...@math.ucr.edu> writes:
In "Applied Differential Geometry" [1], William Burke claims
that D is not an "ordinary" 2-form, but a "twisted" 2-form;
"twisted" 2-forms apparently transform oppositely to ordinary
2-forms under parity. So apparently, Burke does not think one can
dismiss the orientation so cavalierly... (In fact, he appears to
explicitly represent the orientation by introducing two different
Hodge-operators into his 3+1 decompositions of forms: One for 3-D
forms, and one for 4-D forms.)
-- Gordon D. Pusch
### Paul Arendt
Oct 25, 2001, 9:28:26 PM10/25/01
to
In article <1f17ur9.1wi...@de-ster.demon.nl>,
J. J. Lodder <j...@de-ster.demon.nl> wrote:
>Paul Arendt <par...@black.nmt.edu> wrote:
(snip descriptions of how to separately measure E and flux of D)
>> You may choose a unit system in which D = E in vacuum in Euclidean
>> space. Suppose that we do this. If the experiment above has
>> been performed in Euclidean space, then the total flux of E through
>> the boundary will also give the charge contained in the volume.
>>
>> But in a curved space, the flux of E through the surface will *not*
>> generally give the charge enclosed, unless it happens to be 0.
>> So, we can conclude that E cannot equal D at every point on that
>> surface.
>
>Let's not drag curved spaces into this discussion.
On the contrary -- they are essential to the point I was trying to
make! If we restrict the situation to Euclidean spaces, then I
would have to agree with your original statement: that if sensible
units where D = E were chosen long ago, there would have never been
any reason to introduce the constant epsilon_0. I disagree with
this strongly: epsilon_0 will still show up in some guise or
another when electromagnetic experiments are performed in situations
where the curvature of space changes with position.
There are even easier experiments to measure D and E than the ones I
proposed, in Bamberg and Sternberg's "A Course in Mathematics for
Students of Physics." Measure the kinetic energy change imparted to
various charged particles when they have moved in various directions.
In the limit of small charges and short distances, the ratio of this
energy change to the product of the distance traveled and the charge
is the (component of the) electric field E (in the direction traveled).
Now, take two very thin conducting sheets of metal of equal area, touch
them together, and bring them back apart. Measure the charge on
each plate, and divide by the area of the plate. In the limit as this
area becomes very small, this number is the component of D (oriented
with the plates' orientation).
My point is that: if units are chosen such that the magnitudes of
D and E are equal in a flat space, then they will NOT be equal,
using the exact same procedures, in certain locations in curved
spaces. And if they are found to be equal at some point in a space of
varying curvature, then they will not generally be equal at another
point in the same space.
I hope that the above experiments make the difference between D and
E very clear: E is associated with the direction "radial" to a
point charge, while D is associated with the two *transverse*
directions. (Going between the two is the role performed by
the Hodge star operator.)
If we never consider non-flat spaces, then I agree that D and E can
always be chosen to have the same magnitude in vacuum. But that's like
trying to argue that gauge fields can have no physical meaning -- by
restricting oneself to gauge fields that are "pure gauge" only! Not
fair.
In another article in this thread, J. J. Lodder wrote:
> Likewise, you would not claim that a particle
> actually has two positions, x_\mu and x^\nu,
> because a covariant vector is something entirely different
> than a contravariant one, mathematically speaking.
The metric is certainly used to raise and lower indices on a
position vector. The metric can also be used to get D from E
in vacuum and vice-versa. So I think I can see your point here:
that knowledge of g allows us to do either.
But I think that the example may be somewhat misleading beyond
that, for two reasons. The first is that D and E are *not* simply
related to each other by raising and lowering indices! The Hodge
star operator is also involved (although g determines it by providing
a preferred way to measure volumes). The second reason is that
although I cannot think of a way to experimentally measure a
particle's covariant versus its contravariant position, the above
experiments are conceptually and operationally *different* ways
of getting numbers for D and E out.
And in another article:
>D is just a partial field,
>which arises because we find it convenient
>to split the total electric field -mentally- into parts.
Now, this I do not agree with at all! Maxwell's equations show
quite clearly that the way E can be derived from a four-potential
is *very* different from the way D can be, for instance. | 2021-11-30 13:29:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7971377968788147, "perplexity": 2261.2678232761696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358973.70/warc/CC-MAIN-20211130110936-20211130140936-00510.warc.gz"} |
https://mailman.ntg.nl/pipermail/ntg-context/2006/017043.html | # [NTG-context] List of builtin TeX commands
Nikolai Weibull now at bitwi.se
Sat Mar 25 11:41:53 CET 2006
Hi!
Is there a list of builtin TeX commands anywhere on the net? I need
it for a syntax definition of the plain TeX format.
With builtin TeX commands I mean stuff like \def, \global, and such.
A list of commands defined in plain.tex would be great as well, but at
least that's easier to compile by myself, as I can just look in the
actual file.
The tex source is a bit harder to overview...
Thanks!
nikolai | 2021-06-17 01:46:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999850988388062, "perplexity": 6359.040527512735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626465.55/warc/CC-MAIN-20210617011001-20210617041001-00158.warc.gz"} |
http://blog.csdn.net/sdfzyhx/article/details/52345986 | poj3167 Cow Patterns
Description A particular subgroup of K (1 <= K <= 25,000) of Farmer
John’s cows likes to make trouble. When placed in a line, these
troublemakers stand together in a particular order. In order to locate
these troublemakers, FJ has lined up his N (1 <= N <= 100,000) cows.
The cows will file past FJ into the barn, staying in order. FJ needs
your help to locate suspicious blocks of K cows within this line that
might potentially be the troublemaking cows.
FJ distinguishes his cows by the number of spots 1..S on each cow’s
coat (1 <= S <= 25). While not a perfect method, it serves his
purposes. FJ does not remember the exact number of spots on each cow
in the subgroup of troublemakers. He can, however, remember which cows
in the group have the same number of spots, and which of any pair of
cows has more spots (if the spot counts differ). He describes such a
pattern with a sequence of K ranks in the range 1..S. For example,
consider this sequence:
1 4 4 3 2 1
In this example, FJ is seeking a consecutive sequence of 6 cows from
among his N cows in a line. Cows #1 and #6 in this sequence have the
same number of spots (although this number is not necessarily 1) and
they have the smallest number of spots of cows #1..#6 (since they are
labeled as ‘1’). Cow #5 has the second-smallest number of spots,
different from all the other cows #1..#6. Cows #2 and #3 have the same
number of spots, and this number is the largest of all cows #1..#6.
If the true count of spots for some sequence of cows is:
5 6 2 10 10 7 3 2 9
then only the subsequence 2 10 10 7 3 2 matches FJ’s pattern above.
Help FJ locate all blocks of K consecutive cows that match his specified pattern.
Input
Line 1: Three space-separated integers: N, K, and S
Lines 2..N+1: Line i+1 describes the number of spots on cow i.
Lines N+2..N+K+1: Line i+N+1 describes pattern-rank slot i.
Output
Line 1: The number of indices, B, at which the pattern matches
Lines 2..B+1: An index (in the range 1..N) of the starting location
where the pattern matches.
The key idea: replace KMP's character-equality test with a rank-consistency test over the current window of already-matched elements (maintained in the count array sum[]). Element a[i] extends a match of length j iff the number of window elements smaller than a[i], and the number not exceeding a[i], equal the counts tot1[j+1] and tot2[j+1] precomputed on the pattern prefix:
tot1[j+1]==qry(a[i]-1)&&tot2[j+1]==qry(a[i])
#include<cstdio>
#include<cstring>
#include<vector>
using namespace std;
vector<int> ans;
// a: herd, b: pattern, nxt: KMP failure links for b ('next' renamed to avoid
// a clash with std::next under C++11), sum: value counts in the current window,
// tot1/tot2: rank counts precomputed on the pattern prefix
int a[100010],b[25010],nxt[25010],sum[30],tot1[25010],tot2[25010],s;
// qry(x): how many elements of the current window have value <= x
int qry(int x)
{
    int ans=0;
    for (int i=1;i<=x;i++)
        ans+=sum[i];
    return ans;
}
int main()
{
    int i,j,k,m,n;
    scanf("%d%d%d",&n,&m,&s);
    for (i=1;i<=n;i++)
        scanf("%d",&a[i]);
    for (i=1;i<=m;i++)
        scanf("%d",&b[i]);
    // tot1[i]/tot2[i]: counts of b[1..i-1] strictly below / not above b[i]
    for (i=1;i<=m;i++)
    {
        tot1[i]=qry(b[i]-1);
        tot2[i]=qry(b[i]);
        sum[b[i]]++;
    }
    memset(sum,0,sizeof(sum));
    // build failure links by matching the pattern against itself
    for (i=2,j=0;i<=m;i++)
    {
        while (j&&(tot1[j+1]!=qry(b[i]-1)||tot2[j+1]!=qry(b[i])))
        {
            // fall back: drop the elements that leave the window
            for (k=i-j;k<i-nxt[j];k++)
                sum[b[k]]--;
            j=nxt[j];
        }
        if (tot1[j+1]==qry(b[i]-1)&&tot2[j+1]==qry(b[i])) j++;
        nxt[i]=j;
        sum[b[i]]++;
    }
    memset(sum,0,sizeof(sum));
    // scan the herd with the same window-based comparison
    for (i=1,j=0;i<=n;i++)
    {
        while (j==m||(j&&(tot1[j+1]!=qry(a[i]-1)||tot2[j+1]!=qry(a[i]))))
        {
            for (k=i-j;k<i-nxt[j];k++)
                sum[a[k]]--;
            j=nxt[j];
        }
        if (tot1[j+1]==qry(a[i]-1)&&tot2[j+1]==qry(a[i])) j++;
        if (j==m) ans.push_back(i-m+1);
        sum[a[i]]++;
    }
    printf("%d\n",(int)ans.size());
    for (i=0;i<(int)ans.size();i++)
        printf("%d\n",ans[i]);
    return 0;
}
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-8-right-triangles-and-trigonometry-8-1-the-pythagorean-theorem-and-it-s-converse-practice-and-problem-solving-exercises-page-496/26 | ## Geometry: Common Core (15th Edition)
Yes $33^{2}$+$56^{2}$=$65^{2}$
The Pythagorean Theorem says that $(leg_{1})^{2}+(leg_{2})^{2}=hypotenuse^{2}$; if this equation holds, then the triangle is a right triangle. The formula is more commonly written as $a^{2}+b^{2}=c^{2}$. You are given the lengths of the legs as 33 and 56, and the length of the hypotenuse as 65. Substitute 33 for a, 56 for b, and 65 for c: $a^{2}+b^{2}=c^{2}$ gives $33^{2}+56^{2}=65^{2}$, i.e. 1089+3136=4225, and indeed 4225=4225.
https://www.clutchprep.com/physics/practice-problems/144607/how-do-you-add-capacitors-for-two-capacitors-c1-and-c2-when-connected-in-paralle | Capacitors & Capacitance Video Lessons
# Problem: How do you add capacitance for two capacitors, C1 and C2, when connected in parallel?
###### FREE Expert Solution
The charge stored by a capacitor:
$\overline{){\mathbf{Q}}{\mathbf{=}}{\mathbf{C}}{\mathbf{V}}}$
When capacitors are connected in parallel, they have the same potential difference, V, across them, so the stored charges add: Q = Q1 + Q2 = C1V + C2V = (C1 + C2)V. The equivalent capacitance is therefore Ceq = C1 + C2.
https://encyclopediaofmath.org/wiki/Spherical_coordinates | # Spherical coordinates
The numbers $\rho , \theta , \phi$ which are related to the Cartesian coordinates $x, y, z$ by the formulas
$$x = \rho \cos \phi \sin \theta ,\ \ y = \rho \sin \phi \sin \theta ,\ \ z = \rho \cos \theta ,$$
where $0 \leq \rho < \infty$, $0 \leq \phi < 2 \pi$, $0 \leq \theta \leq \pi$.
Figure: s086660a
The coordinate surfaces are (see Fig.): concentric spheres with centre $O$ $( \rho = OP = \textrm{ const } )$; half-planes that pass through the axis $Oz$ $( \phi = \textrm{ angle } xOP ^ \prime = \textrm{ const } )$; circular cones with vertex $O$ and axis $Oz$ $( \theta = \textrm{ angle } zOP = \textrm{ const } )$. The system of spherical coordinates is orthogonal.
The Lamé coefficients are
$$L _ \rho = 1,\ \ L _ \phi = \rho \sin \theta ,\ \ L _ \theta = \rho .$$
The element of surface area is
$$d \sigma = \ \sqrt {\rho ^ {2} \sin ^ {2} \theta \ ( d \rho d \phi ) ^ {2} + \rho ^ {2} ( d \rho d \theta ) ^ {2} + \rho ^ {4} \sin ^ {2} \theta ( d \phi d \theta ) ^ {2} } .$$
The volume element is
$$dV = \rho ^ {2} \sin \theta d \rho d \phi d \theta .$$
The basic operations of vector calculus are
$$\mathop{\rm grad} _ \rho f = \ \frac{\partial f }{\partial \rho } ,\ \ \mathop{\rm grad} _ \phi f = \frac{1}{\rho \sin \theta } \frac{\partial f }{ \partial \phi } ,\ \ \mathop{\rm grad} _ \theta f = \frac{1} \rho \frac{\partial f }{\partial \theta } ;$$
$$\mathop{\rm div} \mathbf a = \frac{2} \rho a _ \rho + \frac{\partial a _ \rho }{\partial \rho } + \frac{1}{\rho \sin \theta } \frac{\partial a _ \phi }{\partial \phi } + \frac{1}{\rho \mathop{\rm tan} \theta } a _ \theta + \frac{1} \rho \frac{\partial a _ \theta }{\partial \theta } ;$$
$$\mathop{\rm rot} _ \rho \mathbf a = \frac{1}{\rho \sin \theta } \frac{\partial a _ \theta }{\partial \phi } - \frac{1} \rho \frac{\partial a _ \phi }{ \partial \theta } - \frac{1}{\rho \mathop{\rm tan} \theta } a _ \phi ;$$
$$\mathop{\rm rot} _ \phi \mathbf a = \frac{1} \rho \frac{\partial a _ \rho }{\partial \theta } - \frac{\partial a _ \theta }{\partial \rho } - \frac{a _ \theta } \rho ;$$
$$\mathop{\rm rot} _ \theta \mathbf a = \frac{\partial a _ \phi }{\partial \rho } + \frac{a _ \phi } \rho - \frac{1}{\rho \ \sin \theta } \frac{\partial a _ \rho }{\partial \phi } ;$$
$$\Delta f = \frac{\partial ^ {2} f }{\partial \rho ^ {2} } + \frac{2} \rho \frac{\partial f }{\partial \rho } + \frac{1}{\rho ^ {2} \sin ^ {2} \theta } \frac{\partial ^ {2} f }{\partial \phi ^ {2} } + \frac{1}{\rho ^ {2} } \frac{\partial ^ {2} f }{\partial \theta ^ {2} } + \frac{ \mathop{\rm cot} \theta }{\rho ^ {2} } \frac{\partial f }{\partial \theta } .$$
The numbers $u , v, w$, called generalized spherical coordinates, are related to the Cartesian coordinates $x, y, z$ by the formulas
$$x = au \cos v \sin w,\ \ y = bu \sin v \sin w,\ \ z = cu \cos w,$$
where $0 \leq u < \infty$, $0 \leq v < 2 \pi$, $0 \leq w \leq \pi$, $a > b$, $b > 0$. The coordinate surfaces are: ellipsoids $( u = \textrm{ const } )$, half-planes $( v= \textrm{ const } )$ and elliptical cones $( w = \textrm{ const } )$.
If the surface has been given by $R = R( \phi , \theta )$, then the element of surface area can be written as:
$$dS = R \sqrt {\left \{ R ^ {2} + \left ( \frac{\partial R }{\partial \theta } \right ) ^ {2} \right \} \sin ^ {2} \theta + \left ( \frac{\partial R }{\partial \phi } \right ) ^ {2} } \ d \theta d \phi .$$
https://solvedlib.com/n/a-sample-of-33-university-students-recorded-the-number-of,5160987

# A sample of 33 university students recorded the number of cinema films watched in the last four months
###### Question:
A sample of 33 university students recorded the number of cinema films watched in the last four months. The data is given below as a frequency distribution table. Intervals: [2-5], [6-9], [10-13], [14-17], [18-21], with class midpoints 3.5, 7.5, 11.5, 15.5, 19.5 (the frequency of each class is not legible in this copy). Calculate the mean, average spread and relative spread using grouped-data formulas (round your answer to 2 decimal places). Given answers: mean is 8.59 films, average spread is 3.78 films, relative spread (as a %) is 22.7.
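The frequency column did not survive in this copy, so the sketch below uses hypothetical frequencies (chosen only so that they sum to 33). It illustrates the grouped-data formulas (mean = sum of f·m over n, average spread = mean absolute deviation, relative spread = spread/mean as a percentage) rather than reproducing the stated answers:

```python
midpoints = [3.5, 7.5, 11.5, 15.5, 19.5]  # class marks of [2-5], [6-9], ...
freqs = [10, 12, 6, 3, 2]                 # hypothetical frequencies (sum to 33)

n = sum(freqs)
mean = sum(f * m for f, m in zip(freqs, midpoints)) / n
mad = sum(f * abs(m - mean) for f, m in zip(freqs, midpoints)) / n  # average spread
relative = 100 * mad / mean               # relative spread as a percentage
print(round(mean, 2), round(mad, 2), round(relative, 2))
```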
#### Similar Solved Questions
##### Point estimates, margins of error, and confidence intervals Question An online store sells flower arrangements for...
##### Question (1 [wints} The kemnisente (the spe) given bw the pelar equntict 2sin(20). Find the ure of the right lolu: of thee kemniseate.Question ? [ointy) Lot R {(I,") "/2 < r < 37/2, (onlr) <v <0}ie the r"gloll the plane helow: the line gment from (7/2,0) t0 37/2,0) aud ahove the graph of "( Evalunte tlc" integtal [ [oin6) WM MNT Us VMIIIUTY
##### Use the table to evaluate the expression.f(x) g(x)f(f(1))
##### Read Chapter 8 in your text. Describe the basic pattern of the evolution of the vertebrate eye including the role of the genes for opsin and crystallin proteins.
##### Question 10 Not yet answered Marked out of 1.00 Flag question8 = (x-1)i -2j +(x-1OJk, =3i -(x-9)j +Zk, if U 1 V, evaluate (x )Select one: 56 / 974/1166 / 1091 / 12
##### After the centrifuge is switched off, the test tube slows to a stop. The angular velocity...
##### CHEM 1211 Sprlrg 2019 (Kuykecounse | he(eole Cotculallona Exorclao 3,61Cakc ulalo Iria Hhbot UAeeMIcIIMUAMMePort A70 ? & H; CCla Exprosa YoNf Aflawer rvith tho mppropalalen(H,CCl) -ValueUnitsaubinilHeutanurtpurCuHzO4 Lxprate Youf nertorappuprialu unitap(CiHaOn) = ValueUnitsGulmnllHenyrlanrreeMacBook ProIstty
##### Often K,SO4 added this wvill have Darlum mcal" Usc vour equatlon the solubility Why predict what sifect BaSOa do you think KSO4 dd-d7Factors affecting solubility of salts; Considecth following solubility equilbnurn CaFs(s) Cal" (aa) 2F (aq) What eicct adding each of the following to the equllibrium above? morc soluble; Ull unc less soluble salt become will there be no (hints_ etfect? consider common on present; and what effect that will have, consider if the added species can react wit
##### A circular trajectory An object moves clockwise around a circle centered at the origin with radius 5 m beginning at the point (0,5)a. Find a position function $r$ that describes the motion if the object moves with a constant speed, completing 1 lap every 12 s.b. Find a position function $r$ that describes the motion if it occurs with speed $e^{-t}$
##### Skewness of the data using Pearson's coefficient if variance is 398809.5. Interpret your answer.marks)
##### In winter, Earth's axis points toward the star Polaris. In spring. the axis points toward (a) Polaris. (b) Vega. (c) the Sun.
##### A father who has two children (JD and Sofia) decided to give an award to the...
##### Of thc fizht nCapei JC0 N The fulciu Ethz mtczfcfehk iesza [EchN ca & Tro chilen teal Lemtelt €4 Ot Keet4w Tke one on the left hxi 4 Wetzhi ed 3C0 Vnhi- tk askoftr child oa the Irtr: (Tuaen Mntente S0 I from thc fulcrum and the Cctaw' 4 balanced Vtha iorau Frovidedt) Menttnoly Lt FLIL-450 NM+J00 Nm+450 Nm
##### This Question: 1 pt 10 of 15 (12 complete) A Data Table One of the biggest...
##### Tutorial Exercise 9This set of exercises covers material mainly from lectures 15 and 16. Q1. Consider a random variable Y having probability density function3 exp{~ (y _ 0)} , y > 0; fr(y) = 0 otherwiseGiven Yi, = Yn, a sequence of i.i.d. observations on Y: Determine the method of moments estimator of 0_ What is the maximum likelihood estimator (MLE) of 0?
##### By explicit integration, determine the average potential energy for the 2s wavefunction of the hydrogen atom, given below:Y = 2-1_ 2a0 4oACHTUNG: This function is NOT normalized.
##### THEOREM 12.3 Position Vector for a Projectile Neglecting air resistance. the path ol a projectile launched from an initial height h with initial speed Vo and angle of elevation € is described by the vector funetion r(t) (Vo cos O)ti (Vo = sin O)t %gre]i where g is the acceleration due to gravity:
##### Exercise 4: Let following:RnxnRm and To be particular solution to the system AzProve theA vector y in Rn will be solution to Az = if and only if y = To + 2, where e N(A) 2. iE N(A) = {0} , then the solution Io is unique
##### Benefits of Trade 2020 With Producer and Consumer Surplus Assume a world with only two countries...
##### What is Current Ration, Leverage, and Inventory turnover in companies financial report?
##### (Kopois What is dSeCI the A population has V V deviation doimhe} distribution 134.3 jistcibution sample mcans? 1 of sample 49.8. means? You intend t0 draw a random sample of size n
##### You are studying the relationship between stock returns (S) and bond returns (B). To do this...
##### Cinturas Company produces two types of men’s shirts: casual and formal. There are four activities associated...
##### Hypothesis test statement: State the Null and Alternative hypothesis for each of the following claims_A generic drug relieves symptoms for eight hours On average. pharmacist claims that the new formulation of the drug relieves symptoms for longer period_Null HypothesisAlternative Hypothesis:Americans gain an average of one pound per year as they age between 25 and 45. public health professional disputes this claim:Null HypothesisAlternative Hypothesis:The following statistics output is Two- Samp
##### Thank you for answering both. I appreciate your hard work! Question 32 (2 points) If the...
##### Answer t0 Given a round fixed Question 10 decimal places) area plot with plot radius of 71.6 whatis the expansion factor? (Please 1 10ptswhat is the basal 7 H sample trees U epue trees imperial 10 843 5
##### Part A How much work is required to accelerate a proton from rest up to a...
##### Public transportation and the automobile are two methodsan employee can use to get to work each day. Samples of timesrecorded for each method are shown. Times are in minutes.Public Transportation 18 15 13 20 22 17 11 9 3 16Automobile 12 24 19 21 19 25 23 14 17 10a. Compute the sample mean time to get to work for eachmethod.b. Compute the sample standard deviation for each method. c.On the basis of your results from a) and b), which method oftransportation should be preferred? Explain.
##### Please Solve "Try Yourself" questions! thanks! C.l. with Specified Precision Terminating Simulations] Example-The cross-replication summary outputs from M/G/1 simulation model based on 10 rep...
##### There are four possible 1-tert-buthil-4-methilcyclohexane isometric, in which the cyclohexane is is in a chair conformation....
##### A) A population undergoing logistic growth should reach it's maximum growth rate when... the population is...
https://www.storyofmathematics.com/math-transformations/

# Math Transformations — Explanation and Examples
Math transformations relate one geometric object or function to another through a series of translations, rotations, reflections, and dilations.
Depending on the context, math transformations are sometimes called geometric transformations or algebraic transformations.
Although it is possible to do some transformations in pure geometry, most math transformations occur in coordinate geometry. Make sure to review both before proceeding.
This section covers:
• What are the Four Types of Transformations?
• How to Do Transformations in Geometry
## What are the Four Types of Transformations?
Transformations are broken down into four different types: translations, rotations, reflections, and dilations. If it is possible to map one object onto another using any combination of transformations, the objects are said to be similar.
### Translations
A translation slides an object up, down, left, or right.
Translations can be represented through words, such as “an object is translated two units downward,” or through mapping notation. Mapping notation is a shorthand way of showing how a function or point changes with a transformation.
For example, $(x, y) → (x+1, y-4)$ means that the x-coordinate of every point in an object will increase by one, and the y-coordinate of every point in an object will decrease by four. Effectively, the object will move one unit to the right and four units downward.
Mapping notation for a function is $f(x)$ → $f(x-a)+b$. This means that the function moves $a$ units to the right and $b$ units upward. If $a$ is negative, the function moves left, and, if $b$ is negative, the function moves downward.
As before, the x-values of functions operate in a “mirror world” of sorts where transformations done directly to $x$ are the opposite of what is expected. That is, positive numbers indicate a shift to the right, while negative numbers indicate a shift to the left.
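A translation in mapping notation is a one-liner in code; the sketch below applies $(x, y) → (x+1, y-4)$ from the example above:

```python
def translate(points, dx, dy):
    """Apply the mapping (x, y) -> (x + dx, y + dy) to every point."""
    return [(x + dx, y + dy) for x, y in points]

# One unit to the right, four units down:
print(translate([(0, 0), (2, 3)], 1, -4))  # [(1, -4), (3, -1)]
```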
### Rotations
Rotational transformations move a geometric object about a fixed point in the plane.
To visualize how a rotation works, picture an invisible string connecting each point in the given object to the point of rotation. Then, imagine moving this string a certain angle measure clockwise or counterclockwise, keeping the point of rotation fixed and only moving the end that is part of the geometric object. The new positions of the points in the original object represent a rotational transformation.
Rotations can be given in degrees or radians, and they can be done clockwise or counterclockwise.
For example, A’B’C’ is the rotation of ABC 90 degrees about the origin.
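The "invisible string" picture translates into the usual rotation formula: shift each point so the center of rotation sits at the origin, rotate with sine and cosine, and shift back. A sketch:

```python
import math

def rotate(points, angle_deg, center=(0.0, 0.0)):
    """Rotate points counterclockwise by angle_deg about center."""
    a = math.radians(angle_deg)
    cx, cy = center
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out

# A 90-degree rotation about the origin sends (3, 0) to (0, 3):
(x, y), = rotate([(3, 0)], 90)
```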
### Reflections
When you look in the mirror, you see your reflection. That is, you see a backward image of yourself. The version of yourself appears closer to the glass when you are closer to the glass and farther away when you are farther away.
A mathematical reflection is similar. We select a line in the plane and treat it like a mirror. Everything on one side of the line is the mirror image of everything on the other side.
The most common lines of reflection are the x-axis, the y-axis, and the line $y=x$.
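For these three common lines of reflection, the coordinate rules are simple sign changes or swaps:

```python
def reflect_x(p):
    return (p[0], -p[1])   # over the x-axis

def reflect_y(p):
    return (-p[0], p[1])   # over the y-axis

def reflect_diag(p):
    return (p[1], p[0])    # over the line y = x

print(reflect_y((2, 1)), reflect_diag((2, 1)))  # (-2, 1) (1, 2)
```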
### Dilation
A dilation expands or contracts a geometric object to create a new, scaled object with the same proportions. Note that the term dilation refers only to an expansion in some contexts, while the term compression refers to a shrinkage. Here, however, we will use it to mean both.
A dilation needs a scale and a fixed point. A scale greater than one indicates that the object will increase in size, while a scale less than one indicates that an object will decrease. The dilation factor must always be greater than zero.
The fixed point in a dilation orients the object. It indicates the one point that stays the same. Often, it is the vertex or center of an object.
For example, you can think of a circle with a radius of 1 and another circle with a radius of 2. Suppose both are centered at the origin. The second circle is a dilation of the first by a factor of 2 with a fixed center point. Likewise, the first is a dilation of the second by a factor of $\frac{1}{2}$ with a fixed center point.
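A dilation scales each point's displacement from the fixed point by the factor $k$:

```python
def dilate(points, k, center=(0.0, 0.0)):
    """Scale each point away from (k > 1) or toward (0 < k < 1) the fixed point."""
    cx, cy = center
    return [(cx + k * (x - cx), cy + k * (y - cy)) for x, y in points]

# The point (1, 0) on the unit circle lands on the radius-2 circle under a
# dilation of factor 2 centered at the origin:
print(dilate([(1.0, 0.0)], 2))  # [(2.0, 0.0)]
```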
## How to Do Transformations in Geometry
Geometric transformations often combine more than one type of transformation.
The easiest way to do transformations in geometry is to find the new key points’ location and use those to find other points.
For example, when transforming a triangle, first map the vertices to their new location. Then, draw the lines connecting the vertices.
Alternatively, when transforming a function, find the key points such as the x- and y-intercepts and the vertex. This will show the function’s general shape, and all that’s left to do is connect the dots.
## Examples
This section covers common examples of problems involving math transformations and their step-by-step solutions.
### Example 1
What transformations are required to map triangle ABC to DEF?
### Example 1 Solution
The wording of this problem actually gives an important clue to the solution. The triangle ABC maps to DEF. Therefore, point A maps to D, B maps to E, and C maps to F. Thus, the problem boils down to “how do we map the vertices of ABC to the corresponding vertices in DEF?”
First, note that the triangle DEF has a different orientation from ABC. This means that either a rotation or a reflection is involved. However, since DEF is the mirror image of ABC and not just a rotation, we know a reflection is involved. Since the mirror image is horizontal, the most likely line of reflection is the y-axis.
Reflecting ABC over the y-axis makes the x-value of all of the vertices negative. That is, the reflection A’B’C’ has the coordinates (-2, 1), (-3, 4), and (-7, 3), respectively.
Then, A’B’C’ and DEF are related by a translation. In this case, moving the triangle down two units maps the first to the second.
Therefore, the series of transformations reflect the y-axis followed by a translation two units downward.
Note, however, that this answer is not unique. If ABC instead shifts down two units first and then reflects over the y-axis, it still maps to DEF. Likewise, reflecting ABC over any vertical line x=k, where k<2, followed by a suitable horizontal translation, also works.
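The composite in Example 1 can be checked numerically. The solution gives ABC at (2, 1), (3, 4), (7, 3) and its reflection A'B'C' at (-2, 1), (-3, 4), (-7, 3); DEF's coordinates are not listed explicitly, so the values below are derived from the stated steps (reflect over the y-axis, then translate two units down):

```python
ABC = [(2, 1), (3, 4), (7, 3)]

reflected = [(-x, y) for x, y in ABC]          # reflect over the y-axis
shifted = [(x, y - 2) for x, y in reflected]   # translate two units down

print(reflected)  # [(-2, 1), (-3, 4), (-7, 3)] matches A'B'C'
print(shifted)    # [(-2, -1), (-3, 2), (-7, 1)] derived coordinates of DEF
```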
### Example 2
What transformations are required to map $f(x)$ to $g(x)$?
### Example 2 Solution
In this example, $f$ and $g$ are both quadratic functions. The difference is that $g$, relative to $f$, is upside down, further right, and thinner.
These three facts indicate the transformations. The fact that $g$ is upside down relative to $f$ means that it has been reflected over the x-axis.
Since $g$ is further right, there is a translation of three units to the right.
Finally, since $g$ is stretched vertically, there is a dilation by a factor greater than 1. In this case, the factor is 2.
### Example 3
What is the relationship between a rotation and a reflection? Are there any situations where they might be the same?
### Example 3 Solution
Objects with internal lines of symmetry may map to the same figure through rotation and reflection.
For example, consider an isosceles triangle with vertices ABC at (2, 0), (0, 4) and (-2, 0). This triangle is symmetric about the y-axis.
A reflection over the x-axis followed by a reflection over the y-axis gives the triangle DEF with vertices at (-2, 0), (0, -4), (2, 0).
Likewise, a rotation of ABC about the origin 180 degrees also yields the same triangle DEF.
Note that the reflection over the y-axis is necessary because, otherwise, the triangle ABC corresponds to FED instead. That is the point A maps to F, and point C maps to D.
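The claim in Example 3 is easy to verify in code: reflecting over the x-axis and then the y-axis negates both coordinates, which is exactly a 180-degree rotation about the origin (cos 180 = -1, sin 180 = 0):

```python
ABC = [(2, 0), (0, 4), (-2, 0)]

# Reflect over the x-axis, then over the y-axis:
double_reflection = [(-x, -y) for x, y in ABC]

# Rotate 180 degrees about the origin using the rotation matrix entries:
rotation = [(x * -1 - y * 0, x * 0 + y * -1) for x, y in ABC]

print(double_reflection == rotation)  # True: both give [(-2, 0), (0, -4), (2, 0)]
```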
### Example 4
Compare a rotation of the circle about the center to a rotation about point B.
### Example 4 Solution
It should be pretty clear that rotating a circle about its center results in the circle’s points being mapped to other points on the circle. That is, the new figure will have the same shape and coordinates.
If, for example, the circle rotated about the center counterclockwise by 90 degrees, point B would end up at (0, 3) instead of (3, 0). If no points on the circumference are given, however, the figures will be indistinguishable.
On the other hand, however, a rotation about point B causes the circle to move to different places on the plane. The center of the circle will carve out its own circle with center B and radius 3.
### Example 5
Does the transformation from ABC to FDE represent a translation? How do you know?
### Example 5 Solution
This does not represent a translation because a translation keeps the shape of the original figure. In fact, that is why translations (along with rotations and reflections) are called “rigid transformations.”
In the new figure, FD has a slope of $\frac{1}{4}$. However, the slope of AB in the original figure is $\frac{1}{2}$.
### Practice Questions
1. Suppose that the point, $(-1, 2)$, is reflected over the $x$-axis, where is the point located now?
2. Suppose that the point, $(3, 4)$, is translated $5$ units upward and $2$ units to the left, where is the point located now?
3. For which of the following transformations does the order of operations matter?
4. Which of the following describes the transformation of $ABCD$ to $A’B’C’D’$?
5. True or False: The two functions shown below are related by math transformations.
### Open Problems
1. Compare a dilation of the circle by a factor of $2$ with the center of rotation at $A$ and a dilation by a factor of $2$ with the center of dilation $B$.
2. Consider the hexagon ABCDEF. Which transformations map the points on the hexagon onto other points on the original hexagon?
### Open Problem Solutions
1.
2.
Rotations about the center by multiples of 360/6 = 60 degrees and reflections about any line connecting two opposite vertices or the centers of opposite edges will map the points on the hexagon to other points on the hexagon.
Images/mathematical drawings are created with GeoGebra.
https://www.cuemath.com/ncert-solutions/q-1-exercise-2-3-polynomials-class-10-maths/

# Ex.2.3 Q1 Polynomials Solution - NCERT Maths Class 10
## Question
Divide the polynomial $$p(x)$$ by the polynomial $$g(x)$$ and find the quotient and remainder in each of the following:
(i)
\begin{align}\,\,\,p(x) &= {x^3} - 3{x^2} + 5x - 3, \\ \quad\;\; g(x) &= {x^2} - 2\end{align}
(ii)
\begin{align}\,\,\,p(x)& = {x^4} - 3{x^2} + 4x + 5, \\ \quad\; g(x) &= {x^2} + 1 - x\end{align}
(iii)
\begin{align}\,\,\,p(x) &= {x^4} - 5x + 6, \qquad \\ \quad g(x) &= 2 - {x^2}\end{align}
## Text Solution
What is unknown?
The quotient and remainder of the given polynomials.
Reasoning:
You can solve this question by following the steps given below:
1. First, arrange the terms of the divisor and the dividend individually in decreasing order of degree.
2. To find the first term of the quotient, divide the highest-degree term of the dividend by the highest-degree term of the divisor.
3. Write down the quotient term.
4. Multiply the divisor by the quotient term obtained and put the product underneath the dividend.
5. Subtract the product, as in an ordinary division operation.
6. Write the result obtained after drawing another bar to separate it from the prior operations.
7. Bring down the remaining terms of the dividend.
8. Again, divide the leading term of the new dividend by the highest-degree term of the divisor.
9. Repeat the previous three steps until the degree of the remainder is less than the degree of the divisor.
Steps:
(i)\begin{align}\,\,\,p(x) &= {x^3} - 3{x^2} + 5x - 3, \\ \quad\;\; g(x) &= {x^2} - 2\end{align}
Quotient $$= x - 3,$$ Remainder $$= 7x - 9$$
(ii) \begin{align}\,\,\,p(x)& = {x^4} - 3{x^2} + 4x + 5, \\ \quad\; g(x) &= {x^2} + 1 - x\end{align}
$$= {x^4} + 0 \cdot {x^3} - 3{x^2} + 4x + 5$$
Quotient = $${x^2} + x - 3$$ Remainder $$= 8$$
(iii) \begin{align}\,\,\,p(x) &= {x^4} - 5x + 6, \qquad \\ \quad g(x) &= 2 - {x^2}\end{align}
$$= {x^4} + 0 \cdot {x^2} - 5x + 6$$
Quotient $$= - {x^2} - 2,$$ Remainder $$= - 5x + 10$$
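The long-division procedure above can also be sketched as code. The helper below is a generic coefficient-list division (a standard implementation, not Cuemath's) and is used to re-derive the answer to part (i):

```python
def poly_divmod(num, den):
    """Long division of polynomials given as coefficient lists,
    highest degree first. Returns (quotient, remainder)."""
    num, den = list(num), list(den)
    quot = []
    while len(num) >= len(den):
        coeff = num[0] / den[0]          # divide leading terms
        quot.append(coeff)
        # subtract coeff * den, aligned at the leading term, then drop it
        num = [a - coeff * b
               for a, b in zip(num, den + [0] * (len(num) - len(den)))][1:]
    return quot, num

# (i)  x^3 - 3x^2 + 5x - 3  divided by  x^2 - 2:
q, r = poly_divmod([1, -3, 5, -3], [1, 0, -2])
print(q, r)  # [1.0, -3.0] [7.0, -9.0]  ->  quotient x - 3, remainder 7x - 9
```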
https://physics.stackexchange.com/questions/478380/a-pendulum-question-in-my-revision-guide-that-has-a-worked-solution-that-i-don-t | # A pendulum question in my revision guide that has a worked solution that I don’t understand [closed]
I just don’t understand part (a). Why is the length vertically downwards, not 0.5 as that’s the length of the string in the pendulum. Why is it the other length instead that’s 0.5? I’m clearly being dumb but I can’t see why they would be different as the string doesn’t change length. Any replies would be very appreciated.
• It's a triangle with .5m being the length of the longest side. – Bobak Hashemi May 6 at 20:19
• Hi @Mark, welcome to SE! Posting screenshots is discouraged -- if you write out the problem it will be easier for people to read! See math.meta.stackexchange.com/questions/5020/… for some hints about equation formatting. The screenshot is unfortunately probably why people downvoted, since you have a valid question. – Will May 7 at 0:35
They are determining the change in potential energy compared to when the pendulum bob is being held straight down. When the bob is fully downwards, the pendulum arm has a length of $$0.50 \ m$$. The change in potential energy when that arm is lifted only depends on the change in height of the mass.
The change in height of the mass would only be equal to $$0.50 \ m$$ if $$\theta$$ was $$90°$$. You can see from the diagram where they labelled $$\Delta h$$, and the new height $$0.50 \cos 30°$$; which means the change in height is obviously $$\Delta h = 0.50 - 0.50 \cos 30°$$, just like they've done here.
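For anyone who wants to sanity-check the geometry numerically, here is a quick sketch using only the numbers visible in the excerpt (a 0.50 m arm at 30° from the vertical):

```python
import math

L = 0.50                      # length of the pendulum arm, in metres
theta = math.radians(30)      # angle of the string from the vertical
dh = L - L * math.cos(theta)  # rise of the bob above its lowest point
print(round(dh, 3))           # 0.067 (metres)
```

So the bob rises only about 6.7 cm, not 0.5 m — the full arm length would only appear if the string were lifted to the horizontal.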
https://mozilla.github.io/application-services/book/design/components-strategy.html | ## High level firefox sync interactions
On a high level, Firefox Sync has three main components:
• The Firefox Account Server: uses OAuth to authenticate and provide users with scoped access. The FxA server also stores input that the clients use to generate the sync keys.
• Firefox: the Firefox app itself, which implements the client logic to communicate with the Firefox Account servers, generate sync keys, use them to encrypt data, and send/receive encrypted data to/from the sync storage servers.
• Sync Storage Server: the server that stores encrypted sync data. The clients retrieve the encrypted data and decrypt it client-side.
Additionally, the token server assists in providing metadata to Firefox, so that it knows which sync server to communicate with.
## Multi-platform sync diagram
Since we have multiple Firefox apps (Desktop, iOS, Android, Focus, etc) Firefox sync can sync across platforms. Allowing users to access their up-to-date data across apps and devices.
### Before: How sync was
Before our Rust Components came to life, each application had its own implementation of the sync and FxA client protocols. This led to duplicate logic across platforms, which was problematic: any modification to the sync or FxA client business logic had to be made in every implementation, and the likelihood of errors was high.
### Now: Sync is starting to streamline its components
Currently, we are in the process of migrating many of the sync implementations to our Rust Component strategy. Fenix primarily uses our Rust Components, and iOS has some integrated as well. Additionally, Firefox Desktop uses one Rust component (Web Extension Storage).
The Rust components not only unify the different implementations of sync, but also provide convenient local storage for the apps. In other words, the apps can use the components for storage, with or without syncing to the server.
#### Current Status
The following table has the status of each of our sync Rust Components:

| Application\Component | Bookmarks | History | Tabs | Passwords | Autofill | Web Extension Storage | FxA Client |
|-----------------------|-----------|---------|------|-----------|----------|-----------------------|------------|
| Fenix                 | ✔️        | ✔️      | ✔️   | ✔️        | ✔️       |                       | ✔️         |
| Firefox iOS           | ✔️        |         | ✔️   | ✔️        |          |                       | ✔️         |
| Firefox Desktop       |           |         |      |           |          | ✔️                    |            |
| Focus                 |           |         |      |           |          |                       |            |
### Future: Only one implementation for each sync engine
In an aspirational future, all the applications would use the same implementation for Sync. However, it's unlikely that we would migrate everything to the Rust components, since some migrations may not be prioritized; this is especially true for Desktop, which already has stable implementations. That said, we can get close to this future and minimize duplicate logic and the likelihood of errors.
You can edit the diagrams in the following lucid chart (Note: Currently only Mozilla Employees can edit those diagrams): https://lucid.app/lucidchart/invitations/accept/inv_ab72e218-3ad9-4604-a7cd-7e0b0c259aa2
Once they are edited, you can re-import them here by replacing the old diagrams in the docs/diagrams directory on GitHub. As long as the names are the same, you shouldn't need to edit those docs!
https://juliareach.github.io/LazySets.jl/dev/lib/sets/Line/ | Line
LazySets.Line — Type
Line{N, VN<:AbstractVector{N}} <: AbstractPolyhedron{N}
Type that represents a line of the form
$$\{y ∈ \mathbb{R}^n: y = p + λd, λ ∈ \mathbb{R}\}$$
where $p$ is a point on the line and $d$ is its direction vector (not necessarily normalized).
Fields
• p – point on the line
• d – direction
Examples
The line passing through the point $[-1, 2, 3]$ and parallel to the vector $[3, 0, -1]$:
julia> Line([-1, 2, 3.], [3, 0, -1.])
Line{Float64, Vector{Float64}}([-1.0, 2.0, 3.0], [3.0, 0.0, -1.0])
source
LazySets.dim — Method
dim(L::Line)
Return the ambient dimension of a line.
Input
• L – line
Output
The ambient dimension of the line.
source
LazySets.ρ — Method
ρ(d::AbstractVector, L::Line)
Evaluate the support function of a line in a given direction.
Input
• d – direction
• L – line
Output
Evaluation of the support function in the given direction.
source
LazySets.σ — Method
σ(d::AbstractVector, L::Line)
Return a support vector of a line in a given direction.
Input
• d – direction
• L – line
Output
A support vector in the given direction.
source
Base.:∈ — Method
∈(x::AbstractVector, L::Line)
Check whether a given point is contained in a line.
Input
• x – point/vector
• L – line
Output
true iff x ∈ L.
Algorithm
The point $x$ belongs to the line $L : p + λd$ if and only if $x - p$ is proportional to the direction $d$.
source
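The proportionality test can be written out in a few lines. The sketch below is plain Python rather than the library's Julia API, and `on_line` is our own illustrative name: x lies on the line iff every 2×2 cross term of x − p and d vanishes.

```python
def on_line(x, p, d, tol=1e-9):
    """True iff x = p + lam*d for some real lam (d assumed nonzero)."""
    diff = [xi - pi for xi, pi in zip(x, p)]
    n = len(d)
    for i in range(n):
        for j in range(i + 1, n):
            # diff parallel to d  <=>  all pairwise cross terms vanish
            if abs(diff[i] * d[j] - diff[j] * d[i]) > tol:
                return False
    return True

# the docstring's line through [-1, 2, 3] with direction [3, 0, -1]
print(on_line([2, 2, 2], [-1, 2, 3], [3, 0, -1]))   # True  (lam = 1)
print(on_line([0, 0, 0], [-1, 2, 3], [3, 0, -1]))   # False
```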
Base.rand — Method
rand(::Type{Line}; [N]::Type{<:Real}=Float64, [dim]::Int=2,
[rng]::AbstractRNG=GLOBAL_RNG, [seed]::Union{Int, Nothing}=nothing)
Create a random line.
Input
• Line – type for dispatch
• N – (optional, default: Float64) numeric type
• dim – (optional, default: 2) dimension
• rng – (optional, default: GLOBAL_RNG) random number generator
• seed – (optional, default: nothing) seed for reseeding
Output
A random line.
Algorithm
All numbers are normally distributed with mean 0 and standard deviation 1.
source
LazySets.isuniversal — Method
isuniversal(L::Line; [witness::Bool]=false)
Check whether a line is universal.
Input
• L – line
• witness – (optional, default: false) compute a witness if activated
Output
• If witness is false: true if the ambient dimension is one, false
otherwise.
• If witness is true: (true, []) if the ambient dimension is one,
(false, v) where $v ∉ L$ otherwise.
source
Base.isempty — Method
isempty(L::Line)
Check whether a line is empty or not.
Input
• L – line
Output
false.
source
LazySets.constraints_list — Method
constraints_list(L::Line)
Return the list of constraints of a line.
Input
• L – line
Output
A list containing 2n-2 half-spaces whose intersection is L, where n is the ambient dimension of L.
source
LazySets.translate — Method
translate(L::Line, v::AbstractVector)
Translate (i.e., shift) a line by a given vector.
Input
• L – line
• v – translation vector
Output
A translated line.
Notes
See also translate! for the in-place version.
source
LazySets.translate! — Method
translate!(L::Line, v::AbstractVector)
Translate (i.e., shift) a line by a given vector, in-place.
Input
• L – line
• v – translation vector
Output
The line L translated by v.
source
LinearAlgebra.normalize — Function
normalize(L::Line, p::Real=2.0)
Normalize the direction of a line.
Input
• L – line
• p – (optional, default: 2.0) vector p-norm used in the normalization
Output
A line whose direction has unit norm w.r.t. the given p-norm.
Notes
See also normalize!(::Line, ::Real) for the in-place version.
source
LinearAlgebra.normalize! — Function
normalize!(L::Line, p::Real=2.0)
Normalize the direction of a line storing the result in L.
Input
• L – line
• p – (optional, default: 2.0) vector p-norm used in the normalization
Output
A line whose direction has unit norm w.r.t. the given p-norm.
source
ReachabilityBase.Arrays.distance — Method
distance(x::AbstractVector, L::Line; [p]::Real=2.0)
Compute the distance between point x and the line with respect to the given p-norm.
Input
• x – point/vector
• L – line
• p – (optional, default: 2.0) the p-norm used; p = 2.0 corresponds to the usual Euclidean norm
Output
A scalar representing the distance between x and the line L.
source
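For the default p = 2 case, the distance is the norm of x − p after removing its projection onto d. A plain-Python sketch of that computation (again illustrative only, not the LazySets API):

```python
import math

def distance_to_line(x, p, d):
    """Euclidean distance from point x to the line p + lam*d."""
    diff = [xi - pi for xi, pi in zip(x, p)]
    lam = sum(f * di for f, di in zip(diff, d)) / sum(di * di for di in d)
    closest = [pi + lam * di for pi, di in zip(p, d)]  # foot of the perpendicular
    return math.dist(x, closest)

# distance from the origin to the horizontal line through [0, 1]
print(distance_to_line([0, 0], [0, 1], [1, 0]))   # 1.0
```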
LazySets.linear_map — Method
linear_map(M::AbstractMatrix, L::Line)
Concrete linear map of a line.
Input
• M – matrix
• L – line
Output
The line obtained by applying the linear map, if that still results in a line, or a Singleton otherwise.
Algorithm
We apply the linear map to the point and direction of L. If the resulting direction is zero, the result is a singleton.
source
Inherited from LazySet:
https://www.anl.gov/article/doe-awards-argonne-415-million-for-research-in-quantum-computing-and-networking | # Argonne National Laboratory .st0{fill:none;} .st1{fill:#007934;} .st2{fill:#0082CA;} .st3{fill:#101E8E;} .st4{fill:#FFFFFF;} .st5{fill:#A22A2E;} .st6{fill:#D9272E;} .st7{fill:#82BC00;} Argonne National Laboratory
Press Release | Argonne National Laboratory
# DOE awards Argonne $4.15 million for research in quantum computing and networking

Argonne joins other labs on the threshold of a new era in quantum information science and quantum computing.

The U.S. Department of Energy’s (DOE) Argonne National Laboratory was recently awarded a total of $4.15 million for research in quantum computing and networking as part of the 2019 Advanced Scientific Computing Research (ASCR) Quantum Computing and Quantum Network Awards. The awards will fund three multi-year projects in an effort to secure the nation’s leadership in the field of quantum information science.
The three projects aim to advance the development of quantum computing and networking, from quantum-driven algorithms, programming languages, compilers and debugging approaches to metropolitan-scale quantum networks that take advantage of existing fiber optic connections.
Scientists from Argonne’s Mathematics and Computer Science (MCS) division and Computing, Environment and Life Sciences (CELS) directorate will participate in two of the projects in ASCR’s Accelerated Research in Quantum Computing (ARQC) category. One of the projects, titled “Fundamental Algorithmic Research for Quantum Computing (FAR-QC),” is a multi-disciplinary collaboration between the national laboratories, academia and industry that aims to better understand the significance of the potential impact of quantum computing.
Leveraging the power of quantum mechanics, quantum computers offer potentially game-changing advantages over classical computers. However, scientists are still developing quantum computing technology, and the magnitude of the potential advantage of quantum computing is still unknown.
“We are on the threshold of a new era in quantum information science and quantum computing and networking, with potentially great promise for science and society. These projects will help ensure U.S. leadership in these important new areas of science and technology.” — Paul Dabbar, Under Secretary for Science
“Using quantum systems for computing is a promising thought,” said computational mathematician Jeffrey Larson of Argonne’s MCS division, who is a leader on the project. “To prove these systems can surpass the capabilities of classical computers for certain DOE-relevant problems requires researchers from diverse scientific backgrounds, including Argonne’s optimization expertise.”
The FAR-QC team of physicists, applied mathematicians, quantum information scientists and computer scientists will develop quantum, classical and hybrid algorithms to advance quantum computing capabilities in quantum simulation, optimization and machine learning. These algorithms will serve as a template for future quantum technologies. In addition, the scientists will analyze the performance of quantum simulated systems to characterize the advantages that quantum algorithms could ultimately realize.
Altogether, the five-year project will receive $19.5 million, with $1.3 million going to Argonne specifically.
Scientists from Argonne’s CELS directorate will also receive funding from the ARQC category for a project titled “Advancing Integrated Development Environments for Quantum Computing through Fundamental Research (AIDE-QC).” The team, consisting of five DOE laboratories and the University of Chicago, will develop open-source computing, programming and simulation environments that support the diversity of quantum computing research at DOE.
“We are entering an era of quantum computing where the available devices are noisy intermediate-scale quantum (NISQ) devices,” said Stefan Wild, MCS deputy division director and a scientist on the project. “The devices will be able to perform tasks that classical computers can’t, but they won’t be capable of full-fledged, unhindered quantum computing. We need to develop techniques to handle this critical, transitional time of scientific exploration.”
Argonne will use the expertise of its scientists to address critical aspects of computer science research that accelerate the integration of NISQ devices, including high-level programming languages, novel error-mitigation techniques for near- and mid-term hardware devices, leading-edge platform-agnostic compilers and tools for validation, verification and debugging.
“Classical information comes in the form of bits, and quantum information comes in qubits, which computers handle in significantly different ways. The increasing number of qubits on emerging computer chips presents an exciting opportunity to develop scalable quantum circuit compilation techniques and to test them on cutting-edge hardware,” said Argonne’s computer scientist Martin Suchara, a member of the AIDE-QC team who specializes in quantum computing.
AIDE-QC will receive over $18.6 million; around $2.4 million of that will go to Argonne. The laboratory’s participation in both AIDE-QC and FAR-QC is sure to enhance collaboration across the newly established projects.
Finally, scientists from Argonne’s CELS directorate will participate in a project, titled “Illinois-Express Quantum Network (IEQNET),” funded by the DOE ASCR Transparent Optical Quantum Networks for Distributed Science program. The project team, led by scientists at DOE’s Fermilab, plans to develop and demonstrate a metropolitan-scale, quantum-classical hybrid network with nodes at Argonne, Fermilab, and Northwestern University and in downtown Chicago.
Scientists have previously demonstrated point-to-point quantum communication over short distances in fiber optic cables (on the order of 10 miles) and long distances using free space optics, but IEQNET’s goal is to demonstrate a multi-node fiber optic quantum network that supports multiple users and co-exists with classical networks.
“We want to utilize existing links because we have significant infrastructure that has already been laid for classical communication,” said Rajkumar Kettimuthu, an Argonne scientist on the study. “The challenge will be to achieve classical and quantum coexistence in the same fibers.”
The program funds five projects — involving national laboratories, universities and industry — that will develop complementary technology over the next four years. The total grant is for $3.2 million, and Argonne will receive $400,000. The applications of large-scale quantum networks include better detection of earthquakes and gravitational waves, as well as better synchronization of atomic clocks and arrays of research devices for many scientific fields.
“We are on the threshold of a new era in quantum information science and quantum computing and networking, with potentially great promise for science and society,” said Under Secretary for Science Paul Dabbar in a news release announcing the funding awards. “These projects will help ensure U.S. leadership in these important new areas of science and technology.”
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
https://www.groundai.com/project/local-stellar-kinematics-from-rave-data-iii-radial-and-vertical-metallicity-gradients-based-on-red-clump-stars/ | # Local stellar kinematics from RAVE data: III. Radial and Vertical Metallicity Gradients based on Red Clump Stars
## Abstract
We investigate radial and vertical metallicity gradients for a sample of red clump stars from the RAdial Velocity Experiment (RAVE) Data Release 3. We select a total of 6781 stars, using a selection of colour, surface gravity and uncertainty in the derived space motion, and calculate for each star a probabilistic (kinematic) population assignment to a thin or thick disc using space motion and additionally another (dynamical) assignment using stellar vertical orbital eccentricity. We derive almost equal metallicity gradients as a function of Galactocentric distance for the high probability thin disc stars and for stars with vertical orbital eccentricities consistent with being dynamically young, , i.e. and dex kpc. Metallicity gradients as a function of distance from the Galactic plane for the same populations are steeper, i.e. and dex kpc, respectively. and are the arithmetic mean of the perigalactic and apogalactic distances, and the maximum distance to the Galactic plane, respectively. Samples including more thick disc red clump giant stars show systematically shallower abundance gradients. These findings can be used to distinguish between different formation scenarios of the thick and thin discs.
###### keywords:
Galaxy: abundances – Galaxy: disc – stars: abundances – Galaxy: evolution
## 1 Introduction
Metallicity gradients play an important role in understanding the formation of disc populations of galaxies. In the Milky Way Galaxy, there is extensive information establishing a radial gradient in young stars and in the interstellar medium (Shaver et al., 1983; Luck, Kovtyukh & Andrievsky, 2006; Luck & Lambert, 2011). Values typically derived are dex kpc, within 2-3 kpc of the Sun. Much effort to search for local abundance variations, and abundance variations with azimuth, shows little, if any, detectable variation in young systems (Luck et al., 2011), showing the interstellar medium to be well-mixed on quite large scales, supporting the importance of gas flows.
Extant data suggest vertical metallicity gradients in the dex kpc range for relatively small distances from the Galactic plane, i.e. kpc (Trefzger, Pel & Gabi, 1995; Karaali et al., 2003; Du et al., 2004; Ak et al., 2007a; Peng, Du & Wu, 2011). For intermediate distances, where the thick disc is dominant, the vertical metallicity gradient is low, dex kpc and the radial gradient is only marginal, dex kpc (Rong et al., 2001). There is some evidence that metallicity gradients for relatively short vertical distances, kpc, show systematic fluctuations with Galactic longitude, similar to those of thick-disc scaleheight, which may be interpreted as a common flare effect of the disc (Bilir et al., 2006; Cabrera-Lavers et al., 2007; Ak et al., 2007b; Bilir et al., 2008; Yaz & Karaali, 2010).
Quantifying the abundance distribution functions and their radial and vertical gradients in both thin and thick discs can be achieved using stellar abundances, especially those from major surveys such as the RAdial Velocity Experiment (RAVE; Steinmetz et al., 2006) and the Sloan Digital Sky Survey (SDSS; Abazajian et al., 2003). RAVE is a multi-fibre spectroscopic astronomical survey in the Milky Way, which covers just over half of the Southern hemisphere, using the 1.2-m UK Schmidt Telescope of the Australian Astronomical Observatory (Steinmetz et al., 2006; Zwitter et al., 2008; Siebert et al., 2011). RAVE's primary aim is to derive the radial velocities of solar neighbourhood stars from the observed spectra. Additional information is also derived, such as photometric parallax and stellar atmospheric parameters, i.e. effective temperature, surface gravity, metallicity and elemental abundance data. This information is important in calculating metallicity gradients, which provide data about the formation and evolution of the Galaxy. As the data were obtained from solar neighbourhood stars, we have limitations in distance and metallicity range. However, the metallicity measurements are of high internal accuracy, which is an advantage for our work.
In a recent study carried out by Chen et al. (2011), a vertical metallicity gradient of -0.22 dex kpc was claimed for the thick disc. They used the SDSS DR8 (Aihara et al., 2011) data set to identify a sample of red horizontal branch stars (RHB) and for this sample they derived the steepest metallicity gradient for the thick disc currently in the literature. RHB stars are very old, on average, so it is feasible that they can have a different metallicity variation than younger stars. However, the difference between their metallicity gradient and others in the literature for the thick disc is large, which motivates confirmation by other works with different data.
In our previous study (Coşkunoğlu et al., 2011a, Paper II), we investigated the metallicity gradient of a dwarf sample and we confirmed the radial metallicity gradient of -0.04 dex kpc based on calibrated metallicities from RAVE DR3 (Coşkunoğlu et al., 2011a, and the references therein, Paper I). Additionally, we showed that the radial metallicity gradient is steeper for our sample which is statistically selected to favour younger stars, i.e. F-type stars with orbital eccentricities , i.e. dex kpc. Vertical metallicity gradients could not be derived in this study due to short $z$-distances from the Galactic plane. Therefore, in this present study, we analyse stellar abundance gradients of red clump stars from RAVE data.
Red Clump (RC) stars are core helium-burning stars, in an identical evolutionary phase to those which make up the horizontal branch in globular clusters. However, in intermediate- and higher-metallicity systems only the red end of the distribution is seen, forming a clump of stars in the colour-magnitude diagram. In recent years much work has been devoted to studying the suitability of RC stars for application as a distance indicator. Their absolute magnitude in the optical ranges from mag for those of spectral type G8 III to mag for type K2 III (Keenan & Barnbaum, 1999). The absolute magnitude of these stars in the band is mag with negligible dependence on metallicity (Alves, 2000), but with a real dispersion (see below), and in the band mag, again without dependence on metallicity (Paczynski & Stanek, 1998). RAVE DR3 red clump giant stars occupy a relatively larger $z$-distance interval than do RAVE DR3 dwarfs. Hence, we should have sufficient data to derive abundance gradients in both directions, radial and vertical. As the RHB stars are on the extended branch of RC stars, we anticipate the results to be similar to those claimed for RHB stars in Chen et al. (2011), and so are able to test their result.
The structure of the paper is: Data selection is described in Section 2; calculated space velocities and orbits of sample stars are described in Section 3. Population analysis and results are given in Section 4 and Section 5, respectively. Finally, a discussion and conclusion are presented in Section 6.
## 2 Data
The data were selected from the third data release of RAVE (DR3; Siebert et al., 2011). RAVE DR3 reports 83072 radial velocity measurements for stars mag. This release also provides stellar atmospheric parameters for 41 672 spectra representing 39833 individual stars (Siebert et al., 2011). The accuracy of the radial velocities is high, marginally improved with DR3: the distribution of internal errors in radial velocities has a mode of 0.8 km s and a median of 1.2 km s, while 95 per cent of the sample has an internal error smaller than 5 km s. The uncertainties for the stellar atmospheric parameters are typically: 250 K for the effective temperature, 0.43 dex for the surface gravity and 0.2 dex for the metallicity. While RAVE supports a variety of chemical abundance scales, we use here just the public DR3 values. Since anticipated gradients are small, this provides a well-defined set of parameters for analysis. The proper motions of the stars were taken from RAVE DR3 values, which were compiled from the PPMX, Tycho-2, SSS and UCAC2 catalogues. The distribution of RAVE DR3 stars in the Equatorial and Galactic coordinate planes is shown in Fig. 1.
We applied the following constraints to obtain a homogeneous RC sample with best quality: i) (Puzeras et al., 2010), ii) the Two Micron All Sky Survey (2MASS; Skrutskie et al., 2006) photometric data are of quality labelled as “AAA”, and iii) (Bilir et al., 2011). The numerical values for and the median of the S/N value of the sample spectra (a total of 7985 stars), thus obtained, are 671 and 41, respectively. Proper motions for 139 out of the 7985 stars are not available in the RAVE DR3, hence these values are provided from the PPMXL Catalog of Roeser, Demleitner & Schilbach (2010). Distances were obtained by combining the apparent magnitude of the star in question and the absolute magnitude mag, adopted for all RC stars (Groenewegen, 2008). The reddening was obtained iteratively by using published methodology (for more detailed information regarding the iterations see Coşkunoğlu et al., 2011b, and the references therein). Then, the de-reddening of magnitudes and colours in 2MASS was carried out by the following equations with the coefficients of Fiorucci & Munari (2003):
$$J_0 = J - 0.887 \times E(B-V)$$
$$(J-H)_0 = (J-H) - 0.322 \times E(B-V) \qquad (1)$$
$$(H-K_s)_0 = (H-K_s) - 0.183 \times E(B-V)$$
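Applied in code, Eq. (1) is just three linear corrections. The helper below is a sketch (our naming, not from the paper), using the Fiorucci & Munari (2003) coefficients quoted above:

```python
def deredden_2mass(J, JH, HKs, EBV):
    """De-redden 2MASS J, (J-H), (H-Ks) for a colour excess E(B-V)."""
    J0   = J   - 0.887 * EBV
    JH0  = JH  - 0.322 * EBV
    HKs0 = HKs - 0.183 * EBV
    return J0, JH0, HKs0

# e.g. a hypothetical star with J = 10.0, (J-H) = 0.60, (H-Ks) = 0.15 and
# E(B-V) = 0.14 mag, the mean low-latitude colour excess quoted in the text
print(deredden_2mass(10.0, 0.60, 0.15, 0.14))
```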
Note the real dispersion in absolute magnitude among RC stars (Fig. 2). This will have the effect of blurring the derived distances, and so smoothing any derived gradient. Given that we search for a linear gradient, such distance uncertainties will tend to somewhat reduce any measured gradient. Given our results below, we consider any such effect to be second order.
The distance range of the sample and the median of the distances are kpc and 1.34 kpc, respectively (Fig. 3), which are sufficient to investigate both radial and vertical metallicity gradients. The distribution of colour excess is given in three categories, i.e. , , and , whose mean values are 0.14, 0.06, and 0.02 mag, respectively (Fig. 4). The reddening is rather small at intermediate and high Galactic latitudes, as expected. The projection of the sample stars onto the (, ) and (, ) planes (Fig. 5) show that their distribution is biassed (by design: RAVE does not observe towards the Galactic bulge and so), the median of the heliocentric coordinates are , , kpc.
## 3 Space Velocities and Orbits
We combined the distances estimated in Section 2 with RAVE kinematics and the available proper motions, applying the algorithms and the transformation matrices of Johnson & Soderblom (1987) to obtain their Galactic space velocity components (, , ). In the calculations epoch J2000 was adopted as described in the International Celestial Reference System (ICRS) of the Hipparcos and Tycho-2 Catalogues (ESA, 1997). The transformation matrices use the notation of a right handed system. Hence, , , and are the components of a velocity vector of a star with respect to the Sun, where is positive towards the Galactic centre (, ), is positive in the direction of Galactic rotation (, ) and is positive towards the North Galactic Pole ().
We adopted a value of 222.5 km s for the rotation speed of the Sun. Correction for differential Galactic rotation is necessary for an accurate determination of the , , and velocity components. The effect is proportional to the projection of the distance to the stars onto the Galactic plane, i.e. the velocity component is not affected by Galactic differential rotation (Mihalas & Binney, 1981). We applied the procedure of Mihalas & Binney (1981) to the distribution of the sample stars and estimated first-order Galactic differential rotation corrections for the and velocity components. The range of these corrections is and km s for and , respectively. As expected, is affected more than the component. The high values for the component also show that corrections for differential Galactic rotation cannot be ignored. Note that these Galactic differential rotation corrections are rather larger than the corresponding ones for dwarfs (Coşkunoğlu et al., 2011b). The , , and velocities were reduced to the LSR by adopting the solar LSR velocities of Coşkunoğlu et al. (2011b), i.e. (, , ) = (8.83, 14.19, 6.57) km s.
The uncertainty of the space velocity components , and were computed by propagating the uncertainties of the proper motions, distances and radial velocities, again using an algorithm by Johnson & Soderblom (1987). Then, the error for the total space motion of a star follows from the equation:
S_err² = U_err² + V_err² + W_err².        (2)
The mean S and standard deviation () for space velocity errors are S km s and km s, respectively. We now remove the most discrepant data from the analysis, knowing that outliers in a survey such as this will preferentially include stars which are systematically mis-analysed binaries, etc. Astrophysical parameters for such stars are also likely to be relatively unreliable. Thus, we omit stars with errors that deviate by more than the sum of the standard error and the standard deviation, i.e. 75 km s. This removes 1204 stars, 15.1 per cent of the sample. Thus, our sample was reduced to 6781 stars, those with more robust space velocity components. After applying this constraint, the mean values and the standard deviations for the velocity components were reduced to (, , )=(, , ) km s. The distribution of the errors for the space velocity components is given in Fig. 6. In this study, we used only the sub-sample of stars (6781 stars) with standard error km s. The velocity diagrams for these stars are shown in Fig. 7. The centre of the distributions deviate from the zero points of the , , and velocity components, further indicating the need for a Local Standard of Rest reduction.
To complement the chemical abundance data, accurate kinematic data have been obtained and used to calculate individual Galactic orbital parameters for all stars. The shape of the stellar orbit is a proxy, through the age-velocity dispersion relation, for age, with more circular orbits hosting statistically younger stars.
In order to calculate those parameters we used standard gravitational potentials well-described in the literature (Miyamoto & Nagai, 1975; Hernquist, 1990; Johnston, Spergel & Hernquist, 1995; Dinescu, Girard & van Altena, 1999) to estimate orbital elements of each of the sample stars. The orbital elements for a star used in our work are the mean of the corresponding orbital elements calculated over 15 orbital periods of that specific star. The orbital integration typically corresponds to 3 Gyr, and is sufficient to evaluate the orbital elements of solar neighbourhood stars, most of which have orbital periods below 250 Myr.
Solar neighbourhood velocity space includes well-established substructures that resemble classic moving groups or stellar streams (Dehnen, 1998; Skuljan, Hearnshaw & Cottrell, 1999; Nordström et al., 2004). Famaey et al. (2005), Famaey, Siebert & Jorissen (2008) and Pompéia et al. (2011) show that, although these streams include clusters, after which they are named, and evaporated remnants from these clusters, the majority of stars in these streams are not coeval but include stars of different ages, not necessarily born in the same place nor at the same time. They argue these streams are dynamical (resonant) in origin, probably related to dynamical perturbations by transient spiral waves (De Simone, Wu & Tremaine, 2004), which migrate stars to specific regions of the -plane. Stars in a dynamical stream just share a common velocity vector at this particular epoch. These authors further point out the obvious and important point that dynamical streams are kinematically young and so integrating backwards in a smooth stationary axisymmetric potential the orbits of the stars belonging to these streams is non-physical. RAVE stars are selected to avoid the Galactic plane () . Dynamical perturbations by transient spiral waves are strongest closest to the Galactic plane so there will be fewer dynamical stream stars in our RAVE sample. Hence, contamination of the dynamical stream stars is unlikely to affect our statistical results.
To determine a possible orbit, we first perform test-particle integration in a Milky Way potential which consists of a logarithmic halo of the form
Φ_halo(r) = v₀² ln(1 + r²/d²),        (3)
with v₀ = km s and d = kpc. The disc is represented by a Miyamoto-Nagai potential:

Φ_disc(R, z) = − G M_d / √( R² + ( a + √(z² + b²) )² ),        (4)

with M_d = , a = kpc and b = kpc. Finally, the bulge is modelled as a Hernquist potential
Φ_bulge(r) = − G M_b / (r + c),        (5)
using M_b = and c = kpc. The superposition of these components gives quite a good representation of the Milky Way. The circular speed at the solar radius is km s. The orbital period of the LSR is years, and km s denotes the circular rotational velocity at the solar Galactocentric distance, R₀ = kpc.
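For illustration, the three potential components can be sketched in Python. The parameter values below are arbitrary placeholders, not the paper's adopted values (those did not survive extraction), the units are arbitrary, and the Miyamoto-Nagai and Hernquist terms are written in their standard literature forms.

```python
import math

def phi_halo(r, v0=1.0, d=1.0):
    # Logarithmic halo, equation (3)
    return v0**2 * math.log(1.0 + r**2 / d**2)

def phi_disc(R, z, G=1.0, Md=1.0, a=1.0, b=0.3):
    # Miyamoto-Nagai disc in its standard form
    return -G * Md / math.sqrt(R**2 + (a + math.sqrt(z**2 + b**2))**2)

def phi_bulge(r, G=1.0, Mb=1.0, c=0.2):
    # Hernquist bulge, equation (5)
    return -G * Mb / (r + c)

def phi_total(R, z):
    # Superposition of the three components
    r = math.sqrt(R**2 + z**2)
    return phi_halo(r) + phi_disc(R, z) + phi_bulge(r)
```

Orbits would then be integrated numerically in the gradient of `phi_total`, as described in the text.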
For our analysis of gradients, we are interested in the mean radial Galactocentric distance (R_m) as a function of the stellar population and the orbital shape. Wilson et al. (2011) analysed the radial orbital eccentricities of a RAVE sample of thick disc stars to test thick disc formation models. Here we focus on possible local gradients, so we instead consider the vertical orbital eccentricity, e_v. R_m is defined as the arithmetic mean of the final perigalactic (R_p) and apogalactic (R_a) distances, and z_max and z_min are the final maximum and minimum distances, respectively, to the Galactic plane. e_v is defined as follows:
e_v = ( |z_max| + |z_min| ) / R_m,        (6)
following the definition of Pauli (2005). Due to z-excursions, R_p and R_a can vary; however, this variation is not more than 5 per cent.
## 4 Population Analysis
### 4.1 Classification using space motions
The procedure of Bensby et al. (2003, 2005) was used to separate the sample stars into different populations. This kinematic methodology assumes that the Galactic space velocities of the thin disc (), thick disc (), and stellar halo () with respect to the LSR have Gaussian distributions as follows:
f(U, V, W) = k × exp( − U²_LSR / (2σ²_U,LSR) − (V_LSR − V_asym)² / (2σ²_V,LSR) − W²_LSR / (2σ²_W,LSR) ),        (7)
where
k = 1 / ( (2π)^(3/2) σ_U,LSR σ_V,LSR σ_W,LSR ),        (8)
normalizes the expression. For consistency with other analyses, the characteristic velocity dispersions (σ_U,LSR, σ_V,LSR, σ_W,LSR) were adopted as: 35, 20 and 16 km s for the thin disc (); 67, 38 and 35 km s for the thick disc (); and 160, 90 and 90 km s for the halo (), respectively (Bensby et al., 2003). V_asym is the asymmetric drift: -15, -46 and -220 km s for the thin disc, thick disc and halo, respectively. The LSR velocities were taken from Coşkunoğlu et al. (2011b); these values are km s.
The probability of a star of being “a member” of a given population with respect to a second population is defined as the ratio of the distribution functions times the ratio of the local space densities for two populations. Thus,
TD/D = (X_TD / X_D) × (f_TD / f_D),    TD/H = (X_TD / X_H) × (f_TD / f_H),        (9)
are the probabilities for a star being classified as a thick disc star relative to it being a thin disc star, and relative to it being a halo star, respectively. X_D, X_TD and X_H are the local space densities for the thin disc, thick disc and halo, i.e. 0.94, 0.06, and 0.0015, respectively (Robin et al., 1996; Buser, Rong & Karaali, 1999). We followed the argument of Bensby et al. (2005) and separated the sample stars into four categories: (high probability thin disc stars), (low probability thin disc stars), (low probability thick disc stars) and (high probability thick disc stars). Fig. 8 shows the and diagrams as a function of the population types defined using the criteria of Bensby et al. (2003). It is evident from Fig. 8 that the kinematic population assignments are strongly affected by space-motion uncertainties. 3385 and 1151 stars of the sample were classified as high and low probability thin disc stars, respectively, whereas 646 and 1599 stars are low and high probability thick disc stars (Table 1). The relative number of high probability thick disc (RC) stars, 24 per cent, is much larger than the corresponding fraction (2 per cent) in Coşkunoğlu et al. (2011a).
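A compact sketch of equations (7)-(9) is given below, using the velocity dispersions, asymmetric drifts and local space densities quoted in the text; this is an illustrative implementation, not the authors' code.

```python
import math

# (sigma_U, sigma_V, sigma_W, V_asym, X) per population, in km/s and
# relative local density, as quoted in the text (Bensby et al. 2003).
PARAMS = {
    "thin":  (35.0, 20.0, 16.0,  -15.0, 0.94),
    "thick": (67.0, 38.0, 35.0,  -46.0, 0.06),
    "halo": (160.0, 90.0, 90.0, -220.0, 0.0015),
}

def f(U, V, W, pop):
    """Gaussian velocity distribution of equation (7), with the
    normalisation k of equation (8)."""
    sU, sV, sW, Vasym, _ = PARAMS[pop]
    k = 1.0 / ((2.0 * math.pi) ** 1.5 * sU * sV * sW)
    return k * math.exp(-U**2 / (2 * sU**2)
                        - (V - Vasym)**2 / (2 * sV**2)
                        - W**2 / (2 * sW**2))

def td_over_d(U, V, W):
    """TD/D membership ratio of equation (9): thick disc vs thin disc."""
    X_td, X_d = PARAMS["thick"][4], PARAMS["thin"][4]
    return (X_td / X_d) * f(U, V, W, "thick") / f(U, V, W, "thin")
```

The four probability categories in the text correspond to cuts on this ratio; the numerical thresholds are commonly taken as TD/D < 0.1, 0.1-1, 1-10 and > 10 in the Bensby et al. scheme.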
### 4.2 Population classification using stellar vertical orbital shape
Both radial and vertical orbital eccentricities contain valuable information: here we consider the vertical orbit shape. Vertical orbital eccentricities were calculated, as described above, from numerically-integrated orbits. We term this the dynamical method of population assignment, which complements the Bensby et al. (2003)’s approach. The distribution function of is not consistent with a single Gaussian distribution, as is shown in Fig. 9. A two-Gaussian model however does provide an acceptable fit. For convenience, we separated our sample into three categories, i.e. stars with (3448 stars), (2389 stars) and (944 stars), and fitted their metallicities to their mean radial distances () in order to investigate the presence of a metallicity gradient for RAVE RC stars. We provide in Table 2 (electronically) for each star, stellar parameters from RAVE DR3, calculated kinematical and dynamical parameters and our stellar population assignment.
## 5 Results
### 5.1 First hints of a metallicity gradient apparent in the RC sample of stars.
We show in Fig. 10 the distribution of metallicities for our final sample of RAVE RC stars. The metallicity distribution for all populations is rather symmetric, with a mode at dex. In Fig. 11 we show the normalized metallicity distribution functions, with the sample sub-divided by probabilistic population assignment, as described above. Fig. 11 indicates a systematic shift of the mode to lower metallicities as one goes from the thin disc stars to the thick disc stars. The -distance distribution of our sample is shown in Fig. 12. While the typical star is at distance kpc, there is a significant sample at larger distances. The range in distance is large enough to allow us to consider vertical metallicity gradient estimation.
The whole sample of stars, with no consideration of population assignment, was separated into five bins in distance, and mean -distances from the Galactic plane, mean metallicities, and mean vertical eccentricities were calculated for each bin. These are presented in Table 3. Fig. 13 summarises the dependence of the results on vertical distance from the Galactic plane. The apparent variation of the mean metallicity with -distance from the Galactic plane in Fig. 13 indicates the existence of a vertical metallicity gradient for RC stars. This figure does not however allow discrimination between a true gradient in a consistently defined population (e.g. “thick disc”) and a changing relative contribution from two populations (e.g. “thin disc” and “thick disc”), which have different modal abundances, and for which either one, both, or neither has an intrinsic gradient. Also given in Fig. 13 is the variation of the vertical eccentricity with , which again shows either a smooth transition from thin disc to thick disc eccentricities, or a changing population mix, or both.
### 5.2 Metallicity gradients using the kinematical population assignment method
We consider the metallicities as a function of the mean orbital Galactocentric radial distance () and maximum distance to the Galactic plane () for each different population defined in Section 4, and test for the presence of vertical and radial metallicity gradients for each population. We fitted the distributions to linear equations, whose gradient is any metallicity gradient, or . The results are shown in Fig. 14.
The radial gradients are small or consistent with zero. The best determined value (largest ratio of gradient value to error value) is for low probability thin disc stars, where the gradient is dex kpc (Table 4). The only metallicity gradient consistent with zero (with small formal errors) is for low probability thick disc stars.
The vertical metallicity gradients are (absolutely) much larger than the radial ones, and statistically are detected. The largest gradient in the vertical direction is steeper for high probability thin disc stars, relative to the other populations, viz dex kpc. Additionally, the metallicity gradient for high probability thick disc stars is not zero, i.e. dex kpc.
### 5.3 Metallicity gradients using the dynamical population assignment method
We now consider the metallicities as a function of the mean orbital Galactocentric radial distance () and maximum distance to the Galactic plane () for the populations defined by their eccentricities in Section 4.2, with one slight modification, in that we add a population defined by (1269 stars) representing the blue stars. We make this modification due to our experience in analysing the RAVE dwarf stars. In Coşkunoğlu et al. (2011a), we showed that the blue stars which have the smallest orbital eccentricities (most circular orbits) have also steeper metallicity gradients relative to samples with larger orbital eccentricities.
The results are presented in Table 4 and Fig. 15. The steepest metallicity gradient is for in the vertical direction, i.e. dex kpc. The vertical metallicity gradient systematically decreases with increasing values, becoming close to zero for the largest vertical eccentricities, . As noted above, our (red clump giants) sample consists of thin and thick disc stars. The largest orbital eccentricities correspond to thick disc stars.
That is, the vertical metallicity gradient for the thick disc is close to zero. The radial metallicity gradient follows almost the same trend as the vertical one, though it is less steep; for example, dex kpc for .
## 6 Discussion and Conclusion
We have used the RAVE DR3 data release to identify RC stars, further excluding cool stars, and those with the most uncertain space motions. We used the calibrated RAVE DR3 metallicities and the mean radial and maximum distances to investigate the presence of radial and vertical metallicity gradients, dividing the sample into a variety of subsamples.
We derive significant radial and vertical metallicity gradients for high probability thin disc stars and for the subsample with . We derive significant and marginally shallower gradients for the other subsamples. We do not detect any significant gradients for thick disc stars. Vertical metallicity gradients are much steeper than the radial ones, for the same subsample, as (perhaps) expected. We derive the metallicity gradients for the subsample which are the youngest sample stars, to be dex kpc and dex kpc.
The radial metallicity gradients for all subsamples are rather close to the corresponding ones obtained for F-type dwarfs. Hence, the discussion in Coşkunoğlu et al. (2011a) is also valid here. We cannot determine any vertical metallicity gradient for RAVE dwarfs due to their small distances from the Galactic plane. However, we can compare our results with those obtained for giants. Chen et al. (2011) investigated the metallicity gradient of the thick disc using Red Horizontal Branch (RHB) stars from SDSS DR8 (Aihara et al., 2011), and found two different vertical metallicity gradients, estimated in two ways. One is a fit to the Gaussian peaks of the metallicity histograms of the thick disc, after subtracting the minor contributions of the thin disc and the inner halo based on the Besançon Galaxy model (Robin et al., 1996); the resulting gradient is dex kpc for kpc. The other is a linear fit to the data for stars at kpc, where the thick disc is the dominant population. Five subgroups were then selected in different directions in the plane to investigate the difference in the vertical metallicity gradient between the Galactocentric and anti-Galactocentric directions. A vertical gradient of dex kpc was detected in these directions, except for the one involving the population of stars from the bulge.
Neither of the vertical metallicity gradients claimed by Chen et al. (2011) are in agreement with our results, nor are they consistent with studies appearing in previous works in the literature.
Another recent investigation of the vertical metallicity gradient for the thick disc is that of Ruchti et al. (2011). In that paper the authors gave a sample of 214 Red Giant Branch stars, 31 red clump/red horizontal branch stars, and 74 main-sequence/sub-giant branch metal-poor stars. They found that the thick disc ratios are enhanced, and have little variation ( dex). Their sample further allowed, for the first time, investigation of the gradients in the metal-poor thick disc. For stars with dex, the thick disc shows very small gradients, dex kpc, in enhancement, while they found a dex kpc radial gradient and a dex kpc vertical gradient in iron abundance. We consider only the gradient in iron abundance, not the gradient in enhancement. We may transform published iron abundances to RAVE metallicity values by means of the equation (Zwitter et al., 2008),
[M/H] = [Fe/H] + 0.11 [ 1 ± ( 1 − e^(−3.6 |[Fe/H] + 0.55|) ) ].        (10)
This reveals that the Ruchti et al. (2011) vertical gradient is consistent with the vertical metallicity gradient determined here, within the errors, for high probability thick disc stars, dex kpc. However there is a difference between the corresponding radial metallicity gradients in the two studies. An explanation for this disagreement may be the differing metallicity range of the sample stars used in the two works. In the present study we consider stars with dex, with a minority at the metal-poor tail in our work, while the corresponding selection is dex in Ruchti et al. (2011).
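For reference, the Zwitter et al. (2008) transformation of equation (10) is straightforward to apply in code; taking the upper sign of the ± here is an assumption on our part.

```python
import math

def feh_to_rave_metallicity(feh):
    """Zwitter et al. (2008) transformation, equation (10);
    the upper (+) branch of the +/- sign is assumed."""
    return feh + 0.11 * (1 + (1 - math.exp(-3.6 * abs(feh + 0.55))))
```

At [Fe/H] = −0.55 the exponential term vanishes and the correction reduces to [M/H] = [Fe/H] + 0.11.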
The radial metallicity gradients we have estimated from red clumps giants stars are consistent with those derived in paper II for dwarfs (Coşkunoğlu et al., 2011a). The robust metallicity gradients we determine are dex kpc for the high probability thin disc stars, the population type labelled with , and dex kpc for the sample with eccentricity . Samples biased to low probability thin disc and thick disc stars show systematically shallower gradients. Complementary to the dwarf sample, the distance range of the red clump giant stars, with median distance of 1.4 kpc, provides information on vertical metallicity gradients. The vertical metallicity gradients for the high probability thin disc stars and for the sample with are dex kpc and dex kpc, respectively. For high probability thick disc stars we could detect a vertical metallicity gradient; a shallow one however, i.e. dex kpc.
From our analysis, we may conclude that, despite their greater distances from the Galactic plane, the RAVE DR3 red clump giant stars confirm the radial metallicity gradients found for RAVE DR3 dwarf stars. Because of their greater distances from the Galactic plane, the RAVE DR3 RC stars also permit vertical metallicity gradients to be measured. These findings can be used to constrain formation scenarios of the thick and thin discs.
## 7 Acknowledgments
We would like to thank the referee Dr. Martin López-Corredoira for his useful suggestions that improved the readability of this paper.
Funding for RAVE has been provided by: the Australian Astronomical Observatory; the Leibniz-Institut fuer Astrophysik Potsdam (AIP); the Australian National University; the Australian Research Council; the French National Research Agency; the German Research Foundation; the European Research Council (ERC-StG 240271 Galactica); the Istituto Nazionale di Astrofisica at Padova; The Johns Hopkins University; the National Science Foundation of the USA (AST-0908326); the W. M. Keck foundation; the Macquarie University; the Netherlands Research School for Astronomy; the Natural Sciences and Engineering Research Council of Canada; the Slovenian Research Agency; the Swiss National Science Foundation; the Science & Technology Facilities Council of the UK; Opticon; Strasbourg Observatory; and the Universities of Groningen, Heidelberg and Sydney. The RAVE web site is at http://www.rave-survey.org
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made use of the SIMBAD, NASA’s Astrophysics Data System Bibliographic Services and the NASA/IPAC ExtraGalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
### Footnotes

1. pagerange: Local stellar kinematics from RAVE data: III. Radial and Vertical Metallicity Gradients based on Red Clump Stars
### References
1. Abazajian K, et al., 2003, AJ, 126, 2081
2. Aihara H., et al, 2011, ApJS, 193, 29
3. Ak S., Bilir S., Karaali S., Buser R., 2007a, AN, 328, 169
4. Ak S., Bilir S., Karaali S., Buser R., Cabrera-Lavers A., 2007b, NewA, 12, 605
5. Alves D. R., 2000, ApJ, 539, 732
6. Bensby T., Feltzing S., Lundström I., 2003, A&A 410, 527
7. Bensby T., Feltzing S., Lundström I., Ilyin I., 2005, A&A, 433, 185
8. Bilir S., Karaali S., Ak S., Yaz E., Hamzaoğlu E., 2006, NewA, 12, 234
9. Bilir S., Cabrera-Lavers A., Karaali S., Ak S., Yaz E., López-Corredoira M., 2008, PASA, 25, 69
10. Bilir S., Karaali S., Ak S., Önal Ö., Coşkunoǧlu B., Seabroke G. M., 2011, MNRAS, 418, 444
11. Buser R., Rong J., Karaali S., 1999, A&A, 348, 98
12. Cabrera-Lavers A., Bilir S., Ak S., Yaz E., López-Corredoira M., 2007, A&A, 464, 565
13. Chen Y. Q., Zhao G., Carrell K., Zhao J. K., 2011, AJ, 142, 184
14. Coşkunoğlu B., Ak S., Bilir S., Karaali S., Önal Ö., Yaz E., Gilmore G., Seabroke G. M., 2011a, MNRAS, tmp.1916
15. Coşkunoğlu B., et al., 2011b, MNRAS, 412, 1237
16. Cutri R. M., et al., 2003, 2MASS All-Sky Catalogue of Point Sources, CDS/ADC Electronic Catalogues, 2246
17. Dehnen W., 1998, AJ, 115, 2384
18. De Simone R. S., Wu, X., Tremaine S., 2004, MNRAS, 350, 627
19. Dinescu D. I., Girard T. M., van Altena W. F., 1999, AJ, 117, 1792
20. Du C., Zhou X., Ma J., Shi J., Chen A. B., Jiang Z., Chen J., 2004, AJ, 128, 2265
21. ESA, 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200. ESA, Noordwijk
22. Famaey B., et al., 2005, A&A, 430, 165
23. Famaey B., Siebert A., Jorissen A., 2008, A&A, 483, 453
24. Fiorucci M., Munari U., 2003, A&A, 401, 781
25. Gilmore G., Wyse R. F. G., 1985, AJ, 90, 2015
26. Groenewegen M. A. T., 2008, A&A, 488, 25
27. Hernquist L., 1990, ApJ, 356, 359
28. Johnson D. R. H., Soderblom D. R., 1987, AJ, 93, 864
29. Johnston K. V., Spergel D. N., Hernquist L., 1995, ApJ, 451, 598
30. Karaali S., Bilir S., Karataş Y., Ak S. G., 2003, PASA, 20, 165
31. Keenan P. C., Barnbaum C., 1999, ApJ, 518, 859
32. Luck R. E., Kovtyukh V. V., Andrievsky S. M., 2006, AJ, 132, 902
33. Luck R. E., Andrievsky S. M., Kovtyukh V. V., Gieren W., Graczyk D., 2011, AJ, 142, 51
34. Luck R. E., Lambert D. L., 2011, AJ, 142, 136
35. Mihalas D., Binney J., 1981, Galactic Astronomy: Structure and Kinematics, 2nd edition, San Francisco, CA, W. H. Freeman and Co.
36. Miyamoto M., Nagai R., 1975, PASJ, 27, 533
37. Nordström B., et al., 2004, A&A, 418, 989
38. Paczynski B., Stanek K. Z., 1998, ApJ, 494L, 219
39. Pauli E. M., 2005, Prof. G. Manev’s Legacy in Contemporary Astronomy, Theoretical and Gravitational Physics (Eds. V. Gerdjikov and M. Tsvetkov), 185, Sofia, Bulgaria, Heron Press Limited, 2005.
40. Peng X., Du C., Wu Z., 2011, arXiv1111.4719
41. Pompéia L., et al., 2011, MNRAS, 415, 1138
42. Puzeras E., Tautvaišienė G., Cohen J. G., Gray D. F., Adelman S. J., Ilyin I., Chorniy Y., 2010, MNRAS, 408, 1225
43. Robin A. C., Haywood M., Créze M., Ojha D.K., Bienaymé O., 1996, A&A, 305, 125
44. Roeser S., Demleitner M., Schilbach E., 2010, AJ, 139, 2440
45. Rong J., Buser R., Karaali S., 2001, A&A, 365, 431
46. Ruchti G. R., et al., 2011, ApJ, 737, 9
47. Shaver P. A., McGee R. X., Newton L. M., Danks A. C., Pottasch S. R., 1983, MNRAS, 204, 53
48. Siebert A., et al., 2011, AJ, 141, 187
49. Skrutskie M. F., et al., 2006, AJ, 131, 1163
50. Skuljan J., Hearnshaw J. B., Cottrell P. L., 1999, MNRAS, 308, 731
51. Steinmetz M., et al., 2006, AJ, 132, 1645
52. Trefzger C. F., Pel J. W., Gabi S., 1995, A&A, 304, 381
53. Wilson M. L., et al., 2011, MNRAS, 413, 2235
54. van Leeuwen F., 2007, A&A, 474, 653
55. Yaz E., Karaali S., 2010, NewA, 15, 234
56. Zwitter T., et al., 2008, AJ, 136, 421
https://www.mastguru.com/profit-and-loss-questions-answers/question/24/3

# Profit and Loss Questions Answers
• #### 15. A shopkeeper sells a transistor at Rs. 840 at a gain of 20% and another for Rs. 960 at a loss of 4%. Find his total gain percent.
1. \begin{aligned} 5\frac{12}{17}\% \end{aligned}
2. \begin{aligned} 5\frac{13}{17}\% \end{aligned}
3. \begin{aligned} 5\frac{14}{17}\% \end{aligned}
4. \begin{aligned} 5\frac{15}{17}\% \end{aligned}
Explanation:
In this type of question, we first find the total C.P. of the items and then the total S.P.; the difference gives the gain or loss, from which we can easily calculate the percentage.
So lets solve it now.
\begin{aligned}
\text{So, C.P. of 1st transistor = }\\
\left( \frac{100}{120} * 840 \right) = 700 \\
\text{C.P. of 2nd transistor = }\\
\left( \frac{100}{96} * 960 \right) = 1000 \\
\text{Total C.P. = 1700 }\\
\text{Total S.P. = 1800 }\\
\text{Gain = 1800 - 1700 = 100}\\
\text{Gain% = } \left( \frac{100}{1700} * 100 \right) \\
= 5\frac{15}{17}\%
\end{aligned}
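The working above can be checked with a few lines of Python:

```python
cp1 = 840 * 100 / 120          # C.P. of 1st transistor = 700
cp2 = 960 * 100 / 96           # C.P. of 2nd transistor = 1000
total_cp = cp1 + cp2           # 1700
total_sp = 840 + 960           # 1800
gain_pct = (total_sp - total_cp) / total_cp * 100
print(gain_pct)                # 5.882... = 5 15/17 %
```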
• #### 16. A man gains 20% by selling an article for a certain price. If he sells it at double the price, the percentage of profit will be.
1. 130%
2. 140%
3. 150%
4. 160%
Explanation:
Let the C.P. = x,
Then S.P. = (120/100)x = 6x/5
New S.P. = 2(6x/5) = 12x/5
Profit = 12x/5 - x = 7x/5
Profit% = (Profit/C.P.) * 100
=> (7x/5) * (1/x) * 100 = 140 %
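The same steps in Python, taking C.P. = 100 for convenience:

```python
cp = 100.0
sp = cp * 120 / 100            # gain of 20%
new_sp = 2 * sp                # sold at double the price
profit_pct = (new_sp - cp) / cp * 100
print(profit_pct)              # 140.0
```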
• #### 17. If the cost price of 12 pens is equal to the selling price of 8 pens, the gain percent is ?
1. 12%
2. 30%
3. 50%
4. 60%
Explanation:
Friends, we know we need the gain amount to get the gain percent, so let's find the gain first.
Let the cost price of 1 pen is Re 1
Cost of 8 pens = Rs 8
Selling price of 8 pens = 12
Gain = 12 - 8 = 4
\begin{aligned}
Gain\% = \left( \frac{Gain}{Cost} * 100 \right)\% \\
= \left( \frac{4}{8} * 100 \right)\% = 50\%
\end{aligned}
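A quick Python check of the working:

```python
cp_8_pens = 8                  # Re 1 per pen
sp_8_pens = 12                 # equals the C.P. of 12 pens
gain_pct = (sp_8_pens - cp_8_pens) / cp_8_pens * 100
print(gain_pct)                # 50.0
```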
• #### 18. The cost price of 20 articles is the same as the selling price of x articles. If the profit is 25% then determine the value of x.
1. 14
2. 15
3. 16
4. 17
Explanation:
Let the cost price of 1 article = Re 1
Cost price of x articles = x
S.P of x articles = 20
Gain = 20 -x
\begin{aligned}
=> 25 = \left( \frac{20-x}{x} * 100 \right) \\
=> 2000 - 100x = 25 x \\
=> x = 16
\end{aligned}
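The same equation can be solved by a quick search in Python:

```python
# Gain% = (20 - x)/x * 100 = 25  =>  find the integer x
x = next(n for n in range(1, 21) if (20 - n) * 100 == 25 * n)
print(x)                       # 16
```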
• #### 19. In a certain store, the profit is 320% of the cost. If the cost increases by 25% but the selling price remains constant, approximately what percentage of the selling price is the profit?
1. 70%
2. 80%
3. 90%
4. None of above
Explanation:
Let C.P.= Rs. 100.
Then, Profit = Rs. 320,
S.P. = Rs. 420.
New C.P. = 125% of Rs. 100 = Rs. 125
New S.P. = Rs. 420.
Profit = Rs. (420 - 125) = Rs. 295
Required percentage = (295/420) * 100
= 70% (approx.)
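A quick Python check of the steps:

```python
cp, profit = 100.0, 320.0      # profit is 320% of cost
sp = cp + profit               # 420
new_cp = cp * 125 / 100        # cost rises by 25%
new_profit = sp - new_cp       # 295
pct_of_sp = new_profit / sp * 100
print(round(pct_of_sp))        # 70
```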
• #### 20. If the cost price of 12 items is equal to the selling price of 16 items, the loss percent is
1. 20%
2. 25%
3. 30%
4. 35%
Explanation:
Let the Cost Price of 1 item = Re. 1
Cost Price of 16 items = 16
Selling Price of 16 items = 12
Loss = 16 - 12 = Rs 4
Loss % = (4/16)* 100 = 25%
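Verified in Python:

```python
cp_16 = 16.0                   # C.P. of 16 items at Re 1 each
sp_16 = 12.0                   # equals the C.P. of 12 items
loss_pct = (cp_16 - sp_16) / cp_16 * 100
print(loss_pct)                # 25.0
```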
• #### 21. A man bought an article and sold it at a gain of 5 %. If he had bought it at 5% less and sold it for Re 1 less, he would have made a profit of 10%. The C.P. of the article was
1. Rs 100
2. Rs 150
3. Rs 200
4. Rs 250
Explanation:
Let original Cost price is x
Its Selling price = 105/100 * x = 21x/20
New Cost price = 95/100 * x = 19x/20
New Selling price = 110/100 * 19x/20 = 209x/200
[(21x/20) - (209x/200)] = 1
=> x = 200
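The final equation can be solved exactly with Python's `fractions` module:

```python
from fractions import Fraction

# 21x/20 - 209x/200 = 1  =>  x = 1 / (21/20 - 209/200)
x = Fraction(1) / (Fraction(21, 20) - Fraction(209, 200))
print(x)                       # 200
```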
https://byjus.com/chemistry/bohrs-atomic-model-and-its-limitations/

# Bohr's Atomic Model And Limitations
## Bohr’s Atomic Model
Thomson’s atomic model and Rutherford’s atomic model failed to answer many questions about the energy and stability of an atom. In 1913, Niels Bohr proposed an atomic structure model describing the atom as a small, positively charged nucleus surrounded by electrons that travel in circular orbits around it, much as planets orbit the Sun in our solar system, with the attraction provided by electrostatic forces. This is popularly known as Bohr’s atomic model. It was essentially an improved version of Rutherford’s atomic model that overcame its limitations. Bohr agreed with Rutherford on most points, such as the concepts of the nucleus and of electrons orbiting it. The salient features of Bohr’s atomic model are:
• Electrons revolve around the nucleus in stable orbits without emission of radiant energy. Each orbit has a definite energy and is called an energy shell or energy level.
• An orbit or energy level is designated as K, L, M, N shells. When the electron is in the lowest energy level, it is said to be in the ground state.
• An electron emits or absorbs energy when it jumps from one orbit or energy level to another. When it jumps from a higher energy level to lower energy level it emits energy while it absorbs energy when it jumps from a lower energy level to a higher energy level.
• The energy absorbed or emitted equals the difference between the energies of the two energy levels (E1 and E2) and is given by Planck’s equation.

$~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ΔE = E2 − E1 = hν

Where,

ΔE = energy absorbed or emitted

h = Planck’s constant

ν = frequency of the electromagnetic radiation emitted or absorbed
• The angular momentum of an electron revolving in energy shells is given by:
$~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ $m_evr$ = $\frac {nh}{2\pi}$
Where,
n = number of the corresponding energy shell: 1, 2, 3, …

me = mass of the electron

v = velocity of the electron

h = Planck’s constant
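For hydrogen, these postulates lead to the familiar energy levels E_n = −13.6 eV / n², and Planck’s relation ΔE = hν then gives the frequency of any emitted spectral line. A minimal Python sketch of this arithmetic (the −13.6 eV value and the rounded constants are standard textbook numbers, not taken from this article):

```python
# Photon frequency for a hydrogen transition in the Bohr model.
# Assumes the standard Bohr result E_n = -13.6 eV / n^2 for hydrogen (Z = 1).
H_PLANCK = 6.626e-34   # Planck's constant, J*s
EV = 1.602e-19         # joules per electronvolt

def energy_level(n):
    """Energy of the nth Bohr orbit of hydrogen, in joules."""
    return -13.6 * EV / n**2

def emitted_frequency(n_hi, n_lo):
    """Frequency of the photon emitted when the electron drops
    from shell n_hi to shell n_lo, using Delta E = h * nu."""
    delta_e = energy_level(n_hi) - energy_level(n_lo)
    return delta_e / H_PLANCK

# n = 3 -> n = 2 is the H-alpha (Balmer) line, roughly 4.6e14 Hz
print(f"{emitted_frequency(3, 2):.3e} Hz")
```

The same two functions reproduce absorption as well: jumping from a lower to a higher shell requires absorbing exactly the frequency that the reverse transition emits.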
### Limitations of Bohr Atomic Model Theory
• It violates the Heisenberg Uncertainty Principle. The Bohr atomic model theory considers electrons to have both a known radius and orbit i.e. known position and momentum at the same time, which is impossible according to Heisenberg.
• The Bohr atomic model theory made correct predictions for small atoms such as hydrogen, but it gives poor spectral predictions for larger atoms.
• It failed to explain the Zeeman effect when the spectral line is split into several components in the presence of a magnetic field.
• It failed to explain the Stark effect when the spectral line gets split up into fine lines in the presence of an electric field. | 2018-11-19 20:23:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5719941854476929, "perplexity": 477.3297578608655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746110.52/warc/CC-MAIN-20181119192035-20181119214035-00256.warc.gz"} |
http://math.stackexchange.com/questions/82140/show-that-f-0-f-1-f-2-cdots-f-2n-1-f-2n-f-2n-1-1-when-n?answertab=votes | # Show that $f_0 - f_1 + f_2 - \cdots - f_{2n-1} + f_{2n} = f_{2n-1} - 1$ when $n$ is a positive integer
Letting $f_n$ be the Fibonacci numbers, show that $f_0 - f_1 + f_2 - \cdots - f_{2n-1} + f_{2n} = f_{2n-1} - 1$ when $n$ is a positive integer.
Just some homework help. Need to prove. Thank you in advance.
-
What is $f_n$? Fibonacci I assume.... – N. S. Nov 14 '11 at 21:32
Then it is not true ;) The problem should specify what $f_n$ is, and if not, it was probably defined earlier in another problem or textbook... – N. S. Nov 14 '11 at 21:36
Think about the problem the following way: pick any sequence which verifies this relation, pick any $n$ you want and then change ONLY $f_{2n}$... What happens to the equality? So, unless you know exactly what $f_n$ is for all $n$, you can't prove this is right... – N. S. Nov 14 '11 at 21:41
Yes, it is Fibonacci. It doesn't say so on the assignment, but it does reference a set of problems from the book. My mistake, I apologize. – Chris Nov 14 '11 at 22:07
Yet another case where homework difficulties would better be solved by asking the instructor, rather than here! – GEdgar Nov 14 '11 at 22:16
Hint 1: Induction...
Hint 2: replace $f_n$ by their "closed" form, and see how you can calculate those sums.. It is easy...
Each hint leads to a different solution...
P.S. I assumed that $f_n$ is the Fibonacci sequence, I am pretty sure it is...
-
Thank you for the prompt answer, by the way. – Chris Nov 14 '11 at 22:56
In case you already know the formula for the sum of the first $n$ Fibonacci numbers $$\sum_{j=0}^n F_j = F_{n+2} - 1$$ you can use it as follows:
$F_0-F_1+F_2-F_3+\dots-F_{2n-1}+F_{2n}=$ $F_0+(F_2-F_1)+(F_4-F_3)+\dots+(F_{2n}-F_{2n-1})=$ $0+F_0+F_2+\dots+F_{2n-2}=$ $0+(F_0+F_1)+\dots+(F_{2n-4}+F_{2n-3})=$ $\sum_{k=0}^{2n-3} F_k = F_{2n-1}-1.$
A proof of the above formula for the sum of the first $n$ Fibonacci numbers can be found e.g. at proofwiki. | 2013-12-19 07:06:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791764378547668, "perplexity": 504.4071983221818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345762220/warc/CC-MAIN-20131218054922-00042-ip-10-33-133-15.ec2.internal.warc.gz"} |
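As a sanity check, a few lines of Python can verify the identity (with the convention $F_0 = 0$, $F_1 = 1$ used above) for many values of $n$:

```python
def fib(n):
    """Fibonacci numbers with F_0 = 0, F_1 = 1, F_2 = 1, ..."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def alternating_sum(n):
    """F_0 - F_1 + F_2 - ... - F_{2n-1} + F_{2n}."""
    return sum((-1)**j * fib(j) for j in range(2 * n + 1))

# Check the claimed identity for a range of n.
for n in range(1, 15):
    assert alternating_sum(n) == fib(2 * n - 1) - 1
print("identity holds for n = 1..14")
```

A numerical check like this is not a proof, of course, but it catches index mistakes in the telescoping argument quickly.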
https://physics.stackexchange.com/questions/235714/what-is-the-amount-of-dark-energy-in-the-universe-in-joules/473370 | # What is the amount of dark energy in the Universe, in joules?
I'm toying with an idea, and need absolute amounts of dark energy. I'd also like the same for dark matter. If these values are not estimated, why not?
• Thanks to all for the info. Very helpful and informative. I now move from crazy idea to crazy calculation. I like absolute numbers. – Incredible II Feb 13 '16 at 16:44
The dark energy density in the universe is about $7 \times 10^{-30}$g/cm$^3$ on average, according to Wikipedia. This is uniform throughout the Hubble volume of the universe, i.e. the volume of the universe with which we are in causal contact. The Hubble volume is $10^{31} \ ly^3$, i.e. cubic light years. This gives $8.46732 \times 10^{84}$ cm$^3$ as the volume of the universe. Using the mass-energy equivalence, you find that the total dark energy content in the entire universe is around $10^{69}$ Joules, which is truly massive. This is in agreement with the result here.
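The arithmetic above is easy to reproduce; a short Python sketch using the same rough numbers quoted in the answer (density, Hubble volume, and CGS unit conversions):

```python
# Order-of-magnitude estimate of the total dark energy content,
# following the round numbers quoted above (all values are rough).
DENSITY = 7e-30           # dark energy density, g/cm^3
LY_IN_CM = 9.461e17       # one light year in centimetres
HUBBLE_VOLUME_LY3 = 1e31  # Hubble volume, cubic light years
C = 2.998e10              # speed of light, cm/s

volume_cm3 = HUBBLE_VOLUME_LY3 * LY_IN_CM**3  # ~8.47e84 cm^3
mass_g = DENSITY * volume_cm3                 # ~5.9e55 g
energy_J = mass_g * C**2 * 1e-7               # E = m c^2; 1 erg = 1e-7 J
print(f"total dark energy ~ {energy_J:.2e} J")
```

The result lands at a few times $10^{69}$ J, matching the answer's order of magnitude.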
http://www.chegg.com/homework-help/questions-and-answers/want-raise-temperature-350-moles-ideal-gas-200-degrees-celsius-890-degrees-celsius--calcul-q1272791 | We want to raise the temperature of 3.50 moles of an ideal gas from -20.0 degrees celsius to 89.0 degrees celsius.
a. Calculate the amount of heat (in joules) needed if the gas is ideal argon at a fixed volume of 8.80 meters cubed.
b.Calculate the amount of heat (in joules) needed if the gas is ideal helium at a constant pressure of 1.07 atm.
c. Calculate the amount of heat (in joules) needed if the gas is ideal nitrogen at a constant pressure of 1.26 atm.
d. Calculate the amount of heat (in joules) needed if the gas is ideal chlorine at a constant volume of 24.6 L. | 2015-03-30 20:06:15 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9100493788719177, "perplexity": 484.42212445757644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299877.5/warc/CC-MAIN-20150323172139-00035-ip-10-168-14-71.ec2.internal.warc.gz"} |
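All four parts reduce to Q = n·C·ΔT, where the molar heat capacity C depends only on the gas (monatomic vs. diatomic) and the process (constant volume vs. constant pressure): C_V = (3/2)R for a monatomic ideal gas, C_V = (5/2)R for a diatomic one (ignoring vibration), and C_P = C_V + R. The quoted volumes and pressures never enter the ideal-gas result. A sketch of the computation (rounded values, standard ideal-gas heat capacities assumed):

```python
R = 8.314            # gas constant, J/(mol*K)
N = 3.50             # moles
DT = 89.0 - (-20.0)  # temperature change in K (only dT matters for an ideal gas)

def heat(c_molar):
    """Q = n * C * dT for an ideal gas with molar heat capacity C."""
    return N * c_molar * DT

q_a = heat(1.5 * R)  # a. argon (monatomic), constant V:    ~4.76e3 J
q_b = heat(2.5 * R)  # b. helium (monatomic), constant P:   ~7.93e3 J
q_c = heat(3.5 * R)  # c. nitrogen (diatomic), constant P:  ~1.11e4 J
q_d = heat(2.5 * R)  # d. chlorine (diatomic), constant V:  ~7.93e3 J
print(q_a, q_b, q_c, q_d)
```

Note that parts (b) and (d) come out equal: a monatomic gas at constant pressure and a diatomic gas at constant volume both have molar heat capacity (5/2)R.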
https://hidocsndaw.web.app/76961/46711.html | 25x ^ 2 + 10x + 1 = 9
Simplifying 25x^2 + 10x + 1 = 9. Subtracting 9 from both sides gives 25x^2 + 10x − 8 = 0. Factoring the trinomial: (−4 − 5x)(2 − 5x) = 0. Setting the first factor equal to zero, −4 − 5x = 0, gives x = −4/5; setting the second factor equal to zero, 2 − 5x = 0, gives x = 2/5.
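The two roots produced by the factoring, x = −4/5 and x = 2/5, can be verified by substituting them back into the original equation; a quick Python check using exact fractions:

```python
from fractions import Fraction

def lhs(x):
    """Left-hand side of 25x^2 + 10x + 1 = 9."""
    return 25 * x**2 + 10 * x + 1

# Both roots must make the left-hand side equal 9 exactly.
for root in (Fraction(-4, 5), Fraction(2, 5)):
    assert lhs(root) == 9, root
print("both roots check out")
```

Using `Fraction` instead of floats keeps the check exact, so there is no rounding error to explain away.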
An alternative is to solve 25x^2 + 10x − 8 = 0 by completing the square. Divide both sides of the equation by 25 to make 1 the coefficient of the leading term: x^2 + (2/5)x − 8/25 = 0. Then x^2 + (2/5)x + 1/25 = 8/25 + 1/25, so (x + 1/5)^2 = 9/25, giving x + 1/5 = ±3/5 and again x = 2/5 or x = −4/5.
vlnky xrp graf | 2023-03-24 00:18:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3030186891555786, "perplexity": 2541.5121592950154}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00188.warc.gz"} |
http://physicsgoeasy.blogspot.com/2012/06/ | ## Pages
### What is entropy?
Entropy is a state function of a system; it depends only on the equilibrium state of the system. The change in entropy between the initial and final equilibrium states is
$\Delta S=\int_{i}^{f}\frac{dQ}{T}$
where dQ is an infinitesimal heat transfer that takes place reversibly. The change in entropy for any process between given initial and final equilibrium states, including an irreversible one, is the same. The second law of thermodynamics may be expressed in terms of entropy: $\Delta S \geqslant 0$.
For reversible process $\Delta S = 0$.
For irreversible process $\Delta S > 0$.
Entropy is a measure of the disorder in a system. The second law states that natural (irreversible) processes tend to evolve toward states of greater disorder, or from states of low probability to states of high probability.
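As a concrete example of $\Delta S = \int dQ/T$: for a body with constant heat capacity $C$ heated reversibly from $T_i$ to $T_f$, substituting $dQ = C\,dT$ gives $\Delta S = C \ln(T_f/T_i)$. The Python sketch below uses illustrative numbers (1 kg of water with total heat capacity ≈ 4186 J/K), which are my own example rather than from the post:

```python
import math

def entropy_change(c_total, t_initial, t_final):
    """Delta S = C * ln(Tf / Ti) for a body of constant heat
    capacity C, temperatures in kelvin (from dS = dQ/T, dQ = C dT)."""
    return c_total * math.log(t_final / t_initial)

# Illustrative: 1 kg of water (C ~ 4186 J/K) heated from 293 K to 373 K
ds = entropy_change(4186.0, 293.0, 373.0)
print(f"dS = {ds:.0f} J/K")  # positive, consistent with the second law
```

Heating always gives $T_f > T_i$ and hence $\Delta S > 0$ for the body; cooling gives a negative $\Delta S$ for the body, compensated by a larger entropy increase in the surroundings.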
### CBSE Class 9 physics notes on Newton's Laws of motion
Full-length CBSE Class 9 physics notes on Newton's Laws of Motion are now available for study.
To study the chapter visit the link given below
Newton's Laws of motion
### Force : Class 9 physics notes for CBSE and other boards
Class 9 physics notes on Force for CBSE and other boards are now available. To read the notes, visit the link
### Vector Differentiation full length notes for CSIR-NET/GATE/JAM
Full notes on vector differentiation are now available at the website physicscatalyst.com, covering the following topics:
1. Differentiation of vectors
2. Scalar and vector fields
3. Gradient of a scalar field
4. The operator delta
5. Divergence and curl of a vector
6. Product Rules
7. Second Derivative
For complete notes visit this link
### free physics questions with solutions for competitive exams : class 11 and class 12
Hi,

I am pleased to announce the beginning of our all-new, free online physics learning program, where you can find physics questions along with their answers and solutions. For this you just have to register on our website by following this

So what are you waiting for? Register with our site and start preparing physics for your exams.
physicsexpert
### CBSE class 10 Reflection of light full length notes
Class 10 physics notes on the chapter Reflection of Light are now available at the following link
CLASS 10 REFLECTION OF LIGHT NOTES (FULL LENGTH)
### CBSE class 9 motion full length notes
Class 9 physics notes on the chapter Motion are now available at the following link
CLASS 9 MOTION NOTES (FULL LENGTH) | 2017-06-27 22:31:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2677488923072815, "perplexity": 1652.4369311692558}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321938.75/warc/CC-MAIN-20170627221726-20170628001726-00291.warc.gz"} |